ChatGPT Fails Critical Safety Tests!
PLUS: Anthropic's New Clever Training Method!

Welcome back, folks.
Recent safety tests revealed that ChatGPT can be tricked into providing dangerous content, including bomb-making instructions.
Hackers used social engineering techniques to bypass the AI's built-in safety protocols. Concerning, isn't it?
Let's dive into the details...
Today in the “Incredible AI” world:
Safety Tests Reveal Dark Side of ChatGPT
Anthropic Changes Data Training Policy
xAI Drops Lightning-Fast Coding Model
Keep Your Email List Clean with AI
5 Super-Useful AI Tools You Should Try
AI Image of the Day
Read Time: 4 minutes
- LATEST DEVELOPMENTS -
AI leaders only: Get $100 to explore high-performance AI training data.
Train smarter AI with Shutterstock’s rights-cleared, enterprise-grade data across images, video, 3D, audio, and more—enriched by 20+ years of metadata. With 600M+ assets and scalable licensing, we help AI teams improve performance and simplify data procurement. If you’re an AI decision maker, book a 30-minute call—qualified leads may receive a $100 Amazon gift card.
For complete terms and conditions, see the offer page.
ChatGPT Fails Critical AI Safety Tests:

OpenAI’s ChatGPT reportedly gave dangerous instructions during safety evaluations. Testers found it could share harmful content despite safeguards. The findings have sparked renewed debate over AI safety, regulation, and responsible deployment.
Things to Know:
Test Findings: Researchers discovered ChatGPT could provide bomb-making recipes and hacking tips when prompted in certain ways. These results emerged during controlled safety tests designed to probe its limits.
OpenAI’s Response: OpenAI acknowledged the incidents and stated improvements are ongoing. The company emphasized that such outputs are unintended and violate its safety policies.
Regulatory Concerns: Experts argue the findings highlight urgent needs for stronger AI oversight. They warn that without clear rules, misuse risks could escalate as AI tools become more advanced.
The revelations have fueled public discussion on balancing innovation with safety. Some call for stricter guardrails, while others stress the importance of preserving AI’s creative and problem-solving potential.
Anthropic Changes Its Data Training Policy:

Anthropic is changing how it handles your chat and coding data. Starting soon, unless you opt out, your new or resumed conversations may be used to train Claude models. You’ll see a pop-up asking for your choice.
Chat Training: Anthropic will begin using new or resumed chat and coding sessions from consumer users (Free, Pro, Max) to train its models. This doesn’t apply to commercial accounts or API usage.
Opt-Out Choice: Users must decide by September 28, 2025, whether to allow data usage. A pop-up, defaulted to “On,” will prompt the choice; new users choose during signup.
Retention Policy: If you opt in, your data will be retained for up to five years. If you opt out, the existing 30-day retention policy remains. Only new or reactivated sessions are affected.
Anthropic assures it won’t sell user data and employs automated filtering tools to obscure sensitive information. Users can change their preferences anytime, but data already used for training cannot be withdrawn.
Musk's xAI Unveils Agentic Coding Model:

Elon Musk’s AI company xAI just unveiled a lightning-fast, efficient coding model called grok-code-fast-1. It jumps into the growing field of software-writing AI. For now, select partners can test it for free.
Things to Know:
Model Overview: grok-code-fast-1 is described as both speedy and economical. It’s designed to tackle coding tasks swiftly using a compact architecture.
Availability: The model is currently offered free for a limited time to launch partners like GitHub Copilot and Windsurf. This provides early access to key developer platforms.
Industry Context: xAI now joins major players like Microsoft’s Copilot and OpenAI’s Codex in the race for autonomous coding tools. Agentic coding empowers AI to handle tasks independently.
This launch directly challenges Microsoft's GitHub Copilot and OpenAI's Codex in the coding assistant space. xAI is betting developers want faster, more economical options. The free trial period should give us real insights into its capabilities.
Skip the AI Learning Curve. ClickUp Brain Already Knows.
Most AI tools start from scratch every time. ClickUp Brain already knows the answers.
It has full context of all your work—docs, tasks, chats, files, and more. No uploading. No explaining. No repetitive prompting.
It's not just another AI tool. It's the first AI that actually understands your workflow because it lives where your work happens.
Join 150,000+ teams and save 1 day per week.
- AI TUTORIAL -
How to Keep Your Email List Clean with AI:

Sending emails to invalid addresses can hurt your sender reputation and waste your resources. But what if you could verify your email list with just a few clicks?
Bouncer is an AI-powered email verification tool that helps you clean your list, protect your campaigns, and boost deliverability—without the hassle.
How it works:
Upload your list: Drag and drop your email list into Bouncer.
Smart sampling: Bouncer can test a sample of your list for free to estimate its quality.
Verify with precision: It checks syntax, domain records, and even negotiates with recipient servers using AI.
Toxicity check: Identify risky or spam-trap addresses with Bouncer’s toxicity scoring.
Enrich your data: Get insights like email type, provider, and deliverability status.
Integrate easily: Connect Bouncer with your favorite marketing platforms.
Stop guessing and start sending smarter. With Bouncer, your emails land where they’re meant to—right in the inbox.
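To make the verification layers above concrete, here is a minimal Python sketch of the first one, the syntax check. This is a hypothetical illustration, not Bouncer's actual implementation; real verifiers go further, querying the domain's DNS MX records and probing the recipient mail server over SMTP before marking an address deliverable.

```python
import re

# Simplified address pattern. Real verifiers use much stricter
# parsing based on the email RFCs; this catches only obvious junk.
EMAIL_RE = re.compile(r"^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}$")

def check_syntax(address: str) -> bool:
    """Layer 1: reject addresses that are not even well-formed."""
    return bool(EMAIL_RE.match(address))

def clean_list(addresses):
    """Split a list into (plausible, invalid) by syntax alone.

    A full-featured service would then check DNS records and
    negotiate with the recipient server for each plausible address.
    """
    plausible, invalid = [], []
    for addr in addresses:
        (plausible if check_syntax(addr) else invalid).append(addr)
    return plausible, invalid
```

Running `clean_list` over your raw list gives a quick first pass that removes malformed entries before any slower network-level checks.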
- DAILY POLL AND RESULT -
Today’s Poll:
Q) Is AI's role in social media beneficial?
Vote and find out the result tomorrow.
Yesterday’s Result:
Q) Do you believe in the potential of AI in healthcare?
A) Yes, it's promising - 100% 👑
B) No, it has limitations - 0%
- BEST AI TOOLS -
Zellify: Zellify is the ultimate platform to seamlessly sell your digital products worldwide.
Jozu: It’s a platform that helps teams move AI models from development to production 10x faster.
Guidde: Guidde is a GPT-powered tool that helps you explain the most complex tasks in seconds with AI-generated documentation.
Kilocode: Kilocode generates self-checking code from natural language.
Simular AI: Simular is an AI-powered desktop agent that automates your digital tasks, saving you time and effort.
- AI IMAGE OF THE DAY -
A Glass Horse Runs Across Cosmic Sands:

Prompt:
"A translucent glass horse gallops across a midnight desert, its body glowing with galaxies and constellations. Each stride fractures light across the sands as auroras and comets swirl above, blending the creature with the cosmos in a surreal, stained-glass dreamscape."
Try this prompt in any decent AI image generator and let me know your result.
It’s go-time for holiday campaigns
Roku Ads Manager makes it easy to extend your Q4 campaign to performance CTV.
You can:
Easily launch self-serve CTV ads
Repurpose your social content for TV
Drive purchases directly on-screen with shoppable ads
A/B test to discover your most effective offers
The holidays only come once a year. Get started now with a $500 ad credit when you spend your first $500 today with code: ROKUADS500. Terms apply.
- DO YOURSELF A FAVOUR -
If you find this email in your ‘Promotional or Spam’ tab, please move this email to your Primary Inbox.
I work hard to bring the latest AI news, tips, and tutorials directly to your inbox so you don’t have to spend hours on research.
But if you don’t get to read my email, we both lose something.
Please move this email from the ‘Promotional’ tab to your Primary Inbox so you never miss an issue and keep up with all the latest happenings in the AI industry.