OpenAI's $300B bet is testing faith
Scammers are getting smarter. So are the schools. And OpenAI? Still raising billions.
Your Thursday AI briefing, straight to the point…
1. A "Biblical" test of investor faith in AI

Sam Altman called the response to ChatGPT's new image tool "biblical," and the same could be said for the size of OpenAI's latest funding round. SoftBank is investing $40B, valuing OpenAI at $300B, a bet that will require burning through $35B over the next few years.
It's not just OpenAI. Elon Musk just merged xAI and X at a $110B valuation. And CoreWeave, the self-proclaimed "AI hyperscaler," had to slash its IPO, only to be rescued by big investors, one of them being Nvidia, its own chip supplier.
The divide:
Bulls point to OpenAIās explosive growth.
Bears highlight a 20% drop in chip stocks and fear an oversaturated market.
Still, CoreWeave soared 42% on day one. In this AI cycle, belief still beats balance sheets.
PRESENTED BY GUIDDE
Tired of explaining the same thing over and over again to your colleagues?
It's time to delegate that work to AI. Guidde is a GPT-powered tool that helps you explain the most complex tasks in seconds with AI-generated documentation.
Share or embed your guide anywhere
Turn boring documentation into stunning visual guides
Save valuable time by creating video documentation 11x faster
Simply click capture on the browser extension and the app will automatically generate step-by-step video guides complete with visuals, voiceover, and calls to action.
The best part? The extension is 100% free.
2. ChatGPT's image tool could help scammers fake receipts, jobs, and ads

OpenAI's new image generator is already raising red flags. Axios tests showed it can create convincing fakes, like Philz Coffee receipts or fake Apple job offers, even with guardrails in place.
It won't make an official driver's license, but it'll make a convincing "template."
Smart prompts can sneak past its filters.
Why it matters: Scams don't need deepfakes; realistic documents and ads are enough to trick people. And now they're easier than ever to generate.
The bottom line: OpenAI is monitoring misuse, but cybersecurity experts warn that it's only a matter of time before scammers weaponize these tools at scale.
3. American University launches AI institute: no bans, just use it smartly

While some schools still ban AI tools, American University's business school is embracing them with a new Institute for Applied AI.
Students will use AI for marketing, finance, and risk analysis, not coding.
Incoming students say AI was off-limits in high school. Here, it's part of the curriculum from day one.
Details:
Launching with 15 faculty members.
Students get free access to Perplexity Enterprise Pro.
The takeaway: Instead of fighting AI, the school is teaching students how to use it responsibly, and getting them job-ready in the process.
4. Google says we can't wait to prep for AGI
Google DeepMind is urging action on AI safety now, not later. In a new paper, it warns that superhuman AI could arrive by 2030 and that governments and developers need to be ready.
Key risks: misuse, misalignment, accidents, and AI-vs-AI chaos.
DeepMind wants clear regulations and a shared safety framework.
But industry tensions remain: while scientists warn of risks, policymakers and CEOs (especially under the Trump admin) are focused on winning the AI arms race.
The bottom line: The tech is moving fast. DeepMind's message? Build guardrails now, or risk losing control later.
Editor's picks
Google: A practical approach to creative content and AI training.
Financial Times: AI race gives Washington another reason to be tough on TikTok.