🧠 OpenAI’s $300B bet is testing faith

Scammers are getting smarter. So are the schools. And OpenAI? Still raising billions.

Your Thursday AI briefing, straight to the point…

1. A 'Biblical' test of investor faith in AI

Sam Altman called the response to ChatGPT’s new image tool “biblical” – the same could be said for the size of OpenAI’s latest funding round. SoftBank is investing $40B, valuing OpenAI at $300B – a bet that’ll require burning through $35B over the next few years.

It’s not just OpenAI. Elon Musk just merged xAI and X at a $110B valuation. And CoreWeave, the self-proclaimed “AI hyperscaler,” had to slash its IPO, only to be rescued by big investors – one of them being Nvidia, its own chip supplier.

The divide:

  • Bulls point to OpenAI’s explosive growth.

  • Bears highlight a 20% drop in chip stocks and fear an oversaturated market.

Still, CoreWeave soared 42% on day one. In this AI cycle, belief still beats balance sheets.

PRESENTED BY GUIDDE

Tired of explaining the same thing over and over again to your colleagues?

It’s time to delegate that work to AI. Guidde is a GPT-powered tool that helps you explain the most complex tasks in seconds with AI-generated documentation.

  • Share or embed your guide anywhere

  • Turn boring documentation into stunning visual guides

  • Save valuable time by creating video documentation 11x faster

Simply click capture on the browser extension and the app will automatically generate step-by-step video guides complete with visuals, voiceover, and calls to action.

The best part? The extension is 100% free.

2. ChatGPT’s image tool could help scammers fake receipts, jobs, and ads

OpenAI’s new image generator is already raising red flags. Axios tests showed it can create convincing fakes – like Philz Coffee receipts or fake Apple job offers – even with guardrails in place.

šŸ” It won’t make an official driver’s license – but it’ll make a convincing ā€œtemplate.ā€

🧠 Smart prompts can sneak past its filters.

Why it matters: Scams don’t need deepfakes – realistic documents and ads are enough to trick people. And now they’re easier than ever to generate.

The bottom line: OpenAI is monitoring misuse, but cybersecurity experts warn that it’s only a matter of time before scammers weaponize these tools at scale.

3. American University launches AI institute – no bans, just use it smartly

While some schools still ban AI tools, American University’s business school is embracing them with a new Institute for Applied AI.

🎓 Students will use AI for marketing, finance, and risk analysis – not coding.

📊 Incoming students say AI was off-limits in high school. Here, it’s part of the curriculum from day one.

Details:

  • Launching with 15 faculty members.

  • Students get free access to Perplexity Enterprise Pro.

The takeaway: Instead of fighting AI, the school is teaching students how to use it responsibly – and getting them job-ready in the process.

4. Google says we can’t wait to prep for AGI

Google DeepMind is urging action on AI safety – now, not later. In a new paper, it warns that superhuman AI could arrive by 2030, and governments and developers need to be ready.

🚨 Key risks: misuse, misalignment, accidents, and AI-vs-AI chaos.

📢 DeepMind wants clear regulations and a shared safety framework.

But industry tensions remain: while scientists warn of risks, policymakers and CEOs (especially under the Trump admin) are focused on winning the AI arms race.

The bottom line: The tech is moving fast. DeepMind’s message? Build guardrails now – or risk losing control later.

Google — A practical approach to creative content and AI training.

Financial Times — AI race gives Washington another reason to be tough on TikTok.