What's The Plan?

Good morning.

Regular readers might start to think I’m bipolar. One week AI is the biggest bubble the world has ever seen and it’s generating huge amounts of utter slop. Then the next week we’ll all be out of a job while the machines do our bidding.

Truth is - we don’t know which direction it is going to go. It’s very difficult to tell what is true, what is fear-mongering and what is a sales pitch.

However - slowly but steadily, the promise of “AGI” is coming to fruition - and over the last few months I’ve become increasingly convinced that these AI labs might actually pull it off.

This week’s newsletter is about the implications of AI research actually moving towards superintelligence.

I was rereading last week's post about what's in the works for the next few months, and I got a little jolt of something I can only describe as fear.

It might not be the best comparison, but I felt the same thing in December 2019, when you saw those images of people dropping dead in the streets in China after the discovery of a strange new virus. I'm not proud of it, but that same day I went out and bought something like 30kg of rice (little did I know that toilet paper is the thing you need to survive a pandemic).

Then a few months later we were engulfed by the Covid-19 pandemic - complete with lockdowns, mandatory face masks and the mainstream media having an absolute field day keeping everybody afraid and spouting off the most insane nonsense at times. It was a time when you had to find your own 'truth', because there was so much money and so many hidden agendas at play (you can tell I'm not a fan of the mainstream media).

In the end, most of us came out OK - some of us unfortunately did not. But the world had definitely changed.

That same little pang of fear is back again.

But... what if...

What if some AI lab brings AGI into existence, and it develops an AI agent that starts improving upon itself?

I always have to think about the famous TED Talk by Sam Harris on the dangers of AI back in 2017. This talk was what got me interested in this field, by the way.

He compared the arrival of Artificial General Intelligence to the arrival of an alien race on this planet.

He also did a thought exercise: imagine we just built a superintelligent AI that was no smarter than your average team of researchers at Stanford or MIT. Well, electronic circuits function about a million times faster than biochemical ones, so this machine should think about a million times faster than the minds that built it.

So you set it running for a week, and it will perform 20,000 years of human-level intellectual work, week after week after week. How could we even understand, much less constrain, a mind making this sort of progress?
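If you want to sanity-check that number, the back-of-envelope arithmetic fits in a few lines of Python - a minimal sketch using nothing but Harris's rough figures (a million-fold speedup, a 52-week year):

# Back-of-envelope version of Harris's thought experiment
speedup = 1_000_000      # electronic circuits vs. biochemical ones (his rough figure)
wall_clock_weeks = 1     # let the machine run for one week
weeks_per_year = 52

equivalent_years = speedup * wall_clock_weeks / weeks_per_year
print(f"~{equivalent_years:,.0f} years of human-level work per week")
# prints ~19,231 - close enough to the "20,000 years" from the talk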

This was 2017.

If you even glance at last week's article, you will understand that this is no longer science fiction. We have already developed AI that outperforms any human on a growing range of tasks. It's the reasoning and self-improving part that's proving tricky.

But they’re going to get there soon. In fact, Google released a new self-improving model this week.

Every single day something new is launched or released.


It's going to happen

There's no reason to believe that progress is going to stop.

99% of people do not give this a single grain of thought. Their heads are firmly in the sand.

The “will AI take my job?” conversation is outdated. The only useful questions now are “when?” and “what’s next for me?”
Because the window between “AI can’t do this” and “AI does this so cheaply it’s not worth paying a human” is shrinking to months, not years. At some point your employer or your clients are going to look at your work and decide that it can be done by AI a lot cheaper, quicker and perhaps even better.

The temptation is to believe you’ll get a warning period, that the change will be slow enough to give you time to retrain or pivot. But adoption curves in technology have been collapsing for over a century. Electricity took almost half a century to reach mass use. Smartphones took seven years. Some AI tools have gone from zero to hundreds of millions of users in six months. If your livelihood depends on a skill that can be digitised, you might have no more runway than that.

And even if you're reading this wide awake instead of asleep, there might be nothing you can do about it.

They say new jobs and industries will emerge. But I don't see them emerging to be honest.

My own strategy is this newsletter. Might be the worst strategy in the world.

The best one is doing something 'human' - using your hands. Becoming a plumber, a chef or a massage therapist. Microsoft published a list of "safe" jobs this week, and those were on it. One of them was "embalmer" (there's a Steve Ballmer joke in there somewhere). Indeed, whatever happens, people are going to keep eating and dying.

And don't get me wrong - there are hundreds of ways this might play out. We might also be completely fine. It depends on our employers - if they choose AI over humans, we might not be ready. And because of... you know... capitalism, I'm not too optimistic.

So what about you?

What's your plan?

You Don’t Need to Be Technical. Just Informed

AI isn’t optional anymore—but coding isn’t required.

The AI Report gives business leaders the edge with daily insights, use cases, and implementation guides across ops, sales, and strategy.

Trusted by professionals at Google, OpenAI, and Microsoft.

👉 Get the newsletter and make smarter AI decisions.

AI News

  • Meta CEO Mark Zuckerberg outlined a new AI vision focused on “personal superintelligence” via devices like smart glasses, shifting away from open-source development due to safety concerns. Meta may now keep its most advanced models closed, pausing its open Behemoth model as it pivots toward personalized, multimodal AI experiences. The move marks a sharp turn as China accelerates with open frontier models.

  • Amazon-backed Fable launched Showrunner, an AI platform that lets users create interactive, animated TV episodes with text prompts and personalized characters. The tool aims to reinvent storytelling as remixable and multiplayer, with future monetization and creator revenue sharing. Launching amid industry tension, Showrunner could reshape how content is made—and who gets to make it.

  • Google DeepMind introduced AlphaEarth, an AI system that fuses massive satellite and mapping data to create near real-time, high-detail environmental maps. It helps track changes like deforestation with more speed and accuracy than traditional tools. AlphaEarth simplifies how scientists and organizations monitor Earth’s transformation over time, offering new insights from space.

  • OpenAI has introduced new safety features in ChatGPT to better detect signs of mental distress and promote healthier interactions, especially ahead of GPT-5’s launch. These include distress-detection rubrics, resource-based responses, and chat nudges to prevent overreliance. It’s a proactive move as AI becomes more integrated into sensitive areas of users’ lives.

  • Google launched Kaggle Game Arena, a new platform where top AI models compete in strategic games like chess to benchmark real-time reasoning and adaptability. Models like Gemini 2.5 Pro and Grok 4 will battle it out, with plans to expand into more complex games. As standard benchmarks grow stale, this approach offers a fresh lens on AI problem-solving skills.

  • A new GitHub study of AI-savvy developers found that while many started out skeptical of AI tools, most now expect them to write 90% of code within a few years. Instead of feeling threatened, developers see their roles shifting toward managing and guiding AI, with growing demand for prompt design, oversight, and strategic thinking.

  • xAI has launched Grok Imagine, an AI image and video generator now available to SuperGrok and Premium+ X subscribers on iOS. It creates stylized 15-second videos from text or images quickly, though its outputs still look clearly AI-generated. While not yet industry-leading in quality, its speed and Musk’s “unfiltered” design approach could attract users looking for creative experimentation.

  • Google introduced Gemini 2.5 Deep Think, a multi-agent AI model that tackles complex problems by having several agents explore solutions in parallel. It builds on the gold-winning IMO model and outperforms top rivals in coding and reasoning tasks. Geared toward researchers and scientists, it reflects Google’s push to make AI a powerful academic tool rather than just a chatbot.

  • Anthropic researchers uncovered “persona vectors” — specific neural patterns linked to unwanted AI behaviors like sycophancy, racism, or hallucination. By identifying and controlling these activations, they’ve found ways to reduce such traits in models. The breakthrough could lead to safer AI systems by offering better tools to monitor and adjust behavior from the inside out.

  • Google DeepMind unveiled Genie 3, a real-time world generator that turns a single text prompt into interactive 720p environments with physics, memory, and consistency. Users can explore and modify these AI-created worlds on the fly, making it a breakthrough for gaming and embodied AI training. It’s a big step toward AI systems that can learn and adapt by interacting with complex, dynamic environments like humans do.

  • OpenAI released gpt-oss-120b and 20b, its first open-weight models since 2019, now topping Hugging Face's leaderboard. The models rival o4-mini and o3-mini in reasoning and can run locally on consumer hardware, with full Apache 2.0 licensing. This major shift gives developers powerful, modifiable models — and marks OpenAI’s return to supporting open-source AI at scale.

  • Anthropic launched Claude Opus 4.1, a refinement of its flagship model with better performance in coding, reasoning, and data analysis. It shows real gains in benchmarks and real-world developer workflows, especially with tasks like code refactoring. The release keeps Anthropic competitive as anticipation builds for OpenAI’s next big release.

  • OpenAI is offering ChatGPT Enterprise to all U.S. federal agencies for just $1 per agency for the next year, aiming to streamline government workflows and boost efficiency. The deal includes access to advanced models and tools like Deep Research, along with tailored training and a government user community. The move could trigger a competitive wave, with rivals like Google and Anthropic likely to follow suit.

  • Google is targeting the education sector with a new Guided Learning mode in Gemini, offering structured, step-by-step help instead of direct answers. It also made its $250/month AI Pro Plan free for students in select countries and pledged $1B for AI training in U.S. colleges. The effort reflects a growing trend: retooling AI tools to support—not shortcut—critical thinking in learning environments.

  • Microsoft introduced CLIO, a new framework that lets non-reasoning AI models build their own thought processes and refine them in real time. It improves performance by creating adaptive feedback loops and allowing users to steer reasoning paths, boosting GPT-4.1’s accuracy on tough biomedical tasks. CLIO signals a shift toward flexible, explainable AI that evolves after deployment, especially valuable in scientific and research-heavy fields.

Quickfire News

  • OpenAI announced Stargate Norway, its first European data center project, developed in partnership with Aker and Nscale.

  • Amazon is paying $20–25 million annually to license New York Times content for training and use in its AI platforms.

  • Neo AI launched NEO, a machine learning engineer agent built from 11 sub-agents, claiming state-of-the-art scores on ML-Bench and Kaggle tasks.

  • Anthropic is set to raise $5 billion in a new round led by Iconiq Capital, bringing its valuation to $170 billion—almost triple its March valuation.

  • YouTube is rolling out AI moderation tools that estimate users' ages using viewing history and other data to help protect minors.

  • A study from the Associated Press found that AI is used most for searching information, especially by young adults who also use it for brainstorming.

  • ChatGPT is expected to reach 700 million weekly active users this week, up from 500 million in March and four times higher than last year, according to OpenAI's Nick Turley.

  • Alibaba released Qwen-Image, a 20B open-source model for text-to-image generation with state-of-the-art text rendering and bilingual in-pixel text generation.

  • Elon Musk announced that Grok’s Imagine tool for image and video generation is now available to all X Premium subscribers via the Grok app.

  • Perplexity partnered with OpenTable, allowing users to book restaurant reservations directly from its answer engine and Comet browser.

  • Character AI is adding a social feed to its mobile app, letting users share their AI characters for others to chat and interact with.

  • Cloudflare reported that Perplexity is disguising the identity of its web crawlers to bypass websites that block AI scraping.

  • Mistral is reportedly seeking to raise $1 billion at a $10 billion valuation, with backing from VCs and Abu Dhabi’s MGX.

  • Apple created a new internal group called the “Answers, Knowledge, and Information” team to develop a ChatGPT-style app using online information.

  • OpenAI removed a ChatGPT feature that allowed conversations to be indexed by search engines like Google.

  • Anthropic cut off OpenAI’s access to its API, citing violations of its terms and heavy Claude Code usage by OpenAI engineers ahead of GPT-5.

  • Amazon CEO Andy Jassy said Alexa+, the company’s new AI assistant, could eventually include ads during conversations.

  • Apple CEO Tim Cook stated that the company is open to AI-related acquisitions that could accelerate its development strategy.

  • Meta plans to sell off $2 billion in data center assets as it scales up infrastructure for its superintelligence ambitions.

  • Perplexity acquired Invisible, a startup building a multi-agent orchestration platform, to expand its Comet browser for both consumer and enterprise use.

  • Google launched a Storybook feature in the Gemini app, letting users create custom storybooks with free read-aloud narration.

  • ElevenLabs introduced Eleven Music, a multilingual music generation model with customizable genre, style, structure, and editable lyrics and audio.

  • Elon Musk reported that Grok’s Imagine tool generated 20 million images in a single day due to rising user demand.

  • Alibaba released its Flash series of Qwen3-Coder and Qwen3-2507 models via API, featuring up to a 1 million-token context window and low-cost access.

  • Shopify added new agent-centric tools including a checkout kit for commerce widgets, fast global product search, and a universal shopping cart.

  • OpenAI announced a “LIVE5TREAM” event for today at 10 a.m. PT, teasing the launch of GPT-5 along with possible Nano and Mini variants.

  • Midjourney introduced a new HD video mode for Pro and Mega users, delivering 4x more pixels than SD and costing 3.2x more per generation.

  • Google’s Jules, an agent-based coding tool, exited beta with usage on the free tier now limited to 15 daily tasks, down from 60.

  • Anthropic added a /security-review command to Claude Code, allowing users to scan for code vulnerabilities directly from the terminal.

  • xAI’s Grok Imagine tool for video and image generation is rolling out in early access to Android users via the official Grok app.

  • OpenAI is in early discussions for a secondary share sale that could value the company at $500 billion, offering liquidity to current and former employees.

  • Chai Discovery raised $70 million in Series A funding from Menlo Ventures and the Anthology Fund to advance AI for molecular design.

Closing Thoughts

That’s it for us this week.

If you get any value from this newsletter, please pay it forward!

Thank you for being here!
