Good morning,

Sooo much to talk about. OpenAI released GPT-5.5 yesterday. That alone would warrant an entire newsletter. Mythos is disrupting cybersecurity, or perhaps not. The jury is still out.
Anthropic is allegedly planning to pull Claude Code from its Pro plans. If that happens, expect a FURIOUS response in this space. But in this newsletter I wrote about a worrying trend: how, slowly but steadily, an elite is being formed. The rich and powerful get access to the latest models, while the rest of the world has to make do with less power and compute.

I turned away from OpenAI and Sam Altman because I can spot a sociopath from miles away. The closer I look at Anthropic, the more I see similar personality traits in Dario Amodei. Perhaps you need to be a psycho to lead a frontier AI lab.

Anyway, here's my piece.

Hope you like it.

I am sitting in my home office in Gent. It’s drizzling outside -- proper Belgian weather, the kind that makes you want to stay indoors and drink something warm. I have a “speculooskoekske” disintegrating into my coffee and three browser tabs open on Anthropic's latest model card. And I am genuinely, thoroughly ticked off.

Last week I wrote about Mythos. I praised Anthropic for their restraint. I said locking up a 10-trillion-parameter coding god was the responsible thing to do. I stand by that take. But this week, seven days later, Anthropic has pulled a cynical stunt that puts them back in my “dislike” book.

They released Opus 4.7. And they told us -- in writing, in the model card -- that they intentionally damaged it before shipping.

Capability Laundering

Let me translate Anthropic's own language from corporate-speak into human:

"During its training we experimented with efforts to differentially reduce these capabilities."

Translation: we built the better model and then kneecapped it before letting you use it.

Here we have Anthropic explicitly admitting they went into the weights and lobotomized specific cybersecurity skills out of a frontier model -- a model that was already less capable than Mythos -- because they were afraid of what the lesser version could still do.

Opus 4.7's cybersecurity vulnerability reproduction score actually went down from Opus 4.6's. Every other benchmark rocketed upward: coding, vision, long-context reasoning, document analysis -- all massive jumps.

The Trick Underneath

Here is what is actually happening, and it is brilliant and awful in equal measure:

  • Keep the god model private. Call it Mythos. Form a coalition. Get press. Become the moral leader of the AI industry.

  • Ship the lobotomy to the public. Call it Opus 4.7. Charge the same API price as 4.6. Let it burn 35% more tokens. Run it on Enterprise plans billed on per-token consumption.

  • Pocket the revenue. $30 billion ARR and counting, doubling every month.

Monetized restraint. The best of both worlds.

And something everybody seems to have missed: the more capable Mythos model is also the more aligned one. Mythos scores 1.75 on misalignment; Opus 4.7 scores around 2.5. They kept the safer, smarter child in the basement and sent the less aligned sibling out to work at the call center.

What This Means for Your Security Stack

Opus 4.7 is going to be plugged into everything -- coding IDEs, customer service bots, internal chatbots, enterprise knowledge systems across every Fortune 500 company.

The model card admits, in a sentence most people skipped, that differential capability reduction is imprecise. You cannot surgically remove one skill without touching adjacent skills. Which means some of Opus 4.7's cognitive architecture is now slightly deranged by design -- twisted to suppress one talent, with spillover we do not fully understand.

I do not trust that. You should not trust that. A lobotomized frontier model is still a frontier model. It can still code. It can still find patterns in systems. It just now has deliberate scar tissue where offensive security used to live, and nobody knows what else got scarred along with it.

The Punchline

Dario Amodei told the Financial Times this week that open-source and Chinese models will reach Mythos-level capability in six to twelve months.

Six to twelve months. So the entire Project Glasswing coalition -- AWS, Apple, Microsoft, Google, CrowdStrike, all of them -- is a hardening window, not a solution. We are racing to patch the world's critical software before DeepSeek or Mistral or some anonymous Hugging Face account ships the same capability with no restrictions, no Glasswing, and no lobotomy.

Anthropic knows this. That is why Amodei said it out loud. The restraint is not "we won't build this." The restraint is "we will build this and sell it to you in a degraded form for a year while the competition catches up."

In My Opinion

This is not a safety strategy. This is a pricing strategy dressed in the clothing of a safety strategy.

I am not saying Anthropic is evil. I am saying they have discovered the most lucrative position any AI company can occupy: the responsible one who charges extra for restraint. Every lab is about to copy this playbook. Keep the frontier model private. Ship the cripple. Charge rent on both. Take credit for ethics while you bank the check.

I predict this will also play out in a more cynical way. The rich will be able to buy more tokens, more “compute”. The poor will be kept away from the frontier.

And the rest of us -- the security professionals, the developers, the enterprises plugging these models into critical systems -- we get to wonder, every single prompt, whether the model we are calling still has all its wits. Or whether Anthropic decided, in a training run we will never audit, that today was a good day to remove one more capability from the one we pay for.

Welcome to the Blacklynx Brief

AI News

The images in this week's newsletter are derived from the lyrics of a song, randomly selected by the latest Nano Banana model each week. Guess the song (scroll all the way down for the answer).

Anthropic Opus 4.7 launches, tops rivals, trails Mythos
Anthropic released Claude Opus 4.7, its new flagship public model, which scores 64.3% on SWE-bench Pro -- up from 53.4% with Opus 4.6 -- beating GPT-5.4 and Gemini 3.1 Pro on major benchmarks. The gated Mythos Premier model still leads the field at 71.2%, keeping a meaningful gap between Anthropic's public and frontier tracks. The release comes with a new "xhigh" effort level for Claude Code and an /ultrarun command, while API pricing stays identical to Opus 4.6. (Anthropic)

OpenAI reclaims the image crown with ChatGPT Images 2.0
OpenAI launched ChatGPT Images 2.0, which takes the top spot on Arena AI's text-to-image leaderboard by a wide margin over Nano Banana 2. The model thinks before generating -- planning, searching the web for references, and self-checking outputs -- with support for 2K resolution, up to 8 images per request, and wide aspect ratios. Sam Altman described the jump as "like going from GPT-3 to GPT-5 all at once," and the native web search integration puts it in a different class from existing rivals. (OpenAI)

Meta secretly logging employee keystrokes to train AI
Meta is running a program called the Model Capability Initiative that records screenshots, keystrokes, and mouse activity on U.S. employees' computers to train internal AI agents. The program focuses on developer activity in tools like VSCode and Meta's internal AI system Metamate. Roughly 8,000 employees scheduled to leave on May 20 will have their workflows captured for a full month before departure, with internal communications framing the surveillance as a way for all staff to help improve Meta's models. (Business Insider)

Sergey Brin personally leads DeepMind coding strike team to chase Anthropic
Google co-founder Sergey Brin has assembled a new DeepMind strike team under research engineer Sebastian Borgeaud, tasked with closing the gap between Gemini and Claude on coding tasks. An internal memo from Brin frames the real prize as AI capable of training the next generation of AI, with coding as the critical path to get there. DeepMind researchers reportedly rate Claude's code above Gemini's internally, and Gemini engineers have now been mandated to use Google's own agent tools on complex tasks, with usage tracked. (The Information)

Snap cuts 1,000 jobs citing AI productivity gains
Snap announced layoffs of 1,000 employees -- 16% of its workforce -- with CEO Evan Spiegel directly attributing the cuts to AI-driven productivity improvements. The company says AI now writes 65% of its new code and is restructuring around small AI-augmented pods in place of traditional teams. Snap's stock rose 7-9% on the news; the layoffs add to more than 70,000 tech job losses in 2026 so far, a wave that began with Block's 4,000-person cut in February. (Snap Newsroom)

OpenAI launches GPT-Rosalind for drug discovery and biological research
OpenAI introduced GPT-Rosalind, the first model in a new life sciences series built specifically for drug discovery and biological research. The model can read scientific papers, query lab databases, design experiments, and generate biological sequences, and on a blind RNA design test from gene therapy firm Dyno Therapeutics, it scored better than human PhD researchers. Rosalind is now available to qualifying enterprise users, with Amgen among the first companies testing it. (OpenAI)

Anthropic launches Claude Design to compete with Figma
Anthropic launched Claude Design, a tool that converts prompts, screenshots, and codebases into interactive prototypes with full brand-system generation. Users can refine designs through chat, inline comments, direct edits, or custom sliders, then export to Canva, PPTX, or hand off directly to Claude Code as a build-ready bundle. The competitive timing was notable: Anthropic CPO Mike Krieger resigned from Figma's board on April 14, three days before the launch. (Anthropic)

HubSpot's ex-Head of Paid shares his 2026 playbook

Rex Gelb spent a decade building HubSpot's paid engine. Now he's showing founders exactly how to do it.

On April 27th, get the framework to structure, launch, and scale paid media that drives pipeline, not just traffic. 20 minutes. Live Q&A. Free.

Quickfire News

  • Google open-sourced its DESIGN.md format from its Stitch tool, a portable specification file that lets AI agents read and apply a project's color system, accessibility rules, and design decisions.

  • Anthropic expanded its Amazon deal to secure up to 5 gigawatts of compute capacity, with Amazon committing a total of up to $25 billion in additional investment on top of the $8 billion already deployed.

  • GPT-5.4 Pro produced a proof for a 60-year-old unsolved conjecture about primitive sets, with mathematician Jared Lichtman saying the model found a path -- using von Mangoldt weights instead of the standard probabilistic approach -- that every expert in the field had missed since 1966.

  • Tinder and Zoom both partnered with Sam Altman's World to let users verify their identity with a World ID, bringing biometric digital identity to mainstream consumer apps.

  • Deezer reported that 75,000 AI-generated tracks are now published to its platform every day -- 44% of all new uploads -- but those tracks collectively attract only 1-3% of streams.

  • Windsurf 2.0 launched with an Agent Command Center that provides a Kanban view of all running local and cloud agents, plus native integration of Cognition's Devin cloud agent included with every plan.

  • Nous Research introduced Tool Gateway, a subscription feature that gives Hermes Agent users access to web search, image generation, text-to-speech, and browser automation without requiring separate API keys from third-party providers.

  • Vercel disclosed a security incident that originated through a compromised AI tool connected to Google accounts, with the breach affecting a limited subset of users.

  • Jerry Tworek, former VP of research at OpenAI, launched Core Automation, a new AI lab recruiting from Anthropic and DeepMind and focused on automating research workflows.

  • The U.S. government is preparing to grant federal agencies access to a modified version of Anthropic's Mythos model under safeguards being established by the White House Office of Management and Budget, even as a Defense Department blacklist against Anthropic remains in place.

  • Salesforce launched Headless 360, exposing its full platform -- CRM data, workflows, and business logic -- as MCP tools, APIs, and CLI commands so coding agents can interact with it directly without a human interface.

  • Yann LeCun publicly rebuked Dario Amodei's warnings about AI's impact on labor markets, writing that Amodei "knows absolutely nothing about the effects of technological revolutions on the labor market" and directing the public to listen to economists instead.

  • Tencent's Hunyuan team open-sourced HY-World 2.0, a multimodal world model that generates editable 3D environments from text or images with physics-aware movement.

  • Adobe debuted Firefly AI Assistant, a creative agent that lets users describe a desired outcome in plain language and then autonomously orchestrates multi-step workflows across Photoshop, Premiere, Lightroom, Illustrator, and other Creative Cloud apps.

  • An AI artist named Inga Rose reached No. 1 on iTunes' global charts with the single "Celebrate Me," produced using Suno for the music and AI for the lyrics.

  • Perplexity Personal Computer launched for Mac as a Max-tier feature, running agent teams across 20-plus frontier models to operate native Mac applications directly on the user's machine.

  • Anthropic CEO Dario Amodei told the Financial Times that open-source and Chinese AI models could reach the capability level of Mythos within 6 to 12 months.

  • Exa released Deep Max, an agentic search tool the company says tops existing rivals on accuracy benchmarks while running up to 20 times faster.

  • Meta poached three more employees from Mira Murati's Thinking Machines Lab, bringing the total number of the startup's founding members who have moved to Meta to seven.

  • Anthropic is switching Claude Enterprise pricing from flat-fee models to token-based consumption billing, a shift that could meaningfully raise costs for enterprise customers with heavy usage.

  • Google is in talks with Marvell to co-design a custom TPU and memory processing unit for AI inference, aiming to reduce the cost of running models at scale.

  • Recursive Superintelligence raised $500 million at a $4 billion valuation in a round led by GV with Nvidia participating -- the four-month-old London startup, founded by OpenAI and DeepMind alumni, has yet to officially launch.

  • OpenAI rolled out Chronicle, a Codex preview feature for Mac that runs background agents capturing periodic screenshots to build personalized memory and give the assistant persistent context about ongoing work.

  • Google released Gemini 3.1 Flash TTS, a new text-to-speech model supporting 200-plus audio tags for steering tone, pace, and accent across 70-plus languages, available on Google AI Studio and Vertex AI.

  • Alibaba's ATH team launched Happy Oyster in beta, a world model that generates interactive 3D environments from text prompts in real time, with unified audio-video generation and first- or third-person exploration.

  • Lovable denied reports of a data breach after users flagged that public project chats were visible to other users, stating the issue did not constitute an unauthorized breach of user data.

  • Genspark launched Build, a Claude Opus 4.7-powered vibe-coding tool that generates full apps and websites from natural-language prompts, now in public preview.

Closing Thoughts

That’s it for us this week. Please like and subscribe 🙂

The answer: Nick Cave & The Bad Seeds -- “Red Right Hand”
