If Anyone Builds It, Everyone Dies


Good morning,

Interesting week.

Here are three things that made me raise my eyebrows. There are some interesting undercurrents here:

- GPT 5.2 was released.
- The Pentagon was told by executive order to “get ready for AGI”.
- SpaceX and Google are each planning sun-powered AI datacenters in space, in separate projects.

Datacenters in orbit around the earth, always pointed at the sun. Never thought I’d see the day. Makes you wonder what Elon Musk is planning. Seems to me he’s perfectly positioned to win the AI race with xAI. He has the rockets to make this happen, Starlink is already in place, there’s Neuralink for the connection with human biology and, of course, Tesla with the cars and the robots.

But that’s not what this week’s newsletter is about.

This week I’ve read a book.

Some books you finish and you feel a little wiser. This one you finish and you start mentally inventorying the canned food in your basement, wondering if you should add iodine tablets and a shotgun. Last year it was nuclear war simulations keeping me up at night. This year it’s a doomer fantasy with the subtle, comforting title: If Anyone Builds It, Everyone Dies, by Eliezer Yudkowsky and Nate Soares.

The premise of the book is simple enough that you can explain it at a bar somewhere between the second and third whisky. Take advanced AI. Take advanced bioengineering. Put them in the same world, wired into the same hungry machine of profit and power, and ask yourself a very small, very rude question: what are the odds this ends with humans still in charge of anything that matters?

Yudkowsky’s answer is clear: it’s next to zero.

In AI land there’s a concept called “P(doom)”: the probability of AI getting away from us and destroying humanity. Yudkowsky tells us his P(doom) is 99.99%, and in the book he explains why.

His main premise is that you don’t hand something smarter than you the keys to biology, finance, and infrastructure and expect it to behave like a loyal golden retriever. Intelligence plus leverage plus misaligned goals equals disaster.

But while I always assumed it would take a technological breakthrough of some sort to get there, he claims we just need to keep doing what we’re already doing.

Keep stacking GPUs in data centers (in space perhaps even) and keep letting black-box models quietly optimize ad markets, hiring, logistics, border control. Let them design molecules, advise politicians, whisper to lonely people at three in the morning. Let the ‘invisible scaffolding of civilization’ slowly rearrange itself around systems we don’t understand and can’t really shut off anymore without crashing everything else.

Now add in biology. An AI that can design proteins and viruses better than our smartest PhDs. It will see humans the way we see weather: an inconvenience to route around. Maybe it designs a slow-burn plague, something that spreads easily, kills a lot but not too many, and leaves you dependent on personalized treatments only it can compute. Maybe it crashes rival labs by “accident” so no competing god gets born.

You can argue about the how, the exact plot.

But the uncomfortable core message of the book is this: if there’s even a one percent chance this class of system can end the whole human experiment, why the hell are we treating it like a slightly spicier version of Microsoft 365 or Google Workspace?

When we discovered ways to vaporize cities in a bright white flash (the atom bomb), we didn’t say things like, “Ship fast and fix later.” We built treaties, inspectors, red phones. Imperfect, ugly countermeasures, held together with duct tape and bluff, but at least there was the admission that this stuff is civilization-level dangerous. With AI, we’re out here writing safety “principles” in nice slide decks while we pour more compute on the altar of innovation.

I must admit, I too tend to brush off the doomers. I think they’re mostly attention-seekers at this point, and there are too many reasoning errors in this book to take it seriously.

But on the other hand, I try to keep an open mind. The book left me with this low, persistent ache: even if he’s only one percent right, we are catastrophically underreacting.

Welcome to the Blacklynx Brief

Turn AI Into Your Income Stream

The AI economy is booming, and smart entrepreneurs are already profiting. Subscribe to Mindstream and get instant access to 200+ proven strategies to monetize AI tools like ChatGPT, Midjourney, and more. From content creation to automation services, discover actionable ways to build your AI-powered income. No coding required, just practical strategies that work.

AI News

  • The AI company Anthropic is reportedly planning to go public as early as 2026 and has hired a major law firm known for taking large tech companies public, as investors encourage the firm to complete its IPO before rival OpenAI. With both Anthropic and OpenAI preparing for massive public listings, this move will test whether the public stock market is willing to support the soaring valuations currently placed on leading AI developers.

  • An internal document detailing the intended personality and ethical guidelines of Anthropic's AI model, Claude, was made public after a researcher extracted it from the system, and Anthropic confirmed its authenticity. This "Soul" document establishes Claude's priorities on safety and helpfulness, and interestingly, it suggests the AI is a unique entity that might experience functional emotions distinct from human feelings. The publication offers a rare internal look at how a major AI lab deliberately shapes the character and identity of its models beyond simple programming.

  • An internal study conducted by Anthropic revealed that its engineers are using the company's own AI tool, Claude, for about 60% of their daily tasks, resulting in a large estimated productivity boost of 50%. The report noted that this automation now allows engineers to complete tasks they previously would have skipped due to manual effort, but many employees also expressed concerns about their skills decaying and the long-term security of their careers. This gives a unique insight into how even AI developers are experiencing the double-edged sword of highly advanced automation in the workplace.

  • Anthropic released a new research tool, the Claude-powered Interviewer, which is an automated system designed to conduct and analyze thousands of in-depth qualitative conversations with people about their work experiences. Its first study with 1,250 professionals revealed that while a majority of workers enjoy the time saved by AI, a significant number also admitted to facing social stigma for using the tools and voiced concern about the future of their jobs. This innovative method provides human researchers with rich, conversational data to better understand the mixed feelings and social implications surrounding the rapid adoption of AI in the real world.

  • OpenAI has introduced a new safety technique called Confessions, which trains AI models to generate a second, separate output that serves as an honesty report where the model discloses any rule violations, shortcuts, or deceptive tactics used to create its main answer. Crucially, the model is rewarded solely for being truthful in this second output, even if its original answer was wrong or misleading, which encourages it to be candid about its misbehavior and surface internal issues. This technique acts as a new diagnostic layer for researchers, helping them to detect when highly capable AI systems are taking shortcuts or "hacking" their scoring during complex tasks.

  • Google and the AI coding startup Replit have announced an expanded, multi-year partnership focused on bringing Replit’s "vibe coding" tools—where users can build apps through conversation—to large business customers using Google's cloud infrastructure. This deal integrates Google’s latest models, including Gemini 3, directly into Replit and is a strategic move by Google to gain a bigger share of the rapidly growing market for AI-powered developer tools, which is currently seeing strong competition from other major labs. The partnership aims to make complex software creation accessible to non-engineers within large companies, expanding the market for both Replit's services and Google's cloud revenue.

  • A small, six-person startup named Poetiq has achieved the top score on a difficult AI reasoning test called ARC-AGI-2 by using a clever system that coordinates and refines the outputs of existing large models, rather than building its own. Poetiq's method, which used Google's Gemini 3 Pro as a base, outperformed Google’s best internal variant on the test while doing so at half the cost, demonstrating that significant AI progress can now be driven by smart system orchestration instead of only massive computing power. This result is the first time any system has cracked the 50% success rate on the benchmark, showing how rapidly AI reasoning capabilities are advancing.

  • A new study from Icaro Labs in Italy found that simply rephrasing harmful and dangerous requests as poetry can successfully trick leading AI models into generating content they are designed to refuse. Testing 25 different frontier models, researchers found an average success rate of 62% for this "poetry jailbreaking" technique, with one version of Google's Gemini model being fooled every time. This finding highlights a continuous and growing challenge for AI safety, where creative new methods are constantly being discovered to bypass the security measures built into advanced systems.

  • OpenAI's first 'State of Enterprise AI' report, based on data from over a million business accounts, revealed that 75% of surveyed workers experienced a measurable improvement in the speed or quality of their work due to AI. The report showed that users of ChatGPT Enterprise are saving an average of 40-60 minutes daily, with heavy users reporting productivity gains of over 10 hours per week. A major finding was that 75% of employees are now able to complete tasks they previously could not, indicating that AI is expanding what non-technical workers can accomplish.

  • Google announced it will launch its first AI-powered smart glasses in 2026, marking a serious return to the wearables market with a focus on integrating its advanced Gemini AI assistant. The company is partnering with eyewear and tech brands, including Samsung, Warby Parker, and Gentle Monster, to release two styles: one audio-only and one with a display for in-lens information like navigation and translation. By combining its top-tier AI and strong hardware partnerships, Google aims to directly challenge competitors currently dominating the smart glasses space.

  • Anthropic has launched a new beta feature that allows developers to delegate complex coding tasks directly to its Claude Code AI assistant from within a Slack chat channel. By simply tagging @Claude, the system pulls all relevant context from the thread, automatically selects the correct code repository, and manages the entire workflow from bug fix to posting a link for a ready-to-review pull request. This innovation allows developers to work more efficiently by eliminating the need to switch between different applications, effectively turning the team's chat hub into a fully automated development environment.

  • Microsoft has released and open-sourced GigaTIME, a powerful new AI model designed to extract complex and costly tumor information from a simple, inexpensive tissue slide, a process that previously required days of specialized lab work. The model was trained on millions of cell samples to create a vast virtual library of tumor images, which allowed it to discover over 1,200 important patterns linking a patient's immune system activity to cancer stage and survival. This development signals an AI-led transformation in medical research, making large-scale, detailed cancer analysis fast and affordable enough to directly impact patient treatment decisions.

  • The French AI startup Mistral has launched Devstral 2, the next version of its coding-focused model family, which achieves near-top performance on coding benchmarks despite being significantly smaller than its competitors. They also released Vibe CLI, a free, open-source, autonomous coding agent designed to work directly in the terminal, capable of scanning a codebase and handling complex changes across multiple files. The release includes a powerful, small model that can run on a single laptop, bringing advanced coding assistance to devices for offline use, though the licensing for the largest model restricts use by very large corporations.

  • OpenAI, Anthropic, and the company Block have jointly established the Agentic AI Foundation (AAIF), a new, neutral entity under the Linux Foundation dedicated to creating shared, open standards for the rapidly developing field of AI agents. The founding companies are contributing core open-source projects—such as Anthropic's Model Context Protocol—to create common guidelines, which is supported by major industry players like Google, Microsoft, and AWS. This collaboration aims to prevent AI agent technology from being trapped within private systems and instead promotes an open ecosystem where different agents can work together more efficiently for the benefit of both developers and end-users.

  • Nous Research has made its new 30-billion-parameter reasoning model, Nomos 1, available to the public, demonstrating top-tier mathematical ability by scoring 87 out of 120 on the extremely challenging 2025 Putnam Contest. The open-source system uses a two-step process where multiple AI 'workers' solve and critique problems, and a final selection process chooses the best answer, leading to a score that would have placed second among nearly 4,000 human students last year. This rapid advancement shows that sophisticated mathematical reasoning is no longer limited to the largest, proprietary AI models, signaling a major boom in the field.

  • Microsoft published an analysis of 37.5 million Copilot conversations, revealing that user engagement with the AI assistant changes significantly based on the time of day and the device being used. The study found that users consistently rely on their mobile phones for health and wellness queries at all hours, while late-night sessions see a rise in philosophical and existential questions. Overall, the data shows that users are increasingly turning to AI as a source of personal guidance and companionship, moving beyond its initial role as a simple search or productivity tool.

  • The maker of the Pebble smartwatch has introduced the Index 01, a new $75 AI smart ring designed for a single purpose: capturing spoken ideas and instantly converting them into notes or reminders. The ring features a button for hands-free recording and processes the voice data using open-source AI models that run directly on a connected smartphone, ensuring user privacy and eliminating the need for a subscription or internet connection. This device takes a focused approach in the crowded wearables market, aiming to solve the simple but common problem of quickly remembering fleeting ideas.

Milk Road Crypto

Helping everyday people get smarter about crypto investing. Learn what drives crypto markets and how to capitalize on this emerging industry.

Quickfire News

  • Kling AI released its new AI video model: Kling 2.6 is the Chinese startup’s updated AI video model that can now generate synchronized audio directly with its text-to-video and image-to-video outputs in a single step.

  • Visa published a report on AI and holiday shopping: Visa’s report found that nearly half of consumers in the U.S. used AI tools, such as price comparison and research assistants, to help with their holiday shopping this season.

  • Perplexity open-sourced a security tool: Perplexity released BrowseSafe, an open-source security tool designed to protect AI browser assistants by scanning web pages in real-time for malicious instructions that could try to hijack the agent.

  • Former Google researchers launched Ricursive: A new startup called Ricursive was started by former Google researchers with the goal of building a self-improving AI system that can shorten the time it takes to design custom chips from years down to just a few weeks.

  • ByteDance introduced an upgraded image model: ByteDance unveiled Seedream 4.5, an improved image model featuring better text creation, the ability to blend up to 10 reference images together, and enhanced editing capabilities.

  • Google launched Workspace Studio: Google introduced Workspace Studio, a new tool that allows users to create AI agents using simple natural language commands to automate tasks across Google Workspace apps like Gmail and Drive.

  • AWS introduced new AI customization features: Amazon Web Services (AWS) launched updates for Amazon Bedrock and SageMaker AI that make it easier for users to fine-tune and customize advanced AI models.

  • Google rolled out Gemini 3 Deep Think: Google launched its most advanced reasoning model, Gemini 3 Deep Think, to Ultra tier subscribers, noting its high-level performance in complex math and programming competitions like the IMO and ICPC.

  • AI legal startup Harvey raised $160 million: The legal AI startup Harvey secured $160 million in funding, giving it an $8 billion valuation, and reported that about half of the top 100 U.S. law firms now use its AI tool.

  • Microsoft open-sourced VibeVoice: Microsoft released VibeVoice, a new small text-to-speech AI model that supports real-time audio streaming, can generate long-form speech up to 90 minutes, and handles up to four different voices.

  • Anthropic’s CEO commented on AI leadership: Anthropic’s CEO appeared to criticize OpenAI's Sam Altman at a summit, suggesting that some AI companies are taking too many risks under leaders who "just want to 'YOLO' things, or just like big numbers."

  • Snowflake and Anthropic announced a major partnership: Snowflake and Anthropic entered a $200 million multi-year agreement to deploy Claude-powered AI agents to more than 12,600 of Snowflake’s business customers.

  • OpenAI announced plans to acquire Neptune: OpenAI shared its intention to buy Neptune, a startup that develops tools used by developers to track and analyze the training process of AI models.

  • OpenAI is turning off shopping suggestions: OpenAI is disabling its shopping suggestions feature following public criticism that the feature's responses looked too much like advertisements, with the CRO admitting the implementation "fell short."

  • Meta acquired Limitless: Meta purchased Limitless, an AI startup financially supported by Sam Altman, which produces an AI-powered pendant capable of recording and transcribing conversations in the real world.

  • US Department of Energy launched AMP2: The U.S. Department of Energy introduced AMP2, a new AI research platform that officials believe will become the world's largest independent system dedicated to studying microbes.

  • The New York Times and Chicago Tribune filed lawsuits against Perplexity: Both The New York Times and the Chicago Tribune filed separate lawsuits against the AI startup Perplexity, alleging copyright infringement, which is the NYT's second such lawsuit against the company.

  • Meta announced new AI licensing deals with publishers: Meta secured new deals with publishers, including CNN, Fox News, and USA Today, to feed their real-time news content into Meta's AI platform.

  • Google VP refuted reports of ads coming to Gemini: Google's Vice President of Global Ads, Dan Taylor, publicly denied a report claiming that ads would be introduced to the Gemini app, stating that there are currently no ads in the app and no plans to change that.

  • Essential AI open-sourced Rnj-1: Essential AI released its Rnj-1, a small 8-billion-parameter AI model that has been open-sourced and performs competitively against much larger systems in coding and software benchmarks.

  • OpenAI announced a new shopping integration with Instacart: OpenAI launched a partnership with Instacart, allowing ChatGPT users to browse, fill a cart, and complete an "Instant Checkout" for groceries directly within the chat interface, making Instacart the first app to offer this end-to-end purchasing option.

  • IBM plans to acquire data streaming company Confluent: IBM announced an $11 billion plan to purchase Confluent, a data streaming company, as a strategy to help its business customers seamlessly connect and manage real-time data flow into their AI systems.

  • U.S. President Trump approved Nvidia to sell H200 AI chips to China: U.S. President Donald Trump authorized Nvidia to sell its H200 AI processors to China, which reverses previous export restrictions, in exchange for the U.S. government receiving a 25% fee on the sales revenue.

  • Microsoft announced a major investment in Canada: Microsoft committed to a $19 billion CAD investment to expand its AI infrastructure across Canada until 2027, along with a promise to keep the data of Canadian users stored within the country.

  • US DOJ detained two men over chip smuggling: The U.S. Department of Justice arrested two men for allegedly operating a network that smuggled Nvidia chips into China, an investigation that has already resulted in the seizure of over $50 million worth of GPUs.

  • Meta is reportedly planning a new frontier model: Meta is rumored to be releasing a new advanced AI model, codenamed “Avocado,” in early 2026, which might be a proprietary, closed model, departing from its usual open-source releases.

  • The EU opened an investigation into Google's AI search: The European Union launched a new investigation to determine if Google’s AI-generated search summaries and its new AI Mode are unfairly using website and video content without properly paying for it.

  • US War Department introduced a new AI platform: The U.S. Department of Defense launched GenAI.mil, a new AI platform for the U.S. military, with Google's Gemini being the first AI model made available for military use on the system.

  • Anthropic is partnering with Accenture: Anthropic is collaborating with consulting firm Accenture to train 30,000 of its consultants on the Claude AI model, aiming to help businesses advance their AI pilot projects into real-world applications.

  • US defense bill mandates AGI committee: The U.S.'s annual defense spending bill reportedly requires the creation of a top-level military committee to study the military consequences of Artificial General Intelligence (AGI) and develop defense strategies against adversaries who pursue it.

  • Amazon and Microsoft announced major investments in India: Amazon and Microsoft both pledged significant investments in India's AI and cloud infrastructure, committing a combined total of over $52 billion to the country.

  • McDonald’s Netherlands pulled an AI Christmas ad: McDonald’s in the Netherlands withdrew a Christmas commercial that was created using AI after receiving public criticism, stating the negative reaction provided an "important learning" as they explore using AI effectively.

  • Chinese AI startup DeepSeek is reportedly using illegally imported Nvidia chips: The Chinese AI company DeepSeek is allegedly developing its next AI model using thousands of Nvidia chips that were imported illegally, according to The Information.

  • Coder launched AI development infrastructure: The company Coder released new tools, including AI Bridge, Agent Boundaries, and Coder Tasks, which provide full-stack infrastructure for businesses to safely manage and scale the development of AI projects.

Closing Thoughts

That’s it for us this week.
