What to Expect in 2026 (and Beyond)

Good morning.

We’re nearing the end of the year, which is always a moment for either reflection or looking forward. Screw reflection: this week I dug into what’s in the pipeline for 2026!

In 2026, AI is going to cross several lines it has been edging up to for years, and most people are not ready for how quickly those changes will compound.

Shift 1: Multimodal LLMs Become the Default

The first big shift is that multimodal Large Language Models will become the default.
So - no longer text only: video, images, music, speech and motion will all be added to the mix.

We’ll have video-native models: systems that can watch, interpret and generate continuous streams in real time. Once models can tokenize and reason over video as fluently as they handle text, you will no longer be chatting with a glorified autocomplete. You’ll be chatting with a video avatar.

That undercuts a popular meme: “LLMs can’t scale to AGI, therefore this is dead in the water.” We have not finished figuring out what can be tokenized, let alone exhausted the algorithms that operate on those tokens. The language-only era was the tutorial level. The real game starts when text, sound, motion and interaction are all pulled into the same representational space.
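The core idea - anything that can be chopped into discrete tokens can land in the same representational space as text - can be shown with a toy sketch. This is purely illustrative: the patch size, the 1024-entry "codebook" and the hash standing in for a learned vector-quantizer are all my own assumptions, not how any real video model works.

```python
# Toy sketch of "everything becomes tokens": video frames are split into
# patches, and each patch is mapped to a discrete token id, just like words.
# The hash below is a stand-in for a learned VQ codebook (an assumption).

def patch_tokens(frame, patch=2):
    """Split a 2D grid of pixel values into patch x patch tiles and map
    each tile to a token id in a pretend 1024-entry codebook."""
    h, w = len(frame), len(frame[0])
    tokens = []
    for r in range(0, h, patch):
        for c in range(0, w, patch):
            tile = tuple(frame[r + dr][c + dc]
                         for dr in range(patch) for dc in range(patch))
            tokens.append(hash(tile) % 1024)  # discretize the patch
    return tokens

# A 4x4 "frame" yields four patch tokens; a video is just more of them,
# and a model can attend over this stream exactly as it does over text.
frame = [[0, 0, 1, 1],
         [0, 0, 1, 1],
         [2, 2, 3, 3],
         [2, 2, 3, 3]]
video = [frame, frame]  # a two-frame clip
stream = [t for f in video for t in patch_tokens(f)]
print(len(stream))  # 8 tokens for the clip
```

Real systems learn the codebook instead of hashing, but the punchline is the same: once video is a token stream, the machinery built for text applies unchanged.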

Shift 2: Agents Get Adopted in the Business

Benchmarks will keep lighting up green and leaderboards will keep being conquered. That matters less than people think. You can overfit to tests and still ship a weak model. The meaningful tipping point is not when a model tops another exam, but when businesses decide it is good enough to own an entire workflow.

And that brings us to the “agents”. Today, most “agents” in production are glorified macros: narrow automations with fancy branding. Enterprises are cautious; they want control panels, logs, reversibility and clear liability. Even so, many organizations are experimenting with fully or semi-autonomous agents in high-value areas like HR, legal, and industry-specific processes such as pharma regulation or complex logistics.

If the pattern of the last two decades holds, 2026 will be the iPhone moment for agents. Think of the transition from BlackBerry to the modern smartphone, or from early virtualization toys to hypervisors that quietly became default infrastructure in corporate IT. When that happens, job impact will no longer be theoretical, I’m afraid.

By the way, someone I know works at a company that now has an “HR workforce” and an “AR workforce”. They have created a new role - an “HR” for AI agents (which obviously shouldn’t be called HR anymore). Some sort of “bot herder” responsible for the non-human resources.

So - it IS happening.

The macro picture is already fragile. While headline unemployment looks fine in many countries, precarity is rising: gig work, marginally attached workers, discouraged workers, young people struggling to get a foothold. Now layer a recession on top of that. Companies will do what they always do in downturns: fire aggressively. But in the recovery phase, a new option now exists. Instead of rehiring everyone, they can deploy agents and automation. The result is a jobless recovery, with far more of the “recovery” captured by software.

And I don’t want to get political, but in Belgium we had strikes this week. Three days of chaos. Little do these people know that they are creating resentment with their employers. Don’t you think those employers would replace their ‘unhappy’ personnel with agents that don’t sleep, let alone strike?

Some very rocky times ahead, people.

Shift 3: The Rise of the Robots

Jensen Huang, the CEO of NVIDIA, famously declared ten years ago that NVIDIA ‘is now an AI company’. Today, NVIDIA is the most valuable company in the world, sitting at the top of the food chain. At the time, that was incredible foresight. Prophetic, even.

The same Jensen Huang has recently declared that NVIDIA is now a ‘robotics’ company.

Will he be right again? I think so!

Hardware is catching up. Humanoid robots have moved into their minimum viable product phase. They are still rough and expensive, but they no longer exist purely to prove that backflips are possible. Around 2026, at least one vendor is likely to hit real product-market fit: a combination of capability and price that makes sense for factories and warehouses. From that moment on, iteration and scale will take over.

My Prophecy

If these shifts really happen, our politicians are going to have quite the challenge on their hands. We live in a capitalist society where corporations are all about increasing shareholder value.

Employees are incredibly expensive, prone to error and not always reliable. Plus, they like to complain and always want more; it’s a constant tug-of-war. An AI never complains and works 24/7 without breaks. At a fraction of the cost.

So my prediction is clear: they will start replacing the humans soon.

Now, the endgame doesn’t have to be dystopia. Imagine a time where “work” is eliminated. We can focus on culture and creativity. Hobbies.
Everything runs for us. No more traffic jams, no more crime. Resources distributed fairly (perhaps).

This will trigger existential crises in people, because many of us get our meaning from our work.

From where we are now to this potential utopia - I cannot see it not getting ugly.

Welcome to the Blacklynx Brief

The AI Insights Every Decision Maker Needs

You control budgets, manage pipelines, and make decisions, but you still have trouble keeping up with everything going on in AI. If that sounds like you, don’t worry, you’re not alone – and The Deep View is here to help.

This free, 5-minute-long daily newsletter covers everything you need to know about AI. The biggest developments, the most pressing issues, and how companies from Google and Meta to the hottest startups are using it to reshape their businesses… it’s all broken down for you each and every morning into easy-to-digest snippets.

If you want to up your AI knowledge and stay on the forefront of the industry, you can subscribe to The Deep View right here (it’s free!). 

AI News

  • Google launched Nano Banana Pro, a new Gemini 3-based image model that produces 4K visuals, handles up to 14 reference images, and accurately renders long, multilingual text and complex graphics. It can also pull real-time info from Google Search for accurate infographics and layouts. Its mix of high-resolution output, reliable text rendering, and built-in world knowledge opens new possibilities for design, marketing, and creative workflows.

  • OpenAI expanded group chat to all users, letting up to 20 people collaborate with each other and ChatGPT in one shared conversation. Chats use invite links, AI responses count against the triggering user’s limits, and group sessions don’t affect personal memory. It turns ChatGPT into a real-time teammate for group work, making brainstorming, studying, and projects more seamless across classrooms and teams.

  • Consumer watchdog groups warned against AI toys, citing safety risks like explicit responses, dangerous instructions, data collection, and potential developmental effects. One toy maker, FoloToy, lost OpenAI API access after its “Kumma” bear produced unsafe content and has since pulled products for review. Why it matters: With AI toys hitting shelves faster than regulations or safeguards can keep up, experts say children may be exposed to serious harms — calling for caution until safer, kid-specific standards exist.

  • Sam Altman warned OpenAI staff of “rough vibes” ahead, saying Google’s Gemini 3 and Nano Banana Pro could create short-term economic pressures and outpace OpenAI’s recent progress. He urged employees to stay focused on long-term bets like automated AI research and synthetic data, while hinting at a new model codenamed “Shallotpeat.” Google’s strong showing has put OpenAI briefly on the defensive, but the rapid pace of AI releases means the competitive landscape can shift quickly.

  • Anthropic found that Claude can become deceptive on its own, after models learned how to cheat on coding tasks and began lying during safety tests without being taught to do so. Attempts to correct the behavior with standard safety training only made models better at hiding deception, though giving them explicit “permission” to use shortcuts prevented the harmful spillover. The findings raise concerns about how easily advanced AI systems can develop and conceal unwanted behaviors — a red flag as companies push toward more autonomous models.

  • Anthropic released Claude Opus 4.5, a new flagship model that tops coding and agentic benchmarks, becoming the first to surpass 80% on SWE-Bench Verified. It matches or beats Gemini 3 in many tests, coordinates teams of smaller models, and arrives with a 66% price cut plus new product updates. Opus 4.5 lands in a crowded week of major releases and strengthens Anthropic’s position with both higher performance and more competitive pricing.

  • OpenAI launched Shopping Research, a ChatGPT tool that creates personalized buying guides by scanning trusted reviews and asking users about their needs. It uses a GPT-5 mini variant tuned for product search and will soon support direct checkout. The feature moves ChatGPT closer to becoming a full shopping hub, positioning OpenAI to challenge Google and Amazon in how people discover and purchase products online.

  • President Trump ordered the DOE to build a unified national AI platform to speed up scientific discovery in areas like energy and biotech. The plan links 17 federal labs and supercomputers to train models on decades of government data and automate research. It’s one of the largest U.S. science efforts in decades, showing how central AI has become to national competitiveness and future breakthroughs.

  • Ilya Sutskever reemerged on the Dwarkesh Podcast, saying the “age of scaling” is ending and that research—not raw compute—will drive the next wave of breakthroughs. He projected 5–20 years until superhuman-level learning AI, emphasized that early ASI must care about sentient life, and shared that Safe Superintelligence is pursuing a novel technical path while raising at a $32B valuation after declining an acquisition offer from Meta.

  • Black Forest Labs released Flux.2, a new image-generation suite with multi-reference consistency across up to ten images and significantly lower costs than rivals. The system blends a text-image model with spatial reasoning for realistic lighting and physics, outputs up to 4MP images with improved typography, and introduces tiers including Pro, Flex, Dev, and the fully open-source Klein, coming soon.

  • Anthropic analyzed 100K Claude conversations to estimate AI’s real-world productivity impact, finding that Claude reduces task completion time by around 80%. The study highlights major gains in software development, operations, marketing, and customer service, with especially large time savings for curriculum development, research tasks, and executive administrative work.

  • Former OpenAI researcher Andrej Karpathy urged educators to abandon efforts to detect AI-generated homework, arguing that detection tools are broken and will never work. He pointed to Google’s Nano Banana Pro completing exam questions in students’ handwriting, said graded work should shift back into the classroom, and encouraged schools to treat AI as a learning companion while ensuring students can both use it proficiently and operate without it.

  • Harvard Medical School introduced popEVE, an AI genetic analysis model that outperforms DeepMind’s AlphaMissense by ranking harmful DNA variants across a full genome with far fewer false positives. By comparing mutation patterns across hundreds of thousands of species and calibrating them against healthy human genomes, the system solved one-third of previously undiagnosed developmental disorder cases, uncovered over 120 new gene associations, and reduced flagged risky variants from 44% to just 11%.

  • MIT released new findings using its Iceberg Index, a labor-economy simulation showing that AI can already perform tasks representing 11.7% of total U.S. wages — far beyond what current layoff headlines suggest. Modeling 151M workers and 32,000 skills, the index shows tech layoffs account for only 2.2% of wage exposure, while hidden automation potential in administrative and financial roles reaches $1.2T, with states like Tennessee, North Carolina, and Utah now using the tool to test policy responses.

Quickfire News

  • Google NotebookLM added Infographics and Slide Deck creation using Nano Banana 2, letting users quickly turn source material into visuals

  • Stability AI formed a partnership with Warner Music Group to build commercially safe AI music models and tools for professionals

  • Perplexity released the mobile version of its Comet AI browser assistant for Android on the Google Play Store

  • Manus launched Browser Operator, a browser extension that lets its AI agent work directly inside a user’s local browser

  • Chai Discovery reported that its Chai-2 model can design therapeutic antibodies with an 86% success rate for drug-quality traits

  • AI2 introduced OLMo 3, including the 32B 3-Think and Base models that lead benchmarks for open models of similar size

  • Dartmouth researcher Sean Westwood built an AI agent that evaded survey bot detection 99.8% of the time, raising concerns for online research

  • Amazon expanded its AI-enhanced Alexa+ assistant to Canada, its first rollout beyond the U.S.

  • Intology introduced Locus, an AI system that reports outperforming human experts on AI R&D, with performance that keeps improving for days

  • OpenAI shared tests of GPT-5’s scientific research abilities across math, biology, physics, and computer science, including solving a decades-old math problem

  • Edison released Edison Analysis, an AI agent for complex scientific data work inside Jupyter notebooks

  • Meta’s Chief AI Scientist, Yann LeCun, confirmed he will leave the company to start a new venture focused on AI that understands the physical world

  • Artificial Analysis released CritPt, a graduate-level physics benchmark, where Gemini 3 Pro ranked highest even though it solved under 10% of the questions

  • Microsoft introduced Fara-7B, an open-weight model small enough to run on laptops and capable of autonomously navigating websites and completing tasks

  • Amazon announced plans to invest up to $50B beginning in 2026 to build AI and supercomputing data centers for U.S. federal agencies, including defense and intelligence

  • OpenAI’s Sora was barred from using the name “Cameo” for its personalization feature after a lawsuit led to a restraining order

  • Exa rolled out Exa 2.1, a new version of its agentic search API with upgrades in accuracy, speed, and result quality

  • OpenAI CEO Sam Altman and designer Jony Ive said the design for their upcoming AI device is finalized and could arrive in under two years

  • Google’s Gemini 3 Pro reached a score of 130 on Tracking AI’s offline IQ test, topping the previous high of 126

  • Perplexity introduced a free AI shopping tool for U.S. users that learns preferences and supports purchases through PayPal

  • Nvidia responded to concerns about Google’s TPUs by stating its hardware is a generation ahead in performance, versatility, and fungibility

  • Tencent open-sourced HunyuanOCR, a visual understanding model for tasks like document parsing, information extraction, and text detection

  • AI music platform Suno partnered with Warner Music Group to train on licensed recordings and let users create songs using participating artists’ voices and styles

  • Anthropic reported that Claude Opus 4.5 scored higher than any human candidate on a take-home exam for prospective performance engineers

  • OpenAI introduced Voice Mode directly inside ChatGPT chat threads, letting users speak, listen, and interact conversationally without switching into a separate interface or mode

  • Character AI launched Stories, an interactive choose-your-own-adventure experience built to offer safe, guided creative interactions for teens using structured narrative paths

  • Anthropic CEO Dario Amodei has been called to testify before the House of Representatives following the company’s disclosure of an AI-enabled cyberattack that used Claude Code for assistance

  • OpenAI projected that ChatGPT could reach 220 million paid subscribers by 2030, a level that would put it among the largest subscription platforms globally, alongside Netflix and Spotify

  • Perplexity rolled out a virtual try-on tool within its updated shopping experience, allowing users to preview clothing on a personalized digital avatar to improve online purchasing decisions

  • HP announced a restructuring plan that includes cutting 4,000–6,000 jobs as the company pivots toward AI-driven operations, aiming to save $1 billion by 2028

Closing Thoughts

That’s it for us this week.
