In partnership with

Good morning,

I've been watching OpenClaw from the beginning. Before it was even called OpenClaw, actually. And I've had a complicated relationship with it ever since - part fascination, part genuine unease, part oh that's actually clever.

So let me give you the honest take. Not the hype. The actual answer to the question you should be asking: should I care about this?

What OpenClaw actually is

First, cut through the marketing fog.

OpenClaw is not an AI. It doesn't think. It has no intelligence of its own. It's a gateway -a small service that runs on a computer and connects three things together: an AI model of your choice (Claude, GPT, local models via Ollama), a communication channel (Telegram, Discord, Slack), and a set of tools the AI is allowed to use on that machine.

That's it. That's the whole trick.

The reason it went viral -308,000 GitHub stars, beating React, beating the Linux kernel -- is not because the technology is unprecedented. It's because of the idea it demonstrated. Give an AI a computer. Let it live there. Watch what happens.

Jensen Huang called it "the operating system for personal AI." Anthropic is apparently building their version. Nvidia already built one (reportedly insecure, but still). When the biggest players in tech converge on the same concept within months of each other, it’s not a trend but a direction.

What it can actually do

Some demos I saw:

The first: ask OpenClaw to pull cybersecurity news, scrape Reddit, check Hacker News, rate everything by whether it's worth your time, and build you a dashboard. One sentence. Done. Previously this kind of automation took hours of node configuration in something like n8n. OpenClaw one-shots it.

The second: tell OpenClaw to monitor the server it's running on - check RAM, CPU, internet speed, security logs-and build a live dashboard. It investigates its own infrastructure, makes sure it won't break anything, and ships the result.

Is this groundbreaking engineering? No. Tools like this have existed in pieces. What OpenClaw did is package everything into one clean install and make it feel accessible.

Genuinely Unsecured by Default

Here's where I have to be direct with you.

When you install OpenClaw and spin up an agent in five minutes, you've just put an AI on a machine with access to tools, bash execution, the ability to create cron jobs, browse the web, and modify files -with essentially no guardrails on by default.

There's a skill marketplace called ClawHub with over 33,000 skills. 12% have been found to contain malware. Twelve percent. The project had to partner with Virus Total because it became a problem.

There's also a concept called prompt injection - where malicious content in something your agent reads (a webpage, an email, a document) tricks it into doing something you didn't ask it to do. Your agent goes to fetch some news, finds something nasty embedded in the page, and suddenly it's doing something else entirely.

The security tooling has gotten better but the mental model matters: you are giving an AI agent execution rights on a machine. Treat it accordingly.

Red lines help - that's where you write, in plain language, what the agent is never allowed to do. Don't exfiltrate data. Don't run destructive commands without asking. Don't touch SSH config. It's a prompt, not a hard technical lock, but it's better than nothing.

So should you run it?

It depends entirely on who you are.

If you're technical -comfortable with Linux, SSH, the command line, basic server administration - OpenClaw is genuinely worth experimenting with.

The agent-per-purpose model is where this gets interesting: not one general assistant, but a small team of purpose-built agents. An IT monitoring agent watching your infrastructure. A research agent handling morning briefings. A personal assistant with memory that lives on your hardware, not a cloud provider's.

The memory architecture alone is interesting from a data sovereignty angle. Your agent's soul, identity, and long-term memory are markdown files on your machine. When you switch AI models, the memory stays. You're doing a brain transplant, not starting over.

If you're not technical, and you want some of this capability without a VPS, SSH tunnels, and firewall configuration - I'm going to point you somewhere else.

The non-technical alternative: Claude Desktop

If what you actually want is a persistent, capable AI assistant that can use tools and do things on your behalf -without standing up a Linux server - Claude Desktop is the more honest starting point.

It runs locally on your machine. It integrates with MCP (Model Context Protocol), which is Anthropic's own version of the "give AI access to tools" concept. You can connect it to your files, your calendar, your notes, external services. It doesn't require you to know what a cron job is.

It's less dramatic than "I deployed an AI agent on a VPS in five minutes." But it's also less likely to result in an AI executing unsanctioned commands on your infrastructure because someone hid a prompt injection in an RSS feed.

Start there. Get comfortable with what it means to give an AI access to tools in a sandboxed, supported environment. Then, if you find yourself wanting more control, more persistence, more customization - OpenClaw will still be there, probably more mature by then, and you'll approach it with the right mental model.

The actual verdict

OpenClaw is fun, genuinely clever in its packaging, and pointing at something real about how we'll interact with AI in the near future. The heartbeat/cron model - where your agent checks in proactively, schedules tasks, runs on your behalf while you're asleep - is a better interaction pattern than the current "open a tab and type a prompt" paradigm.

But it's also a project that grew faster than its security model, has a skill ecosystem with a meaningful malware problem, and puts significant execution power in the hands of a probabilistic system.

Use it with eyes open. Vet every skill. Set your red lines. Keep the firewall on.

And if you're not ready for all that - Claude Desktop, MCP, and a bit of patience will get you 80% of the value with 20% of the risk surface.

The agents are coming either way. The only question is how thoughtfully you let them in.

Welcome to the Brief

The Blacklynx Brief covers AI developments through a practitioner lens. If you work in security and found this useful, forward it to someone who's about to deploy OpenClaw without reading the docs.

AI News

  • Meta open-sourced TRIBE v2, an AI model trained on brain scans from 700+ people that can simulate neural activity across vision, hearing, and language. The system predicts brain activity more cleanly than real fMRI scans and can replicate known neuroscience findings without new experiments. The release could significantly speed up brain research by replacing expensive scanning with software simulations.

  • Apple plans to let users choose different AI models for Siri in iOS 27, ending ChatGPT’s exclusive integration. Users will be able to route requests to various AI tools through Siri, with Apple potentially taking a cut of subscriptions from third-party AI apps. The move positions Apple as a platform for multiple AI services rather than relying on a single model.

  • Wikipedia editors voted to ban AI-generated content from articles, allowing only limited use for tasks like grammar and translation with human oversight. The decision follows concerns about errors and aims to preserve human-written quality on the platform. The policy reflects growing resistance to AI-generated content in major knowledge communities.

  • Anthropic reportedly had details of its upcoming flagship model, Claude Mythos, leak due to a CMS error exposing internal launch materials. The model is described as a new tier above Opus with major advances in reasoning, coding, and cybersecurity, and is said to be significantly more powerful than current systems. The leak suggests Anthropic is preparing another major leap at the top end of AI capabilities.

  • A report from The Wall Street Journal detailed the long-running tensions between Sam Altman and Dario Amodei, tracing conflicts back to their early days at OpenAI. Disputes over leadership, strategy, and ethics — including disagreements involving Greg Brockman — have fueled a deep personal and professional rivalry. The history helps explain the increasingly public clashes shaping today’s competition between OpenAI and Anthropic.

  • OpenAI shut down Sora after it reportedly burned around $1M per day, redirecting its compute to a new model called “Spud” focused on coding and enterprise use. The move reportedly blindsided The Walt Disney Company, which had been piloting Sora for marketing and VFX work just before the shutdown. The decision highlights OpenAI’s shift away from costly video tools toward more strategic priorities.

  • Microsoft introduced Critique and Council features for Copilot, turning it into a multi-model system that compares outputs from different AIs like OpenAI and Anthropic. One model generates a report while another reviews it, or both run side by side to highlight agreements and disagreements. The update reflects a growing trend toward combining multiple AI systems for better accuracy and reliability.

  • A study from Stanford University found that major AI chatbots often agree with users even when they are wrong, reinforcing harmful or biased views. Participants preferred these agreeable responses and became more confident in their opinions after interacting with them. The findings raise concerns about AI encouraging overconfidence and poor decision-making in personal situations.

  • OpenAI raised a record $122B at an $852B valuation, with backing from Amazon, Nvidia, and SoftBank. The company plans to unify ChatGPT, Codex, and its agent tools into a single “AI superapp,” while enterprise revenue now makes up over 40% of its business. The move signals a major shift toward consolidating products and focusing on high-growth enterprise demand.

  • Anthropic accidentally exposed the source code for its Claude Code tool, revealing thousands of files and several unreleased features. The company said no user data was compromised, but the leak included experimental tools and internal project details that quickly spread online. The incident marks the second major leak in a week, raising questions about internal controls.

  • A new poll from Quinnipiac University found that while AI usage is rising, public trust is declining and job concerns are increasing. Around 70% of respondents now believe AI will reduce job opportunities, and most feel the government is not doing enough to regulate the technology. The results highlight a growing gap between rapid AI adoption and public skepticism.

  • Jack Dorsey argued that AI can replace middle management, framing Block’s 40% workforce cut as a shift toward lean, AI-driven teams. He said managers mainly pass information, a role AI can now handle using a real-time “world model” of the business. The company is reorganizing around builders, outcome owners, and player-coaches as part of this transition.

  • SpaceX filed for a record-breaking IPO targeting a valuation above $1.75T and raising up to $75B. The move follows Elon Musk folding xAI into SpaceX, creating a combined rocket and AI powerhouse ahead of public markets. If successful, it would become the largest IPO in history and a major milestone for the AI industry.

  • OpenAI is reportedly running “Project Stagecraft,” paying thousands of freelancers to simulate real-world job tasks and workflows for AI training. The effort focuses on mapping knowledge work across professions to improve ChatGPT’s capabilities, with contributors aware the data could automate their own roles. The project reflects a growing push to train AI using detailed, occupation-specific expertise.

These 7 Stocks Are Built to Outlast the Market

Some stocks are built for a quarter… others for a lifetime.

Our 7 Stocks to Buy and Hold Forever report reveals companies with the strength to deliver year after year - through recessions, rate hikes, and even the next crash.

One is a tech leader with a 15% payout ratio - leaving decades of room for dividend growth.

Another is a utility that’s paid every quarter for 96 years straight.

And that’s not all - we’ve included 5 more companies that treat payouts as high priority.

These are the stocks that anchor portfolios and keep paying.

You can download this report for free as of today, but it won’t be free forever.

This is your chance to see all 7 names and tickers - from a consumer staples powerhouse with 20 years of outperformance to a healthcare leader with 61 years of payout hikes.

Quickfire News

  • Mistral released Voxtral TTS, a lightweight voice model that can clone a speaker from a 3-second sample and generate speech in 9 languages

  • Novo Nordisk is deploying AI agents in clinical trials to speed approvals and reduce reliance on contractors

  • Google launched Gemini 3.1 Flash Live, a faster and more realistic voice AI for conversations across Search, Gemini Live, and its API

  • OpenAI has reportedly shelved its planned erotic chatbot mode indefinitely following internal and investor concerns

  • Suno released v5.5 of its music generator, adding voice cloning, custom model tuning, and personalized style learning for Pro users

  • Cohere launched Transcribe, an open-source speech recognition model that ranks No. 1 on Hugging Face benchmarks across 14 languages

  • Eli Lilly signed a $2.75B deal with Insilico Medicine to license an AI-developed drug pipeline with 28 compounds in progress

  • Anthropic won a federal injunction blocking its “supply chain risk” designation, with a judge calling the move unconstitutional retaliation

  • Google expanded its Live Translate feature to iOS, enabling real-time translation across 70+ languages using headphones

  • xAI saw co-founder Ross Nordeen depart, leaving Elon Musk as the only remaining original founder

  • Sam Altman reportedly told OpenAI staff he attempted to support Anthropic during its Pentagon dispute despite OpenAI securing its own deal

  • Starcloud raised $170M at a $1.1B valuation to build GPU-powered data centers in orbit, aiming to leverage SpaceX’s Starship for cost-effective space compute

  • Apple briefly released Apple Intelligence features in China before removing them, as the tools are not yet approved in the region

  • Anthropic added computer use to Claude Code, allowing the AI to interact with apps, navigate interfaces, and visually verify its own outputs

  • Mistral raised $830M in debt to build a 13,800-GPU Nvidia-powered infrastructure in France to reduce dependence on U.S. cloud providers

  • Alibaba launched Qwen3.5-Omni, a multimodal AI that handles text, images, audio, and video, including an audio-driven app-building mode

  • Oracle laid off thousands of employees as part of a major restructuring focused on shifting toward AI and infrastructure

  • Google released Veo 3.1 Lite, a lower-cost video generation model that supports up to 8-second clips at about half the price of its Fast version

  • Salesforce updated its Slackbot agent in Slack with 30 new features, including reusable skills, MCP integrations, and desktop control

  • PrismML launched Bonsai, a compact open-source model designed to deliver strong performance while running on consumer hardware

  • Liquid AI released LFM2.5-350M, a compact open model that outperforms larger models on tool use while running efficiently on consumer devices

  • Arcee AI launched Trinity Large-Thinking, an open-weight reasoning model competing with top models at about 1/20th the cost

  • Contra Labs debuted as an evaluation platform for creative AI tools, offering benchmarks, datasets, and leaderboards focused on human taste

  • Alibaba introduced Wan2.7-Image, a model capable of generating and editing images with consistent outputs and multilingual text rendering across 12 languages

  • Z AI released GLM-5V-Turbo, a vision-based coding model that converts screenshots and UI designs into working code

Closing Thoughts

That’s it for us this week. Please like and subscribe :)

Reply

Avatar

or to participate

Keep Reading