My Short OpenClaw Journey

OpenClaw (previously called MoltBot, and ClawdBot) is the latest on the AI hype train. In short, it is an integration platform that connects whatever digital services that the bot owner has towards an LLM. This opens up for creating a digital assistant that gets access to email, calendar, and whatever services it is hooked up to. The LLM can be seen as the brain, while OpenClaw itself contains the state of the bot. OpenClaw quickly became the most starred repo on GitHub and is argued to be one of the reasons why self-hosted LLMs are ranking high on sites monitoring LLM traffic. Recently NVidia lauched NemoClaw which is a wrapper arround OpenClaw.

Why would you want a Mac Mini for OpenClaw?

Countless of videos and blog posts about OpenClaw follows the pattern that they open the video with “that you don’t need a Mac Mini to run OpenClaw”, with the remainder of the video how you would set it up. To clear things up, these are the reasons why someone would want to actually use a MacMini for this (they would typically never mention this in the blogs or videos):

  • The bot runs on its own device, which some find appealing as it runs on its own little physical box. This way you can put a sticker “HomeBot” on it.
  • People argue the unified memory with MacMini is good for running local LLMs (such as Qwen), but many argue these are not sufficient for OpenClaw and end up using external LLMs instead.

To me this is frustrating. Not only because I bought a Mac Mini (sadly with just 16 GB of ram), but because it gives me the vibes that someone is making me think I would need a Mac Mini for running this locally and this is not the case. Adding onto how so many of these posts look similar, it does not make the posts trustworthy and it appears more as a scheme.

Security by Optimism and Prayer

Having a system integrated into personal data opens up the risk of having this being leaked out considering the LLM itself is the engine making decisions on basis of which data the user has access to. OpenClaw has some security mechanisms where the bot owner can define some guardrails on what it should not do. For example “Don’t exfiltrate private data. Ever.”. These are instructions fed into the LLM to prevent it from misbehaving.

This security architecture can be compared with the following: Imagine a bank where customers are only asked to “Don’t withdraw more money than you have, and do not rob the bank. Ever.” without having any system that limits customers from robbing the bank by design. The bot owner has to trust that the LLM does not misbehave instead of knowing that there is a deterministic system that will not decide to empty someones bank account over night.

It is possible to restrict OpenClaw from integrating with certain systems by making it ask before making actions. However, this renders the entire purpose of OpenClaw rather pointless as it should be able to act on its own.

Fundamental Security Flaws

There are mainly two issues that prevented me from starting to use OpenClaw:

  1. Having a model that runs locally which is actually good. Sending the amount of information OpenClaw would need to a big tech company is not something I see as a viable option.
  2. Knowing that the bot will not start misbehaving, either because it decides to, or because someone is gaslighting it to do that. How can it be secured against prompt injects? How can it be secured against hallucinating and deciding to make destructive actions? How can it be secured against manipulation from other actors to leak data?

The first issue is easier to solve than the second, but it could also impact the second. The second is an architectural constraint of how an LLM works. This issue can be compared with how OpenAI’s agentic browser Atlas is vulnerable to prompt injections. On top of this, the bot will act as a remote code execution engine which will pick up content and decide on its own whether or not to make actions on what it sees on the net.

Seeing users of OpenClaw get surprised when their bank account is emptied, or their email account is purged reminds me of when people discovered that people could lie on the internet. My journey with OpenClaw ended with the two major issues I mentioned above before I got to set it up. And I do not find this reasoning very controversial. I love the use-case of it but unless these isseus are adressed, it is not something I want would use - and I would go as far as advicing others from using it. Seeing people set up this without much thought into the risk (and then have it backfire) appears as irresponsible use of technology. While some argue we should meet AI with “openness, curiosity, and willingness to learn”, it appears that critical thinking was left out of the process. As with most things related to generative AI, OpenClaw looks good on the surface, but when you dig into it, things fall apart.