NemoClaw FAQ

The real questions from Reddit, the NemoClaw Discord, and the community. Honest answers based on actual documentation and community findings, not marketing copy.

🖥️ Hardware & GPU Questions

Do I need an NVIDIA GPU to run NemoClaw?

No. This is the biggest misconception about NemoClaw.

For the default cloud inference mode, you need zero GPU. Your regular CPU handles the container orchestration, and NVIDIA's cloud servers handle the actual AI processing. You pay per token, and the compute happens remotely.

A GPU matters only if you want to run local AI models (experimental feature). For that, you need an NVIDIA RTX GPU with at least 8GB VRAM for smaller models. But this is an advanced feature, not a requirement to use NemoClaw.

For most business deployments, a regular office PC or workstation is completely sufficient.

Can an RTX 4070 Super run the models locally?

The RTX 4070 Super has 12 GB of VRAM. Here's the honest breakdown for local model inference:

Can run today:

  • Nemotron 3 Nano 4B, designed specifically for "resource-constrained hardware" and GeForce RTX. Runs well.
  • 7B models with Q4 quantization (~4GB VRAM)
  • 13B models with Q4 quantization (~7–8GB VRAM)

Cannot run locally:

  • Nemotron 3 Super 120B (needs DGX Spark or enterprise GPU with much more VRAM)
  • Nemotron Ultra 253B (enterprise hardware only)

Practical recommendation: Use cloud inference (default) for most tasks since it's more capable. Use local Nano 4B for privacy-critical tasks where you absolutely cannot send data anywhere outside your building. The 4070 Super handles that perfectly.
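The VRAM figures above follow from a simple rule of thumb: at Q4 quantization, each parameter costs roughly half a byte, plus some working overhead for the KV cache and activations. A minimal sketch (the 1 GB overhead figure is an assumption for illustration, not a vendor number):

```python
# Back-of-envelope VRAM estimate for local quantized inference.
# Rule of thumb only -- the 1 GB overhead is an assumption, not an
# official sizing guide.

def estimate_vram_gb(params_billion: float, bits: int = 4,
                     overhead_gb: float = 1.0) -> float:
    """Rough VRAM needed to load a model at the given quantization level."""
    weights_gb = params_billion * bits / 8  # Q4: ~0.5 bytes per parameter
    return weights_gb + overhead_gb

for name, size in [("7B @ Q4", 7), ("13B @ Q4", 13), ("120B @ Q4", 120)]:
    need = estimate_vram_gb(size)
    verdict = "fits" if need <= 12 else "does not fit"
    print(f"{name}: ~{need:.1f} GB -> {verdict} in 12 GB")
```

This matches the breakdown above: 7B and 13B models at Q4 land comfortably inside 12 GB, while the 120B model does not come close even before overhead.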

What are the minimum system requirements?

For cloud inference (the default and recommended path):

  • 4 vCPU (any modern quad-core)
  • 8 GB RAM (16 GB recommended for comfort)
  • 20 GB free disk (40 GB recommended)
  • Linux, macOS (Apple Silicon with Colima), or Windows with WSL2

Any office PC or laptop from the last 5 years meets these requirements. No GPU needed for cloud inference.

🔒 Privacy & Data Questions

Is NemoClaw completely private?

This deserves a more precise answer than "yes" or "no." Let's be specific about what's private and what isn't:

What never leaves your machine:

  • Your filesystem files (Landlock prevents the agent from even reading them)
  • Your local network and internal systems (network namespace isolation)
  • Your OpenClaw workspace on the host (outside the sandbox)

What does leave the building (by necessity):

  • The text content of what you send to the AI agent. This goes to NVIDIA's inference API (or Anthropic's, if configured). The AI model runs in the cloud; your messages must reach it.

What to know about NVIDIA's data handling: NVIDIA publishes enterprise data handling policies for its API. Unlike consumer AI tools (ChatGPT, Claude.ai), API usage typically isn't used for model training without consent. Check NVIDIA's API data handling agreement at build.nvidia.com for the current terms.

For truly air-gapped privacy: Use local inference (experimental). Nothing ever leaves your hardware. But this requires compatible GPU hardware and extra setup work.

Can the agent read my personal files?

No, and this isn't just a policy; it's physically enforced. Landlock LSM prevents any file access outside the approved paths at the Linux kernel level. The agent is allocated /sandbox and /tmp for read/write, plus some system files in read-only mode.

Your personal files, your client folders, your other applications' data: the agent cannot read any of it. From inside the sandbox, those locations don't even exist in a readable form.

If you need the agent to work with specific files, you must copy them into /sandbox, the agent's workspace directory inside the container.
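As a concrete sketch (the host-side mount point of /sandbox is an assumption here, not a documented path — check your own deployment for the real location):

```shell
# Hypothetical example: staging a file for the agent.
# The SANDBOX path below is an assumption, not documented by NemoClaw.
SANDBOX="$HOME/.nemoclaw/my-assistant/sandbox"

echo "q3,revenue,42000" > client-report.csv   # stand-in for a real client file
mkdir -p "$SANDBOX"
cp client-report.csv "$SANDBOX/"
# Inside the container, the agent now sees /sandbox/client-report.csv
```

The point is the direction of the copy: you push files into the agent's workspace; the agent can never reach out and pull them from elsewhere on the host.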

🔄 NemoClaw vs Plain OpenClaw

What actually differs between NemoClaw and plain OpenClaw?

The security stack, but also the model and inference routing. Here's the full comparison:

| Feature | OpenClaw | NemoClaw |
|---|---|---|
| Filesystem access | Full host access | /sandbox and /tmp only |
| Network access | Unrestricted | YAML allowlist only |
| Privilege escalation | Possible | Blocked (seccomp) |
| Audit trail | None | Every action logged |
| Default AI model | Whatever you configure | NVIDIA Nemotron 120B |
| Inference routing | Direct to any API | Through OpenShell gateway |
| Enterprise-ready | No | Designed for it |
| Setup complexity | Simpler | More involved (Docker + OpenShell) |
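The "YAML allowlist" network policy refers to NemoClaw's egress rules. The exact schema isn't shown in this FAQ, so the fragment below is a hypothetical illustration of what a per-agent allowlist might look like, not the documented format:

```yaml
# Hypothetical egress allowlist -- field names are illustrative only.
egress:
  default: deny
  allow:
    - host: integrate.api.nvidia.com   # NVIDIA cloud inference
      port: 443
    - host: api.anthropic.com          # optional Claude routing
      port: 443
```

Whatever the real schema looks like, the model is the same: everything is denied unless a rule explicitly allows it.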

Simple answer: OpenClaw is for personal productivity. NemoClaw is for business use with sensitive data. If you're processing client information of any kind, NemoClaw is the right choice.

Can I route inference to Claude instead of Nemotron?

Yes. NemoClaw routes inference through the OpenShell gateway, but you can configure the gateway to route to Anthropic (Claude) or other providers. The default is NVIDIA Nemotron cloud inference, but this is configurable.

The Anthropic API endpoint is on the default egress allowlist, so routing to Claude works without any policy changes. You'll need an Anthropic API key in addition to your NVIDIA API key.

Note: if you route to Claude, your messages go to Anthropic's servers (same as regular Claude API usage). The sandbox security still applies to filesystem and network access, but the AI inference itself leaves the building to reach Anthropic's infrastructure.

🤖 Models & Inference

Do I need an NVIDIA API key even if I only want local inference?

Yes. The NemoClaw onboarding wizard asks for an NVIDIA API key for the default cloud inference setup. If you want to switch to local-only inference, you still go through this initial setup and then configure local inference afterward.

Additionally, some OpenShell components need to authenticate with NVIDIA's registry to download the sandbox blueprint (the container setup instructions). Even for local inference, some NVIDIA authentication is involved in the setup process.

The NVIDIA API key has a free tier that's enough for setup and testing, so you can get started without spending anything initially.

What is NIM, and why isn't it running locally?

NIM (NVIDIA Inference Microservice) is NVIDIA's way of packaging and serving AI models as a local microservice. Think of it as a local version of NVIDIA's cloud inference: your GPU serves the model, your own hardware runs the API endpoint, and nothing leaves your machine.

Why it might not be running:

  • Experimental status: Local inference via Ollama and vLLM is explicitly "experimental" in the current alpha. NIM integration is even further out and not officially supported yet.
  • GPU requirements: Running large models locally requires significant VRAM (the 120B default model needs enterprise-level hardware)
  • No official setup path yet: Community members have gotten local inference working via iptables workarounds, but there's no clean official flow yet

What to do: Use cloud inference for now (it's inexpensive and works well). Watch the NemoClaw GitHub for when local NIM support is officially announced. It's clearly a priority feature.

🔧 Technical Questions

Can NemoClaw run natively on Windows without WSL2?

Not officially. NemoClaw's security sandbox (Landlock, seccomp, network namespaces) is built on Linux kernel features that don't exist in Windows. WSL2 provides a real Linux kernel running inside Windows, which is why WSL2 is the supported Windows path.

The NemoClaw plugin CLI (the control panel) could theoretically run on native Windows, but the sandbox itself must run in a Linux environment. WSL2 is that environment for Windows users.

How much disk space does NemoClaw actually need?

The NemoClaw sandbox image is approximately 2.4 GB compressed. Docker adds its own storage overhead on top of the images it stores. Realistic minimums:

  • Docker installation: ~500MB–1GB
  • NemoClaw sandbox image: ~2.4GB
  • OpenShell + k3s: ~500MB
  • Agent workspace + logs: grows over time, plan for 10–20GB

Total recommendation: 40GB free disk minimum, 100GB for comfortable operation over months.

Can I run multiple agents on one machine?

Yes. Each agent gets its own named sandbox. When you run nemoclaw onboard, you give the agent a name (e.g., "my-assistant"). You can create multiple agents with different names and different network policies.

Each sandbox runs its own OpenClaw instance with its own memory, workspace, and network policy. This is useful for separating different workloads (e.g., a "client-work" agent with strict policies and a "research" agent with broader access).

The practical limit is your available RAM, since each running sandbox uses memory for the container, OpenShell gateway, and k3s cluster.

Why don't the openclaw nemoclaw plugin commands work?

This is a known alpha limitation. From the NemoClaw README: "The openclaw nemoclaw plugin commands are under active development. Use the nemoclaw host CLI as the primary interface."

In other words: use the nemoclaw command (the host CLI) rather than openclaw nemoclaw (the plugin commands). The plugin commands exist but aren't fully implemented yet. This will be resolved in future releases.

All the commands you need today work via the direct nemoclaw CLI: nemoclaw onboard, nemoclaw connect, nemoclaw status, nemoclaw logs.
