How to Run OpenClaw on Cordatus.ai

OpenClaw is an AI agent system that uses LLMs to take real actions (tool calling) rather than just generate responses: it can execute commands, connect to APIs, and automate tasks. It is installed and managed through a gateway + CLI-based architecture running on the device, and it integrates with external model providers. In short: it is an AI runtime that “does work, not just chat.” In this article, we will cover how to install and deploy OpenClaw on a remote device through the Cordatus platform.
The process is quite straightforward: you select your model source, and Cordatus handles the rest. Required dependencies are automatically installed on the remote device, and OpenClaw becomes directly accessible through the Cordatus Web Terminal in your browser.
Before You Begin
There are two fundamental prerequisites:
- The device must be online. The target device on which OpenClaw will run must be active and connected on Cordatus. This operation cannot be initiated on an offline device.
- Access to a tool-calling capable LLM is required. OpenClaw needs a language model with function calling (tool calling) capability to carry out its tasks. This model can be a container already running on Cordatus, or an external HTTPS endpoint.
Step 1: Launching OpenClaw
Navigate to the Devices page in the Cordatus control panel. Click the “Run OpenClaw” option from the actions menu on the row of the relevant device.
At this point, Cordatus connects to the target device and performs a series of preliminary checks, with informational notifications displayed in the interface throughout the process. NVM status, the Node.js version, and whether OpenClaw is installed are checked in sequence.
If an existing and valid OpenClaw installation is detected on the device, the Web Terminal opens directly. Otherwise, you are redirected to the model selection screen.
Step 2: Model Selection
OpenClaw needs to be connected to an LLM model in order to function. The dialog that opens at this step offers two different methods:
Assign from Container
With this option, Cordatus lists the model containers currently running under your account. You can select any tool-calling-capable model that is on the same network or the same device. After selection, the container’s IP address and port are detected automatically; no additional configuration is needed.
The key point to note here is that the selected model must support tool calling. Models without this capability are not compatible with OpenClaw.
Remote Endpoint
If your model is running on a different server or you wish to use a cloud-based LLM service, you can choose this option.
Enter the endpoint URL in the relevant field. There is an important detail to note here: the URL must not end with /v1. Cordatus automatically appends this path during model querying.
When a valid URL is entered, Cordatus queries the server’s /v1/models endpoint and presents the available models as a dropdown list. Simply select the model to be used from this list.
If the endpoint server requires authentication, one of the API keys previously saved to Cordatus can be selected. For publicly accessible servers, this field can be left empty.
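The URL handling described above can be sketched as a small shell helper. This is a hypothetical illustration (`build_models_url` is not a Cordatus command; the platform does this internally), but it shows exactly why the trailing /v1 must be omitted:

```shell
# Hypothetical sketch of Cordatus's URL handling: the endpoint you enter
# must NOT end in /v1, because Cordatus appends that path itself.
build_models_url() {
  base="${1%/}"                     # drop a trailing slash, if any
  case "$base" in
    */v1)
      echo "error: enter the URL without the /v1 suffix" >&2
      return 1
      ;;
  esac
  echo "${base}/v1/models"          # the path Cordatus queries for the model list
}

build_models_url "https://server.example.com:8001"
# -> https://server.example.com:8001/v1/models
```

Given a valid base URL, Cordatus lists whatever the server’s /v1/models endpoint returns.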
Step 3: Installation and Configuration
After the model selection is confirmed, Cordatus initiates the automatic installation process on the target device. Each stage can be tracked through the interface.
NVM Installation
First, Node Version Manager (NVM) is checked on the device. If it is not present, it is installed automatically. This step is necessary for subsequent Node.js management.
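As a rough sketch, the presence check is equivalent to looking for the NVM script in its standard location (an assumption; Cordatus’s actual check may differ):

```shell
# Rough equivalent of the preflight NVM check (assumes the standard
# install location under $HOME/.nvm).
if [ -s "$HOME/.nvm/nvm.sh" ]; then
  echo "nvm: installed"
else
  echo "nvm: missing"   # Cordatus would install NVM automatically at this point
fi
```

If NVM is missing, Cordatus handles the installation itself; a manual install would use NVM’s official install script.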
Node.js 22 Installation
OpenClaw requires Node.js v22 or v24. If the appropriate version is not available on the device or the current version is outdated, the Node.js 22 installation or upgrade is automatically performed via NVM.
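The version gate can be sketched as follows (the actual logic is Cordatus-internal): major version 22 or 24 passes; anything else, including a device with no Node.js at all, triggers the NVM-based install:

```shell
# Sketch of the Node.js version gate: v22 and v24 pass; everything else
# (including no Node at all) would trigger an NVM-driven install of Node 22.
major="$(node --version 2>/dev/null | sed 's/^v\([0-9]*\).*/\1/')"
case "$major" in
  22|24) echo "node ok: v$major" ;;
  *)     echo "node install/upgrade needed (found: ${major:-none})" ;;
esac
```

On a device that fails this check, the automated step is presumably the equivalent of `nvm install 22` followed by `nvm use 22`.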
OpenClaw Installation
Once the Node.js environment is ready, the OpenClaw package is installed globally via npm.
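The exact command is not surfaced in the UI; the likely equivalent is a global npm install. The package name `openclaw` is an assumption here, inferred from the `openclaw` CLI invoked later, and should be verified against the official OpenClaw documentation:

```shell
# Assumed equivalent of the automated step; the package name `openclaw`
# is inferred from the CLI used later and may differ in your release.
npm install -g openclaw || echo "install failed (check network and package name)"

# Afterwards the CLI should resolve on PATH:
command -v openclaw || echo "openclaw not on PATH yet"
```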
Model Configuration
In the final stage of installation, Cordatus writes the selected model information to the OpenClaw configuration file (~/.openclaw/openclaw.json). This file contains model provider information (baseUrl, apiKey), model parameters (contextWindow, maxTokens), and gateway settings. The configuration is generated entirely automatically; the user does not need to deal with this file manually.
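The schema below is illustrative only, assembled from the field names mentioned above; every value is a placeholder, and the real file Cordatus writes may be structured differently:

```json
{
  "provider": {
    "baseUrl": "https://server.example.com:8001/v1",
    "apiKey": "<your-api-key>"
  },
  "model": {
    "contextWindow": 32768,
    "maxTokens": 4096
  },
  "gateway": {
    "port": 8080
  }
}
```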
Notifications are displayed in the interface at each step of the process. The current stage can be tracked in real time, from NVM checks through to OpenClaw installation.
Step 4: Using OpenClaw via Web Terminal
Once installation and configuration are complete, Cordatus automatically opens a Web Terminal window and runs the openclaw tui command. This command launches OpenClaw’s interactive text user interface (TUI).
Through the terminal, you can interact directly with the AI assistant, issue commands for coding tasks, and monitor model interactions. Since the Web Terminal operates over the WebSocket connection that Cordatus maintains with the device, no additional SSH or VPN configuration is required.
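What Cordatus runs in the Web Terminal boils down to a single command, and you can also launch it yourself from any shell session on the device (guarded here so it only starts when the CLI is actually present):

```shell
# Launch OpenClaw's interactive TUI, the same command the Web Terminal runs.
if command -v openclaw >/dev/null 2>&1; then
  openclaw tui
else
  echo "openclaw CLI not found on PATH (run the installation step first)"
fi
```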
Changing Models Later
To switch to a different model on a device where OpenClaw is already installed, use the “Run OpenClaw” option again. Cordatus detects the existing installation: if the configured model is still accessible, the Terminal opens directly; if not, the model selection screen is shown again. In this case no reinstallation is performed; only the configuration is updated.
Process Overview
In summary: device selection → preliminary checks (NVM, Node.js, existing OpenClaw) → model selection (container or remote endpoint) → automatic installation and configuration → Web Terminal running the OpenClaw TUI.
Frequently Asked Questions
- Should I append /v1 to the end of the URL? No. Enter the URL without the /v1 suffix; Cordatus appends it automatically. For example, use the format https://server.example.com:8001.
- What if my device appears offline? The device must be online and connected on Cordatus for OpenClaw to run. Verify the device’s network connection and the status of the Cordatus agent service.
- Which models are supported? OpenClaw is compatible with LLMs that support tool calling (function calling). Whether you select a model from the container list or enter a remote endpoint, make sure this capability is supported.
- When is a token required? A token is only needed when the LLM endpoint server requires authentication. For publicly accessible servers, this field can be left empty.

