Is there a way to get the latest version of your software that supports Nemotron?
Sure, I'll submit the update to the store. Should be out soon.
Hi, I've downloaded the latest version and it's working well in chat mode with Nemotron 120B 4.5-bit.
I'm now having some problems setting up server mode. For some reason the server says it's on, but when I go to the URL it suggests with the ports it suggested (local host:0882 and 8081), neither of them works and I can't access it. I'm trying to set it up like you did in your video, where you had it working with OpenClaw and a virtual Mac.
Do you have any suggestions on how I can test the connection? Or do you have a guide on the best way to connect it to OpenClaw?
Thanks
Yes, you can test the connection using curl.
Launch Terminal and type in this command:
curl http://localhost:54321/v1/models
Replace the port number if you've changed it, and replace localhost with your IP address if you're connecting from another computer. Also make sure you have "allow local network connections" enabled in the server settings page.
Run that command, and it will return the models that are on your server.
If that doesn't connect, make sure you have SSL disabled and try again.
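If you'd rather script that check than eyeball the curl output, here's a minimal Python sketch. The sample response below is only illustrative of the OpenAI-compatible /v1/models shape; your server will return whatever models it actually hosts.

```python
import json
import urllib.request

def list_model_ids(base_url: str) -> list[str]:
    """Fetch /v1/models from an OpenAI-compatible server and return the model ids."""
    with urllib.request.urlopen(f"{base_url}/models", timeout=5) as resp:
        payload = json.load(resp)
    return [m["id"] for m in payload.get("data", [])]

# An OpenAI-compatible /v1/models response looks roughly like this:
sample = {
    "object": "list",
    "data": [{"id": "/openai/gpt-oss-20b", "object": "model"}],
}
print([m["id"] for m in sample["data"]])  # → ['/openai/gpt-oss-20b']

# When your server is up, call it for real:
# print(list_model_ids("http://localhost:54321/v1"))
```

If the list prints, the HTTP side is fine and any remaining problem is in the OpenClaw config.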
Once that's working, you can set up an OpenAI-compatible API in OpenClaw using http://localhost:54321/v1, replacing the IP/port with your own.
Here's a copy of my config file. If you go into OpenClaw > Config > Raw, you can compare it with yours to see the differences.
I'm using http://localhost:54321/v1 as the baseURL, and /openai/gpt-oss-20b as the model.
{
meta: {
lastTouchedVersion: '2026.3.2',
lastTouchedAt: '2026-03-09T00:19:36.550Z',
},
wizard: {
lastRunAt: '2026-03-03T06:18:43.150Z',
lastRunVersion: '2026.3.2',
lastRunCommand: 'onboard',
lastRunMode: 'local',
},
models: {
mode: 'merge',
providers: {
inferencer: {
baseUrl: 'http://192.168.1.107:54321/v1',
apiKey: '__OPENCLAW_REDACTED__',
api: 'openai-completions',
models: [
{
id: '/openai/gpt-oss-20b',
name: '/openai/gpt-oss-20b (Custom Provider)',
reasoning: false,
input: [
'text',
],
cost: {
input: 0,
output: 0,
cacheRead: 0,
cacheWrite: 0,
},
contextWindow: 16000,
maxTokens: 4096,
},
],
},
'custom-1': {
baseUrl: 'http://localhost:54321/v1',
apiKey: '__OPENCLAW_REDACTED__',
models: [
{
id: '/openai/gpt-oss-20b',
name: '/openai/gpt-oss-20b',
reasoning: false,
input: [
'text',
],
cost: {
input: 0,
output: 0,
cacheRead: 0,
cacheWrite: 0,
},
contextWindow: 200000,
maxTokens: 8192,
},
],
},
},
},
agents: {
defaults: {
model: {
primary: 'inferencer//openai/gpt-oss-20b',
},
models: {
'inferencer//openai/gpt-oss-20b': {},
},
workspace: '/Users/x/.openclaw/workspace',
compaction: {
mode: 'safeguard',
},
maxConcurrent: 4,
subagents: {
maxConcurrent: 8,
},
},
},
tools: {
profile: 'messaging',
},
messages: {
ackReactionScope: 'group-mentions',
},
commands: {
native: 'auto',
nativeSkills: 'auto',
restart: true,
ownerDisplay: 'raw',
},
session: {
dmScope: 'per-channel-peer',
},
gateway: {
port: 18789,
mode: 'local',
bind: 'loopback',
auth: {
mode: 'token',
token: '__OPENCLAW_REDACTED__',
},
tailscale: {
mode: 'off',
resetOnExit: false,
},
nodes: {
denyCommands: [
'camera.snap',
'camera.clip',
'screen.record',
'contacts.add',
'calendar.add',
'reminders.add',
'sms.send',
],
},
},
skills: {
entries: {
'multi-search-engine': {
enabled: true,
},
},
},
}
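Once the endpoint answers, a quick way to sanity-check the chat side without opening OpenClaw is a plain OpenAI-style POST to /v1/chat/completions. This is just a sketch: it assumes the server is on localhost:54321 with no API key enforced, and reuses the model id from the config above.

```python
import json
import urllib.request

BASE_URL = "http://localhost:54321/v1"  # match your server's host/port
MODEL = "/openai/gpt-oss-20b"           # same id as in the config above

def build_chat_request(base_url: str, model: str, prompt: str) -> urllib.request.Request:
    """Build an OpenAI-compatible /v1/chat/completions POST request."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    return urllib.request.Request(
        f"{base_url}/chat/completions",
        data=body,
        headers={"Content-Type": "application/json"},
    )

req = build_chat_request(BASE_URL, MODEL, "Say hello in five words.")
print(req.full_url)  # → http://localhost:54321/v1/chat/completions

# Uncomment to actually hit the server:
# with urllib.request.urlopen(req, timeout=30) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```

If this returns a completion, OpenClaw should connect with the same baseURL and model id.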
Bonus
- In the server settings page if you enable Override API Model Selection, it will use the model you have selected inside Inferencer - rather than always having to modify it in OpenClaw.
Let me know how you get on.