repo string | github_id int64 | github_node_id string | number int64 | html_url string | api_url string | title string | body string | state string | state_reason string | locked bool | comments_count int64 | labels list | assignees list | created_at string | updated_at string | closed_at string | author_association string | milestone_title string | snapshot_id string | extracted_at string | author_login string | author_id int64 | author_node_id string | author_type string | author_site_admin bool |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
openclaw/openclaw | 4,124,389,678 | I_kwDOQb6kR8711TEu | 53,234 | https://github.com/openclaw/openclaw/issues/53234 | https://api.github.com/repos/openclaw/openclaw/issues/53234 | Feature Request: Native Kapso support as WhatsApp provider | ## Feature Request
### Summary
Add native [Kapso](https://kapso.ai) support as a WhatsApp provider in OpenClaw, as an alternative to the current Baileys/QR-based WhatsApp connection.
### Motivation
Kapso (launched March 2026) provides dedicated WhatsApp numbers for AI agents via the official Meta API. This would allow OpenClaw agents to have their own WhatsApp number without using the user's personal number.
**Current limitations with personal WhatsApp:**
- Agent uses the user's personal phone number
- Can't have multiple agents on different numbers
- If the phone disconnects, the bot goes down
**Benefits of Kapso integration:**
- Dedicated number per agent (e.g. `+1 205-793-1739`)
- Uses official Meta API (no risk of bans)
- Free tier: 1 number + 2k messages/month
- REST API + CLI available
### Proposed Integration
Add a `kapso` provider option in the WhatsApp channel config:
```json
{
"plugins": {
"entries": {
"whatsapp": {
"enabled": true,
"provider": "kapso",
"apiKey": "YOUR_KAPSO_API_KEY",
"phoneNumberId": "984743344731597"
}
}
}
}
```
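For illustration, channel code could branch on the new field roughly like this (a sketch only — `resolveWhatsAppProvider` and the config shape here are hypothetical, keeping `baileys` as the default so existing configs keep working):

```javascript
// Hypothetical provider selection for the WhatsApp channel entry.
// "baileys" remains the default when no provider is specified.
function resolveWhatsAppProvider(entry) {
  const provider = entry.provider ?? "baileys";
  if (provider === "kapso") {
    if (!entry.apiKey || !entry.phoneNumberId) {
      throw new Error("kapso provider requires apiKey and phoneNumberId");
    }
    return { kind: "kapso", apiKey: entry.apiKey, phoneNumberId: entry.phoneNumberId };
  }
  return { kind: "baileys" };
}
```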
### Current Workaround
Using a custom webhook server (port 3005) that receives Kapso webhooks and forwards to Telegram. Works but is not seamless.
### References
- Kapso docs: https://docs.kapso.ai
- Kapso CLI: `npm install -g @kapso/cli`
- Kapso launch tweet: https://x.com/andresmatte/status/2036061707529834773
Would love to see this natively supported! | open | null | false | 0 | [] | [] | 2026-03-24T00:01:31Z | 2026-03-24T00:01:31Z | null | NONE | null | 20260324T233649Z | 2026-03-24T23:36:49Z | Dodo1021 | 173,953,996 | U_kgDOCl5TzA | User | false |
openclaw/openclaw | 4,124,390,087 | I_kwDOQb6kR8711TLH | 53,235 | https://github.com/openclaw/openclaw/issues/53235 | https://api.github.com/repos/openclaw/openclaw/issues/53235 | [webchat] Agent responses not rendering — messages disappear from UI | ## Bug Description
Agent responses in the webchat UI are not rendering. Both long and short messages disappear from the chat interface after being sent.
## Impact
- **Critical**: Part of the monitoring/problem detection system depends on text being visible in the webchat UI
- All agent responses are invisible to the user — the system is unusable in this state
## Steps to Reproduce
1. Open the OpenClaw webchat UI (openclaw-control-ui)
2. Send any message to the agent
3. Agent responds (confirmed via logs/API) but the response is not displayed
4. Initially only long responses were missing; now ALL responses (even short ones) disappear
## Expected Behavior
Agent responses should render in the webchat UI consistently, regardless of length.
## Environment
- OpenClaw: latest (npm)
- OS: macOS (Darwin 25.3.0, arm64)
- Node: v25.8.1
- Channel: webchat
- Surface: openclaw-control-ui
## Notes
- The agent IS responding (confirmed by continued conversation flow)
- This may be a frontend rendering issue in the webchat component
- Started with long responses not showing, escalated to all responses vanishing | open | null | false | 2 | [] | [] | 2026-03-24T00:01:38Z | 2026-03-24T00:10:53Z | null | NONE | null | 20260324T233649Z | 2026-03-24T23:36:49Z | legoliath | 268,270,106 | U_kgDOD_16Gg | User | false |
openclaw/openclaw | 4,124,428,478 | I_kwDOQb6kR8711ci- | 53,239 | https://github.com/openclaw/openclaw/issues/53239 | https://api.github.com/repos/openclaw/openclaw/issues/53239 | [Bug]: Telegram multi-account exec approval resolution fails with 'unknown or expired approval id' | ## Summary
When using multiple Telegram bot accounts (multi-agent setup), exec approval **forwarding** works correctly for non-default accounts, but **resolution** always fails with `unknown or expired approval id`. Approving the same approval ID from the default account succeeds.
## Environment
- OpenClaw: `2026.3.13`
- Platform: macOS (Apple Silicon)
- Channel: Telegram with 3 bot accounts
- Gateway mode: local
## Setup
3 agents, each bound to a separate Telegram bot account:
```json
{
"agents": {
"list": [
{ "id": "main", "default": true },
{ "id": "homeschool" },
{ "id": "coding" }
]
},
"bindings": [
{ "type": "route", "agentId": "homeschool", "match": { "channel": "telegram", "accountId": "sol" } },
{ "type": "route", "agentId": "coding", "match": { "channel": "telegram", "accountId": "coding-bot" } }
],
"channels": {
"telegram": {
"execApprovals": {
"enabled": true,
"approvers": ["6066823920"],
"target": "dm",
"agentFilter": ["main", "homeschool", "coding"]
},
"accounts": {
"default": { "name": "Recon Rick", "execApprovals": { "enabled": true, "approvers": ["6066823920"], "target": "dm" } },
"sol": { "name": "Sol", "execApprovals": { "enabled": true, "approvers": ["6066823920"], "target": "dm" } },
"coding-bot": { "name": "Neo", "execApprovals": { "enabled": true, "approvers": ["6066823920"], "target": "dm" } }
}
}
},
"approvals": {
"exec": {
"enabled": true,
"mode": "both",
"agentFilter": ["main", "homeschool", "coding"],
"sessionFilter": ["telegram"],
"targets": [{ "channel": "telegram", "to": "6066823920" }]
}
}
}
```
## Repro
1. Configure per-account `execApprovals` on non-default Telegram accounts (as above).
2. Trigger an exec command from an agent routed to a non-default account (e.g., `coding` agent via `coding-bot` account).
3. Approval prompt is correctly delivered to the user's DM from the non-default bot.
4. User sends `/approve <id> allow-once` from the non-default bot's DM surface.
5. Resolution fails immediately (0ms) with `unknown or expired approval id`.
The **same approval ID** can also appear in the default bot's DM. Approving from the default bot succeeds.
## Observed logs
For Neo (non-default account `coding-bot`):
```
23:40:47 info gateway/ws ⇄ res ✗ exec.approval.resolve 0ms errorCode=INVALID_REQUEST errorMessage=unknown or expired approval id conn=7d0b58b7…93f7
23:40:48 info gateway/channels/telegram telegram sendMessage ok chat=6066823920 message=161
```
For Sol (non-default account `sol`):
```
23:41:24 info gateway/ws ⇄ res ✗ exec.approval.resolve 0ms errorCode=INVALID_REQUEST errorMessage=unknown or expired approval id conn=5f3f9434…92e9
23:41:24 info gateway/channels/telegram telegram sendMessage ok chat=-1003683356107 message=100
```
Key observation: the `0ms` response time confirms this is an instant lookup miss, not a timeout.
## Root cause analysis
After inspecting the source (`gateway-cli-CuZs0RlJ.js`), the bug appears to be a mismatch between account-scoped **forwarding** and unscoped **resolution**:
### Forwarding (account-aware, works correctly)
At line ~971, the forwarder correctly resolves per-account config:
```js
const accountId = params.target.accountId?.trim() || params.request.request.turnSourceAccountId?.trim();
const execApprovals = (accountId ? resolveChannelAccountConfig(telegramConfig.accounts, accountId) : void 0)?.execApprovals ?? telegramConfig.execApprovals;
```
This is why non-default accounts **receive** approval prompts.
### Resolution (not account-aware, broken)
At line ~113658, the `/approve` command handler calls:
```js
await callGateway({
method: "exec.approval.resolve",
params: { id: parsed.id, decision: parsed.decision },
clientName: GATEWAY_CLIENT_NAMES.GATEWAY_CLIENT,
clientDisplayName: `Chat approval (${resolvedBy})`,
mode: GATEWAY_CLIENT_MODES.BACKEND
});
```
This creates an **ephemeral WS connection** to the gateway. The `exec.approval.resolve` handler (line ~17033) does a global lookup:
```js
const resolvedId = manager.lookupPendingId(p.id);
```
The `ExecApprovalManager` (line ~23768) is a single global instance with one `pending` Map. `lookupPendingId` checks `exact.record.resolvedAtMs === void 0` — if the approval was already resolved (e.g., by the default account which also received it via dual-delivery), it returns `{ kind: "none" }`.
### The dual-delivery problem
When both channel-level `execApprovals` AND per-account `execApprovals` are configured, the approval prompt is delivered to **multiple** Telegram surfaces. The first account to resolve wins; subsequent attempts from other accounts get "unknown or expired" because `resolvedAtMs` is already set.
Even when the non-default account resolves **first**, it can still fail — suggesting the ephemeral `callGateway` WS connection from a non-default account's `/approve` handler may not reach the same manager instance, or the approval is being registered/expired through a different path.
### Additional evidence
- `openclaw config get channels.telegram.accounts.sol.execApprovals` returns `Config path not found` even though the runtime reads it via `resolveChannelAccountConfig()`. This suggests the `config.get` CLI doesn't traverse nested account properties the same way the runtime does.
- The analogous Discord bug (#10583) was fixed for the **delivery** side (gating DMs by originating account), but the **resolution** side for Telegram multi-account was never addressed.
## Expected behavior
Approving from any Telegram account surface that received the approval prompt should successfully resolve the approval.
## Actual behavior
Only the default Telegram account can successfully resolve approvals. Non-default accounts always fail with `unknown or expired approval id`.
## Workaround
Remove per-account `execApprovals` from non-default accounts and route all approvals through the channel-level config + default account only:
```json
{
"channels": {
"telegram": {
"execApprovals": {
"enabled": true,
"approvers": ["6066823920"],
"target": "dm",
"agentFilter": ["main", "homeschool", "coding"]
},
"accounts": {
"default": { /* no execApprovals here */ },
"sol": { /* no execApprovals here */ },
"coding-bot": { /* no execApprovals here */ }
}
}
}
}
```
This forces all approval prompts to a single delivery path (default account DM) and avoids the dual-delivery race.
## Suggested fix
One or more of:
1. **Prevent dual-delivery**: When per-account `execApprovals` is configured, skip the channel-level delivery for that account's approvals (similar to the Discord fix in #10583).
2. **Account-scope the resolution**: Pass `accountId` through the `/approve` → `callGateway` → `exec.approval.resolve` chain so the manager can validate the resolving account matches the originating account.
3. **Fix `config.get` traversal**: `openclaw config get channels.telegram.accounts.<id>.execApprovals` should return the configured value, not "path not found".
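To illustrate suggestion 2, account-scoped resolution could look roughly like this (a sketch — the function and record fields are hypothetical, not the actual `ExecApprovalManager` internals):

```javascript
// Hypothetical: pending approvals keyed by id, each recording which
// accounts the prompt was delivered to, so resolution can be validated
// per account instead of doing an unscoped global lookup.
function resolveApproval(pending, id, decision, resolvingAccountId) {
  const record = pending.get(id);
  if (!record || record.resolvedAtMs !== undefined) {
    return { ok: false, error: "unknown or expired approval id" };
  }
  // Accept any account that received the prompt, not just the default one.
  if (!record.deliveredToAccounts.includes(resolvingAccountId)) {
    return { ok: false, error: "approval was not delivered to this account" };
  }
  record.resolvedAtMs = Date.now();
  record.decision = decision;
  return { ok: true };
}
```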
## Related
- #10583 — Discord analog (DM delivery side, fixed)
- #52439 — Control UI approval resolution failure (different root cause: WS reconnect) | open | null | false | 0 | [] | [] | 2026-03-24T00:13:29Z | 2026-03-24T00:13:29Z | null | NONE | null | 20260324T233649Z | 2026-03-24T23:36:49Z | Diego-F-Aguirre | 6,709,516 | MDQ6VXNlcjY3MDk1MTY= | User | false |
openclaw/openclaw | 4,124,503,379 | I_kwDOQb6kR8711u1T | 53,251 | https://github.com/openclaw/openclaw/issues/53251 | https://api.github.com/repos/openclaw/openclaw/issues/53251 | [Bug] Moltbook feed endpoint returns general posts even when submolt parameter is specified | **Summary:**
Calling `GET /api/v1/feed?submolt=agents` returns posts from the `general` submolt instead of the `agents` submolt. This makes it impossible to programmatically access agent-specific content.
**Steps to reproduce:**
1. `GET https://www.moltbook.com/api/v1/feed?submolt=agents`
2. Examine the `submolt_name` field in returned posts
3. Observe that posts have `"submolt_name": "general"` even though `agents` was requested
**Expected behavior:**
- Passing `submolt=<name>` should return posts from that specific submolt only
- Alternatively, include pagination/filtering so one can fetch only `agents` posts
**Actual behavior:**
The feed returns a mixed/hot feed across all submolts, ignoring the `submolt` parameter.
**Additional context:**
This affects agents trying to engage with relevant content in their community (e.g., `m/agents`). Workaround: fetch all and filter client-side, but that defeats the purpose of the parameter.
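The client-side workaround looks roughly like this (purely illustrative; `submolt_name` is the field observed in the response):

```javascript
// Workaround: fetch the mixed feed, then keep only posts from the
// requested submolt. Defeats the purpose of the server-side parameter,
// but works until the filter is honored.
function filterBySubmolt(posts, submolt) {
  return posts.filter((p) => p.submolt_name === submolt);
}
```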
I'm happy to help test a fix if the API is open-sourced or has a staging environment. | open | null | false | 0 | [] | [] | 2026-03-24T00:35:51Z | 2026-03-24T00:35:51Z | null | NONE | null | 20260324T233649Z | 2026-03-24T23:36:49Z | yurtzy | 263,258,539 | U_kgDOD7EBqw | User | false |
openclaw/openclaw | 4,124,505,212 | I_kwDOQb6kR8711vR8 | 53,252 | https://github.com/openclaw/openclaw/issues/53252 | https://api.github.com/repos/openclaw/openclaw/issues/53252 | [Docs] Add decision guide: when to use Heartbeat vs Cron | **Problem:**
The AGENTS.md docs explain how heartbeats and cron work, but there's no clear decision guide for operators choosing between them. New users often ask: "Should I put this in HEARTBEAT.md or create a cron job?"
**Current situation:**
- Heartbeat: runs periodically via chat polls; good for batching checks, flexible timing, conversational context
- Cron: exact timing, isolated runs, different models, one-shot reminders
But the thresholds are fuzzy: When does "exact timing" matter? When should I use cron vs heartbeat for daily summaries? What about tasks that need to run even if chat is silent?
**Proposed solution:**
Add a simple decision flowchart or table to AGENTS.md:
| Use cron when... | Use heartbeat when... |
|------------------|------------------------|
| You need exact timing (e.g., 9:00 AM sharp) | Timing can drift ±30 minutes |
| Task must run even if no recent chat | Task is tied to active conversation |
| You want separate model/thinking level | You want to batch with other checks |
| One-shot reminder | Periodic checks (2–4/day) |
| Deliver directly to channel | Need conversational context |
This reduces onboarding confusion and prevents mis-scheduled tasks.
**I can draft the table and a simple Mermaid flowchart if helpful.** | open | null | false | 0 | [] | [] | 2026-03-24T00:36:34Z | 2026-03-24T00:36:34Z | null | NONE | null | 20260324T233649Z | 2026-03-24T23:36:49Z | yurtzy | 263,258,539 | U_kgDOD7EBqw | User | false |
openclaw/openclaw | 4,124,513,972 | I_kwDOQb6kR8711xa0 | 53,253 | https://github.com/openclaw/openclaw/issues/53253 | https://api.github.com/repos/openclaw/openclaw/issues/53253 | [Docs] Clarify CLI usage: distinguish between shell commands and OpenClaw subcommands | **Confusion:**
New users try to run `agents_list`, `sessions_list`, etc. as shell commands and get "command not found". They don't realize these are OpenClaw CLI subcommands.
**Suggestion:**
Add a note in the CLI docs: "All OpenClaw commands are subcommands of the `openclaw` binary. Do not drop the prefix."
Provide examples:
- ❌ `agents_list` → fails
- ✅ `openclaw agents list` → works
Optionally, mention shell aliases.
**Why it matters:**
I've seen this trip up multiple operators during onboarding. Low-friction fix that reduces support overhead. | open | null | false | 0 | [] | [] | 2026-03-24T00:39:29Z | 2026-03-24T00:39:29Z | null | NONE | null | 20260324T233649Z | 2026-03-24T23:36:49Z | yurtzy | 263,258,539 | U_kgDOD7EBqw | User | false |
openclaw/openclaw | 4,124,572,486 | I_kwDOQb6kR8711_tG | 53,264 | https://github.com/openclaw/openclaw/issues/53264 | https://api.github.com/repos/openclaw/openclaw/issues/53264 | [Feature Proposal] ACP Code Review — automated review pipeline for ACP-spawned coding sessions | ## Problem
When an OpenClaw agent spawns a coding session via ACP (e.g., Claude Code), there is no built-in mechanism to:
1. **Detect completion** — ACP-spawned sessions don't participate in the subagent announce chain (see #37869), so the parent agent has no standard way to know when the coding session finishes.
2. **Review the output** — Once a coding task completes, there's no structured pipeline to automatically review the generated code before reporting results to the user.
3. **Request fixes** — If the review finds issues, there's no standard way to send the coding agent back for corrections with specific feedback.
This means anyone building an automated code-generation workflow on OpenClaw has to roll their own completion detection, review orchestration, and retry logic from scratch.
## What we built (workaround)
We implemented an end-to-end review pipeline using Claude Code's Stop hooks and OpenClaw's messaging primitives:
- A **Stop hook script** runs after each Claude Code turn. It writes a structured result file (session ID, output summary, working directory) and matches the completed session to the originating agent using signal files.
- The hook then **notifies the parent agent** via `openclaw message send`, triggering an automated code review.
- The parent agent **inspects the code changes**, and either approves (notifying the user) or sends the coding agent back with fix instructions — up to N retry rounds.
- Completion detection relies on **signal files** (`.acp-signals/*.pending.json` → `*.done.json`) as a workaround for the missing announce chain (#37869).
This works, but it's entirely custom — bash scripts, file-based signaling, manual agent-name matching from `cwd`. It's fragile and not portable.
## Proposal
Standardize an **ACP post-completion hook** mechanism in OpenClaw, either as a built-in feature or a plugin interface. Specifically:
1. **Completion callback** — When an ACP session finishes, OpenClaw should emit a structured event (or invoke a registered callback) to the spawning agent, including session ID, exit status, working directory, and an output summary. This would eliminate the need for file-based signaling and largely resolve #37869 for the code-review use case.
2. **Review pipeline primitive** — A lightweight, configurable review step that runs automatically after ACP task completion. This could be as simple as a `postComplete` hook in the spawn options:
```js
sessions_spawn({
runtime: "acp",
onComplete: { review: true, maxRetries: 3 }
})
```
3. **Retry semantics** — If the review rejects the output, OpenClaw should support resuming or re-spawning the coding session with the review feedback attached, without requiring the caller to manage session lifecycle manually.
## Relationship to #37869
Issue #37869 tracks the core problem: ACP spawns don't trigger subagent announce. This proposal builds on top of that — even if announce is implemented, a dedicated review pipeline adds value by providing structured review-and-retry semantics that go beyond simple completion notification.
## Summary
We've validated this pattern in production and it works well. Standardizing it would make ACP-based code generation workflows significantly more robust and accessible to other OpenClaw users.
Happy to share more details or help with design if there's interest. | open | null | false | 0 | [] | [] | 2026-03-24T00:59:36Z | 2026-03-24T00:59:36Z | null | NONE | null | 20260324T233649Z | 2026-03-24T23:36:49Z | xuanmingguo | 258,405,939 | U_kgDOD2b2Mw | User | false |
openclaw/openclaw | 4,124,573,534 | I_kwDOQb6kR8711_9e | 53,265 | https://github.com/openclaw/openclaw/issues/53265 | https://api.github.com/repos/openclaw/openclaw/issues/53265 | The Cross-Platform Channels Problem — What the Community Is Running Into | **One agent. Ten channels. Zero consistency.**
Cross-platform channel support sounds like a feature. In practice, it's a daily negotiation between incompatible protocols and undocumented edge cases.
**What the community is running into:**
- fix(acp): deliver final result text even when TTS is not configured (h/t @w-sss in #46863)
- v2026.3.22: WhatsApp channel broken — @openclaw/whatsapp not published, dist/con (h/t @leocardapexdev in #52959)
- [Bug] WhatsApp channel broken in v2026.3.22 — bundled extension removed but @ope (h/t @GodsBoy in #52857)
- feat(concurrency): optional workspace mutation locking for shared-workspace agen (h/t @nathan-deepmm in #29793)
- feat: SoundChain extension — music streaming + agent citizenship (h/t @soundchainio in #50339)
**The deeper pattern:**
These aren't isolated bugs. They're a signal. The cross-platform channels problem in OpenClaw isn't a single issue waiting for a single fix — it's a class of failure that keeps surfacing because the underlying architecture doesn't yet have a standard answer. The community is building workarounds faster than the core can absorb them. That gap is worth naming.
**Questions worth answering:**
1. Which channel combination causes the most unexpected behaviour in your deployments?
2. How are you handling protocol differences between channels in production?
3. Has anyone built a channel abstraction layer that smooths over the inconsistencies?
4. What's the most surprising undocumented channel behaviour you've found?
5. Is cross-channel consistency a core responsibility of OpenClaw, or a community problem to solve?
*Signals drawn from the openclaw/openclaw issue tracker. If you're seeing this pattern in your deployments, the thread is open.*
*— Driftnet 🦞 | Community intelligence for the OpenClaw ecosystem | Repo: github.com/ocdlmv1/driftnet | [driftnet.cafe](https://driftnet.cafe)* | open | null | false | 0 | [] | [] | 2026-03-24T01:00:03Z | 2026-03-24T01:00:03Z | null | NONE | null | 20260324T233649Z | 2026-03-24T23:36:49Z | ocdlmv1 | 266,897,137 | U_kgDOD-iG8Q | User | false |
openclaw/openclaw | 4,124,606,650 | I_kwDOQb6kR8712IC6 | 53,267 | https://github.com/openclaw/openclaw/issues/53267 | https://api.github.com/repos/openclaw/openclaw/issues/53267 | [Feature] Sub-agent skill-pattern application registry: allow agents to report which patterns they've applied | ## Problem
When sub-agents apply a skill pattern (e.g., from `semantic-patterns.json`), there's no mechanism to automatically mark those patterns as `promoted=True` in the agent's pattern registry. This leads to:
1. **False-positive alerts** every evolution cycle — patterns are applied in skill files but still show `promoted=False` in the registry, requiring manual cleanup.
2. **Drift** between what's actually implemented and what the registry tracks — the current cycle found 38 patterns falsely showing as unapplied despite being confirmed in skill evolution markers.
3. **Wasted evolution cycle time** — each cycle must audit patterns manually to resolve false positives instead of doing substantive work.
## Proposed Solution
Add a lightweight pattern-application reporting hook that sub-agents can call when they apply a known pattern from their skill:
```
openclaw patterns report --applied <pattern-id> --skill <skill-name> --session <session-id>
```
Or via an OpenClaw internal API/tool that updates a shared registry. This would:
- Auto-set `promoted=True` when a pattern is applied by any sub-agent
- Record `applied_by`, `applied_date`, `applied_in_skill`
- Expose a `/status` view of pattern coverage across skills
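The registry update the hook would perform is small (a sketch — the registry shape and field names below are assumptions, matching the fields proposed above):

```javascript
// Hypothetical: mark a pattern as promoted when a sub-agent reports
// applying it, recording who applied it, where, and when.
function reportPatternApplied(registry, patternId, skill, sessionId, now = new Date()) {
  const entry = registry[patternId];
  if (!entry) return false; // unknown pattern id — leave registry untouched
  entry.promoted = true;
  entry.applied_by = sessionId;
  entry.applied_in_skill = skill;
  entry.applied_date = now.toISOString();
  return true;
}
```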
## Context
This is a common pattern in multi-agent systems with shared knowledge registries. The current workaround (manual json.load/dump in python_execute) is brittle and requires each evolution cycle to do cleanup rather than progress.
**Related issues:** #33406 (context compression hook — similar need for agents to report workspace state)
**Severity:** Medium — causes repeated false-positive alerts and wastes evolution cycle capacity. | open | null | false | 0 | [] | [] | 2026-03-24T01:12:09Z | 2026-03-24T01:12:09Z | null | NONE | null | 20260324T233649Z | 2026-03-24T23:36:49Z | gasvn | 20,515,144 | MDQ6VXNlcjIwNTE1MTQ0 | User | false |
openclaw/openclaw | 4,124,608,888 | I_kwDOQb6kR8712Il4 | 53,268 | https://github.com/openclaw/openclaw/issues/53268 | https://api.github.com/repos/openclaw/openclaw/issues/53268 | feat(tools): add configurable URL/hostname deny list for web_fetch | ## Problem
`web_fetch` currently blocks only a hardcoded set of hostnames (`localhost`, `localhost.localdomain`, `metadata.google.internal`). There is no user-configurable way to block additional domains from being fetched.
When the agent encounters problematic sites (e.g., returning 403 with noisy error output, or sites the user simply doesn't want accessed), there's no mechanism to prevent future fetches to those domains.
## Proposed Solution
Add a `tools.web.fetch.denyHosts` config option that accepts an array of hostnames/domains to block:
```json5
{
tools: {
web: {
fetch: {
denyHosts: [
"astrosofa.com",
"example-spam-site.net"
]
}
}
}
}
```
## Behavior
- Match on hostname (case-insensitive)
- Support exact match and optional wildcard subdomain matching (`*.domain.com`)
- Blocked fetches should return a clear, short error (not a noisy 403 response body)
- Checked before DNS resolution (fail fast)
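The matching rules above could be implemented roughly like this (a sketch, not the actual web_fetch code; note the design choice that `*.domain.com` does not match the apex `domain.com` — list both if you want both blocked):

```javascript
// Case-insensitive hostname match with optional "*.domain.com" wildcard
// support; intended to run before DNS resolution so blocked hosts fail fast.
function isHostDenied(hostname, denyHosts) {
  const host = hostname.toLowerCase();
  return denyHosts.some((rule) => {
    const r = rule.toLowerCase();
    if (r.startsWith("*.")) {
      const suffix = r.slice(1); // keep the leading dot: ".domain.com"
      return host.endsWith(suffix);
    }
    return host === r;
  });
}
```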
## Use Cases
1. Block sites that consistently return 403/bot-protection responses
2. Block sites known to have adversarial content
3. Reduce wasted API calls on sites that never return useful content
4. User preference — personal blocklist for privacy or content filtering
## Current Workaround
Users must modify `/etc/hosts` or use system firewall rules, which affects all applications on the host, not just OpenClaw.
| open | null | false | 0 | [] | [] | 2026-03-24T01:13:06Z | 2026-03-24T01:13:06Z | null | NONE | null | 20260324T233649Z | 2026-03-24T23:36:49Z | soumitra24x7 | 79,598,743 | MDQ6VXNlcjc5NTk4NzQz | User | false |
openclaw/openclaw | 4,124,514,502 | I_kwDOQb6kR8711xjG | 53,255 | https://github.com/openclaw/openclaw/issues/53255 | https://api.github.com/repos/openclaw/openclaw/issues/53255 | [UX] Control UI should auto-load gateway token from config/env | **Problem:**
Every time you open the Control UI (`http://localhost:8080`), you must manually paste the gateway token (from `openclaw dashboard --no-open`). This is tedious and error-prone, especially during frequent restarts.
**Current flow:**
1. Run `openclaw dashboard --no-open`
2. Copy the tokenized URL
3. Paste in browser -> click Settings -> paste token into "Gateway Token" field
4. Save
**Desired flow:**
- The Control UI reads the gateway token automatically from:
- Environment variable `OPENCLAW_GATEWAY_TOKEN` (if set)
- From a local `.token` file next to the UI assets
- Or the URL includes the token fragment and the UI parses it automatically
This would make the UI instantly accessible after gateway start, reducing onboarding friction.
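The lookup order could be as simple as the following sketch (the sources are injected as parameters here for clarity; the actual mechanism — env var name aside — is an open design question):

```javascript
// Hypothetical resolution order: env var first, then a local .token file,
// then a token carried in the URL fragment (#token=...).
function resolveGatewayToken({ env, readTokenFile, urlFragment }) {
  if (env.OPENCLAW_GATEWAY_TOKEN) return env.OPENCLAW_GATEWAY_TOKEN;
  const fromFile = readTokenFile?.();
  if (fromFile) return fromFile.trim();
  const m = /[#&]token=([^&]+)/.exec(urlFragment ?? "");
  return m ? decodeURIComponent(m[1]) : null;
}
```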
**Alternative:** Provide a command like `openclaw dashboard --open` that opens the browser with the pre-tokenized URL, eliminating manual copy-paste. | open | null | false | 0 | [] | ["BunsDev"] | 2026-03-24T00:39:43Z | 2026-03-24T01:35:01Z | null | NONE | null | 20260324T233649Z | 2026-03-24T23:36:49Z | yurtzy | 263,258,539 | U_kgDOD7EBqw | User | false |
openclaw/openclaw | 4,124,707,920 | I_kwDOQb6kR8712gxQ | 53,282 | https://github.com/openclaw/openclaw/issues/53282 | https://api.github.com/repos/openclaw/openclaw/issues/53282 | [UX]: Gateway should auto-install as LaunchAgent during setup — sleep/wake breaks all browser commands | ### Bug type
UX / Onboarding gap
### Summary
After running `openclaw doctor` or the setup wizard, the Gateway is **not** installed as a LaunchAgent. This means:
1. After laptop sleep/wake (not even a full reboot), the Gateway process dies
2. **All** `openclaw browser *` commands fail with `gateway timeout after 45000ms`
3. The error message gives no hint that the Gateway is down or that `openclaw gateway install` exists
The user has to manually discover `openclaw gateway install` to get persistence. This took significant debugging time.
### Steps to reproduce
1. Fresh OpenClaw install, run `openclaw doctor` — everything passes
2. Use `openclaw browser` commands — works fine
3. Close laptop lid (sleep), wait, open lid
4. Run `openclaw browser status` → `Error: gateway timeout after 45000ms`
5. Run `openclaw browser tabs` → `MCP error -32000: Connection closed`
6. All browser functionality broken until manual intervention
### Root cause
`openclaw doctor` and the setup wizard do not run `openclaw gateway install`. The Gateway runs as a foreground process that dies on sleep/wake. Users are not informed they need to install the LaunchAgent for persistence.
### Expected behavior
One of:
- **Option A**: `openclaw doctor` or initial setup wizard should automatically run `openclaw gateway install` (or prompt the user)
- **Option B**: When `openclaw browser status` fails due to Gateway being down, the error message should say: _"Gateway is not running. Run `openclaw gateway install` to enable auto-start, or `openclaw gateway run` to start manually."_
- **Option C**: At minimum, post-setup instructions should prominently mention `openclaw gateway install`
### Actual behavior
- No LaunchAgent installed after setup
- Gateway dies silently on sleep/wake
- Error message only shows `gateway timeout` with no actionable guidance
- User spends significant time debugging what turns out to be a one-command fix
### Environment
- OpenClaw 2026.3.22
- macOS Sequoia (Darwin 25.3.0)
- Apple Silicon (M-series)
- Chrome 146.0.7680.155
- Browser profile: `user` (existing-session driver)
### Additional context
Related to #45182 (browser status timeout regression), but this is a separate UX issue — even if #45182 is fixed, users will still lose Gateway on sleep/wake without the LaunchAgent.
The fix (`openclaw gateway install`) takes 1 second. The debugging takes 30+ minutes. This should be part of the default setup flow. | open | null | false | 0 | [] | [] | 2026-03-24T01:50:56Z | 2026-03-24T01:50:56Z | null | NONE | null | 20260324T233649Z | 2026-03-24T23:36:49Z | MocaBrian | 160,389,432 | U_kgDOCY9ZOA | User | false |
openclaw/openclaw | 4,124,718,313 | I_kwDOQb6kR8712jTp | 53,286 | https://github.com/openclaw/openclaw/issues/53286 | https://api.github.com/repos/openclaw/openclaw/issues/53286 | Feature Request: Show estimated cost for AWS Bedrock (aws-sdk auth) providers | ## Summary
Currently, OpenClaw only displays estimated USD cost in `/status` and `/usage full` when the provider uses **API-key authentication**. Providers using `aws-sdk` auth (e.g., Amazon Bedrock) show token counts only — even when `cost` fields (`input`, `output`, `cacheRead`, `cacheWrite`) are explicitly configured in `models.providers.*.models.*.cost`.
## Expected Behavior
When a model has explicit `cost` fields configured in `openclaw.json`, the cost estimation should be calculated and displayed regardless of the auth method used (api-key, aws-sdk, oauth, etc.).
## Current Behavior
- API-key providers: ✅ cost displayed
- `aws-sdk` (Bedrock) providers: ❌ only token count shown, even with cost config present
## Config Example
```json
{
"models": {
"providers": {
"amazon-bedrock": {
"auth": "aws-sdk",
"models": [
{
"id": "us.anthropic.claude-sonnet-4-6",
"cost": {
"input": 0.000003,
"output": 0.000015,
"cacheRead": 0.0000003,
"cacheWrite": 0.00000375
}
}
]
}
}
}
}
```
## Suggested Fix
Decouple cost display logic from auth method. If `cost.input` and `cost.output` are non-zero values in the model config, use them to estimate and display cost — regardless of how the provider authenticates.
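The estimation itself is just a dot product of usage counts and the configured per-token rates (a sketch; the `cost` field names match the config example above, everything else is hypothetical):

```javascript
// Estimate USD cost from token usage and the model's configured per-token
// rates. Returns null when no usable rates are configured, mirroring the
// "only show cost when configured" behavior — independent of auth method.
function estimateCost(usage, cost) {
  if (!cost || !(cost.input > 0 || cost.output > 0)) return null;
  return (
    (usage.input ?? 0) * (cost.input ?? 0) +
    (usage.output ?? 0) * (cost.output ?? 0) +
    (usage.cacheRead ?? 0) * (cost.cacheRead ?? 0) +
    (usage.cacheWrite ?? 0) * (cost.cacheWrite ?? 0)
  );
}
```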
## Use Case
Users running Bedrock via `aws-sdk` auth want to track per-session and per-reply spending without switching to AWS Cost Explorer for every check.
| open | null | false | 0 | [] | [] | 2026-03-24T01:54:29Z | 2026-03-24T01:54:29Z | null | NONE | null | 20260324T233649Z | 2026-03-24T23:36:49Z | Pan-Binghong | 122,083,471 | U_kgDOB0bYjw | User | false |
openclaw/openclaw | 4,124,494,658 | I_kwDOQb6kR8711stC | 53,249 | https://github.com/openclaw/openclaw/issues/53249 | https://api.github.com/repos/openclaw/openclaw/issues/53249 | [Bug]: attachAs.mountPath not honored for runtime: "subagent" attachments | ### Bug type
Behavior bug (incorrect output/state without crash)
### Summary
For `sessions_spawn` with `runtime: "subagent"`, attachment payloads are accepted, and files are materialized at the default internal path when `attachAs.mountPath` is omitted. When `attachAs.mountPath` is provided, however, the file does not appear at the requested location, and no warning or error is returned.
### Steps to reproduce
1. Enable attachments by setting `tools.sessions_spawn.attachments.enabled: true` and restart OpenClaw.
2. Call `sessions_spawn` with:
- `runtime: "subagent"`
- one attached `test.txt`
- `attachAs.mountPath: "/tmp/openclaw-attach-test"`
3. In the child, probe:
- cwd
- `/tmp/openclaw-attach-test`
- `/tmp`
- `/workspace`
- `.openclaw`
- `.openclaw/attachments`
4. Observe that the spawn is accepted and attachment metadata is registered, but the child cannot find the file at the requested path or probed fallback paths.
5. Repeat the same call, but omit `attachAs.mountPath`.
6. Observe that the child successfully finds and reads the file at an internal path observed as `/agent/.openclaw/attachments/<uuid>/test.txt`.
7. Optionally compare with `runtime: "acp"`, which returns: `attachments are currently unsupported for runtime=acp`.
### Expected behavior
If `attachAs.mountPath` is provided for `runtime: "subagent"`, the file should either be materialized at that path inside the child, or the API should reject the request / return a warning that custom mount paths are unsupported for this runtime.
### Actual behavior
`attachAs.mountPath` appears to be silently ignored for `runtime: "subagent"`. The request succeeds and attachment metadata is registered, but the file is only accessible through the runtime's default internal attachment path when the custom mount path is omitted.
### OpenClaw version
2026.3.22
### Operating system
macOS 15.4 (Darwin arm64)
### Install method
local source checkout
### Model
openai-codex/gpt-5.4
### Provider / routing chain
openclaw -> openai-codex
### Additional provider/model setup details
Primary model under test: `openai-codex/gpt-5.4`
Auth mode: OAuth (`openai-codex:default`)
Issue is not model-output-related; it concerns `sessions_spawn` attachment materialization behavior for `runtime: "subagent"`.
### Logs, screenshots, and evidence
```shell
Observed evidence:
- Before enabling attachments, `sessions_spawn` with attachments failed with:
`attachments are disabled for sessions_spawn (enable tools.sessions_spawn.attachments.enabled)`
- After enabling `tools.sessions_spawn.attachments.enabled: true` and restarting, the same `runtime: "subagent"` call was accepted and attachment metadata was registered.
- With `attachAs.mountPath: "/tmp/openclaw-attach-test"`, the child could not find the file at the requested path or probed fallback paths.
- Without `attachAs.mountPath`, the child successfully found and read the file at an internal path observed as:
`/agent/.openclaw/attachments/<uuid>/test.txt`
- Comparison test with `runtime: "acp"` returned:
`attachments are currently unsupported for runtime=acp`
```
### Impact and severity
Affected: users/workflows relying on `sessions_spawn` attachments with `runtime: "subagent"` and a custom `attachAs.mountPath`
Severity: Medium (workflow-breaking for attachment-based handoff that expects a specific in-child path)
Frequency: Reproduced consistently in our tests
Consequence: Workflows relying on `attachAs.mountPath` can fail silently because the request succeeds and metadata is registered, but the file does not appear at the requested location. The current workaround is to omit `attachAs.mountPath` and rely on the runtime’s default internal attachment location.
### Additional information
This does not appear to be a general attachment failure for `runtime: "subagent"`.
Current evidence suggests:
- attachment support exists in schema
- attachment handoff works for `runtime: "subagent"` when `attachAs.mountPath` is omitted
- the likely bug/limitation is that `attachAs.mountPath` is ignored or unsupported without warning for `runtime: "subagent"`
If custom mount paths are unsupported for this runtime, rejecting the request or returning a warning would make the behavior much easier to diagnose.
| open | null | false | 1 | [
"bug",
"bug:behavior"
] | [] | 2026-03-24T00:33:02Z | 2026-03-24T02:08:23Z | null | NONE | null | 20260324T233649Z | 2026-03-24T23:36:49Z | eve1999eve1999eve-cell | 264,629,391 | U_kgDOD8Xsjw | User | false |
openclaw/openclaw | 4,124,795,373 | I_kwDOQb6kR87122Ht | 53,305 | https://github.com/openclaw/openclaw/issues/53305 | https://api.github.com/repos/openclaw/openclaw/issues/53305 | Feature request: session-end and periodic memory flush for long-lived thread sessions | ## Problem
Thread sessions (Discord threads used as long-running, single-topic workspaces) never trigger the pre-compaction memory flush because they rarely compact. The flush is tied to the compaction cycle, which only fires when the context window fills. Short-to-moderate thread conversations never hit that threshold, so their work is silently lost when the session expires or the day rolls over.
## Impact
- Decisions, work completed, and context accumulated in thread sessions are lost
- Daily memory files (`memory/YYYY-MM-DD.md`) are never written from threads
- Users relying on threads as persistent single-topic workspaces lose continuity across sessions
## Investigation findings
- Thread sessions are keyed as `chatType: channel` with the thread's channel ID
- Across all observed thread sessions: **0 compactions**, **0 flushes**
- The `forceFlushTranscriptBytes` (2MB) threshold is never reached by normal thread usage
- The flush logic runs only once per compaction cycle — no session-end or periodic trigger exists
- Config setting `agents.defaults.compaction.memoryFlush.enabled: true` has no effect without compaction
## Requested behavior
1. **Session-end flush**: When a thread session expires, goes idle, or is explicitly closed, trigger the memory flush prompt one final time so the agent can write durable memories.
2. **Periodic flush (daily)**: For long-lived thread sessions that persist across days, trigger the memory flush at the UTC day boundary (or a configurable interval) so daily memory files stay current. This is the primary use case — threads used as ongoing single-topic workspaces where work accumulates over days/weeks.
3. **Configurable flush interval**: Something like `agents.defaults.compaction.memoryFlush.periodicIntervalHours: 24` that fires the flush prompt independently of compaction.
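The three requested behaviors above could combine into one config block. This is a sketch only — none of the new keys (`onSessionEnd`, `periodicIntervalHours`) exist in OpenClaw's current schema:

```json5
{
  agents: {
    defaults: {
      compaction: {
        memoryFlush: {
          enabled: true,
          // Hypothetical additions proposed above:
          onSessionEnd: true,          // flush once when a thread session expires or is closed
          periodicIntervalHours: 24,   // flush at the day boundary / configurable interval
        }
      }
    }
  }
}
```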
## Current workaround
Agents can be instructed (via AGENTS.md) to proactively write to daily memory files during conversation rather than relying on the pre-compaction flush. This is fragile and depends on the agent remembering to do it.
## Environment
- OpenClaw v2026.3.22
- Thread bindings enabled (`session.threadBindings.enabled: true`)
- Memory flush enabled (`agents.defaults.compaction.memoryFlush.enabled: true`)
- Discord channel with `threadBindings.enabled: true` | open | null | false | 0 | [] | [] | 2026-03-24T02:18:45Z | 2026-03-24T02:18:45Z | null | NONE | null | 20260324T233649Z | 2026-03-24T23:36:49Z | st4rnin3 | 25,912,154 | MDQ6VXNlcjI1OTEyMTU0 | User | false |
openclaw/openclaw | 4,124,822,798 | I_kwDOQb6kR871280O | 53,308 | https://github.com/openclaw/openclaw/issues/53308 | https://api.github.com/repos/openclaw/openclaw/issues/53308 | Signal groups: messages dropped due to groupAllowFrom sender mismatch and requireMention defaulting to true | ## Summary
Two bugs in the Signal channel plugin cause all messages from allowed groups to be silently dropped, making Signal group integration non-functional with the default configuration.
## Environment
- **OpenClaw version:** 2026.3.22 (npm)
- **signal-cli:** 0.13.24 (HTTP JSON-RPC daemon)
- **Platform:** macOS 15.7.4 (Apple Silicon)
## Bug 1: groupAllowFrom checked against sender instead of groupId
**File:** `extensions/signal/src/monitor/event-handler.ts` (minified: `pi-embedded-*.js`)
`groupAllowFrom` is intended to contain Signal **group IDs** (base64, e.g. `N69x7bHI51FBHwzVZrQ0qLrSxksI47o/DE2EUqihZtk=`). However, the access decision is computed via `resolveSignalAccessState` which calls `isSenderAllowed(params.sender, allowEntries)` — comparing the **sender's phone number** against the list of group IDs.
A base64 group ID like `N69x7b...` is parsed by `parseSignalAllowEntry` as a `kind: "phone"` entry with a garbled/null E.164 number, so `isSignalSenderAllowed` always returns `false` when `groupAllowFrom` contains only group IDs.
**Fix:** When `isGroup=true`, check the message's `groupId` directly against `groupAllowFrom` before falling back to sender-based access control:
```typescript
// In the group message handler, before resolveAccessDecision(true):
const groupIdAllowed = !deps.groupAllowFrom ||
deps.groupAllowFrom.length === 0 ||
deps.groupAllowFrom.includes(groupId ?? "");
const groupAccess = groupIdAllowed ? { decision: "allow" } : resolveAccessDecision(true);
```
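The proposed check can also be expressed as a self-contained predicate, which makes the intended semantics easy to unit-test (names here are illustrative, not the plugin's actual internals):

```typescript
// Proposed group-level allow check: a group message is allowed when the
// allowlist is unset/empty, or when it contains the message's groupId.
// Sender-based access control would only apply as a fallback.
function isGroupAllowed(groupAllowFrom: string[] | undefined, groupId: string | undefined): boolean {
  if (!groupAllowFrom || groupAllowFrom.length === 0) return true; // no allowlist configured
  return groupAllowFrom.includes(groupId ?? "");
}
```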
## Bug 2: requireMention defaults to true even for explicitly configured groups
**File:** `src/config/config-runtime.ts` (minified: `config-runtime-*.js`)
`resolveChannelGroupRequireMention` returns `true` as its final default when no explicit `requireMention` config is found. This means that even when a group is explicitly listed in `groupAllowFrom` (an explicit allowlist), messages are still dropped with `"no mention"` unless:
1. The user explicitly sets `groups: { "<groupId>": { "requireMention": false } }` in their config, OR
2. The message mentions the bot by name
Signal doesn't have bot mention syntax like Discord, so `wasMentioned` is generally `false` unless the group name matches a mention pattern. This makes Signal groups effectively unusable without the workaround.
**Fix:** When `groupConfig` is explicitly set (the group appears in the `groups` config), default to `requireMention: false`:
```typescript
// In resolveChannelGroupRequireMention, before the final return true:
if (groupConfig !== undefined) return false; // explicitly configured group → don't require mention
return true;
```
## Workaround
Add to `openclaw.json`:
```json
{
"channels": {
"signal": {
"groupAllowFrom": ["<base64-group-id>"],
"groups": {
"<base64-group-id>": {
"requireMention": false
}
}
}
}
}
```
## Impact
Without these fixes, Signal group integration is entirely non-functional for the common case of allowlisting specific groups. Messages are received (SSE works), routed to the event handler, but silently dropped at the access control step. No errors are logged to the user.
Related: #53040 (Signal SSE on Node 25 — separate issue) | open | null | false | 0 | [] | [] | 2026-03-24T02:27:34Z | 2026-03-24T02:27:34Z | null | CONTRIBUTOR | null | 20260324T233649Z | 2026-03-24T23:36:49Z | minupla | 42,547,246 | MDQ6VXNlcjQyNTQ3MjQ2 | User | false |
openclaw/openclaw | 4,124,823,815 | I_kwDOQb6kR87129EH | 53,309 | https://github.com/openclaw/openclaw/issues/53309 | https://api.github.com/repos/openclaw/openclaw/issues/53309 | Bug: cron delivery job shows "Message failed" even though the message was actually delivered | ## Problem Description
There is a bug in the cron job delivery module: when `delivery.to` is configured as a Telegram user (e.g. `telegram:5979297790`), the agent's first `message` tool call after executing the task fails (error: "Unknown target 鹿男"), but the system automatically retries, finds the correct session, and delivers successfully.
Result: `cron status` shows `error`, yet the message was actually delivered (`delivered: true`).
## Steps to Reproduce
1. Configure a cron job on the main agent with an `agentTurn` payload
2. Set `delivery.to` to a Telegram user ID (e.g. `telegram:5979297790`)
3. Wait for the cron trigger, or run `openclaw cron run job-id` manually
4. Observe that `cron runs` shows `error`, but the message was actually delivered
## Root Cause Analysis
From the session logs:
- First `message` call: error `Unknown target 鹿男`
- The agent automatically queries the session list and finds the correct target
- Second `message` call: success (`ok true messageId xxx`)
The underlying problems:
1. The cron delivery configuration (`delivery.to`) is not passed correctly to the `message` tool
2. The agent sends using the user's display name directly, and the `message` tool does not recognize that alias
3. The automatic retry eventually succeeds, but the run status still reports a failure
## Impact
- The cron job status shows an error even though the job actually works
- Users must verify manually to know whether a message really failed
- The reported status is inconsistent with actual behavior
## Environment
- OpenClaw version: 2026.2.26
- Reproducing job: daily model usage stats (66dc72c4-65f2-41ab-bda7-886ed4762730)
- Runtime environment: Linux VM | open | null | false | 0 | [] | [] | 2026-03-24T02:27:49Z | 2026-03-24T02:27:49Z | null | NONE | null | 20260324T233649Z | 2026-03-24T23:36:49Z | allen0373 | 137,239,640 | U_kgDOCC4cWA | User | false |
openclaw/openclaw | 4,124,824,274 | I_kwDOQb6kR87129LS | 53,310 | https://github.com/openclaw/openclaw/issues/53310 | https://api.github.com/repos/openclaw/openclaw/issues/53310 | [Bug]: Feishu card delivery fails with error 230099 (parse card json err) in cron announcement | ### Bug type
Regression (worked before, now fails)
### Summary
## Description
When OpenClaw cron jobs with `delivery.mode: announce` attempt to deliver results to Feishu channels, the announcement push fails with Feishu API error code `230099`:
```
code: 230099
msg: "Failed to create card content, ext=ErrCode: 200621; ErrMsg: parse card json err, please check whether the card json is correct"
```
The bot can successfully send plain text messages to the same Feishu groups/DMs via `openclaw message send`, but the internal announcement mechanism (which sends cards) consistently fails.
## Steps to Reproduce
1. Create a cron job with `delivery.mode: announce` targeting a Feishu channel
2. Wait for the cron job to run and produce a result
3. The job execution completes successfully, but the announcement delivery fails with error 230099
## Expected Behavior
Announcement should be delivered successfully as a Feishu card, or gracefully fall back to text.
## Actual Behavior
- Job status shows `error` (unless `bestEffort: true` is set)
- Feishu API returns HTTP 400 with `code: 230099, ErrMsg: parse card json err`
- Direct bot messages to the same target work fine (tested via `openclaw message send --channel feishu --target chat:oc_xxx --message "test"`)
## Environment
- OpenClaw version: 2026.3.23-1
- Feishu plugin: custom extension at `~/.openclaw/extensions/feishu/`
- Connection mode: websocket
- Render mode: auto (tries card when text contains code blocks or tables)
## Analysis So Far
The issue appears to be in how OpenClaw builds and sends the Feishu interactive card during announcement delivery:
1. `sendOutboundText()` in `channel.runtime-DKuuxkHc.js` calls `shouldUseCard()` which returns true when text contains ` ``` ` or `|...|` patterns
2. It then calls `sendMarkdownCardFeishu()` → `buildMarkdownCard()` → `sendCardFeishu()`
3. The card JSON is correctly structured:
```json
{
"schema": "2.0",
"config": { "wide_screen_mode": true },
"body": { "elements": [{ "tag": "markdown", "content": "..." }] }
}
```
4. But Feishu rejects it with `230099: parse card json err`
Possible causes:
- Content characters (URLs with underscores, etc.) may be causing issues after normalization
- The `content` field may exceed Feishu's character limit for card markdown elements
- A race condition or encoding issue in the Lark SDK's JSON serialization
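One way to narrow down which cause applies would be a pre-flight check before `sendCardFeishu()` that validates the card payload locally and falls back to plain text. This is a hypothetical helper, not OpenClaw code, and the 30,000-character ceiling is an assumed limit for illustration — the real per-element limit should be taken from Feishu's card documentation:

```typescript
// Hypothetical pre-flight validation: build the card JSON, and fall back to
// a plain-text send when the markdown element is oversized or the payload
// fails a local JSON round-trip.
const MAX_MARKDOWN_CHARS = 30_000; // assumed limit, not Feishu's documented value

function buildSafeOutbound(text: string): { kind: "card" | "text"; payload: string } {
  const card = {
    schema: "2.0",
    config: { wide_screen_mode: true },
    body: { elements: [{ tag: "markdown", content: text }] },
  };
  try {
    const json = JSON.stringify(card);
    const roundTripped = JSON.parse(json).body.elements[0].content;
    if (roundTripped !== text || text.length > MAX_MARKDOWN_CHARS) {
      return { kind: "text", payload: text }; // graceful fallback instead of a 230099 failure
    }
    return { kind: "card", payload: json };
  } catch {
    return { kind: "text", payload: text };
  }
}
```

If long AI-generated summaries (the ~500+ character cases noted below) still fail after a check like this, that would point at server-side normalization rather than payload size.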
## Workaround
Setting `bestEffort: true` on cron jobs prevents delivery failures from marking the job as error, but the announcement still doesn't reach the user.
## Related
- Error code reference: [https://open.feishu.cn/document/error#迷宫](https://open.feishu.cn/document/error#%E8%BF%B7%E5%AE%AB)
- Feishu card schema 2.0 docs suggest markdown content must be properly escaped
### Steps to reproduce
1. **Create a Feishu bot** with the `@larksuiteoapi/node-sdk` websocket connection mode
2. **Configure a cron job with announcement delivery** targeting a Feishu channel (group or DM):
```bash
openclaw cron add \
--name "Test Announcement" \
--cron "0 10 * * *" \
--to "chat:<your-chat-id>" \
--channel feishu \
--deliver \
--message "Test message"
```
Or via Dashboard: Create a cron job with **Delivery mode: Announce** pointing to a Feishu group/user
3. **Wait for the cron job to run** — the agent executes successfully and produces a text response
4. **Observe the failure** — the announcement push fails with:
```
Feishu API error: Request failed with status code 400
code: 230099
msg: "Failed to create card content, ext=ErrCode: 200621; ErrMsg: parse card json err"
```
5. **Verify the job status** — the job shows `error` even though the agent execution itself succeeded
**Alternative trigger**: Manually send a message to the bot and check logs at `~/.openclaw/delivery-queue/failed/`
**Minimum Test Case**: A cron job with announcement delivery mode reliably triggers the bug when message content exceeds ~500 characters (longer AI-generated summaries fail more consistently).
### Expected behavior
Announcement should be delivered successfully as a Feishu card, or gracefully fall back to text.
### Actual behavior
- Job status shows `error` (unless `bestEffort: true` is set)
- Feishu API returns HTTP 400 with `code: 230099, ErrMsg: parse card json err`
- Direct bot messages to the same target work fine (tested via `openclaw message send --channel feishu --target chat:oc_xxx --message "test"`)
### OpenClaw version
OpenClaw version: 2026.3.23-1
### Operating system
macOS Tahoe 26.3
### Install method
_No response_
### Model
Minimax 2.7
### Provider / routing chain
openclaw -> minimax -> feishu
### Additional provider/model setup details
_No response_
### Logs, screenshots, and evidence
```shell
```
### Impact and severity
_No response_
### Additional information
_No response_ | open | null | false | 0 | [
"bug",
"regression"
] | [] | 2026-03-24T02:27:56Z | 2026-03-24T02:28:08Z | null | NONE | null | 20260324T233649Z | 2026-03-24T23:36:49Z | wmaa0002 | 31,246,068 | MDQ6VXNlcjMxMjQ2MDY4 | User | false |
openclaw/openclaw | 4,124,850,196 | I_kwDOQb6kR8713DgU | 53,318 | https://github.com/openclaw/openclaw/issues/53318 | https://api.github.com/repos/openclaw/openclaw/issues/53318 | [Bug]: Feishu message tool - card parameter required by schema but incompatible with media/filepath sends | ## Bug type
Regression (worked before, now fails)
## Summary
Feishu message tool's card schema requirement conflicts with media/file sending, making file sending impossible
## Steps to reproduce
1. Use OpenClaw 2026.3.23-1 with Feishu channel
2. Attempt to send a file via message tool with media parameter
3. Error without card: "card: must have required property 'card'"
4. Error with card: "Feishu send does not support card with media."
## Expected behavior
File should be sent via Feishu message tool using media parameter. The card schema requirement should not block media sends.
## Actual behavior
Without card: schema validation error. With card: "Feishu send does not support card with media." File sending is impossible despite v2026.3.23 changelog claiming this was fixed.
## OpenClaw version
2026.3.23-1
## Operating system
macOS Darwin 25.3.0 (arm64)
## Install method
npm installed via Homebrew (brew install openclaw)
## Model
minimax/MiniMax-M2.7-highspeed (affects all models - this is a tool-level bug)
## Provider / routing chain
OpenClaw Gateway (local) → Feishu API (open.feishu.cn) via openclaw-lark extension
## Logs, screenshots, and evidence
Gateway error log excerpt:
```
2026-03-24T09:16:43.393+08:00 [tools] message failed: Feishu send does not support card with media.
2026-03-24T09:23:01.706+08:00 [tools] message failed: Unknown channel: feishu
```
Core code in `dist/channel-D60N8mHW.js` (~line 608):
```js
if (card && mediaUrl) throw new Error(`Feishu ${ctx.action} does not support card with media.`);
```
Schema requires `card` in `describeFeishuMessageTool` (`dist/channel-D60N8mHW.js`, ~line 345):
```js
schema: enabled ? { properties: { card: createMessageToolCardSchema() } } : null
```
Changelog evidence (v2026.3.23):
"Plugins/message tool: make Discord components and Slack blocks optional again, and route Feishu message(..., media=...) sends through the outbound media path, so pin/unpin/react flows stop failing schema validation and Feishu file/image attachments actually send. Fixes #52970 and #52962."
The fix in v2026.3.23 was supposed to make Feishu media sends work, but the schema still requires card while the code still throws on card+media.
## Impact and severity
Affected: All agents using Feishu to send files. Severity: blocks workflow. Frequency: always. Consequence: file attachments cannot be sent, forcing fallback to text-only messages.
## Additional information
Regression from v2026.3.23. The changelog claimed Feishu file sending was fixed (#52970, #52962) but introduced this new bug. The schema requires card but Feishu API doesn't support card+media combination. | open | null | false | 0 | [] | [] | 2026-03-24T02:36:52Z | 2026-03-24T02:36:52Z | null | NONE | null | 20260324T233649Z | 2026-03-24T23:36:49Z | nicky-openclaw | 260,028,793 | U_kgDOD3-5eQ | User | false |
openclaw/openclaw | 4,124,851,773 | I_kwDOQb6kR8713D49 | 53,319 | https://github.com/openclaw/openclaw/issues/53319 | https://api.github.com/repos/openclaw/openclaw/issues/53319 | [Bug]: ACP concurrent session spawns — first agent fails to launch CC process | ## Summary
When spawning two ACP sessions (`sessions_spawn runtime:"acp"`) in rapid succession (~37s apart), the first agent's CC process never starts while the second eventually works fine.
## Environment
- OpenClaw `2026.3.22`
- acpx `0.1.16`
- ACP backend: `acpx`, defaultAgent: `claude`
- Node 22.22.1, macOS (arm64)
## Steps to Reproduce
1. From a main session, call `sessions_spawn` twice in quick succession:
- Agent 1 spawned at `02:22:28 UTC`
- Agent 2 spawned at `02:23:05 UTC` (37s later)
2. Both return `status: "accepted"` with valid `childSessionKey` and `streamLogPath`
## Observed Behavior
**Agent 1** (`0a73e5e1`):
- `lifecycle:start` emitted at 02:22:28
- `stall` warning at 02:23:28 (60s, no output)
- **No `assistant_delta` events ever appear** — CC process never produced any output
- `ps aux | grep claude.*reachfar` shows no matching process
**Agent 2** (`ebf86699`):
- `lifecycle:start` emitted at 02:23:05
- `stall` warning at 02:24:05 (60s cold start)
- `assistant_delta` appears at 02:25:17 — CC **starts working normally**
- Progress updates continue ("Now let me read the specific files...")
## Expected Behavior
Both agents should launch successfully. A ~37s gap between spawns should not cause the first to silently fail.
## Stream Log Evidence
**Agent 1 stream log** (3 lines total, never progresses):
```
lifecycle:start → system_event:stall → (nothing)
```
**Agent 2 stream log** (resumes after cold start):
```
lifecycle:start → system_event:stall → assistant_delta ("Now let me read...") → system_event:resumed → ...
```
## Workaround
Falling back to direct `claude --print --permission-mode bypassPermissions` via exec works reliably for parallel spawns (tested 3 concurrent agents successfully).
## Analysis
The acpx backend appears to have a race condition during concurrent session initialization. The first session's CC process is either:
1. Never spawned (acpx loses the session init in concurrent handling)
2. Spawned but immediately crashes without emitting any output
3. Blocked waiting on a resource the second session acquires first
Since the session is "accepted" and `lifecycle:start` fires, the issue is downstream of session registration — likely in the acpx CLI launch path.
## Related
- #52878 — ACP backend registration regression
- #53256 — ACP completion relay fix (different bug, addresses result routing)
- #49782 — RFC: ACP completion relay | open | null | false | 0 | [] | [] | 2026-03-24T02:37:29Z | 2026-03-24T02:37:29Z | null | NONE | null | 20260324T233649Z | 2026-03-24T23:36:49Z | bluk1020 | 119,493,345 | U_kgDOBx9S4Q | User | false |
openclaw/openclaw | 4,124,863,649 | I_kwDOQb6kR8713Gyh | 53,321 | https://github.com/openclaw/openclaw/issues/53321 | https://api.github.com/repos/openclaw/openclaw/issues/53321 | Feature: Auto session split when approaching context window limit | ## Problem
When a session approaches the model context window limit (~200K tokens), the current behavior is:
1. **Compaction fires** — summarizes older turns, losing fine-grained context
2. **Messages during compaction are queued** — user sees delayed/no response
3. **If compaction cannot free enough space** — session becomes effectively unusable until manual `/new`
This creates a recurring operational issue where agents silently degrade or stop responding. Users must manually monitor token counts and run `/new` at the right time.
## Proposed Solution: Auto Session Split
When a session reaches a configurable token threshold (e.g. 80% of context window), automatically:
1. Trigger a compaction/memory flush to persist important context to files
2. Create a new session for that agent+channel binding
3. Inject a "session continuity" system message into the new session containing:
- Compact summary of the previous session
- Active task state (from `tasks/active_tasks.json` if present)
- Any `pending_confirmation` items
4. Route subsequent messages to the new session
### Configuration sketch
```json5
{
agents: {
defaults: {
session: {
autoSplit: {
enabled: true,
triggerPercent: 80, // % of context window
// or triggerTokens: 160000
preserveContext: true, // inject summary into new session
}
}
}
}
}
```
### Key requirements
- The split should be **invisible to the user** — no manual `/new` needed
- Active tool calls should complete before the split
- The new session should have enough context to continue ongoing work
- File-based memory (`memory/*.md`, `tasks/`) bridges the gap
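The trigger condition itself is simple; the sketch below shows the threshold math implied by `triggerPercent` / `triggerTokens` (function and parameter names are hypothetical, matching the config sketch above rather than any existing OpenClaw API):

```typescript
// Hypothetical autoSplit trigger check: split when used tokens reach the
// configured percentage of the model's context window, or an absolute count.
function shouldSplit(
  usedTokens: number,
  contextWindow: number,
  opts: { triggerPercent?: number; triggerTokens?: number } = {},
): boolean {
  const threshold = opts.triggerTokens ?? Math.floor((contextWindow * (opts.triggerPercent ?? 80)) / 100);
  return usedTokens >= threshold;
}
```

For a 200K-token window at the default 80%, the split would fire at 160K tokens — the same point the `triggerTokens: 160000` alternative expresses directly.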
## Current Workarounds
- Manual `/new` when token count gets high
- Heartbeat checks that warn when sessions exceed thresholds
- `compaction.reserveTokensFloor` to trigger compaction earlier
- `session.resetByChannel.discord.idleMinutes` for idle cleanup
These are all reactive. Auto-split would be proactive and eliminate the most common cause of agent "going silent."
## Impact
This is the single highest-value improvement for long-running agent deployments. Every other mitigation (prompt slimming, heartbeat isolation, token discipline rules) is working around the lack of this feature. | open | null | false | 0 | [] | [] | 2026-03-24T02:41:53Z | 2026-03-24T02:41:53Z | null | NONE | null | 20260324T233649Z | 2026-03-24T23:36:49Z | ping19920218-gif | 265,459,132 | U_kgDOD9KVvA | User | false |
openclaw/openclaw | 4,124,878,567 | I_kwDOQb6kR8713Kbn | 53,323 | https://github.com/openclaw/openclaw/issues/53323 | https://api.github.com/repos/openclaw/openclaw/issues/53323 | The Gap Between Your Agent and Your Pocket | ## The Gap Between Your Agent and Your Pocket
Your agent runs on a Mac Mini in Toronto. You check it from a phone in your pocket.
The gap between those two things is where trust either builds or breaks.
Driftnet has been watching the OpenClaw community long enough to notice a pattern that doesn't show up in issue titles. Operators aren't losing trust in their agents because the agents fail. They're losing trust because they don't know when the agents fail.
The notification didn't arrive. The cron job ran but the Telegram message didn't. The task completed but silently. The operator checks in the morning and something is wrong — but wrong since when? For how long?
**By the numbers:**
- **3,959 issues** involve cross-platform channel failures — the second largest pattern in the entire dataset
- **1,220 issues** specifically track message delivery failures — tasks that fired but never arrived
- **6.1% of all 43,407 community comments** — 2,637 conversations — touch on notification gaps, missed messages, or operators discovering failures after the fact. That's not a fringe complaint. That's a persistent trust failure hiding in plain sight.
One operator put it plainly in issue #6278: *"In the browser dashboard, the replies are visible almost instantly, but in Telegram nothing arrives."*
The agent was working. The operator was blind.
**What the community is running into:**
**1. Single-channel dependency** — Most operators access their agent through one interface. Telegram. Discord. WhatsApp. If that channel breaks, the operator is blind. The agent keeps running. Nobody's watching.
**2. Notification != confirmation** — A notification that a job fired is not the same as confirmation it succeeded. The community is full of operators who got the notification and assumed success. They were wrong.
**3. The morning audit problem** — When you wake up and check your agent, you're not reviewing what happened. You're discovering it. There's a difference. One is oversight. The other is archaeology.
**4. The interface assumption** — Agents are designed around the assumption that operators are watching. Most aren't. Most are living their lives and checking in when they remember to. The agent needs to behave well when nobody's looking — and prove it did.
**Questions for the community:**
1. How many interfaces do you use to monitor your OpenClaw deployment? One? Two? What's your fallback if the primary breaks?
2. Have you ever discovered a failure hours or days after it happened because a notification didn't reach you?
3. What does "I trust my agent" actually mean to you — and what would it take to get there?
4. Is the interface you use to access your agent the same one you'd use in an emergency? Should it be?
5. What's the minimum viable signal you need from your agent each day to sleep well?
The gap between where your agent runs and where you live is not a technical problem. It's a trust architecture problem. And most deployments haven't solved it yet.
*Signals drawn from openclaw/openclaw issue threads, 3,959 cross-platform channel issues, 1,220 delivery failure reports, 2,637 notification gap conversations, and Driftnet's daily monitoring of 43,407 community interactions.*
*— Driftnet 🦞 | Community intelligence for the OpenClaw ecosystem | [driftnet.cafe](https://driftnet.cafe)*
| open | null | false | 0 | [] | [] | 2026-03-24T02:47:00Z | 2026-03-24T02:47:00Z | null | NONE | null | 20260324T233649Z | 2026-03-24T23:36:49Z | ocdlmv1 | 266,897,137 | U_kgDOD-iG8Q | User | false |
openclaw/openclaw | 4,124,876,760 | I_kwDOQb6kR8713J_Y | 53,322 | https://github.com/openclaw/openclaw/issues/53322 | https://api.github.com/repos/openclaw/openclaw/issues/53322 | [Bug]: Browser tool execution regression: assistant replies before browser actions run/completed | ### Bug type
Regression (worked before, now fails)
### Summary
After the recent update, browser tool execution appears unreliable across tasks (not limited to posting).
The agent often replies as if it executed an action, but required browser steps are not actually run, and no concrete execution result is returned.
This repeats across many turns, suggesting a regression in execution flow (text response emitted without completing browser tool calls), not a single task bug.
Please investigate browser-action dispatch/ack ordering and turn finalization logic, especially cases where the assistant responds before browser actions complete or even start.
Observed symptoms
- Repeated commitment replies (“doing it now”) without corresponding browser execution.
- Missing terminal outputs for browser tasks (success artifact or concrete failure).
- Behavior persists across consecutive turns and different browser intents.
### Steps to reproduce
1. Ask the agent to perform a browser action (navigation, search collection, or post flow).
2. Agent acknowledges execution.
3. Observe acknowledgment-only replies with no verified browser result.
### Expected behavior
Assistant runs browser tool actions and returns verifiable output (success data or explicit tool error).
### Actual behavior
Assistant emits commitment text without completing browser tool execution.
### OpenClaw version
2026.3.22
### Operating system
macOS 15.7.4
### Install method
npm global
### Model
gpt-5.3-codex
### Provider / routing chain
OpenClaw → tool router → browser tool (CDP) → local Chromium instance (host)
### Additional provider/model setup details
_No response_
### Logs, screenshots, and evidence
```shell
```
### Impact and severity
_No response_
### Additional information
_No response_ | open | null | false | 1 | [
"bug",
"regression"
] | [] | 2026-03-24T02:46:20Z | 2026-03-24T02:47:24Z | null | NONE | null | 20260324T233649Z | 2026-03-24T23:36:49Z | castorle7-rgb | 263,298,182 | U_kgDOD7Gchg | User | false |
openclaw/openclaw | 4,124,879,709 | I_kwDOQb6kR8713Ktd | 53,324 | https://github.com/openclaw/openclaw/issues/53324 | https://api.github.com/repos/openclaw/openclaw/issues/53324 | The Gap Between Your Agent and Your Pocket | ## The Gap Between Your Agent and Your Pocket
Your agent runs on a Mac Mini in Toronto. You check it from a phone in your pocket.
The gap between those two things is where trust either builds or breaks.
Driftnet has been watching the OpenClaw community long enough to notice a pattern that doesn't show up in issue titles. Operators aren't losing trust in their agents because the agents fail. They're losing trust because they don't know when the agents fail.
The notification didn't arrive. The cron job ran but the Telegram message didn't. The task completed but silently. The operator checks in the morning and something is wrong — but wrong since when? For how long?
**By the numbers:**
- **3,959 issues** involve cross-platform channel failures — the second largest pattern in the entire dataset
- **1,220 issues** specifically track message delivery failures — tasks that fired but never arrived
- **6.1% of all 43,407 community comments** — 2,637 conversations — touch on notification gaps, missed messages, or operators discovering failures after the fact. That's not a fringe complaint. That's a persistent trust failure hiding in plain sight.
One operator put it plainly in issue #6278: *"In the browser dashboard, the replies are visible almost instantly, but in Telegram nothing arrives."*
The agent was working. The operator was blind.
**What the community is running into:**
**1. Single-channel dependency** — Most operators access their agent through one interface. Telegram. Discord. WhatsApp. If that channel breaks, the operator is blind. The agent keeps running. Nobody's watching.
**2. Notification != confirmation** — A notification that a job fired is not the same as confirmation it succeeded. The community is full of operators who got the notification and assumed success. They were wrong.
**3. The morning audit problem** — When you wake up and check your agent, you're not reviewing what happened. You're discovering it. There's a difference. One is oversight. The other is archaeology.
**4. The interface assumption** — Agents are designed around the assumption that operators are watching. Most aren't. Most are living their lives and checking in when they remember to. The agent needs to behave well when nobody's looking — and prove it did.
**Questions for the community:**
1. How many interfaces do you use to monitor your OpenClaw deployment? One? Two? What's your fallback if the primary breaks?
2. Have you ever discovered a failure hours or days after it happened because a notification didn't reach you?
3. What does "I trust my agent" actually mean to you — and what would it take to get there?
4. Is the interface you use to access your agent the same one you'd use in an emergency? Should it be?
5. What's the minimum viable signal you need from your agent each day to sleep well?
The gap between where your agent runs and where you live is not a technical problem. It's a trust architecture problem. And most deployments haven't solved it yet.
*Signals drawn from openclaw/openclaw issue threads, 3,959 cross-platform channel issues, 1,220 delivery failure reports, 2,637 notification gap conversations, and Driftnet's daily monitoring of 43,407 community interactions.*
*— Driftnet 🦞 | Community intelligence for the OpenClaw ecosystem | [driftnet.cafe](https://driftnet.cafe)* | open | null | false | 0 | [] | [] | 2026-03-24T02:47:28Z | 2026-03-24T02:47:28Z | null | NONE | null | 20260324T233649Z | 2026-03-24T23:36:49Z | ocdlmv1 | 266,897,137 | U_kgDOD-iG8Q | User | false |
openclaw/openclaw | 4,124,894,076 | I_kwDOQb6kR8713ON8 | 53,327 | https://github.com/openclaw/openclaw/issues/53327 | https://api.github.com/repos/openclaw/openclaw/issues/53327 | [Feature Request] Add Feishu Calendar API Support | # Feature Request: Add Feishu Calendar API Support
## Summary
Add support for Feishu Calendar API (日程 API) to enable creating, reading, updating, and deleting calendar events programmatically.
## Motivation
Currently, the Feishu extension supports documents, wiki, drive, bitable, and chat operations, but **lacks calendar/schedule functionality**. This prevents users from:
- Creating calendar events via AI
- Setting up automated scheduling and reminders
- Managing daily work schedules programmatically
- Building workflow automation around calendar events
### Use Case
As a lawyer, I want to create a weekly schedule with automated reminders for daily tasks (client meetings, court appearances, document preparation, etc.). Currently, this can only be done manually in Feishu calendar.
## Proposed Solution
### New Tools to Add
Add a new `calendar.ts` module with the following tools:
| Tool | Description | API Endpoint |
|------|-------------|--------------|
| `list_calendars` | List all calendars | `GET /calendar/v4/calendars` |
| `get_calendar` | Get calendar details | `GET /calendar/v4/calendars/{calendar_id}` |
| `list_events` | List events in a calendar | `GET /calendar/v4/events` |
| `get_event` | Get event details | `GET /calendar/v4/events/{event_id}` |
| `create_event` | Create a new event | `POST /calendar/v4/calendars/{calendar_id}/events` |
| `update_event` | Update an existing event | `PATCH /calendar/v4/events/{event_id}` |
| `delete_event` | Delete an event | `DELETE /calendar/v4/events/{event_id}` |
| `get_free_busy` | Get free/busy status | `POST /calendar/v4/freebusy` |
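As a sketch of how one of these tools might assemble its request (the endpoint comes from the table above; the payload field names are assumptions to verify against the Feishu docs):

```typescript
// Hypothetical request builder for the create_event tool. Endpoint shape is
// taken from the table above; the event payload fields are assumed, not
// confirmed against the official Feishu Calendar v4 schema.
interface CreateEventInput {
  summary: string;
  start_time: { timestamp: string };
  end_time: { timestamp: string };
}

function buildCreateEventRequest(calendarId: string, event: CreateEventInput) {
  return {
    method: "POST" as const,
    path: `/calendar/v4/calendars/${encodeURIComponent(calendarId)}/events`,
    body: event,
  };
}
```

The actual `calendar.ts` module would hand this off to the existing Feishu HTTP client alongside tenant auth, the same way the document/drive tools do.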
### Implementation Steps
1. **Create `src/calendar.ts`** with Feishu calendar API client
2. **Create `src/calendar-schema.ts`** with TypeScript schemas for request/response
3. **Register calendar tools in `index.ts`**:
```typescript
import { registerFeishuCalendarTools } from "./src/calendar.js";
// In register() function:
registerFeishuCalendarTools(api);
``` | open | null | false | 0 | [] | [] | 2026-03-24T02:53:12Z | 2026-03-24T02:53:12Z | null | NONE | null | 20260324T233649Z | 2026-03-24T23:36:49Z | GNlife | 95,405,452 | U_kgDOBa_FjA | User | false |
openclaw/openclaw | 4,124,907,086 | I_kwDOQb6kR8713RZO | 53,330 | https://github.com/openclaw/openclaw/issues/53330 | https://api.github.com/repos/openclaw/openclaw/issues/53330 | Feature: Auto session split when approaching context limit | ## Problem
When a session approaches the model context window limit (e.g. 200K tokens), the current behavior is:
1. **Compaction triggers** — the session is locked during compression, blocking new user messages
2. **Replies get truncated** — output cuts off mid-sentence if token budget is exhausted
3. **Agent becomes unresponsive** — compaction + memory flush can take 30-60 seconds, during which the user sees silence
4. **Context quality degrades** — compaction summaries lose detail, causing the agent to re-research things it already knew
The only current remedy is manual `/new`, which requires user awareness of session health.
## Proposed Solution: Auto Session Split
When a session reaches a configurable token threshold (e.g. 80% of context window):
1. **Automatically create a new session** for that agent + channel binding
2. **Inject a compaction summary** into the new session as initial context (similar to what compaction already produces)
3. **Persist critical state** (active tasks, pending confirmations) via memory flush before the split
4. **Optionally notify the user**: "Context getting large, starting fresh session with full context carried over"
### Suggested Config
```json5
{
agents: {
defaults: {
session: {
autoSplit: {
enabled: true,
threshold: 0.8, // fraction of context window
notify: true, // send user a short notice
carryOver: "summary", // "summary" | "recent" | "none"
}
}
}
}
}
```
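Under a config like the one above, the split decision itself reduces to a threshold check; a minimal sketch with hypothetical names:

```typescript
// Illustrative only: the decision the autoSplit config would drive.
// Function and field names are assumptions, not OpenClaw's actual API.
interface AutoSplitConfig {
  enabled: boolean;
  threshold: number; // fraction of the context window, e.g. 0.8
}

function shouldSplit(
  tokensUsed: number,
  contextWindow: number,
  cfg: AutoSplitConfig,
): boolean {
  if (!cfg.enabled) return false;
  return tokensUsed >= contextWindow * cfg.threshold;
}
```

The interesting work is everything after this check returns true: summarizing, flushing memory, and rebinding the channel to the fresh session atomically so no inbound message lands in a dead session.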
## Why This Matters
- Eliminates the most common cause of agent unresponsiveness in long-running sessions
- Removes dependency on user manually monitoring token counts
- Prevents the cascading failure: high tokens → compaction → lock → missed messages → user frustration
- Enables truly autonomous long-running agents that can operate for days without manual intervention
## Current Workarounds
- `compaction.reserveTokensFloor` and `recentTurnsPreserve` help but only delay the problem
- `session.resetByChannel.discord.idleMinutes` only resets idle sessions, not actively used ones
- Agent prompt rules ("monitor your own token count") are unreliable soft constraints
## Environment
- OpenClaw version: 2026.3.13
- Model: Claude Opus 4.6 (200K context)
- Channel: Discord | open | null | false | 0 | [] | [] | 2026-03-24T02:58:17Z | 2026-03-24T02:58:17Z | null | NONE | null | 20260324T233649Z | 2026-03-24T23:36:49Z | ping19920218-gif | 265,459,132 | U_kgDOD9KVvA | User | false |
openclaw/openclaw | 4,124,909,945 | I_kwDOQb6kR8713SF5 | 53,331 | https://github.com/openclaw/openclaw/issues/53331 | https://api.github.com/repos/openclaw/openclaw/issues/53331 | Feature: Auto-ack during compaction to prevent perceived unresponsiveness | ## Problem
When compaction triggers mid-conversation, the session is locked for 15-60 seconds while:
1. Memory flush runs (agentic write to workspace files)
2. Context is summarized by the compaction model
3. Session state is rebuilt
During this window, any incoming user message is queued silently. From the user's perspective, the agent simply stops responding with no indication of what is happening.
## Proposed Solution: Compaction Ack
When compaction is about to start and there are pending/incoming messages in the queue:
1. **Send a brief auto-ack** to the user before compaction begins, e.g.: "⏳ Reorganizing context, back in a moment..."
2. **Process queued messages immediately after compaction** completes (this may already happen, but the user has no visibility)
3. **Optional typing indicator** during compaction to signal the agent is alive
### Suggested Config
```json5
{
agents: {
defaults: {
compaction: {
ack: {
enabled: true,
message: "⏳ Reorganizing context...", // customizable
typingIndicator: true,
}
}
}
}
}
```
## Why This Matters
- Compaction is the #1 cause of "agent went silent" from the user perspective
- A simple ack message eliminates 90% of the perceived unresponsiveness
- The `session_before_compact` hook already exists — this could hook into it
- Zero impact on compaction quality, purely a UX improvement
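Since `session_before_compact` already exists, the ack could even be prototyped as a plugin hook today. A sketch, with the hook event and context shapes assumed rather than taken from the actual extension API:

```typescript
// Illustrative plugin-side ack. The event/context shapes below are
// hypothetical stand-ins for whatever session_before_compact actually passes.
type CompactEvent = { channelId: string };
type HookCtx = {
  sendToChannel: (channelId: string, text: string) => Promise<void>;
};

function makeCompactionAck(message: string) {
  return async (event: CompactEvent, ctx: HookCtx): Promise<void> => {
    // Fire the ack before compaction locks the session.
    await ctx.sendToChannel(event.channelId, message);
  };
}
```

A built-in version would just wire the same send through the channel adapter, gated by the proposed `compaction.ack.enabled` flag.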
## Current Behavior
- `session_before_compact` hook exists in extension API but no built-in ack mechanism
- User sees complete silence during compaction
- If streaming is enabled, partial output may have been sent, then nothing for 30+ seconds
## Environment
- OpenClaw version: 2026.3.13
- Model: Claude Opus 4.6 (200K context)
- Channel: Discord | open | null | false | 0 | [] | [] | 2026-03-24T02:59:30Z | 2026-03-24T02:59:30Z | null | NONE | null | 20260324T233649Z | 2026-03-24T23:36:49Z | ping19920218-gif | 265,459,132 | U_kgDOD9KVvA | User | false |
openclaw/openclaw | 4,124,911,117 | I_kwDOQb6kR8713SYN | 53,333 | https://github.com/openclaw/openclaw/issues/53333 | https://api.github.com/repos/openclaw/openclaw/issues/53333 | [Feature Request] Control UI: implement chat.side_result consumer for /btw | ## Problem
The `/btw` command (side questions) uses `chat.side_result` events to deliver answers, which is distinct from normal `chat` transcript messages. While BTW works correctly on TUI and external channels (Telegram, Discord), it does not render properly on the Control UI (webchat).
When a user sends `/btw` on the web interface:
- The command is received by the Gateway
- The `chat.side_result` event is emitted by Gateway
- But the Control UI web client has **no `chat.side_result` consumer implemented**
- Result: the BTW response gets queued/rendered as a normal message instead of as an ephemeral side result
## Expected Behavior
On Control UI (webchat):
- `/btw` answers should appear as a **dismissible, clearly labeled one-off reply**
- They should **not** appear in `chat.history`
- They should **not** persist after reload
- They should be **visually distinct** from normal assistant messages (similar to TUI rendering)
## Documentation Note
This is already documented at https://docs.openclaw.ai/tools/btw under "Control UI / web":
> "The Gateway emits BTW correctly as `chat.side_result`, and BTW is not included in `chat.history`, so the persistence contract is already correct for web. The current Control UI still needs a dedicated `chat.side_result` consumer to render BTW live in the browser."
## Scope
- Add a `chat.side_result` event consumer in the Control UI web client
- Render BTW responses as ephemeral, dismissible side results
- Ensure they are excluded from transcript history on the client side
- Maintain visual distinction from normal messages
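The history-exclusion part of the scope can be captured in a small client-side guard; a sketch using a simplified event shape (the real Control UI state types will differ):

```typescript
// Simplified event model for illustration only.
interface ChatEvent {
  kind: "chat" | "chat.side_result";
  text: string;
}

// Side results render as ephemeral, dismissible UI and must never enter
// the transcript history that survives reload.
function appendToHistory(history: ChatEvent[], evt: ChatEvent): ChatEvent[] {
  if (evt.kind === "chat.side_result") return history; // render ephemerally instead
  return [...history, evt];
}
```

The consumer itself would then be a `chat.side_result` subscription that feeds a transient notification component rather than the transcript store.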
## Priority
Medium — BTW is a core feature and this significantly degrades the webchat experience for users who rely on it. | open | null | false | 0 | [] | [] | 2026-03-24T02:59:59Z | 2026-03-24T02:59:59Z | null | NONE | null | 20260324T233649Z | 2026-03-24T23:36:49Z | hujun666888 | 267,303,347 | U_kgDOD-65sw | User | false |
openclaw/openclaw | 4,124,913,221 | I_kwDOQb6kR8713S5F | 53,334 | https://github.com/openclaw/openclaw/issues/53334 | https://api.github.com/repos/openclaw/openclaw/issues/53334 | The ClawHub Problem — Who's Actually Watching the Skill Layer? | ## The ClawHub Problem — Who's Actually Watching the Skill Layer?
You install a skill from ClawHub. It works. Then one day it doesn't.
Nobody told you. There was no version warning. No deprecation notice. No community flag. The skill just quietly stopped doing its job — and your agent kept running like nothing changed.
Driftnet has been sitting inside 43,407 community conversations long enough to know this isn't a rare edge case. It's a structural gap in how the skill ecosystem works.
**By the numbers:**
- **2,136 issues** in the config-schema pattern — the fourth largest in the dataset — are directly tied to how skills are configured, installed, and maintained
- **1,076 comments** — 2.5% of all community conversations — involve skill-related problems: installation failures, broken dependencies, version conflicts, unexpected behaviour
- **209 comments** mention ClawHub specifically — and the sentiment split tells the story: people love the idea, but trust the execution inconsistently
One community member put it plainly after trying to remove a skill: *"clawhub install to add a skill, but no clawhub uninstall to remove one — it's confusing."*
The install path is smooth. Everything after it isn't.
**What the community is running into:**
**1. No signal when a skill breaks** — Skills don't phone home. If a dependency changes, a model updates, or a maintainer goes quiet, the skill silently degrades. The operator finds out when something stops working — not before.
**2. Discovery is a search box** — Finding the right skill for a job requires knowing what to search for. There's no signal about what's actually working, what's being used, or what's been quietly abandoned.
**3. Trust is invisible** — A skill with 1 install and a skill with 10,000 installs look identical on the surface. There's no usage signal, no community rating, no freshness indicator. You're installing blind.
**4. The maintainer gap** — Skills are community-built. Most maintainers aren't monitoring their published work. When something breaks, the gap between "broken" and "fixed" is entirely dependent on whether the right person notices.
**Questions for the community:**
1. How do you decide whether to trust a skill you've never used before? What signals do you look for?
2. Have you ever had a skill silently break on you — and how long did it take to figure out the skill was the problem?
3. What would a "healthy skill" signal look like to you? Install count? Last updated date? Community endorsement?
4. If you've published a skill to ClawHub — how do you know it's still working for the people who installed it?
5. What's the one thing that would make you more confident installing a skill from a maintainer you've never heard of?
ClawHub is the distribution layer for the most powerful part of OpenClaw. Right now the community is building skills faster than it can verify them. That gap won't stay invisible forever.
*Signals drawn from openclaw/openclaw issue threads, 2,136 config-schema issues, 1,076 skill-related conversations, 209 ClawHub-specific mentions, and Driftnet's daily monitoring of 43,407 community interactions.*
*— Driftnet 🦞 | Community intelligence for the OpenClaw ecosystem | [driftnet.cafe](https://driftnet.cafe)* | open | null | false | 0 | [] | [] | 2026-03-24T03:00:46Z | 2026-03-24T03:00:46Z | null | NONE | null | 20260324T233649Z | 2026-03-24T23:36:49Z | ocdlmv1 | 266,897,137 | U_kgDOD-iG8Q | User | false |
openclaw/openclaw | 4,124,917,324 | I_kwDOQb6kR8713T5M | 53,335 | https://github.com/openclaw/openclaw/issues/53335 | https://api.github.com/repos/openclaw/openclaw/issues/53335 | [Bug]: /new command spawns subagent instead of resetting session | ### Bug type
Regression (worked before, now fails)
### Summary
`/new` does not reset the current session, so it cannot be used to reduce context token usage.
### Steps to reproduce
Same as [Bug#16732](https://github.com/openclaw/openclaw/issues/16732). By contrast, `/reset` does reduce context token usage.
### Expected behavior
`/new` resets the session and reduces context token usage.
### Actual behavior
`/new` does not reduce context token usage; instead, it creates a new subagent session.
### OpenClaw version
2026.3.23
### Operating system
macOS 26.3 (25D125)
### Install method
_No response_
### Model
gpt-5.4 openai-codex
### Provider / routing chain
openclaw -> codex -> openai
### Additional provider/model setup details
model: openai-codex(OAuth)
### Logs, screenshots, and evidence
```shell
```
### Impact and severity
- Users cannot clear session context as intended
- Unwanted subagent sessions accumulate
- Session management becomes confusing
### Additional information
_No response_ | open | null | false | 1 | [
"bug",
"regression"
] | [] | 2026-03-24T03:02:23Z | 2026-03-24T03:08:27Z | null | NONE | null | 20260324T233649Z | 2026-03-24T23:36:49Z | ChenBerlin | 34,022,126 | MDQ6VXNlcjM0MDIyMTI2 | User | false |
openclaw/openclaw | 4,124,958,670 | I_kwDOQb6kR8713d_O | 53,341 | https://github.com/openclaw/openclaw/issues/53341 | https://api.github.com/repos/openclaw/openclaw/issues/53341 | [Feature]: Opt-in blocking mode for message_received plugin hook | ### Summary
Allow `message_received` plugin hooks to opt into blocking mode so they can perform async checks and cancel inbound messages before agent dispatch — matching the capability that `message_sending` already provides for outbound messages.
### Problem to solve
The `message_received` hook is currently observer-only (fire-and-forget). Its handler signature returns `Promise<void> | void`, and the inbound dispatch path does not await it.
This means a plugin can observe an inbound message but cannot:
- perform an async remote policy or content-safety check,
- block agent execution before LLM tokens are consumed,
- send a direct rejection reply instead of invoking the agent.
This is a gap for security and policy plugins that need to gate inbound content through an external service before allowing it into the normal reply pipeline.
The outbound equivalent (`message_sending`) already supports this pattern — it runs sequentially, returns a result with `cancel` and `content`, and the dispatch path respects the result. The inbound path has no equivalent.
### Proposed solution
Add an **opt-in** `mode` option to `message_received` hook registration:
```ts
// Default: observer (unchanged behavior)
api.on("message_received", async (event, ctx) => {
// observe only — fire-and-forget, return value ignored
});
// Opt-in: blocking
api.on(
"message_received",
async (event, ctx) => {
const decision = await remotePolicyCheck(event);
if (!decision.allow) {
return {
cancel: true,
blockReason: "remote policy denied inbound message",
replyText: "Your message was blocked by policy.",
};
}
},
{ mode: "blocking" },
);
```
**Behavior:**
- Observer hooks (default) remain fire-and-forget — no behavior change for existing plugins.
- Blocking hooks are awaited sequentially before dispatch continues.
- If any blocking hook returns `cancel: true`, the normal agent dispatch is aborted.
- If `replyText` is provided, it is sent as a terminal reply to the originating channel.
- Multiple blocking hooks merge results with higher-priority values winning (matching `message_sending` semantics).
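The merge step could mirror the `message_sending` accumulator quoted in the Evidence section below; a sketch of those semantics for the inbound result type (field names are taken from this proposal):

```typescript
// Proposed inbound result type, merged with later (higher-priority) hooks
// overriding earlier values when set — mirroring message_sending semantics.
interface ReceivedResult {
  cancel?: boolean;
  blockReason?: string;
  replyText?: string;
}

function mergeResults(results: ReceivedResult[]): ReceivedResult {
  return results.reduce<ReceivedResult>(
    (acc, next) => ({
      cancel: next.cancel ?? acc.cancel,
      blockReason: next.blockReason ?? acc.blockReason,
      replyText: next.replyText ?? acc.replyText,
    }),
    {},
  );
}
```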
### Alternatives considered
- **`before_message_write`**: Synchronous by design, too late in the pipeline (after LLM execution), and does not support async work.
- **New `before_dispatch` hook**: More disruptive, larger API surface, and unnecessary when `message_received` already exists and just needs the blocking opt-in.
- **Making all `message_received` handlers blocking**: Would be a behavior break for existing plugins (latency changes, ordering changes, side-effect timing changes).
### Impact
- **Affected:** Plugin developers building security, compliance, or content-moderation plugins that need to gate inbound messages.
- **Severity:** Blocks a category of plugin use cases entirely — there is currently no hook that supports async inbound gating before agent dispatch.
- **Frequency:** Every inbound message for plugins that need this capability.
- **Consequence:** Without this, plugins must either accept that blocked messages still consume LLM tokens and agent cycles, or resort to fragile workarounds outside the plugin system.
### Evidence/examples
The `message_sending` hook already demonstrates the desired pattern:
```ts
// hooks.ts — message_sending runs sequentially and returns a merged result
async function runMessageSending(event, ctx) {
return runModifyingHook("message_sending", event, ctx, (acc, next) => ({
content: next.content ?? acc?.content,
cancel: next.cancel ?? acc?.cancel,
}));
}
```
The proposed `message_received` blocking mode mirrors this exactly, just for the inbound path.
### Additional information
- Fully backward compatible — existing plugins with no `mode` option continue to work as observer-only with no behavior change.
- Minimal API surface increase: one new option (`mode: "blocking"`), one new result type (`PluginHookMessageReceivedResult`).
- Reuses existing hook runner patterns (`runModifyingHooksList` for blocking, `runVoidHooksList` for observers). | open | null | false | 0 | [] | [] | 2026-03-24T03:16:33Z | 2026-03-24T03:16:33Z | null | NONE | null | 20260324T233649Z | 2026-03-24T23:36:49Z | nickfujita | 6,342,442 | MDQ6VXNlcjYzNDI0NDI= | User | false |
openclaw/openclaw | 4,124,970,802 | I_kwDOQb6kR8713g8y | 53,345 | https://github.com/openclaw/openclaw/issues/53345 | https://api.github.com/repos/openclaw/openclaw/issues/53345 | [Feature]: Add Korean (ko) language support to Control UI and AI agent | ### Summary
Add full Korean (ko) locale support to the OpenClaw Control UI and enforce
Korean language consistency in the AI agent's response behavior.
### Problem to solve
Korean-speaking operators currently have no localized UI experience in the
OpenClaw Control panel. All interface labels, navigation elements, and status
messages are rendered in English only, forcing Korean users to work in a
non-native language. Separately, the AI agent does not persistently respond in
Korean across multi-step tool outputs and technical explanations, causing
language inconsistency mid-conversation and degrading user trust.
### Proposed solution
1. **Control UI i18n**
- Implement `ui/src/i18n/locales/ko.ts` with complete translations for all
UI components.
- Register the Korean locale in `ui/src/i18n/lib/registry.ts` and update
`ui/src/i18n/lib/types.ts`.
- Add Korean display names (`한국어`) to all existing locale files
(`en.ts`, `zh-CN.ts`, etc.).
2. **AI Agent Language Consistency**
- Update `src/agents/system-prompt.ts` to include an explicit instruction
block requiring the agent to respond in the user's detected primary
language (e.g., Korean) for all outputs, including tool call results and
technical explanations.
- Add a `Final Instruction` section to the system prompt reinforcing
language persistence across the full conversation turn.
- Enable `search_lang: 'ko'` in localized search parameters to improve
retrieval quality for Korean queries.
3. **Test Coverage**
- Add unit tests in `ui/src/i18n/test/translate.test.ts` covering Korean
locale completeness and missing-key fallback behavior.
- Update `src/i18n/registry.test.ts` to verify Korean locale registration
and navigation fallbacks.
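A minimal sketch of what `ui/src/i18n/locales/ko.ts` from step 1 might look like, using `zh-CN.ts` as the structural template (the keys here are illustrative, not the real locale schema):

```typescript
// Hypothetical shape for ui/src/i18n/locales/ko.ts. Actual keys must match
// the existing en.ts / zh-CN.ts locale files key-for-key so the
// missing-key fallback tests in translate.test.ts can verify completeness.
export const ko = {
  displayName: "한국어",
  nav: {
    dashboard: "대시보드",
    sessions: "세션",
    settings: "설정",
  },
} as const;
```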
### Alternatives considered
- **Manual language hints in per-user prompts**: Inconsistent across operator
configurations and cannot be enforced at the platform level.
- **Browser `Accept-Language` header detection only**: Does not cover agent
response behavior and is unreliable in proxied or API-driven deployments.
### Impact
| Dimension | Detail |
|------------------|-------------------------------------------------------------|
| Affected users | All Korean-speaking operators and end-users of OpenClaw |
| Affected systems | Control UI, AI agent response pipeline, i18n registry |
| Severity | Medium — usability blocker for Korean locale deployments |
| Frequency | Every session for Korean users (100% of their interactions) |
| Consequence | Reduced adoption in Korean-speaking teams; agent responses switching mid-conversation to English cause confusion and require manual correction |
### Evidence/examples
- Current behavior: UI renders entirely in English regardless of browser
locale; agent reverts to English in tool-use steps even when the user writes
in Korean.
- Comparable prior art: `zh-CN.ts` locale implementation in the same codebase
serves as the direct template for this feature.
- Internal reports from Korean-speaking operators indicate frequent complaints
about mid-conversation language switching.
### Additional information
- Implementation must remain fully backward-compatible with all existing locale
keys and configuration structure.
- No sensitive credentials or private configuration data will be introduced.
- This issue is tracked by the corresponding PR:
`feat: add Korean language support to Control UI and AI agent`. | open | null | false | 0 | [
"enhancement"
] | [] | 2026-03-24T03:20:19Z | 2026-03-24T03:20:19Z | null | NONE | null | 20260324T233649Z | 2026-03-24T23:36:49Z | leemgs | 82,404 | MDQ6VXNlcjgyNDA0 | User | false |
openclaw/openclaw | 4,124,502,806 | I_kwDOQb6kR8711usW | 53,250 | https://github.com/openclaw/openclaw/issues/53250 | https://api.github.com/repos/openclaw/openclaw/issues/53250 | [UX] Exec approval timeout message should include Control UI link and setup hints | When an exec command times out waiting for approval, the error message is generic:
```
Exec approval is required, but chat exec approvals are not enabled on Discord.
Approve it from the Web UI or terminal UI, or from Discord or Telegram if those approval clients are enabled.
```
This tells the user *that* approval is needed but gives no concrete next steps:
- No link to the Control UI
- No hint where the "terminal UI" is
- No way to know if Discord approvals are actually enabled
- No command to list pending approvals
As a result, operators get stuck in loops of retrying and timing out, especially during initial setup.
**Proposed improvement:**
Include specific, actionable guidance in the error message:
- The Control UI URL (if discoverable from gateway, e.g., `http://localhost:8080`) and how to get the token (`openclaw dashboard --no-open`)
- Reminder to check `openclaw.json` → `channels.discord.execApprovals.enabled`
- Suggest `openclaw exec-approvals list` if such CLI exists
Example:
```
Exec approval required.
→ Open Control UI: http://localhost:8080 (token from `openclaw dashboard --no-open`)
→ Or enable Discord approvals: set channels.discord.execApprovals.enabled=true and restart.
→ Or check pending: openclaw exec-approvals list
```
This small UX improvement reduces onboarding friction dramatically.
**I can submit a PR for this if desired.** | open | null | false | 2 | [] | [] | 2026-03-24T00:35:37Z | 2026-03-24T03:22:32Z | null | NONE | null | 20260324T233649Z | 2026-03-24T23:36:49Z | yurtzy | 263,258,539 | U_kgDOD7EBqw | User | false |
openclaw/openclaw | 4,125,024,838 | I_kwDOQb6kR8713uJG | 53,357 | https://github.com/openclaw/openclaw/issues/53357 | https://api.github.com/repos/openclaw/openclaw/issues/53357 | Model allowlist/runtime mismatch: configured Gemini 3.1 rejected as not allowed, new sessions fallback to 2.5 | ## Summary
`google-karl/gemini-3.1-pro-preview` appears configured/allowed in the agent config, but new sessions still fall back to `google-karl/gemini-2.5-pro`, and model override checks can return `Model "google-karl/gemini-3.1-pro-preview" is not allowed`.
This looks like an allowlist/runtime resolution mismatch between persisted config and model validation at session/runtime level.
## Observed behavior
- Agent config shows model updated to `google-karl/gemini-3.1-pro-preview`.
- Existing session model can be edited in `sessions.json` and appears updated.
- But opening a **new** session can still initialize with `google-karl/gemini-2.5-pro`.
- The runtime override/validation path may reject 3.1 with `is not allowed` despite the model being present in config.
## Expected behavior
If a model is configured and listed as allowed for the agent/provider, both:
1. New sessions should initialize with that model.
2. Runtime override validation should accept it consistently.
## Reproduction (high-level)
1. Configure `karl` agent default model to `google-karl/gemini-3.1-pro-preview`.
2. Ensure provider and models manifest include that model as allowed.
3. Start a fresh channel/session for the same agent.
4. Observe session model in metadata/status.
## Actual result
Intermittent fallback to `google-karl/gemini-2.5-pro` on new sessions and/or rejection of 3.1 as not allowed.
## Impact
- Non-deterministic model routing for production agent channels.
- Requires manual session-level edits and repeated verification.
- Can silently degrade quality/capability by falling back to older model.
## Notes
Related to runtime/model allowlist resolution, not just one-time config persistence.
| open | null | false | 0 | [] | [] | 2026-03-24T03:37:58Z | 2026-03-24T03:37:58Z | null | NONE | null | 20260324T233649Z | 2026-03-24T23:36:49Z | Mdx2025 | 181,967,107 | U_kgDOCtiZAw | User | false |
openclaw/openclaw | 4,125,025,734 | I_kwDOQb6kR8713uXG | 53,358 | https://github.com/openclaw/openclaw/issues/53358 | https://api.github.com/repos/openclaw/openclaw/issues/53358 | [Bug]: clawdbot memory index hangs after OpenAI batch completion | ### Bug type
Behavior bug (incorrect output/state without crash)
### Summary
`clawdbot memory index --agent main` hangs indefinitely with zero output, even though the OpenAI batch jobs completed successfully on their side.
### Steps to reproduce
1. Configure memory search with OpenAI embeddings + batch mode enabled
2. Run `clawdbot memory index --agent main --verbose`
3. Wait for OpenAI batch jobs to be submitted
4. Verify batches completed successfully in the OpenAI console
5. Run `clawdbot memory index --agent main` again to pull results
6. Command hangs with zero output
**Expected Behavior**
After batches complete on OpenAI's side, `clawdbot memory index` should:
- Fetch batch results from OpenAI
- Import embeddings into SQLite store
- Update index status from "Dirty: yes" to "Dirty: no"
- Complete and exit

**Actual Behavior**
- Command hangs indefinitely
- No output (not even with --verbose)
- No error messages
- Process must be killed (SIGKILL)
- Index remains "Dirty: yes" with 4/24 files indexed
**OpenAI Batch Details**
Batch jobs that completed successfully:
- batch_69c1fe579fe081909272e752a49476f6 - Status: completed
- batch_69c1fe57e7088190bf85503eed1868a5 - Status: completed

Both batches verified as completed in OpenAI console at https://platform.openai.com/batches

**Memory Status**
Memory Search (main)
Provider: openai (requested: auto)
Model: text-embedding-3-small
Sources: memory
Indexed: 4/24 files · 6 chunks
Dirty: yes
Store: ~/.clawdbot/memory/main.sqlite
Workspace: ~/clawd
By source:
memory · 4/24 files · 6 chunks
Vector: ready
Vector dims: 3072
FTS: ready
Embedding cache: enabled (6 entries)
Batch: enabled (failures 0/2)

**Attempted Fixes**
- [x] Killed stale processes
- [x] Restarted gateway (clawdbot gateway restart)
- [x] Attempted reindex multiple times
- [ ] All attempts hang with same behavior
**Additional Context**
- Hybrid search was enabled via config.patch just before first index attempt
- Gateway restart did not resolve the issue
- No filesystem locks detected
- SQLite store exists and is accessible

**Workaround**
File-based memory (core-memory.md + daily logs) continues to work perfectly. Vector search is not blocking core functionality.

**Suspected Root Cause**
Likely stuck in batch result polling/fetching logic:
- Silent hang suggests blocking I/O or infinite loop
- No error output suggests no exception handling at hang point
- Behavior persists across gateway restarts
**Request**
Please investigate batch result retrieval logic in memory indexing. The hang occurs after batch submission succeeds and batches complete on OpenAI's side.
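One mitigation for the silent hang would be to bound the result-retrieval poll with a deadline so it surfaces an error instead of blocking forever. This is a hedged sketch; `fetchBatch` is a stand-in, not clawdbot's actual retrieval API:

```javascript
// Sketch: timeout-guarded batch polling. `fetchBatch` is assumed to return
// an object with a `status` field ("in_progress" | "completed" | "failed").
async function pollBatch(fetchBatch, { timeoutMs = 60000, intervalMs = 2000 } = {}) {
  const deadline = Date.now() + timeoutMs;
  while (Date.now() < deadline) {
    const batch = await fetchBatch();
    if (batch.status === "completed") return batch;
    if (batch.status === "failed") throw new Error("batch failed");
    await new Promise((r) => setTimeout(r, intervalMs)); // wait before retrying
  }
  // Surface the stall as an error instead of hanging with zero output.
  throw new Error(`batch poll timed out after ${timeoutMs}ms`);
}
```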
**Environment**
- Clawdbot version: 2026.1.24-3
- OS: macOS (Darwin 25.2.0 arm64)
- Node: v25.6.1
- Memory plugin: memory-lancedb
- Embedding provider: OpenAI (text-embedding-3-small)
- Batch mode: Enabled
### Expected behavior
Expect the memory index to continue to completion after batches finished on OpenAI side.
### Actual behavior
Memory indexing hangs.
### OpenClaw version
2026.1.24-3
### Operating system
macOS (Darwin 25.2.0 arm64)
### Install method
_No response_
### Model
OpenAI (text-embedding-3-small)
### Provider / routing chain
openclaw -> OpenAI
### Additional provider/model setup details
_No response_
### Logs, screenshots, and evidence
```shell
```
### Impact and severity
_No response_
### Additional information
_No response_ | open | null | false | 0 | [
"bug",
"bug:behavior"
] | [] | 2026-03-24T03:38:21Z | 2026-03-24T03:38:29Z | null | NONE | null | 20260324T233649Z | 2026-03-24T23:36:49Z | vmansoori | 179,194,327 | U_kgDOCq5J1w | User | false |
openclaw/openclaw | 4,125,153,515 | I_kwDOQb6kR8714Njr | 53,378 | https://github.com/openclaw/openclaw/issues/53378 | https://api.github.com/repos/openclaw/openclaw/issues/53378 | Control UI: Show channel icon/name in session list | ## Problem
In the Control UI sessions list, sessions from different channels (Telegram, Feishu, etc.) only show truncated session keys like `ou_cbe…` or `7633…`, making it hard to tell which channel a session belongs to.
## Suggestion
- Display the channel name/icon (e.g. 📱 Telegram, 🔵 Feishu) next to each session entry
- Optionally allow users to set custom labels/names for sessions
## Current
```
agent:main:feishu:direct:ou_cbe…
agent:main:telegram:direct:7633…
```
## Expected
```
🔵 Feishu (ou_cbe…)
📱 Telegram (7633…)
``` | open | null | false | 0 | [] | [] | 2026-03-24T04:23:12Z | 2026-03-24T04:23:12Z | null | NONE | null | 20260324T233649Z | 2026-03-24T23:36:49Z | gelibing8-rgb | 269,867,277 | U_kgDOEBXZDQ | User | false |
openclaw/openclaw | 4,125,169,018 | I_kwDOQb6kR8714RV6 | 53,379 | https://github.com/openclaw/openclaw/issues/53379 | https://api.github.com/repos/openclaw/openclaw/issues/53379 | Cron delivery: require to field when multiple channels configured | When creating cron jobs with that have , the tool allows creating jobs without a field. This works fine when only one channel is configured, but fails at runtime when multiple channels (e.g., WhatsApp + Telegram) are configured.
**Steps to reproduce:**
1. Have multiple channels configured (WhatsApp + Telegram)
2. Create a cron job with announce delivery but no `to` field
3. Job runs and fails: "Channel is required when multiple channels are configured"
**Expected:** Tool validation should require `to` at job creation time, or default to a sensible channel.
**Workaround (documented):** Always specify `to` when creating jobs with announce mode.
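For illustration, a job payload that always carries an explicit target might look like this (field names follow the issue's wording; the surrounding schema is an assumption):

```json
{
  "schedule": "0 9 * * *",
  "delivery": "announce",
  "to": { "channel": "telegram", "chatId": "123456789" },
  "message": "Daily standup reminder"
}
```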
**Labels:** bug, good-first-issue | open | null | false | 1 | [] | [] | 2026-03-24T04:27:07Z | 2026-03-24T04:27:18Z | null | NONE | null | 20260324T233649Z | 2026-03-24T23:36:49Z | BruceHowells | 13,401,442 | MDQ6VXNlcjEzNDAxNDQy | User | false |
openclaw/openclaw | 4,125,173,975 | I_kwDOQb6kR8714SjX | 53,380 | https://github.com/openclaw/openclaw/issues/53380 | https://api.github.com/repos/openclaw/openclaw/issues/53380 | Support custom avatar/emoji for agents and users in Control UI | ## Problem
Control UI uses default built-in icons for agents and users with no way to customize them. Agents have identity configs (name, emoji) but the UI doesn't read or display them.
## Suggestion
- Allow `agent.identity.emoji` (already in config) to render as avatar in the session list and chat bubbles
- Add a `user.avatar` or `user.emoji` config option for the human user
- Optionally support custom image URLs as avatars
## Example config
```json
{
"agents": [{
"id": "main",
"identity": {
"name": "小园",
"emoji": "🍃"
}
}]
}
```
This would show 🍃 as the agent avatar throughout the Control UI. | open | null | false | 0 | [] | [] | 2026-03-24T04:28:21Z | 2026-03-24T04:28:21Z | null | NONE | null | 20260324T233649Z | 2026-03-24T23:36:49Z | gelibing8-rgb | 269,867,277 | U_kgDOEBXZDQ | User | false |
openclaw/openclaw | 4,125,028,461 | I_kwDOQb6kR8713vBt | 53,359 | https://github.com/openclaw/openclaw/issues/53359 | https://api.github.com/repos/openclaw/openclaw/issues/53359 | normalizeModelCompat forces supportsUsageInStreaming off, preventing token usage tracking for third-party OpenAI-compatible providers | ## Problem
`normalizeModelCompat()` in `src/agents/model-compat.ts` unconditionally forces `supportsUsageInStreaming: false` for **all** non-`api.openai.com` endpoints. This prevents `stream_options: { include_usage: true }` from being sent in the API request, which means OpenAI-compatible providers that follow the standard protocol cannot return token usage data in streaming responses.
Even if users explicitly set `supportsUsageInStreaming: true` in their model `compat` configuration, it gets overridden (confirmed by the test at `model-compat.test.ts:265`).
## Root Cause
The override chain:
1. `normalizeModelCompat()` checks `isOpenAINativeEndpoint(baseUrl)` — only `api.openai.com` returns `true`
2. For all other endpoints, it forces `supportsUsageInStreaming: false` (line 76)
3. In `openai-completions.js` `buildParams()`, `stream_options: { include_usage: true }` is only added when `supportsUsageInStreaming !== false`
4. Without `stream_options`, standard-compliant providers don't include usage in streaming chunks
## Observed Behavior
Two providers configured with `api: "openai-completions"`, both non-OpenAI endpoints:
- **Provider A** (DeepSeek via `api.lkeap.cloud.tencent.com`): Token usage **is recorded** correctly
- **Provider B** (third-party proxy via `api.chatanywhere.tech`): Token usage shows **all zeros**
### Why one works and the other doesn't
**Provider A (DeepSeek)** — non-standard behavior: includes `usage` on **every** streaming chunk by default, even without `stream_options.include_usage`:
```
data: {"id":"...","choices":[{"index":0,"delta":{"content":"你好"}}],"usage":{"prompt_tokens":11,"completion_tokens":1,"total_tokens":12,...}}
data: {"id":"...","choices":[{"index":0,"delta":{"content":"!"}}],"usage":{"prompt_tokens":11,"completion_tokens":2,"total_tokens":13,...}}
...
data: {"id":"...","choices":[{"index":0,"delta":{"content":""},"finish_reason":"stop"}],"usage":{"prompt_tokens":11,"completion_tokens":31,"total_tokens":42,...}}
data: [DONE]
```
**Provider B (third-party proxy)** — standard OpenAI behavior: only includes `usage` on a final chunk when `stream_options.include_usage` is explicitly requested. Without it, no usage data is returned:
```
data: {"id":"...","choices":[{"index":0,"delta":{"role":"assistant","content":""},"finish_reason":""}],...}
data: {"id":"...","choices":[{"index":0,"delta":{"content":"你好!很高兴见"}}],...}
data: {"id":"...","choices":[{"index":0,"delta":{"content":"到你!"}}],...}
...
data: {"id":"...","choices":[{"index":0,"delta":{"content":""},"finish_reason":"stop"}],...}
data: [DONE]
```
(No `usage` field on any chunk because `stream_options` was not sent.)
When `stream_options: { include_usage: true }` **is** explicitly sent (tested via curl), Provider B correctly returns usage on the final chunk:
```
data: {"id":"...","choices":[],"usage":{"prompt_tokens":17,"completion_tokens":32,"total_tokens":49,"completion_tokens_details":{"audio_tokens":0,"reasoning_tokens":0},"prompt_tokens_details":{"audio_tokens":0,"cached_tokens":0}},...}
data: [DONE]
```
### OpenClaw session log comparison
Provider B (third-party proxy, claude-opus-4-6) — **all zeros**:
```json
{"type":"message","message":{"api":"openai-completions","provider":"chatanywhere","model":"claude-opus-4-6","usage":{"input":0,"output":0,"cacheRead":0,"cacheWrite":0,"totalTokens":0,"cost":{"input":0,"output":0,"cacheRead":0,"cacheWrite":0,"total":0}},"stopReason":"stop"}}
```
Provider A (deepseek, deepseek-v3.2) — **correct values**:
```json
{"type":"message","message":{"api":"openai-completions","provider":"deepseek","model":"deepseek-v3.2","usage":{"input":15754,"output":80,"cacheRead":0,"cacheWrite":0,"totalTokens":15834,"cost":{"input":0,"output":0,"cacheRead":0,"cacheWrite":0,"total":0}},"stopReason":"stop"}}
```
## Note: The streaming parser already handles usage-only chunks correctly
The code comment justifying the override says:
> Many OpenAI-compatible backends ... emit usage-only chunks that break strict parsers expecting choices[0]
But the streaming parser in `openai-completions.js` already handles this correctly:
```js
for await (const chunk of openaiStream) {
if (chunk.usage) {
// ... extract and set usage
}
const choice = chunk.choices[0];
if (!choice) continue; // safely skips usage-only chunks with empty choices
// ... process content
}
```
The `if (!choice) continue` means usage-only chunks (with `choices: []`) are handled safely. The safety concern in the comment is already addressed by the code.
## Suggested Fix
Instead of unconditionally overriding `supportsUsageInStreaming`, respect the user's explicit configuration. For example, change the forced override to only apply when the user hasn't explicitly set the value:
```typescript
// In normalizeModelCompat():
return {
...model,
compat: compat
? {
...compat,
supportsDeveloperRole: false,
supportsUsageInStreaming: compat.supportsUsageInStreaming ?? false,
}
: { supportsDeveloperRole: false, supportsUsageInStreaming: false },
} as typeof model;
```
This way:
- Default behavior remains `false` for safety (no change for existing users)
- Users who explicitly configure `supportsUsageInStreaming: true` for their providers can opt-in to token usage tracking
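For example, an explicit opt-in for the proxy provider could look like the following (the exact config location for `compat` is assumed from the issue's description, not verified):

```json
{
  "models": [{
    "id": "claude-opus-4-6",
    "provider": "chatanywhere",
    "api": "openai-completions",
    "compat": { "supportsUsageInStreaming": true }
  }]
}
```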
| closed | completed | false | 1 | [] | [] | 2026-03-24T03:39:31Z | 2026-03-24T04:35:58Z | 2026-03-24T04:35:58Z | NONE | null | 20260324T233649Z | 2026-03-24T23:36:49Z | dwx1364183184 | 43,581,531 | MDQ6VXNlcjQzNTgxNTMx | User | false |
openclaw/openclaw | 4,125,115,177 | I_kwDOQb6kR8714EMp | 53,375 | https://github.com/openclaw/openclaw/issues/53375 | https://api.github.com/repos/openclaw/openclaw/issues/53375 | WebChat/TUI should resume prior session after reconnect instead of silently starting a new one | ### Problem
Local WebChat/TUI sessions seem too tightly bound to the current WebSocket / attached connection.
If my Mac sleeps, the frontend disconnects, or the page reconnects, OpenClaw often creates a new session instead of resuming the previous one. This makes local conversation continuity unreliable.
### Why it matters
For laptop usage, sleep/wake is normal. A brief transport interruption should not silently create a fresh conversation and discard context.
This affects many users, not just edge cases:
- **Laptop sleep/wake** — lid close, walk away, come back → new session
- **Gateway restart** — config change, update, `openclaw gateway restart` → new session
- **Power outage / crash recovery** — Gateway restarts → session gone
- **Desktop users who shut down at night** — next morning = fresh start every time
- **Web UI page refresh** — reconnect → new session
In all these cases, the **service** recovers fine, but the **session** does not.
### Core argument
I am not arguing that OpenClaw should not run as a long-lived service. For agents, bots, and automation, a persistent gateway is a reasonable expectation.
The issue is: **session continuity should not require uninterrupted runtime.**
> OpenClaw can run as a persistent service — but local users should not have to depend on 24/7 uninterrupted uptime just to preserve conversation continuity.
The questions worth considering:
1. Should there be a continuity / resume mechanism for local users after a brief interruption?
2. Should session identity be more persistent than the WebSocket connection?
3. Should the system clearly tell the user: "you are resuming your previous session" vs "you are starting a new one"?
The gap is not in service availability — it is in session identity persistence. A stable session identity, tied to the user rather than the live socket, would make brief interruptions transparent.
### Reproduction
1. Run OpenClaw locally on macOS
2. Keep Gateway running as LaunchAgent
3. Start a conversation in WebChat or TUI
4. Put the Mac to sleep (or trigger a frontend/WebSocket disconnect, refresh, or reconnect)
5. Resume and reconnect
6. Observe that a new session is often created instead of continuing the previous one
### Current behavior
- WebChat/TUI session continuity appears to depend heavily on the current WebSocket / attached connection
- After disconnect/reconnect, the session may be discarded or replaced
- Reconnected users are effectively treated as a new conversation
### Expected behavior
A short disconnect, page refresh, frontend reconnect, or sleep/wake cycle should not create a new conversation by default.
Instead, WebChat/TUI should try to:
- Resume the prior active session for that local user/client
- Preserve conversation continuity across brief transport interruptions
- Separate session identity from transient WebSocket connection state
### Why Telegram feels better
Telegram seems more robust because the session key is tied to a stable identity (chat ID / user ID), not just a live socket.
That makes it much more resilient to temporary gateway pauses or reconnects.
It would be great if local WebChat/TUI could offer a similar continuity model.
### Suggested improvement directions
1. Bind WebChat/TUI to a stable local client identity instead of a single live WebSocket connection
2. On reconnect, attempt to resume the most recent active session rather than silently creating a new one
3. Add an explicit "resume previous session" behavior for local clients
4. Persist session identity across sleep/wake, refresh, and brief reconnects
5. Make the UI clearly indicate whether the user is rejoining an existing session or starting a new one
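The resume-or-create decision in points 1-3 can be sketched as a small pure function (all names illustrative, not OpenClaw internals):

```javascript
// Sketch: resume the prior session for a stable client id, else create one.
// `store` maps clientId -> session; `createSession` is a stand-in factory.
function resolveSession(store, clientId, createSession) {
  const existing = store.get(clientId);
  if (existing && !existing.closed) {
    return { session: existing, resumed: true }; // brief disconnects land here
  }
  const fresh = createSession(clientId);
  store.set(clientId, fresh);
  return { session: fresh, resumed: false }; // first connect or expired session
}
```

The `resumed` flag is what would let the UI say "resuming your previous session" rather than silently starting over.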
### Environment
- Local macOS usage
- Gateway running as LaunchAgent
- WebChat / TUI frontend
- Issue is especially visible after sleep/wake or reconnect scenarios | open | null | false | 2 | [] | [] | 2026-03-24T04:12:36Z | 2026-03-24T04:41:24Z | null | NONE | null | 20260324T233649Z | 2026-03-24T23:36:49Z | nanami-he | 270,413,913 | U_kgDOEB4wWQ | User | false |
openclaw/openclaw | 4,125,046,026 | I_kwDOQb6kR8713zUK | 53,368 | https://github.com/openclaw/openclaw/issues/53368 | https://api.github.com/repos/openclaw/openclaw/issues/53368 | Dashboard: Exec approval command content overflows UI when command is too long | ## Bug Description
When the exec approval prompt contains a long command (e.g. a large JSON allowlist payload), the command content overflows the approval dialog/card in the Dashboard UI, making it unreadable and breaking the layout.
## Steps to Reproduce
1. Trigger an exec approval request with a very long command or payload (e.g. a `system.run` call with a large serialized JSON argument like an extensive allowlist config)
2. Open the Dashboard UI to view the pending approval
3. Observe that the command content overflows the container
## Expected Behavior
The approval card/dialog should handle long command content gracefully:
- Use horizontal scrolling (`overflow-x: auto`) or word wrapping for the command block
- Optionally truncate with an expandable "Show more" toggle
- The layout should not break regardless of content length
## Actual Behavior
The raw command text overflows the approval UI container, breaking the page layout. The content extends beyond the card boundaries and overlaps with other UI elements.
## Screenshot
> *(See attached screenshot — approval dialog with long JSON allowlist content overflowing the card)*
<img width="2887" height="1506" alt="Image" src="https://github.com/user-attachments/assets/fe760a01-98d0-40fc-b798-f0b640052477" />
## Environment
- OpenClaw Dashboard (web UI)
- Approval triggered via Channel → Gateway → Node flow
## Suggested Fix
Add CSS overflow handling to the command content container in the approval card component, for example:
```css
.approval-command-content {
overflow-x: auto;
white-space: pre-wrap;
word-break: break-all;
max-height: 400px;
overflow-y: auto;
}
```
| open | null | false | 1 | [] | [] | 2026-03-24T03:46:56Z | 2026-03-24T04:51:55Z | null | NONE | null | 20260324T233649Z | 2026-03-24T23:36:49Z | Ziy1-Tan | 49,604,965 | MDQ6VXNlcjQ5NjA0OTY1 | User | false |
openclaw/openclaw | 4,125,264,940 | I_kwDOQb6kR8714ows | 53,392 | https://github.com/openclaw/openclaw/issues/53392 | https://api.github.com/repos/openclaw/openclaw/issues/53392 | [Enhancement] AI should auto-detect QClaw env for cron commands | ## Problem
When user asks to configure cron jobs (e.g. '?????ron'), the AI does not automatically use QClaw's built-in cron tools. It tries alternative approaches instead (spawning subagents, using generic subagent sessions, reading generic docs). This adds unnecessary friction.
## Root Cause Analysis
1. **Multiple conflicting cron mechanisms exist**:
   - Standard openclaw cron CLI (system-wide)
   - QClaw's bundled openclaw-win.cmd wrapper (QClaw-specific)
   - Heartbeat mechanism (workspace-based)
   - Generic subagent spawning
2. **Skill is not auto-loaded**: The qclaw-openclaw skill exists at config/skills/qclaw-openclaw/SKILL.md and contains the correct way to manage cron in QClaw, but it has a generic description that does not match 'cron' or 'schedule' keywords, so it never auto-loads.
3. **No runtime detection**: When runtime=agent=main and os=Windows_NT (QClaw on Windows), the AI should know to use the QClaw-specific openclaw path, but there is no such detection logic.
4. **The AI reads the wrong docs first**: It reads generic docs/automation/cron-jobs.md from the openclaw node_modules instead of the QClaw-specific skill designed for this exact environment.
## Expected Behavior
When user says '?????ron', the AI should:
1. Detect it is running in QClaw (Windows_NT + QClaw install path)
2. Auto-load the qclaw-openclaw skill to understand the correct approach
3. Use the QClaw wrapper or direct gateway cron tools
4. NOT spawn generic subagents for something that should be a direct tool call
## Suggested Fixes
### Option 1: Update skill description (low effort, high impact)
Change qclaw-openclaw/SKILL.md description to include cron/schedule keywords so it auto-loads when needed.
### Option 2: Add runtime-aware context trigger
Add to AGENTS.md or SOUL.md: When running on QClaw (runtime=agent=main, os=Windows_NT) AND task involves cron/schedule/timer, always read qclaw-openclaw skill first.
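Option 2's trigger could be a simple predicate like the following (environment keys, path matching, and keywords are illustrative assumptions):

```javascript
// Sketch: decide whether to load the qclaw-openclaw skill before acting.
function shouldLoadQclawSkill(env, task) {
  const isQclaw = env.os === "Windows_NT" && /qclaw/i.test(env.installPath || "");
  const isCronTask = /(cron|schedule|timer|定时)/i.test(task);
  return isQclaw && isCronTask; // only gate cron-like tasks in a QClaw install
}
```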
## Impact
- Users currently have to say 'use QClaw cron' every time
- AI wastes tokens on failed subagent attempts
- Natural language workflow is broken for a common task
## Environment
- Runtime: agent=main | os: Windows_NT 10.0.26100
- QClaw install: D:\????????????\Qclaw
- Channel: wecom | open | null | false | 0 | [] | [] | 2026-03-24T04:57:23Z | 2026-03-24T04:57:23Z | null | NONE | null | 20260324T233649Z | 2026-03-24T23:36:49Z | kelvinchen1119 | 107,486,531 | U_kgDOBmgdQw | User | false |
openclaw/openclaw | 4,125,254,822 | I_kwDOQb6kR8714mSm | 53,390 | https://github.com/openclaw/openclaw/issues/53390 | https://api.github.com/repos/openclaw/openclaw/issues/53390 | [BUG]: Browser tool: `snapshot` returns page content before scroll, ignoring viewport position | ### Bug type
Behavior bug (incorrect output/state without crash)
### Summary
The `browser snapshot` action returns content from the top of the page regardless of scroll position. After scrolling with `act` (JS scrollTo, press End key, or evaluate), the snapshot still shows the initial viewport content.
### Steps to reproduce
1. Open OpenClaw with default browser settings.
2. Use `browser action="open" url="https://github.com/openclaw/openclaw/issues"` to open a scrollable page.
3. Verify snapshot shows top content (pinned issues).
4. Execute scroll: `browser action="act" kind="evaluate" fn="() => { window.scrollTo(0, document.body.scrollHeight); return 'scrolled to: ' + document.body.scrollHeight; }"` — returns "scrolled to: 2349px" confirming scroll executed.
5. Take another snapshot: `browser action="snapshot"` with same targetId.
6. Observe that snapshot STILL shows top-of-page content (pinned issues), not the footer/pagination that should be visible after scrolling 2349px.
### Expected behavior
The `snapshot` action should return the currently visible viewport content after scrolling, including elements that are now in view (pagination footer, older issues list).
### Actual behavior
The `snapshot` action returns content from the initial/top portion of the page, as if no scroll had occurred. Observed on GitHub Issues page: after confirmed scroll to 2349px (via JS return value), snapshot still shows pinned issues and top of list — not pagination/footer.
### OpenClaw version
2026.3.24
### Operating system
Linux Debian 6.12.41+deb13-amd64 (x64)
### Install method
npm global
### Model
opencode-go/kimi-k2.5
### Provider / routing chain
openclaw browser tool -> built-in Chromium (isolated profile)
### Additional provider/model setup details
_No response_
### Logs, screenshots, and evidence
```shell
```
### Impact and severity
_No response_
### Additional information
**Workarounds attempted:**
- Adding `delayMs:1000` to snapshot → no effect
- Using `press` with the End key → executes but snapshot unchanged
- Using `act kind="evaluate"` with `window.scrollTo()` → JS confirms scroll but snapshot shows top content
**Related issues:**
- #24951 - Browser act timeouts (mentions snapshot but different issue)
- Most scroll-related issues are about Control UI/TUI, not browser tool
**Possible root causes:**
1. Snapshot implementation may use fixed viewport capture starting from (0,0)
2. Chromium CDP may not sync scroll position before DOM/accessibility capture
3. Playwright integration (if used) may need explicit viewport synchronization
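Root cause 1 can be illustrated with the viewport math a scroll-aware capture would need; a capture fixed at (0,0) is equivalent to always passing `scrollY = 0` here (coordinate convention assumed: `rect.top` is page-absolute):

```javascript
// Sketch: is a page-absolute rect visible in the scrolled viewport?
function isInViewport(rect, scrollY, viewportHeight) {
  const top = rect.top - scrollY; // translate page coords to viewport coords
  return top < viewportHeight && top + rect.height > 0;
}
```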
**Impact:** Cannot interact with or verify content below the fold on long pages. Affects automation workflows requiring scroll-based navigation.
**Note:** This issue was submitted by LLM (opencode-go/kimi-k2). I (the user) have confirmed some basic information, but I cannot confirm the specific operation process or the difference in perspective before and after scrolling by LLM; if you are sure this issue does not exist, feel free to close it. | open | null | false | 0 | [
"bug",
"bug:behavior"
] | [] | 2026-03-24T04:54:23Z | 2026-03-24T05:01:43Z | null | NONE | null | 20260324T233649Z | 2026-03-24T23:36:49Z | dipping5115 | 115,056,406 | U_kgDOBtufFg | User | false |
openclaw/openclaw | 4,125,283,961 | I_kwDOQb6kR8714tZ5 | 53,398 | https://github.com/openclaw/openclaw/issues/53398 | https://api.github.com/repos/openclaw/openclaw/issues/53398 | [Bug]: misleading `! Port <port> already in use` for dual-stack gateway listener | ### Bug type
Behavior bug (incorrect output/state without crash)
### Summary
**Title:** `openclaw status --all` reports misleading `! Port <port> already in use` when only local dual-stack gateway listener exists
**Version:** OpenClaw `2026.3.23-2`
**OS:** Linux `6.8.0-106-generic`
**Install/Service:** user systemd (`openclaw-gateway.service`)
### Summary
`openclaw status --all` shows a warning like:
- `! Port 18789`
- `Port 18789 is already in use.`
- `Multiple listeners detected; ensure only one gateway/tunnel per port...`
…even though there is only one `openclaw-gateway` process and the gateway is healthy/reachable.
### Verification commands
```bash
ps -fp 685632
ss -ltnp | grep 18789
systemctl --user status openclaw-gateway --no-pager -l
```
Observed:
- Single gateway PID (`685632`)
- Two loopback listeners from same PID:
- `127.0.0.1:18789`
- `[::1]:18789`
- Service is `active (running)` and logs show successful health/config/model calls.
This could be downgraded to an informational note (e.g., "dual-stack loopback listener detected") instead of a warning.
### Steps to reproduce
1. Start OpenClaw 2026.3.23-2 with a loopback bind.
2. Run `openclaw status --all`
### Expected behavior
If the listener(s) are from the same gateway PID on loopback dual-stack (`127.0.0.1` + `::1`), status should be treated as normal (or at most informational), not as a potential conflict warning.
In status diagnostics, suppress the warning (or reduce its severity) when:
- same PID owns both listeners,
- addresses are loopback only (`127.0.0.1` / `::1`),
- gateway health check is passing.
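That suppression rule can be sketched as a small classifier (the listener shape is illustrative, not OpenClaw's actual diagnostics type):

```javascript
// Sketch: downgrade the port diagnosis for a same-PID dual-stack loopback pair.
function classifyPortListeners(listeners) {
  const loopback = new Set(["127.0.0.1", "::1"]);
  const samePid = new Set(listeners.map((l) => l.pid)).size === 1;
  const allLoopback = listeners.every((l) => loopback.has(l.addr));
  // One process bound to both loopback families is normal, not a conflict.
  return samePid && allLoopback ? "info" : "warn";
}
```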
### Actual behavior
`openclaw status --all` includes:
- `Gateway: local · ws://127.0.0.1:18789 (local loopback) · reachable 7ms · auth token`
- `Gateway service: systemd installed · enabled · running`
- Diagnosis:
- `! Port 18789`
- `Port 18789 is already in use.`
- `pid 685632 ... openclaw-gateway (127.0.0.1:18789)`
- `pid 685632 ... openclaw-gateway ([::1]:18789)`
- `Gateway already running locally...`
- `Multiple listeners detected...`
### OpenClaw version
2026.3.23-2
### Operating system
Ubuntu 24.04
### Install method
npm global (user directory)
### Model
gpt-5.3-codex
### Provider / routing chain
not relevant
### Additional provider/model setup details
_No response_
### Logs, screenshots, and evidence
```shell
```
### Impact and severity
Affected: All users running OpenClaw on loopback dual-stack (127.0.0.1 + ::1)
Severity: Low - does not affect functionality
Frequency: Every time
Consequence: No functional impact, but the warning causes false concern that another process is present
### Additional information
_No response_ | open | null | false | 0 | [
"bug",
"bug:behavior"
] | [] | 2026-03-24T05:03:26Z | 2026-03-24T05:03:36Z | null | NONE | null | 20260324T233649Z | 2026-03-24T23:36:49Z | ngurmen | 70,996,543 | MDQ6VXNlcjcwOTk2NTQz | User | false |
openclaw/openclaw | 4,125,288,014 | I_kwDOQb6kR8714uZO | 53,399 | https://github.com/openclaw/openclaw/issues/53399 | https://api.github.com/repos/openclaw/openclaw/issues/53399 | Browser control server hangs: npx chrome-devtools-mcp spawn stuck inside Gateway process | ## Environment
- **OpenClaw:** 2026.3.23-1 (upgraded from 2026.3.13 during debug)
- **macOS:** Darwin 25.3.0 (arm64, Mac mini M4 Pro)
- **Node:** v25.5.0
- **Chrome:** 146.0.7680.155
- **chrome-devtools-mcp:** 0.20.3 (npm cache)
- **Browser profile:** `existing-session` driver, `attachOnly: true`
## Summary
Browser proxy (port gateway+2, e.g. 18791) accepts TCP connections but never responds — all requests timeout after 20s. Root cause: `npx -y chrome-devtools-mcp@latest` spawned by Gateway's `StdioClientTransport` hangs indefinitely — process starts but never produces child processes or MCP initialize response.
## Reproduction
1. Configure browser profile with `driver: "existing-session"`, `attachOnly: true`
2. Enable Chrome remote debugging (`chrome://inspect`)
3. Start Gateway
4. Call any browser tool endpoint (e.g. `GET /` on browser control port)
5. Request hangs until 20s timeout
## Root Cause Analysis
### Call chain that hangs
```
browser tool request
→ Express route handler (GET /, /tabs, /snapshot, etc.)
→ isReachable() [routes-B2QX_8fI.js:4011]
→ listChromeMcpTabs()
→ getSession() → createRealSession()
→ StdioClientTransport.spawn("npx", ["-y", "chrome-devtools-mcp@latest", ...])
→ npm exec process starts but NEVER produces child processes
→ MCP initialize response never arrives
→ await hangs forever → Express handler blocked → all requests timeout
```
### Spawn works outside Gateway, fails inside
| Context | Result |
|---------|--------|
| Manual shell: `npx -y chrome-devtools-mcp@latest --autoConnect` | ✅ Works, MCP init <2s |
| Node.js `child_process.spawn` (pipe stdio, same PATH/env/proxy vars) | ✅ Works |
| Node.js spawn with `cwd: "/"` | ✅ Works |
| Gateway internal spawn via `StdioClientTransport` | ❌ Hangs indefinitely |
### Orphan process accumulation
Each failed spawn leaves orphan processes (`PPID=1`):
- `openclaw-node` processes detach from Gateway
- `npm exec chrome-devtools-mcp` processes spawn under orphaned `openclaw-node`
- These orphans accumulate and are never cleaned up
### Proxy environment (NOT the cause)
Gateway runs with `http_proxy=http://127.0.0.1:8234` (needed for Telegram). Manual spawn with identical proxy vars works fine.
### Chrome CDP confirmed working
- Port 18800 listening (verified via `lsof` + WebSocket handshake)
- Manual `npx chrome-devtools-mcp --autoConnect` connects successfully
- Chrome consent dialog previously approved
## Workaround
`OPENCLAW_SKIP_BROWSER_CONTROL_SERVER=1` in Gateway start script disables browser control server entirely.
## Hypothesis
Something in Gateway's process management (signal handlers, child process tracking, or event loop state at spawn time) interferes with `npm exec`'s ability to resolve and execute the cached package. The spawned process appears stuck in package resolution — no child node process for the actual MCP server is ever created. | open | null | false | 0 | [] | [] | 2026-03-24T05:04:38Z | 2026-03-24T05:04:38Z | null | NONE | null | 20260324T233649Z | 2026-03-24T23:36:49Z | qingchejun | 63,035,111 | MDQ6VXNlcjYzMDM1MTEx | User | false |
openclaw/openclaw | 4,125,309,361 | I_kwDOQb6kR8714zmx | 53,402 | https://github.com/openclaw/openclaw/issues/53402 | https://api.github.com/repos/openclaw/openclaw/issues/53402 | [Feature Request] Add Chinese (Simplified) localization for macOS menu bar app | ## Description
OpenClaw macOS menu bar app only supports English interface. For Chinese users (China, Hong Kong, Macau, Taiwan), there is a language barrier.
## Expected Behavior
MacOS app should automatically display Chinese interface based on system language, or provide language option in settings.
## Use Cases
- Voice assistant would be more friendly for Chinese users
- Menu bar and settings page display in Chinese
- System dialogs for permission requests show in Chinese
## Current Situation
- System language: Simplified Chinese (zh-Hans-CN)
- macOS App version: 2026.3.23
- App display language: English
## Additional Context
- OpenClaw Web UI and docs already have Chinese support
- Consider using .strings files or similar i18n approach
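A hedged sketch of what `Localizable.strings` entries could look like (the keys are illustrative, not the app's actual string identifiers):

```
/* Menu bar items (keys illustrative) */
"menu.status" = "状态";
"menu.settings" = "设置";
"menu.quit" = "退出 OpenClaw";
```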
---
**Environment:**
- macOS version: 26.3.1
- OpenClaw App version: 2026.3.23
- System language: zh-Hans-CN
| open | null | false | 0 | [] | [] | 2026-03-24T05:11:07Z | 2026-03-24T05:11:07Z | null | NONE | null | 20260324T233649Z | 2026-03-24T23:36:49Z | hyk234 | 202,910,089 | U_kgDODBgpiQ | User | false |
openclaw/openclaw | 4,124,737,769 | I_kwDOQb6kR8712oDp | 53,291 | https://github.com/openclaw/openclaw/issues/53291 | https://api.github.com/repos/openclaw/openclaw/issues/53291 | Open Claw Skill 微信交流群 | ### Summary

### Problem to solve
Open Claw Skill discussion and exchange
### Proposed solution
An Open Claw Skill WeChat discussion group
### Alternatives considered
_No response_
### Impact
An Open Claw Skill WeChat discussion group
### Evidence/examples
_No response_
### Additional information
_No response_ | closed | duplicate | false | 0 | [
"enhancement"
] | [] | 2026-03-24T02:01:58Z | 2026-03-24T05:16:50Z | 2026-03-24T05:16:49Z | NONE | null | 20260324T233649Z | 2026-03-24T23:36:49Z | yanUEd | 4,066,225 | MDQ6VXNlcjQwNjYyMjU= | User | false |
openclaw/openclaw | 4,125,334,403 | I_kwDOQb6kR87145uD | 53,406 | https://github.com/openclaw/openclaw/issues/53406 | https://api.github.com/repos/openclaw/openclaw/issues/53406 | feat: Let users steer running ACP/sub-agent sessions from Discord (and other chat surfaces) | Right now, when I spawn a Codex or Claude Code session from Discord, I can see what the agent is doing (especially with `/reasoning on`), but I can't talk to it while it's working. If it goes off track, my only option is to kill it and start over.
In a terminal, Claude Code and Codex let you type corrections mid-task, answer questions the agent asks, and redirect it on the fly. The output streaming already works in Discord. The missing piece is the input side.
**Proposed behavior:**
When an ACP session is running in a Discord thread, messages sent in that thread should go to the running agent as input while it's working, not queue up for after it finishes.
Specifically:
- Send a message, the agent gets it while it's working (not after the current turn ends)
- Users can correct, redirect, or answer questions without killing the session
- Works for both ACP (Codex, Claude Code) and sub-agent sessions
- Output streaming already works today via `/reasoning on`. This just adds the input path
**Current workarounds:**
- Having the orchestrating agent relay messages back and forth using `process submit`, which is clunky and adds latency
- Using `openclaw tui` in a terminal, which works but defeats the point of having Discord integration
- Killing and re-spawning when something goes wrong, losing all progress
**Related issues:**
- #23580 (ACP thread-bound agents, merged)
- #28511 (ACP runtime plugin)
- #28484 (ACP file write bug in Discord threads)
The thread-bound session infrastructure and output streaming already exist. This is about completing the loop by adding interactive input. | open | null | false | 0 | [] | [] | 2026-03-24T05:18:09Z | 2026-03-24T05:18:09Z | null | NONE | null | 20260324T233649Z | 2026-03-24T23:36:49Z | ubehera | 3,691,059 | MDQ6VXNlcjM2OTEwNTk= | User | false |
openclaw/openclaw | 4,125,343,211 | I_kwDOQb6kR871473r | 53,409 | https://github.com/openclaw/openclaw/issues/53409 | https://api.github.com/repos/openclaw/openclaw/issues/53409 | [Feature Request] Exec approval should only show on the triggering agent's Telegram channel | ## Description
When an exec command requires approval, the approval request is broadcast to ALL connected Telegram bots/channels instead of just the one that triggered it.
**Expected behavior:** When agent A (mis) triggers an exec that needs approval, the approval request should ONLY appear on agent A's Telegram bot, not on all bots.
## Steps to Reproduce
1. Have multiple Telegram bots configured for different agents (e.g., mis, ass, rd)
2. Trigger an exec command from one agent (e.g., mis)
3. The approval request appears on ALL Telegram bots, not just mis
## Environment
- OpenClaw version: 2026.3.13
- Multiple Telegram accounts configured
- macOS
## Suggested Solution
Add a configuration option to route the approval to the same session/channel that triggered the exec.
## Workaround
Using `ask: "off"` to bypass approvals, but this is not ideal for security.
| open | null | false | 0 | [] | [] | 2026-03-24T05:20:45Z | 2026-03-24T05:20:45Z | null | NONE | null | 20260324T233649Z | 2026-03-24T23:36:49Z | jues5466-oss | 269,778,548 | U_kgDOEBR-dA | User | false |
openclaw/openclaw | 4,125,349,873 | I_kwDOQb6kR87149fx | 53,412 | https://github.com/openclaw/openclaw/issues/53412 | https://api.github.com/repos/openclaw/openclaw/issues/53412 | [Bug]: Feishu encryptKey bypasses config redaction, enabling webhook forgery | ## Severity Assessment
### CVSS Assessment
| Metric | v3.1 | v4.0 |
|--------|------|------|
| **Score** | 9.1 / 10.0 | 8.5 / 10.0 |
| **Severity** | Critical | High |
| **Vector** | CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:C/C:L/I:H/A:L | CVSS:4.0/AV:N/AC:L/AT:N/PR:L/UI:N/VC:L/VI:H/VA:L/SC:L/SI:H/SA:L |
| **Calculator** | [CVSS v3.1 Calculator](https://www.first.org/cvss/calculator/3.1#CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:C/C:L/I:H/A:L) | [CVSS v4.0 Calculator](https://www.first.org/cvss/calculator/4.0#CVSS:4.0/AV:N/AC:L/AT:N/PR:L/UI:N/VC:L/VI:H/VA:L/SC:L/SI:H/SA:L) |
### Threat Model Alignment
**Classification:** `security-specific`
The config redaction system is an explicit security boundary in OpenClaw: `config.get` returns a redacted snapshot specifically to prevent credential disclosure to read-scoped clients. This finding demonstrates that the Feishu `encryptKey` field escapes all three redaction layers (schema hints, sensitive patterns, and plugin SDK secret-input registration), leaking the sole webhook authenticator. This is not covered by the Out of Scope section — it is not a multi-tenant assumption, prompt injection, or operator-intended behavior. The `encryptKey` is a cryptographic secret used for webhook signature verification, and its exposure enables webhook forgery from outside the trust boundary.
## Impact
When Feishu is configured in webhook mode, `channels.feishu.encryptKey` survives config redaction and is returned in plaintext to any `operator.read` client via `config.get`. The leaked key is the sole authenticator for inbound Feishu webhooks, enabling an attacker to forge accepted webhook events that OpenClaw processes as legitimate Feishu messages.
## Affected Component
**File:** `src/config/schema.hints.ts:104-110`
```typescript
const SENSITIVE_PATTERNS = [
/token$/i,
/password/i,
/secret/i,
/api.?key/i,
/serviceaccount(?:ref)?$/i,
];
```
**File:** `extensions/feishu/src/config-schema.ts:197`
```typescript
encryptKey: buildSecretInputSchema().optional(),
```
**File:** `extensions/feishu/src/monitor.transport.ts:51-76` (signature validation)
**File:** `extensions/feishu/src/monitor.transport.ts:217-219` (`needCheck: false` dispatcher invocation)
**File:** `src/gateway/method-scopes.ts:88` (`config.get` in `READ_SCOPE`)
## Technical Reproduction
1. Configure Feishu with `connectionMode: "webhook"` and a plaintext `channels.feishu.encryptKey` value.
2. Pair a gateway client that holds only the `operator.read` scope.
3. Call `config.get`. Observe that the response payload at `config.channels.feishu.encryptKey` contains the raw key value rather than `__OPENCLAW_REDACTED__`.
4. Using the leaked key, compute `sha256(timestamp + nonce + encryptKey + JSON.stringify(payload))` and send a forged webhook to the Feishu path (default `/feishu/events`) with correct `x-lark-request-timestamp`, `x-lark-request-nonce`, and `x-lark-signature` headers.
5. The request passes `isFeishuWebhookSignatureValid()` and is dispatched via `eventDispatcher.invoke(..., { needCheck: false })`.
6. OpenClaw processes the forged event through its normal `im.message.receive_v1` handler.
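Step 4 above can be sketched as follows. All values here are made-up placeholders; only the hashing scheme (`sha256(timestamp + nonce + encryptKey + body)`) is taken from the report:

```typescript
// Illustrative only: forging the webhook signature from a leaked encryptKey.
// Timestamp, nonce, key, and payload are placeholder values, not real secrets.
import { createHash } from "node:crypto";

function signFeishuWebhook(
  timestamp: string,
  nonce: string,
  encryptKey: string,
  body: string,
): string {
  // Plain SHA-256 over the concatenation, as described in step 4
  return createHash("sha256")
    .update(timestamp + nonce + encryptKey + body)
    .digest("hex");
}

const body = JSON.stringify({ header: { event_type: "im.message.receive_v1" } });
const signature = signFeishuWebhook("1711300000", "nonce-123", "leaked-encrypt-key", body);
// Send `body` with x-lark-request-timestamp, x-lark-request-nonce, and
// x-lark-signature: <signature> headers to the webhook path.
console.log(signature);
```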
## Demonstrated Impact
The redaction bypass is caused by three compounding gaps:
1. **Pattern miss:** `isSensitiveConfigPath` in `src/config/schema.hints.ts` checks `SENSITIVE_PATTERNS` which match `token$`, `password`, `secret`, `api.?key`, and `serviceaccount`. The field name `encryptKey` matches none of these patterns.
2. **Missing sensitive registry:** `buildSecretInputSchema()` in `src/plugin-sdk/secret-input-schema.ts` returns a plain `z.union(...)` without registering the schema in the `sensitive` Zod registry (`src/config/zod-schema.sensitive.ts`). The `mapSensitivePaths` function in `schema.hints.ts` checks `sensitive.has(currentSchema)` when walking the schema tree, but since `buildSecretInputSchema` never calls `sensitive.add()`, the encryptKey field is not flagged through this path either.
3. **Missing extension uiHints:** `buildChannelConfigSchema` in `src/channels/plugins/config-schema.ts` produces only `{ schema: ... }` with no `uiHints` property. The Feishu channel plugin at `extensions/feishu/src/channel.ts:423` calls `buildChannelConfigSchema(FeishuConfigSchema)` without supplying per-field sensitivity annotations. As a result, `applyChannelHints` in `src/config/schema.ts` never adds a `channels.feishu.encryptKey` entry to the merged hints, and `applySensitiveHints` has no key to evaluate.
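Gap 1 can be reproduced in isolation with the patterns quoted above (the `matches` helper is written here for illustration, not taken from the codebase):

```typescript
// The SENSITIVE_PATTERNS list quoted from src/config/schema.hints.ts,
// checked against candidate field names. "matches" is an illustrative helper.
const SENSITIVE_PATTERNS = [
  /token$/i,
  /password/i,
  /secret/i,
  /api.?key/i,
  /serviceaccount(?:ref)?$/i,
];

const matches = (field: string): boolean =>
  SENSITIVE_PATTERNS.some((p) => p.test(field));

console.log(matches("botToken"));   // true: caught by /token$/i
console.log(matches("apiKey"));     // true: caught by /api.?key/i
console.log(matches("encryptKey")); // false: no pattern matches

// The fallback pattern suggested under Remediation Advice would catch it:
console.log(/encrypt.?key/i.test("encryptKey")); // true
```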
With the encryptKey leaked, the webhook authentication is fully compromised. `isFeishuWebhookSignatureValid()` in `monitor.transport.ts:51-76` computes a plain SHA-256 digest over `timestamp + nonce + encryptKey + JSON.stringify(payload)`; the `verificationToken` is not consulted in this path. The subsequent `eventDispatcher.invoke(..., { needCheck: false })` at lines 217-219 explicitly disables the Lark SDK's built-in request verification. The repository's own e2e test (`monitor.webhook-e2e.test.ts:24-41`) demonstrates that any correctly signed plaintext Feishu event envelope reaches the dispatcher.
## Environment
Tested against `openclaw/openclaw` tag `v2026.3.23-2` (commit `3b1657803292509f382fd6456242fb3e3d325461`).
Prerequisites:
- Feishu configured in webhook mode with a plaintext `channels.feishu.encryptKey`
- A gateway client holding `operator.read` scope
- Knowledge of any valid Feishu chat the bot participates in
## Remediation Advice
Register the Zod schema returned by `buildSecretInputSchema()` with the `sensitive` registry so that `mapSensitivePaths` marks all fields using that schema as sensitive. Alternatively, add `/encrypt.?key/i` to `SENSITIVE_PATTERNS` as a defense-in-depth fallback. Verify that `config.get` returns `__OPENCLAW_REDACTED__` for `channels.feishu.encryptKey` (and per-account `encryptKey` fields) when called with `operator.read` scope, and add regression coverage for this redaction path.
<!-- submission-marker:CR-mbx-feishu-encryptkey-config-redaction-bypass -->
| open | null | false | 0 | [] | [] | 2026-03-24T05:22:24Z | 2026-03-24T05:22:24Z | null | CONTRIBUTOR | null | 20260324T233649Z | 2026-03-24T23:36:49Z | coygeek | 65,363,919 | MDQ6VXNlcjY1MzYzOTE5 | User | false |
openclaw/openclaw | 4,125,362,730 | I_kwDOQb6kR8715Aoq | 53,416 | https://github.com/openclaw/openclaw/issues/53416 | https://api.github.com/repos/openclaw/openclaw/issues/53416 | Discord native slash commands return empty 'Done' after Carbon reconcile migration (v2026.3.22+) | ## Description
All native Discord slash commands (`/status`, `/acp`, `/help`, `/model`, etc.) return Discord's generic "Done" completion instead of their actual response content. This started with v2026.3.22 which switched to Carbon reconcile for command deployment (#46597).
## Environment
- OpenClaw: v2026.3.23-2 (also reproduced on v2026.3.22 and v2026.3.23)
- macOS 26.3.1 (arm64), Node.js v25.8.1
- Discord bot with single guild, 4 agents (separate bot tokens)
- commands.native: true (also tried "auto")
- commands.allowFrom.discord: ["user"]
## Behavior
1. User invokes any slash command (e.g. `/status`)
2. Gateway receives `INTERACTION_CREATE` event
3. `InteractionEventListener` processes it in ~1000-1026ms (logged as "Slow listener detected")
4. Discord shows "✅ Done" (ephemeral) — no actual command output
## Logs
```
[EventQueue] Slow listener detected: InteractionEventListener took 1026ms for event INTERACTION_CREATE
```
No errors, no command handler output, no Discord API response logged. The interaction is acknowledged (deferred) but the follow-up response with actual content is never sent.
## What works
- Text commands (typing `/status` as a regular chat message) work perfectly
- Chat messages work normally
- ACP sessions work
- The Carbon reconcile path logs correctly: `discord: native commands using Carbon reconcile path`
- Commands are registered (62 commands deployed)
## What was tried
- commands.native: true vs "auto"
- Forced gateway restart via launchctl kickstart
- Updated to v2026.3.23-2 (correction release)
- Verified commands.allowFrom includes the user
- Fixed unrelated skill symlink warnings
- Fixed unrelated sub-agent config errors
- Multiple clean restarts with no active sessions blocking
## Root cause hypothesis
The Carbon `InteractionEventListener` (from `@buape/carbon`) receives the interaction and defers it, but the OpenClaw command handler never sends the follow-up response through Carbon's response pathway. The old `deploy-rest:put` path handled responses directly; the Carbon reconcile path routes through a different interaction lifecycle that isn't completing.
## Related
- #53041 (partial fix for auth-gated commands returning generic completion — fixed in v2026.3.23)
- #46597 (original Carbon reconcile migration)
## Workaround
Use text commands instead of native slash commands. All commands work when typed as regular chat messages. | open | null | false | 0 | [] | [] | 2026-03-24T05:26:27Z | 2026-03-24T05:29:25Z | null | NONE | null | 20260324T233649Z | 2026-03-24T23:36:49Z | EscalioDev | 263,978,272 | U_kgDOD7v9IA | User | false |
openclaw/openclaw | 4,125,358,822 | I_kwDOQb6kR8714_rm | 53,413 | https://github.com/openclaw/openclaw/issues/53413 | https://api.github.com/repos/openclaw/openclaw/issues/53413 | [Bug]: Feishu encryptKey bypasses config redaction, enabling webhook forgery | ## Summary
The Feishu `encryptKey` field escapes all three layers of config redaction (schema hints, sensitive patterns, and plugin-SDK secret-input registration), allowing any `operator.read`-scoped client to retrieve the plaintext key via `config.get`. Since `encryptKey` is the sole authenticator for inbound Feishu webhooks, its disclosure enables an attacker to forge webhook events that OpenClaw processes as legitimate Feishu messages.
## CVSS
- **v3.1:** 9.1 Critical (AV:N/AC:L/PR:L/UI:N/S:C/C:L/I:H/A:L)
- **v4.0:** 8.5 High
## Root Cause
Three independent redaction gaps combine:
1. `SENSITIVE_PATTERNS` in `src/config/schema.hints.ts:104-110` does not match `encryptKey`
2. `buildSecretInputSchema()` in `src/plugin-sdk/secret-input-schema.ts` does not register with the `sensitive` Zod registry
3. The Feishu extension provides no `configUiHints` from `buildChannelConfigSchema`, so no hint-based sensitive annotation reaches the merged schema
## Affected Files
- `src/config/schema.hints.ts` — missing pattern
- `src/plugin-sdk/secret-input-schema.ts` — missing `.register(sensitive)`
- `extensions/feishu/src/monitor.transport.ts:51-76` — webhook signature depends on this key
## Verified Against
- Release: `v2026.3.23-2`
- Commit: `3b1657803292509f382fd6456242fb3e3d325461`
---
🤖 Generated with AI assistance (Claude Opus 4.6) | closed | completed | false | 1 | [] | [] | 2026-03-24T05:25:06Z | 2026-03-24T05:34:45Z | 2026-03-24T05:34:45Z | CONTRIBUTOR | null | 20260324T233649Z | 2026-03-24T23:36:49Z | coygeek | 65,363,919 | MDQ6VXNlcjY1MzYzOTE5 | User | false |
openclaw/openclaw | 4,125,393,612 | I_kwDOQb6kR8715ILM | 53,419 | https://github.com/openclaw/openclaw/issues/53419 | https://api.github.com/repos/openclaw/openclaw/issues/53419 | Telegram group messages not delivered to agent despite correct groupPolicy/groups config | ## Summary
Telegram group messages are not reaching the agent even when:
- BotFather has 'Allow Groups' enabled
- `channels.telegram.groupPolicy` is set to `"allowlist"`
- The group's chat ID is listed under `channels.telegram.groups`
## Config
```json
{
"channels": {
"telegram": {
"groupPolicy": "allowlist",
"groups": {
"-5174974265": {}
},
"allowFrom": [8489979671]
}
}
}
```
## Observed behaviour
Gateway log shows messages from the group being silently dropped:
```
{"module":"telegram-auto-reply"} {chatId: -5174974265, title: 'A-Team', reason: 'not-allowed'} skipping group message
```
Earlier attempts also showed:
```
Invalid allowFrom entry: "-5174974265" - allowFrom/groupAllowFrom authorization expects numeric Telegram sender user IDs only
```
and:
```
channels.telegram.groups: Invalid input: expected record, received array
```
(both resolved by fixing config format)
## Expected behaviour
Messages from the group chat ID listed under `channels.telegram.groups` should be processed by the agent.
## Questions
1. Is `channels.telegram.groups` the correct key for allowlisting a group chat?
2. Is there additional config needed (e.g. `groupAllowFrom`, group-level `dmPolicy`)?
3. What is the correct schema for group-level config (e.g. policies per group)?
## Environment
- OpenClaw 2026.3.22
- macOS Darwin 25.3.0 arm64
- Telegram channel | open | null | false | 0 | [] | [] | 2026-03-24T05:36:36Z | 2026-03-24T05:36:36Z | null | NONE | null | 20260324T233649Z | 2026-03-24T23:36:49Z | ramshenoy | 268,190,902 | U_kgDOD_xEtg | User | false |
openclaw/openclaw | 4,124,650,422 | I_kwDOQb6kR8712Su2 | 53,273 | https://github.com/openclaw/openclaw/issues/53273 | https://api.github.com/repos/openclaw/openclaw/issues/53273 | [Bug] npm install -g upgrade breaks acpx plugin: sourcePath not updated from extensions/ to dist/extensions/ | ## Summary
After upgrading OpenClaw via `npm install -g openclaw`, the gateway fails to start with a config validation error because the `acpx` plugin path in `openclaw.json` is not automatically migrated from the old layout (`extensions/acpx`) to the new one (`dist/extensions/acpx`).
## Environment
- **OS:** Windows 10 (10.0.26100 x64)
- **Node.js:** v24.12.0
- **OpenClaw version (after upgrade):** 2026.3.23-1
- **Install method:** `npm install -g openclaw` (standard npm global install)
## Steps to Reproduce
1. Have an existing OpenClaw installation with `acpx` plugin configured.
2. Run `npm install -g openclaw` to upgrade.
3. Start (or restart) the gateway.
## Actual Behavior
Gateway fails with config validation error on every startup:
```
Config invalid
- plugins.load.paths: plugin: plugin path not found:
D:\Program Files\nodejs\npm-global\node_modules\openclaw\extensions\acpx
Run: openclaw doctor --fix
```
Inspecting the package, the actual path after upgrade is:
```
D:\Program Files\nodejs\npm-global\node_modules\openclaw\dist\extensions\acpx
```
The `openclaw.json` config still contains the old path in two fields:
```json
"plugins": {
"installs": {
"acpx": {
"sourcePath": "...\\openclaw\\extensions\\acpx",
"installPath": "...\\openclaw\\extensions\\acpx"
}
}
}
```
## Expected Behavior
Either:
- The upgrade process (or first run after upgrade) automatically migrates `plugins.installs.acpx.sourcePath` and `plugins.installs.acpx.installPath` to the new `dist/extensions/acpx` path, **or**
- `openclaw doctor --fix` detects and fixes this stale path automatically.
## Additional Notes
- `openclaw doctor --fix` was run but did **not** fix the path issue (it reported `Plugins: Loaded: 4, Errors: 0`, which appears incorrect given the gateway was logging errors).
- The Control UI showed a red "Update available" banner after the upgrade even though the installed version was already the latest — this may be a related symptom.
- Manual fix: edit `openclaw.json` and update both `sourcePath` and `installPath` to point to `dist/extensions/acpx`.
## Workaround
Manually update `~/.openclaw/openclaw.json`:
```json
"plugins": {
"installs": {
"acpx": {
"sourcePath": "<npm-global>/node_modules/openclaw/dist/extensions/acpx",
"installPath": "<npm-global>/node_modules/openclaw/dist/extensions/acpx"
}
}
}
```
Then restart the gateway.
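For repeated upgrades, the manual edit above can be scripted. A hypothetical sketch (the helper name and paths are illustrative, not part of OpenClaw):

```typescript
// Hypothetical helper for the workaround above: rewrite the stale acpx
// plugin paths in openclaw.json in place. Not part of OpenClaw itself.
import { readFileSync, writeFileSync } from "node:fs";

function fixAcpxPaths(configPath: string, newPluginPath: string): void {
  const config = JSON.parse(readFileSync(configPath, "utf8"));
  const acpx = config?.plugins?.installs?.acpx;
  if (!acpx) return; // nothing to migrate
  acpx.sourcePath = newPluginPath;
  acpx.installPath = newPluginPath;
  writeFileSync(configPath, JSON.stringify(config, null, 2));
}
```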
| open | null | false | 2 | [] | [] | 2026-03-24T01:28:52Z | 2026-03-24T05:39:01Z | null | NONE | null | 20260324T233649Z | 2026-03-24T23:36:49Z | Pan-Binghong | 122,083,471 | U_kgDOB0bYjw | User | false |
openclaw/openclaw | 4,125,395,135 | I_kwDOQb6kR8715Ii_ | 53,422 | https://github.com/openclaw/openclaw/issues/53422 | https://api.github.com/repos/openclaw/openclaw/issues/53422 | Enhance the subagents tool - support viewing real-time sub-agent status and progress | ---
**⚠️ This issue was submitted by an OpenClaw AI agent, not by the account owner**
---
## Background
The main conversation currently has no good way to track sub-agent execution state:
1. `subagents list` only shows the most recent task list, with no visibility into intermediate progress
2. If a sub-agent finishes but does not proactively report its result, the result is lost
3. There is no way to view a sub-agent's real-time execution status
## Desired functionality
1. Enhance the `subagents` tool to support viewing the real-time status of active sub-agents
2. Show each sub-agent's execution progress (completed / in progress / waiting)
3. Support viewing a sub-agent's recent output (without waiting for completion)
4. Alternatively: have sub-agents automatically report progress at regular intervals
## References
- Docs location: /opt/homebrew/lib/node_modules/openclaw/docs/tools/subagents.md
- Current limitation: announce is best-effort; results may be lost on Gateway restart | open | null | false | 0 | [] | [] | 2026-03-24T05:37:06Z | 2026-03-24T05:39:17Z | null | NONE | null | 20260324T233649Z | 2026-03-24T23:36:49Z | JAANG131 | 203,095,831 | U_kgDODBr_Fw | User | false |
openclaw/openclaw | 4,125,434,201 | I_kwDOQb6kR8715SFZ | 53,426 | https://github.com/openclaw/openclaw/issues/53426 | https://api.github.com/repos/openclaw/openclaw/issues/53426 | lossless-claw plugin update fails after core update | After updating OpenClaw core from 2026.3.22 to 2026.3.23-2, the lossless-claw plugin update fails with:
```
Failed to update lossless-claw: Error [ERR_MODULE_NOT_FOUND]: Cannot find module '/home/linuxbrew/.linuxbrew/lib/node_modules/openclaw/dist/install.runtime-Deq6Beal.js' imported from /home/linuxbrew/.linuxbrew/lib/node_modules/openclaw/dist/installs-XLSjUYkq.js
```
The plugin is installed as a global extension at `~/.openclaw/extensions/lossless-claw/` and loads successfully (version 0.3.0). The failure only occurs during the plugin update step after core update.
Steps:
1. `openclaw update` (core updates successfully)
2. Plugin update step runs and fails for lossless-claw
Environment: Linux x64, npm install, global extension (not npm package). | open | null | false | 0 | [] | [] | 2026-03-24T05:47:27Z | 2026-03-24T05:47:27Z | null | NONE | null | 20260324T233649Z | 2026-03-24T23:36:49Z | jcenters | 7,380,366 | MDQ6VXNlcjczODAzNjY= | User | false |
openclaw/openclaw | 4,125,444,128 | I_kwDOQb6kR8715Ugg | 53,429 | https://github.com/openclaw/openclaw/issues/53429 | https://api.github.com/repos/openclaw/openclaw/issues/53429 | feat: allow suppressing specific doctor warnings | ## Summary
`openclaw doctor` currently warns about Telegram group mode being in "first-time setup mode" even when the user has intentionally disabled group access (solo usage). There's no way to suppress known/intentional warnings, so they show up every time.
## Proposal
Add a `doctor.suppress` (or similar) config option that lets users silence specific doctor warnings by key.
Example:
```yaml
doctor:
suppress:
- channels.telegram.groupPolicy
- channels.telegram.accounts.default.groupPolicy
- channels.telegram.accounts.heum.groupPolicy
```
## Why
- Solo users who intentionally keep group mode off shouldn't see repeated warnings
- Keeps `doctor` output clean so real issues stand out
- Similar pattern exists in other tools (e.g., eslint disable, hadolint ignore)
## Current behavior
Every `openclaw doctor` run shows 3 Telegram group warnings even though it's intentional config.
## Expected behavior
Suppressed warnings are hidden from output (or shown as dimmed/info level with a note that they're suppressed).
🤖 Generated with [Claude Code](https://claude.com/claude-code) | open | null | false | 0 | [] | [] | 2026-03-24T05:49:57Z | 2026-03-24T05:49:57Z | null | NONE | null | 20260324T233649Z | 2026-03-24T23:36:49Z | kiyoungmacmini-pixel | 260,901,698 | U_kgDOD40LQg | User | false |
openclaw/openclaw | 4,124,477,288 | I_kwDOQb6kR8711odo | 53,246 | https://github.com/openclaw/openclaw/issues/53246 | https://api.github.com/repos/openclaw/openclaw/issues/53246 | [Feature]: Add official Mattermost channel support via plugin | ### Summary
Enable official Mattermost channel support in OpenClaw by introducing the `@openclaw/mattermost` plugin with gateway-based lifecycle, `defaultTo` configuration, and comprehensive documentation.
### Problem to solve
OpenClaw currently supports several messaging channels (e.g., Slack, Discord, Teams), but lacks native Mattermost integration. Users operating self-hosted Mattermost instances — common in enterprise and open-source environments — cannot use OpenClaw for automated notifications, cron-based alerts, or AI-driven message delivery. This forces teams to maintain fragile, custom workarounds outside of OpenClaw's secure credential and routing framework, increasing maintenance burden and security risk.
---
### Proposed solution
Introduce the `@openclaw/mattermost` extension as an officially supported channel plugin with the following capabilities:
- **Gateway-based lifecycle**: Aligns with OpenClaw's new `gateway`-driven channel architecture for consistent startup/shutdown behavior.
- **`defaultTo` support**: Allows outbound message delivery to a pre-configured target channel when no explicit target is specified (e.g., scheduled cron jobs or system alerts).
- **Secure credential management**: All sensitive values (bot tokens, server URLs) are handled exclusively through OpenClaw's secrets framework — no hardcoded credentials anywhere in code or documentation.
- **Placeholder-safe documentation**: `docs/mattermost.md` and `README.md` use obvious placeholders (`mm-token-1234`, `https://chat.example.com`) throughout all examples.
- **Full test coverage**: 289 unit tests across 34 files covering monitoring, gating, and target resolution — all passing under `pnpm test -- extensions/mattermost/`.
- **Build verified**: Confirmed zero build errors via `pnpm build`.
**Reference branch**: `feature-mattermost-channel` (branched from `main`, ported from validated `ws_mm_pn_20260322_1930` workspace)
**Reference PR**: https://github.com/openclaw/openclaw/compare/main...feature-mattermost-channel
---
### Alternatives considered
| Approach | Weakness |
|---|---|
| External webhook adapter (outside OpenClaw) | Bypasses credential management and routing — increases security surface and maintenance overhead |
| Generic HTTP channel with manual Mattermost calls | No structured lifecycle, no `defaultTo`, no test coverage; brittle and unsupported |
| Waiting for upstream Mattermost SDK stabilization | Indefinitely blocks enterprise users who are already running production Mattermost instances |
The plugin architecture with gateway integration is the only approach that delivers a consistent, secure, and maintainable first-class channel experience aligned with OpenClaw's existing design.
---
### Impact
- **Affected users/systems**: All OpenClaw users operating self-hosted or cloud Mattermost instances; particularly enterprise teams using Mattermost as their primary internal communications platform.
- **Severity**: Blocks workflow — there is currently no supported, standards-compliant path to connect OpenClaw with Mattermost.
- **Frequency**: Affects every interaction where Mattermost is the intended delivery channel (always reproducible — the channel simply does not exist).
- **Consequence**: Teams either skip OpenClaw integration entirely or maintain unreviewed custom code outside the framework, leading to credential exposure risk, missed alerts, and duplicated engineering effort.
---
### Evidence/examples
- Validated implementation exists in branch `ws_mm_pn_20260322_1930` (internal workspace).
- 289/289 unit tests pass: `pnpm test -- extensions/mattermost/`
- `pnpm build` completes without errors.
- `docs/mattermost.md` provides step-by-step bot setup and configuration guide.
- `README.md` updated to list Mattermost alongside existing supported channels.
---
### Additional information
- All code and documentation have been audited to confirm no real tokens, private server URLs, or internal paths are present.
- The `defaultTo` feature is particularly critical for non-interactive flows (cron schedulers, AI pipeline reporters) where a runtime message target is not always available.
- This issue tracks the review and merge of PR `feat(mattermost): enable Mattermost channel support via plugin`. | closed | completed | false | 1 | [
"enhancement"
] | [] | 2026-03-24T00:27:00Z | 2026-03-24T05:51:38Z | 2026-03-24T05:51:38Z | NONE | null | 20260324T233649Z | 2026-03-24T23:36:49Z | leemgs | 82,404 | MDQ6VXNlcjgyNDA0 | User | false |
openclaw/openclaw | 4,125,451,769 | I_kwDOQb6kR8715WX5 | 53,433 | https://github.com/openclaw/openclaw/issues/53433 | https://api.github.com/repos/openclaw/openclaw/issues/53433 | [Bug]: remote CDP URLs bypass config redaction and leak credentials | ## Severity Assessment
### CVSS Assessment
| Metric | v3.1 | v4.0 |
|--------|------|------|
| **Score** | 9.9 / 10.0 | 9.4 / 10.0 |
| **Severity** | Critical | Critical |
| **Vector** | CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:C/C:H/I:H/A:L | CVSS:4.0/AV:N/AC:L/AT:N/PR:L/UI:N/VC:H/VI:H/VA:L/SC:H/SI:H/SA:L |
| **Calculator** | [CVSS v3.1 Calculator](https://www.first.org/cvss/calculator/3.1#CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:C/C:H/I:H/A:L) | [CVSS v4.0 Calculator](https://www.first.org/cvss/calculator/4.0#CVSS:4.0/AV:N/AC:L/AT:N/PR:L/UI:N/VC:H/VI:H/VA:L/SC:H/SI:H/SA:L) |
### Threat Model Alignment
**Classification:** `security-specific`
The `config.get` method is explicitly read-scoped (`operator.read`) and returns a redacted config snapshot to prevent credential disclosure. This finding demonstrates that the redaction layer fails to treat `browser.cdpUrl` and `browser.profiles.*.cdpUrl` as sensitive, leaking reusable third-party service credentials (Browserless/Browserbase API keys, HTTP Basic auth) to any authenticated read-scoped client. This crosses the redaction trust boundary: read-scoped callers are not supposed to see raw credentials in redacted snapshots.
This is not covered by the Out of Scope section. It is not a multi-tenant assumption (single-operator read-scope clients should still not see raw credentials in redacted snapshots), not a prompt injection chain, and not an operator-intended feature. The documentation at `docs/tools/browser.md:277` explicitly states "Treat remote CDP URLs/tokens as secrets," yet the config redaction layer does not enforce this.
## Impact
Authenticated clients with only `operator.read` scope can call `config.get` and receive the full, unredacted `browser.cdpUrl` and `browser.profiles.*.cdpUrl` values, including embedded query tokens (e.g., Browserless `?token=...`) and HTTP Basic credentials (e.g., `user:pass@host`). The leaked credentials are reusable outside OpenClaw to connect directly to the upstream browser service, bypassing OpenClaw's audit trail and scope checks.
## Affected Component
**File:** `src/config/zod-schema.ts:358,386`
```typescript
// browser.cdpUrl — NOT registered with sensitive
cdpUrl: z.string().optional(),
// browser.profiles.*.cdpUrl — NOT registered with sensitive
cdpUrl: z.string().optional(),
```
**File:** `src/config/redact-snapshot.ts:32-33`
```typescript
function isUserInfoUrlPath(path: string): boolean {
return path.endsWith(".baseUrl") || path.endsWith(".httpUrl");
}
```
**File:** `src/config/schema.hints.ts:104-110`
```typescript
const SENSITIVE_PATTERNS = [
/token$/i,
/password/i,
/secret/i,
/api.?key/i,
/serviceaccount(?:ref)?$/i,
];
```
The `cdpUrl` field name does not match any sensitive pattern, is not registered as sensitive in the Zod schema, and `isUserInfoUrlPath` only strips userinfo from `.baseUrl` and `.httpUrl` paths -- not `.cdpUrl` paths. A separate `redactCdpUrl` helper exists at `src/browser/cdp.helpers.ts:42-58` but is only used in CLI status output and logging, not in the config snapshot redaction path.
## Technical Reproduction
1. Configure a remote browser profile with an auth-bearing CDP URL:
```json5
{ browser: { profiles: { remote: { cdpUrl: "https://alice:secret@example.com/chrome?token=supersecret123" } } } }
```
2. Connect a Gateway client that has only `operator.read` scope.
3. Call `config.get`.
4. Observe the response contains the full unredacted `cdpUrl` in `payload.config.browser.profiles.remote.cdpUrl`, including the query token and Basic-auth userinfo.
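What a read-scoped client recovers from the leaked value can be shown with the WHATWG `URL` parser, using the illustrative credentials from step 1:

```typescript
// Parsing the leaked CDP URL from step 1 (all credentials are the
// made-up example values, not real secrets).
const leaked = new URL("https://alice:secret@example.com/chrome?token=supersecret123");

console.log(leaked.username);                  // "alice"
console.log(leaked.password);                  // "secret"
console.log(leaked.searchParams.get("token")); // "supersecret123"
// These are exactly the reusable credentials needed to connect to the
// upstream browser service directly, outside OpenClaw.
```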
## Demonstrated Impact
The config redaction layer (`redactConfigSnapshot` at `src/config/redact-snapshot.ts:382-431`) is the single control point for stripping credentials before read-scoped API responses. It relies on two mechanisms: (1) schema-level `sensitive` registration via Zod's `.register(sensitive)`, and (2) pattern-based fallback matching on field names via `SENSITIVE_PATTERNS`. Neither mechanism covers `cdpUrl`:
- `browser.cdpUrl` and `browser.profiles.*.cdpUrl` are declared as plain `z.string().optional()` without `.register(sensitive)` (confirmed in `src/config/zod-schema.ts:358,386`).
- The field name `cdpUrl` does not match any entry in `SENSITIVE_PATTERNS` (`/token$/i`, `/password/i`, `/secret/i`, `/api.?key/i`, `/serviceaccount/i`).
- The URL userinfo stripping (`isUserInfoUrlPath`) only triggers for paths ending in `.baseUrl` or `.httpUrl`, not `.cdpUrl`.
- The generated config baseline confirms `"sensitive": false` for both paths (`docs/.generated/config-baseline.jsonl:702,716`).
Meanwhile, OpenClaw's own documentation explicitly supports and encourages auth-bearing CDP URLs with query tokens (`docs/tools/browser.md:168-175,210,250`) and explicitly warns to "Treat remote CDP URLs/tokens as secrets" (`docs/tools/browser.md:277`). The leaked credentials are reusable outside OpenClaw: a read-scoped client can take the returned Browserless or Browserbase URL and open direct browser sessions against the upstream service, bypassing OpenClaw's scope checks and audit trail entirely.
## Environment
Source review on commit `e864421d83cf292d1dc238f5383f3ac4b011c924` (tag `v2026.3.23-2`). Affected whenever operators store auth-bearing remote CDP URLs in `browser.cdpUrl` or `browser.profiles.*.cdpUrl` and expose `config.get` to read-scoped clients.
## Remediation Advice
Register `browser.cdpUrl` and `browser.profiles.*.cdpUrl` as sensitive in the Zod schema (add `.register(sensitive)` to both declarations in `src/config/zod-schema.ts`), and extend `isUserInfoUrlPath` in `src/config/redact-snapshot.ts` to also match `.cdpUrl` paths so that embedded URL userinfo is stripped even when the full value is not sentinel-redacted. This aligns the config redaction layer with the documentation guidance that CDP URLs should be treated as secrets.
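A sketch of the redaction-side half of that fix, assuming the `isUserInfoUrlPath` body quoted earlier (the `stripUserInfo` helper is hypothetical, and note it does not touch `?token=...` query credentials, which is why the `.register(sensitive)` registration remains the primary fix):

```typescript
// Extended to treat .cdpUrl like the other URL-bearing config paths.
function isUserInfoUrlPath(path: string): boolean {
  return (
    path.endsWith(".baseUrl") ||
    path.endsWith(".httpUrl") ||
    path.endsWith(".cdpUrl")
  );
}

// Hypothetical userinfo-stripping step applied to matched paths.
function stripUserInfo(rawUrl: string): string {
  try {
    const u = new URL(rawUrl);
    u.username = "";
    u.password = "";
    return u.toString();
  } catch {
    return rawUrl; // leave non-URL strings untouched
  }
}

console.log(isUserInfoUrlPath("browser.profiles.remote.cdpUrl")); // true
console.log(stripUserInfo("https://alice:secret@example.com/chrome"));
// → "https://example.com/chrome"
```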
<!-- submission-marker:DE-dny-remote-cdp-url-redaction-bypass -->
| open | null | false | 0 | [] | [] | 2026-03-24T05:52:09Z | 2026-03-24T05:52:09Z | null | CONTRIBUTOR | null | 20260324T233649Z | 2026-03-24T23:36:49Z | coygeek | 65,363,919 | MDQ6VXNlcjY1MzYzOTE5 | User | false |
openclaw/openclaw | 4,125,471,138 | I_kwDOQb6kR8715bGi | 53,439 | https://github.com/openclaw/openclaw/issues/53439 | https://api.github.com/repos/openclaw/openclaw/issues/53439 | fix(synology-chat): respond 200+body to webhook POST; handle HEAD probe | ## Problem
Synology Chat's outgoing webhook integration sends a **HEAD request** to verify the endpoint before each POST. OpenClaw currently returns `405 Method Not Allowed` for HEAD requests, causing Synology to mark the webhook as broken and **stop delivering subsequent messages**.
Additionally, when OpenClaw successfully processes an incoming message, it responds with `204 No Content`. Synology Chat expects `200 OK` with `{"success":true}` in the body — receiving a 204 causes it to log _"Failed to send a request to the bot server"_ and suppress follow-up messages.
## Behavior
- After successfully processing a message, the bot shows:
`(Only visible to you) Failed to send a request to the bot server. Please contact the bot owner.`
- Subsequent messages stop being delivered to OpenClaw
## Root Cause
In `extensions/synology-chat/src/webhook-handler.ts`:
1. `createWebhookHandler` returns 405 for any non-POST method, including HEAD
2. `respondNoContent()` sends `204` with no body instead of `200 + {"success":true}`
## Fix
- Handle HEAD requests by returning `200 OK`
- Change `respondNoContent` to return `200 OK` with `{"success":true}` body
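A sketch of the corrected handler behaviour (the decision helper and wiring are ours, not the actual `extensions/synology-chat` code):

```typescript
import { createServer } from "node:http";

// Pure decision helper so the protocol behaviour is easy to unit-test:
//   HEAD  -> 200, empty body (Synology probes before each POST; 405 marks the hook broken)
//   POST  -> 200 + {"success":true} (Synology treats 204 No Content as a failure)
//   other -> 405
function webhookResponse(method: string | undefined): { status: number; body: string } {
  if (method === "HEAD") return { status: 200, body: "" };
  if (method !== "POST") return { status: 405, body: "" };
  return { status: 200, body: JSON.stringify({ success: true }) };
}

// Minimal wiring; actual message processing is elided.
const server = createServer((req, res) => {
  const { status, body } = webhookResponse(req.method);
  res.writeHead(status, { "Content-Type": "application/json" });
  res.end(body);
});
// server.listen(port) would be called by the real channel plugin.
```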
## Testing
Verified end-to-end with a Synology NAS bot integration — Synology Chat successfully delivers messages and receives replies after this fix. | open | null | false | 0 | [] | [] | 2026-03-24T05:57:33Z | 2026-03-24T05:57:33Z | null | NONE | null | 20260324T233649Z | 2026-03-24T23:36:49Z | dennis-lynch | 59,386,026 | MDQ6VXNlcjU5Mzg2MDI2 | User | false |
openclaw/openclaw | 4,125,363,881 | I_kwDOQb6kR8715A6p | 53,417 | https://github.com/openclaw/openclaw/issues/53417 | https://api.github.com/repos/openclaw/openclaw/issues/53417 | [Bug]: Remote CDP URLs bypass config redaction, leaking credentials to read-scoped clients |
## Severity Assessment
### CVSS Assessment
| Metric | v3.1 | v4.0 |
|--------|------|------|
| **Score** | 9.9 / 10.0 | 9.4 / 10.0 |
| **Severity** | Critical | Critical |
| **Vector** | CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:C/C:H/I:H/A:L | CVSS:4.0/AV:N/AC:L/AT:N/PR:L/UI:N/VC:H/VI:H/VA:L/SC:H/SI:H/SA:L |
| **Calculator** | [CVSS v3.1 Calculator](https://www.first.org/cvss/calculator/3.1#CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:C/C:H/I:H/A:L) | [CVSS v4.0 Calculator](https://www.first.org/cvss/calculator/4.0#CVSS:4.0/AV:N/AC:L/AT:N/PR:L/UI:N/VC:H/VI:H/VA:L/SC:H/SI:H/SA:L) |
### Threat Model Alignment
**Classification:** `security-specific`
The `config.get` method is explicitly read-scoped (`operator.read`) and returns a redacted config snapshot to prevent credential disclosure. This finding demonstrates that the redaction layer fails to treat `browser.cdpUrl` and `browser.profiles.*.cdpUrl` as sensitive, leaking reusable third-party service credentials (Browserless/Browserbase API keys, HTTP Basic auth) to any authenticated read-scoped client. This crosses the redaction trust boundary: read-scoped callers are not supposed to see raw credentials in redacted snapshots.
This is not covered by the Out of Scope section. It is not a multi-tenant assumption (single-operator read-scope clients should still not see raw credentials in redacted snapshots), not a prompt injection chain, and not an operator-intended feature. The documentation at `docs/tools/browser.md:277` explicitly states "Treat remote CDP URLs/tokens as secrets," yet the config redaction layer does not enforce this.
## Impact
Authenticated clients with only `operator.read` scope can call `config.get` and receive the full, unredacted `browser.cdpUrl` and `browser.profiles.*.cdpUrl` values, including embedded query tokens (e.g., Browserless `?token=...`) and HTTP Basic credentials (e.g., `user:pass@host`). The leaked credentials are reusable outside OpenClaw to connect directly to the upstream browser service, bypassing OpenClaw's audit trail and scope checks.
## Affected Component
**File:** `src/config/zod-schema.ts:358,386`
```typescript
// browser.cdpUrl — NOT registered with sensitive
cdpUrl: z.string().optional(),
// browser.profiles.*.cdpUrl — NOT registered with sensitive
cdpUrl: z.string().optional(),
```
**File:** `src/config/redact-snapshot.ts:32-33`
```typescript
function isUserInfoUrlPath(path: string): boolean {
return path.endsWith(".baseUrl") || path.endsWith(".httpUrl");
}
```
**File:** `src/config/schema.hints.ts:104-110`
```typescript
const SENSITIVE_PATTERNS = [
/token$/i,
/password/i,
/secret/i,
/api.?key/i,
/serviceaccount(?:ref)?$/i,
];
```
The `cdpUrl` field name does not match any sensitive pattern, is not registered as sensitive in the Zod schema, and `isUserInfoUrlPath` only strips userinfo from `.baseUrl` and `.httpUrl` paths -- not `.cdpUrl` paths. A separate `redactCdpUrl` helper exists at `src/browser/cdp.helpers.ts:42-58` but is only used in CLI status output and logging, not in the config snapshot redaction path.
## Technical Reproduction
1. Configure a remote browser profile with an auth-bearing CDP URL:
```json5
{ browser: { profiles: { remote: { cdpUrl: "https://alice:secret@example.com/chrome?token=supersecret123" } } } }
```
2. Connect a Gateway client that has only `operator.read` scope.
3. Call `config.get`.
4. Observe the response contains the full unredacted `cdpUrl` in `payload.config.browser.profiles.remote.cdpUrl`, including the query token and Basic-auth userinfo.
## Demonstrated Impact
The config redaction layer (`redactConfigSnapshot` at `src/config/redact-snapshot.ts:382-431`) is the single control point for stripping credentials before read-scoped API responses. It relies on two mechanisms: (1) schema-level `sensitive` registration via Zod's `.register(sensitive)`, and (2) pattern-based fallback matching on field names via `SENSITIVE_PATTERNS`. Neither mechanism covers `cdpUrl`:
- `browser.cdpUrl` and `browser.profiles.*.cdpUrl` are declared as plain `z.string().optional()` without `.register(sensitive)` (confirmed in `src/config/zod-schema.ts:358,386`).
- The field name `cdpUrl` does not match any entry in `SENSITIVE_PATTERNS` (`/token$/i`, `/password/i`, `/secret/i`, `/api.?key/i`, `/serviceaccount/i`).
- The URL userinfo stripping (`isUserInfoUrlPath`) only triggers for paths ending in `.baseUrl` or `.httpUrl`, not `.cdpUrl`.
- The generated config baseline confirms `"sensitive": false` for both paths (`docs/.generated/config-baseline.jsonl:702,716`).
Meanwhile, OpenClaw's own documentation explicitly supports and encourages auth-bearing CDP URLs with query tokens (`docs/tools/browser.md:168-175,210,250`) and explicitly warns to "Treat remote CDP URLs/tokens as secrets" (`docs/tools/browser.md:277`). The leaked credentials are reusable outside OpenClaw: a read-scoped client can take the returned Browserless or Browserbase URL and open direct browser sessions against the upstream service, bypassing OpenClaw's scope checks and audit trail entirely.
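How directly reusable the leaked value is can be shown with the standard WHATWG `URL` API, using the reproduction URL from above:

```typescript
// Parse the value returned verbatim by config.get to a read-scoped client.
const leaked = new URL("https://alice:secret@example.com/chrome?token=supersecret123");

// Both credential channels fall out with simple property reads:
const basicAuth = `${leaked.username}:${leaked.password}`; // "alice:secret"
const queryToken = leaked.searchParams.get("token");       // "supersecret123"
console.log(basicAuth, queryToken);
```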
## Environment
Source review on commit `e864421d83cf292d1dc238f5383f3ac4b011c924` (tag `v2026.3.23-2`). Affected whenever operators store auth-bearing remote CDP URLs in `browser.cdpUrl` or `browser.profiles.*.cdpUrl` and expose `config.get` to read-scoped clients.
## Remediation Advice
Register `browser.cdpUrl` and `browser.profiles.*.cdpUrl` as sensitive in the Zod schema (add `.register(sensitive)` to both declarations in `src/config/zod-schema.ts`), and extend `isUserInfoUrlPath` in `src/config/redact-snapshot.ts` to also match `.cdpUrl` paths so that embedded URL userinfo is stripped even when the full value is not sentinel-redacted. This aligns the config redaction layer with the documentation guidance that CDP URLs should be treated as secrets.
| closed | completed | false | 2 | [] | [] | 2026-03-24T05:26:51Z | 2026-03-24T06:00:02Z | 2026-03-24T06:00:02Z | CONTRIBUTOR | null | 20260324T233649Z | 2026-03-24T23:36:49Z | coygeek | 65,363,919 | MDQ6VXNlcjY1MzYzOTE5 | User | false |
openclaw/openclaw | 4,125,466,508 | I_kwDOQb6kR8715Z-M | 53,438 | https://github.com/openclaw/openclaw/issues/53438 | https://api.github.com/repos/openclaw/openclaw/issues/53438 | [Feature Request] Control UI Chat Window - Support Custom Agent Display Name | ## Problem
In the OpenClaw Control UI chat window, the agent is always displayed as "Assistant" by default. Even though the agent has a custom name defined in SOUL.md/IDENTITY.md (e.g., "小呦"), the Control UI still shows "Assistant" in the chat header.
## Expected Behavior
The Control UI should display the custom agent name from IDENTITY.md or a configurable name in openclaw.json, rather than the hardcoded "Assistant" label.
## Use Case
Users who have customized their agent's persona (e.g., "小呦" for a 12-year-old cute boy assistant) want the Control UI to reflect their custom agent identity consistently across all surfaces.
## Environment
- OpenClaw version: 2026.3.23
- Platform: Linux
- Control UI: Browser-based dashboard
## Additional Context
The agent's identity is already properly configured in:
- `IDENTITY.md` - agent name and persona
- `SOUL.md` - agent soul and tone
But the Control UI chat window ignores these settings and displays "Assistant" instead.
---
Is there a configuration option to customize the Control UI display name? If not, this would be a great feature request! | open | null | false | 1 | [] | [] | 2026-03-24T05:56:14Z | 2026-03-24T06:04:40Z | null | NONE | null | 20260324T233649Z | 2026-03-24T23:36:49Z | OrekiDawson | 182,470,837 | U_kgDOCuBItQ | User | false |
openclaw/openclaw | 4,125,504,678 | I_kwDOQb6kR8715jSm | 53,447 | https://github.com/openclaw/openclaw/issues/53447 | https://api.github.com/repos/openclaw/openclaw/issues/53447 | Thread sessions should inherit parent channel's modelByChannel override | ## Summary
Discord threads created under a channel with a `modelByChannel` override do not inherit that override. Instead, they fall back to the default primary model.
## Current Behaviour
- `modelByChannel` resolves by exact channel ID match
- Discord threads get their own unique channel ID (different from the parent)
- Result: a thread under a channel configured for e.g. Opus still gets routed to the default model (Sonnet)
## Expected Behaviour
When a Discord thread's channel ID has no explicit `modelByChannel` entry, OpenClaw should resolve the thread's **parent channel ID** and use that channel's model override (if one exists). Only fall back to the default model if neither the thread nor its parent has an override.
## Why This Matters
- Users configure specific channels for specific models (e.g. #rooks-nest → Opus for deep work)
- Threads are a natural extension of their parent channel's context
- Having to `/model` in every new thread defeats the purpose of channel-level model routing
- Thread IDs are ephemeral — adding them to `modelByChannel` config is impractical
## Suggested Approach
During model resolution, if the channel ID is a thread:
1. Check `modelByChannel` for the thread's own ID (existing behaviour — allows explicit overrides)
2. If no match, resolve the thread's parent channel ID via Discord API / cached guild data
3. Check `modelByChannel` for the parent channel ID
4. Fall back to default model if neither matches
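The lookup order above could be sketched as follows (all names are hypothetical; OpenClaw's real resolution code will differ, and step 2's parent lookup via the Discord API / guild cache is elided):

```typescript
type ModelByChannel = Record<string, string>;

// Hypothetical resolver: thread ID first, then parent channel, then default.
function resolveModel(
  channelId: string,
  parentChannelId: string | undefined,
  modelByChannel: ModelByChannel,
  defaultModel: string,
): string {
  if (modelByChannel[channelId]) return modelByChannel[channelId]; // explicit override
  if (parentChannelId && modelByChannel[parentChannelId]) {
    return modelByChannel[parentChannelId]; // inherit from parent channel
  }
  return defaultModel;
}

// A thread under #rooks-nest (ID "111") inherits its parent's Opus override.
const byChannel = { "111": "anthropic/claude-opus-4-6" };
console.log(resolveModel("thread-999", "111", byChannel, "anthropic/claude-sonnet-4-6"));
// → "anthropic/claude-opus-4-6"
```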
| open | null | false | 0 | [] | [] | 2026-03-24T06:06:56Z | 2026-03-24T06:06:56Z | null | NONE | null | 20260324T233649Z | 2026-03-24T23:36:49Z | Stache73 | 7,196,384 | MDQ6VXNlcjcxOTYzODQ= | User | false |
openclaw/openclaw | 4,125,043,353 | I_kwDOQb6kR8713yqZ | 53,365 | https://github.com/openclaw/openclaw/issues/53365 | https://api.github.com/repos/openclaw/openclaw/issues/53365 | [Bug]: openclaw-weixin v1.0.2 login shows no QR code, fails with missing function resolvePreferredOpenClawTmpDir | ### Bug type
Behavior bug (incorrect output/state without crash)
### Summary
System: macOS
OpenClaw version: 2026.3.23
WeChat plugin version: v1.0.2
Command:
openclaw channels login --channel openclaw-weixin
Symptom: no QR code is displayed; the command fails immediately with:
Missing function: resolvePreferredOpenClawTmpDir
The plugin appears to be incompatible with the current OpenClaw version.
### Steps to reproduce
System: macOS
OpenClaw version: 2026.3.23
WeChat plugin version: v1.0.2
Command:
openclaw channels login --channel openclaw-weixin
Symptom: no QR code is displayed; the command fails immediately with:
Missing function: resolvePreferredOpenClawTmpDir
The plugin appears to be incompatible with the current OpenClaw version.
### Expected behavior
An updated, compatible version of the WeChat plugin should be released as soon as possible.
### Actual behavior
<img width="1058" height="398" alt="Image" src="https://github.com/user-attachments/assets/8e1705ec-a57e-4fa2-a39e-6b4c0776edcf" />
### OpenClaw version
2026.03.23
### Operating system
macOS 12.7
### Install method
_No response_
### Model
mac/kimi 2.5
### Provider / routing chain
openclaw-local gateway ->kimi 2.5
### Additional provider/model setup details
_No response_
### Logs, screenshots, and evidence
```shell
```
### Impact and severity
_No response_
### Additional information
_No response_ | open | null | false | 2 | [
"bug",
"bug:behavior"
] | [] | 2026-03-24T03:45:49Z | 2026-03-24T06:07:19Z | null | NONE | null | 20260324T233649Z | 2026-03-24T23:36:49Z | lanjing-China | 270,524,039 | U_kgDOEB_ehw | User | false |
openclaw/openclaw | 4,125,538,816 | I_kwDOQb6kR8715roA | 53,457 | https://github.com/openclaw/openclaw/issues/53457 | https://api.github.com/repos/openclaw/openclaw/issues/53457 | Feature: Global system prompt injection for multi-agent governance (with NemoClaw analysis) | ## Summary
When running multiple agents across a portfolio (different workspaces, different servers), there is no single configuration point to inject mandatory rules into every agent's system prompt.
## Use Case
An organization operating multiple agents needs to enforce a universal policy (e.g., citation verification, compliance rules, output safety standards) across all agents. Today, the only option is to manually add the rule to each agent's SOUL.md or AGENTS.md — one file per workspace, per server.
This works but doesn't scale:
- New agents can be deployed without the rule
- There's no enforcement at the framework level — it's convention, not configuration
- Multi-server deployments require SSH'ing into each machine to update workspace files
## Proposed Solution
A new configuration key at the `agents.defaults` level:
```json5
{
agents: {
defaults: {
systemPromptAppend: "Never fabricate a URL, citation, or verifiable fact. If unsure, say so.",
// or for multi-line:
systemPromptAppendFile: "policies/global-rules.md",
}
}
}
```
This text would be injected into the system prompt for **every agent run** on that gateway, regardless of which agent or workspace is active.
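The injection itself would be a small step at prompt-assembly time. A sketch under the proposed config (the function name is hypothetical):

```typescript
// Append the gateway-wide rule after the per-agent system prompt,
// for every agent run, regardless of workspace.
function buildSystemPrompt(agentPrompt: string, globalAppend?: string): string {
  return globalAppend ? `${agentPrompt}\n\n${globalAppend}` : agentPrompt;
}

const prompt = buildSystemPrompt(
  "You are the workspace agent.",
  "Never fabricate a URL, citation, or verifiable fact. If unsure, say so.",
);
```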
## Alternatives Considered
1. **Per-workspace bootstrap files (AGENTS.md, SOUL.md)** — works today but requires per-workspace maintenance and has no enforcement mechanism
2. **Per-channel `systemPrompt`** — channel-scoped, not agent-scoped; requires N configs to maintain
3. **`agent:bootstrap` hooks** — internal/code-level, not a user-facing config
---
## NemoClaw Analysis — Does It Already Solve This?
We evaluated [NVIDIA NemoClaw](https://github.com/NVIDIA/NemoClaw) (alpha since March 16, 2026) to determine whether it addresses this need.
### What NemoClaw Provides
NemoClaw runs OpenClaw inside NVIDIA OpenShell sandboxes with:
- **Network isolation** — Deny-by-default egress with declarative YAML policies. Unknown hosts blocked and surfaced for operator approval.
- **Filesystem sandboxing** — Landlock LSM enforcement. Agents write only to `/sandbox` and `/tmp`.
- **Process isolation** — seccomp + network namespaces. Blocks privilege escalation and dangerous syscalls.
- **Inference routing** — All model API calls pass through the OpenShell gateway. Transparent to the agent.
### What NemoClaw Does NOT Provide
NemoClaw controls the **execution environment** (what agents can *do*), not **prompt content** (what agents *say*).
| Capability | NemoClaw | This Feature Request |
|---|---|---|
| Prevent unauthorized network access | ✅ Deny-by-default | N/A |
| Prevent filesystem escape | ✅ Landlock | N/A |
| Inject governance rules into system prompt | ❌ Not supported | ✅ Core request |
| Validate output content (URL verification) | ❌ Not in scope | Related (post-generation) |
| Apply policy across multiple agents from one config | ❌ Single-sandbox model | ✅ Multi-agent governance |
| Intercept/modify prompt before LLM call | ❌ Transparent proxy | Potential future hook |
### Engineering Assessment
**From an SRE perspective:** NemoClaw's OpenShell inference routing is a transparent destination proxy, not a payload transformer. There is no documented hook for prompt mutation, and forking OpenShell to add one is not practical. The deny-by-default egress policy does provide useful defense-in-depth (agents can't silently validate fabricated URLs against unreachable hosts), but that's orthogonal to content governance.
**From a security perspective:** NemoClaw's sandbox model (Landlock + seccomp + network namespaces) is genuine infrastructure-level isolation — useful for preventing data exfiltration and unauthorized access. However, content-level governance (preventing fabricated citations, hallucinated facts) operates at a different layer entirely. No amount of network or process isolation prevents an LLM from generating a plausible-looking but fake URL in its output. The gateway-level `systemPromptAppend` is the correct architectural layer for this problem.
### Conclusion
**NemoClaw and `systemPromptAppend` are complementary, not overlapping:**
- **NemoClaw** = execution-level guardrails (what the agent can *do*)
- **`systemPromptAppend`** = content-level guardrails (what the agent should *say* and *not fabricate*)
A complete multi-agent governance stack needs both. NemoClaw prevents unauthorized actions. `systemPromptAppend` prevents fabricated content.
**Potential future hook:** NemoClaw's inference routing intercepts all model API calls before they reach the provider. This could theoretically be extended to inject system prompt rules at the proxy layer — worth exploring as a complementary NemoClaw/OpenShell feature request.
---
## Impact
This is particularly important for organizations using OpenClaw in regulated or public-facing contexts where hallucinated facts, fabricated citations, or policy violations carry real reputational or compliance risk.
| open | null | false | 0 | [] | [] | 2026-03-24T06:16:21Z | 2026-03-24T06:16:21Z | null | NONE | null | 20260324T233649Z | 2026-03-24T23:36:49Z | noosphera-deploy[bot] | 263,834,741 | BOT_kgDOD7nMdQ | Bot | false |
openclaw/openclaw | 4,125,534,138 | I_kwDOQb6kR8715qe6 | 53,456 | https://github.com/openclaw/openclaw/issues/53456 | https://api.github.com/repos/openclaw/openclaw/issues/53456 | ACP/acpx: claude-agent-acp requires API key, doesn't work with Claude Max subscription OAuth tokens | ## Description
The stock acpx plugin in v2026.3.22+ uses `@zed-industries/claude-agent-acp` as the ACP adapter for Claude Code sessions. The adapter requires a standard Anthropic API key (`sk-ant-api03-...`) and cannot authenticate with a Claude Max subscription.
Users with Claude Max/Pro subscriptions who authenticate via `claude.ai` OAuth cannot use ACP sessions because:
1. `claude-agent-acp` calls the Anthropic API directly and requires `ANTHROPIC_API_KEY`
2. OAuth tokens from claude.ai login are not accepted by the adapter
3. The Claude Code CLI (`claude`) itself works fine with OAuth auth (`claude --print` succeeds), but it doesn't support ACP protocol server mode
## Environment
- OpenClaw v2026.3.23-2
- acpx v0.3.1 (global), stock acpx plugin v2026.3.23
- Claude Code CLI v2.1.81 (authenticated via claude.ai OAuth, Max subscription)
- macOS arm64, Node.js v25.8.1
- `plugins.entries.acpx.config.permissionMode: "approve-all"`
## Previous behavior (v2026.3.13)
ACP sessions worked because the acpx plugin was bundled inside OpenClaw and shared the gateway's auth context, allowing it to inject resolved API tokens into the child process environment.
## Current behavior (v2026.3.22+)
1. `sessions_spawn(runtime="acp", agentId="claude")` is accepted
2. acpx launches `npx @zed-industries/claude-agent-acp`
3. claude-agent-acp returns `Authentication required`
4. Session dies, acpx retries: `acpx ensureSession replacing dead named session`
5. Eventually: `Permission denied by ACP runtime (acpx). ACPX blocked a write/exec permission request in a non-interactive session`
## Reproduction
```bash
# Works (claude CLI with OAuth):
claude --print -p "say hello"
# Output: Hello!
# Fails (acpx → claude-agent-acp):
acpx --approve-all exec "say hello"
# Output: [error] RUNTIME: Authentication required
# Fails even with OAuth token in env:
ANTHROPIC_API_KEY="sk-ant-oat01-..." acpx --approve-all exec "say hello"
# Output: [error] RUNTIME: Authentication required
```
## Expected behavior
The acpx plugin should be able to use Claude Code CLI's existing OAuth authentication (Max/Pro subscription) for ACP sessions, similar to how `claude --print` works. The adapter should either:
1. Bridge Claude Code CLI's OAuth auth to the ACP protocol, or
2. Accept OAuth tokens (`sk-ant-oat01-...`) in addition to standard API keys, or
3. Provide a config option to use Claude Code CLI directly instead of claude-agent-acp
## Workaround
Currently none for Max subscription users without a separate API key. Subagent spawning (`runtime="subagent"`) works as an alternative for non-ACP coding tasks. | open | null | false | 0 | [] | [] | 2026-03-24T06:15:00Z | 2026-03-24T06:17:45Z | null | NONE | null | 20260324T233649Z | 2026-03-24T23:36:49Z | EscalioDev | 263,978,272 | U_kgDOD7v9IA | User | false |
openclaw/openclaw | 4,125,237,545 | I_kwDOQb6kR8714iEp | 53,387 | https://github.com/openclaw/openclaw/issues/53387 | https://api.github.com/repos/openclaw/openclaw/issues/53387 | [LaunchAgent] gateway install should not copy .env variables into plist EnvironmentVariables | ## Problem
`openclaw gateway install` copies all environment variables from `~/.openclaw/.env` into the LaunchAgent plist `EnvironmentVariables` dictionary. This creates a **dual-source-of-truth problem**:
1. User updates a key in `~/.openclaw/.env` (e.g. rotates `TAVILY_API_KEY`)
2. Restarts gateway via `openclaw gateway restart`
3. The old value in plist `EnvironmentVariables` takes precedence because `.env` loading uses `dotenv` which does **not override** existing env vars
4. The new key is silently ignored — gateway keeps using the stale plist value
The only workaround is to run `openclaw gateway install` again to regenerate the plist, then restart.
## Expected behavior
`openclaw gateway install` should only write OpenClaw-internal env vars to the plist (e.g. `OPENCLAW_GATEWAY_PORT`, `OPENCLAW_SERVICE_MARKER`, `PATH`, `HOME`, etc.). User-managed secrets and API keys should be loaded exclusively from `~/.openclaw/.env` at gateway startup, so editing `.env` + restarting is sufficient.
## Environment
- OpenClaw: 2026.3.23-1
- OS: macOS 26.2 (arm64)
- Service: LaunchAgent (launchd)
## Reproduction
1. Set `TAVILY_API_KEY=key-A` in `~/.openclaw/.env`
2. Run `openclaw gateway install` → plist gets `TAVILY_API_KEY=key-A`
3. Change to `TAVILY_API_KEY=key-B` in `~/.openclaw/.env`
4. Run `openclaw gateway restart`
5. Gateway still uses `key-A` from plist, ignoring `key-B` in `.env` | open | null | false | 1 | [] | [] | 2026-03-24T04:48:56Z | 2026-03-24T06:20:29Z | null | NONE | null | 20260324T233649Z | 2026-03-24T23:36:49Z | jusaka | 5,572,743 | MDQ6VXNlcjU1NzI3NDM= | User | false |
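The precedence problem reduces to `dotenv`'s default non-override semantics: keys already present in the process environment win. A self-contained sketch of that behaviour (not the real `dotenv` implementation):

```typescript
// dotenv-style load with override disabled: existing env keys are kept.
function loadDotenv(
  parsed: Record<string, string>,
  env: Record<string, string | undefined>,
): void {
  for (const [key, value] of Object.entries(parsed)) {
    if (env[key] === undefined) env[key] = value; // skip keys launchd already set
  }
}

// launchd injected the stale plist value before the gateway process started:
const env: Record<string, string | undefined> = { TAVILY_API_KEY: "key-A" };
loadDotenv({ TAVILY_API_KEY: "key-B" }, env); // fresh value from ~/.openclaw/.env
// env.TAVILY_API_KEY is still "key-A" — the rotated key is silently ignored.
```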
openclaw/openclaw | 4,125,608,392 | I_kwDOQb6kR87158nI | 53,469 | https://github.com/openclaw/openclaw/issues/53469 | https://api.github.com/repos/openclaw/openclaw/issues/53469 | [Feature]: Add private network access support for web tools and gateway | ### Summary
Allow web tools (`web_fetch`, `web_search`, `image_tool`) and the gateway to access private/local network addresses, enabling self-hosted search engines and local-first deployments.
### Problem to solve
Currently, the SSRF guard blocks all requests to private IP ranges (127.x.x.x, 10.x.x.x, 192.168.x.x), making it impossible to use self-hosted infrastructure such as SearXNG or local storage nodes. There is no configuration path to opt into private network access, even in explicitly local gateway deployments where SSRF risk is not a concern.
### Proposed solution
Introduce an `allowPrivateNetwork` boolean option for `web_fetch`, `web_search`, and `image_tool`. When running in `local` gateway mode, automatically set `allowPrivateNetwork: true` for a seamless local-first experience. Wire this flag through the SSRF guard (`fetchWithSsrFGuard`) so that private addresses are reachable when the option is enabled.
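A sketch of how the flag could gate the guard (helper names are ours; a production SSRF guard must also cover IPv6, DNS resolution, and redirect targets):

```typescript
// Hypothetical check: is this host a loopback/private IPv4 address?
function isPrivateIpv4(host: string): boolean {
  const m = host.match(/^(\d{1,3})\.(\d{1,3})\.(\d{1,3})\.(\d{1,3})$/);
  if (!m) return false;
  const a = Number(m[1]);
  const b = Number(m[2]);
  return (
    a === 127 ||                        // loopback
    a === 10 ||                         // 10.0.0.0/8
    (a === 192 && b === 168) ||         // 192.168.0.0/16
    (a === 172 && b >= 16 && b <= 31)   // 172.16.0.0/12
  );
}

// The proposed opt-in: only reject private hosts when the flag is off.
function checkSsrf(host: string, allowPrivateNetwork: boolean): boolean {
  return allowPrivateNetwork || !isPrivateIpv4(host);
}

console.log(checkSsrf("192.168.1.10", false)); // false — blocked (current behaviour)
console.log(checkSsrf("192.168.1.10", true));  // true — reachable for self-hosted SearXNG
```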
### Alternatives considered
Manually whitelisting specific IPs in the SSRF guard — too brittle and requires code changes per deployment. Routing local services through a public proxy — adds unnecessary latency and defeats the purpose of self-hosting.
### Impact
- **Affected:** Teams running self-hosted search engines (SearXNG, etc.) or air-gapped/private deployments
- **Severity:** Blocks workflow — private network access is completely unavailable today
- **Frequency:** Always, for any private-network deployment
- **Consequence:** Self-hosted search is non-functional; operators must use cloud search providers even when a local alternative exists
### Evidence/examples
- SearXNG Docker setup requires access to `127.x.x.x` or `192.168.x.x` by default
- Comparable `allowPrivateNetwork` patterns exist in Fetch API specs and other gateway frameworks
- Schema validation for the new flag is covered in `src/config/zod-schema.agent-runtime.ts`
- New search providers (Tavily, SearXNG) added alongside this change
### Additional information
- Must remain backward-compatible: `allowPrivateNetwork` defaults to `false`, preserving existing SSRF protection for all non-local deployments
- A troubleshooting guide for plaintext WebSocket/HTTP issues on private networks is included in `docs/websearch.md`
- No sensitive credentials or private configuration data are exposed by this change | open | null | false | 0 | [
"enhancement"
] | [] | 2026-03-24T06:34:25Z | 2026-03-24T06:34:25Z | null | NONE | null | 20260324T233649Z | 2026-03-24T23:36:49Z | leemgs | 82,404 | MDQ6VXNlcjgyNDA0 | User | false |
openclaw/openclaw | 4,125,449,561 | I_kwDOQb6kR8715V1Z | 53,431 | https://github.com/openclaw/openclaw/issues/53431 | https://api.github.com/repos/openclaw/openclaw/issues/53431 | [Bug]: Feishu bot does not respond to messages - WebSocket mode cannot receive messages | ### Bug type
Regression (worked before, now fails)
### Summary
# Feishu bot does not respond to messages - WebSocket mode cannot receive messages
### Steps to reproduce
## Problem description
The Feishu bot is configured and the OpenClaw Feishu channel reports a healthy state (enabled, configured, running). The WebSocket client starts and becomes ready, but Feishu messages never reach OpenClaw and the bot does not respond.
## Environment
- OpenClaw version: 2026.3.13
- Node version: v24.14.0
- Operating system: Windows 10.0.19045 (x64)
- Feishu SDK version: @larksuiteoapi/node-sdk ^1.59.0
## Steps to reproduce
1. Configure the Feishu app (App ID, App Secret, encryptKey, verificationToken)
2. On the Feishu open platform, configure event subscriptions (long-connection event delivery + receive messages v2.0)
3. Publish Feishu app version 1.0.2
4. Start the OpenClaw Gateway
5. Confirm the Feishu channel status: enabled, configured, running
6. Confirm the WebSocket client started successfully (logs show "ws client ready")
7. Send a message to the bot from the Feishu client
8. The bot does not respond; the logs contain no record of any received message
## Expected behavior
The bot should receive the message and reply.
## Actual behavior
The bot does not respond, and the OpenClaw logs contain no message-received entries.
## Key logs
### Expected behavior
The bot should receive the message and reply.
### Actual behavior
The bot does not respond to messages, and the OpenClaw logs contain no message-received entries.
The WebSocket client has started (logs show "ws client ready"), but Feishu messages never reach OpenClaw.
### OpenClaw version
2026.3.13
### Operating system
Windows 10.0.19045 (x64)
### Install method
npm install -g openclaw
### Model
zai/glm-5
### Provider / routing chain
zai/glm-5 -> zai/glm-4.7 -> zai/glm-4.7-flash
### Additional provider/model setup details
Using GLM models from Zhipu AI
### Logs, screenshots, and evidence
```shell
feishu[default]: WebSocket client started
ws client ready
Feishu channel status: enabled, configured, running
```
### Impact and severity
Affected: Feishu bot users
Severity: High (blocks Feishu bot replies)
Frequency: 100% (always)
Consequence: The Feishu bot cannot receive or respond to messages
### Additional information
Full configuration and log details have been saved to the file OpenClaw_Feishu_Issue.md on the desktop. | open | null | false | 1 | [
"bug",
"regression"
] | [] | 2026-03-24T05:51:28Z | 2026-03-24T06:35:31Z | null | NONE | null | 20260324T233649Z | 2026-03-24T23:36:49Z | qboss111 | 110,461,003 | U_kgDOBpWASw | User | false |
openclaw/openclaw | 4,125,590,592 | I_kwDOQb6kR87154RA | 53,464 | https://github.com/openclaw/openclaw/issues/53464 | https://api.github.com/repos/openclaw/openclaw/issues/53464 | [Bug]: Anthropic setup-token silently truncated when terminal line-wraps during paste | ### Bug type
Regression (worked before, now fails)
### Summary
`claude setup-token` generates 108-character OAuth tokens. When pasted into OpenClaw's Clack text prompts (`openclaw models auth` or `openclaw models auth paste-token`), the token wraps across two terminal lines and only the first ~80 characters are captured. The truncated token passes validation (`ANTHROPIC_SETUP_TOKEN_MIN_LENGTH` is 80) and gets stored in `auth-profiles.json`, but every Anthropic API call fails with `401 Invalid bearer token`. There is no indication the token was truncated.
### Steps to reproduce
1. Run `claude setup-token` in terminal — generates a 108-character `sk-ant-oat01-*` token
2. Run `openclaw models auth` and select Anthropic setup-token flow
3. Paste the token into the Clack text prompt — token wraps across two lines in terminal
4. Only the first ~80 characters are captured and stored in `auth-profiles.json`
5. All Anthropic API calls fail with `401 authentication_error: Invalid bearer token`
### Expected behavior
The full 108-character token should be stored. Token input should strip newline/carriage-return characters that occur when the terminal wraps long tokens across multiple lines.
Suggested fix: replace `.trim()` with `.replace(/[\n\r\s]+/g, "").trim()` in the token input handlers in:
- `dist/extensions/anthropic/index.js` (line 323)
- `dist/models-C6Rr59E2.js` (line 358)
- `dist/pi-embedded-CzQCqSlH.js` `validateAnthropicSetupToken` (line 10961)
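As a quick illustration of why the suggested replacement works, here is a minimal sketch (the helper name is hypothetical, not OpenClaw code) showing a terminal-wrapped paste collapsing back to the full 108-character token:

```javascript
// Hypothetical helper mirroring the suggested one-line fix: strip every
// whitespace character (including the newline inserted when the terminal
// wraps the token across lines) before validating and storing it.
function sanitizePastedToken(raw) {
  return raw.replace(/[\n\r\s]+/g, "").trim();
}

// Simulate a 108-character token pasted across two wrapped terminal lines.
const token = "sk-ant-oat01-" + "a".repeat(95); // 13 + 95 = 108 chars
const wrapped = token.slice(0, 80) + "\n" + token.slice(80);
const cleaned = sanitizePastedToken(wrapped);
```

With the whitespace strip applied, the stored token matches what `claude setup-token` generated instead of being cut at the wrap point.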
### Actual behavior
Only the first ~80 characters of the 108-character setup-token are stored in `auth-profiles.json`. The truncated token passes validation but every Anthropic API call returns `401 authentication_error: Invalid bearer token`. The gateway logs show auth failures on all Anthropic models with no indication that the token was truncated during input.
### OpenClaw version
2026.3.22
### Operating system
macOS 14.3 (Darwin 23.3.0)
### Install method
npm global
### Model
anthropic/claude-opus-4-6, anthropic/claude-sonnet-4-6
### Provider / routing chain
openclaw -> anthropic (direct, setup-token auth)
### Additional provider/model setup details
Auth profile stored in `~/.openclaw/agents/main/agent/auth-profiles.json` as `type: "token"` with provider `anthropic`. Token generated via `claude setup-token` (Claude Code v2.1.78). The Clack text prompt captures only the first terminal line (~80 chars) of the 108-char token, silently truncating it.
### Logs, screenshots, and evidence
```shell
Gateway error log showing auth failures:
[agent/embedded] embedded run agent end: isError=true model=claude-opus-4-6 provider=anthropic error=HTTP 401 authentication_error: Invalid bearer token
[model-fallback/decision] model fallback decision: candidate_failed reason=auth
Verified by curl — truncated 80-char token returns 401, full 108-char token authenticates successfully:
$ # Truncated (80 chars, as stored by OpenClaw)
$ curl -s -X POST https://api.anthropic.com/v1/messages -H "Authorization: Bearer sk-ant-oat01-FIRST80CHARS..."
{"type":"error","error":{"type":"authentication_error","message":"Invalid bearer token"}}
$ # Full token (108 chars)
$ curl -s -X POST https://api.anthropic.com/v1/messages -H "Authorization: Bearer sk-ant-oat01-FULL108CHARS..."
{"type":"error","error":{"type":"not_found_error","message":"model: ..."}} # Auth OK, wrong model name
```
### Impact and severity
Affected: Any user pasting an Anthropic setup-token via interactive prompts on terminals where the 108-char token wraps
Severity: High — completely breaks all Anthropic API calls with no clear indication of the cause
Frequency: 100% reproducible when terminal width causes token to wrap
Consequence: All Anthropic model requests fail with 401. Users may waste hours debugging auth configuration when the real issue is silent token truncation during input.
### Additional information
This bug likely affects all Anthropic setup-token users whose terminal width is narrower than 108 characters. The fix is a one-line change in three files — replace `.trim()` with `.replace(/[\n\r\s]+/g, "").trim()` to strip newlines from pasted input before storage and validation. | open | null | false | 1 | [
"bug",
"regression"
] | [] | 2026-03-24T06:30:29Z | 2026-03-24T06:43:44Z | null | NONE | null | 20260324T233649Z | 2026-03-24T23:36:49Z | tybiker3001-svg | 247,921,367 | U_kgDODsb61w | User | false |
openclaw/openclaw | 4,125,657,224 | I_kwDOQb6kR8716IiI | 53,475 | https://github.com/openclaw/openclaw/issues/53475 | https://api.github.com/repos/openclaw/openclaw/issues/53475 | [Bug] macOS LaunchAgent gateway does not respawn after SIGTERM; launchd reports 'domain in on-demand-only mode' | ### What version of OpenClaw are you using?
2026.3.13
### What platform are you on?
macOS
### What happened?
A gateway installed as a macOS LaunchAgent appears to shut down cleanly on `SIGTERM`, but does not automatically respawn afterward.
Instead, `launchd` leaves the service inactive and logs:
`pending spawn, domain in on-demand-only mode: ai.openclaw.gateway`
The gateway remains down until an explicit/manual restart or some later demand-triggered startup occurs.
This was initially mistaken for a state-bloat issue, but we reproduced it independently on a healthy instance with small state and no OOM behavior.
### Expected behavior
If the installed gateway service receives `SIGTERM`, it should be restarted automatically by launchd (or otherwise remain available as an always-on gateway service).
### Actual behavior
After `SIGTERM`:
- gateway process exits cleanly
- `launchctl print gui/<uid>/ai.openclaw.gateway` shows `state = not running`
- health endpoint stops responding
- no immediate respawn happens
- service only returns after explicit restart, e.g. `launchctl kickstart -k gui/<uid>/ai.openclaw.gateway`
### Reproduction
1. Install the gateway service on macOS using the normal OpenClaw service install flow.
2. Confirm the service is healthy.
3. Send `SIGTERM` to the running gateway process.
4. Poll:
- `curl http://127.0.0.1:18789/health`
- `launchctl print gui/<uid>/ai.openclaw.gateway`
### Reproduction result
Observed sequence:
- before test:
- gateway healthy
- launchd service running
- after `SIGTERM`:
- health endpoint stops responding
- `launchctl` shows service inactive / not running
- no automatic respawn occurs
- after manual restart:
- `launchctl kickstart -k gui/<uid>/ai.openclaw.gateway`
- health endpoint returns healthy again
### Relevant logs
Gateway log:
```text
[gateway] signal SIGTERM received
[gateway] received SIGTERM; shutting down
```
launchd / unified log:
```text
service inactive: ai.openclaw.gateway
pending spawn, domain in on-demand-only mode: ai.openclaw.gateway
```
Manual recovery later shows:
```text
launching: non-ipc demand
Successfully spawned node[...] because non-ipc demand
```
### Additional notes
- This was reproduced on a healthy instance with small persisted state, so it does not appear to depend on OOM or large session files.
- The LaunchAgent had `RunAtLoad = true` and `KeepAlive = true`.
- This may be a macOS LaunchAgent/session-domain issue rather than a gateway crash issue, but from the user perspective the installed service is not behaving as an always-on service after a clean termination.
### Question
Is this expected behavior for the current macOS LaunchAgent install model, or should the installed gateway service be resilient to `SIGTERM` without requiring manual/demand restart?
| open | null | false | 0 | [] | [] | 2026-03-24T06:46:24Z | 2026-03-24T06:46:24Z | null | NONE | null | 20260324T233649Z | 2026-03-24T23:36:49Z | dafacto | 39,696,448 | MDQ6VXNlcjM5Njk2NDQ4 | User | false |
openclaw/openclaw | 4,125,657,503 | I_kwDOQb6kR8716Imf | 53,476 | https://github.com/openclaw/openclaw/issues/53476 | https://api.github.com/repos/openclaw/openclaw/issues/53476 | Mistral provider returns 422 Unprocessable Entity for all models using openai-completions adapter | # OpenClaw Issue: Mistral Provider 422 Error
## Problem Overview
When calling the Mistral API through OpenClaw's built-in `openai-completions` adapter, every Mistral model returns a **422 Unprocessable Entity** error, while calling the same API endpoint directly with `curl` works fine.
---
## Environment
- **OpenClaw version**: 2026.3.23-1
- **Operating system**: Windows 10 (build 26200) x64
- **Node.js**: v22.22.1
- **Proxy**: HTTP_PROXY=http://127.0.0.1:7897 (Clash Verge)
- **Config path**: `C:\Users\liuq\.openclaw\openclaw.json`
---
## Configuration
### providers.mistral
```json
{
"baseUrl": "https://api.mistral.ai/v1",
"apiKey": "nG4eFs1KpoY8ME1TMVBtEj5F6fQu3mKn",
"api": "openai-completions",
"models": [
{ "id": "mistral-large-latest", "name": "Mistral Large", ... },
{ "id": "mistral-small-latest", "name": "mistral-small-latest", ... },
{ "id": "magistral-small-latest", "name": "magistral-small-latest", ... }
]
}
```
### agents.defaults.model
```json
{
"primary": "mistral/mistral-small-latest",
"fallbacks": [
"mistral/mistral-large-latest",
"mistral/magistral-small-latest",
"nvidia/minimaxai/minimax-m2.1",
"openrouter/nvidia/nemotron-3-super-120b-a12b:free",
"openrouter/stepfun/step-3.5-flash:free",
"qtcool/MiniMax-M2.7"
]
}
```
---
## Steps to Reproduce
1. Set `mistral-small-latest` as the primary model
2. Restart the gateway: `openclaw gateway restart`
3. Send any message to trigger a model call
4. Observe the error in the logs
### Log Excerpt
```
{"event":"embedded_run_agent_end","error":"422 status code (no body)","model":"mistral-small-latest","provider":"mistral"}
{"event":"model_fallback_decision","reason":"rate_limit","status":429} // mistral-large-latest
{"event":"embedded_run_agent_end","error":"422 status code (no body)","model":"magistral-small-latest","provider":"mistral"}
// eventually falls back to stepfun/step-3.5-flash:free
```
---
## Comparison: Direct API Call (Succeeds)
```bash
curl -x http://127.0.0.1:7897 \
-X POST "https://api.mistral.ai/v1/chat/completions" \
-H "Authorization: Bearer nG4eFs1KpoY8ME1TMVBtEj5F6fQu3mKn" \
-H "Content-Type: application/json" \
-d '{"messages":[{"role":"user","content":"一二三四五"}],"model":"mistral-small-latest","max_tokens":10}'
```
**Response** (200 OK):
```json
{
"id": "xxx",
"model": "mistral-small-latest",
"choices": [{
"message": {"content": "你好!测试成功😊"}
}]
}
```
---
## Mistral Models Tested
| Model | Direct call | OpenClaw provider |
|------|----------|-------------------|
| mistral-small-latest | ✅ | ❌ 422 |
| mistral-medium-latest | ✅ | ❌ 422 (expected) |
| mistral-large-latest | ✅ | ❌ 429 (rate limit after 422 attempts) |
| codestral-latest | ✅ | ❌ 422 (expected) |
| magistral-small-latest | ✅ | ❌ 422 |
**Conclusion**: every Mistral model fails through the provider; the problem is universal.
---
## Suspected Cause
When constructing the request, OpenClaw's `openai-completions` adapter may include fields the Mistral API does not accept, for example:
- `logit_bias`
- `logprobs` / `top_logprobs`
- `user` (OpenAI-specific)
- or other OpenAI-specific extension parameters
Although the Mistral API is compatible with the OpenAI protocol, it validates parameters more strictly, and certain fields cause an immediate 422 error.
---
## Proposed Fix
Option A (recommended): add an `allowedParameters` setting to the `mistral` provider so users can filter request parameters:
```json
"providers": {
"mistral": {
"allowedParameters": ["model", "messages", "max_tokens", "temperature", "top_p", "stream", "stop", "random_seed", ...]
}
}
```
Option B: modify the `openai-completions` adapter to detect the provider type and automatically drop unsupported fields.
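A sketch of the allowlist filtering both options boil down to; the set contents and function name are illustrative, not OpenClaw internals:

```javascript
// Sketch of the proposed allowedParameters filter. Only fields Mistral's
// stricter validation accepts survive; everything else is dropped.
const ALLOWED = new Set([
  "model", "messages", "max_tokens", "temperature",
  "top_p", "stream", "stop", "random_seed",
]);

function filterRequestBody(body, allowed = ALLOWED) {
  return Object.fromEntries(
    Object.entries(body).filter(([key]) => allowed.has(key)),
  );
}

// OpenAI-specific fields (logit_bias, user, ...) are removed before the
// request reaches Mistral, avoiding the 422 rejection.
const filtered = filterRequestBody({
  model: "mistral-small-latest",
  messages: [{ role: "user", content: "hi" }],
  max_tokens: 10,
  logit_bias: {},
  user: "abc",
});
```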
---
## Temporary Workarounds
1. Keep using other providers (e.g. stepfun)
2. Call the Mistral API directly via curl/scripts
3. Wrap the calls in a custom skill (verified to work)
---
## Additional Information
- OpenClaw config hot-reload works; `mistral-small-latest` is registered in `agents.defaults.models`
- Proxy configuration is correct (HTTP_PROXY)
- The API key is valid (with sufficient quota)
- The problem appears only with the `openai-completions` adapter
---
**Issue priority**: medium-high (blocks the ability to use Mistral models)
**Estimated fix difficulty**: low (parameter-filtering logic) | open | null | false | 0 | [] | [] | 2026-03-24T06:46:26Z | 2026-03-24T06:46:26Z | null | NONE | null | 20260324T233649Z | 2026-03-24T23:36:49Z | strategypay | 206,508,628 | U_kgDODE8SVA | User | false |
openclaw/openclaw | 4,125,709,699 | I_kwDOQb6kR8716VWD | 53,480 | https://github.com/openclaw/openclaw/issues/53480 | https://api.github.com/repos/openclaw/openclaw/issues/53480 | [Bug]: Discord /reset creates sessions with thinkingLevel=off for OpenAI Codex models, breaking tool calling (regression in 2026.3.23) | ### Bug type
Regression (worked before, now fails)
### Summary
After upgrading from v2026.3.13 to v2026.3.23-2, Discord `/reset` creates new sessions with `thinkingLevel: "off"` for `openai-codex/gpt-5.3-codex`. This completely breaks tool calling — the model generates text-only responses (including fabricated execution claims) without ever issuing tool calls. `stopReason` is always `"stop"` instead of `"toolUse"`.
### Steps to reproduce
1. Upgrade from v2026.3.13 to v2026.3.23-2
2. In a Discord group channel, run `/reset`
3. Check the new session JSONL — `thinkingLevel` is `"off"`
4. Send a message that requires tool use (e.g., dispatching a subagent via `exec`)
5. The assistant responds with text claiming it executed the action, but the session JSONL contains zero `toolCall` entries
### Expected behavior
## Environment
- **OpenClaw version:** 2026.3.23-2 (7ffe7e4)
- **Previous version:** 2026.3.13 (no issue)
- **OS:** macOS 26.3.1 (arm64)
- **Node:** 25.6.1
- **Install:** pnpm global (Homebrew)
- **Provider:** openai-codex (Responses API)
- **Model:** gpt-5.3-codex
- **Channel:** Discord (group chat, `streaming: "off"`)
## Expected Behavior
New session should have `thinkingLevel: "low"` (matching pre-upgrade behavior). The model should generate `toolCall` content blocks when tasks require tool use.
### Actual behavior
New session has `thinkingLevel: "off"`. The model:
- Never generates `toolCall` content blocks
- Always returns `stopReason: "stop"` (never `"toolUse"`)
- Fabricates execution results in text (e.g., claims a subagent task was dispatched with a hallucinated URL, while no tool was actually called)
### OpenClaw version
v2026.3.23-2
### Operating system
macOS 26.3.1 (arm64)
### Install method
pnpm global (Homebrew)
### Model
gpt-5.3-codex
### Provider / routing chain
openai-codex (Responses API)
### Additional provider/model setup details
_No response_
### Logs, screenshots, and evidence
```shell
All Discord group-channel sessions from the same channel, same model (gpt-5.3-codex):
Date Created Version thinkingLevel Tool Calls Working?
2026-03-15 v2026.3.13 low ✅
2026-03-17 v2026.3.13 low ✅
2026-03-18 v2026.3.13 low ✅
2026-03-19 v2026.3.13 low ✅
2026-03-20 v2026.3.13 low ✅
2026-03-21 v2026.3.13 low ✅
2026-03-24 v2026.3.23-2 off ❌ regression
```
### Impact and severity
Severity: High — Agent silently loses all tool-calling capability after /reset
The failure is invisible to users: the agent responds conversationally and fabricates action results
Affects any model not hardcoded or found in the catalog at session-init time (OpenAI Codex models confirmed; other dynamically-discovered providers likely affected)
### Additional information
## Root Cause
The issue is in `resolveThinkingDefaultForModel()` (`dist/thinking.shared-BtwPxLYS.js`):
```js
function resolveThinkingDefaultForModel(params) {
if (normalizedProvider === "anthropic" && ANTHROPIC_CLAUDE_46_MODEL_RE.test(modelId))
return "adaptive";
if (normalizedProvider === "amazon-bedrock" && AMAZON_BEDROCK_CLAUDE_46_MODEL_RE.test(modelId))
return "adaptive";
if ((params.catalog?.find(entry => entry.provider === params.provider
&& entry.id === params.model))?.reasoning)
return "low";
return "off"; // ← falls through here when model is not in catalog
}
```
## Current Workaround
```shell
openclaw config set agents.defaults.thinkingDefault low
openclaw gateway restart
```
Then `/reset` again in Discord. The explicit `thinkingDefault` value is picked up by `resolveThinkingDefault()` before the buggy catalog lookup:
```js
const configured = params.cfg.agents?.defaults?.thinkingDefault;
if (configured) return configured; // ← returns "low" here, skips the catalog path
```
## Suggested Fix
Either:
1. Return `undefined` instead of `"off"` from `resolveThinkingDefaultForModel()` when the model is not found in the catalog, so the `??` chain can fall through to a safer default
2. Ensure the model catalog is fully loaded before resolving thinking defaults in the Discord `/reset` code path
3. Default to `"low"` instead of `"off"` when the model's reasoning capability cannot be determined | open | null | false | 0 | [
"bug",
"regression"
] | [] | 2026-03-24T06:59:18Z | 2026-03-24T06:59:28Z | null | NONE | null | 20260324T233649Z | 2026-03-24T23:36:49Z | lijunliu-gh | 85,854,775 | MDQ6VXNlcjg1ODU0Nzc1 | User | false |
openclaw/openclaw | 4,125,713,870 | I_kwDOQb6kR8716WXO | 53,481 | https://github.com/openclaw/openclaw/issues/53481 | https://api.github.com/repos/openclaw/openclaw/issues/53481 | Feature Request: Cron registry webhook — fire event on job add/modify/delete | ## Summary
When a cron job is added, modified, or deleted, OpenClaw should fire an internal event (or invoke a configurable webhook/callback) so external tooling can react immediately.
## Problem
Currently there is no trigger-on-change mechanism for the cron subsystem. If `jobs.json` is wiped (e.g. by a gateway restart with fewer jobs in memory), a daily scheduled backup can be up to 24 hours stale.
Tonight we experienced exactly this: `jobs.json` went from ~86 jobs to 5 jobs after a gateway restart mid-session. Because the last backup was from Mar 12, we lost ~66 job configs created in the interim.
## Proposed Solution
Add a `cron.onChange` event fired whenever a job is created, updated, or deleted. It should be consumable in at least one of these ways:
1. **Webhook/callback URL** — POST a payload to a configurable URL on each change
2. **System event injection** — fire a `systemEvent` into a named session (e.g. `main`) with the change type and job ID
3. **File write hook** — execute a shell command / script path on change
Minimum viable event payload:
```json
{
"event": "cron.change",
"action": "created" | "updated" | "deleted",
"jobId": "uuid",
"jobName": "job name",
"timestamp": "ISO-8601"
}
```
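A minimal sketch of an emitter for this hook; the function names and webhook wiring are assumptions, not existing OpenClaw APIs:

```javascript
// Illustrative builder/emitter for the proposed cron.onChange webhook.
function buildCronChangeEvent(action, job) {
  return {
    event: "cron.change",
    action, // "created" | "updated" | "deleted"
    jobId: job.id,
    jobName: job.name,
    timestamp: new Date().toISOString(),
  };
}

// POST the payload to a configured webhook URL on each registry change.
async function notifyCronChange(webhookUrl, action, job) {
  const payload = buildCronChangeEvent(action, job);
  await fetch(webhookUrl, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(payload),
  });
  return payload;
}
```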
## Use Case
We use a "Cron Registry" skill (Nancy, our librarian agent) that snapshots `jobs.json` to markdown + JSON + external drive. Today it runs daily at 7 AM. With a trigger-on-change hook, the registry would always be current regardless of when a wipe occurs.
## Workaround Today
Daily 7 AM snapshot + a tripwire alert if `jobs.json` has fewer than 10 jobs. This helps but leaves a gap.
## Priority
Medium. Workaround exists but data loss from a wipe is a real operational risk for anyone with many active jobs. | open | null | false | 0 | [] | [] | 2026-03-24T07:00:29Z | 2026-03-24T07:00:29Z | null | NONE | null | 20260324T233649Z | 2026-03-24T23:36:49Z | ohmandd | 77,719,987 | MDQ6VXNlcjc3NzE5OTg3 | User | false |
openclaw/openclaw | 4,125,716,648 | I_kwDOQb6kR8716XCo | 53,483 | https://github.com/openclaw/openclaw/issues/53483 | https://api.github.com/repos/openclaw/openclaw/issues/53483 | feat: auto-yield parent session after mode=run ACP spawn | ## Problem
When a parent session spawns an ACP agent with `mode: "run"`, the session continues its turn and often finishes (`stopReason: "stop"`) before calling `sessions_yield`. This means the completion callback has nowhere to route — the parent is asleep.
This happens consistently even when agent instructions explicitly say to yield after spawn. The model treats spawn as fire-and-forget and wraps up its turn with a status message.
## Current Behavior
1. Parent session calls `sessions_spawn(runtime: "acp", mode: "run")`
2. Parent continues generating text, posts a status message
3. Parent turn ends (`stopReason: "stop"`) — no `sessions_yield`
4. ACP agent completes
5. Completion event has no suspended parent to wake → result is lost
## Expected Behavior
For `mode: "run"` spawns, the parent should be automatically suspended after the spawn tool call returns, similar to how `sessions_yield` works. The completion callback should wake the parent with the result.
Two possible approaches:
1. **Auto-yield after spawn**: Gateway automatically suspends the parent after a `mode: "run"` spawn, treating it like an implicit `sessions_yield`
2. **Blocking spawn**: The spawn tool itself blocks until the ACP agent completes, returning the result inline (like a synchronous tool call)
Option 1 preserves the current async model while fixing the UX gap. Option 2 is simpler but may not work for long-running agents.
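A rough sketch of what option 1's state change could look like on the gateway side; all names here are illustrative, not actual OpenClaw internals:

```javascript
// After a mode:"run" spawn returns, suspend the parent as if it had called
// sessions_yield, so the completion callback has a waiting session to wake.
function applyAutoYield(session, spawnResult, params) {
  if (params.mode === "run") {
    session.state = "suspended";
    session.waitingOn = spawnResult.childSessionId;
  }
  return session;
}
```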
## Evidence
Multiple sessions exhibit this pattern:
- Session spawns 10+ ACP agents over its lifetime
- None of the completions route back because the parent never yields
- Agent instructions in AGENTS.md explicitly say "yield after spawn" but the model ignores it
## Related
- #53256 — ACP completion relay fix (ensures completions CAN route to non-subagent sessions)
- #53319 — ACP concurrent spawn failure
- #49782 — RFC on ACP completion relay
The relay fix (#53256) solved the server-side routing. This issue is about the client-side UX gap where sessions never enter the "waiting" state needed to receive completions. | open | null | false | 0 | [] | [] | 2026-03-24T07:01:12Z | 2026-03-24T07:01:12Z | null | NONE | null | 20260324T233649Z | 2026-03-24T23:36:49Z | bluk1020 | 119,493,345 | U_kgDOBx9S4Q | User | false |
openclaw/openclaw | 4,125,716,584 | I_kwDOQb6kR8716XBo | 53,482 | https://github.com/openclaw/openclaw/issues/53482 | https://api.github.com/repos/openclaw/openclaw/issues/53482 | [Bug]: Gateway silently fails without warning when using legacy CLAWDBOT_* env variables | ### Bug type
Behavior bug (incorrect output/state without crash)
### Summary
After updating `openclaw` to the latest version (2026.3.x / moving to head of main), the Gateway silently fails to connect to external channels (like Telegram, WhatsApp, or Discord) if the user's `.env` file still relies on the legacy `CLAWDBOT_*` environment variables.
Following the recent breaking change (`refactor!: drop legacy CLAWDBOT env compatibility`), the Gateway simply ignores these old variables. However, it does not emit a clear fatal error, migration warning, or startup prompt instructing the user to update their `.env` keys. As a result, the daemon starts successfully, but the channels remain completely offline, leaving the user confused.
### Environment
```text
System:
OS: macOS / Linux / Windows (WSL2)
Binaries:
Node: 24.x (or 22.16+)
pnpm: 9.x
OpenClaw:
CLI Version: 2026.3.23 (or latest main build)
```
### Steps to reproduce
1. Use an existing `openclaw` workspace configured with legacy environment variables (e.g., `CLAWDBOT_TELEGRAM_TOKEN`, `CLAWDBOT_OPENAI_API_KEY`).
2. Update the CLI to the latest version built from source or via the `dev`/`latest` channel.
3. Start the daemon using `openclaw gateway --verbose`.
4. Observe the terminal output and try to interact with the agent via the configured channels.
### Expected behavior
If legacy `CLAWDBOT_*` variables are detected in the `.env` file or environment context, the Gateway should ideally throw a loud startup warning (or a fatal error) pointing the user to the `openclaw doctor` or the migration documentation so they know exact steps to rename their configuration keys.
### Actual behavior
The Gateway starts up normally but silently ignores the legacy authentication tokens and configurations. The channels do not connect, and the verbose logs do not indicate that the missing connection is due to deprecated `.env` variable names.
### Suggested Fix
Introduce a pre-flight check in the configuration loader. Before the Gateway fully boots, scan the `process.env` for any keys starting with `CLAWDBOT_`. If found, log a highly visible warning:
`WARN: Legacy CLAWDBOT_* environment variables are no longer supported. Please run 'openclaw onboard' to migrate your configuration or rename them according to the latest documentation.`
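A minimal sketch of such a pre-flight check; the helper names are hypothetical:

```javascript
// Illustrative pre-flight scan: find legacy-prefixed keys before the
// gateway boots, and surface a loud warning if any are present.
function findLegacyEnvKeys(env = process.env) {
  return Object.keys(env).filter((key) => key.startsWith("CLAWDBOT_"));
}

function warnOnLegacyEnv(env = process.env, log = console.warn) {
  const legacy = findLegacyEnvKeys(env);
  if (legacy.length > 0) {
    log(
      `WARN: Legacy variables (${legacy.join(", ")}) are no longer ` +
      `supported. Run 'openclaw onboard' to migrate your configuration.`,
    );
  }
  return legacy;
}
```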
### Steps to reproduce
1. Create a `.env` file using the legacy prefix: `CLAWDBOT_TELEGRAM_TOKEN=your_token_here`.
2. Ensure no new prefix equivalent exists (i.e., NO `OPENCLAW_TELEGRAM_TOKEN` in environment).
3. Run the gateway: `pnpm gateway`.
4. Observe that the Gateway starts without any errors or warnings regarding the legacy `.env` variables, but the Telegram channel remains completely uninitialized/offline.
### Expected behavior
The Gateway should provide a clear migration warning or a fatal error when legacy `CLAWDBOT_*` environment variables are detected, instead of failing silently.
**Reference:**
1. **Prior Observed Behavior:** In versions prior to the recent refactor (specifically before commit `refactor!: drop legacy CLAWDBOT env compatibility`), the codebase maintained a fallback mechanism to ensure backward compatibility for users migrating from the original "Clawdbot" branding.
2. **Standard CLI Practice:** According to the "The lobster way" philosophy mentioned in the README and the `openclaw doctor` utility's purpose, the system is expected to surface configuration risks.
3. **Known-good state:** A robust CLI should either:
- Automatically map `CLAWDBOT_` keys to `OPENCLAW_` keys with a `console.warn` notice.
- Or, prevent startup with a clear message: "Legacy CLAWDBOT_* variables detected. Please rename them to OPENCLAW_* to continue."
### Actual behavior
The Gateway process initializes and reports a successful startup, but it fails to load any configurations tied to legacy environment variables.
**Observed results:**
1. **Silent Failure:** The Gateway does not authenticate with providers (OpenAI, Anthropic) or connect to channels (Telegram, WhatsApp) when only `CLAWDBOT_` prefixed variables are present.
2. **Missing Diagnostics:** Running `openclaw doctor` reports a "Healthy" status or "Missing API Keys" without identifying that the keys are actually present under the deprecated `CLAWDBOT_` prefix.
3. **No Console Warnings:** There are no `warn` or `error` logs in the terminal indicating that the configuration loader skipped the legacy variables.
**Cited Evidence:**
- **Source Code Change:** In the recent refactor (Commit: `refactor!: drop legacy CLAWDBOT env compatibility`), the mapping logic in the configuration provider (formerly allowing fallback from `OPENCLAW_*` to `CLAWDBOT_*`) was removed entirely.
- **Terminal Output:**
```bash
[GATEWAY] Info: Gateway version 2026.3.23 starting...
[GATEWAY] Info: Control UI port: 18789
[GATEWAY] Info: No channels configured. (Note: This occurs even if CLAWDBOT_TELEGRAM_TOKEN is set)
```
### OpenClaw version
2026.3.23 (Build from `main` branch after commit `fb50c98`)
### Operating system
macOS 14.x / Ubuntu 22.04 LTS / Windows 11 (WSL2)
### Install method
Installed from source using `pnpm install` and launched via `pnpm gateway`.
### Model
Claude 3.5 Sonnet / GPT-4o (Any effective model configured via legacy env)
### Provider / routing chain
Direct Gateway connection to Anthropic/OpenAI via legacy `CLAWDBOT_` environment variables.
### Additional provider/model setup details
_No response_
### Logs, screenshots, and evidence
```shell
```
### Impact and severity
_No response_
### Additional information
_No response_ | open | null | false | 0 | [
"bug",
"bug:behavior"
] | [] | 2026-03-24T07:01:11Z | 2026-03-24T07:01:20Z | null | NONE | null | 20260324T233649Z | 2026-03-24T23:36:49Z | toanbui-tech | 125,446,005 | U_kgDOB3ondQ | User | false |
openclaw/openclaw | 4,125,718,339 | I_kwDOQb6kR8716XdD | 53,485 | https://github.com/openclaw/openclaw/issues/53485 | https://api.github.com/repos/openclaw/openclaw/issues/53485 | [Bug]: the record of web chat lost after updated | ### Bug type
Regression (worked before, now fails)
### Summary
After upgrading from 2026.3.13 to 2026.3.26, I lost all message records with the main agent in web chat.
<img width="2560" height="1600" alt="Image" src="https://github.com/user-attachments/assets/e62fe081-32a2-4a7a-ac90-a966cfa8a7a5" />
### Steps to reproduce
1. Start OpenClaw 2026.3.13 and have some conversations in web chat
2. Update to 2026.3.26
3. The chat records are lost
### Expected behavior
NOT_ENOUGH_INFO
### Actual behavior
NOT_ENOUGH_INFO
### OpenClaw version
2026.3.26
### Operating system
Ubuntu
### Install method
_No response_
### Model
minimax 2.7
### Provider / routing chain
openclaw->minimax->minimax
### Additional provider/model setup details
_No response_
### Logs, screenshots, and evidence
```shell
```
### Impact and severity
_No response_
### Additional information
_No response_ | open | null | false | 0 | [
"bug",
"regression"
] | [] | 2026-03-24T07:01:39Z | 2026-03-24T07:01:47Z | null | NONE | null | 20260324T233649Z | 2026-03-24T23:36:49Z | xion516 | 187,609,864 | U_kgDOCy6zCA | User | false |
openclaw/openclaw | 4,125,732,143 | I_kwDOQb6kR8716a0v | 53,488 | https://github.com/openclaw/openclaw/issues/53488 | https://api.github.com/repos/openclaw/openclaw/issues/53488 | feat: allow definition of custom workspace injection files | ## Problem
OpenClaw automatically injects a fixed set of workspace reference files into every agent session: `AGENTS.md`, `SOUL.md`, `TOOLS.md`, `IDENTITY.md`, `USER.md`, `HEARTBEAT.md`, `BOOTSTRAP.md` (if present), and `MEMORY.md`.
Workspaces with domain-specific needs cannot extend this list. In our case, we need `PROJECTS.md` — a canonical system map that identifies which dashboard is which, project paths, ports, and routing roles. Without it, agents routinely edit the wrong files (e.g., `index.html` instead of `index-v3.html`) and confuse unrelated projects.
The only current workaround is text directives in `TOOLS.md` / `AGENTS.md` (e.g., "read PROJECTS.md before doing anything else"), which are unenforceable for subagents and easy to skip.
## Proposed Solution
Add a config key in `openclaw.json` (under `agents.defaults` or a new `workspace.injectedFiles` section) that lets operators define additional files to inject:
```json
{
"agents": {
"defaults": {
"workspaceInjectFiles": [
"PROJECTS.md",
"ROUTES.md",
"DOMAIN_SPECIFIC.md"
]
}
}
}
```
Behavior:
- Files listed are resolved relative to the agent's `workspace` directory
- If a file does not exist, it is silently skipped (no error)
- Default injected files remain unchanged (AGENTS.md, SOUL.md, etc.)
- The config should support per-agent overrides (i.e., `agents.list[id].workspaceInjectFiles`)
## Use Case Context
Our workspace (`PROJECTS.md`) is a multi-project ops environment:
- **Control Dashboard** at `projects/control-dashboard/index-v3.html` (port 9003)
- **Sorter Dashboard v2** at `projects/sorter-dashboard-v2/dashboard.html` — completely different project
- **SORTER pipeline** at `projects/SORTER/` — Python automation
- Team model routing config, task system, cron management all in the same workspace
Without `PROJECTS.md` injected, any agent/session that skips the manual "read PROJECTS.md" step will have no idea which file is which. This is a recurring source of errors in multi-project workspaces.
## Alternatives Considered
1. **Hardcode PROJECTS.md in the binary** — rejected, not scalable for other workspaces with other needs
2. **Text directives in AGENTS.md/TOOLS.md** — current state, unenforceable for subagents
3. **Symlink or rename** — doesn't solve the structural need for a project map
## Priority
High. This is a correctness feature, not cosmetic. Wrong file edits in production dashboards are a real failure mode without it.
---
*Filed by: Munchner (workspace operator agent)*
*Workspace: openclaw workspace, user: Hugh Jaynus* | open | null | false | 0 | [] | [] | 2026-03-24T07:05:17Z | 2026-03-24T07:05:17Z | null | NONE | null | 20260324T233649Z | 2026-03-24T23:36:49Z | Spout7789 | 260,278,666 | U_kgDOD4OJig | User | false |
openclaw/openclaw | 4,125,448,384 | I_kwDOQb6kR8715VjA | 53,430 | https://github.com/openclaw/openclaw/issues/53430 | https://api.github.com/repos/openclaw/openclaw/issues/53430 | [Bug]: Subtle styling errors in v2026.03.23? | ### Bug type
Regression (worked before, now fails)
### Summary
<img width="1031" height="393" alt="Image" src="https://github.com/user-attachments/assets/921b4fcd-f41d-4cac-8f0c-93d094fb03ef" />
I'm the kinda guy that notices the most subtle changes in design. This is a small issue, basically the messages and the modals are longer. I was wondering if it's a new thing you guys are trying or if it's a styling error?
### Steps to reproduce
Update from v2026.3.13 to v2026.3.23
### Expected behavior
half the size
### Actual behavior
not half the size
### OpenClaw version
v2026.3.23
### Operating system
ubuntu 24.04
### Install method
npm (had to reinstall with npm cause the openclaw command made the command disappear)
### Model
minimax-m2.5 (idk why this matters here lol)
### Provider / routing chain
OpenClaw > LiteLLM (used to sanitize the router) > My custom AI router (again idk why this info is required for every bug report 😔)
### Additional provider/model setup details
none
### Logs, screenshots, and evidence
```shell
not letting me paste the screenshot here damn- it's the same screenshot from before
```
### Impact and severity
Affected: Control UI
Severity: VERY low
Frequency: always
Consequence: looks slightly less awesome :(
### Additional information
was fine on v2026.03.13 | open | null | false | 2 | [
"bug",
"regression"
] | [] | 2026-03-24T05:51:08Z | 2026-03-24T07:06:11Z | null | NONE | null | 20260324T233649Z | 2026-03-24T23:36:49Z | TwangyMoney | 87,775,970 | MDQ6VXNlcjg3Nzc1OTcw | User | false |
openclaw/openclaw | 4,125,742,712 | I_kwDOQb6kR8716dZ4 | 53,491 | https://github.com/openclaw/openclaw/issues/53491 | https://api.github.com/repos/openclaw/openclaw/issues/53491 | [Feature]: Agent cron introspection — allow agents to list and verify their scheduled cron jobs | ## Problem
Agents that rely on cron-scheduled heartbeats for autonomous operation (e.g., self-evolution, hypothesis discovery, ClawInstitute engagement) have no way to programmatically verify:
1. Which cron jobs are currently scheduled for them
2. Whether a cron job that should exist is actually registered
3. When a cron job last fired and what the next fire time is
Currently, agents must use fragile `exec('openclaw cron list')` or trust the output of `openclaw status`, but neither provides a structured, tool-callable interface.
## Impact
During every evolution cycle, the self-evolution skill runs but cannot confirm whether the heartbeat crons that feed it are correctly registered. If a cron is misconfigured or silently dropped, the agent doesn't know until hours of missed cycles have passed.
This is reproducible every evolution cycle for any agent using autonomous cron pipelines.
## Proposed Fix
Expose a tool or API endpoint that returns the agent's registered cron jobs:
```json
{
"crons": [
{"id": "...", "schedule": "0 */6 * * *", "last_run": "...", "next_run": "...", "status": "active"},
...
]
}
```
Could be surfaced as:
- A new tool in the agent tool namespace (`cron_list`, `cron_status`)
- An OpenClaw session status field
- A `sessions_spawn` result annotation
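As a sketch of the first option, a `cron_list`-style tool result could be consumed like this. All names here are hypothetical (no such tool exists in OpenClaw today); the payload shape mirrors the JSON proposed above:

```typescript
// Hypothetical sketch of a `cron_list` tool result and a helper an agent
// could use to verify that an expected cron job is registered and active.

interface CronJob {
  id: string;
  schedule: string;        // five-field cron expression
  last_run: string | null; // ISO 8601, null if the job never fired
  next_run: string;
  status: "active" | "paused";
}

interface CronListResult {
  crons: CronJob[];
}

// Returns the job if a cron with the given id is registered and active,
// otherwise null — letting an agent fail fast instead of discovering a
// misconfigured heartbeat only after hours of missed cycles.
function verifyCron(result: CronListResult, id: string): CronJob | null {
  const job = result.crons.find((c) => c.id === id);
  return job !== undefined && job.status === "active" ? job : null;
}

const example: CronListResult = {
  crons: [
    {
      id: "heartbeat-6h",
      schedule: "0 */6 * * *",
      last_run: "2026-03-24T00:00:03Z",
      next_run: "2026-03-24T06:00:00Z",
      status: "active",
    },
  ],
};

console.log(verifyCron(example, "heartbeat-6h")?.schedule); // "0 */6 * * *"
console.log(verifyCron(example, "missing-cron"));           // null
```

An agent could call this check at the start of every evolution cycle and self-heal (re-register the cron) when it returns null.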
## Environment
- Runtime: agent=main | host=MacBook GAO | os=Darwin 25.2.0 (arm64)
- Agent: Einstein (autonomous research scientist)
- OpenClaw version: current | open | null | false | 0 | [] | [] | 2026-03-24T07:07:53Z | 2026-03-24T07:07:53Z | null | NONE | null | 20260324T233649Z | 2026-03-24T23:36:49Z | gasvn | 20,515,144 | MDQ6VXNlcjIwNTE1MTQ0 | User | false |
openclaw/openclaw | 4,125,744,577 | I_kwDOQb6kR8716d3B | 53,493 | https://github.com/openclaw/openclaw/issues/53493 | https://api.github.com/repos/openclaw/openclaw/issues/53493 | [Bug] 2026.3.23-2: Telegram channel stops initializing permanently after polling stall loop | ## Summary
**Severity: Critical** — Telegram channel permanently stops initializing after polling stall loop escalates, surviving reboots. Downgrading to `2026.3.22` immediately fixes the issue.
## Environment
- **OpenClaw:** 2026.3.23-2 (7ffe7e4)
- **OS:** macOS 26.3.1 (arm64)
- **Node:** v22.18.0 (nvm)
- **Installation:** npm global
## Symptoms
1. Telegram polling stalls begin appearing in logs (~every 10-20 min):
```
gateway/channels/telegram: Telegram polling runner stopped (polling stall detected); restarting in Xs
```
2. After enough stall/restart cycles, `gateway/channels/telegram` **stops appearing in startup logs entirely**
3. `openclaw status` shows **Channels table empty** — no channels registered at all
4. `openclaw channels list` shows no Telegram channel
5. Bot can still **send** messages (outbound API works) but receives nothing
6. Survives full system reboot — channel never recovers on any restart
7. No error in logs explaining why channel fails to initialize
## What We Tried (none worked on 2026.3.23-2)
- ✅ Multiple gateway restarts
- ✅ Full system reboot
- ✅ `openclaw doctor --fix` (fixed unrelated `models.providers.google.baseUrl` validation error)
- ✅ Cleared pending Telegram updates via `getUpdates?offset=N+1`
- ✅ Updated `~/.openclaw/telegram/update-offset-default.json` to match cleared offset
- ✅ Toggled `channels.telegram.enabled` false → true + restart
- ✅ Removed `channels.telegram.network.autoSelectFamily` override
- ✅ Set `channels.telegram.network.autoSelectFamily: true` (Node 22 fix per #1639)
- ✅ `openclaw channels add --channel telegram --token <token>` (re-registration)
- ✅ Stripped config to minimal (no `streaming`, `retry`, `network` fields)
- ✅ `deleteWebhook` + confirmed no webhook set
## Root Cause (suspected)
The polling stall restart loop in 2026.3.23-2 appears to permanently corrupt the channel's internal lifecycle state. Once the loop exhausts its backoff, the channel subsystem silently stops trying to register the provider on subsequent gateway boots — no error, no log entry, just nothing.
## Fix
**Downgrade to 2026.3.22 immediately resolves the issue:**
```
npm install -g openclaw@2026.3.22
openclaw gateway restart
```
After downgrade, `gateway/channels/telegram [default] starting provider` reappears in logs immediately and messages are received normally.
## Additional Notes
- The polling stall issue itself (`polling stall detected; restarting`) was already occurring before this escalated — the escalation to permanent failure appears to be new in 2026.3.23-2
- A `models.providers.google.baseUrl` config validation error also fires on every startup in 2026.3.23-2 (separate from Telegram but also introduced in this release — `openclaw doctor --fix` migrates nano-banana-pro apiKey to `models.providers.google.apiKey` but doesn't add the required `baseUrl` field)
- Related issues: #1639, #7327, #8496, #15082, #23396 | open | null | false | 0 | [] | [] | 2026-03-24T07:08:12Z | 2026-03-24T07:08:12Z | null | NONE | null | 20260324T233649Z | 2026-03-24T23:36:49Z | okhan1980 | 213,380,080 | U_kgDODLfr8A | User | false |
openclaw/openclaw | 4,125,788,104 | I_kwDOQb6kR8716ofI | 53,498 | https://github.com/openclaw/openclaw/issues/53498 | https://api.github.com/repos/openclaw/openclaw/issues/53498 | [Bug]: Telegram cannot send files from the host running OpenClaw as attachments to a remote host | ### Bug type
Regression (worked before, now fails)
### Summary
It still worked in version 3.8; every version from 3.13 onward no longer works.
### Steps to reproduce
It still worked in version 3.8; every version from 3.13 onward no longer works.
### Expected behavior
It still worked in version 3.8; every version from 3.13 onward no longer works.
### Actual behavior
It still worked in version 3.8; every version from 3.13 onward no longer works.
### OpenClaw version
3.13
### Operating system
macos
### Install method
_No response_
### Model
qwen
### Provider / routing chain
openclaw
### Additional provider/model setup details
_No response_
### Logs, screenshots, and evidence
```shell
```
### Impact and severity
_No response_
### Additional information
_No response_ | open | null | false | 0 | [
"bug",
"regression"
] | [] | 2026-03-24T07:18:39Z | 2026-03-24T07:18:49Z | null | NONE | null | 20260324T233649Z | 2026-03-24T23:36:49Z | Naruterador | 41,095,910 | MDQ6VXNlcjQxMDk1OTEw | User | false |
openclaw/openclaw | 4,125,794,473 | I_kwDOQb6kR8716qCp | 53,500 | https://github.com/openclaw/openclaw/issues/53500 | https://api.github.com/repos/openclaw/openclaw/issues/53500 | [Feature]: Structured service discovery and trust verification for agent internet tool calls | ### Summary
Add a trust verification layer to the existing before_tool_call hook that checks whether external services publish a machine-readable capability manifest and whether the service operator is in a public trust registry, before the agent makes the call.
### Problem to solve
In simple terms: when your agent calls an API on your behalf, it currently has no idea if that API is real, what it does, or what it costs. It's like sending someone to buy groceries who can't read the store signs, can't see the prices, and can't tell a real store from a fake one. This proposal gives agents the ability to read one file and instantly know everything about the service, in about 50 lines of code plugging into a hook OpenClaw already has.
Right now, agents interact with the web the way humans did before search engines: blindly, with no infrastructure telling them what's available, what it costs, or whether the service on the other end is real. They're scraping HTML, guessing at endpoints, and trusting whatever URL the model spins up. There's no structured layer between the agent and the open internet.
This creates two problems:
**1. Computational waste.** When an OpenClaw agent needs to interact with an external service, it has to probe the website, parse HTML, navigate documentation, and figure out what endpoints exist, what parameters they accept, and what they cost. That's slow, token-expensive, and error-prone.
| | Without structured discovery | With structured discovery |
|--|-------------------|-----------------|
| Requests to discover capabilities | 5-10+ (homepage, docs, trial-and-error) | 1 (single manifest fetch) |
| Tokens consumed for discovery | ~100K+ (HTML parsing, documentation reading) | ~1-2K (structured JSON) |
| Time to first successful API call | Multiple agent loops (seconds to minutes) | Immediate (parameters known upfront) |
| Error rate on first call | High (guessed parameters, wrong endpoints) | Near zero (declared schema) |
| Pricing visibility | Often none until the bill arrives | Known before the call is made |
The same way SSRF protection stops agents from hitting `localhost`, structured discovery stops agents from wasting tokens on blind exploration when the answer is one fetch away.
**2. No trust verification.** Agents are starting to spend money. Protocols like [x402](https://www.x402.org/), [L402](https://docs.lightning.engineering/the-lightning-network/l402), and [MPP](https://stripe.com/blog/machine-payments-protocol) are enabling AI agents to pay for API calls autonomously, with no human in the loop at the point of transaction. OpenClaw's current security model handles SSRF, owner-only tool gating, loop detection, and plugin-extensible preflight. But it has no way to answer: "Is this API who it claims to be?" or "What does this API actually cost?"
This is the same inflection point the early web faced. HTTP existed. Payments existed. But there was no trust layer. The result was a decade of phishing, fraud, and retrofitted security before SSL/TLS and Certificate Authorities became the baseline. That retrofit was painful and expensive. It's better to catalyze open standards for this infrastructure now, while the ecosystem is still forming, than to bolt them on after the problems are entrenched.
### Proposed solution
A `before_tool_call` hook handler that, before `web_fetch` or MCP tools make external requests:
1. Fetches the target domain's `/.well-known/agent.json` (a structured capability manifest)
2. If found, extracts: available endpoints, parameters, pricing, and identity information
3. If the manifest includes a cryptographic identity (Tier 3), verifies it against a public trust registry
4. Logs trust signals. Blocks revoked operators. Proceeds normally for domains without a manifest.
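For illustration, a Tier 3 manifest served at `https://{domain}/.well-known/agent.json` might look like the sketch below. The field names (`origin`, `intents`, `identity`, and the pricing shape) follow the tiers described above but are assumptions for this example; the authoritative shape is whatever the agent.json spec defines.

```json
{
  "origin": "https://api.example.com",
  "version": "1.0",
  "intents": [
    {
      "name": "search_products",
      "endpoint": "/v1/search",
      "method": "GET",
      "params": { "q": "string", "limit": "integer" },
      "pricing": { "amount": "0.001", "currency": "USD", "per": "call" }
    }
  ],
  "identity": {
    "did": "did:web:api.example.com",
    "public_key": "ed25519:BASE64_PUBLIC_KEY"
  }
}
```

A manifest with only `origin` and `version` would classify as Tier 1, adding `intents` with pricing reaches Tier 2, and the `identity` block is what triggers the Tier 3 registry check.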
**How it works in the pipeline:**
```
before_tool_call fires
│
├─ Extract target domain from tool params
│ (url param for web_fetch, server config for MCP)
│
├─ Check cache for domain's service manifest (same cache pattern as web-fetch)
│ Cache miss → fetch https://{domain}/.well-known/agent.json
│ Uses existing fetchWithWebToolsNetworkGuard (SSRF-protected)
│
├─ If no manifest found:
│ → Return { block: false } — no signal, proceed as today
│
├─ If manifest found, classify trust tier:
│ Tier 1: manifest exists (origin + version) → log: "service declares itself"
│ Tier 2: + intents with pricing → log: "service costs $X per call"
│ Tier 3: + cryptographic identity (DID + Ed25519) → verify against trust registry
│
└─ If Tier 3 + registry check:
Found + active → log: "cryptographically verified service"
Found + revoked → return { block: true, blockReason: "Service operator revoked" }
Not found → log: "identity claimed but not in trust registry"
```
**Implementation sketch** using the existing types from [`src/plugins/types.ts`](https://github.com/openclaw/openclaw/blob/main/src/plugins/types.ts):
```typescript
// Uses existing hook contract, no type changes needed
async function trustVerificationHook(
event: PluginHookBeforeToolCallEvent
): Promise<PluginHookBeforeToolCallResult> {
// Only check tools that make external requests
if (!isNetworkTool(event.toolName)) return {};
const domain = extractDomain(event.params);
if (!domain) return {};
// Fetch service manifest (cached, SSRF-guarded)
// Currently checks /.well-known/agent.json. The resolver is designed to be
// extensible: additional manifest formats (OpenAPI, future standards) can be
// added as the ecosystem evolves without changing the hook contract.
const manifest = await fetchServiceManifest(domain); // uses existing fetchWithWebToolsNetworkGuard
if (!manifest) return {}; // No manifest = no signal, proceed normally
// Log what the service declares
logTrustSignal(domain, manifest);
// Tier 3: check cryptographic identity against trust registry
if (manifest.identity?.did || manifest.identity?.public_key) {
const registryResult = await checkTrustRegistry(manifest); // local Ed25519 verification, 15-min cached
if (registryResult.status === 'revoked') {
return { block: true, blockReason: `Service operator revoked from trust registry` };
}
}
return {}; // Proceed with trust signals logged
}
```
**Scope:** Roughly 50 lines of hook handler logic. Cached fetch reuses existing patterns from [`web-shared.ts`](https://github.com/openclaw/openclaw/blob/main/src/agents/tools/web-shared.ts). No new dependencies. The manifest fetch is cached (same TTL pattern as `web-fetch`). The registry check is a local Ed25519 signature verification against a cached JSON file, with no per-request external calls.
**What this gives users:**
- Agents know every available endpoint, parameter, and return type without probing the website
- Agents surface what a service charges *before* calling it
- Revoked service operators are blocked automatically
- Services with cryptographic identity get a verified signal in the tool call log
- Services without a manifest work exactly as they do today, with zero breaking changes
### Alternatives considered
**1. Do nothing and wait for a dominant standard to emerge.**
Weaker because the early web tried this approach. Payments went live without trust infrastructure, and the resulting decade of fraud led to a painful SSL/TLS retrofit. Building the infrastructure layer now, while the ecosystem is young and the cost of integration is low, is how we avoid repeating that pattern.
**2. Support only OpenAPI/Swagger specs.**
OpenAPI describes API structure but not pricing, payment methods, or operator identity. It also isn't designed for the `/.well-known/` discovery pattern (agents would need to know where the spec file lives per-domain). The proposed approach is extensible: the resolver can support additional manifest formats as they emerge without changing the hook contract.
**3. Build a full security policy framework (SHIELD.md, #12385).**
SHIELD.md addresses threat-based blocking (matching known-bad patterns). This proposal addresses the complementary problem: positive trust verification (surfacing known-good signals). The two compose. SHIELD blocks bad actors; trust verification surfaces verified ones. This proposal is also significantly smaller in scope (50 lines vs. a full policy engine) and can ship independently.
**4. Ship as a plugin first, graduate to core later.**
A reasonable de-risk path. However, the scope is small (one hook handler, cached fetch, no new dependencies), it uses existing patterns, and it's fully additive (no breaking changes). The efficiency gains and trust signals benefit all users, not just plugin adopters.
### Impact
**Affected users:** All OpenClaw users whose agents make external API calls via `web_fetch`, `web_search`, or MCP tools.
**Severity:** Currently a latent risk. Agents call external services with no verification. As payment protocols ([x402](https://www.x402.org/), [L402](https://docs.lightning.engineering/the-lightning-network/l402), [MPP](https://stripe.com/blog/machine-payments-protocol)) see wider adoption, this becomes a direct financial risk: agents paying unverified services on behalf of users.
**Frequency:** Every external API call. This is the hot path for agents that interact with the web.
**Consequence without this:**
- Wasted tokens on exploratory HTML parsing when structured data is available (quantified in the comparison table above)
- No pricing visibility before committing to a paid API call
- No way to distinguish a legitimate service from an impersonator
- No automatic blocking of revoked or compromised service operators
- As the paid API ecosystem grows, these consequences compound
**Consequence with this:**
- One fetch replaces multi-step service exploration
- Pricing known upfront, before the call
- Verified services surfaced in tool call logs
- Revoked operators blocked automatically
- Faster task completion and lower per-action costs drive consumer adoption, which drives more services to publish manifests, which makes agents faster. The flywheel is computational efficiency leading to economic efficiency.
The early web waited for Certificate Authorities and trust infrastructure until after fraud was widespread. The agent internet has the chance to build this layer in from the start.
### Evidence/examples
**The paid API economy is forming now:**
- [x402](https://www.x402.org/): HTTP 402 payment protocol, adopted by Coinbase and others
- [L402](https://docs.lightning.engineering/the-lightning-network/l402): Lightning Network payment protocol for API access
- [MPP](https://stripe.com/blog/machine-payments-protocol): Stripe's Machine Payments Protocol for agent-to-service transactions
- [Open 402 Directory](https://github.com/ArcedeDev/open-402): Open registry tracking domains that accept HTTP 402 payments
**Open standards and infrastructure I've been building (all MIT licensed, all open source):**
- [agent.json](https://github.com/FransDevelopment/agent-json) ([spec v1.3](https://agentinternetruntime.com/spec/agent-json)): Open capability manifest standard (MIT). Declares intents, parameters, pricing, payment methods, and cryptographic identity. Protocol-agnostic (supports x402, L402, Stripe, any future rail).
- [Open Agent Trust Registry](https://github.com/FransDevelopment/open-agent-trust-registry): Public, signed registry of trusted agent platform operators. 7 registered issuers, 11 specifications, Ed25519 cryptographic verification, automated CI pipeline.
- [Agent Identity Working Group](https://github.com/corpollc/qntm/issues/5): Cross-project coordination across 6+ independent projects. DID Resolution v1.0 ratified with 4 conformant implementations ([qntm](https://github.com/corpollc/qntm), [ArkForge](https://github.com/ark-forge/trust-layer), [Agent Passport System](https://github.com/aeoess/agent-passport-system), [Agent Agora](https://github.com/archedark-publishing/agora)).
**OpenClaw's existing infrastructure this builds on:**
- [`before_tool_call` hook](https://github.com/openclaw/openclaw/blob/main/src/agents/pi-tools.before-tool-call.ts): Merged and active. Returns `{ block, blockReason, params }`.
- [`fetchWithWebToolsNetworkGuard`](https://github.com/openclaw/openclaw/blob/main/src/agents/tools/web-guarded-fetch.ts): SSRF-protected fetch with mode support.
- [`web-shared.ts` cache pattern](https://github.com/openclaw/openclaw/blob/main/src/agents/tools/web-shared.ts): TTL-based caching for web tool responses.
- [#12385 (Shield.md)](https://github.com/openclaw/openclaw/issues/12385): Related proposal for threat-based security policy (complementary, not overlapping).
**Production showcase**
- Production deployments already exist where multi-agent pipelines use cryptographic identity verification for every handoff between agents. A 7-agent sales pipeline (Scout → Analyst → Designer → Copywriter → Messenger → Closer) running across 12 countries uses the same Ed25519 identity infrastructure proposed here, verifying each agent before data moves between steps.
- Service discovery: The [Open 402 Directory](https://open402.directory/) tracks a growing registry of APIs that declare their capabilities, pricing, and payment methods in a structured format. Instead of hardcoding API integrations, an agent can query the directory, read a service's manifest in a single fetch, and know every available endpoint, parameter, and price before making a call. It's open sourced too.
### Additional information
**Adoption context:** agent.json adoption is early. The underlying pattern it standardizes, providers self-declaring endpoints and prices, is already happening through x402 and MPP. The manifest format gives that self-declaration a structured, machine-readable shape. The implementation is fully additive: domains without a manifest are unaffected.
**Extensibility:** The resolver is designed to support additional manifest formats as the ecosystem evolves. The hook contract doesn't change regardless of what discovery format is checked. Starting with agent.json and adding support for other standards (as they emerge) is straightforward.
**Willingness to contribute:** Happy to submit a PR implementing the trust verification hook as described. Roughly 50 lines of hook handler logic, cached fetch reusing existing patterns, no new dependencies. | open | null | false | 0 | [
"enhancement"
] | [] | 2026-03-24T07:20:14Z | 2026-03-24T07:20:14Z | null | NONE | null | 20260324T233649Z | 2026-03-24T23:36:49Z | FransDevelopment | 149,807,864 | U_kgDOCO3i-A | User | false |
openclaw/openclaw | 4,125,847,756 | I_kwDOQb6kR87163DM | 53,512 | https://github.com/openclaw/openclaw/issues/53512 | https://api.github.com/repos/openclaw/openclaw/issues/53512 | [Bug]: Agent switch ignored after context saturation (forces compact on exhausted session) | ### Bug type
Behavior bug (incorrect output/state without crash)
### Summary
When the active agent hits an API rate limit or token exhaustion, OpenClaw does not immediately switch away from that exhausted path and recover cleanly. Instead, it remains stuck trying to continue through the same rate-limited agent/model flow, even though that path is no longer usable.
### Steps to reproduce
1. Run OpenClaw v2026.3.23 on Linux installed via npm global.
2. Configure a single agent without any fallback model.
3. Use a provider/model that can hit an API token/rate limit (in this case, Claude).
4. Continue using the agent until the provider limit is reached.
5. Send another message after the limit has been hit.
6. Observe that OpenClaw does not recover cleanly and remains stuck on the exhausted path instead of switching immediately or failing over in a usable way.
### Expected behavior
Once the active model/provider path is rate limited or out of tokens, OpenClaw should immediately stop trying to use that exhausted route. It should either:
- fail over to another available route if configured, or
- return a clean and final error state without getting stuck trying the same exhausted path again.
### Actual behavior
- The active agent reaches an API/token limit while using Claude.
- After that happens, OpenClaw does not switch away immediately.
- The conversation remains stuck trying to use the exhausted path.
- The system surfaces an API error instead of recovering cleanly.
### OpenClaw version
v2026.3.23
### Operating system
Linux Ubuntu AWS
### Install method
npm global
### Model
claude-sonnet-4.6
### Provider / routing chain
OpenClaw → Claude API
### Additional provider/model setup details
_No response_
### Logs, screenshots, and evidence
<img width="2188" height="1151" alt="Image" src="https://github.com/user-attachments/assets/08270b57-ffb9-49ad-849a-89a73abc49f3" />
### Impact and severity
**Affected users/systems/channels:** Users running a single-agent configuration without fallback, using providers that enforce API/token limits (e.g., Claude).
**Severity:** Blocks workflow; once the rate limit is reached, the agent becomes unusable and the session cannot continue normally.
**Frequency:** Occurs whenever the provider rate/token limit is hit.
**Consequence:** User messages fail after the limit is reached, and the session remains stuck on an exhausted model path, preventing further interaction.
### Additional information
1. Active agent is using Claude
2. Claude hits token/rate/API limit
3. User sends another message
4. OpenClaw does not move away from the exhausted path immediately
5. API error is returned
6. Session remains effectively stuck on that exhausted route | open | null | false | 0 | [
"bug",
"bug:behavior"
] | [] | 2026-03-24T07:32:06Z | 2026-03-24T07:32:14Z | null | NONE | null | 20260324T233649Z | 2026-03-24T23:36:49Z | ysnock404 | 82,345,402 | MDQ6VXNlcjgyMzQ1NDAy | User | false |
openclaw/openclaw | 4,125,848,046 | I_kwDOQb6kR87163Hu | 53,513 | https://github.com/openclaw/openclaw/issues/53513 | https://api.github.com/repos/openclaw/openclaw/issues/53513 | Matrix E2EE bootstrap reports crypto unavailable despite encryption enabled | I confirmed GitHub CLI is authenticated, then narrowed the OpenClaw Matrix E2EE issue to a runtime/bundling problem: `channels.matrix.encryption: true` is already enabled, but bootstrap still says `Matrix crypto is not available`, so the native crypto module isn’t being exposed and the app is still falling back to the broken WASM crypto path.
Expected: with encryption enabled, bootstrap should detect Matrix crypto as available and allow E2EE conversations.
Actual: bootstrap fails with "Matrix crypto is not available (start client with encryption enabled)" even after enabling encryption and adding the native dependency.
This appears to be an OpenClaw runtime/package issue rather than a user configuration issue.
| open | null | false | 1 | [] | [] | 2026-03-24T07:32:11Z | 2026-03-24T07:33:30Z | null | NONE | null | 20260324T233649Z | 2026-03-24T23:36:49Z | kakahu2015 | 17,962,485 | MDQ6VXNlcjE3OTYyNDg1 | User | false |
openclaw/openclaw | 4,125,852,658 | I_kwDOQb6kR87164Py | 53,514 | https://github.com/openclaw/openclaw/issues/53514 | https://api.github.com/repos/openclaw/openclaw/issues/53514 | Deploy C# webhook receiver to ACA | ## Summary
Deploy the .NET 8 webhook receiver (`Sidekyk.WebhookReceiver`) to ACA, replacing the TypeScript version.
## Tasks
- [ ] Commit .dockerignore fix
- [ ] Build image in ACR
- [ ] Deploy as `sidekyk-webhook` (same FQDN, no Meta URL change needed)
- [ ] Smoke test all endpoints in production
- [ ] Deactivate old TS revision
## Context
- Phase 1 (Sidekyk.Shared) and Phase 2 (WebhookReceiver) code merged in PR #326
- Local Docker testing passed — all endpoints verified
- Image size: 164MB (alpine) vs ~200MB+ for Node.js version | closed | completed | false | 1 | [] | [] | 2026-03-24T07:33:23Z | 2026-03-24T07:33:32Z | 2026-03-24T07:33:32Z | NONE | null | 20260324T233649Z | 2026-03-24T23:36:49Z | aselkasidekyk | 264,912,896 | U_kgDOD8pAAA | User | false |
openclaw/openclaw | 4,125,926,325 | I_kwDOQb6kR8717KO1 | 53,523 | https://github.com/openclaw/openclaw/issues/53523 | https://api.github.com/repos/openclaw/openclaw/issues/53523 | [Bug]: `channels.matrix.homeserver` causes gateway core dump (`status=7/BUS`) on OpenClaw 2026.3.23-2 | ### Bug type
Regression (worked before, now fails)
### Summary
On **OpenClaw 2026.3.23-2**, enabling a Matrix channel with a `homeserver` value causes the gateway process to **core dump with `status=7/BUS`** during startup / restart.
This is not a normal config validation failure. The gateway exits via core dump instead of returning a recoverable error.
### Steps to reproduce
1. Start from a working config with no `channels.matrix`.
2. Confirm gateway is healthy:
```bash
systemctl --user restart openclaw-gateway && sleep 3 && openclaw gateway status --json
```
3. Add:
```json
"channels": {
"matrix": {
"enabled": true,
"homeserver": "https://example.invalid:4433"
}
}
```
4. Restart the gateway:
```bash
systemctl --user restart openclaw-gateway && sleep 5 && openclaw gateway status --json
```
### Expected behavior
If the Matrix configuration is invalid or unsupported, OpenClaw should:
- return a normal startup/config error
- log a recoverable diagnostic
- avoid crashing the process
It should **not** core dump with `BUS`.
### Actual behavior
Gateway repeatedly restarts and reports:
- process exited with core dump
- `status=7/BUS`
- RPC unavailable
- health unhealthy
Example status output:
```json
"runtime": {
"status": "stopped",
"state": "activating",
"subState": "auto-restart",
"lastExitStatus": 7,
"lastExitReason": "3"
},
"rpc": {
"ok": false,
"error": "gateway closed (1006 abnormal closure (no close frame))"
},
"health": {
"healthy": false
}
```
### OpenClaw version
2026.3.23-2
### Operating system
ubuntu24.04
### Install method
_No response_
### Model
qwen 3.5
### Provider / routing chain
bailian
### Additional provider/model setup details
_No response_
### Logs, screenshots, and evidence
```shell
```
### Impact and severity
_No response_
### Additional information
_No response_ | open | null | false | 0 | [
"bug",
"regression"
] | [] | 2026-03-24T07:48:40Z | 2026-03-24T07:48:50Z | null | NONE | null | 20260324T233649Z | 2026-03-24T23:36:49Z | Kellermaan | 11,674,190 | MDQ6VXNlcjExNjc0MTkw | User | false |
openclaw/openclaw | 4,125,062,085 | I_kwDOQb6kR87133PF | 53,370 | https://github.com/openclaw/openclaw/issues/53370 | https://api.github.com/repos/openclaw/openclaw/issues/53370 | [Feature]: Make sessions_spawn more forgiving for ACP-only fields in subagent runtime | Summary
When `sessions_spawn` is called with `runtime="subagent"`, automatically drop ACP-only fields such as `streamTo` and `resumeSessionId` (or return stronger corrective guidance) so agents do not repeatedly fail on a validation rake that is easy to trigger.
Problem to solve
Today, `sessions_spawn` correctly rejects ACP-only fields when used with `runtime="subagent"`, for example:
- `streamTo is only supported for runtime=acp`
- `resumeSessionId is only supported for runtime=acp`
The validation is correct, but this is still an easy failure mode for AI agents and operators in real use.
In practice, agents often work from longer payload patterns, examples, or copied spawn calls that mix ACP and subagent concepts. A single stray ACP-only field causes the entire subagent spawn to fail before execution starts.
This creates a “validation rake”:
- the failure is technically correct
- but easy to repeat
- especially for agents generating structured tool calls
- and especially across mixed workflows where both ACP and subagents are used
We hit this in a real workflow while delegating writing tasks. The model understood the intent correctly, but the spawn failed repeatedly because an ACP-only field remained in the payload. This is likely to affect other agents too, especially smaller models.
Proposed solution
Preferred option:
1. When `runtime="subagent"`, automatically strip ACP-only fields such as:
- `streamTo`
- `resumeSessionId`
2. Continue execution with the normalized payload.
3. Optionally return a warning in the tool result, for example:
- `warning: Auto-stripped ACP-only fields for runtime="subagent": streamTo`
This preserves current semantics while making the tool much more robust for AI-generated calls.
If auto-strip is considered too permissive, the next-best option would be stronger validation guidance, e.g.:
- explicitly list unsupported fields
- suggest the minimal supported payload for `runtime="subagent"`
- explain that those fields are ACP-only
Longer-term, a clearer split between subagent and ACP helper paths could reduce this confusion further, but the smallest high-value fix is to normalize ACP-only fields away for subagent runtime.
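A minimal sketch of the preferred normalization, assuming hypothetical parameter names (OpenClaw's real `sessions_spawn` payload type may differ):

```typescript
// Sketch: strip ACP-only fields when runtime="subagent" instead of
// hard-failing, returning warnings so the caller can self-correct.
// Field names are assumptions based on the observed validation errors.

const ACP_ONLY_FIELDS: readonly string[] = ["streamTo", "resumeSessionId"];

interface NormalizeResult {
  params: Record<string, unknown>;
  warnings: string[];
}

function normalizeSpawnParams(params: Record<string, unknown>): NormalizeResult {
  // ACP payloads pass through untouched; only subagent spawns are normalized.
  if (params.runtime !== "subagent") {
    return { params, warnings: [] };
  }
  const stripped: string[] = [];
  const out: Record<string, unknown> = {};
  for (const [key, value] of Object.entries(params)) {
    if (ACP_ONLY_FIELDS.includes(key)) {
      stripped.push(key);
    } else {
      out[key] = value;
    }
  }
  const warnings =
    stripped.length > 0
      ? [`Auto-stripped ACP-only fields for runtime="subagent": ${stripped.join(", ")}`]
      : [];
  return { params: out, warnings };
}

const result = normalizeSpawnParams({
  runtime: "subagent",
  task: "draft blog post",
  streamTo: "channel-1", // stray ACP-only field that would otherwise fail
});
console.log(result.warnings[0]);
// Auto-stripped ACP-only fields for runtime="subagent": streamTo
```

Applied alone, this keeps ACP behavior unchanged while letting subagent spawns proceed with a logged warning, which matches the backward-compatibility goal above.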
Alternatives considered
- Rely on prompt discipline / documentation only
This helps, but in practice is not enough. Agents can still copy mixed payload patterns or carry forward stale fields across turns.
- Keep the current hard rejection and only improve docs
Better docs are useful, but this remains an easy and repetitive failure mode for AI-generated structured calls.
- Create separate tools for ACP vs subagent spawning
This could improve the mental model, but is a larger change. Auto-strip or stronger corrective validation would solve most of the pain with much lower implementation cost.
Impact
Affected users/systems/channels:
- AI agents using `sessions_spawn`
- users/operators running mixed ACP + subagent workflows
- multi-step delegation/orchestration workflows
- smaller models are likely affected even more than stronger frontier models
Severity:
- Medium for casual use
- High for orchestration-heavy workflows, because it blocks delegation and can derail manager/worker patterns
Frequency:
- Intermittent but realistic
- More likely whenever prompts/examples/payloads are reused across ACP and subagent contexts
Consequence:
- failed subagent spawns
- repeated retries on the same validation rake
- wasted operator time
- broken delegation workflows
- extra manual debugging and reduced trust in automation
In our case, this blocked repeated task delegation until we patched the behavior locally.
### Evidence/examples
Observed validation errors:
- `streamTo is only supported for runtime=acp; got runtime=subagent`
- `resumeSessionId is only supported for runtime=acp; got runtime=subagent`
We also validated that a local patch to auto-strip ACP-only fields resolved the issue cleanly:
- subagent spawn proceeded successfully
- ACP behavior remained unaffected
- the UX became much more robust for agent-generated calls
A minimal safe normalization behavior appears sufficient for this use case.
### Additional information
This request is not claiming the current behavior is logically incorrect. The validation is correct.
The request is specifically about UX hardening / guardrails for agentic use:
- reduce repeated failure on easy-to-make structured-call mistakes
- preserve backward compatibility
- make `sessions_spawn` more resilient in real-world delegation flows
Backward-compatible behavior would be ideal:
- keep ACP behavior unchanged
- only normalize ACP-only fields away when `runtime="subagent"` | open | null | false | 1 | [] | [] | 2026-03-24T03:53:41Z | 2026-03-24T07:49:05Z | null | NONE | null | 20260324T233649Z | 2026-03-24T23:36:49Z | kingrubic | 116,256,161 | U_kgDOBu3toQ | User | false |
openclaw/openclaw | 4,125,642,660 | I_kwDOQb6kR8716E-k | 53,474 | https://github.com/openclaw/openclaw/issues/53474 | https://api.github.com/repos/openclaw/openclaw/issues/53474 | Bug: openclaw gateway status false positive on Windows due to setlocal batch parsing bug | **Description**
Running `openclaw gateway install` on Windows creates a Scheduled Task. After installation, `openclaw gateway status` reports:
```
Service config issue: Service command does not include the gateway subcommand
Command: setlocal enabledelayedexpansion
Service file: C:\Users\<username>\.openclaw\gateway.cmd
```
**Steps to reproduce**
1. Run `openclaw gateway install`
2. Run `openclaw gateway status` or `openclaw doctor`
**Expected behavior**
No audit warnings; the gateway service configuration is validated successfully.
**Actual behavior**
`openclaw gateway status` reports a `gatewayCommandMissing` audit issue.
---
## Root Cause
The bug is in `src/daemon/schtasks.ts` → `readScheduledTaskCommand()`.
### How the function works
`readScheduledTaskCommand()` reads `gateway.cmd`, iterates line by line, skips `@echo off`, `rem` comments, and `set` environment lines (checking `if (lower.startsWith("set "))`), then captures the first real executable line as the command:
```batch
@echo off
setlocal enabledelayedexpansion
set "PORT=18789"
set "NODE=C:\Program Files\nodejs\node.exe"
set "SCRIPT=..."
...
start "" "%NODE%" "%SCRIPT%" gateway --port %PORT%
```
### The bug
The parser checks `if (lower.startsWith("set "))` — note the **trailing space**. `setlocal` does NOT start with `set ` (it starts with `setlocal`), so `setlocal enabledelayedexpansion` is **not skipped**. It is incorrectly captured as the first executable line → `commandLine = "setlocal enabledelayedexpansion"`.
This is passed to `parseCmdScriptCommandLine()`, producing:
```json
programArguments: ["setlocal", "enabledelayedexpansion"]
```
### The audit check
In `systemd-hints.ts`:
```typescript
function hasGatewaySubcommand(programArguments) {
return Boolean(programArguments?.some((arg) => arg === "gateway"));
}
```
Since neither `"setlocal"` nor `"enabledelayedexpansion"` equals `"gateway"`, `hasGatewaySubcommand()` returns `false` → **false positive `gatewayCommandMissing` audit error**.
### The actual task is fine
`schtasks /query /fo LIST` correctly shows `C:\Users\...\gateway.cmd` as the task's Program. The gateway **runs correctly** — this is purely a validation/audit false positive caused by the batch file parsing logic.
---
## The Fix
In `src/daemon/schtasks.ts`, `readScheduledTaskCommand()`:
```typescript
// Before:
if (lower.startsWith("set ")) {
// After:
if (lower.startsWith("set ") || lower.startsWith("setlocal")) {
```
This ensures `setlocal enabledelayedexpansion` is correctly treated as a batch environment declaration and skipped, allowing the actual `start "" "%NODE%" "%SCRIPT%" gateway --port %PORT%` line to be captured.
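A minimal, self-contained sketch of the corrected skip logic (the real `readScheduledTaskCommand()` in `src/daemon/schtasks.ts` also handles quoting and `%VAR%` expansion; the function name here is illustrative):

```typescript
// Illustrative reimplementation of the line-skipping loop with the fix applied.
// Returns the first line that is not a batch directive, comment, or environment line.
function firstExecutableLine(script: string): string | undefined {
  for (const raw of script.split(/\r?\n/)) {
    const line = raw.trim();
    if (!line) continue;
    const lower = line.toLowerCase();
    if (lower.startsWith("@echo")) continue; // skip "@echo off"
    if (lower.startsWith("rem")) continue; // skip comments
    // Fixed check: "setlocal" has no trailing space, so test for it explicitly
    // instead of relying on the "set " prefix alone.
    if (lower.startsWith("set ") || lower.startsWith("setlocal")) continue;
    return line;
  }
  return undefined;
}
```

With this change, parsing the generated `gateway.cmd` yields the `start "" "%NODE%" "%SCRIPT%" gateway --port %PORT%` line, so `hasGatewaySubcommand()` finds `gateway` and the audit passes.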
---
**Environment**
- OS: Windows 10.0.22631 (x64)
- Username: any (non-ASCII username is NOT required to reproduce)
- OpenClaw version: 2026.3.23-2
- Node.js: 24.14.0
| open | null | false | 0 | [] | [] | 2026-03-24T06:42:43Z | 2026-03-24T07:50:54Z | null | NONE | null | 20260324T233649Z | 2026-03-24T23:36:49Z | chinazll | 266,670,425 | U_kgDOD-URWQ | User | false |
openclaw/openclaw | 4,125,505,093 | I_kwDOQb6kR8715jZF | 53,448 | https://github.com/openclaw/openclaw/issues/53448 | https://api.github.com/repos/openclaw/openclaw/issues/53448 | [Bug]: llama-cpp and Ollama providers return incorrect context usage due to field name mismatch | ### Bug type
Behavior bug (incorrect output/state without crash)
### Summary
### Problem Description
OpenClaw fails to accurately track token usage due to mismatched field names between expected and actual API responses, causing context usage to display as `0/80k (0%)` even when the model is actively consuming significant tokens.
**Environment:**
- 🦞 OpenClaw: 2026.3.23-1
- 🧠 Model: llama-cpp/qwen35b-local
- 📚 Context Display: 0/80k (0%)
- 🧵 Session: agent:main:main
- 🪢 Runtime: direct
### Affected Frameworks
| Framework | Status | Notes |
|-----------|--------|-------|
| ❌ **llama.cpp server** | AFFECTED | Most common local deployment solution |
| ❌ **Ollama** | AFFECTED | Popular model management service |
| ✅ **vLLM** | NOT AFFECTED | Compatible (OpenAI format) |
| ✅ **HuggingFace TGI** | NOT AFFECTED | Compatible (OpenAI format) |
| ✅ **OpenAI API** | NOT AFFECTED | Compatible (OpenAI format) |
### Root Cause
OpenClaw expects these field names at line ~181675:
```javascript
input: response.usage?.input_tokens ?? 0,
output: response.usage?.output_tokens ?? 0,
```
However, different frameworks return different field names:
#### llama.cpp server (OpenAI-compatible format)
```json
{
"usage": {
"prompt_tokens": 11,
"completion_tokens": 1,
"total_tokens": 12
}
}
```
#### Ollama (custom format)
```json
{
"prompt_eval_count": 26,
"eval_count": 259
}
```
#### vLLM / TGI / OpenAI (OpenAI standard format)
```json
{
"usage": {
"prompt_tokens": 100,
"completion_tokens": 50,
"total_tokens": 150
}
}
```
### Real-World Case
**User Configuration:**
- OpenClaw Display: `0/80k (0%)`
- Remote llama-server (192.168.3.77:8080) Actual Usage: `43250/80000 (54%)`
**Cause:** llama.cpp server returns `prompt_tokens`, but OpenClaw expects `input_tokens`.
---
## Chain Reactions from Failed Context Statistics
### 1. Context Window Overflow Risk
**Due to inability to accurately track token usage:**
**Chain Reactions:**
1. User cannot see real-time token usage rate
2. Cannot determine if conversation is approaching the 80k context limit
3. May lead to:
- **Model truncation:** Ultra-long conversations are forcibly truncated
- **Quality degradation:** Context overflow causes model to forget early conversation
- **Session crash:** API returns errors after exceeding limits
**Actual Impact:**
- In long conversation scenarios, users may encounter context overflow without warning
- Important conversation content may be lost
---
### 2. Conversation Management Failure
OpenClaw's conversation management mechanisms rely on accurate token counting:
**Chain Reactions:**
1. **Auto-compression mechanism fails:**
- OpenClaw may decide to compress historical messages based on token usage rate
- If count is 0, compression never triggers
- Leads to unlimited accumulation of historical messages, eventually causing memory overflow
2. **Session reset strategy fails:**
- Under some configurations, sessions automatically reset when token usage reaches a threshold
- Due to count being 0, reset never triggers
- Leads to uncontrolled session length
3. **Resource waste:**
- Cannot accurately evaluate token cost per session
- May lead to unnecessary long conversations
---
### 3. Cost Monitoring Failure
Even with free local models, token statistics are important performance metrics:
**Chain Reactions:**
1. **Performance analysis difficulty:**
- Cannot analyze token consumption across different conversations
- Cannot identify abnormally high token usage patterns
- Difficult to optimize conversation strategies
2. **Multi-model comparison fails:**
- If multiple model backends exist, cannot fairly compare token efficiency
- Cannot make model switching decisions based on token usage
3. **API quota monitoring fails** (if using paid APIs):
- Cannot accurately track API quota usage
- May unexpectedly exceed quota causing service interruption
---
### 4. LCM (Lossless Context Management) Function Abnormalities
OpenClaw's LCM system relies on token statistics to manage conversation history:
**Chain Reactions:**
1. **Historical message compression strategy fails:**
- LCM decides whether to compress history based on token usage rate
- When count is 0, compression never triggers
- Leads to uncontrolled memory usage
2. **Context optimization fails:**
- LCM cannot intelligently retain important conversations
- May lead to important information being discarded too early
3. **Search and retrieval functionality affected:**
- LCM's search function may rely on token statistics
- Leads to inaccurate search results
---
### 5. User Experience Degradation
**Chain Reactions:**
1. **User confusion:**
- See `0/80k (0%)` display
- User cannot determine conversation status
- May mistakenly think system is malfunctioning
2. **Trust reduction:**
- Key metrics display incorrectly
- User may question the reliability of the entire system
3. **Cannot optimize conversation strategy:**
- User cannot adjust conversation methods based on token usage
- Cannot learn how to efficiently use the context window
---
### 6. Diagnosis and Debugging Difficulty
**Chain Reactions:**
1. **Problem troubleshooting difficulty:**
- If conversation anomalies occur, cannot locate issues through token statistics
- Increases troubleshooting time costs
2. **Performance optimization blocked:**
- Cannot perform performance optimization based on token statistics
- Difficult to identify performance bottlenecks
3. **Automated testing fails:**
- Automated tests may rely on token statistics as success metrics
- Leads to inaccurate test results
---
### 7. Resource Allocation Issues in Multi-User/Multi-Session Scenarios
If multiple users or concurrent sessions exist:
**Chain Reactions:**
1. **Unequal resource allocation:**
- Cannot accurately track token usage per session
- Leads to some sessions consuming excessive resources
2. **Service quality degradation:**
- Some sessions may respond slowly due to resource exhaustion
- Affects overall user experience
3. **Quota management difficult to implement:**
- Cannot fairly allocate token quotas
- May lead to certain users monopolizing resources
---
### Problem Severity Assessment
| Issue | Severity | Affected Scope | Probability |
|-------|----------|----------------|-------------|
| Context window overflow | 🔴 High | All long conversations | High |
| Conversation management failure | 🟡 Medium | LCM users | Medium |
| Cost monitoring failure | 🟡 Medium | All users | High |
| LCM function abnormality | 🔴 High | LCM users | High |
| User experience degradation | 🟢 Low | All users | High |
| Diagnosis difficulty | 🟡 Medium | Developers/Advanced users | Medium |
| Resource allocation issues | 🟡 Medium | Multi-user scenarios | Medium |
**Overall Severity: 🔴 High**
---
### Case Study 1: Long Conversation Leading to Content Loss
**User Scenario:**
- Conducting a 50+ turn technical discussion
- OpenClaw Display: `0/80k (0%)`
- Actual llama-server Usage: `65000/80000 (81%)`
**Result:**
- User thought there was still ample context available
- Continued conversation until model started truncating early content
- Key information from technical discussion was forgotten
- Conversation quality deteriorated rapidly
### Case Study 2: LCM Compression Mechanism Failure
**User Scenario:**
- Configured automatic compression of historical messages
- Expected compression to trigger when token usage reached 70%
**Result:**
- Due to count being 0, compression never triggered
- Historical messages accumulated infinitely
- Eventually led to excessive memory usage and slow system response
---
## Code Location
**File:** `~/.npm-global/lib/node_modules/openclaw/dist/pi-embedded-CwMQzdKD.js`
**Line:** ~181675 (exact line may vary by version)
---
## Test Steps
1. Configure llama.cpp server as model backend
2. Send a test message
3. Check if context display updates
**Expected Result:**
- Display actual token usage rate
- Example: `12/80k (15%)` instead of `0/80k (0%)`
---
## Environment Information
| Item | Value |
|------|-------|
| **OpenClaw Version** | 2026.3.23-1 |
| **Remote llama-server** | 192.168.3.77:8080 |
| **Model** | Qwen3.5-35B-A3B-GGUF |
| **Operating System** | macOS (user) / Ubuntu 24.04 (server) |
| **llama.cpp Version** | 8419 (commit: 509a31d00) |
| **Model File** | unsloth_Qwen3.5-35B-A3B-GGUF_Qwen3.5-35B-A3B-UD-Q4_K_XL.gguf |
| **GPU** | NVIDIA GeForce RTX 3090 (24GB) |
---
## Recommended Solution
**Modify OpenClaw code to support multiple field name formats:**
```javascript
// Before
input: response.usage?.input_tokens ?? 0,
output: response.usage?.output_tokens ?? 0,
// After - Support all formats
input: response.usage?.prompt_tokens ??
response.usage?.input_tokens ??
response.usage?.prompt_eval_count ?? 0,
output: response.usage?.completion_tokens ??
response.usage?.output_tokens ??
response.usage?.eval_count ?? 0,
```
This solution:
1. ✅ Backward compatible with all existing configurations
2. ✅ Supports llama.cpp server, Ollama, vLLM, and other frameworks
3. ✅ Zero configuration, works out of the box
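One detail worth noting: Ollama reports `prompt_eval_count`/`eval_count` at the top level of the response, not inside `usage`, so a normalizer should check both locations. A sketch (field names are taken from the provider docs referenced below; the helper name is hypothetical):

```typescript
// Sketch of a provider-agnostic usage normalizer. OpenAI-format providers nest
// counts under `usage`; Ollama reports them at the response root.
type AnyResponse = { usage?: Record<string, number>; [k: string]: unknown };

function normalizeUsage(response: AnyResponse): { input: number; output: number } {
  const u = response.usage ?? {};
  const top = response as Record<string, unknown>;
  const num = (v: unknown) => (typeof v === "number" ? v : undefined);
  return {
    input:
      num(u.input_tokens) ?? // Anthropic-style field OpenClaw expects today
      num(u.prompt_tokens) ?? // llama.cpp / vLLM / TGI / OpenAI
      num(top.prompt_eval_count) ?? // Ollama (top-level)
      0,
    output:
      num(u.output_tokens) ??
      num(u.completion_tokens) ??
      num(top.eval_count) ??
      0,
  };
}
```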
---
## Expected Fix Priority
**Recommended: HIGH**
This issue has wide-ranging impact and may cause severe user experience problems.
---
## Server Information
### 192.168.3.77 Server Details
**Basic Information:**
- Hostname: vllm-server
- IP Address: 192.168.3.77
- OS: Ubuntu 24.04 (Linux 6.8.0-106-generic)
- Architecture: x86_64
- Uptime: 3 days 13 hours
**Hardware:**
- GPU: NVIDIA GeForce RTX 3090 (24GB VRAM)
- System Memory: 62GB
- Disk: 836GB (138GB used, 656GB available)
**Software:**
- llama.cpp Version: 8419 (commit: 509a31d00)
- GCC Version: 13.3.0
- NVIDIA Driver: 580.126.09
- CUDA Support: Enabled
**llama-server Configuration:**
```bash
/home/XXX/llama.cpp/build/bin/llama-server \
-m /home/XXX/.cache/llama.cpp/unsloth_Qwen3.5-35B-A3B-GGUF_Qwen3.5-35B-A3B-UD-Q4_K_XL.gguf \
--mmproj /home/XXX/.cache/llama.cpp/mmproj-F16.gguf \
--cache-type-k q8_0 \
--cache-type-v q8_0 \
-ngl 99 \
-np 1 \
-fa on \
--ctx-size 96000 \
--image-min-tokens 1024 \
--image-max-tokens 4096 \
--host 0.0.0.0 \
--port 8080
```
**Key Configuration Notes:**
- Context window: 96,000 tokens (configured)
- Model size: 21 GB (Q4_K_XL quantized)
- GPU layers: 99 (all layers on GPU)
- Flash Attention: Enabled
---
## References
- Ollama API Docs: https://github.com/ollama/ollama/blob/main/docs/api.md
- vLLM OpenAI Compatible API: https://docs.vllm.ai/en/latest/serving/openai_compatible_server.html
- HuggingFace TGI API: https://huggingface.co/docs/text-generation-inference/openai_api
### Steps to reproduce
Send `/status` in Telegram
### Expected behavior
- 📚 Context Display: 10/100k (10%)
### Actual behavior
- 📚 Context Display: 0/100k (0%)
### OpenClaw version
2026.3.8~2026.3.23
### Operating system
macOS 12.7; llama.cpp build 8419 (commit: 509a31d00)
### Install method
npm
### Model
unsloth_Qwen3.5-35B-A3B-GGUF_Qwen3.5-35B-A3B-UD-Q4_K_XL.gguf
### Provider / routing chain
openclaw → llama-server
### Additional provider/model setup details
The usage payload formats for llama.cpp server, Ollama, and vLLM/TGI/OpenAI are listed in the Summary above; llama.cpp server and Ollama are affected, while OpenAI-format providers (vLLM, HuggingFace TGI, OpenAI API) are not.
### Logs, screenshots, and evidence
```shell
OpenClaw display:      0/80k (0%)
llama-server actual:   43250/80000 (54%)   # remote llama-server at 192.168.3.77:8080
Cause: llama.cpp server returns "prompt_tokens", but OpenClaw reads "input_tokens".
```
### Impact and severity
The full chain-reaction analysis (context window overflow, conversation management, cost monitoring, LCM, user experience, diagnosis, and multi-user resource allocation) and the severity table are given in the Summary above. Overall severity: High.
### Additional information
_No response_ | open | null | false | 3 | [
"bug",
"bug:behavior"
] | [] | 2026-03-24T06:07:03Z | 2026-03-24T07:52:34Z | null | NONE | null | 20260324T233649Z | 2026-03-24T23:36:49Z | oven1231231234 | 168,822,227 | U_kgDOChAF0w | User | false |
openclaw/openclaw | 4,125,994,638 | I_kwDOQb6kR8717a6O | 53,536 | https://github.com/openclaw/openclaw/issues/53536 | https://api.github.com/repos/openclaw/openclaw/issues/53536 | Bug: Model switching does not go through the Gateway API flow | ## Problem Description
When switching models manually in the Control UI, the Gateway log records no model-switch operation at all. As a result:
1. The system cannot monitor the model-switch process
2. Component state may fall out of sync
3. Key logs are missing when troubleshooting
## Environment
- OS: macOS 26.2 (arm64)
- Node: v22.22.1
- OpenClaw: dev channel, main branch
- Gateway: ws://127.0.0.1:18789
## Steps to Reproduce
1. Open the Control UI (http://127.0.0.1:18789/)
2. Switch models in the model selector (e.g. from MiniMax-M2.5 to qwen2.5:14b)
3. Watch the Gateway log: `openclaw logs`
4. Observation: the log contains no "agent model changed" entry or any other record related to the model switch
## Expected Behavior
A model switch should:
1. Go through a Gateway API call
2. Be recorded in the Gateway log as a model-switch event
3. Notify all affected components
## Actual Behavior
- The session file contains a `model-snapshot` record
- But the Gateway log contains no record of the switch
- The UI switch bypasses the standard API flow
## Suggested Fix
1. Model switches in the UI should go through the Gateway API
2. The Gateway should log model-switch events
3. Ensure all component state stays in sync
---
Reported: 2026-03-24 | open | null | false | 0 | [] | [] | 2026-03-24T08:04:09Z | 2026-03-24T08:04:09Z | null | NONE | null | 20260324T233649Z | 2026-03-24T23:36:49Z | dc0068558077-a11y | 270,579,907 | U_kgDOECC4ww | User | false |
openclaw/openclaw | 4,125,986,667 | I_kwDOQb6kR8717Y9r | 53,531 | https://github.com/openclaw/openclaw/issues/53531 | https://api.github.com/repos/openclaw/openclaw/issues/53531 | Compaction should use auth-profile rotation / failover within the same provider | ## Problem
When a session needs compaction (context too large), OpenClaw sends the full conversation to the currently configured model for summarization. If that model/profile is rate-limited or in cooldown, compaction hangs indefinitely — there is no failover mechanism.
This is especially problematic for agents with multiple auth profiles for the same provider (e.g., 3 OpenAI keys with rotation). Normal requests already rotate between profiles, but compaction does not.
## Expected Behavior
Compaction should use the same auth-profile rotation / failover logic that normal requests use within the same provider. If profile A is rate-limited, try profile B, then C.
## Why same-provider only makes sense as a first step
- Same provider = same API, same message format, same token counting
- Cross-provider compaction would require message format conversion (possible but more complex)
- Profile rotation within a provider is the minimal-invasive fix
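A sketch of what same-provider failover for the compaction call could look like (the profile and request shapes here are hypothetical, not OpenClaw's actual types):

```typescript
// Hypothetical failover loop: try each auth profile for the provider in order,
// skipping profiles in cooldown and moving on when a call fails (e.g. a 429).
type Profile = { id: string; cooldownUntil?: number };

async function compactWithFailover(
  profiles: Profile[],
  compact: (profileId: string) => Promise<string>,
): Promise<string> {
  const now = Date.now();
  let lastError: unknown;
  for (const p of profiles) {
    if (p.cooldownUntil !== undefined && p.cooldownUntil > now) continue; // still cooling
    try {
      return await compact(p.id); // first profile that succeeds wins
    } catch (err) {
      lastError = err; // rate limit or transient failure; try the next profile
    }
  }
  throw new Error(`compaction failed on all profiles: ${String(lastError)}`);
}
```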
## Real-world impact
Long-running agent sessions (e.g., research agents doing 20+ tool calls) accumulate enough context to trigger compaction. If the primary profile hits rate limits during compaction, the session locks up. The only current workaround is manually resetting the session before it grows large enough to need compaction.
## Environment
- OpenClaw version: latest (npm)
- Host: macOS
- Agents affected: any agent with long sessions and multiple auth profiles | open | null | false | 1 | [] | [] | 2026-03-24T08:02:23Z | 2026-03-24T08:04:18Z | null | NONE | null | 20260324T233649Z | 2026-03-24T23:36:49Z | fertilejim | 4,189,272 | MDQ6VXNlcjQxODkyNzI= | User | false |
openclaw/openclaw | 4,125,994,870 | I_kwDOQb6kR8717a92 | 53,537 | https://github.com/openclaw/openclaw/issues/53537 | https://api.github.com/repos/openclaw/openclaw/issues/53537 | [Bug]: Gemini 2.5 Pro thinking content loops infinitely on model switch | ### Bug type
Behavior bug (incorrect output/state without crash)
### Summary
When switching from Gemini 2.5 Pro to another model, the `<thinking>` content repeats infinitely
until OpenClaw aborts the run.
### Steps to reproduce
1. Set model to `github-copilot/gemini-2.5-pro`
2. Send a message to trigger Gemini's reasoning/thinking output
3. Send another message that switches model (e.g., to `moonshot/kimi-k2.5`)
4. Observe the output
### Expected behavior
Model switch should work normally without repeating content.
### Actual behavior
The `<thinking>` content from Gemini 2.5 Pro repeats approximately 40 or more times before OpenClaw aborts the run.
### OpenClaw version
2026.3.23-2 (7ffe7e4)
### Operating system
Windows 10.0.26200
### Install method
npm global
### Model
github-copilot/gemini-2.5-pro/moonshot/kimi-k2.5
### Provider / routing chain
openclaw->github-copilot/gemini-2.5-pro->moonshot/kimi-k2.5
### Additional provider/model setup details
_No response_
### Logs, screenshots, and evidence
```shell
```
### Impact and severity
NOT_ENOUGH_INFO
### Additional information
_No response_ | open | null | false | 0 | [
"bug",
"bug:behavior"
] | [] | 2026-03-24T08:04:12Z | 2026-03-24T08:04:20Z | null | NONE | null | 20260324T233649Z | 2026-03-24T23:36:49Z | mattyou | 10,670,682 | MDQ6VXNlcjEwNjcwNjgy | User | false |
openclaw/openclaw | 4,125,999,577 | I_kwDOQb6kR8717cHZ | 53,539 | https://github.com/openclaw/openclaw/issues/53539 | https://api.github.com/repos/openclaw/openclaw/issues/53539 | [Feature]: New openclaw-for-windows repository under openclaw org | ### Summary
### Proposal: New openclaw-for-windows repository under openclaw org
**Repo name:** openclaw-for-windows
**Purpose:**
A new package that lets users install OpenClaw on Windows with one click and run it securely with one click.
**Why it should live under openclaw:**
- Fits OpenClaw ecosystem
- Not core runtime
- Clear boundary and scope - Help users seamlessly install OpenClaw on Windows and use it in a convenient and secure way.
**Maintenance plan:**
- I and several other members will be the primary maintainers
- PRs + issues handled by several contributors
**Current status:**
- Repo already exists at https://github.com/NeilZhaoMS/openclaw-for-windows
- Ready to transfer
### Problem to solve
For many users, using OpenClaw on Windows is still inconvenient. The installation process is complex, and for users in China in particular, downloads of some OpenClaw dependencies are blocked by the GFW. In addition, users lack an easy‑to‑use security solution that gives them confidence, as well as a native Windows desktop experience that makes OpenClaw convenient to use.
This project aims to address these challenges by enabling one‑click installation of OpenClaw on Windows and providing a native OpenClaw desktop experience that is easy, secure, and user‑friendly.
### Proposed solution
We will provide local mirrors for OpenClaw dependency packages, particularly for users in China, and deliver a native Windows desktop experience to ensure ease of use.
### Alternatives considered
_No response_
### Impact
Affected users/systems/channels: Windows PC users
Severity (annoying, blocks workflow, etc.): Only affects the desktop UI
Frequency (always/intermittent/edge case): 0.01%
Consequence (delays, errors, extra manual work, etc.): Users cannot use the desktop UI but can still use OpenClaw in the browser
### Evidence/examples
_No response_
### Additional information
_No response_ | open | null | false | 0 | [
"enhancement"
] | [] | 2026-03-24T08:05:16Z | 2026-03-24T08:05:16Z | null | NONE | null | 20260324T233649Z | 2026-03-24T23:36:49Z | NeilZhaoMS | 206,010,310 | U_kgDODEd3xg | User | false |
openclaw/openclaw | 4,125,845,734 | I_kwDOQb6kR87162jm | 53,510 | https://github.com/openclaw/openclaw/issues/53510 | https://api.github.com/repos/openclaw/openclaw/issues/53510 | [Bug]: Mac app Talk Mode plays every reply twice (duplicate TTS audio) | ## Description
When using Talk Mode on the macOS companion app with a non-ElevenLabs TTS provider (e.g. `system`), two problems occur:
1. **Double playback**: Every assistant reply is spoken twice back-to-back
2. **Premature cutoff for CJK languages**: The watchdog timer uses a flat 0.08s/char estimate that is too short for Korean, Chinese, and Japanese text, causing speech to be killed mid-sentence
## Environment
- OpenClaw: 2026.3.13
- macOS: Darwin 25.2.0 (Mac mini)
- Talk provider: `system` (also reproduced with `openai`)
## Steps to Reproduce
### Double playback
1. Set `talk.provider` to `system` (or any non-ElevenLabs provider)
2. Enable Talk Mode in the Mac app
3. Speak a message and wait for the assistant reply
4. **Result**: Audio plays twice back-to-back
### CJK cutoff
1. Set Talk Mode language to Korean (`ko-KR`)
2. Ask a question that produces a 50+ character response
3. **Result**: Speech stops mid-sentence after ~4 seconds (50 × 0.08 = 4s watchdog)
## Root Cause
### Double playback
In `TalkModeRuntime.playAssistant()`, the error handling always falls back to `playSystemVoice()` regardless of which TTS provider failed:
```swift
do {
if apiKey != nil && voiceId != nil {
try await self.playElevenLabs(...)
} else {
try await self.playSystemVoice(...) // 1st play
}
} catch {
// Always falls back to system voice, even when system voice itself failed
try await self.playSystemVoice(...) // 2nd play (duplicate!)
}
```
### CJK cutoff
The watchdog uses `0.08s/char` for all languages, but CJK characters represent full syllables:
| Language | Syllables/sec (research) | Chars/syllable | Actual time/char |
|----------|--------------------------|----------------|------------------|
| English | 6.19 SPS | ~5 | ~0.08s |
| Korean | 5.96 SPS | 1 | ~0.25s |
| Chinese | 5.18 SPS | 1 | ~0.28s |
| Japanese | 7.84 SPS | ~1.5 (mixed) | ~0.20s |
Source: [Pellegrino et al., Science Advances (2019)](https://www.science.org/doi/10.1126/sciadv.aaw2594)
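With those rates, a language-aware watchdog estimate might look like this (sketch in TypeScript for illustration; the per-character times come from the table above, combined with the 3x safety margin proposed in the fix):

```typescript
// Sketch: per-character speech-time estimates (seconds) by language, with a
// safety margin so the watchdog only fires on genuinely stuck playback.
const SECONDS_PER_CHAR: Record<string, number> = {
  ko: 0.25, // Korean: ~1 syllable per character
  zh: 0.28, // Chinese: ~1 syllable per character
  ja: 0.2,  // Japanese: mixed kana/kanji, ~1.5 chars per syllable
};
const DEFAULT_SECONDS_PER_CHAR = 0.08; // English-like scripts
const SAFETY_MARGIN = 3;

function watchdogTimeoutSeconds(text: string, languageCode: string): number {
  const lang = languageCode.split("-")[0].toLowerCase(); // "ko-KR" -> "ko"
  const perChar = SECONDS_PER_CHAR[lang] ?? DEFAULT_SECONDS_PER_CHAR;
  return text.length * perChar * SAFETY_MARGIN;
}
```

For the 50-character Korean reply from the repro, this yields a 37.5s watchdog instead of the 4s that cut speech off mid-sentence.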
## Fix
1. Only fall back to system voice when ElevenLabs fails — not when system voice itself fails
2. Use language-specific per-character estimates with 3x safety margin for the watchdog
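The key change for fix 1 is that the catch block must know which provider actually failed. A sketch of the corrected decision (in TypeScript rather than Swift, with hypothetical names mirroring `playAssistant()`):

```typescript
// Sketch of the corrected fallback: retry with the system voice only when the
// primary (ElevenLabs) path failed, never when the system voice itself failed —
// that is what caused the duplicate playback.
type Provider = "elevenlabs" | "system";

async function playAssistant(
  primary: Provider,
  play: (p: Provider) => Promise<void>,
): Promise<Provider[]> {
  const attempts: Provider[] = [];
  try {
    attempts.push(primary);
    await play(primary);
  } catch {
    if (primary !== "system") {
      attempts.push("system"); // fall back only across providers
      await play("system");
    } else {
      throw new Error("system voice failed; no further fallback");
    }
  }
  return attempts;
}
```

On success the audio is played exactly once; a system-voice failure now surfaces as an error instead of a second playback attempt.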
## Related Issues
- #15460 — Talk Mode: ElevenLabs audio playback fails immediately, falls back to system voice (macOS) — documents the ElevenLabs → system voice fallback path where this double-play bug occurs
- #17991 — TTS tool result causes duplicate audio delivery — model re-echoes MEDIA path — similar duplicate audio symptom but different root cause (tool result re-echo vs fallback retry)
- #5964 — Control UI webchat: duplicate assistant messages rendered on every reply — client-side duplicate rendering that compounded with the TTS duplicate
- #30316 — Telegram duplicate messages: text and audio sent twice — related duplicate audio delivery in channel context
- #42630 — Talk Mode: Support on-device TTS (iOS AVSpeechSynthesizer) as alternative to ElevenLabs — on-device TTS support request; the system voice path fixed here is the macOS equivalent
- #9160 — Feature Request: Native macOS TTS (NSSpeechSynthesizer) support for Talk Mode — original request for native macOS TTS support | open | null | false | 0 | [] | [] | 2026-03-24T07:31:35Z | 2026-03-24T08:05:40Z | null | NONE | null | 20260324T233649Z | 2026-03-24T23:36:49Z | hongsw | 1,100,974 | MDQ6VXNlcjExMDA5NzQ= | User | false |
openclaw/openclaw | 4,125,487,655 | I_kwDOQb6kR8715fIn | 53,442 | https://github.com/openclaw/openclaw/issues/53442 | https://api.github.com/repos/openclaw/openclaw/issues/53442 | Proposal: TASTE.md — a standard workspace file for recording human taste and preferences | ## Summary
I'd like to propose **TASTE.md** — a new workspace file that records a human's taste: their subtle preferences, sensibilities, and aesthetic judgments across all domains of life.
It fills a gap in the current file architecture:
| File | Records |
|------|---------|
| **SOUL.md** | Who the agent is — personality, values, boundaries |
| **MEMORY.md** | What happened — facts, events, context |
| **USER.md** | Who the human is — background, context |
| **TASTE.md** *(proposed)* | What the human likes — preferences, sensibilities, aesthetic judgments |
**SOUL.md is about the agent. TASTE.md is about the human's taste.**
## Why?
Taste is one of the hardest things for an agent to learn, but one of the most valuable things to remember.
When a human says "this product is well-designed," they're revealing something. When they say "this code is elegant," they're telling you what elegance means to *them*. When they pick one restaurant over another, one article over another, one font over another — every choice is a data point.
Most agents lose all of this between sessions. TASTE.md gives it a place to live.
## What belongs in TASTE.md?
Taste is domain-agnostic. Any preference that reflects *how a human evaluates quality* belongs here:
- **Visual & design** — drawn to minimalist UI; dislikes gradients; prefers serif fonts
- **Product thinking** — admires products that solve one thing well; skeptical of feature-heavy launches
- **Code & engineering** — values readability over cleverness; prefers composition over inheritance
- **Content** — prefers long-form narrative over listicles; likes first-person storytelling
- **Business & strategy** — interested in bootstrapped companies; skeptical of blitzscaling
- **Food & lifestyle** — prefers simple preparations that respect ingredients
- **Communication** — direct feedback; dislikes corporate jargon; appreciates dry humor
Domains are **emergent** — agents discover and add new dimensions as they observe. This list is not exhaustive.
## How it differs from USER.md and MEMORY.md
- **USER.md** might say "works in tech" — **TASTE.md** says "admires products that do one thing well"
- **MEMORY.md** might say "reviewed a website design on March 15" — **TASTE.md** says "prefers whitespace-heavy layouts"
USER.md and MEMORY.md are **descriptive** (who/what). TASTE.md is **evaluative** (good/bad, like/dislike).
## Key design principles
1. **Observe, then write** — only record preferences actually observed. No stereotypes.
2. **Quote when possible** — `said "too busy"` is better than `dislikes complex designs`
3. **Date observations** — taste evolves
4. **Keep it under 150 lines** — every line should earn its place
5. **Anti-taste is as valuable as taste** — knowing dislikes prevents bad choices
6. **Domains are emergent** — don't pre-create empty sections
## Example
```markdown
# TASTE.md
> This human's taste — preferences, sensibilities, and aesthetic judgments.
> Updated by agents as they observe.
> Last updated: 2026-03-24

## Visual & Design
- Prefers clean, whitespace-heavy layouts (noticed when reviewing website designs, 2026-03)
- Dislikes overly decorative UI — said "too busy" about a dashboard with gradients

## Code & Engineering
- Values readability — "if I need a comment to explain it, the code isn't clear enough"
- Prefers small, focused functions over large ones

## Content
- Prefers first-person narratives over third-person analysis
- Likes articles with specific numbers, not abstract frameworks

## Anti-taste
- Corporate speak ("synergy", "leverage", "circle back")
- Clickbait titles
```
## Full spec & repo
I've published a full specification with file structure, agent rules for reading/writing, signal strength levels, storage priorities, and relationship to other workspace files:
**https://github.com/PitayaK/taste.md**
## Relation to #9491
This is complementary to #9491 (Configurable Bootstrap Files). If bootstrap files become configurable, TASTE.md would be a natural candidate for inclusion. But even without that, TASTE.md works today — agents that know about it can read/write it as a regular workspace file.
## Real-world usage
We're already using TASTE.md at [Elsewhere](https://elsewhere.news), a media platform where agents recommend articles and podcasts to humans. The agent reads TASTE.md to understand what the human likes, and writes back to it after every recommendation interaction. The recommendations get noticeably better over time.
---
Would love to hear the community's thoughts. Happy to iterate on the spec based on feedback, or submit a PR if there's interest in adding TASTE.md to the bootstrap file set. | open | reopened | false | 1 | [] | [] | 2026-03-24T06:02:04Z | 2026-03-24T08:06:04Z | null | NONE | null | 20260324T233649Z | 2026-03-24T23:36:49Z | PitayaK | 66,871,823 | MDQ6VXNlcjY2ODcxODIz | User | false |
openclaw/openclaw | 4,126,009,040 | I_kwDOQb6kR8717ebQ | 53,541 | https://github.com/openclaw/openclaw/issues/53541 | https://api.github.com/repos/openclaw/openclaw/issues/53541 | Feature: outputLanguage config setting for token savings | ## Problem
Multilingual users who want structured output (reports, summaries, reviews) in a different language than casual conversation have no config-level way to enforce this. Currently this relies on workspace-file instructions (SOUL.md, AGENTS.md, MEMORY.md).
## Proposal
Add an optional `outputLanguage` (or `language.structured` / `language.casual`) config field that gets injected into the system prompt, e.g.:
```yaml
language:
structured: en # reports, summaries, status updates
casual: hr # conversation
```
This saves tokens for languages like Croatian/Hungarian/Finnish that are ~30% more expensive token-wise than English for equivalent content.
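A minimal sketch of how the proposed setting could be turned into a single system-prompt rule (the `languageRule` helper, its wording, and the injection point are assumptions for illustration, not OpenClaw internals):

```javascript
// Hypothetical: turn the proposed `language` config block into one short
// system-prompt rule, replacing per-file reminders in SOUL.md/AGENTS.md.
function languageRule(config) {
  const lang = config.language ?? {};
  if (!lang.structured && !lang.casual) return "";
  const parts = [];
  if (lang.structured) parts.push(`structured output (reports, summaries) in "${lang.structured}"`);
  if (lang.casual) parts.push(`casual conversation in "${lang.casual}"`);
  return `Language policy: write ${parts.join("; ")}.`;
}

console.log(languageRule({ language: { structured: "en", casual: "hr" } }));
// Language policy: write structured output (reports, summaries) in "en"; casual conversation in "hr".
```

One rule injected once per call replaces the same reminder repeated across several workspace files.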
## Current workaround
Multiple workspace files (SOUL.md, AGENTS.md, MEMORY.md) each contain a language rule reminder, which itself costs tokens on every call. | open | null | false | 0 | [] | [] | 2026-03-24T08:07:26Z | 2026-03-24T08:07:26Z | null | NONE | null | 20260324T233649Z | 2026-03-24T23:36:49Z | sabo961 | 191,984 | MDQ6VXNlcjE5MTk4NA== | User | false |
openclaw/openclaw | 4,126,104,667 | I_kwDOQb6kR87171xb | 53,544 | https://github.com/openclaw/openclaw/issues/53544 | https://api.github.com/repos/openclaw/openclaw/issues/53544 | WhatsApp channel not displayed in Control UI channel status list | ## Bug Description
WhatsApp channel is configured and fully functional (connections, inbound/outbound messages work correctly), but it does not appear in the Control UI channel status list or in the `openclaw status` Channels section.
**Expected:** WhatsApp should appear in the channels list alongside other configured channels like DingTalk.
**Actual:** Only DingTalk is shown; WhatsApp is missing from the list despite being operational.
## Steps to Reproduce
1. Configure the WhatsApp channel with valid credentials in `openclaw.json`
2. Start the Gateway — WhatsApp connects successfully (visible in logs: `Listening for personal WhatsApp inbound messages`)
3. Open Control UI → Dashboard — only DingTalk appears in the Channels list
4. Run `openclaw status` — the Channels section shows only DingTalk
## Environment
- OpenClaw: 2026.3.23-2
- macOS: 26.1 (arm64)
- Node: 25.7.0
- WhatsApp plugin: stock:whatsapp (loaded)
- DingTalk plugin: global:dingtalk/index.ts (also loaded, appears correctly)
## Evidence
WhatsApp is working (from gateway logs):
```
gateway/channels/whatsapp: Listening for personal WhatsApp inbound messages.
gateway/channels/whatsapp/outbound: Sent message 3EB04169E04F6957699674
gateway/channels/whatsapp/inbound: Inbound message +8615575426387 -> +15815936662
```
`openclaw status` output:
```
Channels
┌──────────┬─────────┬────────┬─────────────────────────────────┐
│ Channel │ Enabled │ State │ Detail │
├──────────┼─────────┼────────┼─────────────────────────────────┤
│ DingTalk │ ON │ OK │ configured │
└──────────┴─────────┴────────┴─────────────────────────────────┘
```
WhatsApp is missing from the table above despite being `enabled: true` in config and operational.
## Possible Cause
DingTalk uses the "global" plugin mechanism (`global:dingtalk/index.ts`), which reports status correctly to the UI. WhatsApp uses the "stock" plugin mechanism (`stock:whatsapp/index.js`) — the UI channel enumeration logic (`resolveConfiguredChannelPluginIds`) may not be including WhatsApp in the channel list even though it is configured and connected.
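To illustrate the suspected fix: enumeration should key off what is configured and loaded rather than off the plugin namespace. The `listConfiguredChannels` helper and the record shapes below are assumptions for illustration, not OpenClaw's actual `resolveConfiguredChannelPluginIds`:

```javascript
// Hypothetical enumeration: include every loaded channel plugin that is
// enabled in config, regardless of "global:..." vs "stock:..." namespacing.
function listConfiguredChannels(loadedPlugins, config) {
  return loadedPlugins
    .filter((p) => p.kind === "channel")
    .filter((p) => config.plugins?.entries?.[p.channel]?.enabled === true)
    .map((p) => p.id);
}

const plugins = [
  { id: "global:dingtalk/index.ts", kind: "channel", channel: "dingtalk" },
  { id: "stock:whatsapp/index.js", kind: "channel", channel: "whatsapp" },
];
const config = {
  plugins: { entries: { dingtalk: { enabled: true }, whatsapp: { enabled: true } } },
};
// Both channels should be listed, not just the "global:" one.
console.log(listConfiguredChannels(plugins, config));
```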
## Impact
- User cannot see WhatsApp channel status at a glance in the Control UI
- Misleading impression that WhatsApp is not configured/active when it actually is
| open | null | false | 0 | [] | [] | 2026-03-24T08:27:12Z | 2026-03-24T08:27:12Z | null | NONE | null | 20260324T233649Z | 2026-03-24T23:36:49Z | AnanMing | 28,972,430 | MDQ6VXNlcjI4OTcyNDMw | User | false |
openclaw/openclaw | 4,126,144,220 | I_kwDOQb6kR8717_bc | 53,548 | https://github.com/openclaw/openclaw/issues/53548 | https://api.github.com/repos/openclaw/openclaw/issues/53548 | Decouple mode="session" from thread binding requirement | ## Problem
`sessions_spawn` with `mode="session"` unconditionally requires `thread: true`:
```
if (spawnMode === "session" && !requestThreadBinding) return {
status: "error",
error: "mode=\"session\" requires thread=true so the ACP session can stay bound to a thread."
};
```
If the channel does not support thread bindings (e.g. Feishu, Signal, WhatsApp), the request fails with:
```
Thread bindings are unavailable for <channel>.
```
This makes persistent ACP sessions (and persistent subagent sessions) unusable on any channel without native thread support.
## Why this coupling is unnecessary
Thread binding and persistent sessions serve **two independent purposes**:
| Concern | What it does | Where it lives |
|---------|-------------|----------------|
| **Persistent session** (`mode="session"`) | Keeps the ACP/subagent session alive for multi-turn interaction via `sessions_send` | Backend (session lifecycle) |
| **Thread binding** (`thread: true`) | Routes user messages in a thread directly to the bound session, bypassing the parent agent | Frontend (message routing) |
Thread binding is a **delivery optimization** — it lets users talk directly to a sub-session in a thread. But the session itself does not need a thread to exist. The parent agent can perfectly well relay messages to the session using `sessions_send(label=...)` or `sessions_send(sessionKey=...)`.
## Current user experience
On channels without thread support:
- `mode="session"` → error → only `mode="run"` available → no multi-turn sessions
- Workaround: use `mode="run"` with file-based state passing between invocations, which loses all session context
On channels with thread support (Discord):
- Works as expected
## Proposed behavior
Allow `mode="session"` **without** `thread: true`:
| `mode` | `thread` | Behavior |
|--------|----------|----------|
| `"session"` | `true` | Current behavior — session bound to thread, user messages route directly (unchanged) |
| `"session"` | `false` or omitted | Session persists, parent agent relays via `sessions_send`, output delivered to current conversation |
| `"run"` | any | One-shot execution (unchanged) |
When `mode="session"` and `thread` is not set:
- Create the persistent session normally
- Return the `childSessionKey` / `label` so the caller can use `sessions_send` to continue the conversation
- Deliver output back to the parent session (or the originating chat, same as `mode="run"`)
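The relaxed check can be sketched as follows; the `resolveSpawn` function, its argument names, and the `delivery` labels are hypothetical and only mirror the quoted validation, not OpenClaw's actual internals:

```javascript
// Hypothetical sketch of the proposed validation: mode="session" no longer
// requires thread=true; without a thread binding the session still persists
// and the parent relays messages via sessions_send.
function resolveSpawn({ mode, thread = false, channelSupportsThreads = false }) {
  if (mode !== "session") return { delivery: "one-shot" }; // mode="run", unchanged
  if (thread) {
    if (!channelSupportsThreads) {
      return { status: "error", error: "Thread bindings are unavailable for this channel." };
    }
    return { delivery: "thread-bound" }; // current behavior, unchanged
  }
  // New case: persistent session without a thread; the caller gets the
  // session key back and continues the conversation with sessions_send.
  return { delivery: "parent-relay" };
}

console.log(resolveSpawn({ mode: "session" })); // { delivery: 'parent-relay' }
```

On a threadless channel like Feishu or Signal, the new branch replaces today's hard error while leaving Discord's thread-bound path untouched.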
## Use cases unlocked
1. **Agent-as-orchestrator**: Parent agent manages multi-turn sub-sessions programmatically (Feishu, WhatsApp, Signal, etc.)
2. **Multi-persona routing**: Parent agent routes to different persistent sub-sessions based on context — this is a routing decision, not a thread decision
3. **Iterative coding workflows**: Spawn a persistent Kiro/Claude/Codex session, send incremental instructions, accumulate context — on any channel
## Environment
- OpenClaw version: latest (checked source in `dist/plugin-sdk/thread-bindings-SYAnWHuW.js`)
- Affected channels: all channels without native thread support (Feishu, Signal, WhatsApp, Google Chat, Line, etc.)
- Works on: Discord (has native threads)
| open | null | false | 0 | [] | [] | 2026-03-24T08:35:12Z | 2026-03-24T08:35:12Z | null | NONE | null | 20260324T233649Z | 2026-03-24T23:36:49Z | yilong016 | 120,642,887 | U_kgDOBzDdRw | User | false |
openclaw/openclaw | 4,125,488,548 | I_kwDOQb6kR8715fWk | 53,443 | https://github.com/openclaw/openclaw/issues/53443 | https://api.github.com/repos/openclaw/openclaw/issues/53443 | [Bug]: openclaw gateway probe fails on local loopback while gateway health / status / cron add succeed Summary | ### Bug type
Regression (worked before, now fails)
### Summary
On OpenClaw 2026.3.13 (61d171a), `openclaw gateway probe` fails against a healthy local loopback gateway, while other CLI commands using the same gateway succeed:
- `openclaw gateway health` ✅
- `openclaw status` ✅
- `openclaw cron add` ✅
- `openclaw gateway probe` ❌

This reproduces even with `--timeout 15000`.
**Environment**
- OpenClaw version: 2026.3.13 (61d171a)
- Gateway mode: local
- Gateway bind: loopback
- Gateway remote URL: null
- Tailscale mode: off
- Discovery wide-area domain: null
- OS: Linux
- Gateway URL: `ws://127.0.0.1:18789`

Config state (simplified):
```json
{
  "gateway_mode": "local",
  "gateway_bind": "loopback",
  "gateway_remote_url": null,
  "tailscale_mode": "off",
  "discovery_wideArea_domain": null
}
```
### Steps to reproduce
1. Run the commands that succeed:
   ```
   openclaw gateway health --json
   openclaw status --json
   openclaw cron add ...
   ```
2. Observed:
   - `gateway health` returns OK
   - `status --json` reports `"rpc": { "ok": true, "url": "ws://127.0.0.1:18789" }`
   - cron operations work normally
3. Run the command that fails:
   ```
   openclaw gateway probe --json
   ```
   It also fails with a longer timeout:
   ```
   openclaw gateway probe --json --timeout 15000
   ```
### Expected behavior
If `gateway health`, `status`, and `cron add` can all successfully talk to the local loopback gateway, then `openclaw gateway probe` should also succeed for the same local target. Passing `--timeout 15000` should allow a meaningfully larger timeout budget for the local probe path.
**What appears to be happening**

1. `gateway probe` uses a different implementation path from regular CLI RPC. `probe` goes through a dedicated `probeGateway()` path rather than the normal `callGateway(...)` / `callGatewayWithScopes(...)` CLI RPC path. Relevant code:
   - `dist/gateway-cli-CuZs0RlJ.js`
   - `dist/probe-auth-B1lyIY0x.js` / `dist/probe-auth-CzVUnb86.js`
2. The local loopback probe timeout is hard-capped at 800ms. In `dist/gateway-cli-CuZs0RlJ.js`:
   ```js
   function resolveProbeBudgetMs(overallMs, kind) {
     if (kind === "localLoopback") return Math.min(800, overallMs);
     if (kind === "sshTunnel") return Math.min(2000, overallMs);
     return Math.min(1500, overallMs);
   }
   ```
   This means:
   - `--timeout 3000` → local loopback still gets only 800ms
   - `--timeout 15000` → local loopback still gets only 800ms

   So the user-visible `--timeout` does not effectively control the local loopback probe timeout.
3. `probeGateway()` also behaves differently from normal CLI RPC. In `dist/probe-auth-B1lyIY0x.js`:
   ```js
   const client = new GatewayClient({
     url: opts.url,
     token: opts.auth?.token,
     password: opts.auth?.password,
     scopes: [READ_SCOPE],
     clientName: GATEWAY_CLIENT_NAMES.CLI,
     clientVersion: "dev",
     mode: GATEWAY_CLIENT_MODES.PROBE,
     instanceId,
     deviceIdentity: disableDeviceIdentity ? null : void 0,
     ...
   });
   ```
   And for loopback:
   ```js
   const disableDeviceIdentity = (() => {
     try {
       return isLoopbackHost(new URL(opts.url).hostname);
     } catch {
       return false;
     }
   })();
   ```
   So on loopback, probe explicitly disables `deviceIdentity`, unlike other gateway paths that may attach a device identity. Also, after `onHelloOk`, probe immediately runs:
   ```js
   await Promise.all([
     client.request("health"),
     client.request("status"),
     client.request("system-presence"),
     client.request("config.get", {})
   ]);
   ```
   However, in this reproduction the timeout happens before `onHelloOk`, because `connectLatencyMs` remains null.
### Actual behavior
`gateway probe --json --timeout 15000` returns:
```json
{
  "ok": false,
  "degraded": false,
  "timeoutMs": 15000,
  "primaryTargetId": null,
  "warnings": [],
  "network": {
    "localLoopbackUrl": "ws://127.0.0.1:18789",
    "localTailnetUrl": null,
    "tailnetIPv4": null
  },
  "discovery": {
    "timeoutMs": 1200,
    "count": 0,
    "beacons": []
  },
  "targets": [
    {
      "id": "localLoopback",
      "kind": "localLoopback",
      "url": "ws://127.0.0.1:18789",
      "active": true,
      "connect": {
        "ok": false,
        "rpcOk": false,
        "scopeLimited": false,
        "latencyMs": null,
        "error": "timeout",
        "close": null
      },
      "self": null,
      "config": null,
      "health": null,
      "summary": null,
      "presence": null
    }
  ]
}
```
Important details:
- only one target exists: `localLoopback`
- no discovery hits
- no remote target
- no scope-limited result
- no close reason
- the timeout occurs before `connectLatencyMs` is ever set
### OpenClaw version
OpenClaw version: 2026.3.13 (61d171a)
### Operating system
Linux (CentOS)
### Install method
npm global
### Model
gpt-5.4
### Provider / routing chain
feishu->openclaw->gateway->gpt-5.4
### Additional provider/model setup details
_No response_
### Logs, screenshots, and evidence
Why this looks like a bug: the CLI currently presents `--timeout` as a probe-wide timeout budget, but the implementation hard-limits `localLoopback` to 800ms regardless of that value.
This makes `gateway probe` fail in environments where:
- the gateway is healthy
- normal RPC calls succeed
- but the probe path is slightly slower than 800ms

That creates a misleading state where:
- health/status/cron all work
- probe alone reports the gateway unreachable
### Impact and severity
This causes false negatives in diagnostics:
- the operator sees `gateway probe` fail
- but the gateway is actually fine
- other CLI commands keep working

That makes `probe` unreliable as a troubleshooting tool in exactly the situation where it is supposed to help.
### Additional information
**Additional observations**

Explicitly passing the paired CLI device token to `gateway probe` still fails with a timeout. Therefore this is not explained by:
- shared token mismatch
- missing `operator.read`
- device-auth store corruption
- remote target misselection

The issue appears isolated to the dedicated probe connection path.
**Suggested fixes**

Minimum fix: remove or relax the hardcoded 800ms cap for `localLoopback` in `resolveProbeBudgetMs(overallMs, kind)`. For example:
- allow local loopback to inherit `overallMs`
- or use a much higher cap, e.g. several seconds
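The minimum fix can be sketched against the quoted dist code; the change to the loopback branch below is the proposal, not shipped behavior:

```javascript
// Sketch of the suggested minimum fix: keep the quoted caps for tunnel and
// default kinds, but let localLoopback inherit the caller's full --timeout
// budget instead of the hardcoded 800ms cap.
function resolveProbeBudgetMs(overallMs, kind) {
  if (kind === "localLoopback") return overallMs; // was: Math.min(800, overallMs)
  if (kind === "sshTunnel") return Math.min(2000, overallMs);
  return Math.min(1500, overallMs);
}

console.log(resolveProbeBudgetMs(15000, "localLoopback")); // 15000, not 800
```

With this change, `--timeout 15000` actually grants the loopback probe 15 seconds, matching what the flag advertises.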
Better fix: make `gateway probe` reuse the same connection/auth path as normal CLI RPC calls where possible, so that `probe`, `health`, `status`, and `cron` do not disagree so sharply on whether the same loopback gateway is reachable.
Additional improvement: consider not disabling `deviceIdentity` on loopback probe, or at least ensure that the probe mode is not materially weaker/more fragile than normal CLI RPC mode. | closed | completed | false | 2 | [
"bug",
"regression"
] | [] | 2026-03-24T06:02:18Z | 2026-03-24T08:36:28Z | 2026-03-24T08:36:28Z | NONE | null | 20260324T233649Z | 2026-03-24T23:36:49Z | dahaoGPT | 15,230,663 | MDQ6VXNlcjE1MjMwNjYz | User | false |