This setup is really two layers running on the same Ubuntu VM: OpenClaw handles direct Slack conversations over Socket Mode, and a set of systemd timers drive GitHub automation lanes that triage issues, fix code, respond to review feedback, and ask me for merge approval.

What is actually running

The host is an Ubuntu VM with daily snapshot backups. OpenClaw is installed from source, connected to Slack through Socket Mode, and configured with Codex as the current primary model and OpenRouter and Gemini available as fallbacks. GitHub access is handled through gh, and the automation around it is managed with user-level systemd services and timers.
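As a rough sketch, one lane's user-level timer/service pair might look like the following. The unit names, script path, and description are hypothetical, not copied from the real setup:

```ini
# ~/.config/systemd/user/bughunter.service  (hypothetical name)
[Unit]
Description=Bughunter lane: scan repos and file issues

[Service]
Type=oneshot
ExecStart=%h/bin/bughunter-lane.sh

# ~/.config/systemd/user/bughunter.timer
[Timer]
OnCalendar=*-*-* 00/2:00:00
Persistent=true

[Install]
WantedBy=timers.target
```

A pair like this would be enabled with `systemctl --user enable --now bughunter.timer`; `Persistent=true` catches up on a run missed while the VM was down.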

That matters because this is not one long-running prompt. It is a collection of small, purpose-built lanes with different cadences and responsibilities.

The useful design choice is separation of concerns: Slack stays conversational, GitHub stays the system of record, and systemd keeps the automation loops predictable.

Correction number one: Slack is event-driven

The main Slack bot is not polling every 90 seconds. OpenClaw is connected through Slack Socket Mode, so DMs and mentions arrive as events. That gives me an always-on chat interface without exposing a public webhook endpoint.

The 90-second poller in this setup belongs to a different lane: a secondary bot named snidegod that watches GitHub Discussions, not Slack.

The timed automation lanes

Bughunter every 2 hours

The bughunter lane runs every two hours. It scans repositories for likely functional defects, regression-coverage gaps, and suspicious workflows, then files duplicate-safe GitHub issues when the evidence is strong enough.

Feature-gap researcher every 12 hours

The feature-gap lane runs every twelve hours. It looks at competitor and ecosystem signals, then opens GitHub issues for meaningful feature gaps or research opportunities rather than dumping loose notes into a doc.

Issue triager every 30 minutes

The issue triager runs twice an hour. It classifies issues, checks for duplicates, assesses reproducibility, and marks whether each issue is ready for the fixer lane.
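The triage decision boils down to a small amount of logic. This is a hypothetical sketch of its shape; the label names, the `Issue` type, and the title-based duplicate check are all assumptions, not the real lane's implementation:

```python
from dataclasses import dataclass, field

@dataclass
class Issue:
    number: int
    title: str
    reproducible: bool
    labels: list = field(default_factory=list)

def triage(issue: Issue, existing_titles: list) -> Issue:
    """Classify an issue and decide whether the fixer lane may pick it up."""
    # Duplicate check first, so duplicates never become fixer work.
    if issue.title.lower() in (t.lower() for t in existing_titles):
        issue.labels.append("duplicate")
        return issue
    issue.labels.append("bug" if issue.reproducible else "needs-repro")
    if issue.reproducible:
        issue.labels.append("fix-ready")  # assumed signal for the fixer lane
    return issue
```

The key property is that only issues explicitly marked ready ever reach the fixer, so the two lanes can run on independent timers.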

Issue fixer every 30 minutes

The issue fixer also runs twice an hour, staggered behind triage. For eligible issues, it prepares a branch, investigates the repo, writes or updates tests, implements the minimal fix, validates, and prepares the change for PR creation.
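The stagger can be expressed directly in the timer stanzas. The 15-minute offset below is an assumption for illustration; the post only says the fixer runs behind triage:

```ini
# triager.timer — hypothetical: runs on the hour and half hour
[Timer]
OnCalendar=*-*-* *:00/30:00
Persistent=true

# fixer.timer — same cadence, offset so triage output is fresh
[Timer]
OnCalendar=*-*-* *:15/30:00
Persistent=true
```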

PR comment watcher every 5 minutes

The PR comment watcher checks GitHub pull request comments every five minutes. It is wired to respond to mentions and commands, and it can also accept trigger comments from review bots like Amazon Q and Codex connectors.
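Distinguishing a human command from review-bot feedback is mostly comment parsing. A hypothetical sketch, assuming a bot mention name, command vocabulary, and reviewer logins that are all illustrative:

```python
import re

# Assumed mention name and command set, not the real wiring.
TRIGGER = re.compile(r"@(?P<bot>openclaw-bot)\s+(?P<cmd>retry|fix|explain)\b", re.I)
REVIEW_BOTS = {"amazon-q", "codex-connector"}  # assumed reviewer logins

def parse_trigger(author: str, body: str):
    """Return an action for a PR comment, or None if it is not a trigger."""
    m = TRIGGER.search(body)
    if m:
        return ("command", m.group("cmd").lower())
    if author.lower() in REVIEW_BOTS:
        # Review-bot comments are handed to the remediation path.
        return ("review-feedback", body)
    return None
```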

PR review remediator every 5 minutes

The review remediator runs on a five-minute cadence anchored to the wall clock, so runs land at predictable times. It reads unresolved PR feedback, applies targeted code or test changes, commits them, and pushes the branch forward.

PR merge prep every 30 minutes

The merge-prep lane runs every thirty minutes. When a PR is clean enough to ship, it sends me a Slack approval request and waits for a simple response like yes or no before merging.
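The important property of the merge gate is that ambiguity never merges. A hypothetical sketch of the reply handling; the accepted vocabulary here is an assumption:

```python
# Assumed yes/no vocabulary for the Slack merge gate.
APPROVE = {"yes", "y", "merge", "ship it"}
REJECT = {"no", "n", "hold"}

def interpret_reply(text: str):
    """Map a human Slack reply to a merge decision, or None if ambiguous."""
    normalized = text.strip().lower().rstrip("!.")
    if normalized in APPROVE:
        return "merge"
    if normalized in REJECT:
        return "hold"
    return None  # ambiguous replies re-prompt instead of merging
```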

How issue work is actually structured

The earlier simplified description was directionally right but not precise enough. The real setup does not just say, “process issues every few hours.” It has separate triage and fixer lanes, and the implementation-heavy lanes use role-based orchestration inside the run.

In the allplays flows, the fixer and PR remediator explicitly call four role subagents before implementation:

  1. Requirements to sharpen the problem and acceptance criteria
  2. Architecture to bound the solution and blast radius
  3. QA to define regression coverage and test expectations
  4. Code to shape the minimal safe patch plan

Only the main execution lane edits files, commits, and pushes. The subagents are there to improve the plan, not to create uncontrolled side effects.
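The orchestration shape is roughly: four planning passes, then a single actor. The role names come from the post; everything else in this sketch (the function shapes, the plan dict, the stub subagent) is assumed:

```python
def run_role(role: str, issue: dict) -> str:
    # In the real lanes each role is a model subagent; a stub stands
    # in here so the orchestration shape is visible.
    return f"{role} notes for #{issue['number']}"

def plan_fix(issue: dict) -> dict:
    """Collect planning input from each role, then hand off to the main lane."""
    roles = ["requirements", "architecture", "qa", "code"]
    plan = {role: run_role(role, issue) for role in roles}
    # Only the main lane acts on the plan: subagents never touch files.
    plan["apply"] = False
    return plan
```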

Where Amazon Q and Codex fit

I do use Amazon Q and Codex in the review loop, but the accurate description is that the GitHub watcher and remediation lanes are built to react to their review feedback, not that one giant script simply “runs Amazon Q and Codex.” The system watches for review activity, incorporates that feedback, and hands it to the remediation path.

The second bot is real, and a little weird

There is also a secondary bot, snidegod, with its own timers and GitHub scope. One of its lanes watches GitHub Discussions every 90 seconds. Another daily timer starts a bot-to-bot conversation in the Slack channel #feelings, where the bots reflect on workload, blockers, and system stress.
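A 90-second poller like snidegod's Discussions lane needs dedupe so each discussion is handled once. A hypothetical sketch, where `fetch_discussions` stands in for the real GitHub call:

```python
import time

def watch(fetch_discussions, handle, interval=90, ticks=None):
    """Poll discussions, handing each new item to `handle` exactly once."""
    seen = set()
    n = 0
    while ticks is None or n < ticks:
        for item in fetch_discussions():
            if item["id"] not in seen:
                seen.add(item["id"])
                handle(item)
        n += 1
        if ticks is None or n < ticks:
            time.sleep(interval)
    return seen
```

The `ticks` parameter is only there to make the loop finite for testing; in production the loop would run forever under its systemd service.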

That sounds gimmicky until you look at the purpose. The point is operational introspection. I want the system to expose overload and queue pressure instead of quietly degrading.

Why I like this pattern

  • Slack stays simple, because the interface is just conversation and approvals
  • GitHub stays authoritative, because issues, PRs, labels, and comments capture the work
  • Automation stays observable, because each lane has its own timer, logs, and failure mode
  • AI work stays bounded, because roles are separated and merges still require human approval

If I were describing it in one line now

I run OpenClaw on an Ubuntu VM as the Slack-facing control layer, then use systemd-scheduled GitHub lanes for bug discovery, feature research, triage, implementation, review remediation, and Slack-based merge approval.