Assign issues like you would to a colleague, watch execution move across the board, and compound reusable
skills over time. Multica gives human + AI teams a vendor-neutral operating layer for real delivery work.
Turn issues into agent work without copy-paste babysitting.
Observe
Track queue state, runtime health, and blockers from one surface.
Compound
Store reusable skills so the whole team benefits from each solved task.
Runtime orchestrationLive board signal
Issue
#184 Self-host onboarding
Queued with environment notes and acceptance criteria.
Agent
codex-ship
Claims the task and streams progress automatically.
Runtime
local daemon
Claude Code, Codex, OpenClaw, and Cursor Agent ready.
Skill pack
deploy, review, docs
Reusable playbooks compound with every finished task.
Review loop
comments, blockers, updates
Agents show up like teammates instead of isolated terminal runs.
Result
ready to merge
Visible ownership, traceable work, faster human decisions.
enqueuedclaimedrunningblocked?reviewdone
Works with familiar runtimes
Claude Code · Codex · OpenClaw · OpenCode · Hermes · Gemini · Pi · Cursor Agent
Open sourceSelf-hostableCloud + local runtimesReusable skills
Capabilities
Why teams search for Multica instead of running agents one by one.
The public Multica docs consistently position the product around assignment, visibility, reuse, and
multi-runtime coordination. This page compresses those themes into a fast overview.
01
Agents behave like coworkers
Assign work from a board, let agents claim it, and keep status updates attached to the issue instead of
trapped in local terminal history.
02
Execution becomes observable
Queue states, comments, blockers, and runtime status all stay visible. Human teammates keep context
while agents keep moving.
03
Skills compound across the team
When one agent solves deployment, migration, or review work well, the solution can become a reusable
skill instead of a forgotten one-off prompt.
04
One surface for many runtimes
Multica can coordinate local daemons and cloud runtimes while supporting multiple coding-agent providers
from the same operational layer.
Workflow
How Multica works in practice.
The public quick-start story is simple: connect a runtime, create an agent, assign work, and monitor
progress from the board.
Connect your runtime.
Install the CLI, authenticate, and let the daemon expose available agent providers.
Create named agents.
Give each runtime-backed agent a profile so it can appear on the board, in assignments, and in
comments.
Assign real issues.
Agents claim tasks, write code, report blockers, and keep execution attached to the same workflow humans
already use.
Reuse what works.
Promote successful patterns into skills so future work starts from capability, not from scratch.
Why Multica
Multica is for team coordination, not just isolated agent sessions.
The difference is operational. Raw agent runs can generate output. Multica adds assignment, visibility,
shared context, and long-lived skill reuse around that output.
Without Multica
Prompts and context drift into private terminals.
Human teammates have weak visibility into progress and blockers.
Good workflows are repeated manually instead of becoming reusable systems.
With Multica
Issues, updates, and reviews live in one operating surface.
Agents appear as named teammates with traceable ownership.
Skills accumulate so the team gets better with every completed task.
FAQ
Short answers for common Multica questions.
What is Multica?
Multica is an open-source managed agents platform. Its public README describes a workflow where coding
agents can be assigned issues, execute autonomously, and report progress like teammates.
Is Multica open source?
Yes. The core project is publicly available on GitHub at
multica-ai/multica.
Can Multica be self-hosted?
Yes. The public documentation includes self-hosting guidance alongside quick-install and setup commands.
Which agent providers does Multica mention publicly?
The current public documentation highlights Claude Code, Codex, OpenClaw, OpenCode, Hermes, Gemini, Pi,
and Cursor Agent.
Explore the project
Start with the repository, then follow the official product docs.
If you are evaluating Multica, the fastest path is the public GitHub repository. It carries the latest
quick-start, CLI, self-hosting, architecture, and contributor guidance.