If you’re underwhelmed with AI coding agents or simply want to get more out of them, give parallelization a try. Having seen the results firsthand, I can say the throughput improvements are incredible — and the key insight is that you don’t lose control of the codebase when you do it right. The effectiveness of using Git worktrees for simultaneous agent execution is gaining broader recognition: it’s mentioned in Claude Code’s docs, discussed on Hacker News, implemented in projects like Claude Squad, and actively discussed on X.
Why parallelization works
The core insight is probabilistic. A single LLM has some probability — say 25% — of producing a useful solution to a given UI or feature task on the first try. If you run four agents independently with the same prompt, the probability that at least one succeeds is 1 − (1 − 0.25)^4 = 1 − 0.75^4 ≈ 68%.

Example: Adding a UI component
Here’s a concrete example. When building astrobits, a component library, the task was to add a `Toggle` component. To tackle it, two Claude Code agents and two Codex agents were deployed — all with the same prompt, running in parallel within their own git worktrees.
Worktrees are essential because they provide each agent with an isolated directory, allowing them to execute simultaneously without overwriting each other’s changes.
The results:
| Agent | Result |
|---|---|
| claude-1 | Mostly correct, workable, but pixelated border-image and shadow needed fixing |
| claude-2 | Completely broken — sliding circle too small, wrong color |
| codex-1 | Very wrong — shadow on top, active state on wrong side |
| codex-2 | Unusable — circle color wrong, active side incorrect |
The number of agents you deploy should scale with task complexity. Simple tasks might need only two. Novel or ambiguous tasks might warrant six or more. You’ll develop intuition for this over time.
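The scaling intuition can be sanity-checked numerically. A quick sketch, assuming the 25% per-agent success probability from the example above:

```shell
# P(at least one of n independent agents succeeds) = 1 - (1 - p)^n
awk 'BEGIN {
  p = 0.25                                   # assumed per-agent success rate
  for (n = 1; n <= 6; n++)
    printf "n=%d agents -> %.0f%% chance of at least one success\n", n, 100 * (1 - (1 - p)^n)
}'
```

With p = 0.25, two agents already get you to roughly 44% and four to about 68%, which is why even modest parallelism pays off on ambiguous tasks.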
The current manual workflow
Right now, the parallel workflow goes like this:

- Manually create git worktrees with `git worktree add -b newbranch ../path`
- Start a `tmux` session for each worktree
- Run Claude Code in the first pane and paste the prompt
- Open a new pane with `leader+c`, run `yarn dev` to get a preview
- Switch to the browser to review results
- Repeat for agents that didn’t succeed
- Commit, push, and create a PR for the successful output
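The first two steps can be scripted. Below is a minimal sketch; it builds a throwaway demo repo so it runs anywhere, and the branch, path, and session names (`agent-1`, etc.) are illustrative, not prescribed:

```shell
set -e
# Throwaway demo repo standing in for your real project:
repo="$(mktemp -d)/demo"
git init -q "$repo" && cd "$repo"
git -c user.email=demo@example.com -c user.name=demo commit -q --allow-empty -m init

for i in 1 2; do
  # One isolated checkout per agent, each on its own branch:
  git worktree add -q -b "agent-$i" "../agent-$i"
  # One detached tmux session per worktree (skipped if tmux is unavailable):
  command -v tmux >/dev/null && tmux new-session -d -s "demo-agent-$i" -c "$repo/../agent-$i" || true
done
git worktree list   # main checkout plus one worktree per agent
```

From here, each session gets its own agent and prompt; cleanup is `git worktree remove ../agent-1` plus `tmux kill-session -t demo-agent-1`.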
This works, but several pain points remain:

- Branch tracking confusion. You can’t tell which branch a worktree was most recently rebased onto. If `agent-1` was rebased onto `feature-x` but `agent-2` onto `main`, you lose track without manual notes.
- No broadcast prompting. If all agents are stuck on the same misunderstanding, you have to copy-paste the clarification into each session individually.
- Opening your IDE is clunky. To open VS Code for a given worktree, you have to `tmux a`, `leader + c`, then `code .`. A keyboard shortcut for this would help enormously.
- Port management for previews. Running `yarn dev` in each worktree means mentally tracking which port each worktree is on.
- Committing and PRs are manual. After finding a solution in `agent-3`, you manually attach to that tmux session, then `commit`, `push`, and `gh pr`.
Proposed solution: uzi
To address these challenges, the ideal developer experience would involve a lightweight CLI that wraps tmux and automates the orchestration. Nick and his co-founder Denzell are building exactly this: uzi. The core idea is to abstract away the manual, repetitive tasks while staying close to the existing mechanics, so that using `uzi` feels at home alongside standard Unix tools like `xargs`, `grep`, and `awk`.
Here are some of the planned commands:
- `uzi start` — launch agents
- `uzi ls` — inspect agent state
- `uzi exec` — run commands across worktrees, e.g. `yarn dev` across all agent worktrees simultaneously. No more mental port mapping.
- `uzi broadcast` — send follow-up prompts
- `uzi checkpoint` — commit a specific agent's work
- `uzi kill` — clean up an agent

Under the hood, these commands send `tmux send-keys` instructions to the appropriate sessions — no wheel reinvention, just polish on the existing process.
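Broadcasting is a good illustration of how thin this layer can be. A hypothetical sketch of what a broadcast might reduce to, assuming agent sessions are named `agent-*`:

```shell
# Hypothetical broadcast: type the same follow-up prompt into every agent's
# tmux session via send-keys. The "agent-*" naming is an assumption.
prompt="Clarification: the toggle's active state belongs on the right."
sessions="$(tmux list-sessions -F '#S' 2>/dev/null | grep '^agent-' || true)"
for s in $sessions; do
  tmux send-keys -t "$s" "$prompt" Enter   # paste the prompt and submit it
done
```

If no tmux server or matching sessions exist, the loop is simply a no-op, which is the behavior you want when running it blindly across machines.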
The future is parallel: beyond code
The parallel agent pattern isn’t limited to software. The principle — run multiple agents independently, review the outputs, select and merge the best — applies universally.

Consider transactional law. A company like versionstory, which is pioneering version control for legal documents, could let attorneys run multiple agent instances to redline a contract. After reviewing the outputs, they select and merge the best components, so the final review rests on multiple independent analyses rather than a single agent’s judgment.

Or consider marketing analytics. A team could prompt multiple AI instances to analyze ad performance data, then select the most insightful analyses to inform strategy. More coverage of the solution space leads to better decision-making.

The parallel paradigm is a glimpse into a more efficient, robust future for AI-assisted productivity. For now, the workflow is a bit manual — but it’s already dramatically effective, and tools like `uzi` are making it smoother.