If you’re underwhelmed by AI coding agents or simply want to get more out of them, give parallelization a try. Having seen the results firsthand, I can say the throughput improvements are incredible — and the key insight is that you don’t lose control of the codebase when you do it right. The effectiveness of using Git worktrees for simultaneous agent execution is gaining broader recognition: it’s mentioned in Claude Code’s docs, discussed on Hacker News, implemented in projects like Claude Squad, and actively discussed on X.

Why parallelization works

The core insight is probabilistic. A single LLM has some probability — say 25% — of producing a useful solution to a given UI or feature task on the first try. If you run four agents independently with the same prompt, the probability that at least one succeeds is:
1 - 0.75^4 ≈ 0.68 (68%)
Four agents give you a 68% chance of getting a workable result; running one gives you 25%. That’s the math that justifies parallelization. With LLMs being so affordable, there’s virtually no downside: the cost difference between running one agent ($0.10) versus four ($0.40) is negligible compared to the 20 minutes of development time saved. The financial risk is minimal, so you can afford to be aggressive.
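As a sanity check, the at-least-one-success probability for n independent agents is easy to compute directly. A minimal sketch (the 25% per-agent success rate is the illustrative figure from above, not a measured constant):

```python
def p_at_least_one(p_success: float, n_agents: int) -> float:
    """Probability that at least one of n independent agents succeeds,
    assuming each attempt is independent with the same success rate."""
    return 1 - (1 - p_success) ** n_agents

for n in (1, 2, 4, 6):
    print(f"{n} agent(s): {p_at_least_one(0.25, n):.0%}")
# 4 agents at 25% each -> 1 - 0.75^4 ≈ 68%
```

The independence assumption is optimistic — agents sharing a prompt and a base model will fail in correlated ways — so treat the number as an upper bound on the benefit.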

Example: Adding a UI component

Here’s a concrete example. When building astrobits, a component library, the task was to add a Toggle component. To tackle it, two Claude Code agents and two Codex agents were deployed — all with the same prompt, running in parallel within their own git worktrees. Worktrees are essential because they provide each agent with an isolated directory, allowing them to execute simultaneously without overwriting each other’s changes. The results:
| Agent | Result |
| --- | --- |
| claude-1 | Mostly correct and workable, but pixelated border-image and shadow needed fixing |
| claude-2 | Completely broken — sliding circle too small, wrong color |
| codex-1 | Very wrong — shadow on top, active state on wrong side |
| codex-2 | Unusable — circle color wrong, active side incorrect |
Only one of the four produced something that saved time. That validates the math: four agents at ~25% each give roughly a 68% chance of getting at least one workable result.
The number of agents you deploy should scale with task complexity. Simple tasks might need only two. Novel or ambiguous tasks might warrant six or more. You’ll develop intuition for this over time.

The current manual workflow

Right now, the parallel workflow goes like this:
  1. Manually create git worktrees with git worktree add -b newbranch ../path
  2. Start a tmux session for each worktree
  3. Run Claude Code in the first pane, paste the prompt
  4. Open a new pane with leader+c, run yarn dev to get a preview
  5. Switch to the browser to review results
  6. Repeat for agents that didn’t succeed
  7. Commit, push, and create a PR for the successful output
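Steps 1–3 above are just shell commands repeated once per agent. A minimal sketch that generates them for n agents — the branch names, paths, and the `claude` launch command are illustrative, not prescribed:

```python
def setup_commands(n_agents: int, base_dir: str = "../agents") -> list[str]:
    """Generate the shell commands for steps 1-3: one worktree plus one
    detached tmux session per agent, with the agent CLI launched inside.
    Branch and directory names here are hypothetical examples."""
    cmds = []
    for i in range(1, n_agents + 1):
        name = f"agent-{i}"
        path = f"{base_dir}/{name}"
        cmds.append(f"git worktree add -b {name} {path}")        # step 1: isolated worktree
        cmds.append(f"tmux new-session -d -s {name} -c {path}")  # step 2: tmux session
        cmds.append(f"tmux send-keys -t {name} claude Enter")    # step 3: start the agent
    return cmds

for cmd in setup_commands(2):
    print(cmd)
```

Printing rather than executing keeps the sketch safe to run anywhere; piping the output to sh would perform the actual setup.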
This works, but it’s cumbersome. The top pain points:
  • Branch tracking confusion. You can’t tell which branch a worktree was most recently rebased onto. If agent-1 was rebased onto feature-x but agent-2 onto main, you lose track without manual notes.
  • No broadcast prompting. If all agents are stuck on the same misunderstanding, you have to copy-paste the clarification into each session individually.
  • Opening your IDE is clunky. To open VS Code for a given worktree, you have to run tmux a, press leader + c, then run code .. A keyboard shortcut for this would help enormously.
  • Port management for previews. Running yarn dev in each worktree means mentally tracking which port each worktree is on.
  • Committing and PRs are manual. After finding a solution in agent-3, you manually attach to that tmux session, then commit, push, and create the PR with gh.

Proposed solution: uzi

To address these challenges, the ideal developer experience is a lightweight CLI that wraps tmux and automates the orchestration. Nick and his co-founder Denzell are building exactly this: uzi. The core idea is to abstract away the manual, repetitive tasks while staying close to the existing mechanics, so that uzi feels at home alongside standard Unix tools like xargs, grep, and awk. Here are some of the planned commands:
uzi start --agents claude:3,codex:2 --prompt "Implement feature X"
Initializes three Claude instances and two Codex instances, each in its own worktree, each with the prompt already running.
uzi ls
Displays all active agents, their target branches, and current statuses. Solves the branch-tracking problem at a glance.
uzi exec --all -- yarn dev
Runs yarn dev across all agent worktrees simultaneously. No more mental port mapping.
uzi broadcast -- "Refine the previous response by focusing on Y"
Sends a follow-up prompt to all active agents at once. Solves the copy-paste problem immediately.
uzi checkpoint --agent claude-1 --message "Implemented initial draft"
Rebases the specified agent’s worktree and commits the changes.
uzi kill --agent codex-2
Cleans up a specific agent’s tmux session and optionally its worktree.
These commands operate via tmux send-keys instructions to the appropriate sessions — no wheel reinvention, just polish on the existing process.
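To make the send-keys mechanism concrete, here is a minimal sketch of how a broadcast-style command could be assembled — one tmux send-keys invocation per active session. The session names are hypothetical, and this is a guess at the approach rather than uzi’s actual implementation:

```python
import shlex

def broadcast_commands(sessions: list[str], prompt: str) -> list[str]:
    """Build one `tmux send-keys` invocation per agent session, quoting
    the prompt so it survives the shell. Session names are illustrative."""
    return [
        f"tmux send-keys -t {s} {shlex.quote(prompt)} Enter"
        for s in sessions
    ]

for cmd in broadcast_commands(["claude-1", "claude-2", "codex-1"],
                              "Refine the previous response by focusing on Y"):
    print(cmd)
```

Because each command is an ordinary tmux invocation, the same output could be fed to xargs or a plain shell loop — consistent with the goal of composing with standard Unix tools.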

The future is parallel: beyond code

The parallel agent pattern isn’t limited to software. The principle — run multiple agents independently, review outputs, select and merge the best — applies universally. Consider transactional law. A company like versionstory, which is pioneering version control for legal documents, could let attorneys run multiple agent instances to redline a contract. After reviewing outputs, they select and merge the best components. The final review is based on multiple independent analyses rather than a single agent’s judgment. Or consider marketing analytics. A team could prompt multiple AI instances to analyze ad performance data, then select the most insightful analyses to inform strategy. More coverage of the solution space leads to better decision-making.
Expect existing software products to gain more powerful version control and parallel execution capabilities in the coming years — emulating what git worktrees enable for software development, but across every knowledge-work domain.
The parallel paradigm is a glimpse into a more efficient, robust future for AI-assisted productivity. For now, the workflow is a bit manual — but it’s already dramatically effective, and tools like uzi are making it smoother.