How We Automated Code Review With Worktrees, a .sh Script, and MCPs — 95% Accuracy - 09/03/2026

How we combined git worktrees, a shell script, and Jira and GitLab MCP servers to automate our code review workflow and hit 95% accuracy.

My team achieved something many consider hard: making code review fast, consistent, and almost fully automated, without losing technical depth. The key was combining three tools that, together, completely transformed how we review code.

The Problem We Had

The classic code review workflow has friction that accumulates:

- Context switching: stashing work in progress and checking out the MR branch breaks your focus.
- Inconsistent depth: each reviewer applies their own, subjective criteria.
- Time: a thorough manual review takes hours, not minutes.

The result: code reviews became a bottleneck, and developers either avoided them or approved them without real depth.

The Solution: Worktrees + .sh Script + Specialized MCPs

1. Git Worktrees: Isolated Spaces Without Switching Branches

The first piece was adopting git worktrees. Instead of git checkout and losing the context of current work, we create a parallel working directory for each MR to review:

# Create an isolated worktree to review an MR
git worktree add ../review-PCF-1539 feature/PCF-1539

# Review in that directory without touching current work
cd ../review-PCF-1539

# When done, clean up
git worktree remove ../review-PCF-1539

Each review happens in its own space, completely isolated. No stash, no conflicts, no interruptions.
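The commands above can be tried end-to-end without touching any real project. This is a self-contained sketch in a disposable repository (the branch name reuses the PCF-1539 example):

```shell
# Disposable repo so the demo touches nothing real
tmp="$(mktemp -d)"
git init -q "$tmp/repo"
cd "$tmp/repo"
git -c user.email=demo@example.com -c user.name=demo \
    commit -q --allow-empty -m "init"
git branch feature/PCF-1539

# Create, inspect, and remove the review worktree
git worktree add ../review-PCF-1539 feature/PCF-1539
git worktree list
git worktree remove ../review-PCF-1539
```

`git worktree list` shows both the main checkout and the review copy while it exists; after `remove`, the directory and its metadata are gone.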

2. The .sh Script: Automating Context Preparation

We built a shell script that, given an MR number or branch, automatically prepares the review environment:

#!/bin/bash
# dod-review.sh — Prepares the environment to review an MR
set -euo pipefail

MR_BRANCH="${1:?Usage: dod-review.sh <branch>}"
WORKTREE_PATH="../review-$(echo "$MR_BRANCH" | tr '/' '-')"

echo "🔧 Creating worktree for $MR_BRANCH..."
git fetch origin "$MR_BRANCH"
git worktree add "$WORKTREE_PATH" "$MR_BRANCH"

echo "📦 Installing dependencies..."
cd "$WORKTREE_PATH" && npm install --silent

echo "✅ Environment ready at $WORKTREE_PATH"
echo "🤖 Starting AI analysis..."

In seconds, the reviewer has a clean environment with dependencies installed, ready for AI analysis.
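Invocation is a one-liner, and the only non-obvious step is how the worktree path is derived from the branch name (slashes become dashes, so the directory is valid):

```shell
# Hypothetical invocation of the script above:
#   ./dod-review.sh feature/PCF-1539

# The path derivation it performs:
branch="feature/PCF-1539"
echo "../review-$(echo "$branch" | tr '/' '-')"
# prints ../review-feature-PCF-1539
```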

3. Specialized MCPs: Jira and GitLab as Sources of Truth

The piece that changed everything was the Model Context Protocol (MCP) servers for Jira and GitLab.

With the GitLab MCP, the AI can:

- Read the MR diff and its metadata directly, without leaving the session.
- Post review comments back on the MR.

With the Jira MCP, the AI can:

- Read the ticket linked to the MR.
- Extract the acceptance criteria and use them as objective review criteria.

// Example of automated flow
1. Script detects MR → creates worktree
2. AI reads Jira ticket (via MCP) → extracts acceptance criteria
3. AI reads GitLab diff (via MCP) → analyzes changes
4. AI applies review skills → generates report
5. AI posts comments on the MR (via MCP)
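Step 2 of the flow (extracting acceptance criteria from the ticket) boils down to something like this. The ticket description and its "AC" line convention are made-up assumptions for illustration, not the Jira MCP's actual output format:

```shell
# Hypothetical ticket description, as the Jira MCP might return it
description='Summary: login form rework
AC1: email field is validated before submit
AC2: a clear error message is shown on failure
Notes: see design doc'

# Keep only the acceptance-criteria lines for the review checklist
printf '%s\n' "$description" | grep '^AC'
```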

Skills Focused on Our Stack

What makes the 95% accuracy possible are specialized skills. We don’t use a generic “review this code” prompt. We have skills defined for our specific technologies.

Each skill is a precise set of instructions the AI applies systematically. No human variability. No forgetting to check something important.
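In practice, a skill can be as simple as a versioned instruction file the AI loads for every review. A minimal sketch, where the file name, location, and checklist contents are all illustrative assumptions (not our actual skills):

```shell
# Everything below is illustrative: the file name and checklist
# contents are assumptions, not real skill definitions
cd "$(mktemp -d)"
mkdir -p skills
cat > skills/npm-review.md <<'EOF'
When reviewing a change in this repo:
1. Check that new dependencies in package.json are pinned and justified.
2. Verify each acceptance criterion from the Jira ticket is covered by a test.
3. Flag leftover console.log calls or commented-out code in the diff.
EOF
```

Because the checklist lives in the repo, it is reviewed and versioned like any other code, which is what removes the human variability.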

The Result: What Changed

| Before | After |
| --- | --- |
| Context switching when reviewing | Isolated worktree, no interruptions |
| Inconsistent reviews | Systematic skills, same standard always |
| Long manual review time | AI covers 95% in minutes |
| Developers avoided reviewing | Review takes minutes, not hours |
| Subjective criteria | Objective criteria from the Jira ticket |

The most important impact: the team is freed to focus on what truly matters — building new features and resolving real bugs — instead of spending hours on mechanical reviews.

The 95% accuracy doesn’t mean AI replaces the developer. It means it reaches 95% of the analysis automatically and consistently, and the human contributes final judgment on the 5% that requires domain expertise.

Thank you so much for making it this far and reading this article. I hope it gave you ideas to improve the code review workflow in your own team.