I Built a CLI to Find the Riskiest Code in Any Repo — Introducing Hotspots
Key Points
What makes Hotspots different from tools like ESLint or SonarQube?
ESLint and SonarQube flag raw complexity and style violations. Hotspots adds the change dimension—functions that are complex but never touched aren't your immediate problem. Functions that are complex *and* actively changing are. Hotspots multiplies structural metrics by git activity to surface the code that's actually dangerous right now.
What is an activity-weighted risk score?
It's a score that combines structural metrics (cyclomatic complexity, nesting depth, fan-out, non-structured exits) with git signals (churn in the last 30 days, touch frequency, recency). A function that is both structurally complex and frequently changed gets a high activity-weighted risk score—indicating it's both hard to modify correctly and actively being modified.
What are the antipattern labels Hotspots detects?
Hotspots v1.2.0 detects two tiers of patterns. Tier 1 (structural) includes complex_branching, deeply_nested, exit_heavy, long_function, and god_function. Tier 2 (relational and temporal) includes hub_function, cyclic_hub, middle_man, neighbor_risk, stale_complex, churn_magnet, shotgun_target, and volatile_god. Each label indicates a specific structural or temporal risk signal.
How do Hotspots CI policy checks work?
Running hotspots in delta mode compares the current state of the codebase against a baseline snapshot. Policy checks can fail a PR if a new function enters the critical band, if an existing function's risk score increases by a large delta, or if rapid complexity growth is detected. This gives you a complexity gate in CI without requiring manual review for every PR.
What is hotspots.dev?
hotspots.dev is a blog I built alongside the CLI that runs automated Hotspots analyses on trending open-source repos nightly, generates AI-written summaries of the findings, and publishes them. It's a real-world showcase of the kinds of structural risks Hotspots surfaces in production codebases.
Every team has those files. The ones everyone knows are dangerous. The ones where a “simple” change takes three days of careful testing. The ones that keep showing up in postmortems.
I spent a long time noticing that pattern—the same 10% of a codebase causing 80% of the pain—and wondering why our tooling didn’t just show us where that 10% was. ESLint can tell you a function has high cyclomatic complexity. But that doesn’t tell you whether it’s actively being changed, who’s touching it, or whether it’s likely to bite you in the next sprint.
So I built Hotspots — a Rust CLI that finds risky code by combining structural complexity with real git activity.
The Core Idea: Complexity × Change
Raw complexity metrics are useful, but they’re incomplete. A gnarly function that hasn’t been touched in two years isn’t your emergency today. The real danger is complexity plus active change — functions that are structurally hard to reason about and are being modified regularly.
Hotspots computes a risk score for every function in your codebase by combining:
- Structural signals: cyclomatic complexity (CC), nesting depth (ND), fan-out (FO), non-structured exits (NS)
- Activity signals: git churn in the last 30 days, touch frequency (commit count), recency, call-graph influence
The result is an activity-weighted risk score — a prioritized list of functions that are both dangerous and active. Not functions you should eventually clean up. Functions you should care about this sprint.
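As a rough sketch of the multiplicative idea, here is one way a structure-times-activity score could be computed. The field names, coefficients, and decay curve below are illustrative assumptions, not Hotspots' actual formula:

```python
from dataclasses import dataclass

@dataclass
class FunctionMetrics:
    cc: int                 # cyclomatic complexity
    nd: int                 # nesting depth
    fo: int                 # fan-out
    ns: int                 # non-structured exits
    churn_30d: int          # lines changed in the last 30 days
    touches: int            # commits touching this function
    days_since_change: int  # recency signal

def risk_score(m: FunctionMetrics) -> float:
    """Illustrative activity-weighted risk: structural base x activity multiplier."""
    structural = m.cc + 2 * m.nd + 0.5 * m.fo + m.ns
    # The activity multiplier grows with churn and touch frequency,
    # and decays as the function goes untouched.
    recency = 1.0 / (1.0 + m.days_since_change / 30.0)
    activity = 1.0 + (m.churn_30d / 100.0 + m.touches / 10.0) * recency
    return structural * activity
```

The key property: a complex but dormant function keeps its structural score with a multiplier near 1.0, while the same structure under heavy recent change is amplified well past it.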
Getting Started
Install with a single curl:
curl -fsSL https://raw.githubusercontent.com/Stephen-Collins-tech/hotspots/main/install.sh | sh
Point it at any repo and run a snapshot:
hotspots analyze . --mode snapshot --format text
You’ll get a ranked output grouped into risk bands:
Critical (risk ≥ 9.0):
processPlanUpgrade src/api/billing.ts:142 risk 12.4 CC 15 ND 4 FO 8
High (6.0 ≤ risk < 9.0):
validateSession src/auth/session.ts:67 risk 9.8 CC 11 ND 3 FO 7
applySchema src/db/migrations.ts:203 risk 8.1 CC 10 ND 2 FO 5
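The band assignment itself is just the thresholds shown above; a trivial classifier (the band name below "high" is my assumption, not necessarily the CLI's label):

```python
def risk_band(score: float) -> str:
    """Map a risk score to the bands shown in the text output."""
    if score >= 9.0:       # Critical: risk >= 9.0
        return "critical"
    if score >= 6.0:       # High: 6.0 <= risk < 9.0
        return "high"
    return "normal"        # assumed label for everything below
```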
For a shareable HTML report:
hotspots analyze . --mode snapshot --format html --output report.html
Explain Mode: Why Is This Function Risky?
Once you have the list, you want to know why something ranked high — and what to do about it. Pass --explain-patterns and Hotspots annotates each function with antipattern labels:
hotspots analyze . --mode snapshot --format json --explain-patterns > snapshot.json
In v1.2.0, Hotspots detects two tiers of patterns:
Tier 1 — Structural:
complex_branching, deeply_nested, exit_heavy, long_function, god_function
Tier 2 — Relational & Temporal:
hub_function, cyclic_hub, middle_man, neighbor_risk, stale_complex, churn_magnet, shotgun_target, volatile_god
A function tagged god_function + cyclic_hub is both monolithic and at the center of a dependency cycle — a very different refactoring situation than one tagged exit_heavy + long_function. The labels make the action obvious.
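To sketch how you might act on the JSON output, assume a hypothetical schema where each entry carries `name`, `file`, `risk`, and a `patterns` list; the real field names may differ:

```python
import json

# Hypothetical snapshot shape -- the actual schema may differ.
snapshot = json.loads("""
[
  {"name": "processPlanUpgrade", "file": "src/api/billing.ts", "risk": 12.4,
   "patterns": ["god_function", "cyclic_hub"]},
  {"name": "applySchema", "file": "src/db/migrations.ts", "risk": 8.1,
   "patterns": ["long_function", "exit_heavy"]}
]
""")

# Surface functions whose labels call for a structural extraction
# (monolithic and central to a cycle) rather than a local cleanup.
structural_refactors = [
    f["name"] for f in snapshot
    if {"god_function", "cyclic_hub"} <= set(f["patterns"])
]
print(structural_refactors)  # -> ['processPlanUpgrade']
```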
CI Policy Checks: Stop Regressions Before They Merge
Identifying hotspots is useful. Preventing new ones from landing is better. Hotspots has a delta mode that compares the current branch against the baseline and applies configurable policy checks:
hotspots analyze . --mode delta --policy
Policies you can configure:
- Critical Introduction — fail if a new function lands in the critical band
- Excessive Regression — fail if a function’s risk score jumps by a large delta
- Rapid Growth — flag unusually fast complexity growth
- Watch/Attention — warn when functions approach thresholds
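The first two policies could be evaluated along these lines. This is a minimal sketch of a delta gate with illustrative thresholds, not the CLI's internals or its default values:

```python
CRITICAL = 9.0    # illustrative critical-band threshold
MAX_DELTA = 2.0   # illustrative max allowed risk increase

def policy_violations(baseline: dict, current: dict) -> list:
    """Compare two {function_name: risk_score} snapshots and
    return (name, policy) pairs for blocking violations."""
    violations = []
    for name, risk in current.items():
        old = baseline.get(name)
        if old is None and risk >= CRITICAL:
            # Critical Introduction: a new function lands in the critical band.
            violations.append((name, "critical_introduction"))
        elif old is not None and risk - old > MAX_DELTA:
            # Excessive Regression: an existing function's score jumps.
            violations.append((name, "excessive_regression"))
    return violations
```

In a CI wrapper, a non-empty list would translate to a non-zero exit code, which is exactly what fails the PR.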
Add it to CI and complexity regressions fail the PR before review. Start it in warn-only mode, get the team used to the signal, then flip to blocking when you’re ready.
Here’s a minimal GitHub Actions step:
- name: Hotspots policy check
  run: hotspots analyze . --mode delta --policy
Exit code 1 if any blocking policy fails. Zero if clean.
hotspots.dev — Automated OSS Analysis
Alongside the CLI, I’ve been building hotspots.dev — a blog that runs automated Hotspots analyses on trending open-source repos every night.
The pipeline:
- A GitHub Actions crawl job selects a fresh trending repo
- Hotspots analyzes it and extracts the top functions and antipatterns
- An AI draft gets generated and opened as a PR for review
- Once merged, it deploys automatically
It’s a real-world showcase of what the tool surfaces in popular codebases — eslint, Flowise, and more. The HTML reports are hosted at reports.hotspots.dev so you can drill into the full ranked analysis for any repo.
I find it genuinely interesting to run Hotspots on codebases I use every day and see where the structural risk actually lives. It’s rarely where you’d guess.
Try It
# Install
curl -fsSL https://raw.githubusercontent.com/Stephen-Collins-tech/hotspots/main/install.sh | sh
# Snapshot your current repo
hotspots analyze . --mode snapshot --format text
# Full JSON with pattern labels
hotspots analyze . --mode snapshot --format json --explain-patterns > snapshot.json
# CI policy check
hotspots analyze . --mode delta --policy
- 📖 Docs: docs.hotspots.dev
- 🔬 Live analyses: hotspots.dev
- 🦀 Source: github.com/Stephen-Collins-tech/hotspots
It’s MIT licensed, written in Rust, and works on any language — since it operates on git history and AST-level metrics rather than language-specific rules.
If you’ve ever looked at a PR and thought “this feels risky but I can’t explain why” — run Hotspots. It’ll tell you why.
TL;DR:
Complexity metrics alone miss the point. Hotspots combines structural analysis with real git activity to surface the functions that are both hard to change and actively being changed. It’s a Rust CLI, it works on any language, and it can gate PRs in CI. Try it.