March 21, 2026

Why measure AI-generated code?

AI-generated code is everywhere. But do you really know what share of your codebase was written by Claude Code, Copilot, or Cursor? Here's why this measurement is becoming essential.

By Logain

AI-generated code is exploding. Claude Code, GitHub Copilot, Cursor — these tools have become permanent fixtures in development team workflows. But one question often goes unanswered: what is the actual share of AI in your codebase?

The visibility problem

Most teams have an intuition: "we use a lot of AI". But intuition is not measurement. Without concrete data, it's hard to answer perfectly legitimate questions:

  • Productivity: Did AI actually make us faster this quarter?
  • Quality: Do AI commits have more or fewer bugs than human commits?
  • Compliance: Our contract states that delivered code is "human-written". Is that true?
  • Onboarding: Do new developers understand AI-generated code as well as the rest?

Without measurement, these questions remain opinion debates.

Why now?

Three reasons make 2026 the right time to put this measurement in place.

1. Tools leave identifiable traces

Claude Code, Copilot, and Cursor each leave distinctive patterns in Git commits: commit messages, co-author trailers, metadata. These traces can be detected reliably and non-intrusively.
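As a minimal sketch of this idea, the snippet below matches commit messages against signature patterns. The exact strings each tool writes vary by tool and version, so the pattern list here is an assumption for illustration, not a definitive catalogue:

```python
import re

# Illustrative signature patterns. These are assumptions: the exact
# markers each tool leaves differ across versions and configurations.
SIGNATURES = {
    "claude-code": [
        r"Co-Authored-By: Claude",        # co-author trailer
        r"Generated with .*Claude Code",  # footer line
    ],
    "copilot": [
        r"Co-authored-by: .*Copilot",
    ],
    "cursor": [
        r"Co-authored-by: Cursor",
    ],
}

def attribute_commit(message: str) -> str:
    """Return the first tool whose signature matches, else 'human'."""
    for tool, patterns in SIGNATURES.items():
        if any(re.search(p, message, re.IGNORECASE) for p in patterns):
            return tool
    return "human"
```

Because only the commit message is inspected, this kind of check never needs to read the source code itself.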

2. The AI proportion exceeds 30% in many teams

When AI writes more than a third of the code, its impact on velocity, technical debt and team culture becomes structural. You need metrics to manage that impact.

3. Stakeholders are asking the question

Legal, compliance, enterprise clients — more and more external actors are asking for guarantees about code origin. A data-based answer is far more solid than an estimate.

What Logain measures

Logain integrates as a GitHub App and analyzes each commit:

  • Tool attribution: Claude Code vs Copilot vs Cursor vs human
  • Per-repo ratio: percentage of AI commits over a period
  • Time trend: week-by-week evolution
  • Stats by author: who uses what in the team

All this without modifying your workflow, without accessing source code — only commit metadata.
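The per-repo ratio and the week-by-week trend can be sketched from nothing but attributed commit metadata. The commit list and tool names below are hypothetical sample data, assuming each commit has already been attributed as in the previous section:

```python
from collections import Counter
from datetime import date

# Hypothetical pre-attributed metadata: (commit date, attributed tool).
commits = [
    (date(2026, 3, 2), "claude-code"),
    (date(2026, 3, 3), "human"),
    (date(2026, 3, 9), "copilot"),
    (date(2026, 3, 10), "human"),
    (date(2026, 3, 11), "claude-code"),
]

def ai_ratio(commits):
    """Share of commits attributed to any AI tool."""
    ai = sum(1 for _, tool in commits if tool != "human")
    return ai / len(commits)

def weekly_trend(commits):
    """AI ratio per ISO week: the basis of a time trend."""
    total = Counter()
    ai = Counter()
    for d, tool in commits:
        week = d.isocalendar()[:2]  # (year, ISO week number)
        total[week] += 1
        if tool != "human":
            ai[week] += 1
    return {w: ai[w] / n for w, n in sorted(total.items())}

print(ai_ratio(commits))  # 0.6
```

The same grouping extended with a per-author key would yield the by-author statistics.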

Measurement as a trust lever

Measuring AI-generated code is not an act of distrust. It's an act of maturity. Teams that instrument this data are not trying to limit AI usage — they're trying to understand it and make the most of it.

The next time your CTO asks "how much AI is in the code?", you'll be able to answer with a number.


Logain is an open-source GitHub App to measure AI tool usage across your repositories. Try it for free.