
“Tokenmaxxing” is making developers less productive than they think


There’s an old rule in management: what you measure matters, because you tend to get more of whatever you measure.

Software engineers have discussed productivity metrics for decades, starting with lines of code. But as the new generation of AI coding agents delivers more code than ever before, what their managers should measure is becoming less clear.

Huge token budgets — essentially the amount of AI processing power a developer is authorized to consume — have become a badge of honor among Silicon Valley developers, but that’s a very strange way to think about throughput. Measuring process inputs makes no sense when you supposedly care more about the outputs. This might make sense if you’re trying to encourage more AI adoption (or sell tokens), but not if you’re trying to become more efficient.

Consider the evidence from a new class of companies working in the field of “developer productivity visibility.” They found that developers using tools like Claude Code, Cursor, and Codex are getting far more of their AI-generated code accepted than before. But they also found that engineers had to go back and rework that accepted code more often than before, undercutting claims of increased productivity.

Alex Circei, CEO and founder of Waydev, is building an intelligence layer to track these dynamics; his company works with 50 clients that together employ over 10,000 software engineers. (Circei has contributed to TechCrunch in the past, but this reporter had never spoken with him before.)

He says engineering managers see code acceptance rates of 80% to 90% — the share of AI-generated code that developers approve and keep — but they miss the rework that happens when engineers have to revise that code in the following weeks, bringing the real-world acceptance rate down to 10% to 30% of the code generated.
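As a rough illustration, the gap between the headline and real-world figures is simple arithmetic once post-acceptance rework is counted. The formula and the numbers below are this editor's own sketch, not Waydev's actual methodology:

```python
# Hypothetical sketch: headline acceptance vs. effective acceptance once
# code reworked in the following weeks is subtracted. All numbers invented.

def effective_acceptance(lines_generated: int,
                         lines_initially_accepted: int,
                         lines_reworked_later: int) -> tuple[float, float]:
    """Return (headline rate, effective rate) as fractions of generated code."""
    headline = lines_initially_accepted / lines_generated
    surviving = lines_initially_accepted - lines_reworked_later
    effective = surviving / lines_generated
    return headline, effective

headline, effective = effective_acceptance(
    lines_generated=10_000,          # AI-generated lines offered
    lines_initially_accepted=8_500,  # an 85% headline acceptance rate
    lines_reworked_later=6_500,      # later rewritten or deleted
)
print(f"headline: {headline:.0%}, effective: {effective:.0%}")
# prints "headline: 85%, effective: 20%" -- inside the 10-30% range cited
```

With made-up but plausible inputs, an 85% headline rate collapses to 20% once most of the accepted code is later rewritten, which is the dynamic the analytics firms say managers miss.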


The rise of AI coding tools has led Waydev, founded in 2017 to provide analytics to developers, to completely rework its platform over the past six months to address the proliferation of agentic coding tools. The company is now launching tools that track metadata created by AI agents and provide analytics on the quality and cost of their code, giving engineering managers more insight into AI adoption and effectiveness.


While analytics companies have an incentive to highlight the problems they find, evidence is mounting that large organizations are still figuring out how to use AI tools efficiently. Big companies have taken note: Atlassian acquired DX, another engineering intelligence startup, for $1 billion last year to help its customers understand the return on investment of coding agents.

Data from across the industry tells a consistent story: more code is written, but a disproportionate amount of it is not maintained.

Getclear, another company in this field, published a report in January finding that AI tools increased productivity, but its data also showed that “regular AI users averaged a 9.4 times higher code iteration rate than their non-AI counterparts” — more than double the productivity gains provided by the tools.

Faros AI, an engineering analytics platform, drew on two years of customer data for its March 2026 report. The finding: code churn — lines of code deleted versus added — increased by 861% with high AI adoption.

Jellyfish, which describes itself as an end-to-end AI engineering intelligence platform, collected data on 7,548 engineers in Q1 2026. The company found that engineers with the largest token budgets produced the most pull requests (proposed changes to a shared codebase), but the productivity gains did not scale: they achieved twice the throughput at ten times the token cost. In other words, tokens buy volume, not value.


These kinds of statistics ring true when you talk to developers, who find that code review and technical debt pile up even as they enjoy the freedom of the new tools. One common finding is a split between senior and junior engineers, with the latter accepting a much larger share of AI-generated code and dealing with more rewriting as a result.

However, even as developers work to understand exactly what their agents are up to, few expect to go back any time soon.

“This is a new era of software development, and you have to adapt, and you are forced to adapt as a company,” Circei told TechCrunch. “It’s not like it’s going to be a cycle that will pass.”

