The release of a single Python library update has ignited one of the most consequential legal and ethical debates the open source software community has faced in years. At its center is a deceptively simple question: when an AI model generates an entirely new codebase inspired by an existing project, does the resulting work inherit the legal obligations of the original?
The controversy crystallized with the publication of chardet version 7.0.0, an overhaul of a widely used Python library for automatic character encoding detection. Originally authored by developer Mark Pilgrim and released under the GNU Lesser General Public License (LGPL), the library has long been maintained by Dan Blanchard, who published the new version under the permissive MIT license.
Rather than undertaking a conventional manual rewrite, Blanchard turned to Claude Code, Anthropic's AI-powered coding assistant. The results were striking. According to Blanchard, the overhaul was completed in a matter of days, a task that a manual rewrite of comparable scope could plausibly have taken months.

Pilgrim did not accept that characterization without challenge. Resurfacing on GitHub, he contended that Blanchard's extensive familiarity with the original codebase disqualifies the new version from being considered an independent work. Traditional "clean room" reverse engineering requires a strict firewall between those who analyze existing code and those who write the replacement — a separation that was never established here. As Pilgrim argued directly:
"Their claim that it is a 'complete rewrite' is irrelevant, since they had ample exposure to the originally licensed code (i.e., this is not a 'clean room' implementation). Adding a fancy code generator into the mix does not somehow grant them any additional rights. I respectfully insist that they revert the project to its original license."
Blanchard's counterargument rests on the premise that the standards governing human clean room implementations should not be applied wholesale to AI-generated code. He acknowledges having had years of intimate exposure to the original code as its maintainer, but argues that it was Claude, not he, that authored the replacement, and that his familiarity with the old implementation therefore never flowed into the new one.
Blanchard summarized his position plainly in the GitHub thread:
"No file in the 7.0.0 codebase structurally resembles any file from any prior release. This is not a case of 'rewrote most of it but carried some files forward.' Nothing was carried forward."
His described methodology was deliberate. Blanchard began by directing Claude to generate a fresh implementation rather than to transform the existing source, then validated the output against the library's established behavior before release.
Yet the case is far from clear-cut, and several complicating factors deserve serious consideration. First, Claude explicitly relied on some metadata files from previous versions of chardet, introducing a tangible point of direct continuity between the old and new codebases. Second, and more philosophically troubling, Claude's underlying models are trained on vast corpora of publicly available code, corpora that almost certainly include chardet itself. In that sense, the model had already ingested the very LGPL-licensed code it was asked to reimplement.
The human dimension adds yet another layer of complexity. Blanchard was not a passive observer of the AI's output: by his own account, he steered the process throughout, reviewing, testing, and correcting the generated code along the way.
Prominent voices across the open source ecosystem have weighed in with sharply divergent views. Free Software Foundation Executive Director Zoë Kooyman drew a firm line:
"There is nothing 'clean' about a Large Language Model which has ingested the code it is being asked to reimplement."
Open source developer Armin Ronacher offered a contrasting perspective, pushing back against philosophical arguments that conflate behavioral similarity with legal derivation:
"If you throw away all code and start from scratch, even if the end result behaves the same, it's a new ship."
The legal terrain surrounding AI-generated software remains largely unsettled at the institutional level. Courts have ruled that works generated by AI without human authorship are not themselves eligible for copyright protection, but the harder questions (whether AI output can constitute a derivative work of its training data, and what obligations attach when that training data is copyleft-licensed) have yet to be answered definitively.
Beyond the immediate legal dispute, the broader implications are what many observers find most significant. If AI tools can enable rapid, low-effort rewrites of licensed open source projects — effectively resetting license obligations in days rather than years — the foundational economics of the open source model are at risk. Italian developer Salvatore "antirez" Sanfilippo framed the shift in structural terms:
"Now the process of rewriting is so simple to do, and many people are disturbed by this. There is a more fundamental truth here: the nature of software changed; the reimplementations under different licenses are just an instance of how such nature was transformed forever. Instead of combating each manifestation of automatic programming, I believe it is better to build a new mental model and adapt."
Open source advocate Bruce Perens reached for stronger language to convey the magnitude of the moment:
"I'm breaking the glass and pulling the fire alarm! The entire economics of software development are dead, gone, over, kaput! … We have been there before, for example when the printing press happened and resulted in copyright law, when the scientific method proliferated and suddenly there was a logical structure for the accumulation of knowledge. I think this one is just as large."
The chardet episode may ultimately be remembered less for its resolution than for the questions it forced into the open. Legal frameworks built around human authorship and clean room separation were not designed with AI intermediaries in mind. As AI coding tools become faster, more capable, and more widely adopted, the gap between existing intellectual property law and the realities of modern software development will only continue to widen — making this dispute not an isolated incident, but a preview of far larger conflicts to come.




