A federal court has dealt a significant blow to Elon Musk's artificial intelligence venture, xAI, after US District Judge Jesus Bernal denied the company's motion for a preliminary injunction that would have suspended enforcement of a California law mandating public disclosure of AI training data. The ruling, issued on Wednesday, allows California's Assembly Bill 2013 (AB 2013) to remain in full effect while litigation continues, forcing xAI into compliance despite its strenuous objections.
The California statute, which took effect in January, imposes substantial transparency obligations on AI developers whose models are accessible within the state. Under its provisions, companies must clearly disclose which dataset sources were used to train their models, when that data was collected, and whether collection is ongoing. Developers must also indicate whether their datasets contain materials protected by copyright, trademark, or patent, whether training data was licensed or purchased, and whether any personal information was included. The law further enables consumers to evaluate the proportion of synthetic data used — a factor many experts consider a meaningful proxy for model quality.
xAI's legal challenge rested on the assertion that complying with these requirements would effectively strip the company of its most valuable competitive assets. The firm contended that its dataset sources, dataset sizes, and data-cleaning methodologies all constituted closely guarded trade secrets protected under the Fifth Amendment. Allowing enforcement, xAI argued, could be "economically devastating" to the company, reducing "the value of xAI's trade secrets to zero."

The company painted a vivid picture of competitive harm in its complaint. "If competitors could see the sources of all of xAI's datasets or even the size of its datasets, competitors could evaluate both what data xAI has and how much they lack," xAI argued. The company speculated further: if OpenAI were to discover that xAI was using a critical dataset that OpenAI was not, OpenAI would almost certainly acquire that dataset to train its own model — and vice versa. xAI also maintained that these disclosures "cannot possibly be helpful to consumers" and posed a systemic risk to the broader AI industry.
Judge Bernal was unconvinced. He found that xAI's arguments were characterized by vagueness rather than substantive evidence of harm, noting that the company offered only "a variety of general allegations about the importance of datasets in developing AI models and why they are kept secret." Bernal described xAI as trading in "frequent abstractions and hypotheticals" rather than presenting concrete demonstrations of the competitive injury it claimed to face.
The court acknowledged the theoretical possibility that training data could qualify for trade secret protection — but drew a sharp distinction between hypothetical eligibility and demonstrated fact. "It is not lost on the Court the important role of datasets in AI training and development, and that, hypothetically, datasets and details about them could be trade secrets," Bernal wrote. However, xAI "has not alleged that it actually uses datasets that are unique, that it has meaningfully larger or smaller datasets than competitors, or that it cleans its datasets in unique ways." On that basis, the judge concluded that xAI was unlikely to succeed on the merits of its Fifth Amendment claim.
xAI also advanced a First Amendment argument, contending that California was leveraging AB 2013 to influence the outputs of its chatbot, Grok, by effectively compelling speech about data sources as a means of targeting what the state deemed biased training material. Over the past year, Grok has attracted considerable public scrutiny for generating antisemitic content, nonconsensual intimate imagery, and child sexual abuse materials — controversies that prompted a formal probe by California's attorney general. xAI argued this regulatory context revealed the law's true intent.
Bernal rejected this interpretation decisively. "Nothing in the language of the statute suggests that California is attempting to influence Plaintiff's models' outputs by requiring dataset disclosure," he wrote. He further noted that "the statute does not functionally ask Plaintiff to share its opinions on the role of certain datasets in AI model development or make ideological statements about the utility of various datasets or cleaning methods." The judge added: "No part of the statute indicates any plan to regulate or censor models based on the datasets with which they are developed and trained."
Perhaps the most pointed element of the ruling addressed xAI's claim that consumers have no meaningful interest in training data disclosures. The judge dismissed this position outright. "It strains credulity to essentially suggest that no consumer is capable of making a useful evaluation of Plaintiff's AI models by reviewing information about the datasets used to train them and that therefore there is no substantial government interest advanced by this disclosure statute," Bernal wrote.
The court offered a practical illustration of legitimate consumer interest: individuals may wish to know whether specific medical or scientific data was used to train a model in order to assess its reliability for their particular purposes. More broadly, Bernal framed the law as a market transparency mechanism. "In the marketplace of AI models, AB 2013 requires AI model developers to provide information about training datasets, thereby giving the public information necessary to determine whether they will use — or rely on information produced by — Plaintiff's model relative to the other options on the market," the judge wrote.
The litigation is far from over, but xAI faces a considerably steeper path forward. To succeed, the company will need to produce concrete evidence demonstrating that its datasets or cleaning methodologies are sufficiently distinctive to warrant trade secret protection. It will also need to strengthen its arguments that consumers derive no benefit from the mandated disclosures, and to show that California failed to consider less burdensome alternatives to achieve its stated transparency objectives.
One potential avenue for xAI involves challenging the law's scope as applied to individual Grok licenses, arguing that the statute's language is vague enough to potentially expose customers' training data. However, Bernal was clear that xAI "must actually face such a conundrum — rather than raising an abstract possible issue among AI systems developers — for the Court to make a determination on this issue."
The ruling carries additional strategic weight given xAI's ongoing legal battles with OpenAI. Last month, a separate judge dismissed one of Musk's OpenAI lawsuits, ruling that Musk had no proof the company had stolen trade secrets simply by hiring former xAI staff. Compliance with AB 2013 could now require Musk to share data-sourcing information that he has been actively seeking to keep away from OpenAI's view. xAI did not respond to requests for comment on the ruling.