WeTransfer has rolled back a planned update to its Terms of Service after widespread public criticism. The file transfer service had quietly introduced a clause that would have granted it broad and perpetual rights over all uploaded content – including the ability to use, modify, sublicense, and distribute files for commercial purposes and AI training. Users would not have been compensated for this usage and could have faced legal liability even if they were not the original rights holders. The story was reported by G.Business, citing heise online.

The clause had already gone into effect for new users in early July 2025 and was set to apply to existing users starting August 8. The wording gave WeTransfer a perpetual, worldwide, royalty-free, transferable, and sublicensable license for all intellectual property rights associated with transferred content. That included the right to publish, broadcast, alter, or resell the content — potentially in original or modified form.

The most alarming part: the clause explicitly permitted the use of files for training machine learning models. In the current legal and ethical climate surrounding AI and data privacy in Europe, this raised red flags for experts and users alike. Legal commentators warned that uploading files to WeTransfer under these terms could expose users to copyright infringement claims — even when acting in good faith.

Under growing public pressure, WeTransfer issued a public statement and reversed course:

“We do not use machine learning or any form of AI to process content shared via WeTransfer,”
the company stated. “Content is also not sold or shared with third parties.”

The revised clause is now shorter and more restrained, granting only a royalty-free license to operate and improve the service — a formulation that remains legally vague but avoids the explicit references to commercialization and AI development. The updated terms no longer mention sublicensing or public redistribution.

While the PR damage may be contained, the episode underscores a broader issue: tech platforms continue to expand their rights over user data, often without meaningful consent or compensation. In this case, only public outcry prevented a quiet but significant expansion of corporate power over user-generated content.

The retraction shows that transparency, legal literacy, and digital activism can still influence large platforms — especially when fundamental rights and AI ethics are at stake.
