Apertus was developed with due consideration to Swiss data protection laws, Swiss copyright laws, and the transparency obligations under the EU AI Act. Particular attention has been paid to data integrity and ethical standards: the training corpus builds only on data which is publicly available. It is filtered to respect machine-readable opt-out requests from websites, even retroactively, and to remove personal data and other undesired content before training begins.
We probably won't get better than this, but it sounds like it's still trained on scraped data unless you explicitly opt out, including anything that may be mirrored by third parties that don't opt out. Also, they can remove data from the training material retroactively... but presumably they won't be retraining the model from scratch, which means the removed data is still baked into the weights, and the official weights will keep a potential advantage over models trained later on the cleaned-up training data.
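(For context, "machine-readable opt-out" presumably means robots.txt-style directives aimed at their crawler; the announcement doesn't spell out the exact format, so the crawler name and mechanism in this sketch are guesses:)

```python
# Minimal sketch of a robots.txt-style opt-out check. The user agent name
# "Apertus-Crawler" is a placeholder, not a confirmed value, and the real
# pipeline may honor additional opt-out signals beyond robots.txt.
from urllib import robotparser

def is_opted_out(site_url: str, user_agent: str = "Apertus-Crawler") -> bool:
    """Return True if the site's robots.txt disallows this agent for the given URL."""
    rp = robotparser.RobotFileParser()
    rp.set_url(site_url.rstrip("/") + "/robots.txt")
    rp.read()  # fetches and parses robots.txt over the network
    return not rp.can_fetch(user_agent, site_url)

if __name__ == "__main__":
    print(is_opted_out("https://example.com"))
```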
From the license:
SNAI will regularly provide a file with hash values for download which you can apply as an output filter to your use of our Apertus LLM. The file reflects data protection deletion requests which have been addressed to SNAI as the developer of the Apertus LLM. It allows you to remove Personal Data contained in the model output.
Oof, so they're basically passing data protection deletion requests on to the users and telling each of them to account for the deletions themselves.
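If you actually wanted to honor that, the license doesn't say how the hashes are computed or at what granularity, so all the specifics below are guesswork, but presumably it boils down to hashing spans of the model output and redacting anything that matches the provided file:

```python
# Minimal sketch of a hash-based output filter, assuming the file contains
# one SHA-256 hex digest per line, computed over lowercased word spans.
# The actual hashing scheme, normalization, and span granularity are not
# specified in the license, so treat all of that as assumptions.
import hashlib

def load_blocklist(path: str) -> set[str]:
    """Load one hex digest per line from the provided hash file."""
    with open(path) as f:
        return {line.strip().lower() for line in f if line.strip()}

def redact(text: str, blocklist: set[str], max_ngram: int = 5) -> str:
    """Replace any word n-gram whose normalized hash appears in the blocklist."""
    words = text.split()
    out, i = [], 0
    while i < len(words):
        matched = False
        # Try the longest candidate span first.
        for n in range(min(max_ngram, len(words) - i), 0, -1):
            span = " ".join(words[i:i + n]).lower()
            digest = hashlib.sha256(span.encode("utf-8")).hexdigest()
            if digest in blocklist:
                out.append("[REDACTED]")
                i += n
                matched = True
                break
        if not matched:
            out.append(words[i])
            i += 1
    return " ".join(out)
```

Which also means every downstream deployer has to keep pulling the latest hash file and run something like this on every response.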
They also claim "open data", but I'm having trouble finding the actual training data; all I can find are the "Training data reconstruction scripts"...