Many apps and platforms change their terms without fanfare, sometimes as part of an update. Those that do mention it seldom indicate what was changed; they might say ‘we’ve changed our policy’ and perhaps give a very general description (e.g., ‘we’ve changed our policy about collecting and disclosing personal information’) — and leave it to the user to compare the previous and new versions to figure out what’s new. And while you’re trying to figure it out, they’re using your information according to the new, unilaterally declared terms.
Is that fair or ethical?
Current consent requirements in privacy laws — and the lack of regulation over artificial intelligence — allow it.
Canadians hoping that the gaps will be remedied by Bill C-27 — which has been promoted as an improvement over PIPEDA — will be disappointed. The new law won’t require any clearer conduct or provide any more granular consent; rather, it creates even more opportunities for organizations to collect, use, and disclose personal information without notice or consent. And Part 3, the Artificial Intelligence and Data Act, also offers little protection.
In the meantime, the UK Information Commissioner’s Office immediately raised concerns about LinkedIn’s approach to training generative AI models with information relating to its UK users; and the company promptly suspended its model training pending further engagement with the ICO.
If you’re not keen on LinkedIn using your information to train its artificial intelligence, you can turn the setting to “OFF” by visiting https://www.linkedin.com/mypreferences/d/settings/data-for-ai-improvement.