The DSA: #02 Why we need algorithmic transparency
Much of the damage done by online content such as hate speech, defamation and disinformation stems from its viral spread and amplification on and by big social media platforms, whose business models are built on maximising attention and which lack transparency and accountability.
Today, most of the big social media networks use automated tools to recommend content or products (like YouTube’s “Next Up” or Facebook’s “Groups you should join”), and to rank, curate and moderate posts. These algorithms are likely to become even more widespread. However, automated content governance tools such as recommender systems are problematic because:
a) they have been found to drive divisiveness and polarisation, and to grab users’ attention more effectively, thereby generating more advertising revenue. For instance, according to an internal Facebook report from 2016, 64 percent of people who joined an extremist group on Facebook did so only because the company’s “Groups You Should Join” and “Discover” algorithms recommended it to them.
b) platforms collect and use data on a massive scale and build very detailed profiles of people to decide which content to display, based on what the platforms’ algorithms think will best retain their attention. For example, people who are interested in information about vaccines get to see anti-vaccine content, but also content against 5G or COVID measures. The more people click, the more extreme the recommended content becomes (the toy sketch below illustrates this feedback loop). This undermines our trust in the information we see, makes communication more difficult and damages our democracies. Everyone has a right to freedom of expression, but exercising it meaningfully is nearly impossible if we all live in our own filter bubbles.
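To make the feedback loop described in (a) and (b) concrete, here is a minimal, entirely hypothetical Python sketch. It is not based on any platform’s actual code: the catalogue, the “extremeness” scores, the engagement model that slightly favours more provocative content and all parameters are invented for illustration. It only shows how ranking purely by predicted engagement, combined with click feedback into the user profile, can gradually pull a user towards ever more extreme recommendations.

```python
import random

random.seed(42)

# Toy catalogue: each item has an "extremeness" score between 0 (neutral)
# and 1 (highly polarising). Purely illustrative, not real platform data.
CATALOGUE = [{"id": i, "extremeness": i / 99} for i in range(100)]


def predicted_engagement(item, user):
    """Toy engagement model: items slightly MORE extreme than the user's
    current profile are predicted to get the most clicks (an assumption
    standing in for 'provocative content grabs attention better')."""
    target = min(1.0, user["preference"] + 0.1)
    return 1.0 - abs(item["extremeness"] - target)


def recommend(user, k=5):
    """Rank the whole catalogue purely by predicted engagement, keep the top k."""
    ranked = sorted(CATALOGUE, key=lambda item: predicted_engagement(item, user), reverse=True)
    return ranked[:k]


def simulate(rounds=30):
    user = {"preference": 0.1}  # the user starts out with fairly neutral tastes
    for step in range(rounds):
        recs = recommend(user)
        clicked = random.choice(recs)  # the user clicks one of the recommendations
        # Click feedback: the profile drifts towards whatever was shown and clicked.
        user["preference"] = 0.7 * user["preference"] + 0.3 * clicked["extremeness"]
        if step % 10 == 0:
            print(f"round {step:2d}: profile extremeness ≈ {user['preference']:.2f}")
    print(f"after {rounds} rounds: profile extremeness ≈ {user['preference']:.2f}")


if __name__ == "__main__":
    simulate()
```

Running the sketch, the profile drifts from roughly 0.1 towards the extreme end of the scale within a few dozen rounds, even though no single step looks dramatic. That is the point: the harm is a property of the optimisation loop, not of any individual recommendation.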
My internet of the future:
Algorithms and other decision-making processes, based on values or other criteria, that online platforms use to curate and recommend content are transparent. This enables analysis of their impact on public discourse and on democracy in general.
Protection for everyone is achieved via a differentiated transparency model:
a) non-sensitive, anonymised data can be shared in public datasets via APIs (a sketch of this differentiated access model follows this list).
b) sensitive data can be shared through partnerships with relevant institutions (research institutes, universities), under non-disclosure agreements to safeguard confidentiality.
c) algorithms are available for audit by national regulators.
d) transparency reports for users are meaningful, standardised and publicly available. They contain an understandable explanation of how algorithms are used on the platform and what impact they have on users.
e) users have more control to reduce the filter-bubble effect: we should empower users to make recommender systems responsive to their own interests and needs, not to what platforms think they should see (stay tuned for my upcoming article on interoperability).
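As a rough illustration of points (a) and (b), the Python sketch below contrasts the two access tiers: a public, aggregated and anonymised dataset on one side, and record-level, pseudonymised data for vetted institutions under a non-disclosure agreement on the other. All field names (user_id, topic, feature), the k-anonymity threshold and the access checks are assumptions made up for this sketch; they are not prescribed by the DSA or by any platform’s existing API.

```python
import hashlib
from collections import Counter
from typing import Iterable

# Hypothetical raw log entry, e.g.:
# {"user_id": "u123", "topic": "vaccines", "feature": "groups_you_should_join"}
RawLog = dict

K_ANONYMITY_THRESHOLD = 100  # illustrative: suppress groups smaller than this


def public_dataset(logs: Iterable[RawLog]) -> list[dict]:
    """Tier (a): aggregated, anonymised counts suitable for a public API.
    User identifiers are dropped entirely and small groups are suppressed,
    so rare combinations cannot be used to re-identify individuals."""
    counts = Counter((log["feature"], log["topic"]) for log in logs)
    return [
        {"feature": feature, "topic": topic, "recommendations": n}
        for (feature, topic), n in counts.items()
        if n >= K_ANONYMITY_THRESHOLD
    ]


def researcher_export(logs: Iterable[RawLog], vetted: bool, nda_signed: bool) -> list[RawLog]:
    """Tier (b): pseudonymised record-level data, released only to vetted
    institutions that have signed a non-disclosure agreement."""
    if not (vetted and nda_signed):
        raise PermissionError("record-level access requires vetting and a signed NDA")
    # Replace the user identifier with a non-reversible token; in practice a
    # salted or keyed hash would be needed to resist dictionary attacks.
    return [
        {**log, "user_id": hashlib.sha256(log["user_id"].encode()).hexdigest()[:12]}
        for log in logs
    ]
```

The point of the split is that openness and confidentiality are not mutually exclusive: what the public gets is coarse but verifiable, while the detail needed for rigorous research stays within contractual safeguards.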
Transparency on its own cannot solve these problems, but if it is truly effective and meets the criteria explained above, it can provide a strong basis for platform accountability, oversight by public authorities and redress mechanisms for affected groups and users.