Europe has to set priorities and boundaries
The targeted use of artificial intelligence can propel our economy forward. We ought to use it more in the optimisation of processes, for climate protection and sustainability, and at the same time distance ourselves from applications used to monitor, rate and manipulate people.
The European Commission has announced a “made in Europe” strategy for artificial intelligence for the first half of 2020, raising the question of which tasks machines will take on for us in the future. Many people complain that Europe has already lost the race for artificial intelligence (AI) to the USA and China. However, the more pertinent question is: where is this race heading?
China is rapidly becoming an Orwellian state of total surveillance in which citizens’ every move is monitored and undesired behaviour punished through social scoring. In the USA, everything is moving towards the unbounded power of a conglomerate oligopoly which generates ever more profit with ever fewer jobs. Industry’s power to innovate is weakened on the one hand by the giants buying up promising start-ups to destroy competition, and on the other by their amassing incredible volumes of data and using the resulting power to oust traditional companies in other sectors from the market. The European Union should by no means follow suit. Neither model is compatible with our understanding of democracy and a social market economy.
Europe must seek to prevail with its own values and goals. The challenges are clear to see: climate protection and sustainable management, the optimisation of decentralised energy supply systems, and sustainable mobility. In industrial production, a field in which European companies are still the leading innovators and must remain so, algorithms can be used to create longer-lasting products whilst using resources optimally and reducing industry’s carbon footprint through customised maintenance intervals. Modern electricity grids benefit from AI-generated servicing recommendations, threat detection, renewable energy control and demand management. The optimum coordination of supply and demand will make future mobility efficient and resource-friendly.
Targeted AI research can advance our economy in areas that other continents do not focus on. That has to be our objective. The EU already invests considerably less in AI than its competitors abroad. This funding has to be increased and used wisely, namely for climate protection, sustainable management, democracy, safeguarding basic rights, and social justice.
By the same token, that means we must distance ourselves from technologies which predominantly serve to monitor, rate and manipulate human behaviour. Anyone who followed the protests in Hong Kong will have noticed that people took to the streets only while wearing masks. Omnipresent facial recognition, as in China, turns democratic rights of freedom into a thing of the past. People in Europe still believe that images from video surveillance are used only to solve crimes. However, there is no freedom where an algorithm can scan millions of images in seconds to create a complete profile of a person’s movements and emotions, especially in combination with data from social networks.
The ‘predictive policing’ systems widely used in the USA also clearly contradict our understanding of justice. For instance, if poor districts are patrolled more regularly than rich areas, more crimes are registered there. Accordingly, the system then suggests that precisely those neighbourhoods be patrolled more often, even though the crimes committed on Wall Street probably cause far more damage to society as a whole than petty crime in the Bronx.
Technology always becomes dangerous when artificial intelligence is used to rate people. It is highly likely that seemingly harmless AI systems used in job application screening will prefer young, healthy, white men for high-powered positions, because the training datasets are pervaded by historical inequalities and discrimination against women, people of colour, LGBTI people and people with disabilities. As suggested by the Data Ethics Commission, all algorithmic systems ought to be assessed for legality and neutrality according to a risk model. Where a discriminatory bias cannot be rectified, such systems in breach of basic rights must not be permitted in Europe.
Europe has to resist the siren songs and cure-all promises of the AI industry and instead set its own priorities and boundaries. Only then can we protect our democracy and social market economy whilst also strengthening European industry.