Why this was a good AND bad week for digital sovereignty
Dear friends,
What a week! So much has happened in digital policy over the last five days that even experts have found it difficult to keep track. That’s why, today, I want to try to connect the dots between the key political moments.
Part 1: The Franco-German Summit on Digital Sovereignty
On the initiative of Germany and France, representatives from all 27 EU member states met in Berlin on 18 November 2025 to discuss how Europe can regain control of its digital future. I was in Berlin with other MEPs/MPs, think tanks, NGOs and industry representatives. All the panels and discussions made one thing clear:
The need for genuine sovereignty and resilience in digital technologies is no longer just a demand from a few tech nerds (myself included). It has reached the highest political level. And that is really good news.
However, not everyone draws the same conclusions from this analysis. Just one example: on the question of building Europe’s own cloud infrastructure, President Macron specifically emphasised the role of European companies, which could use the power of the largest single market to assert themselves against the dominant US giants. One way to achieve this would be to change EU procurement rules so that digital independence becomes a competitive advantage in Europe.
Merz and the German government, by contrast, are much more hesitant. Merz lacks the courage to use rules and investment to finally boost the European technology industry, which includes a particularly large number of innovative companies from Germany. Instead, he prefers to toy with the idea of sticking EU labels on US digital products and then selling them as ‘European’. This sovereignty-washing would continue to pump billions in taxpayer money into the US tech industry, which pays very little tax in Europe, while at the same time unfairly disadvantaging European companies.
In the end, these and many other issues remained unresolved, and the joint final declaration, which had been leaked beforehand, was not adopted.
If you want to know more about the summit, here is my LinkedIn and Instagram post. Feel free to share!
Part 2: The Digital Omnibus – Simplification must not become a back door for deregulation in favour of Big Tech
The day after the summit, on Wednesday, the European Commission presented its Digital Omnibus. With a whole series of these ‘omnibuses’, the European Commission officially wants to simplify and harmonise European rules, but, as with climate policy, this supposed simplification ultimately amounts to nothing more than a weakening of strong European rules. These packages make Europe weaker, not stronger.
At the end of this email, I have attached a longer analysis of the Digital Omnibus. Here, I would like to highlight just one measure that illustrates where we currently stand: The European Commission wants AI companies to be legally allowed to train their models using the personal data of European users. Until now, users have been able to object to the processing of their data for these purposes. The EU Commission wants to remove this option.
This is nothing less than an infringement of the fundamental digital rights of all EU citizens.
The rationale behind this move is that it is intended to support European AI companies in particular and make the European market attractive for foreign investment. Yet when you talk to the European digital industry, the problem is not data protection or European rules, but above all a lack of investment and of long-term planning security. This omnibus therefore primarily benefits the US and Chinese tech giants, who will be able to extract data from Europe even more easily and thus become ever richer and more powerful.
The crucial question from Maria Ressa: ‘What are we building: a technology serving democracy, or a technology perfecting authoritarianism?’
As the digital world looked to Berlin, Nobel Peace Prize winner Maria Ressa raised this question at the EU Parliamentary Democracy Forum. Whether at the summit in Berlin or with the Digital Omnibus in Brussels, this choice of direction is exactly what it’s all about. Are we building technologies that ultimately strengthen our democracies, or are we giving the autocrats of this world ever better tools to make themselves and their modern royal courts richer and everyone else poorer and, above all, less free?
That is what is ultimately at stake in the question of digital sovereignty. I am hopeful that not only Macron, but also many other leaders in Europe, are clearly on the side of democracy here. I am also hopeful that a lot is happening in the non-governmental sector, for example with the launch of the European social media platform ‘Eurosky’ (also this week!). To come back to the beginning, that is really good.
However, I am also concerned that too many people, such as Friedrich Merz and parts of the EU Commission, still think that bowing down to Trump and watering down our European fundamental rights will lead to independence. The opposite is true. And the fact that this is still not clear to everyone is the really bad news.
After this very busy but mixed week, it is all the more clear to me that we must continue to fight. The sense of urgency has now arrived, thanks in part to pressure from you. Now it’s a matter of keeping up that pressure to ensure that the measures match the scale of the problem.
If you want to help, please share my videos on the summit: here is my LinkedIn and Instagram post.
And if anyone asks you over dark November coffee or Christmas dinner what we can do about our digital lack of freedom, I have written down my three solutions for you to read and share here.
With determined regards,
Yours, Alexandra Geese
My three measures for digital sovereignty in Europe:
- European social networks without surveillance and harmful algorithms – for genuine democratic dialogue and exchange.
- The rules for public procurement must be changed – European taxpayers’ money should be spent more on European companies that process our data securely, build expertise here and pay taxes in Europe.
- A law on the development of cloud and AI – The cloud is the digital backbone of our economy, but its current growth is strengthening the dominance of US hyperscalers, which cannot protect data from access by foreign governments. We must determine which clouds are secure and which sensitive data must be stored in them.
–
Appendix – My press release on the Digital Omnibus:
MEP Geese: Simplification must not become a back door for deregulation benefiting Big Tech
Brussels, 19 November 2025. Today, following the European Commission’s publication of its Digital Omnibus package, MEP Alexandra Geese warns that the Commission’s approach risks hollowing out Europe’s landmark digital-rights framework under the misleading label of “simplification”.
MEP Alexandra Geese highlights:
“Europe needs clarity and efficiency. But not at the price of deregulation that rewards Big Tech and erodes our digital sovereignty. Simplification cannot be allowed to mutate into a political shortcut for weakening hard-won rights.”
The concerns raised today mirror the broad opposition that emerged after the 6 November leak of the Digital Omnibus plans, which drew strong criticism from three political groups in the European Parliament and 127 civil-society organisations. Since then, leading academics have also questioned the legal foundation of the Commission’s simplification agenda, warning that several proposed changes may exceed the Commission’s mandate or contradict existing EU law.
MEP Geese stresses:
“The Digital Omnibus package risks eroding the strong digital framework that shields the digital life of Europeans. By placing the focus on innovation and “building value from data”, the EU Commission moves away from ensuring human-centric and trustworthy technology. This doesn’t just weaken Europe’s ability to steer its own digital future – it hands even more leverage to dominant foreign tech giants, undermining digital sovereignty and Europe’s role as a global leader in responsible technology. Our digital rules are built on European values: transparency, accountability, security and fundamental rights.”
MEP Geese concludes:
“Allowing AI systems to ingest highly sensitive personal data is a dangerous step backward. It effectively legitimises the practice of targeting ads at people searching for medical help – a group that is often vulnerable, stressed, and looking for trustworthy information. Far too many of these ads turn out to be scams or misleading offers. This shift risks eroding public trust and exposing people to real harm at the very moment they most need protection.”
RULES THAT BENEFIT BIG TECH – BUT NOT THE PEOPLE
The following propositions are of major concern:
- Personal data use for the training of AI systems: The Commission would legalise the uncontrolled commercial exploitation of our information for the training of AI systems. In practice, this means that personal and even sensitive information that a user types into a chatbot could be reused to train an AI model, even if the user has not given consent. Although such processing would still be subject to some safeguards, it is not clear that these will be effective in practice.
- Change to the definition of personal data in Art. 4 GDPR: It would become much easier for companies to create detailed profiles about people and share them in datasets without any real oversight, as long as the people are listed under pseudonymous identifiers (such as user IDs) and the company sharing the data cannot directly match those IDs to real names. But companies do not actually need names or birth dates to target people with personalised ads and content.
- A 16-month delay in applying the rules to high-risk AI systems: The Commission gives in to pressure from a small set of industry interests and compromises consumer rights for short-term commercial gains. This will undermine public confidence in the EU’s commitment to prioritising people over corporate actors.
- Allowing the use of very intimate, sensitive data in high-risk AI systems: This may give AI systems a wildcard to process such data with minimal transparency on how it is used for bias correction.
- Allowing the testing of high-risk AI systems in the real world: This may lead to violations of individuals’ fundamental rights without consequences.
- Exemptions for SMEs or mid-caps: The harm caused by AI systems does not depend on size. It is essential that providers of these systems are held to the rigorous standards that the co-legislators agreed.
Instead of reopening and weakening core digital laws, we now have the opportunity to safeguard the EU’s digital acquis and ensure the Digital Omnibus focuses on genuine clarifications that strengthen enforcement, legal certainty and coherence without lowering standards on fundamental rights, data protection or cybersecurity.
That means closing long-identified gaps. One welcome proposal from the Commission is therefore the suggestion to replace intrusive cookie banners with browser privacy signals or a ‘one-click’ solution. This is something the EU Parliament has called for repeatedly.
However, much more can be done: for example, confirming that dark patterns and opaque tracking cannot constitute valid consent, ensuring “data protection by design” in all systems and devices, and finally empowering data protection authorities with the resources they need. It also means improving cybersecurity coherence through streamlined reporting, stronger guidance and better-resourced standardisation bodies. These are the kinds of targeted improvements that reinforce Europe’s digital framework rather than dismantle it.