Overregulating AI will lead to start-ups “dying on the beach”
Governments worldwide are eyeing the opportunity offered by AI technologies, but concerns linger around regulation. The US, UK, China, and the European Union are all looking to position themselves at the forefront of AI, as the EU AI Act is finally green-lit.
Earlier this month, at the inaugural SIM Conference in Porto, Portugal, a panel of experts — moderated by European Parliament cabinet member Catarina Peyroteo Saltier — outlined the evolving regulatory landscape in front of a dimly lit room full of tech-oriented minds.
With the EU AI Act set to shape the future of enterprise technology on the continent, here are TI’s takeaways from the “Regulatory Impact on AI: Doing No Significant Harm” panel.
A divided vision of AI’s future
As is the case with many topics, the diverse political structure within the EU has led to ideological divides within the European Parliament, according to Kai Zenner, head of office and digital policy adviser for Member of European Parliament (MEP) Axel Voss.
“50% have indicated already that they are rather afraid of this new technology, all of its new possibilities and so on — making references to the social benefits scandal in the Netherlands,” he explained.
The scandal in question refers to the Dutch tax authority’s use of an algorithm to detect benefits fraud in 2013. It led to thousands of families, particularly those with lower incomes or from ethnic minorities, being unjustly penalised, resulting in poverty, suicides, and over a thousand children being taken into foster care.
“On the other side, 50% are really trying to foster AI development and saying we can use AI to make the world a better place and fight against climate change,” he said.
This divide reflects broader global debates on the balance between harnessing AI’s potential for good and mitigating its risks.
Zenner, who is also part of a ‘network of experts’ supporting the UN Secretary-General’s ‘High-Level Advisory Body on AI’, pointed out that the dynamic nature of AI creates further challenges for regulators: “When the commission was coming out with their original proposal for the AI Act, it was already outdated.”
“The commission was not really thinking [about ChatGPT and foundation models] when they prepared the AI Act in 2019/20,” he added. “In the European Union, there was a big push to finish the Act in time for the elections coming up. I think it would have been better if we took a little bit more time.”
This comes after the European Parliament gave the AI Act the final green light this week, making it the world’s first major law regulating AI.
National quests for innovation
European countries such as France, Spain, and Portugal have begun creating their own national AI strategies. However, according to Saltier, the EU lags behind in AI innovation compared with other regions.
When asked how national governments can address the EU’s shortcomings, Manuel Caldeira Cabral, former Portuguese minister of economy and current deputy of the Portuguese Assembly of the Republic, clarified which side of the fence he sits on.
He called on national governments to prioritise AI's opportunities, especially those related to enhancing public services and fostering competitive markets: "As a professor, I can't say to my students, 'You can't use AI. You have to work like it's the Middle Ages, writing it all out,'" he said.
“National strategies shouldn’t be about regulating to avoid the dangers of AI down the road; it has to be about the opportunities and how to use them to grow faster, implement within new areas, and create better services for the people,” he added.
He suggested that the EU take a positive outlook on AI, advocating for regulations that foster growth and innovation rather than stifle it with overbearing restrictions.
“The consensus about having the best possible legislation, better than the US or China, has led us to a situation where most of the data-intensive start-ups grow faster in the US or China than they manage to in the EU.”
He warned that if this continues, EU firms won't be able to keep up and will be bought up by rivals in the US or China. This, he added, could leave EU data in the hands of foreign businesses that Europeans would simply have to trust not to use it nefariously.
Regulatory Sandboxes
Zenner discussed how regulatory sandboxes could bridge the gap between regulation and innovation, explaining how they could “play a major role in enabling SMEs and start-ups to get compliant by entering a very close dialogue with the regulators and enforcers. I think that will help and give them a competitive edge.”
What is a regulatory sandbox?
According to the European Parliament, while there is no agreed definition, regulatory sandboxes generally refer to tools that allow businesses to experiment with innovative products, services, or business models under the supervision of a regulator for a limited period. This setup is intended to help companies innovate faster by reducing the usual regulatory hurdles while ensuring that consumer protection and system integrity are maintained.
Over recent years, the sandbox approach has gained traction across the EU as a means of helping regulators address emerging technologies such as AI and blockchain. Whilst predominantly used in the fintech sector, sandboxes have also emerged in other sectors like transport, energy, telecoms, and health to test innovations like autonomous cars, smart meters, 5G deployment, and predictive health technologies.
Caldeira Cabral added: "In financial services — which is quite a sensible [sensitive] area — in Portugal, we have worked with firms to help them comply with the regulations instead of waiting for them to do the things they know they should. We help them make things better in a way that produces better results for the community without stopping them or dragging them back. Dragging them back leads to them dying on the beach."
His sentiment clearly resonated, eliciting applause from a few audience members.
International cooperation and the start-up ecosystem
Luther Lowe, head of public policy at start-up accelerator Y Combinator, joined the panel later.
He brought his Silicon Valley perspective to the discussion, highlighting the global nature of start-ups and the importance of international cooperation in AI development.
“Every year, we fund about 500 companies. About half of those are AI businesses. When you’re a founder, and you’re looking to identify where to put your flag, you want someplace that’s not going to require you to undertake something too burdensome,” he said.
Lowe commented on recent legislative developments in California, which echo elements of the EU AI Act, pointing to a growing consensus on the need for ethical and safe AI development practices.
However, he also cautioned against regulations that could stifle small-scale innovation.
“I think it has given some pause to some of the VCs and developers. It’s still very early, but I think we want to ensure we’re protecting open-source development.” He continued, “For example, if I’m tinkering on a small company and exploring how to build something new with generative AI, I don’t want to have to register with the government for some marginal project.”
Caldeira Cabral supported Lowe's position, arguing that centralised EU-level regulation would be more effective than each European country creating its own bespoke rules.
"Having rules is a good thing for firms because they know what they can and can't do — but not overregulating, or this idea of having licences for each of the 27 countries in the EU. We really don't want to say to start-ups, 'If you don't want to adhere to 27 different licences from different governments, you'd better go somewhere else.'"
When asked what he was most excited about in terms of regulation from Europe, Lowe mentioned the Digital Markets Act (DMA): “If you think about the ability of a law to curb the self-preference of the gatekeepers and introduce a lot more oxygen into the markets, that’s going to unlock a lot of opportunity,” he said.
What is the Digital Markets Act?
The Digital Markets Act Regulation 2022 is an EU regulation that aims to create a fairer, more competitive digital economy. It came into effect on 1 November 2022 and became mostly applicable on 2 May 2023.
The DMA seeks to promote increased competition in European digital markets by preventing large companies from abusing their market power and by enabling new players to enter the market. This regulation specifically targets the largest digital platforms operating in the European Union, commonly referred to as “gatekeepers” due to their dominant market position in certain digital sectors and their fulfilment of specific criteria related to user numbers, turnover, or capitalisation.
In September 2023, the EU identified twenty-two services across six companies (deemed “gatekeepers”) — Alphabet, Amazon, Apple, ByteDance, Meta, and Microsoft — as “core platform services.” These companies were given until 6 March 2024 to comply with all of the Act’s provisions.
Caldeira Cabral spoke of the difficulties of having multiple sets of regulations for different regions: “I think we have to be realistic about what we can impose onto our firms, what we can impose onto the world, and how we negotiate with the world.
“I don’t know if the United States wants to negotiate with the European Union. Today? Yes, if we are reasonable. But by the end of the year, I don’t know what kind of America they’re going to have — and China may negotiate everything but then do whatever they want anyway.”
“This doesn’t mean that we should have no rules, but we should be careful about the side effects of having too many rules.”