Recently, the Irish Data Protection Commission halted the launch of Google’s new artificial intelligence (AI) product, Bard, over concerns about data privacy under European Union (EU) law. This follows a similar action by Italy after the initial launch of ChatGPT in that country earlier in 2023.
Debate continues over whether new regulation is needed to address concerns about AI safety. However, the disruptive nature of AI suggests that existing regulations, which never foresaw such rapid development, may already be preventing consumers from accessing these products.
The difficulty of launching AI products in Europe illustrates one of the problems with using static regulation to govern technology: technology often evolves faster than regulation can adapt. The rapid uptick in the use of generative AI is only the latest example of consumers adopting new technologies at an increasingly fast pace. But regulations typically lack the flexibility to accommodate such disruption, even when it might provide better alternatives.
The General Data Protection Regulation (GDPR) is an EU law that created a series of data protection and privacy requirements for businesses operating in Europe. Many American companies spent over $10 million each to ensure compliance, while others chose to exit the European market instead. The law also led to decreased investment in startups and reduced app development in an already weaker European tech sector.
But beyond these expected outcomes of static regulation, the requirements of GDPR have raised questions about whether new technologies can comply with the law at all. At the time of its passage, much of this concern focused on blockchain technology’s difficulty in meeting GDPR requirements; now the disruptive nature of AI is showing how a regulatory, permissioned approach can have unintended consequences for beneficial innovation. Stringent and inflexible technology regulations can keep us stuck in the past or present rather than moving on to the future. A static regulatory approach impedes the evolution of technology that, if permitted to develop without such restrictions, could rectify the very deficiencies the regulations originally aimed to prevent.
Unlike market‐based solutions or more flexible governance, such a compliance‐focused approach presumes to know what tradeoffs consumers want to make, or “should” want to make. Ultimately, it is consumers who lose out on the opportunities and benefits provided by new technologies or by creative solutions for balancing these concerns.
While there may be legitimate privacy debates over the use of certain data by the algorithms that power AI, regulations like the GDPR presume that privacy concerns should always win out over other values consumers find significant. For example, more inclusive data sets run afoul of calls for data minimization in the name of privacy, yet they are more likely to address concerns about algorithmic bias or discrimination.
Europe has long seemed set on a path of heavy‐handed regulation over a culture of innovation, and the growing regulatory thicket is starting to produce regulations that contradict one another on issues such as privacy. As the U.S. considers data privacy regulations and regulatory regimes affecting AI, policymakers should watch carefully the unintended consequences that Europe’s more restrictive approach has yielded.