AI and the Law: An In-Depth Look at the Current Legal Landscape

We can no longer say that change is coming. Change is already happening, and AI use is a fact.

You have surely heard it countless times by now: 2023 is a breakthrough year for AI and the law. The technological revolution we are witnessing inevitably raises important legal challenges, creating new regulatory gaps to be filled.

The Current State of AI Regulation

There is a need to regulate specific issues arising from the use of AI, such as the allocation of liability for the conduct of intelligent machines or the applicability of intellectual property legislation to the world of AI. Neither issue is easily resolved in light of existing legal categories and principles, so the adoption of specific legislation on the subject is required.

Copyright and AI: Ownership Dilemmas

In some cases, copyright may belong to the person or organisation that created the AI system; in other cases, it may belong to the person or organisation that provided the data or training used by the AI system to generate the work. Another category of legal issues related to AI is data privacy and protection. To date, not only is there no specific legislation regulating the use of artificial intelligence systems, but it is also impossible to foresee the civil and criminal consequences of harmful events or offences dependent on, or connected to, their use.

Protecting the Players in AI Innovation

The fundamental objective of regulation should be to protect all the players involved in this innovation while still guaranteeing the technological evolution we stand to benefit from. Players with different interests are involved: the designer, the manufacturer, the programmer, the developer, the user, and the AI system itself. On one side, lawmakers need to make sure they are not preventing the expansion of these new technologies: rules working against a technology's growth would cause the industry to collapse. On the other side, governments must protect their people.

AI-Created Music: Copyright Conundrums

At present, numerous lawsuits have been initiated by music, book, and art creators who contend that their copyrighted works are being used as raw material for AI tools. These creators are vehemently opposed to the digital platforms they believe are responsible for making copyrighted content accessible. Cases like these raise questions: if the AI learns from what is available, should we consider the designer liable? The user? Or, taking a step back, those who illegally made copyrighted content available for the AI to learn from? Another practice is raising concerns in the world of artists: voice cloning using artificial intelligence allows digital copies of human voices, including individual characteristics such as accent, intonation and breathing. Impressive as this is, the practice also raises questions about its impact on existing laws and principles: important legal issues concerning copyright and the protection of the rights of artists and creators are at stake.

What if a new song is created by AI using the cloned voice of a famous singer? Who would be entitled to the corresponding rights and royalties?

It is not yet settled who owns the copyright on music produced through AI. Some legal decisions in the field of graphics and illustration have suggested a “hybrid copyright” approach, under which the intellectual property in the work may vary depending on the involvement and contribution of the AI's creator, of the person who commissioned the music, or of the one who holds the data used to train the AI system. It can be said, however, that we are still in a phase of experimentation, searching for the best solution.

What is happening around the world regarding the regulation of AI?

Around the world, governments are trying to address the lack of regulation of the artificial intelligence phenomenon. Let's take a quick look at international trends, focusing at the end on the European example, considered by most to be the first of its kind in the world. While in some nations governments are still in the “discussion” phase, in others national regulations are already being implemented to govern the use and development of artificial intelligence. For this reason, as you read these lines, something is probably already changing!

Early Stages of AI Regulation

In the US, the regulation of AI is still in its early stages, and there is no comprehensive federal legislation dedicated solely to AI regulation yet. However, existing laws and regulations touch upon certain aspects of AI, such as privacy, security and anti-discrimination. In October 2022, the White House released the Blueprint for an AI Bill of Rights, a non-binding list of principles for the responsible design and use of AI technology in a way that protects civil and human rights from harm. The guide raised some doubts due to its non-binding nature, and the question arose whether it was enough to influence the work of lawmakers. Nevertheless, it provides valuable insights into how technology can be developed while prioritising the protection of civil liberties.

Several countries, like China, have established regulatory frameworks for AI systems, requiring developers and operators to adhere to specific safety and security standards. The authorities, with a view to encouraging the innovative development of generative AI, have softened the initially proposed rules by removing the more punitive terms, such as high fines for technology-related offences. Measures against “deepfakes”, i.e. AI-generated content that is realistic but false, are also in place. However, many of the existing laws apply only to private companies that use AI, not to the Chinese state.

The UK's AI White Paper

The UK Government published its AI White Paper in March 2023. It focuses on setting expectations for the development and use of AI, while empowering existing regulators such as the Information Commissioner's Office (ICO), the Financial Conduct Authority (FCA) and the Competition and Markets Authority (CMA) to issue guidance and regulate the use of AI within their remits. Finally, the UK Government has announced its intention to host a global summit on AI safety in early November 2023, highlighting the urgency of the topic and the need to stay aligned with other countries.

The European example: The Artificial Intelligence Act

The EU Artificial Intelligence Act is set to become the first ever comprehensive legal framework for artificial intelligence. Work started in April 2021, when the European Commission proposed the regulation; the European Council adopted its position last year, and the European Parliament adopted its own in mid-June 2023. The three institutions will now negotiate the final details before the policy can become law, and it may become enforceable within the next few years. The AI Act proposes a risk-based approach and establishes obligations for providers and users depending on the level of risk an AI system can generate.

The Act distinguishes four levels of risk:

  1. Minimal or low-risk systems - systems for which stakeholders are encouraged to build codes of conduct. These include applications that are already widely deployed today.
  2. Limited-risk systems - systems that must meet specific transparency requirements.
  3. High-risk systems - several types of system fall into this category. They are subject to stringent obligations and must undergo conformity assessments before being placed on the EU market. The category includes applications related to transport, education, employment and welfare, among others. It is meant to be limited to systems with a significant harmful impact on the health, safety and fundamental rights of individuals in the EU, a limitation that also minimises any potential restriction of international trade. The list of high-risk systems is subject to permanent review under the proposal.
  4. Systems with unacceptable risk - these systems may not be sold or used in the EU. This includes applications using subliminal techniques, exploitative systems, social scoring systems used by public authorities, and biometric identification systems used by law enforcement in publicly accessible spaces.
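As an illustration only, the four tiers above can be thought of as a simple classification scheme. The sketch below models them in Python; the tier names follow the list above, but the example use cases, the `EXAMPLE_TIERS` mapping and the `requires_conformity_assessment` helper are hypothetical and do not come from the AI Act itself, where classification is a legal, not a programmatic, exercise.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk levels described above (illustrative only)."""
    MINIMAL = 1        # codes of conduct encouraged
    LIMITED = 2        # transparency requirements
    HIGH = 3           # conformity assessment before EU market entry
    UNACCEPTABLE = 4   # may not be sold or used in the EU

# Hypothetical mapping of example use cases to tiers, loosely based on
# the categories listed above.
EXAMPLE_TIERS = {
    "spam_filter": RiskTier.MINIMAL,
    "chatbot": RiskTier.LIMITED,            # transparency: disclose it is an AI
    "exam_scoring": RiskTier.HIGH,          # education-related application
    "social_scoring": RiskTier.UNACCEPTABLE,
}

def requires_conformity_assessment(use_case: str) -> bool:
    """Only high-risk systems need a conformity assessment; banned
    systems cannot be marketed in the EU at all."""
    return EXAMPLE_TIERS.get(use_case, RiskTier.MINIMAL) is RiskTier.HIGH
```

The point of the sketch is simply that obligations attach to the tier, not to the technology: the same underlying model could land in different tiers depending on its use case.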

The primary aim of such an approach is to limit regulatory burdens to sensitive AI systems only, “protecting individuals, companies, democracy and rule of law and the environment from risks while boosting innovation and employment and making the Union a leader in the field”, as stated in the AI Act itself, as per the amendments adopted by the European Parliament in June 2023. According to the latest updates, the details will have to be worked out with the various member states' lawmakers before the draft rules become legislation. Points of contention among member states cannot be excluded, as already happened with facial recognition and biometric surveillance, where some lawmakers asked for a total ban while EU countries wanted an exception for national security, defence and military purposes.


Although a regulation has not yet been passed, there is a pressing need for one to be adopted in a timely manner, so that businesses can anticipate the level of regulation that will be imposed on their systems and adjust their planning for the coming years. While laws and regulations will be needed to ensure that AI is governed consistently and harmoniously across borders, discrepancies among countries cannot be ruled out. A trend in which different levels of protection and different legal boundaries govern the same phenomenon would strongly influence where AI systems are developed, sold and used.

The impact would be significant and could seriously hamper technological development, since businesses must ensure that they are aware of the laws and regulations pertaining to their AI systems and comply with them, in order to avoid the fines, criminal penalties or other sanctions that are likely to be imposed. The most probable, or rather the most desirable, outcome is that the first official source regulating AI will be promptly followed and taken as a starting point for subsequent sources, documents and laws, so as to ensure a homogeneous and harmonious system of rules for this resourceful phenomenon.


Published Date: 10/26/2023