Focus On AI

AI And The Law: The Lawyers Are Innovating Too

No one likes to see the lawyers get involved. But in AI, as in all things, the lawyers have a definite role to play. The use of AI implicates rights, interests and grey areas, and it will produce new opportunities and new types of agreements, along with new types of disputes. It will test the application of existing laws and reveal gaps to be filled by new ones. So the law and lawyers are at the beginning of their own steep innovation curve, along with everyone else.

It would be easy to think that the legal issues are all about copyright, with OpenAI, Stability AI and other foundation model builders being sued by more than a dozen companies for copyright infringement. In fact, it is partly about that – these are indeed important issues to sort out in the courts and in boardrooms.

But there is much more to it. In the United States, by direction of the White House in the Executive Order issued on October 30, 2023, each component of the U.S. Government is required to review its use of, and rules for the use of, AI. That work is in progress and being reported on a regular basis. Much of it involves new codes of conduct and guidelines that will be informed by internal legal counsel, as well as the harder work of applying those guidelines to real-world fact patterns.

Alongside this, the same U.S. agencies are reviewing their existing regulations to establish, and then articulate, how those laws-on-the-books apply to the use of AI within their domains. They are reviewing existing regulatory powers and enforcement policies. Unlike the European Union, the U.S. does not have an omnibus AI regulation, so the work is happening on a sectoral basis. Without question, U.S. government lawyers, working with policy teams, are at the center of that effort.

For instance, all of the following have publicly pledged to enforce federal laws to promote responsible innovation in the context of automated systems: the Equal Employment Opportunity Commission (EEOC), the Department of Justice (DOJ) Civil Rights Division, the Federal Trade Commission (FTC), the Consumer Financial Protection Bureau (CFPB), the Department of Education (ED), the Department of Health and Human Services (HHS), the Department of Homeland Security (DHS), the Department of Labor (DOL) and the Department of Housing and Urban Development (HUD). Other departments and agencies have also issued AI guidance, including in defense, healthcare and securities regulation. The FTC has been particularly active in this area, bringing cases against Weight Watchers and Amazon (Alexa) for the inappropriate collection and use of children's data, and against Rite Aid for using facial recognition to surveil customers without their knowledge. The remedies have included the disgorgement of entire models from use. Most recently, the FTC has proposed rules to protect individuals against AI impersonation and launched inquiries into the financial and commercial relationships between foundation model builders and their partners.

Similarly, the EEOC has emphasized that companies using AI to make or support hiring or promotion decisions should expect the rules to apply just as they would in the absence of those advanced tools; companies are subject to enforcement actions for violations, misuse or discriminatory practices. Even in the absence of new legislation and regulation, we are likely to see other such applications of existing law in the U.S.

In Canada, Air Canada was successfully sued after a chatbot on its website gave a customer incorrect information about bereavement fares. The customer relied on that information, and Air Canada argued that because the chatbot was developed by a third party, the carrier did not bear responsibility for its performance. The British Columbia Civil Resolution Tribunal (CRT) ruled otherwise, roundly dismissing that defense and ordering compensation for the customer, stating: “It should be obvious to Air Canada that it is responsible for all the information on its website. It makes no difference whether the information comes from a static page or a chatbot.”

In Europe, the new EU AI Act and related regulations will provide many more guardrails on AI use and impact, protecting the rights of EU citizens and imposing obligations on businesses operating within the Common Market. The revamped EU Product Liability Directive (March 2024) allows consumers to seek compensation from manufacturers of defective products, a category that includes “providers” of AI systems. The Directive also eases the burden of proof in cases involving “technological or scientific complexity.”

Beyond the application of specific regulatory restrictions, a central concern for businesses and their lawyers is the simple fact that generative AI systems, in part because of their basic design, still make a lot of errors (“hallucinations” are one species of these). Organizations rolling these tools out will need, at a minimum, strong governance for how the tools are developed and used, how employees are trained on them, and how customers are invited to interact with them. Errors will occur, and that is an expected part of experimentation (as well as of the human experience), but the absence of ex ante governance to anticipate, minimize and account for those errors is now within the ambit of the lawyers and regulators who may impose ex post liability. Those who insist on moving fast and breaking things would do well to remember that the lawyers are innovating too.

Kay Firth-Butterfield, one of the world’s foremost experts on AI governance, is the founder and CEO of Good Tech Advisory. Until recently she was Head of Artificial Intelligence and a member of the Executive Committee at the World Economic Forum. In February she won the Time100 Impact Award for her work on responsible AI governance. Firth-Butterfield is a barrister, former judge and professor, technologist and entrepreneur, and Vice-Chair of The IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems. She was part of the group that met at Asilomar to create the Asilomar AI Ethical Principles, and is a member of the Polaris Council for the Government Accountability Office (USA), the Advisory Board for the UNESCO International Research Centre on AI, ADI and AI4All. She sits on the Board of EarthSpecies and regularly speaks to international audiences about the beneficial and challenging technical, economic and social changes arising from the use of AI. This is the third in a planned series of exclusive columns she is writing for The Innovator. It was compiled from information supplied to her by Karen Silverman, CEO of Cantellus Group.

