As the tech companies behind generative AI (GenAI) and their corporate clients struggle to figure out how to put guardrails in place, insurance companies sense an opportunity.
Business risks associated with GenAI include everything from cybersecurity issues to copyright infringement, false or biased outputs and the leaking of proprietary company data.
Although it’s still early days, analysts told the Wall Street Journal that there is appetite for AI insurance, and major carriers could offer specialized coverage for financial losses stemming from AI and GenAI. Existing liability or cybersecurity policies could also soon be amended for generative AI. “I would bet that over fifty percent of large enterprises would buy some of these insurance policies if they come out, and they make sense,” Avivah Litan, a Gartner analyst who focuses on AI trust, risk and security, told the Journal.
Insurance, though, is not a panacea, as corporate clients of cybersecurity insurance have learned. Consumer product manufacturer Clorox has spent $25 million to respond to a recent cyber attack, including hiring forensic investigators and legal and technology help, and more cyber expenses are expected to arise in 2024. The company said it has cyber insurance, but it can’t predict which costs will be covered, or when, according to an October 4 filing with the Securities and Exchange Commission.
Other ways for companies to manage GenAI risks are also emerging.
Recognizing concerns that businesses may have with embedding generative AI into operations, vendors including Microsoft, IBM and Adobe are offering some protections. IBM recently said its standard contractual intellectual property protections will apply to the generative AI models it has developed. Adobe in June said that businesses can purchase IP indemnification from the software company for generative-AI-created content on its Firefly platform. In September, Microsoft announced a commitment to defend and pay for lawsuits stemming from a customer’s use of its GenAI-based Copilot tools. The company said customers must be using its built-in guardrails, which aim to filter out copyrighted content.
Meanwhile, the Financial Times reported this week that tech companies such as Anthropic and Google DeepMind are creating “AI constitutions” — sets of values and principles that their models can adhere to, in an effort to prevent abuses. The goal is for AI to learn from these fundamental principles and keep itself in check, without extensive human intervention. “We, humanity, do not know how to understand what’s going on inside these models, and we need to solve that problem,” Dario Amodei, chief executive and co-founder of AI company Anthropic, told the Financial Times. Having a constitution in place makes the rules more transparent and explicit, so anyone using the model knows what to expect “and you can argue with the model if it is not following the principles,” he added.
Regardless of whether GenAI insurance policies become available, experts say having an effective governance strategy will be vital for companies.
In a report about managing GenAI risk, PwC suggests that companies establish their own responsible AI strategies and alert C-suite executives about how GenAI might impact areas under their responsibility.
For the CISO, GenAI adds a valuable asset for threat actors to target. They could, for example, manipulate AI systems to make incorrect predictions or deny service to customers, says PwC. As a result, proprietary language and foundational models, data and new content will need stronger cyberdefense protections.
Chief data officers and chief privacy officers need to be aware that GenAI applications could exacerbate data and privacy risks, says the report. Employees entering sensitive data into public generative AI models is already a significant problem for some companies. GenAI, which may store input information indefinitely and use it to train other models, could contravene privacy regulations that restrict secondary uses of personal data.
Compliance officers will need to keep up with new regulations and stronger enforcement of existing regulations that apply to generative AI, and legal teams will need deeper technical understanding than lawyers typically have in order to challenge and defend GenAI-related issues, says PwC. Without proper governance and supervision, a company’s use of generative AI can create or exacerbate legal risks. Lax data security measures, for example, can publicly expose the company’s trade secrets and other proprietary information as well as customer data, says the report. Failure to review generative AI outputs can result in inaccuracies, compliance violations, breach of contract, copyright infringement, erroneous fraud alerts, faulty internal investigations, harmful communications with customers and reputational damage.
Auditing will be a key governance mechanism to confirm that AI systems are designed and deployed in line with a company’s goals. But to create a risk-based audit plan specific to generative AI, internal audit divisions must design and adopt new audit methodologies, new forms of supervision and new skill sets, says PwC.
Chief financial officers and controllers will also have to play a role. Without proper governance and supervision, a company’s use of GenAI can create or exacerbate financial risks. If not used properly, it opens the company to “hallucination” risk on financial facts, errors in reasoning and over-reliance on outputs requiring numerical computation. These are high-consequence risks that CFOs face in the course of their normal duties, often in a regulated environment. Highly visible, unintended financial reporting errors erode trust with customers, investors, regulators and other stakeholders and have resulted in severe reputational damage that is costly to companies, says the PwC report.
IN OTHER NEWS THIS WEEK:
Neurotech startup Precision Neuroscience announced October 5 that it has acquired a factory in Dallas, where it will build the key component of its brain implant, the Layer 7 Cortical Interface. The facility will help the company speed up development and move closer to the regulatory approval it is hoping to clinch in 2024. The company has started testing its brain implant on human patients and believes it could ultimately help people with paralysis operate digital devices with their brain signals. Precision said the manufacturing plant is the only facility capable of producing its “sophisticated” electrode array.
France’s SiPearl, the company building the world’s first energy-efficient HPC-dedicated microprocessor designed to work with any third-party accelerator (GPU, artificial intelligence, quantum) and one of The Innovator’s Startups Of The Week, announced a contract to equip JUPITER, the first European exascale supercomputer. The contract is a major milestone for SiPearl in fulfilling the mission assigned to it by the European Union through the European Processor Initiative (EPI) consortium: to ensure European sovereignty with the return of high-performance, low-power microprocessor technologies in Europe.
Belgium’s intelligence service is monitoring the main European logistics hub of Chinese technology and e-commerce company Alibaba Group Holding over concern about possible espionage, the Financial Times reports. Referring to the company’s logistics center at the cargo airport in the city of Liège, the security service said it was working to detect “possible espionage or interference activities” by Chinese entities including Alibaba, the news outlet said on October 5.
To read more of The Innovator’s News In Context articles click here.