New Law in NH Creates Private Right of Action For Victims of Deepfakes

September 6, 2024

There’s been no shortage of examples in recent years of how deepfake technology can be used in alarming ways:

  • Fraudsters recently posed as a multinational company’s CFO on a video call, convincing an employee to transfer $25 million of company funds to the scammers.
  • A disgruntled athletic director at a Maryland high school allegedly created and disseminated a fake audio recording of the school’s principal making racist and antisemitic comments.
  • Reports are surfacing across the country of deepfake images being deployed as a cyberbullying tool, such as through face-swapping and “undressing” apps.

These examples illustrate the three primary content types deepfakes target: video, audio, and images.

As the technology has improved and the damage inflicted on victims has mounted, concerns about deepfakes have continued to grow. Those concerns recently culminated in the enactment of a new law in New Hampshire that could have implications across the United States.

New Hampshire: Generation of a Deepfake Could Lead to Civil and Criminal Actions Against the Perpetrator

Not mentioned above – but perhaps the tipping point for deepfake fears – was an incident earlier in 2024, when a deepfake recording of Joe Biden’s voice was disseminated across New Hampshire through robocalls urging voters not to participate in the state’s presidential primary.

This prompted the filing of a civil lawsuit against the creator of the audio, as well as the telecom companies that distributed the calls. The New Hampshire Attorney General also brought multiple criminal charges against the individual who created the deepfake.

A few months later, New Hampshire’s Governor signed into law H.B. 1432, the first state law enacted that specifically provides a private right of action for victims of deepfakes. From the statute:

A person may bring an action against any person who knowingly uses any likeness in video, audio, or any other media of that person to create a deepfake for the purpose of embarrassing, harassing, entrapping, defaming, extorting, or otherwise causing any financial or reputational harm to that person for damages resulting from such use.

The statute also stipulates that the generator of a deepfake is guilty of a class B felony “if the person knowingly creates, distributes, or presents any likeness in video, audio, or any other media of an identifiable individual that constitutes a deepfake for the purpose of embarrassing, harassing, entrapping, defaming, extorting, or otherwise causing any financial or reputational harm to the identifiable person.”

The law takes effect January 1, 2025.

New Hampshire Law Could Provide a Playbook for Other States

Even in divided times, it stands to reason that there will be extensive bipartisan motivation for more laws addressing deepfakes. No politician is insulated from the risks deepfakes pose, and their constituents are likely just as concerned about the harm deepfakes can cause.

As of June, per the Voting Rights Lab, 118 bills containing provisions intended to regulate AI-generated election disinformation were pending in 42 state legislatures.

What will be worth monitoring is whether the laws that are ultimately enacted are drafted broadly enough to capture conduct in non-political contexts, and whether they follow New Hampshire’s lead in allowing a private right of action for those affected by deepfakes. Legislation proposed by New York Governor Kathy Hochul this past spring would provide such a private right of action.

Insurance and Risk Impact

“Private right of action” are four words that will always catch the attention of liability insurance professionals. General Liability and Homeowners policies, as well as other Specialty lines of business, could potentially be implicated if and when civil actions involving deepfakes proliferate.

General Liability

With respect to General Liability insurance, deepfake exposures should primarily be considered in the context of Coverage B – Personal and Advertising Injury – of the ISO Commercial General Liability policy. The definition of “personal and advertising injury” in the ISO CG 00 01 base policy includes the following two subparagraphs:

d. Oral or written publication, in any manner, of material that slanders or libels a person or organization or disparages a person’s or organization’s goods, products or services;

e. Oral or written publication, in any manner, of material that violates a person’s right of privacy.

It’s certainly possible that transgressions involving deepfakes could give rise to claims under this coverage part. Unlike Coverage A, Coverage B may – depending on exclusions – provide some level of coverage for acts that are intentional. If a business disparages another party, or violates that party’s right of privacy, through a deepfake, it’s possible that claims could make their way to that business’s GL carrier.

Homeowners

Cyberbullying, which could trigger civil claims involving invasion of privacy, intentional infliction of emotional distress, and negligent entrustment, has been discussed as an exposure for Homeowners insurance since the early days of the Internet. The majority of U.S. states have laws in place that determine a parent’s liability for a minor’s wrongful acts.

With deepfake technology (and other AI tools) more readily available for misuse by adolescents, this risk has only grown as new applications for deploying the technology surface. Ultimately, whether Homeowners coverage would apply depends on the policy language in force, as well as the jurisdiction of the case.

Specialty Lines

In addition to General Liability and Homeowners insurance, more specialized lines of business could also be materially impacted, including Crime, Cyber, and D&O policies. Excess policies may also come into play if verdicts track recent social inflation trends and produce seven- or even eight-figure payouts.

Ultimately, as deepfake technology continues to improve, the barrier to entry falls: anyone with an internet connection can create a deepfake and expose themselves to liability. Given this dynamic, it will be important for risk and insurance professionals to do the following:

  • Understand how the use cases for deepfakes – and artificial intelligence technology in general – continue to evolve.
  • Track how regulations and laws – at both the state and federal levels – are crafted to address deepfakes.
  • Be mindful of how insurance policy language could respond in the event of a claim.