In March 2022, a video of Ukrainian President Volodymyr Zelenskyy telling his soldiers to put down their weapons and surrender to Russia was posted online. The video ran a minute long, and it quickly spread on social media.

The only problem? It was completely fake.

The video was taken down as soon as it was discovered to be heavily manipulated and edited. However, it presented a huge dilemma to governments, news organizations, and the public: How do you tell the truth when it can be so easily distorted? And how can you convince people that the information you’re presenting is true?

The video of Zelenskyy was a deepfake – one of many that have been created since the technology emerged. While some people use deepfakes to create homages to their favorite movies or make funny face-swapping videos, others use the technology for more nefarious purposes: to lie and change the narrative of a story.

When it comes to the legal landscape, this creates a nightmare scenario for attorneys. How can they prove that these videos are real or fake? How can they show that their client is innocent or determine that a defendant is guilty? Attorneys aren’t experts on technology. How are they supposed to know whether a video is legitimate?

By learning about deepfakes – and how to correctly identify them – attorneys can protect their clients and cases while not being duped by misinformation.

Here’s what you need to know.

What Are Deepfakes?

Deepfakes are synthetic media, typically generated with deep learning algorithms (a branch of artificial intelligence), that manipulate or fabricate video, photos, or audio to make it seem as though individuals said or did things they never actually did. As AI systems grow more capable, deepfakes will become increasingly hard to spot.

The Problems Deepfakes Cause

Deepfakes have become a ubiquitous facet of our digital landscape, creating myriad challenges for governments, law enforcement, attorneys, forensic experts, journalists, and the general public. The problems with them include:

  • Misinformation and Disinformation: Deepfakes have the potential to spread false information rapidly. From manipulated political speeches to fabricated celebrity endorsements, they can be used to deceive and manipulate public opinion. Even if a corrected video is later released, some people may still believe the false narrative.
  • Privacy Invasion: Deepfake technology can easily be employed to create invasive content, like malicious videos aimed at tarnishing an individual’s reputation. This jeopardizes personal privacy and mental well-being.
  • Erosion of Trust: As deepfakes blur the line between reality and fiction, trust in visual and audio content is eroding. People are growing increasingly skeptical, which undermines the credibility of legitimate content.
  • Legal and Ethical Dilemmas: The creation and distribution of deepfakes raise complex legal and ethical questions. Determining responsibility and liability is challenging when synthetic content is involved. Proving a client is innocent or a defendant is guilty can become nearly impossible when you don’t know whether the content is real.

How to Spot a Deepfake

While deepfake technology continues to advance, several techniques have been developed to detect synthetic content.

  • Forensic Analysis: Digital forensics experts examine videos, images, and audio recordings for inconsistencies or artifacts that suggest manipulation. Anomalies in lighting, shadows, and details such as hair, fingers, mouth movements during speech, or facial expressions can often be indicators of a deepfake. For example, AI is known for distorting people’s limbs. By looking closely, you can often discern whether the people in a video are real or synthetic.
  • Reverse Engineering: Researchers can reverse-engineer the algorithms used to create deepfakes. By understanding the technology behind these manipulations, they can develop countermeasures and detection methods.
  • Content Metadata: Analyzing metadata associated with a piece of content, such as timestamps, geolocation, and device information, can reveal discrepancies that may signal a deepfake.
  • Chain of Evidence/Chain of Custody: Use multiple sources of information, including metadata, to verify the authenticity of a piece of media. Tracing the content back to its origins and tracking how it moved from place to place can be essential in ensuring authenticity. Sometimes, deepfake creators scrub the metadata, which makes this more difficult.
  • Machine Learning Models: Machine learning-based algorithms have been developed to detect anomalies in deepfake content. These models can identify patterns and irregularities in facial features, audio signatures, and more.
  • File Hashing: When comparing two seemingly identical files to determine whether one has been altered, hashing the files (via MD5 or a SHA algorithm) and comparing the results offers a fast, easy way to detect any differences in the underlying data.
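The file-hashing technique above can be sketched in a few lines of Python using the standard library’s hashlib module. This is a minimal illustration, not a forensic tool: the function names (`file_digest`, `files_match`) are hypothetical, and in practice examiners would use SHA-256 or stronger, since MD5 is no longer collision-resistant.

```python
import hashlib

def file_digest(path: str, algorithm: str = "sha256") -> str:
    """Return the hex digest of a file, read in chunks so large media files
    don't need to fit in memory."""
    h = hashlib.new(algorithm)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def files_match(path_a: str, path_b: str) -> bool:
    """True only if both files contain byte-for-byte identical data."""
    return file_digest(path_a) == file_digest(path_b)
```

Because a cryptographic hash changes completely when even a single byte differs, matching digests are strong evidence that two copies of a recording are identical, while mismatched digests prove that one has been altered somewhere, even if the change is invisible on playback.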

Additionally, it’s worth looking to the Content Authenticity Initiative (CAI) for guidance. The CAI is a cross-industry standardization effort, led by Adobe, that develops tools and standards enabling content consumers to verify the authenticity of media they encounter online. The initiative brings together technology companies, news organizations, and other stakeholders to establish best practices and technologies for content attribution and provenance.

The Impact of Deepfakes on Attorneys, Law Enforcement, and Legal Proceedings

Attorneys will face both challenges and opportunities with the proliferation of deepfake technology. They will need to develop new expertise in identifying and proving the authenticity of evidence. The rise of deepfakes could also lead to increased litigation over the admissibility of audio and video recordings in court, and attorneys may need to adopt advanced forensic techniques, or hire forensic experts, to establish the veracity of such evidence. Additionally, legal precedents and guidelines for handling deepfake evidence will need to be established to ensure fair and just proceedings.

The emergence of deepfakes has given rise to the need for expert witnesses who specialize in detecting and authenticating digital media. Attorneys should call upon these experts to provide testimony in court, helping judges and juries understand the intricacies of deepfake technology and its implications for a case. These expert witnesses play a crucial role in educating the legal professionals involved in the proceedings and ensuring that justice is served.

Similarly, law enforcement agencies face a significant challenge in the fight against deepfake technology. Criminals can use deepfakes to create false alibis, fabricate confessions, or impersonate victims or witnesses, making it more difficult to investigate and prosecute cases. Law enforcement may need to invest in advanced forensic tools and training to distinguish between authentic and manipulated evidence.

The use of deepfakes as evidence in court could fundamentally transform the legal landscape. They raise concerns about the reliability and authenticity of all video and audio evidence. Judges and juries will need to become more technologically literate to make informed decisions regarding the admissibility and weight of video and audio evidence. Courts may establish more stringent criteria for admitting digital evidence, requiring additional layers of verification. Attorneys will need to stay current with these evolving standards and advocate for their clients’ interests within this changing legal landscape.

Combatting Deepfakes Going Forward

Deepfakes are a challenging issue that threatens the fabric of our digital society by undermining trust, spreading misinformation, invading privacy, and raising legal and ethical dilemmas. As we continue to see news stories about emerging conflicts – such as the current war in Israel and the war in Ukraine – being able to authenticate media is an essential step in reducing propaganda and verifying the authenticity of narratives surrounding the conflicts.

As for attorneys, judges, and courts, there is some good news: The future promises innovative techniques and technologies to tackle deepfake challenges. As deepfakes become more advanced, so will the technology to identify and delegitimize them.

One thing is for sure: In this ongoing struggle for authenticity, a united front across industries is essential. Then, we can successfully safeguard the credibility of the digital content we consume and share.