Deepfakes and Blockchain

Leo Lu
01.17.24

Introduction

This research addresses how blockchains and public key cryptography can be used to counter the rise of deepfakes. Deepfake technology (videos, audio, and other content created by AI to impersonate people and events) poses a serious threat to democracies around the world. First, we provide a general overview of deepfakes, diving deeper into their rise and the current methodologies for detecting them. After reviewing deepfakes, we evaluate how blockchains are positioned to combat them and highlight some key players in the space. The target audience for this piece is Web3 and AI enthusiasts, researchers, investors, and builders who have an understanding of AI and the core properties of blockchains.

Why are Deepfakes a Real Problem?

Deepfake technology — videos, audio or images that have been manipulated by AI to impersonate real people — has advanced so much in recent years that it now poses a real security threat to businesses, political outcomes, and societal beliefs. These convincing creations have the capacity to substitute a person’s appearance and voice, raising concerns about their potential for misuse in spreading misinformation, perpetrating fraud, and other malicious activities.

The rise of deepfakes is a formidable threat that raises profound concerns across various domains. Some of the concerns and issues associated with deepfakes include:

  • Misinformation and Fake News: Deepfakes can be used to create realistic-looking videos of public figures saying or doing things they never did. This can lead to the spread of false information and contribute to the problem of fake news.
  • IP Theft: Deepfakes and AI-generated content are often created by leveraging Intellectual Property such as songs and movies. This can lead to copyright infringement, which poses ethical concerns and potential economic losses for those affected.
  • Privacy Concerns: Individuals can be targeted with deepfake content that falsely depicts them in compromising or inappropriate situations, leading to personal and professional consequences. Beyond tarnishing personal reputations, this poses severe emotional and psychological risks for those targeted.
  • Security Threats: Deepfakes can be used for malicious purposes, such as creating realistic fake identities for unauthorized access to secure systems or facilities.
  • Trust and Credibility: The prevalence of deepfakes challenges the trustworthiness of visual and audio evidence, making it more difficult to discern between authentic and manipulated content.

How are current deepfake detection technologies falling short?

Although ongoing research is dedicated to developing various techniques for identifying deepfakes, the fundamental challenge lies in establishing a foolproof and trustworthy method for confirming the authenticity of digital content. There has been a plethora of research into deepfake detection algorithms. Researchers from top universities and companies such as Intel have released several deepfake detection algorithms trained to distinguish real content from AI-generated content. Typically, these algorithms are 70% to 80% effective at telling real from fake content: noteworthy, but not reliable enough for large-scale adoption.

The issue with these detection algorithms is that they are locked in a never-ending game of "cat and mouse" with deepfake creators. As deepfakes become more advanced and realistic, the detection models need to be retrained and upgraded. For example, consider blinking. In 2020, researchers found that because deepfake systems weren't trained on footage of people with their eyes closed, the videos they produced featured unnatural blinking patterns. AI clones didn't blink frequently enough or, sometimes, didn't blink at all, characteristics that could be easily spotted with a simple algorithm. Just weeks after the deepfake detection technology went public, the next generation of deepfake synthesis techniques incorporated blinking into their systems.
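To make this cat-and-mouse dynamic concrete, below is a minimal sketch of the kind of blink-rate heuristic described above, assuming per-frame eye-openness scores (e.g., an eye aspect ratio from a facial-landmark detector) have already been extracted. The thresholds are illustrative, not taken from any published detector.

```python
# Illustrative blink-rate heuristic; all thresholds are hypothetical.
# Assumes eye-aspect-ratio (EAR) values per frame were already extracted.

def count_blinks(ear_values, closed_threshold=0.21, min_closed_frames=2):
    """Count blinks as runs of frames where the eye aspect ratio
    dips below the 'closed' threshold."""
    blinks, closed_run = 0, 0
    for ear in ear_values:
        if ear < closed_threshold:
            closed_run += 1
        else:
            if closed_run >= min_closed_frames:
                blinks += 1
            closed_run = 0
    if closed_run >= min_closed_frames:
        blinks += 1
    return blinks

def looks_synthetic(ear_values, fps=30, min_blinks_per_minute=4):
    """Flag a clip whose blink rate is implausibly low for a real person
    (humans typically blink roughly 15-20 times per minute)."""
    minutes = len(ear_values) / fps / 60
    if minutes == 0:
        return False
    return count_blinks(ear_values) / minutes < min_blinks_per_minute
```

A heuristic this simple is exactly why the defense was short-lived: once generators were trained on footage that included closed eyes, the signal it flags disappeared.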

Ironically, the back-and-forth dynamic echoes the technology at the core of deepfakes: the generative adversarial network, or GAN. This machine learning framework comprises two neural networks working in tandem. One network generates the deceptive content, while the other strives to discern it, creating a reciprocal process that refines and improves with each iteration. This dynamic is reflected in the broader research landscape, where each new deepfake detection paper poses a fresh challenge for deepfake creators to overcome.
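For readers less familiar with GANs, the toy sketch below shows that adversarial loop in miniature using PyTorch. The tiny fully connected networks are stand-ins; production deepfake models use far larger convolutional architectures trained on faces.

```python
# Minimal GAN training step: D learns to spot fakes, G learns to fool D.
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 64
G = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, data_dim))
D = nn.Sequential(nn.Linear(data_dim, 128), nn.LeakyReLU(0.2), nn.Linear(128, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(real_batch: torch.Tensor):
    n = real_batch.size(0)
    real, fake = torch.ones(n, 1), torch.zeros(n, 1)

    # Discriminator step: score real samples high, generated samples low.
    fake_batch = G(torch.randn(n, latent_dim)).detach()
    d_loss = bce(D(real_batch), real) + bce(D(fake_batch), fake)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: produce samples the discriminator labels "real".
    g_loss = bce(D(G(torch.randn(n, latent_dim))), real)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()
```

Each iteration sharpens both sides, which is precisely why a published detection trick hands the next generation of generators a training signal to erase it.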

Deepfake detection algorithms not only have accuracy and longevity shortcomings; they also fail to address another issue with deepfakes: accountability and repercussions for bad actors. Detection algorithms simply distinguish between AI-generated and authentic content; they do not determine the source of a deepfake. As a result, it's difficult to punish bad actors and reward good ones, as there are no economic repercussions for creating and spreading fake content. Moreover, there's no record of good and bad actors, preventing consumers from assessing the reputations and trustworthiness of the sources of the content they consume. While deepfake detection algorithms may improve over time and are likely part of the answer, they are not a foolproof solution to combating the rise of deepfakes.

In addition to detection algorithms, watermarks also can help combat the rise of deepfakes and help people find the source of a piece of content. Watermarks are recognizable, unique signals in an image or video. Watermarking is commonly used by picture agencies and newswires to prevent images from being used without permission—and payment. Both AI model developers and companies that publish digital content are working on their own watermarking solutions to tag their content and ensure copyright protection.

However, there is no standard for watermarking; each company uses a different method. For instance, DALL-E employs a visible watermark (and a simple Google search will yield several tutorials on how to remove it). In contrast, alternative services rely on metadata or pixel-level watermarks that are invisible to people. While certain watermarks are challenging to remove, visual watermarks, such as those employed by DALL-E, may lose effectiveness once an image is resized or edited. The lack of standards and the variety of approaches make it difficult for people to rely on watermarks to authenticate content, limiting widespread adoption. Yet widespread adoption is pivotal for an effective authentication protocol. We need an open protocol and standardized watermarks to establish a universal content authentication repository that spans industries, countries, and time.
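As an illustration of how fragile ad-hoc watermarks can be, here is a toy pixel-level (least-significant-bit) watermark in Python. It is invisible to the eye, but as noted above, it does not survive resizing or re-encoding; the 24-bit ID embedded here is hypothetical.

```python
# Toy least-significant-bit (LSB) watermark. Real invisible watermarks
# use more robust frequency-domain schemes; this one breaks under the
# exact edits (resizing, re-encoding) described above.
import numpy as np

def embed_watermark(pixels: np.ndarray, bits: str) -> np.ndarray:
    flat = pixels.flatten().copy()
    if len(bits) > flat.size:
        raise ValueError("image too small for watermark")
    for i, bit in enumerate(bits):
        flat[i] = (flat[i] & 0xFE) | int(bit)  # overwrite the lowest bit
    return flat.reshape(pixels.shape)

def extract_watermark(pixels: np.ndarray, n_bits: int) -> str:
    flat = pixels.flatten()
    return "".join(str(flat[i] & 1) for i in range(n_bits))

img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)  # stand-in image
mark = format(0xC0FFEE, "024b")                 # hypothetical 24-bit ID
stamped = embed_watermark(img, mark)
assert extract_watermark(stamped, 24) == mark   # survives only if untouched
```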

Moreover, existing watermarks are not secure and were not designed to be part of a cryptographic protocol. Robust security is essential for content authentication, as it prevents bad actors from forging watermarks and marking fake, AI-generated content as "real." Accurate verification of content requires cryptographic guarantees and some form of incentives or punishments, whether economic or reputational. Fortunately, cryptographic guarantees and economic incentives are the backbone of robust blockchain technology.
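A minimal sketch of the guarantee in question, using the Python `cryptography` package: the creator signs a SHA-256 hash of the content with an Ed25519 private key, and anyone holding the corresponding public key can verify that the bytes have not been altered. Unlike a watermark, the signature cannot be forged without the private key.

```python
# Signing a content hash: the cryptographic guarantee watermarks lack.
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

creator_key = Ed25519PrivateKey.generate()
public_key = creator_key.public_key()

content = b"...raw bytes of an image or video..."
digest = hashlib.sha256(content).digest()
signature = creator_key.sign(digest)      # creator attests to this content

try:
    public_key.verify(signature, digest)  # anyone can check the attestation
    print("content authentic: signed by this key")
except InvalidSignature:
    print("signature invalid: content altered or key mismatch")
```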

How can blockchains help?

An encouraging approach to tackling this problem involves utilizing blockchains and smart contracts. By harnessing the inherent immutability of blockchains and the automated execution of smart contracts, it becomes feasible to establish a system that verifies and validates genuine content, effectively discerning it from altered or deepfaked versions. Several properties of blockchains position them as a robust foundation for addressing the challenges posed by deepfake technology:

  • Immutable Ledger: Blockchain’s decentralized and tamper-resistant ledger ensures the immutability of recorded transactions. By signing and storing digital content and information, such as images, videos, or audio recordings, on the blockchain, it becomes extremely challenging for malicious actors to alter or manipulate the data once it has been recorded. This immutability can serve as a foundation for verifying the authenticity of media.
  • Timestamping: Blockchain allows for accurate timestamping of data. When media content is timestamped and recorded on the blockchain, it establishes a chronological order of creation or publication. This timestamping can be crucial for determining the original source and creation time of content, aiding in the identification of manipulated or deepfake material. Knowing when the artifact was created allows you to handle all kinds of different problems. An interview with a person that was created after the person’s death is more than just shady, it’s clearly fake.
  • Decentralized Verification: Utilizing a decentralized network of nodes, blockchain enables a consensus mechanism for verifying the authenticity of digital content. Instead of relying on a centralized authority, multiple nodes in the network can independently validate the legitimacy of media, reducing the risk of false positives or negatives in the detection process. A solution could involve incentivizing nodes, each running their own deepfake detection algorithm, with rewards for participating in consensus and casting a correct vote on the authenticity of digital content (see the sketch after this list).
  • Smart Contracts for Authentication: Smart contracts, self-executing contracts with the terms of the agreement directly written into code, can be employed for automated authentication processes. By creating smart contracts that define the criteria for authentic content, blockchain can automatically validate the integrity of media based on predefined rules, providing an efficient and automated verification process. For example, a smart contract could verify the source of a piece of content via an invisible watermark.
  • Content Ownership and Attribution: Blockchain can be utilized to establish and manage ownership and attribution of digital content. By creating a transparent and traceable record of content ownership on the blockchain, it becomes easier to track the origin of media and identify instances where unauthorized alterations may have occurred.
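A toy sketch tying several of these properties together: an append-only registry of content hashes with timestamps, plus a simple majority vote across detector nodes standing in for on-chain consensus. All names and thresholds here are illustrative, not any specific protocol's design.

```python
# Illustrative content registry plus majority-vote verification.
import hashlib
import time

class ContentRegistry:
    def __init__(self):
        self._records = {}  # content hash -> record (append-only)

    def register(self, content: bytes, creator: str) -> str:
        digest = hashlib.sha256(content).hexdigest()
        if digest in self._records:
            raise ValueError("already registered")  # immutability: no overwrite
        self._records[digest] = {"creator": creator, "timestamp": time.time()}
        return digest

    def lookup(self, content: bytes):
        return self._records.get(hashlib.sha256(content).hexdigest())

def consensus_authentic(content: bytes, detector_nodes) -> bool:
    """Each node runs its own detection model; a simple majority decides."""
    votes = [node(content) for node in detector_nodes]
    return sum(votes) > len(votes) / 2

# Example: three hypothetical detector nodes voting on a clip.
nodes = [lambda c: True, lambda c: True, lambda c: False]
registry = ContentRegistry()
h = registry.register(b"raw video bytes", creator="0xCreatorPubKey")
print(h, consensus_authentic(b"raw video bytes", nodes))  # hash, True
```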

How does this work in practice?

The most practical blockchain solution for ensuring authentic digital content is to leverage public key cryptography, allowing creators to stake their reputation on the authenticity of their content by signing it with their private key, verifiable by anyone against their public key. Protocols could require creators to stake an asset (USDC, ETH, etc.) that could be slashed if they sign inauthentic content. In the event that a protocol determines that some signed content is fake, the creator would lose their staked assets, leaving an immutable record of that public key's actions and reputation.
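Sketched as plain Python standing in for a smart contract (the minimum stake and slashing rule are assumptions for illustration, not a live protocol):

```python
# Illustrative stake-and-slash logic: creators lock a deposit behind
# their public key; signed content later judged fake burns the deposit.
class StakeRegistry:
    def __init__(self, min_stake: float):
        self.min_stake = min_stake
        self.stakes = {}      # public key -> locked deposit
        self.reputation = {}  # public key -> list of (content_hash, verdict)

    def stake(self, pubkey: str, amount: float):
        if amount < self.min_stake:
            raise ValueError("deposit below minimum stake")
        self.stakes[pubkey] = self.stakes.get(pubkey, 0.0) + amount

    def attest(self, pubkey: str, content_hash: str):
        if self.stakes.get(pubkey, 0.0) < self.min_stake:
            raise PermissionError("must stake before signing content")
        self.reputation.setdefault(pubkey, []).append((content_hash, "pending"))

    def slash(self, pubkey: str, content_hash: str):
        """Called when consensus rules a signed item inauthentic."""
        self.stakes[pubkey] = 0.0  # burn the deposit
        self.reputation[pubkey].append((content_hash, "slashed"))
```

Because the reputation log is never deleted, the public key's full history of attestations and slashes remains queryable by anyone deciding whether to trust its content.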

There are also solutions that could go well beyond a stand-alone public key signing mechanism. A more sophisticated approach could leverage real-world identities. By associating public keys with verified identities, a feedback and penalty system can be established to address misuse, such as signing fraudulent images or videos. With identities that are soulbound and cannot be reissued on chain, it becomes impossible to circumvent the system and spin up a fresh on-chain identity once someone is deemed a malicious actor.

The effectiveness of this system relies heavily on integrating public key signing with real-world identity verification. Blockchain identity systems such as Worldcoin and soulbound tokens on Ethereum could play a pivotal role in establishing a decentralized and tamper-proof identity registry. This registry would correlate public keys with real-world identities, simplifying the trust-building process and holding individuals accountable for malicious actions.

One key to scaling this solution is embedding signing capabilities into the products creators already use, whether in software or hardware. On the hardware side, we anticipate that smartphones and other devices will soon integrate built-in, hardware-based signing for various media, including images and videos. In the software layer, we expect providers like Photoshop and image generators like Stable Diffusion to incorporate public key cryptography mechanisms. This integration will empower creators to authenticate their work while also recording the specific tools used in the production process.

Another critical element of this system is a robust consensus mechanism that can distinguish fake from real content and handle disputes and slashing. This would require a network of nodes that identify the sources of watermarks and run deepfake detection algorithms. Invisible watermarks are likely the most robust solution, as the sources of all content would be publicly accessible in the registry. An ideal solution would require broad agreement on a standard watermark scheme. While that coordination is certainly a large undertaking, decentralized networks have proven capable of large-scale social coordination through strong economic incentives.

Emerging Players

[Image source: StoryProtocol.xyz]

Although it’s still early in the development of robust blockchain-based infrastructure for authenticating content, there are several emerging players seeking to tackle the problem. To date, most approaches have focused on creating a public IP registry, underpinned by blockchain infrastructure. This partly solves the deepfake problem, but some specific tooling regarding deepfake detection is still needed. Regardless, there are several companies making progress and directing resources to authenticating content.

Story Protocol is attempting to bring IP into the internet era by providing both an open IP repository and a set of modules for interacting with that IP in a frictionless way. Story's intellectual property registry allows creative works (prose, images, audio, and beyond) to document their developmental path from initial creation through open-ended digital collaboration. Its entire infrastructure, the data structures and modules alike, is built on blockchain. As content proliferates on the internet, the Story team believes blockchains offer provenance and authenticity without the need for an intermediary. While Story is not exclusively focused on deepfakes, its solution addresses the problem by enabling creators to cryptographically sign their content, creating an ownership standard and a record of the content they share, whether real or fake. Additionally, we expect the team to build infrastructure to identify deepfakes and maintain a public reputation system.

Numbers Protocol provides content verification services to AI-driven companies and creative tools. Leveraging low-cost digital provenance infrastructure and decentralized storage, it enhances trust in digital content and promotes innovative methods for content monetization. Numbers establishes immutable records for digital content and monitors any modifications made, facilitating collaboration and management. The Numbers Mainnet operates as a decentralized GitHub, securely and transparently storing data pertaining to assets, including provenance, ownership, and historical records.

Atem is a decentralized content creation protocol aimed at helping creators tokenize their content and build Web3-native communities. While its roadmap is not focused on deepfakes, Atem is focused on content ownership, leveraging soulbound tokens and NFTs. By doing so, Atem creates an immutable, public record of a creator's assets, with timelines and sources for every work.

Challenges and Outlook

While the application of blockchain in combating deepfakes shows promise, it's important to acknowledge the challenges, such as scalability and user adoption, and to continue refining these solutions through interdisciplinary collaboration and ongoing technological advancement. Blockchain scalability comes to the forefront when considering a deepfake solution. Decentralized verification and consensus algorithms have historically been expensive to run, as evidenced by high gas prices on Ethereum and many other Layer 1s. If it's expensive to mint content on-chain and verify it as real, media companies, foundation-model developers, and creators will not be able to mint all their assets. The cost to mint content likely needs to be under one cent for mass adoption in a social media or publishing setting. We're making progress with the proliferation of ZK and optimistic rollups and the growth of cost-effective monolithic chains like Solana, but prices still need to fall further for large-scale adoption.
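For a rough sense of scale, here is a back-of-the-envelope calculation; the gas usage, gas price, and ETH price below are all assumed for illustration.

```python
# Hypothetical cost to register one 32-byte content hash on-chain.
gas_per_registration = 50_000  # assumed gas for a minimal registry call
gas_price_gwei = 30            # assumed L1 gas price
eth_usd = 2_500                # assumed ETH price

l1_cost_usd = gas_per_registration * gas_price_gwei * 1e-9 * eth_usd
print(f"L1 cost per registration: ${l1_cost_usd:.2f}")        # ~$3.75

rollup_discount = 100          # assumed ~100x cheaper on a rollup
print(f"Rollup cost: ${l1_cost_usd / rollup_discount:.4f}")   # ~$0.0375
# Even the rollup figure sits above the ~$0.01 target argued for above.
```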

The other key challenge is creating a universal standard and incentivizing people globally to adopt it. While token incentives and strong Web3 communities can help, we'll also need cohesive international partnerships across governments, corporations, artists, and technical leaders committed to shaping the governance of digital content creation. While Web3 communities are likely to adopt a blockchain-based standard, mass adoption will require large Web2 platforms such as Instagram and creator tools like Adobe to integrate public key cryptography. Given the scale of these organizations, this would require seamless technology and close tactical collaboration.

Finally, multidisciplinary teams are pivotal to creating robust standards for detecting and disincentivizing the misuse of deepfakes. Blockchain has its constraints in addressing the challenge of combating deepfakes and revealing the culprits behind malicious activity. While integrating blockchain with other advancing technologies, like AI, can enhance its efficacy in identifying deepfakes, it is crucial to acknowledge that technological problems cannot be entirely resolved through technology alone. Regulatory advancement and societal alignment are also crucial to combating the proliferation of deepfakes. Regardless, it's clear that the transparency and immutability of blockchains offer a unique approach to the digital IP and deepfake problem. We believe the space is ripe for innovation and look forward to tracking the latest developments.

Disclaimer

The information provided in this blog post is for educational and informational purposes only and is not intended to be investment advice or a recommendation. Struck has no obligation to update, modify, or amend the contents of this blog post nor to notify readers in the event that any information, opinion, forecast or estimate changes or subsequently becomes inaccurate or outdated. In addition, certain information contained herein has been obtained from third party sources and has not been independently verified by Struck. The company featured in this blog post is for illustrative purposes only, has been selected in order to provide an example of the types of investments made by Struck that fit the theme of this blog post and is not representative of all Struck portfolio companies.

Struck Capital Management LLC is registered with the United States Securities and Exchange Commission (“SEC”) as a Registered Investment Adviser (“RIA”). Nothing in this communication should be considered a specific recommendation to buy, sell, or hold a particular security or investment. Past performance of an investment does not guarantee future results. All investments carry risk, including loss of principal.