Based on a speech given by our Policy Adviser, Anna Bulakh, to the European Parliament. “Tech Developments and Regulatory Approaches regarding Disinformation”
15 April 2021
New technologies are shaping the media ecosystem. In this ecosystem, we are all consumers. The question is how to make the media ecosystem a safer space for users and creators and develop practical initiatives to fight online disinformation.
New face of content
New technologies available for user generation of content are appearing at a rapid rate. Consider the development of synthetic media, such as in the case of deepfakes.
The term ‘deepfake’ describes an AI-driven face-swapping technique: a model uses an image of an individual to generate a digital look-alike of that person. Commercial applications can now produce realistic face swaps from a single selfie, and the number of deepfakes is growing quickly.
There were only about 14,000 synthetic videos online in 2019. After our app went viral in 2020, users generated more than 3 billion synthetic videos through Reface alone in just 14 months.
Technology is getting better and more sophisticated every day. The definition of media is changing before our eyes.
Commoditization of AI in new era of creative economy
We can observe the commoditization of AI and deep learning tools. One hundred million users worldwide installed Reface in just 14 months. This trend results from the development of the creative economy, in which content creators are becoming a driving force of the information market.
Yet as creative tools develop at unimaginable speed, we also find more vulnerabilities to misinformation, disinformation, and criminal activity. Some of the initial concerns focused on political applications, such as the widely circulated ‘deepfake’ of former US President Barack Obama. In practice, however, the most common malicious deepfakes are not political but pornographic, raising both ethical and criminal concerns. Deepfakes have also been deployed to defame individuals and to facilitate fraud and other illegal activity.
We have an alarming problem with the growing amount of inauthentic content filling the internet and online platforms. Such content is attributed to fake accounts and anonymous users, which makes it difficult to trace, to establish its authenticity, or to assign ownership.
Deliberately deceptive media and bad actors exploit this gap in how platforms allow users to operate. Reface is concerned about the overwhelming number of deepfakes rapidly filling the internet.
We need to make synthetic media a safer space for users and creators.
Here’s how we can do it
1. Ensure Content Authenticity
To ensure content authenticity, we should mark the content. Metadata empowers content creators and editors to disclose who created or changed the content, what was modified in the video (lip-sync, voice alteration, or image manipulation), and how it was changed.
Such metadata helps to safeguard content authenticity and make data accessible on platforms. What advantages would it provide?
First, it would empower fact-checking and detection initiatives.
Second, it would build trust online between creators, publishers, and consumers.
Finally, it would enable us to create an ecosystem of accountability.
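As a minimal sketch of the disclosure idea: a provenance record naming the creator, the modifications applied, and a hash of the content itself. The field names here are illustrative, not a real standard; industry efforts such as C2PA define the metadata formats actually used in practice.

```python
import hashlib
import json

def make_provenance_record(content_bytes, creator, modifications):
    """Build a disclosure record for a piece of synthetic media.

    Illustrative only: the fields mirror the disclosures described
    above (who created/changed the content, what was modified), plus
    a content hash so tampering can be detected.
    """
    return {
        "creator": creator,                # who created or changed the content
        "modifications": modifications,    # e.g. ["face-swap", "lip-sync"]
        "content_sha256": hashlib.sha256(content_bytes).hexdigest(),
    }

record = make_provenance_record(b"<video bytes>", "reface-app", ["face-swap"])
print(json.dumps(record, indent=2))
```

A platform or fact-checker receiving the media can recompute the hash and compare it with the record, so any undisclosed edit is immediately visible.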
2. Content Labeling
Platforms and companies need to start labeling their content, for instance with watermarks. The exact form of labeling may vary from platform to platform.
Media and companies need to educate their users on spotting modified content. Media literacy is part of ensuring responsible use of technology. Labeling and the metadata marks also help trace content from the moment of its production to its final destination.
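Tracing content "from the moment of its production to its final destination" can be pictured as a tamper-evident trail, where each step hashes the one before it. This is a hypothetical sketch, not a description of any existing platform's system.

```python
import hashlib

def extend_trail(trail, step, payload_hash):
    """Append one step (e.g. 'created', 'published') to a trail.

    Each link's hash covers the previous link, so altering any
    upstream record changes every later hash. Illustrative only.
    """
    prev = trail[-1]["hash"] if trail else ""
    link_hash = hashlib.sha256((prev + step + payload_hash).encode()).hexdigest()
    return trail + [{"step": step, "hash": link_hash}]

trail = []
trail = extend_trail(trail, "created", "abc123")
trail = extend_trail(trail, "watermarked", "abc123")
trail = extend_trail(trail, "published", "abc123")
```

Anyone holding the trail can recompute it from the first link onward; a mismatch reveals exactly where the record was tampered with.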
3. Integrate Safety and Ethics in Your Product
Companies need to integrate safety thinking into their product design. Synthetic media is where technology meets safety and ethics. To defend the power of creativity and freedom of expression, we must align safety, ethics policies, and technology during product development. The responsibility for implementing effective measures against malicious use of technology belongs to multiple stakeholders: platform policies and governance, regulation, and user awareness must all work together.
We need to build moderation and detection into commercial synthetic media tools. A combination of algorithmic identification and human-centered verification of intentionally misleading content will reduce the number of malicious deepfakes.
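The combination of algorithmic identification and human verification can be sketched as a simple triage: a classifier score (hypothetical here, with made-up thresholds) routes high-confidence detections to automatic blocking, borderline cases to human reviewers, and the rest to publication.

```python
def triage(items, auto_block=0.95, needs_review=0.6):
    """Route content by a hypothetical deepfake-classifier score.

    Scores >= auto_block are blocked automatically; scores in the
    middle band go to human reviewers; the rest are published.
    Thresholds are illustrative, not recommendations.
    """
    blocked, review, published = [], [], []
    for name, score in items:
        if score >= auto_block:
            blocked.append(name)
        elif score >= needs_review:
            review.append(name)
        else:
            published.append(name)
    return blocked, review, published

blocked, review, published = triage([("a", 0.99), ("b", 0.70), ("c", 0.10)])
```

The human-review band is the point of the design: algorithms scale, but intent (satire versus deception) is exactly what they judge worst, so ambiguous cases stay with people.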
The synthetic media industry should be, first and foremost, an industry that prioritizes consumer security. Platforms have to educate creators. Only the responsible use of creative tools will allow us to make synthetic media a safe space.