Why this is Tech’s Oppenheimer Moment, and what we’re doing about it.

Picture this: you wake up to find your digital self entangled in activities you have zero association with. It sounds like a Black Mirror episode, but the reality is far grimmer.

In a world where deepfakes can defraud people out of nearly $1 million, we are standing at the edge of a precipice. Still think it's just a minor issue? Consider this: the deepfake video market has grown 84% since last year, and AI experts have been warning us about the "point of singularity," the stage at which technological progress surpasses human understanding, which some predict will arrive by 2030. Even the European Union is scrambling to regulate deepfakes, acknowledging the gravity of the problem. Governments are moving as quickly as they can to mitigate the dangers of irresponsible AI - but as always, it's not fast enough. That's where ThatsMyFace comes in.

ThatsMyFace is a plug-in security technology that uses facial metric data to let users know when their face is being uploaded without their consent on platforms such as YouTube, TikTok and Pornhub. By doing so, we also enable the platforms themselves - much loved, and often integral to our modern, digital lives - to protect their users and stay legally compliant through a simple, efficient plug-in that can be rolled out quickly.

Here's a granular breakdown of why we exist, what we do, and how we will do it.

The Problem

1. Tech is outpacing our ethics, fast.

As much as 90 percent of online content could be synthetically generated within a few years. Deepfake technology has been used in recent years to make a synthetic substitute of Elon Musk that shilled a cryptocurrency scam, and to steal millions of dollars from companies by mimicking their executives’ voices on the phone. Yet lawmakers and industry leaders remain passive and unwilling to mitigate the risks of AI.

2. No one can trust anything anymore.

The increasing volume of deepfakes could lead to a situation where “citizens no longer have a shared reality, or could create societal confusion about which information sources are reliable; a situation sometimes referred to as ‘information apocalypse’ or ‘reality apathy’” (Europol). The worst part is that deepfakes are rapidly eroding trust in key public institutions, such as the courts (for example, a doctored recording of a parent abusing their child was submitted and accepted as evidence in a 2019 child custody case).

3. Organisations risk serious legal and financial liabilities.

Knowing these risks, policymakers are rapidly introducing new policies. The EU has introduced deepfake rules that advise platforms, including those established outside the EU, to cooperate with ‘trusted flaggers’ to identify and remove illegal content. China recently established provisions that require companies and people to obtain consent, verify identities and offer recourse mechanisms for deepfake content. These are sweeping changes, which can result in serious legal liabilities, such as civil and class action lawsuits, heavy fines, bans on operations and loss of consumer trust.

How We Do It

Traditionally, deepfake mitigation has been spearheaded by smart detection technologies. But these technologies are, and forever will be, outpaced by deepfake creators. We have to admit it: very soon, deepfakes will be entirely undetectable.

That's why ThatsMyFace's approach to deepfake mitigation is different. Our facial-recognition technology tokenises each user's facial data and uses sophisticated data crawling to detect when their face appears online. The user is then notified and can do one of three things: verify the face as their own, flag it as not their own, or report it to be taken down.
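
To make that flow concrete, here is a minimal, illustrative sketch in Python. Everything in it - the function names, the stubbed embeddings, the similarity threshold - is a hypothetical simplification rather than our production code; in reality the embedding model, the crawler and the notification channel are separate systems.

# Minimal sketch of the ThatsMyFace matching flow (hypothetical names and thresholds).
# Assumes a face-embedding model has already produced fixed-length vectors; that step
# is stubbed here so the example can focus on tokenisation, matching and user actions.

import hashlib
import math
from dataclasses import dataclass
from enum import Enum


class UserAction(Enum):
    VERIFY_OWN = "verify_own"            # "yes, that's me, and it's fine"
    NOT_MINE = "not_mine"                # "that isn't my face"
    REPORT_TAKEDOWN = "report_takedown"  # "that's me, take it down"


@dataclass
class FaceToken:
    user_id: str
    token: str              # opaque identifier derived from the embedding
    embedding: list[float]  # facial metric vector (stubbed for this example)


def tokenise(user_id: str, embedding: list[float]) -> FaceToken:
    """Derive an opaque token at enrolment so matching can reference it, not raw images."""
    digest = hashlib.sha256(
        user_id.encode() + b"|" + ",".join(f"{x:.6f}" for x in embedding).encode()
    ).hexdigest()
    return FaceToken(user_id=user_id, token=digest, embedding=embedding)


def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))


def notify_user(token: FaceToken, source_url: str) -> None:
    # In production this would push to the plug-in or app; here we just print.
    print(f"[notify] user={token.user_id} possible use of your face at {source_url}")
    print(f"         available actions: {[a.value for a in UserAction]}")


def crawl_and_match(enrolled: list[FaceToken], found_embedding: list[float],
                    source_url: str, threshold: float = 0.85) -> None:
    """Compare a face found by the crawler against enrolled tokens and notify matches."""
    for token in enrolled:
        if cosine_similarity(token.embedding, found_embedding) >= threshold:
            notify_user(token, source_url)


if __name__ == "__main__":
    alice = tokenise("alice", [0.12, 0.98, 0.33, 0.41])
    # A crawled frame with a near-identical embedding triggers a notification.
    crawl_and_match([alice], [0.12, 0.97, 0.34, 0.40], "https://example.com/video/123")

In a real deployment the raw embedding would not live alongside the token the way it does in this self-contained sketch; the point is simply to show the enrol, crawl, match, notify, act loop end to end.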

Why?

The next 3 years are the tech industry's Oppenheimer moment. Tech is rapidly approaching the singularity, the point at which our rules for deploying and mitigating technology will no longer apply. Collectively, we need to pour our intellectual capacity and effort into setting up safety mechanisms for the future, before it devours us whole.

The digital realm needs a new moral compass. And we're here to build it. But we can't do it alone. We invite you—our users, our mentors, our community—to be part of this seismic shift towards a more ethical, secure digital world.

Nadia & Sigurd

ThatsMyFace
