Deepfakes & Misinformation Abound, Here’s How We Can End the Internet Chaos
“A human is worth more if they’re addicted, polarized, outraged, misinformed and narcissistic because that’s better for producing an effect in human attention.” This strong statement from Tristan Harris describes the era of attention capitalism, and it kicked off the day at one of Bloomberg’s largest tech events. At “Sooner Than You Think” (STYT), technologists, policymakers, educators and journalists gathered to discuss the impact of technology on our society and the balance between innovation and obligation in the industry.
I found myself feeling a mix of concern, anger and optimism as we explored this crossroads and paved a path forward. Read on for the three critical themes I took home with me.
I'll believe it when I see it.
Maybe we shouldn’t? Creating fakes isn’t a new concept - many of us have used Photoshop to add a mustache to a friend’s face or make it look like we’re in a picture with our favorite TV characters. But now we have the technology to create fake audio and video that’s incredibly believable and imperceptible as fake to the human eye and ear. We also have the ability to distribute this fabricated content to the global masses, or to a micro-targeted audience, with the help of algorithms. The reality is that deepfake-making technology is amoral: it can be used benignly, as in comedic satire, or it can be weaponized, as when women’s faces are placed into pornography or when it’s used to create messages of hate.
Check out this deepfake Jordan Peele made by using AI on a video of Barack Obama.
In this video, Jordan Peele calls on us to be more vigilant about what we trust on the internet. Shamir Alibhai mentioned at STYT that not only is it getting easier to create deepfakes, but deepfake creation technology will eventually outpace deepfake detection technology. It takes only about 15 minutes of human work to make a deepfake, with cloud computing doing the rest. A quick Google search turns up tools you can literally download and use today.
Alibhai emphasized the urgent need for a system to authenticate critical content and videos that have an evidentiary character, such as footage captured by a bystander, security footage or a camera on a police officer’s car. We shouldn’t let technology, and our inability to detect fakes with the naked eye, get in the way of due process.
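Alibhai didn’t spell out how such a system would work, but a common building block is cryptographic signing at capture time: hash the footage the moment it’s recorded, sign the hash, and verify later copies against that signature. Here’s a minimal Python sketch of the idea, assuming the open-source cryptography package and a stand-in clip file; a real system would also need trusted key management and tamper-resistant capture hardware.

```python
# A minimal sketch of content authentication: hash footage at capture time
# and sign the digest, so later copies can be verified against the original.
# Uses the third-party "cryptography" package; key handling is simplified.
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def fingerprint(path: str) -> bytes:
    """SHA-256 digest of the raw file bytes, read in 1 MB chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.digest()

# Demo: write a stand-in "clip" so the sketch runs end to end.
with open("clip.mp4", "wb") as f:
    f.write(b"stand-in video bytes")

# At capture time, the camera (or capture app) signs the digest.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()
signature = private_key.sign(fingerprint("clip.mp4"))

def is_authentic(path: str) -> bool:
    """Check whether a copy still matches the signed original."""
    try:
        public_key.verify(signature, fingerprint(path))
        return True
    except InvalidSignature:
        return False

print(is_authentic("clip.mp4"))  # True: untouched copy
with open("clip.mp4", "ab") as f:
    f.write(b" tampered")        # simulate an edited frame
print(is_authentic("clip.mp4"))  # False: signature no longer matches
```

Note the limitation: this only proves a copy is bit-identical to what was signed at capture, not that the scene itself was real, and any re-encoding breaks the match - which is why provenance systems try to sign as close to the camera sensor as possible.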
Scary thought: just imagine someone deepfaking a benign video of you speaking to make you appear to say something racist, and then it goes viral on Twitter. Could you be fired or sued? How would you prove that it’s fake?
Check out these other fakes that have taken over the internet.
Free Speech vs. Paid Speech
Fake news that proliferates, aggravates, incites action and polarizes us has been the topic of discussion for the last few years. Alibhai defined fake news as “deceptive blogs with a veneer of newsworthiness being shared online.” According to an MIT study, fake news spreads six times faster than true information on Twitter.
At STYT, Brittany Kaiser (pictured above in the second chair from the right), who used to run business development at Cambridge Analytica (CA), took the stage and spoke about how CA leveraged a tremendous amount of Facebook user data to identify and target the “persuadables”: voters who hadn’t yet made up their minds and could be nudged to decide in a specific direction. For the 2016 election, CA bombarded these “persuadable” users with over 5 million pieces of customized content designed to create the perception of the world that CA wanted them to have. You can watch The Great Hack or read Vox’s op-ed to learn more.
The panel pointed out that while Facebook shut down over 2 billion fake user accounts in three months this year, it still won’t fact-check political ads or posts by candidates, even when they violate the site’s hate speech rules. This decision came from Facebook’s desire to stay neutral during the election, but it may allow misinformation and malevolence to proliferate further, spread by those who can afford to create and promote fake news.
This stance upset staff at Facebook, who wrote an open letter to Mark Zuckerberg demanding a more active stance on misinformation. The letter suggested that Facebook “submit campaign ads to fact-checking, limit microtargeting, cap spending, observe silence periods or at least warn users.”
What can we do about the state of misinformation?
The speakers shared some advice for us to consider as we grapple with the current state and look towards the future with optimism. Here’s what they said:
Don’t give up
Actress Kerry Washington shared that it’s important to be aware and active during the election off-seasons, so we can make sure that we’re selecting the right leaders to represent our communities. Speakers also suggested being mindful of what we follow and whom we trust.
Make products with privacy in mind
DuckDuckGo, Density, and Foursquare shared how they’re leading profitable companies without commoditizing user data, tracking only what’s necessary. Jeff Glueck of Foursquare emphasized that they even keep a blacklist of locations they do not share in order to protect groups from harm, such as Planned Parenthood clinics and LGBTQ spaces. Also: give your users a “terms and conditions” they can actually read and understand. Yes, please!
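Foursquare didn’t share its implementation, but the blacklist idea reduces to a simple pattern: strip records for sensitive venue categories before any dataset leaves your systems. A minimal sketch, with hypothetical category names and record fields (not Foursquare’s actual taxonomy):

```python
# A minimal sketch of a sensitivity blacklist: drop records for protected
# venue categories before sharing location data. Category names and record
# fields here are hypothetical, not Foursquare's actual schema.
SENSITIVE_CATEGORIES = {
    "reproductive_health_clinic",
    "lgbtq_venue",
    "place_of_worship",
    "addiction_treatment_center",
}

def scrub(records: list[dict]) -> list[dict]:
    """Return only records whose venue category is safe to share."""
    return [r for r in records if r.get("category") not in SENSITIVE_CATEGORIES]

visits = [
    {"venue": "Joe's Coffee", "category": "cafe"},
    {"venue": "Planned Parenthood", "category": "reproductive_health_clinic"},
]
print(scrub(visits))  # only the cafe visit survives
```

The design choice worth copying is that the filter runs at the sharing boundary, so no downstream partner ever receives the sensitive rows in the first place.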
The Chief Information Officer of Equifax relayed that you should store data assuming that you’ll have a data breach, so ask yourself “How can we store less valuable information?”
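One concrete way to “store less valuable information” is pseudonymization: keep a keyed hash of an identifier instead of the identifier itself, so leaked rows are far less useful to an attacker. A minimal sketch, assuming a server-side secret held outside the database; the field names are illustrative:

```python
# A minimal sketch of data minimization: store a keyed hash of an email
# address instead of the address itself, so leaked rows are less valuable.
# The secret key must live outside the database (e.g., a secrets manager).
import hashlib
import hmac
import os

SECRET_KEY = os.environ.get("PII_HASH_KEY", "dev-only-key").encode()

def pseudonymize(email: str) -> str:
    """Return a stable keyed hash: usable for lookups, hard to reverse."""
    return hmac.new(SECRET_KEY, email.lower().encode(), hashlib.sha256).hexdigest()

row = {"user_id": 42, "email_hash": pseudonymize("jane@example.com")}
# The raw email never touches the database; matching a login attempt means
# hashing the submitted address and comparing the hashes.
print(row)
```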
The government needs a new framework
Former FCC commissioner Mignon L. Clyburn (pictured above on the far right) pointed out that the reason the government hasn’t been able to regulate big tech is that “we’ve got a 19th-century framework for 21st-century problems”. She also noted that as long as we’re all working in our own silos, we won’t make progress. Instead, lawmakers, regulators, ethicists and technologists need to actually hear each other, get past their own industry cultures and work together.
Tom Bossert, who served as homeland security advisor to two presidents, emphasized that this new framework of rules and standards needs to account for the current and evolving state of technology and include a process for accountability and responsibility.
Take a stance
Many panelists suggested that leaders need to decide where they stand and use that stance to inform their organizational decisions.
Well, Jack Dorsey, CEO of Twitter, took a stance on misinformation this Wednesday. He tweeted that Twitter is banning ads from political candidates globally:
Twitter also won't accept payments to promote tweets or other ads that take a position on policy issues, such as immigration, health care, national security, and climate change.
The World Wide Web turned 30 years old this year, and it’s still learning and evolving, and we with it. Its creators built it to share information efficiently from computer to computer, from person to person. They probably never imagined a day when the internet would need passwords, rules and protections. So by design, it was open, vulnerable, and a big unknown.
Similarly, we may be building new technology today with unforeseeable repercussions. So it’s critical that we have these conversations about truth, trust, and responsibility while demanding ethical standards and challenging business models anchored in selling user data and creating digital addictions.
What do you think about this balance we must maintain between innovation and obligation?