By Kristin Acheson
Technology and Innovation Writer
In the era of fake news, deepfakes are the application of artificial intelligence that has spilled into misinformation, with the potential to exploit and target individuals online. The first step in assessing the severity of deepfakes is to explain what they are. According to CNBC, a deepfake is a manipulated picture or video that uses sophisticated artificial intelligence to create fabricated images and videos that seem real. The name comes from deep learning, a branch of artificial intelligence in which algorithms learn to make decisions on their own. Whether they are actors or politicians, public figures can be affected by this technology, which has the power to make them appear to say something they never actually said. Nor does it affect only celebrities; this technology also affects everyday people. If you have ever used a filter on your phone’s camera or taken videos of yourself, those systems are already tracking your face and facial movements, making deepfakes a problem that could affect us all one day. A fabricated video can become a hoax that spreads misinformation about ordinary people. Deepfakes should be taken more seriously as these companies and technologies only get smarter about your patterns on and offline, and they will only improve as strides in machine learning make it easier to model how you interact online. This is even more important in an election year, when it is imperative to be skeptical of the videos and information spread around online.
Deepfakes have even become a concern to their primary distribution channels: social media platforms. Companies like Facebook have installed “fact-checkers” on these deepfake videos, but the checks are not applied to heavily trimmed videos or parody videos. This is another reason to be wary of the videos that circulate in your feed on any form of social media. Dr. Chase Cunningham, a Principal Analyst at Forrester Research, said that “people make decisions on headlines and videos in 37 seconds.” This is a problem because most content online is longer, due to monetization guidelines and limits set by apps, and people are more inclined to believe what they see when the content is lengthy and highly engaging. For example, when you watch a video online, that content is automatically tied to your profile to try to predict your future behavior; this is the main reason “recommended” features exist, to show you more of what you want to see. Deepfakes are only a small part of the very convoluted story of how news is shared with us online and what we are exposed to. Whether or not people think they can differentiate between a deepfake and a real video, artificial intelligence will only be enhanced and refined to a point where you may soon be unable to identify what is real on the internet.
Contact Kristin at firstname.lastname@example.org