The Blurred Line between Reality & Illusion
Perception of reality is akin to viewing the world through a clear window, where the images one sees are faithful depictions of the real world. In contrast, watching the world through screens has changed the way we see and understand the real world.
It is time to question what is real and what is illusion in this age of technology.
The rise of AI has led to a proliferation of fake content that can be used to manipulate people and spread misinformation. One of the most concerning examples is deep fake videos, which use AI algorithms to manipulate images and videos to create hyper-realistic simulations of people doing or saying things they never actually did.
A few months ago, I came across a video on social media that showed a famous celebrity making a controversial statement. The footage looked authentic and was shared by many people, so I assumed it was real.
However, I started to doubt the video’s authenticity, especially after reading conflicting information about the celebrity’s stance. So, I decided to do some research and fact-checking.
After some digging, I discovered that the video was a deepfake created with AI. It had been designed to look and sound like the celebrity, but the words and actions in the video were entirely fabricated.
As AI technology continues to evolve and become more sophisticated, we’ll likely see even more advanced forms of manipulated content in the future. By being aware of this and staying vigilant, we can help protect ourselves from falling prey to these illusions.
This experience motivated me to write about this topic and hopefully raise some awareness. In this article, I will share some areas where the line between reality and illusion is blurring and some ways to protect ourselves.
Fake News and AI Generated Images
Let me tell you, the internet can be a scary place. These days, you can’t always trust what you see online, especially when it comes to news stories. It’s not just about the fake news we’ve all heard about. There’s a new player in town: AI generated fake news.
Here is an article from NYT covering this topic: The People Onscreen Are Fake. The Disinformation Is Real.
AI algorithms can now create news stories that look and feel like they were written by humans. These stories can be complete with quotes, statistics, and bylines from legitimate news sources. That means you can come across a news story on your social media feed or a news site that looks real but is actually wholly fabricated.
The thing is, these fake stories can be compelling. They can be designed to target specific groups or people and even incorporate personalized details to make them even more believable. And once they’re out there, they can be shared on social media and spread like wildfire.
Another striking example related to this topic is the recent AI-generated images of Trump that went viral on the internet: That photo of Trump being tackled by police isn’t what you think it is as the internet enters a new era of AI disinformation.
It’s frightening, but there are ways to protect yourself. For starters, always be skeptical of what you see online. Take the time to fact-check and verify sources.
And, be aware of your biases and how they might affect the information you consume. If you only read news stories confirming your beliefs, it might be time to diversify your sources and read alternative viewpoints.
Don’t be afraid to speak up if you see something that seems fishy. Report fake news stories and content to the appropriate authorities and social media platforms, and help raise awareness about the issue.
Together, we can fight against AI generated fake news and protect ourselves and others from being misled by these illusions. More on this later.
Deepfake and AI Generated Audio
AI generated audio is another form of fake content that is becoming increasingly sophisticated. I have to admit, the idea of AI-generated audio kinda freaks me out. It’s scary to think that with the help of speech synthesis technology, AI algorithms can create audio recordings of people saying things they never actually said.
The implications of this are concerning, to say the least.
As a tech enthusiast, I’m always fascinated by the latest advancements in AI technology. But the potential misuse of this technology is a cause for alarm. The recent deepfake video of Tom Cruise performing a magic trick is a perfect example of just how advanced and sophisticated this technology has become. The AI-generated voice and face were so convincing that it fooled many people into believing it was actually the actor himself.
For instance, there was a case where a deepfake video of former President Barack Obama, created by BuzzFeed Motion Pictures with actor and filmmaker Jordan Peele, went viral on social media.
It showed Obama saying things he never actually said, making it appear as if he was giving a speech about the dangers of this very technology. The video was compelling, but it was entirely fake. This illustrates how easy it is for people to be fooled by fabricated content in the age of AI.
But it’s not just about celebrity impersonations. The use of AI-generated audio to create fake phone calls or to impersonate someone’s voice is a real threat. In addition, this can be used to create propaganda by generating fake speeches or interviews of public figures.
One project idea to address this is to develop technology that can detect and identify fake audio recordings. Voice recognition software, for example, can be trained to recognize the unique patterns of a person’s natural speech, making it easier to distinguish between real and fake recordings.
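To make the idea concrete, here is a toy sketch of spectral fingerprint comparison. This is not a real voice-recognition system: the function names are mine, the naive DFT is a deliberate simplification, and an actual detector would use an FFT library and trained models on perceptual features such as MFCCs. It only illustrates the principle that recordings of the same source share a similar frequency signature.

```python
import math

def dft_magnitudes(signal, k_max=32):
    """Naive discrete Fourier transform magnitude spectrum.

    Fine for a tiny demo; real systems would use an FFT library and
    richer perceptual features (e.g. MFCCs)."""
    n = len(signal)
    return [
        abs(sum(signal[t] * complex(math.cos(2 * math.pi * k * t / n),
                                    -math.sin(2 * math.pi * k * t / n))
                for t in range(n)))
        for k in range(k_max)
    ]

def spectral_similarity(sig_a, sig_b):
    """Cosine similarity between two magnitude spectra (1.0 = identical)."""
    a, b = dft_magnitudes(sig_a), dft_magnitudes(sig_b)
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

# Two "recordings" of the same tone (different phase) vs. a different tone.
n = 64
same_voice_1 = [math.sin(2 * math.pi * 5 * t / n) for t in range(n)]
same_voice_2 = [math.sin(2 * math.pi * 5 * t / n + 1.0) for t in range(n)]
other_voice = [math.sin(2 * math.pi * 9 * t / n) for t in range(n)]
```

Because the comparison uses magnitude spectra, it is phase-invariant: two captures of the same tone score near 1.0 even when the recordings are not aligned, while a different tone scores near 0.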
As someone who loves technology, I believe it’s important to stay informed and vigilant in the face of this technology, and to work together on solutions instead of just pointing at the problem.
So, how can we protect ourselves from these mind games?
We need to increase media literacy and critical thinking skills among the public. We need to learn how to evaluate sources of information and understand the motivations behind the dissemination of certain news stories or recordings.
One approach is to rely on fact-checking websites and organizations that can help us verify the accuracy of the information that we encounter. This includes sources such as Snopes, PolitiFact, and FactCheck.org, which provide reliable and impartial assessments of the truthfulness of various claims.
Another approach is to use AI-powered detection tools that can identify AI-generated content. I know, that sounds ironic, right? Trying to detect AI-generated content using another machine intelligence model.
Detecting AI generated Images and Videos
There are tools that can analyze the pixel patterns and metadata in images and videos to determine whether they have been manipulated or altered. A few examples:
- Truepic — a mobile app that lets users capture and verify the authenticity of photos and videos in real time, using patented digital watermarking technology.
- ExifTool — a command-line tool that can extract and analyze metadata from images and videos to determine if they have been edited or tampered with.
- Adobe Photoshop — while it may seem counterintuitive, Adobe Photoshop also has tools that can be used to analyze pixel patterns in images and identify any inconsistencies or signs of manipulation.
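As a small taste of what metadata-based checks look like, here is a stdlib-only sketch that reads the tEXt chunks of a PNG file. Some AI image pipelines (for example, popular Stable Diffusion front ends) are known to write their generation parameters into such chunks. The function name is mine, and this is only a weak signal: metadata is trivially stripped when an image is re-saved or uploaded, so its absence proves nothing.

```python
import struct

PNG_SIGNATURE = b"\x89PNG\r\n\x1a\n"

def png_text_chunks(data: bytes) -> dict:
    """Return {keyword: value} from a PNG's tEXt metadata chunks.

    Some AI image generators store generation parameters here; editors
    may leave traces too. Treat hits as hints, not proof."""
    if not data.startswith(PNG_SIGNATURE):
        raise ValueError("not a PNG file")
    chunks, pos = {}, len(PNG_SIGNATURE)
    while pos + 8 <= len(data):
        length, ctype = struct.unpack(">I4s", data[pos:pos + 8])
        body = data[pos + 8:pos + 8 + length]
        if ctype == b"tEXt":
            key, _, value = body.partition(b"\x00")
            chunks[key.decode("latin-1")] = value.decode("latin-1")
        if ctype == b"IEND":
            break
        pos += 12 + length  # 4 (length) + 4 (type) + data + 4 (CRC)
    return chunks
```

For a quick check you could run `png_text_chunks(open("image.png", "rb").read())` and look for keys like `parameters`; dedicated tools such as ExifTool do the same job far more thoroughly across many formats.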
Detecting AI generated Text using Natural Language Processing
There are also tools that can analyze the language and syntax of text to detect whether it was generated by an AI algorithm or written by a human. A few examples:
- Botometer — a tool that can analyze Twitter accounts and tweets to determine whether they are likely to be generated by a bot or a human.
- Grover — a tool developed by the Allen Institute for Artificial Intelligence that can detect generated text by analyzing the language patterns and inconsistencies that are common in AI-generated text.
- Diffbot — a web-based tool that uses advanced natural language processing techniques to analyze text and identify any signs of automation or machine-generated content.
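To give a feel for the simplest end of this spectrum, here is a toy heuristic sometimes called "burstiness": human prose tends to mix short and long sentences, while very uniform sentence lengths can be one (weak!) hint of machine-generated text. The function name and thresholds are mine, and real detectors like Grover rely on learned language models, not a single statistic like this.

```python
import re
import statistics

def burstiness(text: str) -> float:
    """Coefficient of variation of sentence length, measured in words.

    0.0 means perfectly uniform sentence lengths; higher values mean
    more variation. A weak heuristic only; never a verdict on its own."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2 or statistics.mean(lengths) == 0:
        return 0.0
    return statistics.stdev(lengths) / statistics.mean(lengths)

uniform = "The cat sat here. The dog sat here. The bird sat here."
varied = "Wow. That was a remarkably long and winding sentence full of detail. Indeed."
```

On the two samples above, the uniform text scores exactly 0.0 while the varied one scores well above it, which is the whole (and admittedly crude) idea.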
While these detection tools are not flawless, they can at least help us be more cautious about the information we encounter and develop a healthy skepticism toward what we see and hear on the internet.
The key is to be vigilant and critical in our consumption of information, and to understand that in the age of artificial intelligence, the line between real and fake is becoming increasingly blurred. As we continue to push the boundaries of technology, we must stay aware of our actions’ potential implications and consequences.
We must be deliberate about where we place our trust in what we see and hear, and work towards creating a future that benefits everyone, not just the creators of the technology.
We should work towards creating a world where technology serves humanity rather than the other way around.
Feel free to share your thoughts, questions, recommendations…
I am Behic Guven, and I love sharing stories on programming, technology, and education. Consider becoming a Medium member if you enjoy reading stories like this and want to support my journey. Ty,
If you are wondering what kind of articles I write, here are some: