The first time you used ChatGPT, you probably thought the new robot overlords were about to take over the world. Ever since it was introduced, it has seemed like machines can learn, think, and make their own decisions.
But it’s probably not too bad since the Terminator hasn’t traveled back in time yet. In the movie, Skynet sparks a nuclear catastrophe. In reality, world leaders may be swept up by misinformation and spark a nuclear catastrophe. The potential for damage from manipulated algorithms, manipulated data, exaggerated claims, or just plain lies is endless.
Life will become like a funhouse of mirrors. The truth will have bends and twists, and everyone will be unsure which reality to believe. Many news sites already use AI tools like ChatGPT to churn out content and rank high on Google. The style is indistinguishable from human writing, except when the text itself contains giveaways like: “I cannot complete this prompt,” “as an AI language model,” or “My cutoff date is September 2021.”
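Those giveaway phrases are simple enough to scan for automatically. Here is a minimal sketch of that idea; the phrase list and the `find_ai_telltales` helper are illustrative assumptions, not a real detector, and polished AI text will contain none of these strings:

```python
# Hypothetical helper: flag text containing common AI-chatbot boilerplate.
# The phrase list is illustrative, not exhaustive.
AI_TELLTALES = [
    "i cannot complete this prompt",
    "as an ai language model",
    "my cutoff date is september 2021",
]

def find_ai_telltales(text: str) -> list[str]:
    """Return any telltale phrases found in the text (case-insensitive)."""
    lowered = text.lower()
    return [phrase for phrase in AI_TELLTALES if phrase in lowered]

article = "Breaking news! As an AI language model, I cannot verify this claim."
print(find_ai_telltales(article))  # → ['as an ai language model']
```

A match is strong evidence of sloppy AI generation, but an empty result proves nothing, which is exactly why the rest of this article matters.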
Spotting misinformation won’t be a walk in the park. It’s sly. It’s disguised in flashy headlines. A great example is the story of the Trump arrest with fake photos. It takes skepticism and a keen eye to spot misinformation in the world of AI. Here’s how you can do it.
Be Careful When Using AI
ChatGPT became the fastest-growing product to reach one million users because it genuinely helps productivity. Before it arrived, everyone did research manually: you typed a question into Google, got loads of results, and picked the websites you trusted. Google has been ranking pages for 25 years, using dozens of factors to decide what to show you.
Now, the process has changed. You ask ChatGPT a question, and it answers within seconds. But instead of scouting the web for the best content, it simply predicts likely word combinations, and that is exactly where AI-made misinformation comes from. In The Social Dilemma, computer science engineers showed how much power social media has to steer the world: shifting opinion by even one percent over a long enough time frame can have a massive effect. Eventually, social media will become the battleground for misinformation made by AI. That’s why staying on your toes in this new landscape is so important.
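To see why “predicting likely word combinations” has nothing to do with truth, consider a toy bigram model, a drastically simplified stand-in for what large language models do at scale. It counts which word follows which in its training text, then always emits the most frequent follower:

```python
from collections import Counter, defaultdict

# Toy bigram "language model": count which word follows which in a tiny
# corpus, then always predict the most frequent follower. Fluent-sounding
# output, with no notion of whether it is true.
corpus = "the cat sat on the mat the cat ate the fish".split()

followers = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev][nxt] += 1

def predict(word: str) -> str:
    """Return the most likely next word seen in training."""
    return followers[word].most_common(1)[0][0]

print(predict("the"))  # → 'cat'
```

The model confidently continues any sentence it has statistics for; whether the continuation is accurate never enters the calculation.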
Intentional Disinformation
Hackers and scammers were rubbing their hands when OpenAI released ChatGPT. Finally, they had a tool to fabricate stories, images, and videos. Cybercrime is increasing rapidly, especially since you can ask the algorithm to write about a specific topic in the style of disinformation. It can generate fake data, nonexistent citations, and compelling stories to make you believe anything.
Many prominent experts in the field have quit their jobs to sound the alarm, arguing that humanity needs to pause further development of this technology until we learn how to control it. There’s no way to prevent bad actors from using it for evil purposes. Creating phishing attacks, incorrect information, and intentional disinformation is dangerously easy.
Fabricated Sources
When you read an article, a list of sources makes it feel more trustworthy. Well, ChatGPT is famous for making up sources. Of course, it can generate correct ones, but that doesn’t stop it from sprinkling in a few fake ones. The wild part is that the fabricated citations are plausible: they mimic well-renowned journals and attach co-authors with big reputations. If you don’t take the time to verify the sources, it’s easy to fall into the trap.
Another common scenario is getting conflicting answers to the same question. Asked to explain the discrepancy, ChatGPT replies, “Sometimes I make mistakes.” Because the answers differ wildly, any one of them could be a truth or a lie.
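A quick first check on a suspicious citation is whether its DOI even has the standard registered format. This sketch only validates the shape of the string; a well-formed DOI can still be fabricated, so the real test is resolving it at doi.org or searching the title on Google Scholar:

```python
import re

# First sanity check on an AI-generated citation: does the DOI match the
# standard "10.<registrant>/<suffix>" format? A well-formed DOI can still
# be fake -- resolve it at https://doi.org to be sure -- but a malformed
# one is an immediate red flag.
DOI_PATTERN = re.compile(r"^10\.\d{4,9}/\S+$")

def looks_like_doi(doi: str) -> bool:
    """Return True if the string has the shape of a registered DOI."""
    return bool(DOI_PATTERN.match(doi.strip()))

print(looks_like_doi("10.1038/s41586-021-03819-2"))  # → True
print(looks_like_doi("not-a-real-doi"))              # → False
```

It takes seconds to run a citation through a check like this, which is far cheaper than being taken in by a plausible-sounding fake.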
Improve Your Fact-checking!
Don’t trust anything on the internet without doing your own research. Traditional search engines have been around for a long time, and Google is still the best place to check whether a claim is true or false. But a new problem arises when you spend a lot of time on social media. Most people spend too much time on TikTok, Reels, or YouTube Shorts, where people posing as experts create content around a specific topic or event. It’s in our nature to trust other people.
Sometimes, you see a minute-long video about a sensitive topic, and it’s hard to know whether to trust it. Then you see a few more. Without realizing it, you form an opinion based on what the algorithm decided to show you. Even though you watch those videos for pure entertainment, they can shape what you think.
Stay Vigilant
In the future, there will probably be AI fact-checkers (as if the text generators weren’t enough). But until that time comes, we must do the hard work ourselves. Being vigilant is extremely important: a friend could send you a short video about a problem you’re experiencing, and it’s tempting to take it at face value.
Instead of trusting everything you see online, identify and evaluate your sources of information. This matters most for topics like climate change, vaccines, or serious illnesses. There’s no magic product or combination of herbs that cures cancer. There’s no way to reverse climate change in a day. Vaccines remain our best defense against many diseases. The earth is not flat. These topics require you to be deliberately thoughtful. Take the time to vet sources and do actual research.
See also: How Artificial Intelligence is Changing Software Development?
Use Tools
Back in 2016, the United States elections became infamous for fake Russian content riddled with grammar mistakes. With AI widely available and a war in Ukraine, the potential for another fake-news campaign is massive. Only now there won’t be any grammar errors or stylistic giveaways.
Bad actors can create and repurpose even more content for every social media platform. Fake views, comments, and likes are easy to manufacture. Add a few deepfakes of leaders saying things they never said, and you can sway the public into believing anything.
Digital signatures could help determine whether online content is human-made. Watermarking the output of AI tools like ChatGPT is another option. But developing these solutions takes time. For now, the easiest way to avoid being a target is to use a VPN.
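The idea behind signing content is tamper evidence: a publisher attaches a tag at creation time, and anyone who can verify the tag can detect later edits. The sketch below is a deliberate simplification using an HMAC with a made-up shared key; real provenance schemes such as C2PA use public-key signatures, so treat this as the concept rather than the practice:

```python
import hashlib
import hmac

# Simplified provenance sketch: the publisher tags content at creation
# time; anyone holding the verification key can detect later tampering.
# An HMAC with a shared key stands in for a real public-key signature.
KEY = b"publisher-secret-key"  # hypothetical key, for illustration only

def sign(content: bytes) -> str:
    """Produce a hex tag binding the key to this exact content."""
    return hmac.new(KEY, content, hashlib.sha256).hexdigest()

def verify(content: bytes, tag: str) -> bool:
    """True only if the content is byte-for-byte what was signed."""
    return hmac.compare_digest(sign(content), tag)

original = b"Official statement, 2024-05-01."
tag = sign(original)
print(verify(original, tag))               # → True
print(verify(b"Doctored statement.", tag))  # → False
```

Change a single byte of the statement and verification fails, which is exactly the property that would make manipulated quotes and doctored documents detectable at scale.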
Virtual private networks hide your IP address and location, making it much harder for marketers, governments, or bad actors to profile you. With less data to target you by, you see fewer hyper-targeted ads and less tailored fake news.
Plus, you can change your location to any place in the world. Instead of being in the midst of an information war, you can change your IP to somewhere else. VPNs also protect you from hackers by encrypting your data.
This tool feels like magic because it lets you opt out of price discrimination. Things that cost a lot in the United States cost less in countries with weaker economies, so you can change your location and buy a digital product for half the price with a single click. Using a VPN to get cheaper flights and hotels is great, too, because you can pretend to be a local.
Inform Those Around You
When somebody forms a strong opinion, it’s almost impossible to change their mind; if you’ve ever argued with a conspiracy theorist, you know how hard it is. With an epidemic of fake news brewing under our feet, it’s essential for everyone to improve their digital literacy. Teens, teachers, parents, and professionals everywhere must help. There are online tools to support digital literacy, but they’re not enough.
Raising awareness about the problem and the concrete steps one must take to fact-check information is necessary. Even if you don’t use ChatGPT or tools like it, you’ve probably encountered something generated by it. Researching information takes time and effort, and it’s up to each and every one of us to determine whether it’s worth it.
Related: The Future of VoIP Communication: How Artificial Intelligence is Revolutionizing the Way We Connect.
A Few Final Words
The wild realm of AI is filled with misinformation, and being able to find the truth will feel like a superpower in a digital world full of lies. Misinformation has been around for thousands of years, but it has adapted and evolved for the modern AI age. Staying vigilant has never been more essential.
Whenever you see a headline that sounds too good or too absurd to be accurate, stop for a second. Put on your critical thinking cap, be curious, double-check the sources, and try to use a bit of healthy skepticism.
AI can be a force of good if it’s used correctly, but there’s no way to stop bad actors from using it for their personal gain. Spread awareness to those closest to you, and it can create a ripple effect where everyone becomes better at spotting fake news and misinformation produced by AI.