The promise of artificial intelligence (AI) was that it would improve productivity, letting people focus on high-value work while AI agents handled the boring, everyday tasks. Instead, AI is increasingly being used to deceive people. As AI models for images and videos become more powerful, new trends are emerging, one of which is creating deepfakes of celebrities or of AI-generated people who never existed at all.
What are Deepfakes?
Deepfakes are AI-generated media (images, videos, and audio) that make a person appear to say or do something they never did. The term deepfake combines "deep learning" and "fake," referring to the use of neural networks to create highly realistic copies that can deceive both humans and detection algorithms. Basically, you can't really trust anything you see on social media anymore.
Making deepfakes used to be a highly technical, time-intensive process. Creating a convincing fake once required deep knowledge of machine learning, specialized programming skills, and days or weeks of processing time on high-end computers. Now it can be done in under 30 seconds on a smartphone: AI deepfake tools are readily available, and you no longer need a research lab, specialized hardware, a custom training set, or a large amount of data.
The AI Deepfakes Problem
As we have established, anyone can make deepfakes, and these features now ship in consumer apps. What started as "AI slop" that most people recognised as AI-generated has become realistic enough to fool people into believing it's real. This has happened because AI models and tools have become cheap, easy, and just good enough to flood social media with convincing fakes.
Now, while we stand by the idea that AI can be a valuable asset that improves our productivity, AI deepfakes and AI slop aren't going to help anyone be more productive. Instead, they are used to scam and mislead people.
Deepfake Scams:
Deepfakes are now used to scam people, either by using the credibility of people or through emotional manipulation.
- There are AI deepfakes of famous influencers and celebrities promoting shady products.
- Emotionally manipulative deepfakes of elderly people who don't exist, selling fake handmade items to boost e-commerce sales.
- Scam call centers replacing human callers with AI callers that don't need to eat or sleep, letting them scam people more efficiently.
In the wrong hands, AI makes sophisticated scamming simple for a solo operator who can easily fake an identity or steal someone else's credibility.
Deepfake Propaganda:
People in power, and those with influence, can also use deepfakes to spread propaganda or a false narrative. When you are constantly suspicious about what is real and what is not, you sometimes just cognitively check out and give up trying to tell the difference. Another issue is that most people don't care whether content is deepfake propaganda if it supports their biases; they will resist believing it is fake because they want it to be true.
Deepfakes in adult media:
Deepfakes first gained popularity as people sought explicit media of celebrities, and that growing demand pushed AI-generated deepfakes into the spotlight. Fortunately, the non-consensual publication of intimate deepfakes is now criminalized in many jurisdictions. While that is welcome, and those creating non-consensual AI deepfakes should be held liable, there are also consensual AI deepfakes being used to catfish and scam people.
Yes, some OnlyFans models allow agencies to create consensual AI deepfakes, which are then used to catfish people into believing they are actually talking to the model, when in most cases they are talking to other men working for the agency, or to AI bots.
In Conclusion:
Artificial intelligence (AI) isn't the problem here; it is meant to make our work easier and help us be more productive. Much of the responsibility lies with the companies that develop these AI features and release them to the general public without restrictions, though individuals also bear responsibility to use AI responsibly. Some platforms, like YouTube, are trying to remove AI deepfake videos, but even they struggle to tell what is real and what is not.
Within a year, these AI-generated deepfake videos are expected to be so good that it will be extremely difficult to tell what is real and what is AI-generated. Lawmakers may need to step in and collaborate with platforms to enact laws that encourage positive AI development while restricting AI-generated deepfakes used for scams and misleading propaganda.
A major source for the article was Coffeezilla's YouTube video titled "Investigating AI Deepfakes."