Celeb deepfakes just the tip, revenge porn, fraud & threat to polls form underbelly of AI misuse


Deepfakes are artificial content, fake audio or visuals (images and videos) that mimic a person’s actual expressions and voice, generated using a kind of machine learning called “deep learning”.

The clamour over reels and videos that deepfaked actors Rashmika Mandanna and Katrina Kaif this month is the latest outcry against this manipulative AI technology, with mounting demands for stringent action against those generating such fraudulent content. The issue has also raised privacy concerns.

To make matters worse, there is “no absolute legal framework” in India yet to specifically address such threats, experts claim.

Last week, following Mandanna’s deepfake video, the Information Technology (IT) Ministry issued a warning to social media platforms, while the Delhi police filed an FIR in the case.

But both police sources and legal experts cite the absence of specific laws as a challenge.

“The primary issue emerging right now from deepfake technology is extremely distressing, and it poses reputational risk to women,” said Apar Gupta, lawyer and founding director of the Delhi-based Internet Freedom Foundation (IFF), an NGO involved in advocacy of digital rights.

He added: “This [the deepfake issue] can be dealt with under several sections of the Indian Penal Code (IPC) — regarding outraging the modesty of a woman and obscenity and under sections of the IT Act. But there is still a legal and regulatory vacuum regarding deepfake technology.”




 

A notch higher 

While visual editing tools like Photoshop have been used for decades to circulate morphed images and videos in digital spaces, the first use of deepfakes is reportedly traced to a Reddit user in 2017, who is said to have uploaded fake pornographic videos of celebrities created using deep-learning algorithms.

Images of Hollywood celebrities Gal Gadot, Emma Watson, Katy Perry, Taylor Swift and Scarlett Johansson were reportedly used to produce the content.

Subsequently, a dedicated subreddit called r/deepfakes was created where thousands of users shared these AI-manipulated videos and adult content, before Reddit banned it.

In 2020, TikTok’s parent company ByteDance reportedly built a ‘deepfakes maker’ that allowed users to swap faces in videos: the app took a multi-biometric scan of the user’s face, after which the user selected the videos they wanted their face inserted into. TikTok is now banned in India.

According to media reports, Snapchat had built face-swap technology for its users way back in 2016. Snapchat’s ‘Cameos’ feature allowed users to replace the faces of people in videos, an amateurish version of deepfakes.

Then there are shallowfakes, or cheapfakes, which don’t require high-end AI tools to produce videos. Shallowfakes are less convincing and less threatening, and can be created using basic editing tools.

In March last year, a video of Ukrainian President Volodymyr Zelenskyy asking his countrymen to lay down their weapons went viral after it was uploaded to a hacked Ukrainian news website. A closer look at the video revealed awkward pauses and an accent that was off. Social media platforms such as Facebook, Twitter (now X) and Instagram took down the video.

“I just dropped a 150 milligram edible & I’m feeling f$%&*!g zooted. Honestly, I’m feeling — on the top floor right now. I’m about to design like 30 new f$%&*!g space cars and get us to Mars,” — this is what Elon Musk was seen saying in a video that went viral in December last year. The problem: the video was fake, and it wasn’t the billionaire speaking.

Last month, videos of YouTube sensation MrBeast and two BBC presenters went viral. A sophisticated deepfake video showed MrBeast promoting iPhones for $2, while the BBC presenters were shown endorsing an Elon Musk investment scheme.

Legal vacuum 

Following the row over Mandanna’s deepfake reel, the IT ministry issued notices to social media platforms last Tuesday stating that impersonating someone online is illegal under Section 66D of the Information Technology Act, 2000. Intermediaries, according to the IT Rules, 2021, are required to act against such cases, and any such content must be removed within 36 hours of being reported.

The Delhi Police Special Cell has also registered an FIR in the Mandanna case against unknown persons, under IPC sections 465 (forgery) and 469 (forgery to harm the reputation of a party), and under sections 66C (fraudulently using the unique identification feature of another person) and 66E (capturing or publishing images of the private area of a person) of the IT Act.

Legal experts, however, told ThePrint that there was a legal and regulatory vacuum when it came to dealing with deepfake technology.

“Currently, there is no law explaining the concept of deepfakes or banning their misuse explicitly. Sections 67 and 67A of the IT Act criminalise publishing or transmitting obscene material in electronic form and material containing sexually explicit acts in electronic form, respectively. Further, Section 500 of the IPC provides punishment for defamation,” said Kazim Rizvi, a digital policy expert and founding director of Delhi-based policy think tank The Dialogue.

He added: “However, these laws are limited only to misuse of deepfakes in the domain of sexually explicit content and, in a sense, present only a myopic view of the otherwise various domains that deepfakes can percolate into.”

According to him, “Similarly, under the IT Rules of 2021, platforms are obligated to respond promptly to user complaints related to misinformation or privacy breaches. While these provisions can be used in cases of deepfakes given the associated misinformation spread and privacy breach concerns, they too fail to comprehensively address this deep-rooted menace.”




Elections — disinformation and polarisation

The term “fake news” reportedly became mainstream during the 2016 US elections, credited in large part to its repeated use by Donald Trump, then a candidate and later US President.

Now, experts fear that deepfake technology could be used to influence elections by spreading disinformation and polarising voters.

“Deepfakes will have tremendous effects on employment, discrimination, disinformation, and electoral integrity. They can be used for spreading targeted misinformation and fake news to influence the voters. The government needs to evaluate and articulate the changes required not just in the legal system but also in public policy,” said Gupta.

In 2019, a video of then US House Speaker Nancy Pelosi, in which she appears to stumble over her words, went viral on social media. A year later, ahead of the 2020 US elections, another video of her, in which she appeared drunk and her speech slurred, was shared on social media. Both videos were reportedly “fake” and digitally manipulated.

Back then, Facebook (now Meta) had assured users it would remove all deepfake and “manipulated” videos. This month, Meta said it will start labelling political ads on the platform that have been created using AI.

Under the new policy, which covers both Facebook and Instagram, labels acknowledging AI usage will appear on screen when users view these ads.

In 2020, the Bharatiya Janata Party (BJP)’s Delhi unit, in collaboration with a communications agency, reportedly produced two AI-altered videos of MP Manoj Tiwari.

The alteration was allegedly done to a video originally in Hindi: audio was overlaid to make it appear as if the BJP leader had spoken in English and Haryanvi, in a purported attempt to appeal to a larger voter audience.

“Deepfake technology has helped us scale campaign efforts like never before,” a news report quoted Neelkant Bakshi, co-in-charge of the BJP’s social media and IT for the Delhi unit, as saying. “The Haryanvi videos let us convincingly approach the target audience even if the candidate didn’t speak the language of the voter.”

Mishi Choudhary, founder and former legal director of the Software Freedom Law Center, which provides “pro-bono legal services to developers of Free, Libre, and Open Source Software”, noted that the rise of deepfake technology has been most prominent in pornographic content and in elections.

“Two major democracies are going for elections next year (India and the US). We have all seen how fake news impacted the 2016 US elections. We have seen how deepfake images of Donald Trump kissing and hugging Anthony Fauci (then chief medical advisor to the US president) were shared this year,” said Choudhary.

She added: “One has to understand that this issue requires far more seriousness than it’s being given at the moment. Meta has announced its policy change for political ads, but there is a significant gap between policy framing and its enforcement. Deepfakes have huge potential for creating mischief during the run-up to the elections, and our police and legal systems aren’t equipped to tackle the scale and outreach of it.”

Websites, app tutorials & porn

The technology for creating deepfakes is available online and easily accessible. While sophisticated content may take weeks to produce, cruder versions take just a couple of minutes, according to experts, who said the ease with which deepfake content can be produced is alarming.

“Deepfake content is created using sophisticated AI software designed to swap faces in videos, synthesise human voices, or alter visual and auditory content to convincingly mimic reality. The process begins with the collection of a substantial amount of visual or audio material of the individual who is to be mimicked, often sourced from publicly available media such as online videos and interviews,” Rizvi explained, adding that there is an urgent need for better regulation of deepfake technology.
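The pipeline Rizvi describes, a shared encoder paired with one decoder per identity, is the classic face-swap architecture behind many deepfake tools. Below is a minimal sketch of the idea in Python, using only numpy, random untrained weights and toy dimensions; it is purely illustrative, not a working generator.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: a 64x64 grayscale face flattened to 4096 pixels,
# compressed to a 128-dimensional latent code.
PIXELS, LATENT = 64 * 64, 128

# One encoder shared by both identities: in a real system it learns
# a generic representation of pose, expression and lighting.
W_enc = rng.standard_normal((LATENT, PIXELS)) * 0.01

# One decoder per identity: each learns to reconstruct only its own face.
W_dec_a = rng.standard_normal((PIXELS, LATENT)) * 0.01
W_dec_b = rng.standard_normal((PIXELS, LATENT)) * 0.01

def encode(face):
    """Map a flattened face image to the shared latent code."""
    return np.tanh(W_enc @ face)

def decode(code, W_dec):
    """Reconstruct a face from the latent code with one identity's decoder."""
    return W_dec @ code

# The "swap": encode a frame of person A, but decode it with B's decoder.
# After training, this yields B's face wearing A's pose and expression.
frame_of_a = rng.standard_normal(PIXELS)
swapped = decode(encode(frame_of_a), W_dec_b)
print(swapped.shape)
```

In a real system the weight matrices are deep neural networks trained on thousands of frames of each person; the swap works because the shared encoder captures pose and expression while each decoder reproduces only its own identity.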

A simple Google search throws up tutorials priced at Rs 500 to master the technology, and there are YouTube tutorials and reports with links explaining the process. On Instagram, too, several users — deepfake makers — offer these tutorials. Some Instagram accounts even claim in their bios that they promote only the “best deepfake makers”. They also share fun content, such as replacing a Hollywood action hero like Marvel’s Iron Man with an Indian actor.

This, however, is only one side of what may at first seem like just a fun tool.

Computer and digital security software company McAfee’s May 2023 report, titled ‘Beware the Artificial Impostor’, extensively covers voice scams and voice-cloning scams.

“Targeted imposter scams are not new, but the availability and access to advanced artificial intelligence tools is, and that’s changing the game for cybercriminals. Instead of just making phone calls or sending emails or text messages, with very little effort a cybercriminal can now impersonate someone using AI voice-cloning technology, which plays on your emotional connection and a sense of urgency to increase the likelihood of you falling for the scam,” Steve Grobman, McAfee’s chief technical officer, says in the report.

The report added: “A quarter of adults surveyed globally have experience of an AI voice scam, with one in 10 targeted personally, and 15 percent saying somebody they know has been targeted. When you break it down by country, it’s most common in India, with 47 percent of the respondents saying they had either been a victim themselves (20 percent) or knew somebody else who was one (27 percent).”

According to another 2023 report, ‘State of Deepfakes’ by US-based Home Security Heroes, about 98 percent of deepfake content online is pornographic and 99 percent of the individuals targeted are women.

Home Security Heroes is run by a team of nine online security experts. The report notes a 550 percent increase in deepfake videos between 2019 and 2023. India ranks sixth on the list of countries targeted by deepfake pornography, it showed.

Deepfake pornography is synthetic pornography produced by manipulating existing pornographic content.

An online search with the keyword “deepfake porn” throws up multiple websites with adult content. In most cases, celebrities are a target.

Sensity, a visual threat intelligence company, said in a 2019 report that 96 percent of deepfake videos online were non-consensual pornography, and that 99 percent of those featured women.




Fraud, manipulation & fake news

Deepfake technology has the potential to violate privacy — such as in the cases of Mandanna and Kaif.

Then there are the financial frauds that have allegedly been committed using deepfakes.

There have been reports of people receiving audio or video calls, seemingly from a family member or close friend asking for immediate monetary assistance, that were created using deepfakes to defraud the receiver.

A Kerala man was reportedly duped in July this year after he received a call from an unknown number. The image of the person on the video resembled that of the man’s former colleague. The caller also threw in names of other colleagues to make it all appear genuine. The caller allegedly asked the victim, a retired officer of a central government firm, to send in Rs 40,000 for a medical emergency. Once the amount was transferred, the caller demanded another Rs 35,000 and that’s when the man grew suspicious.

According to media articles, this was India’s first reported case of cheating and fraud using deepfake technology.

“Deepfake technology is alarming to everyone across the spectrum — legal system, banking, investigators and the common man. Using this technique one can readily create an image, copy the facial features and expressions of someone in your family and even your child to demand money. The situations are created such that the victim panics and ends up losing money,” a senior police officer in Delhi told ThePrint on condition of anonymity.

However, the question here is if we are equipped to handle this crisis, he added.

“The investigation ideally starts with tracing the original video or image that was manipulated to produce this synthetic image or video. While some videos are amateurish, some others are done so professionally that it is a real task to trace the original content. We don’t have the technology to tackle these high-end deepfake videos with the tools and tech available with investigating agencies. So then we have to do it manually, which takes a lot of time — backtracking the first upload and then finding the person who actually created the video,” the officer said.

The officer added: “The more a manipulated video is shared, the wider its reach. The backtracking in these situations is also a tedious and exhausting process.”
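The first step the officer describes, tracing a manipulated clip back to its source material, is often approached with perceptual hashing, which lets investigators match near-identical frames even after re-encoding or small edits. Here is a toy sketch in Python: an 8x8 average hash over numpy arrays, illustrating the general technique rather than any agency’s actual tool.

```python
import numpy as np

def average_hash(image):
    """64-bit perceptual hash: downscale to 8x8 blocks, threshold at the mean.

    Small edits (re-encoding, overlays) flip only a few bits, so near-identical
    frames produce hashes at a small Hamming distance from each other.
    """
    h, w = image.shape
    # Crude 8x8 downscale by block-averaging.
    cropped = image[: h - h % 8, : w - w % 8]
    small = cropped.reshape(8, h // 8, 8, w // 8).mean(axis=(1, 3))
    return (small > small.mean()).flatten()

def hamming(h1, h2):
    """Number of differing bits between two hashes."""
    return int(np.count_nonzero(h1 != h2))

rng = np.random.default_rng(1)
original = rng.random((64, 64))
tampered = original.copy()
tampered[:8, :8] = 0.0            # a small, localised manipulation
unrelated = rng.random((64, 64))

# The tampered frame stays close to the original; an unrelated one does not.
print(hamming(average_hash(original), average_hash(tampered)))
print(hamming(average_hash(original), average_hash(unrelated)))
```

Because the hash summarises coarse brightness structure, a lightly edited frame lands within a few bits of its source, while unrelated images differ in roughly half their bits.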

A case in point? Mandanna’s reel had over eight million views.

(Edited by Smriti Sinha)




 
