No recent spike in tech-facilitated sexual harm, but AI poses concern for future: Women’s groups


SINGAPORE – “Was that really you? It looks real.”

Lydia (not her real name) received a few messages like this in March 2023, when someone posted a sexually explicit deepfake video of her on social media platform Reddit.

The micro-influencer shut down her Instagram account for a few months as she was afraid someone would publicly comment about the video on her posts.

Her boyfriend at the time messaged the Reddit user who posted the video, and threatened to file a police report if it was not removed.

The video did not have many comments or shares before it was taken down, but Lydia had to battle paranoia and anxiety for close to a year at the thought of it still being circulated.

Lydia is one of many worldwide who have fallen victim to graphic deepfakes, which involve artificial intelligence (AI) tools being used to fraudulently create images or videos.

Over the past week on social media platform X, sexually explicit AI-generated images of American pop star Taylor Swift were shared.

One of the posts featuring the images garnered more than 45 million views, and appeared to spark the sharing of further graphic fake images of the singer-songwriter.

Edited graphic images are not new, but the ability to generate them through AI has made the process easier.

The director of advocacy, research and communications at women’s advocacy group Aware, Ms Sugidha Nithiananthan, said that technology-facilitated sexual violence is growing more complex.

She said: “With increasingly sophisticated encrypted platforms and generative artificial intelligence, victims may not only find themselves unknowingly featured in explicit content, but also face difficulty in securing evidence and reporting such advanced forms of online harms.”

SG Her Empowerment (SHE) chief executive Simran Toor said that the organisation aims to draw attention to issues like the rise in generative AI facilitating online harms, and will work to find solutions and provide support and assistance to those who have been targeted.

“Everyone has the right to be safe, both online and offline,” she said.

SHE is a non-profit that champions gender equity and works with community partners to empower women and girls.

In the year since online harms help centre SheCares@SCWO opened, it has helped more than 100 clients and provided over 220 counselling sessions.

It also helped clients file 13 police reports and more than 80 reports with Internet platforms.

The centre was launched in January 2023 by SHE and Singapore Council of Women’s Organisations (SCWO) to help victims report online harms.

It also offers free counselling support and legal clinics.

The most common types of online harms encountered by clients of the centre are cyber bullying, image-based sexual abuse, cyber stalking, and sexual harassment.

In one case, a woman faced harassment from her husband’s former mistress for two years.

The mistress stalked the victim on social media platforms and subjected her to impersonation and doxxing.

Photos of the victim’s family were taken from her social media profile, edited crudely, and then uploaded online. The perpetrator also left unpleasant remarks on the social media profile of the school that the victim’s child was attending.

Victims often face more than one type of online harm, said SHE.

Based on SHE’s own research, the top harms encountered by Singaporeans are impersonation, cyber bullying, defamation, and image-based sexual abuse.

At Aware’s Sexual Assault Care Centre, half of the cases of unwanted sexual behaviour perpetrated through digital technology in 2022 were contact-based sexual violence, which is defined as explicit, coercive and sexually harassing messages or comments on social media.

The centre received 179 technology-facilitated cases of sexual violence in 2022. Twenty-eight per cent occurred on messaging platforms such as Telegram and WhatsApp.

Social media platforms like Facebook and TikTok accounted for 19 per cent of all cases.

Those who sought support from the centre spanned a wide age range.

In cases where the ages of victims were known, the bulk of them involved those between 18 and 34.

Those between 35 and 44 accounted for 13.4 per cent of all cases in 2022, while those under 18 accounted for 8 per cent. Four per cent of victims were over 45.

Almost eight in 10 perpetrators in the 179 cases were known to victims, with 29 per cent reporting that the perpetrator was an acquaintance, like a classmate or online friend. Another 27 per cent said it was an intimate partner or former partner.

Dating app contacts made up 17 per cent of perpetrators.

In one case at Aware’s centre, a man realised he had been “catfished”, or tricked, when he met someone from a dating app in person. The perpetrator appeared much older and different from the profile.

During the date, the perpetrator sexually harassed the victim and forced him to share graphic images of himself.

When the victim declined future meet-ups, the perpetrator threatened to post the images online.

Steps have been taken to better regulate online spaces, with two Bills passed in the past two years.

Under the Online Safety (Miscellaneous Amendments) Act, social media platforms can be ordered to take down some types of egregious content, which includes sexual violence or coercion in association with sexual conduct.

The Online Criminal Harms Act allows the Government to act more effectively against criminal online activities such as scams and malicious cyber activities.

To seek help at SheCares@SCWO, call 8001-01-4616 or WhatsApp 6571-4400 on weekdays between 9am and 9pm. Those who want to contact Aware’s Sexual Assault Care Centre can call 6779-0282 on weekdays between 10am and 6pm.
