Content Warning: Discussion of gender-based violence

Artificial Intelligence (AI) has greatly shaped the world we live in today. The integration of AI, with computer-operated systems controlling and performing all sorts of tasks once tied to human beings, has changed the very outlook of technology. AI and new forms of technology are making tasks quicker and more efficient for all of us, and their scope is not limited to one particular field: AI has expanded across all sectors and markets, making its integration into our lives ever more robust. However, while AI has made our lives easier and more efficient, it has also created new problems that we perhaps had not anticipated while developing or using this technology. Just as patriarchal structures in the physical world suppress certain marginalized groups, particularly gender minorities, AI has not only reinforced these structures but has also made them more complex and nuanced.

Across the globe, the evolution of the internet brought a sharp rise in a new form of harassment. Online gender-based violence (OGBV) is the harassment and discrimination individuals face when they are targeted online because of their gender. It can take the form of stalking, non-consensual use of intimate images (NCII), bullying, unsolicited messages, and online threats. OGBV can be especially serious and dangerous because perpetrators use anonymity to hide their identity and location while harassing someone online. The speed with which content goes viral and its permanence online add a further layer of caution for women using the internet. To counter OGBV, legislation is being adopted across the globe, even as we are still trying to fully understand the violence’s impact and scope; and now we are seeing an exacerbated, more intense version of the same violence rising with AI.

Rohini Lakshané, technologist and fellow at Factor Daily, agrees that this new form of OGBV through AI can be catastrophic for victims: ‘Yes, I would call them new manifestations of existing forms of violence against women and gender-diverse persons. While the medium or means to perpetrate this violence may change with the rise of different technologies, two things do not: the intent of the perpetrator and the impact on the victim. As long as there is an intent to hurt, silence, harass, defame, or shame and there is a corresponding impact on the victim, we will continue to see a replication of existing forms of violence.’ She adds that ‘the full impact of newer technologies, especially AI systems, on the individual and society is not known or knowable, which creates many legal, social, regulatory, technical and ethical conundrums and, among other things, complicates the problems of tackling gender-based abuse.’

Hyra Basit, Program Manager at the Digital Rights Foundation’s Cyber Harassment Helpline, also explains why the rise of OGBV, particularly through AI, is so dangerous for women and gender minorities: ‘Advancements in digital technologies have made life easier for many, but they’ve also made the misuse of technology to target women and gender minorities easier. The current ways in which OGBV manifests itself are already a testament to the ways in which technology has contributed to new forms of OGBV. Compared to a few years ago, we have seen an increase in complaints of image-based abuse (including edited images), which is significant because of the pronounced effect that visual content has on the people who view the images, as well as on the victims.’ According to the Cyber Harassment Helpline’s annual report, over the past six years the helpline has received 14,376 cyber harassment complaints from across Pakistan; 59% of these came from women, and most ranged from blackmail and non-consensual use of information (NCUI) to threatening messages and unsolicited contact online.

AI-generated online violence takes different forms and shapes, among them deepfakes, manipulated images and audio, and abuse within virtual reality and gaming universes. These forms of AI-assisted and AI-enabled gender-based violence are alarming because detecting what is real and what is fake is becoming increasingly difficult. Deepfakes are particularly prevalent these days: they are produced by using generative AI to superimpose a person’s likeness onto existing images and videos, often intimate or pornographic ones. The term ‘deepfake’ was coined back in 2017, when a Reddit user swapped the faces of famous celebrities like Taylor Swift, Scarlett Johansson and Gal Gadot onto pornographic content. Since that incident, with the rise of AI apps across all platforms online, we have seen how easy it has become to swap images of individuals, particularly women, online. According to Sensity AI, 90 to 95% of all deepfake videos online are nonconsensual pornography, and around 90% of those target women.

In South Asia, while the use of deepfakes has not been as widespread as in other parts of the world, deepfakes are still produced, particularly of women in the public sphere, to silence and shame them. Rana Ayyub, an award-winning journalist and writer based in India, was targeted with deepfakes intended to silence her. A Muslim woman reporter, she was targeted because of a story she covered: a pornographic video made the rounds on social media, falsely claiming to show her. Rana describes her feeling of helplessness, and how even complaints to law enforcement agencies achieved nothing, because by then the video was already on people’s devices and kept being uploaded online.

In Pakistan, we also see deepfakes being used for propaganda by political machinery. Imran Khan, the ousted ex-Prime Minister, has publicly claimed that his political foes used deepfake videos to malign him. Members of Imran Khan’s own party have also allegedly used deepfakes against prominent women politicians of the country, and these videos have made the rounds all across social media. This may be just the tip of the iceberg, and Hyra Basit anticipates a rise in deepfakes in Pakistan. She adds, ‘While we haven’t seen many cases of OGBV facilitated by AI-generated images at the Cyber Harassment Helpline yet, the intrigue of AI will catch up soon enough and we expect such cases will start rising. For now, however, we do see a noteworthy number of cases of bots and trolls that harass women, especially journalists, politicians and human rights defenders (HRDs) on social media that indicate the use of AI to some degree.’

AI-generated online abuse is not limited to deepfakes; AI is being used in many forms and variations of online abuse. Rohini sheds light on how the rise of virtual reality universes is also manifesting gender-based violence in virtual spaces. She says, ‘Apart from deep fake photos and videos and AI-generated non-consensual intimate images, an example of AI tools: Virtual reality programs in which a real person is shown performing sexual acts, which is recorded and later distributed as pornography. No consent is sought from the person who is in the VR video. The intent is often to defame, shame, humiliate or discredit the victim or to gain validation among certain groups.’ The Metaverse, Meta’s flagship virtual reality universe, faced similar issues at the beginning of its launch. A researcher studying user behavior on the platform reported that her avatar was raped in the virtual space multiple times. Many argued that because the incident happened in a virtual reality universe, it was not real and its effects and consequences on the user did not count. Users, however, reported that the incident felt very real and was very much a manifestation of the real world.

Social media companies and the companies building generative AI have yet to fully grasp the seriousness of the issues these technologies are giving rise to. In the Metaverse’s case, two investors expressed concerns over harassment and abuse on the platform, yet the company has kept pushing the virtual reality universe to the public. Likewise, apps like Deepnude and Y, which swap women’s faces into pornographic images, were only removed from the internet after women spoke up about the harassment and abuse they faced through them. These are just two of many apps out there on the internet perpetuating violence against women online. Hyra Basit points out: ‘Social media companies have started to realize the importance of addressing the rise in OGBV based off new technologies on their platforms and how people have identified new ways to circumvent restrictions. While their efforts are welcomed, there is most certainly the need to put in more effort, especially in understanding the ways in which cultural, religious, and social contexts play into the narrative and consequences in different regions. Significantly more reviewers need to be dedicated to the task so that genuine reports are not pushed aside and to counter delays in response time. The reviewers also need to be cognizant of the ground realities and context of the region where complaints are coming from to be able to interpret community guidelines better.’

Rohini points out: ‘Major social media companies have policies that address these forms of violence, such as non-consensual intimate imagery. In my observation, one area where they consistently lag has been addressing reports of abuse adequately and on time. Another problem is that of nuance: sometimes sexual expression or sexuality education content gets taken down.’ In South Asia, many users already take on the added labor of educating social media companies and explaining why a particular image or video might be harmful to women or the LGBTI+ community living in the region. With generative AI, it will likely become even more difficult to give context and explain why and how these videos perpetuate violence against these groups.

Multiple news sources have already reported how AI is contributing to sextortion, online harassment, and the rise of illegal child sexual abuse imagery. The circulation of these images and videos online can be catastrophic for society at large and will make the internet a darker place than it already is. We have already seen how technology perpetuates societal norms by absorbing human behavior, as in the case of Microsoft’s Tay. Tay was a chatbot Microsoft built to understand human interaction, learning from the conversations users had with it; within a couple of days, the bot had adopted misogynistic, racist, and homophobic language. Microsoft eventually had to take Tay offline because of its out-of-control speech, and has since come out with another AI chatbot in Bing.

Associating AI with feminine characteristics like voices and avatars has also led to the sexualization of women through this new technology. A petition has been circulating about how Alexa’s and Siri’s feminine voices lead men to oversexualize them and reinforce patriarchal structures within the technology itself. AI girlfriend apps have likewise been linked to unhealthy expectations and to men becoming more violent. AI anchors are also being used in newsrooms across the globe, and anchors modeled as women journalists have now been introduced in Pakistan, India, and Bangladesh. These AI anchors and talk shows report on issues in ways that promote xenophobia, racism, and sexism, while perpetuating ideals of the perfect body and skin colour for women in South Asia, countries where women journalists are already unsafe and targeted because of their work.

Maas Misha’ari Weerabangsa, Co-lead of Programmes at Delete Nothing (Sri Lanka), adds: ‘Online violence has always been there against women and queer communities, but AI has made it easier for the violence to perpetuate. In order to address this violence it is important for AI to adopt human mechanics and elements. Social media companies and AI generative technologies need to help in countering the problem, and already Meta is using prevention mechanisms to address the problem through AI.’

AI ‘bodyguards’ are already being used in France to filter out some of the hate and abuse that tennis stars face online. Researchers are also trying to develop machine learning algorithms to detect, intervene in, and prevent online gender-based violence, which is a step in the right direction and a better use of AI. However, to fully understand both the problems AI creates for marginalized groups and the solutions it can offer, a region-specific, human rights-centric approach to AI must be adopted. The AI Act adopted by the European Union tries to provide users some protection in the form of legislation; it is far from perfect, but conversations around AI in South Asia are even newer. South Asia is a complex region where women and queer groups are targeted in both offline and online spaces, and AI will only make this worse. That is why it is important for lawmakers, law enforcement agencies, and civil society to work together to adopt a human-centric, intersectional approach when addressing issues pertaining to AI, keeping in mind that these machines are man-made: their biases and their portrayals of certain groups can have real-life consequences for people across the board.
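To make the detection idea above concrete, here is a minimal, hypothetical sketch of how such a system might begin: a simple text classifier that flags potentially abusive messages for human review. The tiny dataset, the labels, and the TF-IDF-plus-logistic-regression baseline are illustrative assumptions, not the method of any specific research group mentioned here; real systems require large, carefully curated, language- and region-specific training data.

```python
# A minimal sketch of ML-based abuse detection, for illustration only.
# The inline dataset and labels are invented; real deployments need
# large, context-aware, regionally curated training corpora.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training examples: 0 = benign, 1 = abusive.
messages = [
    "Great reporting, thank you for covering this story",
    "Congratulations on the award",
    "Go back to the kitchen, nobody wants your opinion",
    "I know where you live, you will regret writing this",
]
labels = [0, 0, 1, 1]

# A common, simple baseline: TF-IDF text features + logistic regression.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(messages, labels)

# Flag new messages as candidates for human review, not auto-removal.
print(model.predict(["Delete your account or else"]))
```

Crucially, a classifier like this only surfaces candidates: the cultural and regional context that Hyra Basit and Rohini describe still requires human reviewers to interpret.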

About Seerat Khan

Seerat Khan is the Programs Lead at the Digital Rights Foundation in Pakistan, and has done extensive work on gender and technology over the past 7 years. She mostly works with women human rights defenders and women journalists on key themes like data protection, online safety, gender, privacy and misinformation.

About the Organization for Ethical Source

The Organization for Ethical Source (OES) is a diverse, multidisciplinary, and global community that is revolutionizing how tech culture works. We are investing in tools like Contributor Covenant as part of our commitment to creating a better future for open source communities around the world. If you’d like to help us shape that future, consider becoming an OES member.
