ONLINE MISOGYNY AND THE LAW: ARE HUMAN RIGHTS PROTECTED ON THE NET?

OSCAR PÉREZ DE LA FUENTE1

Abstract: This paper opens by analysing the complexity of misogyny, sexism, and toxic masculinity. It then examines online misogyny, dissecting the many acts and behaviours that comprise this kind of digital discrimination. It considers the Gamergate scandal and demonstrates how the video game industry reinforces gender stereotypes. It closes with an analysis of the efficiency and limits of legislative systems for combatting online sexism.

Keywords: Misogyny, sexism, toxic masculinity, IT-facilitated gender-based violence.

1. SEXISM, MISOGYNY, AND TOXIC MASCULINITY

The Internet has become one of life’s essential components by providing access to information and communication. Unfortunately, it also facilitates the dissemination of hate speech and sexism. From trolling to toxic masculinity, online settings are often riddled with misogyny and abuse capable of harming both men and women. Online misogyny can take many forms, including slut-shaming, victim shaming, objectification of women's bodies, sexualisation of female characters in video games and films, etc. This conduct not only promotes gender stereotypes, but also creates a society where violence against women is accepted.

Misogyny, sexism, and toxic masculinity are interrelated concepts that describe different aspects of gender-based prejudice and discrimination, particularly against women and those who don't conform to traditional gender roles.

Misogyny refers to hatred of, contempt towards, or prejudice against women. It can manifest in a variety of ways, such as discrimination, objectification, devaluation, or violence towards women and it stems from cultural, social, or individual beliefs and attitudes and often perpetuates gender inequality. Sexism is a belief or behaviour based on the idea that one sex (usually men) is superior to another (usually women) and that this superiority justifies unequal treatment or discrimination. Sexism can also take a variety of forms, such as stereotypes, prejudice, discrimination, or exclusion based on sex or gender. It is a broader term than misogyny and encompasses biases and prejudices against men and women alike. Toxic masculinity refers to a set of cultural norms and expectations around traditional male behaviour that can be harmful to both men and women. These norms often emphasise dominance, aggression, emotional suppression, and the devaluation of women or feminine traits. Toxic masculinity can perpetuate negative stereotypes about men and lead to harmful behaviours, such as violence, abuse, or harassment, and negatively impact men's mental health and well-being.

It is important to note that these concepts in no way imply that all men are misogynistic, sexist, or exhibit toxic masculinity; rather, they refer to broader societal patterns that can be perpetuated by individuals of any gender. To address these issues, traditional gender roles and expectations must be challenged and re-evaluated in order to create a more equitable and inclusive society.

This paper first examines the concepts of misogyny, sexism, toxic masculinity and manosphere. It then analyses the concept of online misogyny, before going on to describe the many acts and behaviours that comprise this concept. Gamergate is analysed to demonstrate how the video game industry reinforces sexist stereotypes. Finally, this paper looks at how legislation, education, and politics may counteract this sort of cyber sexism.

Sexism often naturalises sex differences to legitimate patriarchal social institutions, either by making them seem inevitable or by portraying those who oppose them as engaged in a hopeless fight (Manne, 2018, 79). Following Clark and Lange’s approach to sexism, views about the inferiority of women promoted a sexually unequal society in which women were handicapped, not by innate characteristics that made them inferior to males, but by being compelled to do reproductive labour (Clark, Lange, 1979, 12).

Okin argues that women are realising that the modest formal political achievements of the early feminist movement have in no way ensured their attainment of genuine equality in the economic and social spheres of life. Despite now being citizens, women continue to be considered second class. In terms of characteristics often seen as desirable in citizens, such as education, economic independence, and conditional status, women continue to appear significantly behind men. In terms of political involvement -especially at higher levels- and political influence, women lag far behind men (Okin, 1992, 3-4).

In the history of ideas on women’s roles, some authors -Aristotle, Aquinas, Rousseau and Hegel- contend that there is a natural and unassailable differentiation between the sexes that must be reflected in social position and function, all of which are acknowledged in an organic society with a natural hierarchy (Coole, 1988, 2). It would be churlish to deny that the opposing viewpoint promotes sexual equality, which is why it seems more radical; it has permitted tremendous progress in this area (Coole, 1988, 3). This view has been defended by Socrates, Plato, Augustine, Wollstonecraft, the utilitarians, Marx and Engels, Beauvoir, and Firestone.

Although feminists often use terms like “sexism” and “misogyny” to characterise patriarchal oppressive forces, there has been little examination of these notions (Richardson-Self, 2017, 260). Sexism and misogyny are prevalent in our society in many forms, causing immense suffering to women and girls. While the two terms are related, Parikh states that sexism and misogyny are not identical in meaning. Sexism refers to discrimination based on sex that predominantly afflicts women, whereas misogyny implies hate or entrenched prejudices against women. Online forums enable victims of sexism and misogyny to share their experiences freely and widely by granting anonymity and connecting far-away people. However, content that perpetrates sexism or misogyny also exists on the web, in particular on social media (Parikh et al, 2021, 2).

In addition to animosity, misogyny is distinguished by its coercive nature, which may be made express or kept implied. Misogyny is the functional essence of coercive enforcement mechanisms, whereas sexism is conceptually separate; it does not compel (Richardson-Self, 2017, 261). Because women do not conform with the principles and standards of patriarchy -a man's world-, misogyny is associated with different types of antagonism against them (Manne, 2018, 78; Richardson-Self, 2017, 260). This link to hostile language and conduct led Zeinert to suggest that misogyny is a subcategory of hate speech and is defined as material that incites hatred towards women (Zeinert, 2021, 3182). According to Frenda, misogyny is a form of aggressive and abusive language that seeks to eliminate women and incorporates all preconceived notions about women (Frenda, 2018, 261). Misogyny connotes innate hostility or prejudice against women (Parikh, 2021, 2).

Misogyny, whilst sexist, is not a male-only trait or phenomenon -much disdain for women and girls has been expressed by both men and women under patriarchal structures and ideas (Simões, Amaral & Santos, 2021, 172). According to a common, dictionary-definition-style understanding of the notion, which Manne calls the naïve conception, misogyny is chiefly the work of individual agents (usually, but not always, males) who are predisposed to feel anger, animosity, or other comparable feelings towards each and every woman, or at least women in general, merely because they are women (Manne, 2018, 32).

Misogyny is often trivialised as simply disliking women (Hunter, 2021, 59). Manne says that misogyny is the mechanism that regulates and enforces women's subordination and maintains male dominance within a patriarchal social order (Manne, 2018, 33). Manne (2018, 33-34) defines misogyny as:

“Misogyny is primarily a property of social systems or environments as a whole, in which women will tend to face hostility of various kinds because they are women in a man’s world (i.e., a patriarchy), who are held to be failing to live up to patriarchal standards (i.e., tenets of patriarchal ideology that have some purchase in this environment).”

Consequently, sexist ideology frequently consists of assumptions, beliefs, theories, stereotypes, and broader cultural narratives that portray men and women as significantly different in ways that, if true and known to be true, or at least likely, would make rational people more likely to support and participate in patriarchal social arrangements. In addition, sexist ideology includes depictions of patriarchal social structures as more desirable and less difficult, disappointing, or unpleasant than they really are. Manne characterises misogyny functionally: it serves to police and maintain a patriarchal social order without necessarily operating through the assumptions, beliefs, theories, values, etc. of individuals. Misogyny helps to actualise or bring about patriarchal social connections in direct and often forceful ways (Manne, 2018, 79).

The Council of Europe’s 2019 Recommendation on preventing and combatting sexism defines sexism as:

“Any act, gesture, visual representation, spoken or written words, practice or behaviour based upon the idea that a person or a group of persons is inferior because of their sex, which occurs in the public or private sphere, whether online or offline, with the purpose or effect of:

i. violating the inherent dignity or rights of a person or a group of persons; or

ii. resulting in physical, sexual, psychological or socio-economic harm or suffering to a person or a group of persons; or

iii. creating an intimidating, hostile, degrading, humiliating or offensive environment; or

iv. constituting a barrier to the autonomy and full realisation of human rights by a person or a group of persons; or

v. maintaining and reinforcing gender stereotypes” (Council of Europe, 2019, 10).

Feminist theory investigates misogyny in both historical and present situations to explain how sexism grows with culture. Misogyny is described not just as behaviour that objectifies, diminishes, or degrades women, but also as the exclusion of women, shown in discrimination, physical and sexual abuse, and hostile attitudes toward women (Farrell et al., 2019, 88).

In general, sexism and misogyny seek to maintain or restore a patriarchal social order. However, sexism feigns reasonableness, whereas misogyny becomes aggressive and attempts to force the issue (Manne, 2018, 80). Misogyny therefore aims to stifle the public and political voices of women, to push women away from public and political places, and to undermine any efforts to establish gender equality in the public domain (Barker, Jurasz, 2019b, 100). As a result, misogyny is a self-masking phenomenon: attempting to call attention to it is likely to increase its prevalence (Manne, 2018, xix).

Misogyny is similar to hostile sexism (Glick and Fiske, 1997; García Díaz, 2021, 84) and is based on three axes: (i) the idea of domineering paternalism, i.e. understanding that women are inferior to men; (ii) the concept of competitive gender differentiation, which considers that women are different from men in the qualities that qualify them for public life; and (iii) heterosexual hostility, the consideration that women are objects of sexual desire (García Díaz, 2021, 84).

The relative absence of sex from the majority of institutional definitions of hate speech is indicative of a blind spot and shows the institutional failure to perceive gender as a social component sufficient to incite hatred. Several factors are at play here. The widespread notion that fundamental gender equality has been achieved, leaving merely vestiges of sexism in popular culture, may explain the reluctance to refer to sexist speech as hate speech. The lack of gender as a component of hatred is further explained by the fact that, although hate speech targets a “vulnerable minority” with racist and xenophobic aggressions aimed at their annihilation, the same cannot be said for women (KhosraviNik, 2018, 52).

The broader concept of misogyny may or may not involve violence, yet almost always involves some form of harm. This harm may be direct, manifesting as psychological, professional, reputational or, in some cases, physical harm. Alternatively, it may be indirect, in the sense that it makes the internet a less equal, less safe, or less inclusive space for women and girls.

This cultural, rather than legal, approach captures manifestations and effects of online abuse that extend beyond violence, such as the chilling, silencing, or self-censorship effects this phenomenon has on women, and enables us to consider manifestations of online misogyny in the broader political contexts of the online culture wars (Ging, Siapera, 2018, 516).

Toxic masculinity refers to conventional male ideas that prioritise violence and domination above empathy and sensitivity, attributes often associated with women. Popular culture, such as movies, television programs, music videos, and video games, often reinforces these ideas, creating an environment in which males feel pushed to conform to these norms to achieve success.

Interestingly, the term toxic masculinity comes from the work of sociologists and psychologists in the early 1990s, who looked at different aspects of men’s thought rather than at feminism per se (Haider, 2016, 557). According to Haider, these studies examined different representations of masculinity and men’s relationship with their fathers, and not men's relation to feminism as many people believe (Haider, 2016).

Some of these assessments centred on the social scripts that surrounded war, portraying idealised masculinity as a model for heroism and depicting war as a rite of passage from childhood to adulthood. Through the evolution of these social scripts, violence has become a means of asserting masculinity (Haider, 2016, 557). Furthermore, Haider maintains that masculinity is constantly defined in relation to femininity within the patriarchal matrix, and toxic masculinities exaggerate this binary (2016, 557). This dichotomy depicts the feminine as weak (subordinate) and the male as powerful (dominant) (Jones et al., 2020, 1905).

This is a socially detrimental manifestation of masculinity –“toxic masculinity”– with especially negative effects on social movement learning, which includes connecting, giving account, and transforming individual tales into collective political action (Simões, Amaral & Santos, 2021, 172). Toxic masculinity is described this way to emphasise both its violent characteristics and negative outcomes. In fact, it comprises the precise components of hegemonic masculinity “that nurture others' dominance” (Kupers, 2005, 717). Toxic masculinity is characterised by extreme competition and greed, insensitivity to or disregard for the experiences and emotions of others, a strong need to dominate and control others, an inability to nurture, a fear of dependency, a willingness to resort to violence, and the stigmatisation and subjugation of women, gays, and men who exhibit feminine characteristics (Kupers, 2005, 717).

The term toxic masculinity is useful in discussions about gender and forms of masculinity because it distinguishes those aspects of hegemonic masculinity that are socially destructive, such as misogyny, homophobia, greed, and violent domination, from those that are culturally accepted and valued (Kupers, 2005). In the end, there is nothing particularly toxic about a man's pride in his ability to win sports games, keep friendships, perform at work, and provide for his family. These constructive activities are also facets of hegemonic masculinity, although they are scarcely harmful (Kupers, 2005, 716). By contrast, a man who believes he cannot get respect in any other manner has a strong need to dominate people (Kupers, 2005, 717). Domestic violence, harassment, gender-based rhetorical violence and sexual violence are a few concrete expressions of what can be perceived and labelled as toxic masculinity (Simões, Amaral & Santos, 2021, 172). Toxic masculinity transforms violence into rage, but the latter dissolves difference, rendering the individual an unmarked body, a mere channel for “drives” and “impulses” (Haider, 2016, 560).

Toxic masculinity is the outcome of glorifying these negative, hypermasculine characteristics to the point that they become the idealised and desired masculine identity. The key aspects of toxic masculinity identified from the literature that formed the deductive code frame for this study are homophobia, misogyny, violent domination, a willingness to resort to violence, the need to be dominant and controlling, not displaying or admitting weakness or dependence, and devaluing both women and feminine characteristics in men (Jones et al., 2020, 1906).

A network of websites and social media groups, known as the “manosphere”, openly examines male viewpoints, wants, gripes, disappointments, and aspirations. Typically, women and feminism are objects of antagonism, whilst men's rights activism, which exposes stories of discrimination against men ranging from child custody to homelessness and conscription, is a prevalent topic of conversation in such venues (Farrell et al., 2019, 87). In recent years, the manosphere has emerged as a notable collection of “niche” communities, essentially united by a shared interest in masculinity and its apparent crises. Communities like Pick Up Artists (PUAs), Men’s Rights Activists (MRAs), Men Going Their Own Way (MGTOW), and Involuntary Celibates (Incels) have increased in size and participation in online harassment and real-world violence (Horta Ribeiro et al, 2021, 196).

Donald Trump’s election as President in 2016 enabled these internet networks to be even more open about their worldview. As one manosphere thought leader stated, “His presence [in office] automatically legitimises masculine behaviours that were previously labelled sexist and misogynistic”. More concerning still, it has placed a few men who hold these views in positions of power close to the President (Zuckerberg, 2018, 4).

Online misogyny manifests when sexism, misogyny, and toxic masculinity have repercussions in the digital realm. This phenomenon takes on new dimensions owing to the potential of technology.

2. HOW MISOGYNY SPREADS ONLINE

Online misogyny refers to all acts of hatred against women that are committed, instigated, or aggravated, in part or in whole, through various digital environments (Internet, mobile phones, email, websites, social networking apps, instant messaging, virtual campuses, etc.) and that involve any type of direct or indirect psychological, professional, or physical harm (Ging and Siapera, 2018; Rubio Martin et al, 2021, 2).

According to Jane (2017, 51), online misogyny dates back as early as the 1990s, when feminist analyses took a pessimistic view of the processes through which the Internet was perceived as a masculine technology and males continued to exert authority over women.

The fundamental justification for this allegation is the Internet's origins in the so-called military-industrial complex, since the Advanced Research Projects Agency Network, the Internet's predecessor, was first constructed by male engineers and scientists for military purposes. In addition, many feminists acknowledged that the Internet has always been created and used under a patriarchal system, as men have traditionally controlled its technological development. Feminist researchers claim that the Internet is naturally imbued with masculine ethics and values and, therefore, is a patriarchal technology.

There is sufficient empirical evidence of power inequalities online, particularly in terms of discourse and linguistic style, despite computer-mediated communication flourishing in the late 1990s and more women learning about the communicative practices through user-friendly online applications. The findings reveal that apologetic, polite, and communicative language patterns are characteristic of female discourse, whilst argumentative, forceful, and even harsh and aggressive language patterns are typical of male discourse.

As a result, the growth of gendered cyberhatred today is best understood as a result of a mix of new technology, subcultural capital and appropriation, individual psychologies, mob dynamics, and, most significantly, systematic gender inequality (including a backlash against feminist gains and activism). Never before have misogynists had so many opportunities to collectivise and harm women with so few repercussions. Likewise, never before have so many potential victims been so conspicuous and so available (Jane, 2017, 51).

Online misogyny is a contemporary phenomenon that dominates women's online interactions. It is simply a form of cyberhatred aimed at women because of their gender (Barker, Jurasz, 2019, 25). Misogyny, “in short, has gone viral” (Jane, 2017, 3).

Although online hate speech encompasses both gender and the online environment, it is insufficiently inclusive of the multiplicity of situations women face. Drilling down into legal, legislative, constitutional, and ethical frameworks reveals a labyrinth of contradictions, conflicts, and grey areas: about what constitutes hurt or actionable violence as opposed to offence, and what actions may be taken without compromising freedom of speech (both that of alleged abusers and of women at the receiving end of abuse). A broader concept of misogyny could be used instead, one which may or may not involve violence but almost always involves some form of harm, whether direct (psychological, professional, reputational, or in some cases physical) or indirect, in the sense that it makes the internet a less equal, less safe, or less inclusive space for women and girls (Ging, Siapera, 2018, 516).

Other terms such as “online gender-based violence”, “online sexual harassment and violence” or “gender-based cyberhate”, which taken separately would only capture partial aspects of the problem (Ging and Siapera, 2018), may be included in the concept of “online misogyny”. Often, these acts of hate in new digital media converge and feed back into acts of hate in traditional media, creating cases of online misogyny in mixed spaces (referred to as “hybrid realities”) (Rubio Martin et al, 2021, 2). Jane (2017, 43) asks:

“So, why are so many men calling so many women ugly, fat, and slutty on the internet?
1. Because men continue to hold a disproportionate share of the political, economic, and social power, some using various forms of violence to keep women in their place; and
2. Because – thanks to the design and dominant norms of the contemporary cybersphere – they can.”

More recently, an impressive array of Twitter hashtags, such as #PussyRiot, #YesAllWomen, #Slutwalk, #Gamergate, #EverydaySexism, #MeToo, etc., has refocused public and scholarly attention on the pervasiveness of online misogynist culture (Han, 2018, 4). This current research reveals that conventional gender norms can be regarded as a key factor accounting for the widespread phenomenon of misogyny in online environments (Han, 2018, 4).

Misogyny focuses on fundamentally organising society such that women are humiliated, diminished, and denied equal rights. In severe instances, it results in hostile repercussions for women who transgress the standards connected with their position. Much online misogyny stems from the assumption that society is undergoing a “drop of males” as a result of the rising presence of women in the labour market and in positions of socio-political power. The informal and varied group of men's rights activists who adhere to this worldview is known as the “manosphere”. Nonetheless, internet sexism extends beyond the manosphere. The diverse repercussions of online misogyny are concealed by benign and benevolent sexism, loyalty to stated conventional values, and assumptions about social security (Hunter, 2021, 59).

The last two decades have seen a rise in radicalised violent organisations, hate groups, and other forms of misogyny on diverse social media platforms. Easy access to technology has increased misogynistic radicalisation at a pace with which neither the security sector nor the law has kept up. The widespread recruitment that the virtual world has facilitated has moved misogyny into the information warfare domain. There is a lack of preparedness and coordination among government and private security agencies to mount an appropriate and proportionate response to this new threat. This protean threat is evolving in two related “war zones” with shifting and ill-defined borders: cyberspace and the information space (Hunter, 2021, 61).

Contemporary misogyny is considered a response to the diffusion of postfeminist sensibility into daily society. Frequently, online misogyny takes the form of anonymous and hostile sexism (McCarthy, 2002, 365). Anonymity, or more precisely the perception of anonymity, is one of the most well-acknowledged elements that increase online animosity. It is believed to play a significant part in liberating individuals from adhering to societal norms and traditions, as they do not fear punishment or responsibility for their actions. The notion of anonymity produces inhibitionlessness. Closely linked to anonymity is an overall sense of de-individuation in the cybersphere, i.e. “a subjugation of the individual to the group and a concomitant reduction in self-focus” (KhosraviNik, 2018, 47-48).

A feminist critique of online misogyny rests on two recognitions. First, the recognition of women's precarity in the cybersphere as a result of its users' adherence to gendered social norms. Second, an awareness that the subordination of women as precarious subjects in the cybersphere is largely achieved and maintained through harmful speech acts with a precise illocutionary and perlocutionary force aimed at annihilating certain groups, legitimising their discrimination, advocating violence and hatred, and causing changes in attitudes and behaviours (KhosraviNik, 2018, 53).

When transferred into the cybersphere, gender-based discrimination creates a new, subtle type of digital gap, which has the potential to diminish the equality and inclusiveness of both online and offline cultures. Only a feminist social critique of online misogyny would be able to account for gender-based inequality, prejudice, and stereotyping in institutional, public, and media discourses as the substance of online misogyny (KhosraviNik, 2018, 57).

3. VARIOUS FORMS OF INTERNET MISOGYNY

Recent academic interest in online misogyny has seen the introduction of a variety of alternative terms for gendered online hostility, harassment, and abuse. Examples include ‘technology violence’, ‘technology-facilitated sexual violence’, ‘gender trolling’ and – from the United Nations (UN) Broadband Commission – ‘cyber violence against women and girls’ or ‘cyber VAWG’ (Jane 2017, 7). This phenomenon is described using a variety of different terms, including gendered cyberhate, technology-facilitated violence, tech-related violence, online abuse, hate speech online, digital violence, networked harassment, cyberbullying, cyberharassment, online violence against women, and online misogyny (Ging, Siapera, 2018, 516). There are several names used to characterise gendered types of online harm, including ‘e-bile’, online sexual harassment and gendertrolling (McCarthy, 2002, 366).

All of these relate to constellations of offensive behaviour on digital platforms that involve the purposeful imposition of severe emotional anguish via online discourse, which is never an isolated occurrence but rather a consistent pattern of conduct (Simões, Amaral & Santos, 2021, 172). Problematic online behaviours include trolling, insults, sexual harassment, and rape and death threats, which are typically accompanied by racism, homophobia, ableism, and all other types of hate (McCarthy, 2002, 354-365).

It is not beneficial to establish a definitive definition of gendered cyberhatred. Rather than doing yet another search for a set of objective message qualities, Jane uses a casuistic approach by giving several examples of gendered cyberhatred. Then, a notion of the phenomenon may be constructed by extrapolating from these particulars (Jane 2017, 7).

For situations involving social media abuse where the reprehensible behaviour falls short of the making of threats to kill, it is possible to argue that the activity may amount to stalking (Barker, Jurasz, 2019, 56). Gendered cyberhate refers to material that targets girls or women; includes abuse, death threats, rape threats, and/or sexually violent language; and involves the internet, social media platforms, or communications technology such as mobile phones (though it may also have offline dimensions). Jane (2017, 7) uses the slang term 'rapeglish' to describe the prevalence of sexual violence in much of this discourse.

Notably, in the context of social media abuse and online harassment, this includes “contacting, or attempting to contact a person by any means”. On its own, this will fall short of the activity required to sustain a conviction for stalking. The behaviour complained of must also satisfy the requirements of harassment. In other words, trying simply to contact someone – for example, through Twitter – is not enough. The law requires that the contact be harassing; targeted at an individual; calculated to cause alarm or distress to the individual; and oppressive and unreasonable (Barker, Jurasz, 2019, 58). In addition, to secure a conviction, the conduct must occur at least twice. The courts have interpreted these requirements strictly so as to limit the breadth of potential convictions (Barker, Jurasz, 2019, 63).

Online harassment is a nuanced and complex concept as it varies in expression and severity. However, in recent years, research has begun to address online harassment from a gendered perspective, raising awareness of the often-gendered nature of the abuse and how women are disproportionately targeted (Jones et al., 2020, 1907). Online sexual harassment refers to any form of unwanted sexual advance that occurs in digital spaces, such as social media, online chat rooms, or email. This can include unwanted sexual advances, comments, or messages, and the sharing of explicit images or videos without consent. Online sexual harassment can take many forms, such as cyberstalking, trolling, sextortion, and revenge porn. It can also occur across various online platforms, such as dating apps, gaming communities, and social media platforms. Online sexual harassment can have significant negative impacts on the victim's mental and emotional well-being, and it is a serious violation of their privacy and safety. Many countries have laws that criminalise online sexual harassment2 and victims are encouraged to report such behaviour to the relevant authorities or seek support from organisations that specialise in addressing online harassment.

Many of those who have experienced online abuse have received abusive messages containing threats. In the most serious cases, the threats made include threats to rape, physically assault, or kill, or a combination of all three (Barker, Jurasz, 2019, 47). Gender harassment consists of unwanted verbal and visual insults based on a person's gender, or the use of stimuli known or intended to evoke unpleasant feelings. These include publishing pornographic images in public or in areas where they are likely to offend, uttering chauvinistic jokes, and making gender-based derogatory comments (Barak, 2005, 78).

Similar to verbal gender harassment, visual harassment may be both active and passive. Active graphic gender harassment consists primarily of the intentional sending of erotic and pornographic still images or digital videos through individual online communication channels, such as email, or posting them online. The majority of examples of passive graphic gender harassment are images and videos posted on websites. In contrast to materials published on authorised pornography sites or in online sex stores, which surfers choose to visit and where they know what to expect, sexual harassment applies when web users do not know in advance, and have no prior indication, of what they may find offensive. This sort of gender harassment is pervasive due to the widespread use of forced pop-up windows and redirected links to pornographic sites (Barak, 2005, 79).

Unwanted sexual attention refers to unsolicited advances that expressly communicate sexual desires or intentions toward another individual. This category includes overt behaviours and comments, such as staring at a woman's breasts or making verbal statements that expressly or impliedly suggest or insinuate sexual activities. Sexual coercion involves putting physical or psychological pressure on a person to elicit sexual cooperation. This category includes actual, undesired physical touching, offers of a bribe for sexual favours, or making threats to obtain sexual cooperation (Barak, 2005, 78).

Tracing a person's virtual tracks by monitoring their visits to chat rooms and forums might induce anxiety. Therefore, if online stalking (also known as cyberstalking) incorporates sexual insinuations and suggestions, it should be regarded as a type of psychological pressure to attain sexual benefits, or a form of sexual coercion (Barak, 2005, 80). The non-consensual publication of sexually graphic photos, generally obtained from a previously consensual personal connection, is referred to as "revenge porn" (Barak, 2005). Victims of online sexual coercion often suffer great emotional distress and are at an increased risk of suicidality (Moloney, Love, 2018, 3-4).

Trolling is a common phrase used to describe anything from playground insults, nasty jokes, and intentional insensitivity to threats of assault, rape, and murder. Trolling appears to be the deliberate act of luring others into useless circular discussion, thereby interfering with the positive and productive exchange of ideas in a given environment (such as an online forum) and transforming the dialogue into one that is confusing, unsuccessful, and unproductive. This is often accomplished by making useless and provocative remarks with the sole intent of provoking an equally hostile response, relishing the ensuing discord and strife (KhosraviNik, 2018, 49-50).

Recent high-profile examples of gendertrolling include a website offering a financial reward to anyone who could offer proof of the rape and/or murder of blogger Melissa McEwan. Women speaking out about sexually inappropriate behaviour are frequently "punished". After Rebecca Watson spoke out about sexism she experienced at an atheist conference, her YouTube and Wikipedia pages were vandalised and trolls posted graphic photos of dead bodies to her Facebook page (Moloney, Love, 2018, 4). One modality of trolling, known as "strategic trolling", aims to silence feminist critics (Rubio Martin et al., 2021, 5).

In early definitions, flaming was considered a generic statement of strong and provocative thoughts, and a way of expressing oneself more passionately on the internet than in other communication contexts. However, these basic descriptions do not seem to account for the phenomenon's aggressive, angry, and profanity-laced character. In actuality, flaming is often characterised by profanity, insults, negative affect, and "typographic energy" such as capital letters and exclamation marks, and involves cursing or otherwise objectionable words (KhosraviNik, 2018, 49).

Cyberbullying affecting children and adolescents has often been labelled as flaming or trolling. When speech has been coded using either of these concepts, it has, at least until recently, typically been classified as some mix of uncommon, innocuous, innovative, humorous, crucial for identity formation, laudably transgressive, artistically rich, and so on. Both flaming and trolling are contentious and imprecise terms. The former is fairly archaic and often refers only to heated cybercommunications that include vitriol, insults, and negative affect. Trolling is sometimes used more loosely to refer to the publishing of purposefully provocative or off-topic content with the intention of eliciting responses and emotional reactions in targets (Jane, 2017, 5-6).

Doxing is the practice of acquiring and leaking personally identifiable or sensitive information about an individual or group without that person's consent. This can include their home address, email and phone numbers, social security number, and other personal information. Doxing is typically performed maliciously and can be used to irritate, upset, or intimidate the target. Doxing is a common internet crime3, and websites, online forums, and social media are used to spread the information. Many countries forbid the practice, and those who engage in it risk harsh penalties such as criminal prosecution and civil litigation. Doxing can have serious, long-lasting effects on a person, including loss of privacy, monetary loss, and even bodily injury.

Impersonation is the act of pretending to be someone else with a view to misleading or deceiving another. This may involve adopting a false identity when interacting with people online or impersonating someone else without their consent. Impersonation can have disastrous effects when used to carry out fraudulent activities such as identity theft or financial fraud. Often unlawful, impersonation can result in both criminal and civil legal action. Impersonation is possible in many contexts, including social media, email, online chat rooms, and other digital platforms. When interacting with strangers online, it is important to use caution and be vigilant for any signs of fraud or impersonation.

Underlying this conflict is the fact that social systems are, in general, constructed around stable patterns in which laws, conventions, beliefs, and practices are set by the dominant group of males, thus making patriarchy a dominant social order. In other words, because of its masculinist and militaristic beginnings, the Internet might become a new source of women's oppression and reaffirm real-world power hierarchies (Han, 2018, 5).

4. GAMERGATE: SEXIST STEREOTYPES' DOMINANCE IN THE VIDEO GAME INDUSTRY

One recent instance of misogyny relates to stereotypes frequently found in video games. The video game industry still produces material that caters to the alleged tastes of a young, male, heterosexual audience. Both the dearth of female video game characters and the hypersexualisation of those that do exist are indications of these biases. However, large-scale surveys consistently revealing that at least 40% of video game players are female refute the assertion that women make up a tiny minority of gamers (Paaßen et al., 2017, 421-422).

The male gamer stereotype also informs the perceived audience for game developers, leading to game content and marketing that panders to a clichéd young male audience, most notably with violent game play and hyper-sexualised depictions of female characters (Paaßen et al, 2017, 429). Such symbolic border construction confines gender-based symbolic/social identities in addition to containing and separating the realm of play from its actual social setting (Dowling et al, 2020, 988). The idea that gamers are often unpleasant, unattractive, idle, and asocial has already been weakened by the prevalence of video game playing (Paaßen et al, 2017, 430).

Stereotypes about the ways in which men and women play video games could help to explain this seeming inconsistency. Women play video games, but are not seen as true gamers. This is apparently due to the belief that women only play casually and on "inferior platforms", playing "inferior games" like Candy Crush Saga or Farmville (Paaßen et al., 2017, 422).

There are female-only leagues in e-sports, but they have not yet attracted a following comparable to that of competitions that are nearly exclusively male. Female-only competitions may increase the visibility of female athletes, but may also reinforce the notion that women and men have different "innate" skill sets, similar to the widespread belief that women and men have different upper-body strengths in physical sports where female-only leagues are common. Therefore, gendered gaming stereotypes may be reinforced rather than challenged by female-only leagues (Paaßen et al., 2017, 430).

FeministFrequency.com founder and executive director Anita Sarkeesian created the popular video series "Tropes vs Women in Video Games", in which she explores misogynistic representations in video games using feminist, sociological, psychological, and historical analyses. Unfortunately, in reaction to these well-reasoned commentaries, Sarkeesian was subjected to severe harassment and abuse, including threats of murder, rape, bombing, and a mass shooting (Burgess et al., 2017, 2).

For instance, a Canadian blogger made a game in which players can strike Anita Sarkeesian, the brains behind the well-known website "Feminist Frequency: Conversations with Pop Culture", and inflict facial injuries. The game was created in response to news of her Kickstarter campaign, in which she proposed examining historical representations of women in video games. The game was just the most recent of many attacks on Sarkeesian related to her project, including death threats, having her Wikipedia page hacked with obscene images, and being harassed regularly on the Kickstarter website and elsewhere (Consalvo, 2012, 1).

Masculinist gaming identity is deliberately used to depoliticise gamers' social definitions. One tweet, for instance, included a link to a subreddit page that explained why Anita Sarkeesian was ruining the gaming community and why politics should not be included in video games. As a result, the magic circle is painted as apolitical, and Sarkeesian and her sympathisers are painted as the community's natural opponents who pose a threat to it (Dowling et al., 2020, 988).

Gamergate is frequently represented as being about ethics in journalism, although for many who view Gamergate as sexism in gaming, the personal and usually violent character of these comments is anything but incidental to a discussion of journalistic ethics (Burgess et al., 2017, 2). The magic circle, a depoliticised space for male gamers, has been invaded by feminist gamers and developers who have brought attention to gender inequality in traditional gaming structures. This poses a threat to the hegemony of a subculture that limits the appeal and acceptance of women, both in terms of community membership and game representation, for example, roles limited to non-player characters or victims in need of rescue (Dowling et al., 2020, 988).

Supporters of Gamergate launched their public campaign against corruption in game journalism in 2014. However, their online comments on the #GamerGate Twitter hashtag indicate a hotbed of misogynistic opposition to progressive feminist game makers and players. According to a number of academic and journalistic studies, the hashtag campaign was intended "to perpetrate misogynistic insults by wrapping them in a debate about ethics in game journalism" (Dowling et al., 2020, 983). Nevertheless, some game development houses have taken a stand against the sexist attitudes of some players, while others have been slower to understand issues such as heteronormativity's persistent presence in online game spaces (Consalvo, 2012, 1).

5. HOW THE LAW CAN BE USED TO FIGHT AGAINST ONLINE MISOGYNY

Misogyny is an aggressive form of sexism, which implies an ideological element. It also has a practical element (the specific actions involved) and a hierarchical element (the subordination of women).4 Misogyny is a phenomenon with many dimensions and can occasionally be vague and ambiguous. Its many connected actions frequently have varied legal implications. Therefore, a solid education in human rights is sometimes the answer, but in other cases legal action will be necessary.

There are a number of situations where legal action is necessary to fight against online misogyny. These are listed below:

a) Express crimes

Online misogyny can involve behaviour contrary to the criminal law. Most states treat such acts, including harassment, libel, threats of violence, hate speech, and the non-consensual distribution of private photographs or data, as crimes. Here, the emphasis is on the connection between misogyny and hate speech motivated by gender.

Hate speech is expression that is characteristically hostile, and that tends to do certain things, like silence, disparage, vilify, degrade, and so forth (Richardson-Self, 2017, 260). A critical approach to online misogyny relies on feminist studies to recognise misogyny as a type of hate speech in a comprehensive and systematic manner (KhosraviNik, 2018, 53). Historically, at the international level, hate speech was linked to motivations based on race or religion. Nonetheless, the European Commission against Racism and Intolerance's General Policy Recommendation No. 15 on Combating Hate Speech covers gender-based hate speech, since it states:

“(…) the use of one or more particular forms of expression – namely, the advocacy, promotion or incitement of the denigration, hatred or vilification of a person or group of persons, as well as any harassment, insult, negative stereotyping, stigmatisation or threat of such person or persons and any justification of all these forms of expression – that is based on a non-exhaustive list of personal characteristics or status that includes “race”, colour, language, religion or belief, nationality or national or ethnic origin, and descent, age, disability, sex, gender, gender identity and sexual orientation5” (European Commission against Racism and Intolerance, 2015, 16).

It is particularly interesting that section 510 of the Spanish Criminal Code (Código Penal), as amended in 2015, contains gender-based hate speech:

“Those who, either directly or indirectly, foster, promote or incite hatred, hostility, discrimination or violence against a group, or part of such, or against a certain person for belonging to such a group, for reasons of racism, antisemitism or for other reasons related to ideology, religion or beliefs, family circumstances, the fact that the members belong to an ethnicity, race or nation, national origin, gender, sexual orientation or identity, or due to gender, illness or disability6” (Spanish Law 10/1995, 23 November, section 510(a)).7

In a European context, inciting hatred through gender-based hate speech is prohibited. Determining whether the complex phenomenon of misogyny can be reduced to gender-based hate speech is a task for jurists, since the latter is only one facet of the broader concept.

It is noteworthy that GREVIO General Recommendation No. 1 on the digital dimension of violence against women, adopted by the Council of Europe on 20 October 2021, analyses the obligations under the Istanbul Convention in relation to violence against women and domestic violence in its digital dimension. In particular, it states:

“33. The digital dimension of violence against women encompasses a wide range of behaviour that falls under the definition of violence against women set out in Article 3a of the Istanbul Convention. This definition comprises “all acts of gender-based violence against women that result in, or are likely to result in, physical, sexual, psychological or economic harm or suffering to women, including threats of such acts, coercion or arbitrary deprivation of liberty, whether occurring in public or in private life”. Non-consensual image or video sharing, coercion and threats, including rape threats, sexualised bullying and other forms of intimidation, online sexual harassment, impersonation, online stalking or stalking via the Internet of Things and psychological abuse and economic harm perpetrated via digital means against women and girls all come under the above definition” (Council of Europe, 2021, 17).

This recommendation includes the following behaviours under the label of online sexual harassment: (i) non-consensual image or video sharing; (ii) non-consensual taking, producing or procuring of intimate images or videos; (iii) exploitation, coercion and threats; (iv) sexualised bullying; and (v) cyberflashing. Specifically, this recommendation affirms:

“(a) non-consensual sharing of nude or sexual images (photos or videos) of a person or threats thereof include acts of image-based sexual abuse (also known as “revenge pornography”);
(b) non-consensual taking, producing or procuring of intimate images or videos include acts of “upskirting” and taking “creepshots” as well as producing digitally altered imagery in which a person’s face or body is superimposed or “stitched into” a pornographic photo or video, known as “fake pornography” (such as “deepfakes”, when synthetic images are created using artificial intelligence);
(c) exploitation, coercion and threats coming within the remit of Article 40 of the Convention include forms of violence such as forced sexting, sexual extortion, rape threats, sexualised/gendered doxing, impersonation and outing;
(d) sexualised bullying constitutes behaviours such as circulating gossip or rumours about a victim’s alleged sexual behaviour, posting sexualised comments under the victim’s posts or photos, impersonating a victim and sharing sexual content or sexually harassing others, thus impacting their reputation and/or livelihood, or “outing” someone without their consent with the purpose of scaring, threatening and body shaming; and
(e) cyberflashing consists of sending unsolicited sexual images via dating or messaging applications, texts, or using Airdrop or Bluetooth technologies” (Council of Europe, 2021, 18-19).

The legal landscape is evolving, but the challenge for jurists remains: to determine whether the multifaceted phenomenon of online misogyny can be adequately addressed as gender-based hate speech. The Council of Europe's recommendations provide a robust framework for understanding and combating digital violence against women, but the onus is on individual states to enact and enforce laws that align with these international standards.

While the law is a critical tool, it must be supplemented by educational and societal activities that combat gender stereotypes and encourage digital literacy. As the digital era advances, the social need to prevent online sexism becomes more essential. It is not only a legal or technological difficulty, but also a human rights obligation that requires joint effort from all sectors of society.

b) Breaching the platforms policies

The second type of legal action against online misogyny is based on platform policies. The benefit of this strategy is that it is a form of self-regulation. On 31 May 2016, the European Union Commission presented, together with Facebook, Microsoft, Twitter and YouTube ("the IT Companies"), a "Code of conduct on countering illegal hate speech online". The main commitments are:

“a) The IT Companies to have in place clear and effective processes to review notifications regarding illegal hate speech on their services so they can remove or disable access to such content. The IT Companies to have in place Rules or Community Guidelines clarifying that they prohibit the promotion of incitement to violence and hateful conduct.
b) Upon receipt of a valid removal notification, the IT Companies to review such requests against their rules and community guidelines and, where necessary, national laws transposing the Framework Decision 2008/913/JHA, with dedicated teams reviewing requests.
c) The IT Companies to review the majority of valid notifications for removal of illegal hate speech in less than 24 hours and remove or disable access to such content, if necessary” (European Union, Code of conduct on countering illegal hate speech online, 2016).

This appears to be an efficient system: it includes a complaint procedure, and the IT Companies commit to reviewing most valid notifications within 24 hours and to removing unlawful hate speech, including misogynistic content, promptly. However, the approach also raises concerns about interpretative discretion and the possibility of contentious decisions. While the criminal law provides a more conventional route, the self-regulatory character of platform policies enables a quicker reaction to developing forms of online sexism, but it may lack the rigour and accountability inherent in legal institutions. This is especially troubling considering that social media platforms have become de facto public squares where societal norms are both reflected and shaped.

c) Discrimination legislation

Anti-discrimination legislation at various levels should address issues related to online misogyny; employment and education regulations, for example, govern this area. In relation to the media and advertising, the internet and social networks, the recent Spanish Equal Treatment and Non-Discrimination Act provides:

Section 22(2): “The public administrations, within the scope of their respective competences, must promote self-regulation agreements for the media, advertising, internet, social networks and information and communication technology companies that facilitate compliance with equal treatment and non-discrimination and intolerance legislation for the grounds that form the basis of this Act, and must promote a non-stereotyped image of the different persons and population groups, including sale and advertising activities that are performed in them and encouraging language and messages contrary to discrimination and intolerance. They must also promote agreements with internet service companies and platforms to improve the effectiveness of preventing and eliminating content that violates the right to equality in this area”8.

It is pertinent here that this equality and anti-discrimination framework acknowledges media stereotypes as a public concern and suggests certain preventative actions.

Anti-discrimination legislation, which often extends to the employment and education sectors, offers another layer of protection against gender-based discrimination online. The recent Spanish Equal Treatment and Non-Discrimination Act is particularly noteworthy for its comprehensive approach. Section 22(2) of this statute not only mandates self-regulation agreements for media, advertising, and technology companies but also calls for the promotion of a non-stereotyped image of different persons and population groups.

The inclusion of media stereotypes as a public concern in this legislative framework is significant. It acknowledges that the portrayal and treatment of women in digital spaces are not just individual acts of discrimination but are part of a larger systemic issue that perpetuates gender inequality. By promoting self-regulation and encouraging language and messages contrary to discrimination, the Act aims to create a more inclusive and equitable digital environment.

However, the effectiveness of such anti-discrimination legislation hinges on its implementation and the willingness of various sectors to comply. While such legislation provides a formal structure for combating online misogyny, it must be complemented by other measures, including criminal law and platform policies, to create a holistic approach to the issue.

d) Breaching human rights

Online misogyny and its effects should be recognised and addressed within human rights frameworks at all levels. Equal human dignity is at the heart of human rights, and misogyny opposes this ideal. However, diverse approaches to Internet governance, some focused more on freedom and some based more on enforcing human rights, can diverge in how misogyny is addressed in a specific case.9

The II Human Rights Plan of the Spanish government of 2023 includes some measures to combat hate speech based on gender. Specifically, it states:

“Monitoring and analysis of the evolution of hate speech on social networks, paying special attention to the migrant and LGTBI population, as well as the categories of racism, xenophobia, anti-Semitism, Islamophobia and anti-Gypsyism, with the aim of improving the formulation of cohesive public policies, with a cross-cutting, gender-based approach, with universal scope and full respect for fundamental rights” (II Human Rights Plan, 2023, 104).

GREVIO General Recommendation on the digital dimension of violence against women of 2021, previously mentioned, recommends that States Parties implement the following preventive measures:

“(a) Consider reviewing any relevant legislation in place and adopt new legislation where needed to prevent, provide protection from and prosecute the digital dimension of violence against women against the standards of the Istanbul Convention and other relevant standards, including the Budapest Convention;
(b) Undertake initiatives aiming to eradicate gender stereotypes, sexist attitudes and discrimination against women that play out online as much as offline, taking into account the Council of Europe Committee of Ministers’ Recommendation CM/Rec(2019)1 on preventing and combating sexism;
(c) Foster gender equality in society and support the empowerment and representation of women online by enhancing digital literacy and participation of all women and girls;
(d) Encourage all members of society, especially men and boys, to abandon harmful stereotypes of women and men and to adopt respectful and healthy behaviours in the digital sphere;
(e) Implement awareness-raising campaigns targeting women and men, girls and boys at different levels of society on different forms of violence against women perpetrated in the digital sphere as well as on the support services available to victims. The State Parties should support the efforts of women’s organisations towards this end and recognise and make use of their expertise;
(f) Provide mandatory and continuous capacity building, education and training for all relevant professionals, including but not limited to law-enforcement professionals, criminal justice actors, members of the judiciary, health-care professionals, asylum officials, social service professionals and education professionals, to equip them with knowledge on digital expressions of violence against women, responding to women and girls as victims without causing secondary victimisation and re-traumatisation, and, where relevant, information on existing legal frameworks and international co-operation mechanisms relating to the digital dimension of violence against women as well as on the gathering and securing of electronic evidence;
(g) Promote the inclusion of digital literacy and online safety in formal curricula and at all levels of education. Teaching materials made available in line with Article 14 of the Istanbul Convention should enable learners to learn about equality between women and men, non-stereotyped gender roles, mutual respect, non-violent conflict resolution in interpersonal relationships and violence against women, including in its digital dimension and should be accessible to persons with physical and/or intellectual disabilities;
(h) Incorporate digital manifestations of violence against women in any existing intervention programmes for perpetrators of violence, in particular in the context of intimate-partner violence;
(i) Encourage the ICT sector and internet intermediaries, including social media platforms, to make an active effort to avoid gender bias in the design of smart products, mobile phone applications and video games, as well as the development of artificial intelligence and -respectively- to create internal monitoring mechanisms towards ensuring the inclusion of victim-centric perspectives as well as to advocate stronger awareness of the perspective and experiences of female users, in particular those exposed to or at risk of intersecting forms of discrimination. Internet intermediaries as well as technology companies should be incentivised to co-operate with NGOs working on violence against women in their awareness-raising and other efforts; and
(j) Encourage media organisations and journalists’ unions to take concrete steps to eradicate gender-based discrimination, victim-blaming attitudes and violations of the privacy of victims of gender-based violence against women and their children in all their journalistic activities. Further efforts should be undertaken to uproot male-dominated power dynamics in media landscapes” (Council of Europe, 2021, 23-24).

Misogyny is inherently incompatible with the notion of equal human dignity, a pillar of human rights. However, Internet governance provides a dynamic environment in which the focus might shift between the protection of free speech and the enforcement of human rights, resulting in various approaches to combatting sexism.

Adopted in 2021, the GREVIO General Recommendation on the digital dimension of violence against women presents a comprehensive set of preventative measures aligned with international human rights norms. These include legislative reviews, the eradication of gender stereotypes, and the development of digital literacy and gender equality. Importantly, the recommendation also calls for the active participation of diverse sectors, such as ICT and the media, in removing gender prejudice and promoting victim-centric viewpoints.

This human rights-based approach offers a holistic framework that goes beyond punitive measures. It calls for societal transformation, involving not just legal reforms but also educational initiatives, public awareness campaigns, and multi-sectoral cooperation. The recommendation recognises that online misogyny is not an isolated issue but part of a broader system of gender inequality and discrimination.

e) Specific online misogyny legislation

To prevent misogyny online, individual States may decide to enact their own legislation. This generates two main issues. The first is that domestic legislation applies only within the confines of national territory, whilst the Internet is global; global problems cannot be solved by domestic law alone. The second is the debate over whether the law is the best preventative measure for the many behaviours associated with online misogyny. Academic research is therefore advisable to define online misogyny and examine its effects on individuals, since this may be an ideological issue.

Remarkably, the Recommendation on Preventing and Combating Sexism, approved by the Committee of Ministers of the Council of Europe on 27 March 2019, introduces the category of sexist hate speech and advocates its criminalisation. Specifically, it states:

“II.B.1. Implement legislative measures that define and criminalise incidents of sexist hate speech and are applicable to all media, as well as reporting procedures and appropriate sanctions. More proactive detecting and reporting procedures for sexist hate speech should also be encouraged in respect of all media, including the internet and new media.
II.B.2. Establish and promote programmes (including software) for children, young people, parents and educators to assist in advising children on media literacy for a safe and critical use of digital media and appropriate digital behaviour. This should be done through school curricula and through the production of handbooks and factsheets on what constitutes sexist behaviour, unwanted sharing of material on the internet, and appropriate responses, including gender-sensitive information about online safety. Ensure the wide dissemination of such materials.
II.B.3. Develop information and campaigns to raise awareness about sexist misuse of social media, threats in the internet environment and the situations children and young people face (for example blackmail, requests for money or unwanted posting of intimate pictures) with practical assistance about how to prevent and respond to such situations.
II.B.4. Undertake campaigns directed at the wider public on the dangers, opportunities, rights and responsibilities related to the use of new media” (Council of Europe, 2019, 18).

The specific online misogyny legislation enacted by individual states offers a targeted approach, but it grapples with two main challenges. First, the inherently global nature of the Internet limits the scope of domestic law. Second, there is an ongoing debate about whether legal measures are the most effective preventative strategy for the myriad behaviours constituting online misogyny. These challenges underscore the need for academic research to define the contours of online misogyny and its human impact, especially given its ideological dimensions.

The Recommendation on Preventing and Combating Sexism approved by the Committee of Ministers of the Council of Europe offers a noteworthy approach by introducing the category of sexist hate speech and advocating for its criminalisation. This recommendation goes beyond punitive measures to include educational programs, awareness campaigns, and public engagement, addressing the issue from multiple angles.

The recommendation acknowledges that tackling online misogyny is not just a matter of law enforcement but also involves media literacy, public awareness, and behavioural change. It suggests a multi-pronged approach that combines legal sanctions with educational and awareness-raising initiatives, thereby offering a more holistic solution to the problem.

Increasingly, it is clear that tackling online sexism demands a multifaceted, human rights-based strategy. This paper contributes to the continuing conversation by analysing the possibilities and limits of existing legal frameworks, including specialised online misogyny legislation, thus providing the basis for more comprehensive and effective remedies to this prevalent form of digital discrimination.

6. CONCLUSIONS

This paper serves as a seminal exploration of the multifaceted issue of online misogyny, a subject that has been largely overlooked in academic discourse outside of engineering and computer science. By dissecting the complex interplay between sexism, misogyny, and toxic masculinity, it offers a comprehensive framework for understanding the various manifestations of online misogyny, from cyberbullying and hate speech to doxing and revenge porn.

The Gamergate controversy serves as a case study that underscores the urgency of addressing gender-based discrimination in digital spaces. While education and politics are essential components in mitigating online misogyny, the law remains a contentious yet indispensable tool for safeguarding human rights. The debate over the most effective means of combating online misogyny is far from settled, and this article emphasises the need for interdisciplinary research to clarify the concept and its societal implications.

In line with the recommendations of the Council of Europe, which calls for member states to take comprehensive measures to combat gender-based discrimination, including online misogyny, our findings suggest that a multi-pronged approach is necessary. This involves not only legal reforms but also educational initiatives that challenge gender stereotypes and promote digital literacy. Tech companies must also be held accountable for creating and maintaining online environments that are free from harassment and discrimination.

The Council of Europe also emphasises the importance of involving women in decision-making processes related to internet governance and digital policy. As our article highlights, the silencing or marginalisation of women's voices online is a significant aspect of online misogyny. Therefore, any efforts to combat this issue must be inclusive and participatory, ensuring that women are not just beneficiaries of digital policies but active contributors to their formulation.

The stakes are exceedingly high: online misogyny jeopardises the mental and emotional well-being of its targets and perpetuates broader systems of gender inequality and discrimination. As we move further into the digital age, the collective responsibility to address online misogyny becomes increasingly urgent. This article contributes to a more informed discourse and, ultimately, seeks to develop more effective, human rights-based strategies to combat this pervasive form of digital discrimination.

By elucidating the concept of online misogyny and aligning this discussion with the Council of Europe’s recommendations, this paper aims to provide both a theoretical and a practical foundation for future research and policy interventions. The fight against online misogyny is not just a legal or technological challenge; it is a human rights imperative that demands concerted action from all sectors of society.

FUNDING

This publication is included in the project TRUST: digital Turn in Europe: Strengthening relational reliance through Technology. Marie Skłodowska-Curie – RISE – Research and Innovation Staff Exchange, Grant agreement no.: 101007820. It is also part of the R&D&I research project 'Access to justice and vulnerability' (PID2019-108918GB-I00), funded by the Ministry of Science and Innovation within the framework of the State Programmes for Knowledge Generation and Scientific Strengthening.

ACKNOWLEDGMENTS

I would like to thank Mª Carmen Barranco Avilés, Jorge Zavala Salgado, Jesús Mora, and Juan Carbajal for their insightful remarks. I would also like to thank the attendees of the III Jornadas “Derecho y derechos” for the excellent discussion of this paper.

REFERENCES

BARAK, A. (2005). “Sexual harassment on the Internet”, Social Science Computer Review, vol. 23 num. 1, 77–92. https://doi.org/10.1177/0894439304271540.

BARKER, K., and JURASZ, O. (2019a). Misogyny as a hate crime, Routledge, London and New York.

BARKER, K., and JURASZ, O. (2019b). “Online misogyny: a challenge for digital feminism?”, Journal of International Affairs, vol. 72, num. 2, pp. 95–113.

BURGESS, M. C. R., BYARS, F., SADEGHI-AZAR, L., and DILL-SHACKLEFORD, K. E. (2017). “Online Misogyny Targeting Feminist Activism: Anita Sarkeesian and Gamergate”, The Wiley Handbook of Violence and Aggression, Peter Sturmey (Editor-in-Chief), John Wiley & Sons Ltd., Hoboken US. https://doi.org/10.1002/9781119057574.whbva006.

CLARK, L. M. G., and LANGE, L. (1979). The Sexism of Social and Political Theory: Women and reproduction from Plato to Nietzsche, University of Toronto Press.

CONSALVO, M. (2012). “Confronting toxic gamer culture: A challenge for feminist game studies scholars”, Ada: A Journal of Gender, New Media, and Technology, num. 1. https://doi.org/10.7264/N33X84KH.

COOLE, D. (1988). Women in Political Theory, Wheatsheaf, Sussex.

DOWLING, D. O., GOETZ, Ch., and LATHROP, D. (2020). “One Year of #GamerGate: The Shared Twitter Link as Emblem of Masculinist Gamer Identity”, Games and Culture, vol. 15 num. 8, pp. 982–1003. https://doi.org/10.1177/1555412019864857.

FARRELL, T., FERNANDEZ, M., NOVOTNY, J., and ALANI, H. (2019). “Exploring Misogyny across the Manosphere in Reddit”, 11th ACM Conference on Web Science (WebSci ’19), June 30-July 3. https://doi.org/10.1145/3292522.3326045.

FRENDA, S., GHANEM, B., and MONTES-y-GÓMEZ, M. (2018). “Exploration of Misogyny in Spanish and English tweets”, IberEval 2018 Evaluation of Human Language Technologies for Iberian Languages Workshop.

GARCÍA-DÍAZ, J. A. et al. (2021). “Detecting misogyny in Spanish tweets. An approach based on linguistics features and word embeddings”, Future Generation Computer Systems, num. 114, pp. 506–518.

GING, D., and SIAPERA, E. (2018). “Special issue on online misogyny”, Feminist Media Studies, vol. 18 num. 4, pp. 515–524. https://doi.org/10.1080/14680777.2018.1447345.

GLICK, P., and FISKE, S. (1997), “Hostile and benevolent sexism. Measuring ambivalent sexist attitudes toward women”, Psychology of Women Quarterly, vol. 21, num. 1, pp. 119–135.

HAIDER, S. (2016). “The Shooting in Orlando, Terrorism or Toxic Masculinity (or Both?)”, Men and Masculinities, vol. 19 num. 5, pp. 555–565. https://doi.org/10.1177/1097184X16664952.

HAN, X. (2018). “Searching for an online space for feminism? The Chinese feminist group Gender Watch Women’s Voice and its changing approaches to online misogyny”, Feminist Media Studies. https://doi.org/10.1080/14680777.2018.1447430.

HORTA RIBEIRO, M., BLACKBURN, J., BRADLYN, B., DE CRISTOFARO, E., STRINGHINI, G., LONG, S., GREENBERG, S., and ZANNETTOU, S. (2021). “The Evolution of the Manosphere Across the Web”, Proceedings of the Fifteenth International AAAI Conference on Web and Social Media (ICWSM 2021), pp. 196–207.

HUNTER, K., and JOUENNE, E. (2021). “All Women Belong in the Kitchen, and Other Dangerous Tropes: Online Misogyny as a National Security Threat”, Journal of Advanced Military Studies, vol. 12, num. 1, pp. 57–85.

JANE, E. (2017). Misogyny online. A short (and brutish) history, SAGE, London.

JONES, C., TROTT, V., and WRIGHT, S. (2020). “Sluts and soyboys: MGTOW and the production of misogynistic online harassment”, New Media & Society, vol. 22 num. 10, pp. 1903–1921.

KHOSRAVINIK, M., and ESPOSITO, E. (2018). “Online hate, digital discourse and critique: Exploring digitally-mediated discursive Practices of gender-based hostility”, Lodz Papers in Pragmatics, num. 14 vol. 1, pp. 45–68. https://doi.org/10.1515/lpp-2018-0003.

KUPERS, T. A. (2005). “Toxic masculinity as a barrier to mental health treatment in prison”, Journal of Clinical Psychology, vol. 61 num. 6, pp. 713–724. https://doi.org/10.1002/jclp.20105.

MCCARTHY, B. (2022). “‘Who unlocked the kitchen?’: Online misogyny, YouTube comments and women’s professional street skateboarding”, International Review for the Sociology of Sport, vol. 57 num. 3, pp. 362–380. https://doi.org/10.1177/10126902211021509.

MANNE, K. (2018). Down Girl. The logic of misogyny, Oxford University Press.

MOLONEY, M. E., and LOVE, T. P. (2018). “Assessing online misogyny: Perspectives from sociology and feminist media studies”, Sociology Compass. https://doi.org/10.1111/soc4.12577

OKIN, S. M. (1992). Women in Western Political Thought, Princeton University Press, New Jersey.

PAAẞEN, B., MORGENROTH, T., and STRATEMEYER, M. (2017). “What is a True Gamer? The Male Gamer Stereotype and the Marginalization of Women in Video Game Culture”, Sex Roles, num. 76, pp. 421–435. https://doi.org/10.1007/s11199-016-0678-y.

PARIKH, P., ABBURI, H. C., NIYATI GUPTA, M., and VARMA, V. (2021). “Categorizing Sexism and Misogyny through Neural Approaches”, ACM Trans. Web, vol. 15, num. 4, pp 1–31.

PÉREZ DE LA FUENTE, O. (2023). Odio, minorías y libertad de expresión, Colección Pluralismo y minorías, Dykinson, Madrid.

RICHARDSON-SELF, L. (2017). “Woman-Hating: On Misogyny, Sexism, and Hate Speech”, Hypatia, vol. 33, num. 2, pp. 256–272.

RUBIO, M., and GORDO, A. (2021). “La perspectiva tecnosocial feminista como antídoto para la misoginia”, Revista Española de Sociología, núm. 30 vol. 3. https://doi.org/10.22325/fes/res.2021.64.

SIMÕES, R. B., AMARAL, I., JOSÉ SANTOS, S. (2021). “The new feminist frontier on community-based learning. Popular feminism, online misogyny, and toxic masculinities”, European journal for Research on the Education and Learning of Adults num. 12, pp. 165–177. https://doi.org/10.25656/01:22501.

ZEINERT, P., INIE, N., and DERCZYNSKI, L. (2021). “Annotating Online Misogyny”, Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pp. 3181–3197.

ZUCKERBERG, D. (2018). Not All Dead White Men. Classics and Misogyny in the Digital Age, Harvard University Press, Cambridge, Mass.

Regulations

European Union

Code of conduct on countering illegal hate speech online, 2016.

General Data Protection Regulation (GDPR).

Council of Europe

ECRI General Policy Recommendation No. 15 on combating hate speech, adopted on 8 December 2015, Strasbourg.

Recommendation CM/Rec(2019)1 on preventing and combating sexism, adopted by the Committee of Ministers of the Council of Europe, 27 March 2019.

GREVIO General Recommendation No. 1 on the digital dimension of violence against women adopted on 20 October 2021.

Canada

Personal Information Protection and Electronic Documents Act (PIPEDA), 2019

Criminal Code, 2023

South Africa

Protection from Harassment Act, 2011

Spain

Criminal Code, LO 10/1995, 23 November.

Law 15/2022, of 12 July, comprehensive law on equal treatment and non-discrimination.

II Human Rights Plan, Spanish Government, May 2023.

United Kingdom

Malicious Communications Act, 1988

Communications Act, 2003

The Data Protection Act, 2018

United States

Computer Fraud and Abuse Act (CFAA), 2008

Received: 30th June 2023

Accepted: 9th October 2023

_______________________________

1 Associate Professor. Gregorio Peces-Barba Human Rights Institute. International and Ecclesiastical Law and Philosophy of Law Department. Carlos III University of Madrid.

2 In the United Kingdom, the Malicious Communications Act 1988 and the Communications Act 2003 have been used to prosecute cases of online harassment, including sexual harassment. In South Africa, the Protection from Harassment Act (2011) applies to victims of harassment and could cover online sexual harassment.

3 In the United States, while there isn't a specific federal law against doxing, existing laws such as the Computer Fraud and Abuse Act (CFAA) or various state-level laws regarding stalking, harassment, or invasion of privacy could potentially be applied to doxing cases. In the European Union, under the General Data Protection Regulation (GDPR), unauthorised sharing of personal data could potentially be deemed illegal, making doxing potentially punishable. Individual EU countries may also have more specific laws. In the United Kingdom, The Data Protection Act, the Malicious Communications Act, and the Communications Act can potentially be used to prosecute doxing. In Canada, unauthorised sharing of personal information can potentially be seen as a breach of the Personal Information Protection and Electronic Documents Act (PIPEDA), and certain cases could also fall under Criminal Code provisions related to harassment, intimidation, identity theft, or identity fraud.

4 To analyse the many manifestations of hate and hate speech, I created this methodology based on three elements (ideological, hierarchical, and practical). See Pérez de la Fuente (2023).

5 Author emphasis.

6 Author emphasis.

7 Section 510(b) of the Spanish Criminal Code also establishes: “A custodial sentence of one to four years and a fine of six to twelve months will be imposed on: (…) b) Those who produce, prepare, or possess with a view to distribute, grant third-party access to, proceed to distribute or sell, be it in writing or any other class of material or format that, due to its format, makes it possible to foster, promote, or incite, be it directly or indirectly, hatred, hostility, discrimination or violence against a group, part of a group, or against an identifiable person for belonging to that group, due to racism, antisemitism, antigypsy, or other reasons relating to ideology, religion, beliefs, family situation, ethnicity, race or creed, origin, sex, sexual orientation or identity, or due to their gender, social means, illness or disability.”

8 Author emphasis.

9 Idea developed in (Pérez de la Fuente, 2023).