Online harassment is any digital behaviour that targets someone in a way that feels threatening, silencing, degrading, or invasive. It can involve direct messages, public posts, impersonation, or coordinated attacks—and it can happen across any platform, including social media, messaging apps, email, forums, or comment sections.
Harassment may be persistent, widespread, or severe—even a single threat, image, or post can cause us lasting harm. We may experience harassment from strangers; we might be targeted by current or former partners, classmates, co-workers, or community members.
What matters most is how it affects us. Harassment is not defined by platform rules or legal thresholds alone. If we feel afraid, overwhelmed, or unsafe, our experience is valid—and we deserve support.
Online harassment can take many shapes. Sometimes it’s a single serious act, like a violent threat or the release of private information. Other times it’s a pattern of repeated behaviour that wears us down over time.
The common thread is how it makes us feel. The tactics may differ, but the intent is often the same: to silence, isolate, humiliate, or intimidate. Here are some of the most common forms online harassment can take.
Online harassment isn’t always easy to define. It can look like cruel comments, targeted posts, unwanted messages, impersonation, or public humiliation—but it can also be subtle, persistent, or hard to explain to others. Sometimes it’s one person. Sometimes it’s a group. And sometimes it’s made worse by platforms that ignore it, excuse it, or fail to act.
Laws and platform policies often rely on narrow definitions. But harm doesn’t have to be ‘serious enough’ to count. What matters most is the impact: how it affects our wellbeing, our safety, or our ability to exist online without fear.
Even a single incident—like a violent threat or the posting of private information—can be serious. And when harassment is ongoing, hard to block, or coming from multiple sources at once, the impact can be especially intense.
AI is making online harassment more targeted, convincing, and difficult to trace. Some people might be harassed by bots pretending to be real users. Others might have their photos, voices, or writing styles copied to create AI-generated messages, images, or accounts designed to mock, impersonate, or humiliate them.
Below we’ve listed some of the ways AI is now being used in online harassment. These tools can make harassment feel relentless or surreal. Some survivors say it’s hard to explain what’s happening—or to be believed—when the harm is being carried out by AI or bots. But the impact is real. Whether the abuse is human-led, AI-powered, or both, no one deserves to be targeted or silenced online.
AI-generated images and videos can be used to make fake nudes, memes, or edited photos that are then spread to shame, harass, or threaten us.
Sextortion happens when a person is threatened with the release of sexual images or videos—real or created with AI—unless they meet certain demands. This may include sending more images, transferring money, or staying in contact against their will. In many cases, the scammer uses emotional manipulation, impersonation, or AI-generated media to appear more believable.
AI can be used to flood us with abusive messages, comments, or replies, making the harassment feel constant and difficult to stop.
Automated tools can be used to scan our online presence, pulling together photos, contact information, or location details to use as part of targeted abuse or public exposure.
The clues that once helped us spot AI-generated content aren’t always reliable any more. Things like extra fingers in photos or blurry edges around objects were once common signs, but new tools are improving fast.
Instead of relying on those glitches, it’s more helpful to look at the bigger picture. These are some helpful questions to ask:
Trusting our instincts, asking questions, and slowing down can all help us make sense of what we're seeing or hearing.
Online harassment can feel isolating, confusing, or frightening—especially when it's ongoing or coming from someone we know. Many people hesitate to call it abuse, worrying that they're overreacting. Others simply don't know what steps to take.
That's completely understandable in a digital world where technology is constantly evolving and abusers can be so elusive. Here are some real concerns survivors have shared.
There’s no one right way to respond to or prevent online harassment. Some people choose to report or block, while others focus on harm reduction or digital boundaries. What matters most is doing what feels safest and most supportive.
These are some strategies that can help, whether we’re experiencing harm directly or supporting someone else. Taking these steps can be exhausting, both emotionally and practically. It’s okay to move at our own pace and choose only the steps that feel manageable right now. No one should have to manage this alone—we deserve safety, privacy, and peace of mind.
Experiencing online harassment can take a toll on emotional health, relationships, and feelings of safety. No one should have to navigate it alone. Support is available—whether we’re looking for emotional care, help with reporting, or legal guidance.
Glitch: A UK non-profit focused on ending online abuse, particularly for Black women and marginalised communities.
Centre for Countering Digital Hate: Research and advocacy tackling online abuse and misinformation.
PEN America Online Harassment Field Manual: US-based but widely applicable resource on digital abuse prevention, response, and resilience.
Right to Be (formerly Hollaback!): Offers bystander intervention training, reporting guidance, and public education about online harassment.
HateAid: German organisation providing support to those who have experienced online hate. They also help specifically with reporting unsolicited pictures of a sexual nature.
Digital Rights Foundation: Offers a cyber harassment helpline for those based in Pakistan. Available Mon–Sun, 9am–5pm.
In this section, we looked at the many forms of online harassment, from direct threats and impersonation to group pile-ons and ongoing surveillance. What's clear is that online harassment carries a heavy emotional toll, while platform and legal definitions often remain narrow. We also included tips and strategies for documenting and reporting, and we showed that our emotional experience is always valid—whether the harassment is a one-time incident or a long-term pattern.
When we’re dealing with difficult experiences, it can be hard to take in new information. That’s okay, because this guide is here to come back to at any time. For now, here are the main points to keep in mind.
Online harassment doesn’t need to meet legal definitions to be harmful.
AI is now being used to generate threats, impersonations, and abuse.
Harassment can be subtle or constant—and both forms can create harm.
Documenting and setting digital boundaries can support safety and healing.