Twitter Without Violence Guide

THE BASICS

What youth on Twitter need to know


How does Twitter address cyberviolence?

Cyberviolence on Twitter is a complicated problem that depends as much on behaviour as it does on context. On Twitter, what counts as abuse must fit one or more criteria: reported accounts sending harassing messages; one-sided harassment that includes threats; incitement to harass a particular user; or sending harassing messages from multiple accounts to a single user. Since abuse takes place in a particular context, it must be evaluated by a Twitter representative prior to any decision on how best to respond. There are, however, obvious exceptions—threats and calls to violence based on race or gender, for instance: accounts reported for violent threats will be suspended and, when appropriate, reported to law enforcement.

Twitter has committed to potentially taking a number of steps to assist users experiencing suicidal thoughts or engaging in self-harm, such as reaching out to that person to express concern, or providing resources such as contact information for its mental health partners.


Terms of Service

You:

  • Must be 13 years of age or older.
  • Are responsible for your original content. Twitter is not liable for your content.
  • Must not post copyrighted materials.
  • Acknowledge that you may be exposed to content that might be harmful or offensive to you.
  • Must not use your Twitter account “for commercial gain.”
  • Must keep your contact information up to date.
  • Must not solicit login information or access another user’s account.
  • Must not share spam.*
  • Must not bully, intimidate, threaten, harass or target other users.
  • Must not promote terrorism.
  • Must not use Twitter for illegal activities or break Twitter’s user rules.
  • Must not upload viruses or other malicious code to Twitter.
  • Must not post other users’ private or confidential information.
  • Must not use ‘pornographic’ or excessively violent media in your profile image, header or background image.
  • End your user agreement with Twitter by deleting your account, or when Twitter deletes your account.

*Spam is any unsolicited—usually irrelevant or inappropriate—message sent on the internet to a large number of recipients


Twitter...

  • Does not own your content. However, becoming a Twitter user gives the platform permission to use your intellectual property—photos, videos, etc.—for free.*
  • Can delete content it does not approve of.
  • Can delete accounts that have been inactive for over six months.
  • Does not check all posts for content violating its standards.
  • Is not responsible for your data plan or phone bill.
  • Is allowed to change this agreement without telling you.
  • May allow some forms of graphic content in Tweets marked as sensitive media.

*Note: This means Twitter does not have to pay you royalties or any compensation to use the content you upload to the platform. Even if you delete content, Twitter may have backup copies or access to content that has been re-shared by other users and not deleted yet.



LEARN MORE: Read Hollaback!’s Twitter Safety Guide

LEARN MORE: Twitter is currently working with Women, Action & the Media on better reporting tools! Check out their badass work here.


Some things we think Twitter is doing well to tackle cyberviolence!

This list was created in collaboration with the Purple Sisters Youth Advisory Committee of Ottawa.

  • Twitter has made some serious strides in fighting online harassment on its platform. For example, Twitter recently updated its violent threats policy.
  • In April 2015, Twitter introduced another anti-harassment practice: it can now limit a user’s account for a certain time period, or until the user registers a phone number and deletes abusive Tweets.
  • Twitter has also begun to test a product feature to help identify suspected abusive Tweets and limit their reach.
  • Twitter allows users to choose from multiple categories of harassment. While in the past Twitter had allowed users to report spam, the new tools allow users to report harassment, impersonations, self‑harm, suicide and, perhaps the most proactive, harassment on behalf of others.
  • Within the harassment & abuse category, you can also report whether users are targeting you or someone else (yay, bystander intervention!).
  • Twitter opened account verification to the public with an open application process: users fill out a form with a verified phone number and email address, a profile photo, and additional information on why verification is required or helpful. Verified users get more tools for filtering their notifications, and the requirement to use a real name and photo restricts the anonymity many trolls rely on.
  • Twitter further cracked down on abuse and unveiled a new quality content filter designed to automatically prevent users from seeing or being exposed to harassment, violent messages or harmful content. This feature takes into account a wide range of signals and context that frequently correlate with abuse, including the age of the account itself and the similarity of the Tweet to other content that Twitter’s safety team has in the past independently determined to be abusive.
  • Twitter established a blocked accounts page. This feature allows users to more easily manage the list of Twitter accounts they have blocked (rather than relying on third-party apps, as many did before).
  • Twitter’s Mute function is a feature that allows you to remove an account's Tweets from your timeline without unfollowing or blocking that account. Muted accounts will not know that you've muted them and you can unmute them at any time.
  • Users who violate community standards and policies now face heavier sanctions and consequences (e.g., the harasser of Ghostbusters star Leslie Jones was permanently banned from the platform for ongoing gender-based and race-based harassment).
  • Twitter has tripled the size of its support team handling abuse and harassment reports and added rules prohibiting ‘revenge porn’.
  • Tweet location is off by default, and as a user you need to opt in to the service.
  • Twitter recently announced a new initiative to combat harassment on its platform: the Twitter Trust & Safety Council.
  • Twitter has partnered with a long list of organizations for support and safety; for links to their Twitter profiles, click here.
  • Twitter introduced the new Safety Center, a resource for anyone to learn more about online safety, on Twitter and beyond. It is organized around Twitter’s tools and policies to address safety, with sections created especially for teens, parents and educators (Cartes, 2015).

Key recommendations for Twitter on tackling cyberviolence

These recommendations were created in collaboration with the Purple Sisters Youth Advisory Committee of Ottawa. 

Twitter should. . .

  • Diversify its leadership—including expanding opportunities for women and LGBTQ+ people. Twitter’s own 2014 report reveals the company’s leadership is 79% male and 72% white.
  • More broadly and clearly define what constitutes online harassment and abuse in order to increase accountability for a broad range of abusive behaviours. This includes explicitly condemning abuse and harassment based on race, gender identity, sexuality, class, ability, ethnicity, and religion in its policy.
  • Use the language of non-consensual sharing of intimate images instead of 'revenge porn'.
  • Develop new policies that recognize and address current methods abusers and harassers may use to manipulate and evade Twitter’s evidence requirements, for example harassing someone and then deleting the post.
  • Address sub-Tweeting, which has been identified as a higher-risk form of engagement that leaves certain people more vulnerable to experiencing online harassment and violence.
  • Provide users with the option to turn off previews or filter images attached to posts.
  • Hold online abusers more accountable for their actions. Suspensions for harassment or abuse are currently indistinguishable from suspensions for spam, trademark infringement, etc.
  • Change evidence requirements to reflect the reality of cyberviolence—including allowing screenshots to be used as evidence.
  • Ensure the teams responding to reports of cyberviolence have training on gender-based violence issues informed by experts in the field—including frontline workers and survivors of cyberviolence.
  • Make terms of service—particularly policies on data use—easier to find and easier to read.
  • Ensure location services are “off” as the default and make it clearer to users when location settings are on.
  • Adopt & advocate for open Certificate Transparency monitoring, which makes it possible to know which certificates a CT-enforcing browser will trust. This gives social media platforms more capacity to monitor for and verify malicious access, certificates and code from third parties.
  • Restrict or completely shut down third-party applications.
  • Make safety and support services accessible to users—including providing easy-to-find and easy-to-read information on local support services.
  • Build long-term partnerships with anti-violence experts and frontline workers. 

Read our general recommendations for social media platforms.
