The Dangers of Automated Abuse/DMCA Policies

The rise of the robots must be checked...

Twitter user @Dog_rates runs a popular account dedicated to the rating of dog images and gifs. Operated by Matt Nelson, the account is full of adorable puppies and dogs, almost all with high ratings.

However, earlier this week the account was briefly suspended by Twitter, which claimed that the account had received multiple notices of copyright infringement under the Digital Millennium Copyright Act (DMCA).

However, there was a serious problem with the case. As explained on The Daily Dot, all of the notices were sent by the same person, identified only as Myteenquote123@gmail.com. Nelson claims that the person behind that email address posed as several different people and filed multiple copyright infringement notices for images he had permission to use.

Nelson was able to get his account restored, but only after he agreed to remove the images involved in the notices. Those photos remain down. The reasons for Myteenquote123’s actions remain vexing, however: he first claimed the account was an imposter but later turned to contrition, saying, “I’m going through a tough time I’m really sorry.”

The case comes on the heels of a series of high-profile account suspensions and limitations that created chaos on YouTube. Those, according to YouTube, were caused by changes in the algorithm that handles spam, copyright and other complaints. The company claims to have fixed the issue.

Both of these stories highlight the dangers of automating responses to abuse complaints and of relying on hard-line rules for how such situations are handled.

Simply put, the more you automate and the more hard-line rules you write, the easier it is for things to go wrong, whether through malice or accident.

The Tumblr Conundrum

This isn’t the first time that we’ve reported on such a hard-line rule for DMCA notices.

In June 2015, Tumblr found itself in the spotlight in a case over DMCA notices. Tumblr closed the blog of The Coquette, a pseudonymous advice blogger and one of the most popular accounts on the site, after she ran afoul of its “three strikes” policy on copyright infringement notices.

Tumblr was quick to blame the law for the banning but, while the DMCA does require hosts to ban repeat infringers, it gives them wide leeway in determining what is or is not a repeat infringer. There is no “three strikes” rule in the law; it’s just a common approach used by hosts to develop an easy-to-follow and consistent policy.

But these consistent policies create their own problems. In the case of Coquette, it meant that three DMCA notices separated by over a year resulted in her account being deleted. In the case of Nelson, it meant that one person could file a handful of notices and almost immediately get an account suspended.

To be clear, automation and solid guidelines are powerful tools, but when they aren’t balanced with human judgment, they can easily go off the rails.

The Need to Sanity Check

The temptation to automate and simplify abuse policies is obvious. YouTube gets an estimated one hour of video uploaded every second, Tumblr has some 270 million blogs and Twitter has some 320 million monthly active users.

Attempting to handle the resulting torrent of copyright, spam and other abuse issues by hand would be impossible. Tools to automate and policies to simplify abuse handling become critical at this scale.

But too much reliance on those tools is a bad thing. For example, a hard three-strikes policy makes situations like the @Dog_rates suspension possible. If, instead of banning the account immediately, a human had performed a sanity check, they likely would have realized three things (see the sketch after the list).

  1. The account was well established and had no history of copyright issues.
  2. All of the notices were sent from the same email address (and likely same IP address).
  3. The notices were likely from the same person, based on their timing and similarities.
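
As an illustration only, here is a minimal sketch of what such a pre-suspension sanity check might look like, assuming hypothetical inputs such as the account’s age, its prior strikes and the sender details attached to each notice. None of the names here reflect Twitter’s actual systems.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta


@dataclass
class Notice:
    sender_email: str
    sender_ip: str
    received_at: datetime


@dataclass
class Account:
    created_at: datetime
    prior_strikes: int


def needs_human_review(account: Account, notices: list[Notice]) -> bool:
    """Flag a pending suspension for manual review instead of acting automatically."""
    now = datetime.utcnow()

    # 1. The account is well established and has no history of copyright issues.
    established = (now - account.created_at) > timedelta(days=365) \
        and account.prior_strikes == 0

    # 2. All notices come from a single email address (and likely a single IP).
    single_source = len({n.sender_email for n in notices}) == 1 \
        or len({n.sender_ip for n in notices}) == 1

    # 3. The notices arrive in a tight burst, suggesting one person filed them all.
    times = sorted(n.received_at for n in notices)
    burst = len(times) > 1 and (times[-1] - times[0]) < timedelta(hours=24)

    # Any of these signals means a human should look before the account is suspended.
    return established or single_source or burst
```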

These sanity checks wouldn’t be that difficult to perform. According to its transparency report, in June of 2015 (the latest month for which data is available) Twitter received 2,428 copyright notices impacting some 3,669 accounts. Even if 10% of those accounts were in danger of suspension, that would only be around a dozen accounts per day to check. It should be more than doable.
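
As a rough back-of-envelope check of that figure (assuming the numbers above cover the 30 days of June alone):

```python
notices = 2_428            # copyright notices Twitter reported for the period
accounts_affected = 3_669  # accounts those notices touched
at_risk_share = 0.10       # assume one in ten is actually in danger of suspension
days = 30                  # days in June

per_day = accounts_affected * at_risk_share / days
print(round(per_day, 1))   # ~12.2 accounts per day needing a human look
```

If the figures instead cover Twitter’s full January–June reporting period, the per-day number would be smaller still.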

The same can be said for YouTube. While it makes sense for the site to automatically take action in certain situations, such as Content ID notifications, repeated spam flagging and so forth, there needs to be a way to contest such actions with a real person.

Furthermore, as YouTube showed, when you rely heavily on algorithms to handle abuse issues, those algorithms can be abused or go haywire when you try to tweak them. Whether it’s YouTuber I Hate Everything having his account deleted over alleged spamming issues or Channel Awesome losing monetization over a single copyright notice, an algorithm can take down some of your best users for little to no reason.

This is why, when it comes to abuse issues, the human factor is just as important as the automated one.

Tying Them Together

The problem is fairly straightforward. Once a site gets to be of a certain size, it needs to use automated tools and strong policies to streamline abuse issues. However, the key is figuring out which issues can be streamlined and which can’t.

For example, YouTube and Twitter both use a flagging system by which users can report suspect content for issues such as spam. Likewise, Twitter, Tumblr and YouTube all have DMCA forms that make it easy to semi-automatically remove allegedly infringing material.

These things are fine by themselves. The problem comes from brigading, when groups of people file false spam reports to remove content they don’t like, or outright falsification, where one person files multiple fake reports.

The solution to these problems is to have a buffer between the automated systems and any serious actions. While removing allegedly infringing content pending a DMCA counter-notice is necessary unless the notice is clearly false or incomplete, suspending or deleting an account over multiple copyright infractions warrants a human check.
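
A minimal sketch of that kind of buffer, assuming a hypothetical pipeline in which content-level takedowns are automated but account-level actions are parked in a human review queue (the names are illustrative, not any site’s real API):

```python
from enum import Enum, auto


class Action(Enum):
    REMOVE_CONTENT = auto()   # routine: take down a single allegedly infringing post
    SUSPEND_ACCOUNT = auto()  # serious: account-level consequence
    DELETE_ACCOUNT = auto()   # serious: permanent account-level consequence


SERIOUS_ACTIONS = {Action.SUSPEND_ACCOUNT, Action.DELETE_ACCOUNT}
human_review_queue: list[dict] = []  # stand-in for a real ticketing/review system


def handle_abuse_report(action: Action, target: str, report: dict) -> str:
    """Automate the routine, but buffer serious actions behind a human."""
    if report.get("clearly_invalid"):
        # e.g. an obviously false or incomplete DMCA notice is rejected outright
        return "rejected"
    if action not in SERIOUS_ACTIONS:
        # Routine step: remove the content pending a possible counter-notice.
        return f"auto-applied {action.name} to {target}"
    # Serious step: queue it so a person sanity-checks before anything happens.
    human_review_queue.append({"action": action.name, "target": target, "report": report})
    return f"queued {action.name} on {target} for human review"


# Example: the takedown goes through automatically, the suspension waits for a human.
handle_abuse_report(Action.REMOVE_CONTENT, "post/123", {"type": "dmca"})
handle_abuse_report(Action.SUSPEND_ACCOUNT, "@Dog_rates", {"type": "dmca", "strikes": 3})
```

The design choice is simply that nothing irreversible happens without a person in the loop, while routine takedowns stay fast.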

And therein lies the problem. In too many cases, humans are only stepping in after the fact. While mistakes happen no matter how deeply involved humans are, we’re seeing, time and time again, human moderators ceding more and more control to strict policies and algorithms, only to be forced to jump in after a virtual fire has started.

All online sites need rules, and large ones need automation to enforce those rules quickly and consistently, but they also need human judgment and flexibility to apply them fairly and accurately.

The best abuse policies combine the two approaches and take the best of both worlds.

Bottom Line

Whenever stories such as these come up, there’s a great deal of talk about how laws are abused. Indeed, laws like the DMCA are abused, but there are also laws in place to fight back against such abuse, such as rules against filing false DMCA notices.

But what’s often easier to abuse than the law itself is the abuse systems put in place by larger sites. When a certain number of flags or a handful of DMCA notices will result in an account being suspended, even when the law doesn’t require it, others know how to shut down those they don’t like.

Tinkering with the laws will do no good if the abuse procedures at large hosts make it easy to find other ways to shutter accounts and silence critics.

Finding the balance between automation and human intervention is key for large hosts. It’s clear from recent events that Twitter has some work to do and, for others, this should serve as a cautionary tale.

Whether it’s an algorithm or a policy that’s set in stone, when you remove the human factor from abuse decisions, the bad guys have a playbook to do damage to your site and your users.

Via @Arlnee