
Tech and terrorism – part 1

This past week I had the opportunity to attend a fascinating workshop in Montreal, sponsored by Concordia University's Montreal Institute for Genocide and Human Rights Studies (MIGS), entitled 'Tech Against Terrorism'. Academics, private sector firms and think tanks came together to talk about the challenges of identifying and removing terrorist content from the Internet and from social media. If you have read anything on this topic lately, you know that a lot of people are worried about it and demanding answers.

Some of the top providers – Twitter, Google, Facebook and YouTube among them – have come under a lot of pressure over the fact that their platforms are being used by terrorist groups like Islamic State to post propaganda and recruit new members. Some states are not convinced that the tech giants are doing enough, and the European Union has threatened to enact legislation forcing the companies to show, within three months, that they can remove content within an hour of its being flagged. Twitter appears to be doing its part: since August 2015 it has taken down over 935,000 accounts linked to terrorism.

Regardless of whether the firms act of their own accord or are forced to do so by governments, there are a lot more questions than answers when it comes to what to do about the terrorist use of the Internet and social media. Here are a few of those outstanding issues.

  1. Who decides what constitutes objectionable material?  A beheading video is an easy call, but much other content is far from clearly terrorist in nature.  Mistakes can infringe on privacy and freedom of speech, as YouTube found out when it took down right-wing material in the wake of the Florida school shooting.
  2. Does taking down material work?  I learned at the workshop that much of what is removed has already been cached and reappears as soon as it disappears.  I also know that IS terrorists whose Twitter accounts have been suspended simply create new ones, and drawing Twitter's attention has perversely become a badge of honour of sorts.  At times it seems we are playing an online version of Whack-a-Mole.
  3. Can tech companies keep up with the volume?  There is a lot of garbage on the Internet and only so many people to detect and neutralise it.  Many companies, including a few small start-ups, are building artificial intelligence and machine learning algorithms to do the job automatically.  The technology is far from perfect – the algorithms are, after all, still designed by humans – and there remains a risk of false positives (i.e. material removed that is not terrorist in nature) and false negatives (i.e. material deemed acceptable that really should not be out there); a simple sketch of this trade-off follows the list.
  4. What about the needs of security intelligence and law enforcement agencies?  These agencies use online content to judge risk and intent and to pursue legitimate cases.  Furthermore, if terrorists keep changing account names because Twitter keeps shutting them down, this complicates the acquisition of court-authorised intercept warrants.
  5. Do we really have to do this?  In other words, are we sure that this material is so harmful that it has to be eliminated?  I am not so sure on this point.  Jihadis have been posting millions of videos, tweets and other material for decades now, and yet their numbers remain very small.  In other words, the vast majority of people targeted by terrorist groups have in effect told the terrorists to go to hell.  Wouldn't it be more interesting to ask why most people are not affected by this material rather than why a tiny sliver is?
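
The trade-off described in item 3 is easy to see in miniature. The sketch below is purely illustrative: the posts, scores and thresholds are invented for this piece and do not represent any platform's actual classifier, data or removal policy.

```python
# A toy illustration of the false positive / false negative trade-off.
# The "scores" are made-up numbers standing in for a hypothetical model's
# confidence that a post is terrorist content.

# (label, score): label is the call a human moderator would make,
# score is the model's confidence that the post should come down.
posts = [
    ("terrorist", 0.95),
    ("terrorist", 0.62),    # borderline propaganda the model is unsure about
    ("news_report", 0.71),  # footage posted by a journalist, not an endorsement
    ("satire", 0.15),
    ("terrorist", 0.40),
    ("news_report", 0.20),
]

def evaluate(threshold):
    """Count the two kinds of mistakes at a given removal threshold."""
    false_positives = sum(1 for label, score in posts
                          if score >= threshold and label != "terrorist")
    false_negatives = sum(1 for label, score in posts
                          if score < threshold and label == "terrorist")
    return false_positives, false_negatives

# Lowering the threshold removes more terrorist material but also more
# legitimate posts; raising it does the opposite. No setting drives both
# error counts to zero, even on this tiny invented sample.
for threshold in (0.3, 0.5, 0.7, 0.9):
    fp, fn = evaluate(threshold)
    print(f"threshold={threshold:.1f}  false positives={fp}  false negatives={fn}")
```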

I do not have the answers to these questions, and I did not hear a lot of assuredness in Montreal this week. It is not that some very smart people are not trying; they are putting their hearts and souls into this difficult work and should be commended.  It's just that, as with pretty well everything else in life, there is no simple solution.  Bad people have been using new technologies for evil purposes forever: dynamite, invented by Alfred Nobel to aid mining operations, was, after all, the tool of choice for the anarchist terrorists of the 19th and early 20th centuries.  I fear that any progress we make will evaporate once the terrorists adapt to the next new thing.


By Phil Gurski

Phil Gurski is the President and CEO of Borealis Threat and Risk Consulting Ltd. Phil is a 32-year veteran of CSE and CSIS and the author of six books on terrorism.
