Summer 2023 Edition

Predictive Terrorism Technology: A Slippery Slope for Society

Sophie Ulm
Staff Writer

The use of predictive technology in counterterrorism has emerged as a preventative measure, aiming to stop attacks before they occur. These developments come with the ambition of creating a safer environment and reducing the threat of violent extremist action. Yet while many have heralded such technology as a means of further protecting public safety, it also presents a clear threat to individual privacy and freedom, one that can be seen across the world.
Humanity’s desire for safety and protection is strong, as demonstrated by the growing prevalence of predictive technology. But many individuals have not yet begun to grapple with the potential negative effects of these measures, which have already appeared in several instances. In China, predictive technology has increasingly been used to continue the oppression of the Uyghur population and other groups seen as societal outsiders. In France, artificial intelligence and predictive technology have been used in ways that create conflicting requirements for individuals suspected of being threats, leaving them with no way to meet the standards set to lift their restrictions. In the United States, predictive artificial intelligence has been known to entrench racial biases rather than correct them.
Yet perhaps the most troubling use of predictive measures in counterterrorism is when they are meant for good but still produce missteps or failures in the justice system. What happens when reliance is placed on such predictions, yet the predictions are unverifiable, or even wrong? The balance between protecting people’s safety and preserving their autonomy is one that many nations are struggling to strike, and one of which many people remain largely unaware.
Australia is a leader in preventative terrorism laws, which target potential offenders before they have a chance to commit crimes. Using intelligence to identify threats, Australia has implemented continuing detention orders and extended supervision orders, which The Guardian reports allow people to be imprisoned for three years or placed under extensive supervision “on the basis of predicted crimes,” even if they have not yet committed any.
The use of predictive technology in Australia is not as politically motivated, or even as pervasive, as in other countries, but that does not mean it is without fault. Australia’s system has erred on multiple occasions, resulting in the unfair or questionable treatment of many people, some of whom were not even aware that mistakes had been made.
In April of 2023, the government came under fire for its continued use of a tool that independent investigators had sharply criticized, the Violent Extremism Risk Assessment 2 Revised (VERA-2R). According to The Guardian, the tool is used to assess whether extremists should be subject to strict court orders after their sentences, such as extended detention or regular check-ins with police. A taxpayer-funded report by Drs. Emily Corner and Helen Taylor, submitted in May of 2020, found that “the lack of evidence” backing the tool had “serious implications for its validity and reliability.” The report, however, remained largely undisclosed to lawyers and defendants for two years.
According to the Australian government, the VERA system was designed for individuals who had committed acts of extreme violence or terrorism-related offenses, and as such it was first used primarily in prison settings. Over time, its use has expanded considerably, extending to those who might commit terrorist or extremist acts in the future. The validity of VERA-2R is also difficult to measure, as the information needed to do so is not widely published or available to the public. As a result, research into its effectiveness and necessity is limited.
Yet that might not be the greatest flaw in the VERA-2R system. Earlier this year, it was found that the tool rated individuals as posing a greater risk of committing crimes if they were autistic or had a mental illness, despite having no empirical basis for doing so. A report released by the Australian government and academics at the Australian National University found that the tool was not “able to predict their specified risks with anything other than chance.” Despite efforts to acknowledge and address these problems, the federal government reportedly used the tool 14 times after being made aware of them.
Moreover, the federal government did not communicate these issues to the government of New South Wales, which continued to use results from the software to extend the monitoring and detention of individuals who had finished their sentences. Once made aware of the issues, the New South Wales government in turn did not alert the attorneys of those affected by the errors. When a potential threat is identified, the Australian government holds the right to restrict an individual’s travel, work, and education, meaning that a person who is falsely flagged can have their entire life put on hold without a clear end date.
Hayley Le, a lawyer who represents a number of men assessed using the VERA-2R tool, told The Guardian that the New South Wales court did not disclose that it held the independent report regarding one of her clients. Le said that her client had not committed a terrorist act, had possible mental health issues, and had faced circumstances that negatively affected their reintegration into the community. As a result of findings based on the VERA-2R tool, Le’s client was not allowed to leave the country or begin any jobs, volunteer work, or education courses without government approval. Only after the report was disclosed to Le and her client did New South Wales offer to drop its case for an extended supervision order.
One of the main concerns with extended supervision orders is that they do not appear to have rehabilitation as a goal. The orders keep their subjects out of the general population for only a slightly extended period and, as with Le’s client, include stipulations that make it much harder for the subject to reenter the community after serving their sentence. Grant Donaldson, the independent national security legislation monitor, noted to The Guardian that while the intent of the orders is protection from and prevention of future terrorist activity, the regime “seemingly quite deliberately does not include rehabilitation or reintegration of the offender into the community.” This, he concluded, was inconsistent with the intended purposes of sentencing, which include both of those aims, and disproportionate to the threat of terrorism.
With all of the issues surrounding its predictive tools and its use of continued detention, Australia has faced some pushback. Human rights groups have pointed out that any wrongly imposed continuing detention order would almost certainly amount to arbitrary detention under the International Covenant on Civil and Political Rights, The Guardian adds. It also remains unclear whether the measures have actually prevented any terrorist acts. The home affairs department cites a number of attacks carried out after the release of convicted offenders in places like England and Austria, while reports by independent investigators say there is no certainty as to the measures’ efficacy. Yet the department maintains that the measures “provide for the management of terrorist offenders in custody and in the community,” and it has no plans to discontinue their use.
That these errors can and do occur is concerning in itself, and all the more so once one realizes how hard it is to know when they are happening. The clearance needed to understand the workings of this system and others like it is extremely high, meaning that those affected by errors may not learn of them until long after the fact. These gaps in information create an environment in which not only the outcome but the process itself is uncertain. And the potential reach of such monitoring remains unknown, as countries striving to prevent terrorist acts may extend today’s limited measures beyond their current bounds and into other spheres.
The biggest challenge in regulating predictive technology is that it is inherently invasive. Defining how invasive a technology can be before it violates an individual’s right to privacy is a slippery slope and a source of contentious disagreement. While the right to privacy is important, governments and organizations such as the United Nations have stated that it is not absolute, though no clear line has ever been drawn on just how far governments may extend their searches.
The intentions behind using predictive technology are, in most cases, not inherently bad. The desire to fight terrorism is noble, but the question of where “terrorism” begins must be considered. Does it begin with the first concerning internet search, or with joining an extremist group? As these events become easier to identify, what level of intervention is necessary to stop them? As technology advances from simple predictive tools to the advanced artificial intelligence now on the rise in many parts of the world, these questions will become far more pressing, and a definition will have to be settled upon.
