The story that broke on Friday about a traveler risk-scoring system called the Automated Targeting System, or "ATS," evokes an image of an Orwellian world in which the State compiles a secret dossier on every individual and sorts the population according to secret criteria, assigning each person a "risk score." The individual has no recourse to challenge his risk rating and no way of correcting any false or incomplete information about him. In fact, he will never know what information is being used against him, or even the criteria on which he has been judged a risk to the State. It is a disturbing image, and the fact that the government has been conducting the ATS program in secret for four years has shocked many people. Yet the ATS is hardly a surprise to those who have been keeping track of similar programs.
First, there was Total Information Awareness, or "TIA," a program that was to data mine "the transaction space" in order to single out people who might be terrorists. Then there was the Multistate Anti-Terrorism Information Exchange, or "MATRIX," which linked together state and commercial information and was probably a data-mining program. In a test run of their technology for government officials, its developers boasted that they had found 120,000 likely terrorists living in the United States. In the area of travel, the second-generation Computer Assisted Passenger Prescreening System, or "CAPPS II," was to data mine airline and commercial information in order to score travelers as red, green, or amber risks. Its successor program, "Secure Flight," attempted much the same thing. Then, in the area of telecommunications, there was the NSA program, secretly authorized by the President to data mine the telephone calls and emails of the American people.
All of these programs, except for the NSA's, were ostensibly scrapped by the government or Congress. Americans thought TIA was just too creepy, states opted out of MATRIX in droves because it was so intrusive, the GAO said that CAPPS II was ineffective in identifying possible terrorists, and Secure Flight was killed after it was caught risk scoring, which Congress had expressly forbidden it to do. Yet none of these programs ever really went away. Instead, they were simply repackaged, or carried on in secret, like the ATS.
Data mining is the use of computer algorithms to search masses of information for specified criteria. Risk scoring is a statistical rating of how closely an individual matches those criteria. The government is using these two techniques to sort through the masses of information it has been gathering and buying from private data-aggregating companies since 9/11, in order to watch every transaction made by the American population, and by populations outside the United States, all of the time. This is mass surveillance, and it is global in scope. Domestic systems feed into global ones, and global systems (biometric passports, the sharing of airline reservation system information, the interception of international banking records, and the interception of global communications, to name a few) feed into the domestic.
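To make the two terms concrete, here is a minimal sketch of how a rule-based risk scorer of this general kind works. It is purely illustrative: the real ATS criteria and weights are secret, and every field name, rule, and weight below is invented for demonstration.

```python
# Illustrative sketch only: the actual ATS criteria and weights are secret.
# All field names, rules, and weights here are invented for demonstration.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Criterion:
    description: str
    weight: float
    matches: Callable[[dict], bool]  # predicate applied to a traveler record

# Hypothetical criteria of the kind such systems are reported to use.
CRITERIA = [
    Criterion("paid cash for ticket", 2.0, lambda r: r.get("payment") == "cash"),
    Criterion("one-way itinerary", 1.5, lambda r: r.get("itinerary") == "one-way"),
    Criterion("no checked baggage", 1.0, lambda r: r.get("bags", 0) == 0),
]

def risk_score(record: dict) -> float:
    """Sum the weights of every criterion the record matches."""
    return sum(c.weight for c in CRITERIA if c.matches(record))

traveler = {"payment": "cash", "itinerary": "one-way", "bags": 1}
print(risk_score(traveler))  # 3.5 -- flagged if above some undisclosed threshold
```

Note that nothing in such a scheme requires evidence about the individual; a score is simply a weighted tally of how many broad patterns a person happens to match.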
The purpose of data mining is not to check individuals' personal information against information about known terrorists, or those suspected of terrorism on "reasonable grounds," as they cross borders, send emails, or access public services. Its purpose is to predict who might be a terrorist, a little like the film "Minority Report," in which officials stop criminal acts before they happen by reading people's minds. The technology being used today, however, falls far short of that Hollywood fantasy.
First, the information on which data mining and risk scoring depend is often inaccurate, lacking context, dated, or incomplete. And like the ATS program, data mining and risk scoring programs never contain a mechanism by which individuals can correct, contextualize, or object to the information that is being used against them, or even know what it is. Operating on a "preemption" principle, these systems are uninterested in this kind of precision. They would be bogged down if they were held to the ordinary standards of access, accuracy, and accountability.
Second, the criteria used to sort masses of data will always be over-inclusive and mechanical. Data mining is like assessing guilt by Google keyword searches. And because these systems use broad markers for predicting terrorism, ethnic and religious profiling are endemic to them.
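The over-inclusiveness is not merely a matter of tuning; it follows from simple arithmetic. A back-of-the-envelope calculation (all the numbers below are assumptions chosen for illustration, not figures from any program) shows why: even a scorer that correctly classifies 99 percent of the people it examines, applied to a population in which actual terrorists are vanishingly rare, buries its handful of true positives under millions of falsely flagged innocents.

```python
# Back-of-the-envelope base-rate arithmetic; every input is an assumed figure.
population = 300_000_000    # roughly the U.S. population
terrorists = 1_000          # a generous assumption
true_positive_rate = 0.99   # fraction of actual terrorists the scorer flags
false_positive_rate = 0.01  # fraction of innocents the scorer wrongly flags

flagged_guilty = terrorists * true_positive_rate
flagged_innocent = (population - terrorists) * false_positive_rate

print(f"innocents flagged:  {flagged_innocent:,.0f}")  # ~3,000,000
print(f"terrorists flagged: {flagged_guilty:,.0f}")    # ~990
# Probability that a flagged person is actually a terrorist:
print(f"{flagged_guilty / (flagged_guilty + flagged_innocent):.3%}")  # ~0.033%
```

On these assumptions, more than 99.9 percent of the people flagged by a 99-percent-accurate system are innocent, which is why broad markers inevitably sweep in entire communities.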
Welcome to the national insecurity state, where our virtual identities are continually assessed for the risk we pose to the state, and where the normal relationship between the individual and the state in democratic societies is turned on its head. Now the individual answers to the state, and woe betide the person who is branded with a high "risk score."
MAUREEN WEBB is a human rights lawyer and activist. She has spoken extensively on post-September 11 security and human rights issues, most recently testifying before the House and Senate Committees reviewing the Canadian Anti-terrorism Act. In 2001, Webb was a Fellow at the Human Rights Institute at Columbia University in New York. A litigator for some of the first constitutional cases heard under Canada's Charter of Rights and Freedoms, including the landmark freedom of association case, "Lavigne," and a case challenging the powers of Canada's newly instituted spy agency, CSIS, she sits as co-chair of the International Civil Liberties Monitoring Group. She is also the Coordinator for Security and Human Rights issues for Lawyers' Rights Watch Canada. Her first book, Illusions of Security: Global Surveillance and Democracy in the Post-9/11 World, will be published by City Lights in February 2007.