Today we had a Markle Foundation Task Force Presentation on “Creating a Trusted Network for Homeland Security”. Essentially, they want to build a computer architecture for sharing information across local, state, and federal law enforcement and foreign intelligence. They demo’ed an application that the FBI would use to communicate between field agents in different offices, and up and down the ladder of authority, using a scenario of informants reporting information related to a possible bioterrorist attack.

As presented, this is pretty difficult to get worked up about, either as a civil liberties issue or as something that the Markle Foundation needs to spend its charitable dollars on. Regarding civil liberties, everyone thinks that the FBI should use the information it has more efficiently and that local and state authorities are valuable in law enforcement and prevention efforts. There are, however, serious questions about information sharing between foreign intelligence services and domestic law enforcement. As many people know, the 1975-76 Church Committee hearings documented extraordinary federal government abuses of surveillance powers, including the NSA’s Operation Shamrock and Operation Minaret, the CIA’s Operation CHAOS, and the FBI’s COINTELPRO campaign of domestic harassment of dissenters and anti-war protesters, which included illegal wiretapping. Congress reacted by establishing different standards for surveillance for domestic and foreign intelligence purposes, and by preventing end runs around the higher domestic standards by limiting information sharing. See also: ATTORNEY GENERAL’S GUIDELINES FOR INFORMATION SHARING; EPIC’s resources on foreign intelligence and domestic law enforcement information sharing.

But other than that, the technology looks very useful.

Which is why I wonder why the private sector hasn’t approached the FBI to sell it something like this. Clearly, there’s a lot of money to be made from a big client like the Federal Government, and a lot of businesses would want to use a similar service as well. Why does the Government need the Markle Foundation to point out something like this to it?

It certainly isn’t because, as demonstrated to us, the system is any more privacy-friendly than a system that would be developed by a business. The only indication of any privacy protections encoded into the system is that video and IM chats conducted through the system are logged, as, I imagine, emails and searches are.

While logging enables you to go back and see what was done during the investigation (e.g., whether improper searches were made), it’s also a privacy problem, because then you have all those chats and searches about Richard Jewell or Steven Hatfill lying around. These remain in the system as evidence of suspiciousness, even if the suspect is later cleared. Worse, if, as the demo showed, the software will aggregate field officers’ interests to determine which people or threats deserve additional attention, sort of a worry aggregator, then we’re in danger of a self-reinforcing feedback loop making more Jewells or Hatfills than we would otherwise have.
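To make the feedback-loop worry concrete, here is a toy simulation (entirely my own sketch, not anything from the demo’ed software): officers first query suspects at random, and in later rounds they preferentially query whoever the aggregator already ranks highest. Small, arbitrary initial differences in attention get amplified into a runaway leader.

```python
import random

random.seed(1)  # fixed seed so the sketch is reproducible

# Hypothetical "worry aggregator": a count of how many officer
# queries each suspect has accumulated.
suspects = {name: 0 for name in ["A", "B", "C", "D"]}

# Round 0: 20 officers each query a suspect at random.
for _ in range(20):
    suspects[random.choice(list(suspects))] += 1

# Later rounds: officers query suspects with probability proportional
# to their current aggregated score -- the self-reinforcing loop.
for _ in range(10):
    for _ in range(20):
        names = list(suspects)
        weights = [suspects[n] for n in names]
        chosen = random.choices(names, weights=weights)[0]
        suspects[chosen] += 1

# Ranking after the loop: whoever got lucky early now dominates,
# though no suspect was ever objectively more suspicious.
print(sorted(suspects.items(), key=lambda kv: -kv[1]))
```

The point of the sketch is that the final ranking reflects the history of attention, not any underlying difference between the suspects.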

This then raises more questions than it answers. First and foremost is how comfortable we should be with a technology that makes it easier and cheaper to investigate more people, without additional limitations, either in the form of policy, or better, in the form of technological constraints that enforce a privacy-friendly policy.

Second, the technology doesn’t distinguish between the types of information that might go into the system. It’s one thing to make a giant distributed database of informant leads, and another to add in all the other transactional data on innocent citizens that private companies collect and are selling or giving to the government: shopping records, educational records, flight patterns, credit histories and the like. Will this information go into the database, and if it does, will we treat it differently depending on the situation, the sensitivity of the information, and so on? The program doesn’t appear to touch this issue.

Third, what do we use this information for? The Task Force assured us that the program was built to enable information sharing about existing suspects (subject-based queries, in Jeff Jonas’ words), not to do some kind of terrorist profiling (pattern-based queries). This would be good if it were true, since I think it’s nearly impossible to create an accurate terrorist profile from the small sample that we have (the false-negative problem), and the risk of false positives is huge. But an effective system could easily be used for profiling, and there are no safeguards built in to monitor or prevent that.

Some members of the task force seemed to be saying that this was not a political debate, and that policy would guide the use of the technology. But our relationship with technology is old enough that we should know better than that. In the 1960s Jacques Ellul identified a “technological imperative,” and his insights haunted me throughout the nuclear Reagan years. In an era where Code is Law, policy constraints are weak against an unfettered, unlimited technology: just look at copyright law and peer-to-peer. Once we make policy choices about information sharing, privacy and civil liberties, the technologies we build and adopt must promote, not undermine, those choices. I fear an information aggregation technology with no constraints other than a paper trail.