Can robots invade your privacy?

In advance of the Trottier Symposium, Ian Kerr discusses robots, personal privacy, and human-robot interaction
“Society’s obsession with the idea of super-intelligent machines is actually the problem here,” says incoming Trottier Symposium lecturer, Ian Kerr. “It obfuscates and undermines our investigation of the actual risks posed by robots and AI right here, right now.”

Ian Kerr, Canada Research Chair in Ethics, Law and Technology at the University of Ottawa, is an expert on AI and robots, and specifically on the legal issues surrounding them. A global leader in the field of privacy law, Dr. Kerr will apply a legal lens to our increasingly technological world, examining what (or who) governs these technologies if they do not possess a conscience. Can Robots Invade Your Privacy? will be presented at the two-day Trottier Public Science Symposium on Monday, October 29, at the Centre Mont-Royal. For more information on #Trottier2018, please visit the Trottier Symposium webpage. Free; first-come, first-served. Register here.

In advance of the Symposium, Dr. Kerr spoke to the Reporter about robots, personal privacy, and human-robot interaction.

The title of your lecture, Can Robots Invade Your Privacy?, sounds like science fiction! How can this be the subject of the Trottier Public Science Symposium, which is dedicated to separating sense from nonsense? 

I guess I don’t blame you for thinking that this sounds like fiction. After all, every media item on robotics and AI law and policy starts either with the Terminator or a reference to Asimov’s Laws of Robotics, which is such a distraction from the real issues. The nonsense here actually results from privacy law, which currently says that only conscious, sentient beings can invade our privacy.

So, you think robots and AIs will become conscious and sentient?

No, no. Nothing could be further from the truth. I think society’s obsession with the idea of super-intelligent machines is actually the problem here. It obfuscates and undermines our investigation of the actual risks posed by robots and AI right here, right now. Robots and AI can already be used to form beliefs about people. Robots use those beliefs to sort us into social categories. This allows decisions to be made about us – often without our knowledge or understanding – and those decisions can affect our life chances and opportunities in important ways.

You are talking about profiling. But how is this different from what we are doing now with Big Data?

You are right to point out that this is going on already and that big data is an important driver. But now, in addition to crunching the numbers, forming beliefs, and making decisions about us – decisions that were once exclusively within the purview of human judgment – robots have improved abilities to actuate those decisions on their own with real life consequences and without human intervention or oversight. This drastically changes the accountability of decision-making and can affect social justice, big time.

Can you give us an example? 

Sure. A current example would be the US National Security Agency’s PRISM and UPSTREAM programs. In essence these programs use robots rather than humans to intercept and read emails and other online communications in order to make decisions about people, often with a discriminatory effect. It is the use of AI that is said to justify these programs. The constitutionality of mass surveillance is cast aside in favour of what governments are now calling the “bulk collection” of data.

Their claim is that since there are only robot eyes (and not human eyes) on the data, the right to privacy and other constitutionally enshrined rights are not in play. I wonder if the same justifications will be offered when surveillance robots – some already on the market; others under development – begin to project force or otherwise limit the liberty of citizens with no human officers on the scene?

This is heavy stuff. So what else will you say about robots and privacy at the Trottier Public Science Symposium? 

Well, I don’t want to be a plot spoiler but I think that a better understanding of robots and privacy requires a deeper dive into an interesting new field of applied science called “human-robot interaction.”

Whether we are talking about software bots or robot sex (yes, that’s a thing!), we have this incredible tendency to anthropomorphize machines, imbuing them with all sorts of human characteristics that they simply do not possess. Our tendency to project humanity onto machines impacts our willingness to interact with them and trust them. This has huge implications for the uptake of robots and AI in society. It also makes us vulnerable to manipulation by those who want us to trust these machines, because robots can be designed to make us trust them.

When this happens, we forget about the fact that they are packed with sensors and in constant communication with the Mothership. I am gonna talk a lot about this, which is pretty cool and compelling stuff. My aim will be to help the McGill Office for Science and Society separate the sense from the nonsense.