Digital Assistants, Privacy, and the Infinite Loop


Digital assistants are idealized as the ultimate technological tool, built to make our lives easier. Apple’s Siri, Google Home, and Amazon’s Alexa are the most popular “intelligent” assistants, with ever-increasing features: managing the lighting and temperature of one’s home, placing online orders, and putting a plethora of information at one’s disposal. The ad below, among the first promotional ads for the device, describes the depth of Amazon Echo’s functionality:

The development of these devices involves a combination of artificial intelligence, natural language processing, and machine learning. However, in the endless labyrinth of what must be the blueprint for a digital assistant, there is one programming construct that stands out—the infinite loop.

Looping is a programming construct, a syntax akin to a grammatical rule, that every first-year computer science student is introduced to, and it is a concept that plagues us to the very end of senior year. A loop is simply the repetition of a process. If I ask you to tell me the sum of the numbers from 1 to 10, how do you go about intuitively finding the answer? You add 1 + 2 to get 3, then 3 + 3 to get 6… At every step you repeat what you did before: you take the sum you already have and add the next number to it. After working through all ten numbers, you have the answer! This is looping.
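The mental process above translates almost word for word into code. Here is a minimal sketch in Python of summing the numbers from 1 to 10 with a loop:

```python
# Sum the numbers from 1 to 10, one repetition at a time.
total = 0
for number in range(1, 11):  # the numbers 1 through 10
    total += number          # add the new number to the sum we already have
print(total)                 # 55
```

Each pass through the loop body is one "iteration"; the running sum carries over from one iteration to the next, exactly as in the mental arithmetic.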

ATMs use an infinite loop (Photo credit: Adam Fagen via Flickr)

Unlike the example above where you stop after ten repetitions, an infinite loop has no end. This would be the case if I asked you to sum all the numbers up to infinity. It’s a scary concept and, as a general rule, programmers dislike infinite loops. However, there are some situations where an infinite loop is ideal. For example, the ATM at your local bank has to continuously prompt new customers for their PIN. The machine is essentially looping through the authentication process for every customer.
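A deliberate infinite loop like the ATM’s might look something like the sketch below. This is hypothetical logic, not any real ATM’s code; the queue of customers here only simulates people walking up to the machine, and in the simulation we break out when the queue is empty so the example can actually finish:

```python
# A sketch of an ATM's main loop. The loop itself is written to run forever;
# each iteration authenticates one customer. (Hypothetical logic for illustration.)
def atm_main_loop(customers):
    """customers: a list of (entered_pin, correct_pin) pairs simulating arrivals."""
    results = []
    while True:                    # the infinite loop: always ready for the next customer
        if not customers:          # simulation only: stop when the demo queue runs out
            break
        entered, correct = customers.pop(0)
        results.append(entered == correct)   # did this customer authenticate?
    return results

print(atm_main_loop([("1234", "1234"), ("0000", "9999")]))  # [True, False]
```

On a real machine there is no `break`: the `while True:` simply spins forever, waiting for the next card to be inserted.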

Now here’s the catch with digital assistants. It’s lovely to have your digital assistant recognize your voice after a prompt like “Hey, Siri.” And it’s easy to justify this at first by claiming that nothing happens unless you say those two words. But what underlying assumption enables this voice activation to occur? The device is always listening… There is an infinite loop buried somewhere in the code that drives its functionality. At this point, I can’t help making the analogy to the telescreens in Orwell’s 1984. Although the telescreens in Orwell’s dystopian novel didn’t do much “assisting,” they certainly established a sense of always watching and always listening. And it is the implications of digital assistants for user privacy that eerily remind me of telescreens.
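To make the point concrete, the always-listening behavior reduces to the same construct as the ATM. The sketch below is purely illustrative; real assistants use on-device speech models, not string matching, and `run_assistant`, its wake word, and the list of frames are all made up for this example. But the control flow is the essential part: every frame of audio passes through the loop, wake word or not.

```python
# Hypothetical sketch of a wake-word loop. A real device would run
# `while True:` over live microphone audio; here a finite list of
# transcribed frames stands in for the endless stream.
def run_assistant(audio_frames, wake_word="hey siri"):
    """Count how many frames would wake the device."""
    activations = 0
    for frame in audio_frames:            # stands in for: while True: frame = mic.read()
        if wake_word in frame.lower():    # the device processes *every* frame it hears
            activations += 1              # ...but only acts when the wake word appears
    return activations

print(run_assistant(["some chatter", "Hey Siri, what time is it?"]))  # 1
```

Notice that the non-matching frame still had to be listened to and examined. That is the privacy crux: the filtering happens after the hearing, not before.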

Apple allows Siri’s listening functionality to be turned off

Our increasing dependency on technology creates a new and interesting conversation about consumer rights and quality assurance, as discussed in a New York Times article in March of 2018. According to the article, “…each company has filed patent applications, many of them still under consideration, that outline an array of possibilities for how devices like these could monitor more of what users say and do. That information could then be used to identify a person’s desires or interests, which could be mined for ads and product recommendations.” Thus, in our quest to make our lives easier, we may become vulnerable to targeted marketing.

Although I certainly don’t want to induce paranoia, I fear I might have done exactly that. The fact is that the technology digital assistants are built on paves the way for debates about privacy. It is the nature of the beast. For voice activation to work, an infinite loop has to keep the device listening. There is no way around it, and gaining access to people’s private lives was certainly not the goal of any developer. It is simply a by-product of innovation, one which can now be used exploitatively. And therein lies a possible solution to the problem. The consumer should not be responsible for protecting themselves against the product; that is why the FDA was established long ago for the food and drug market. There now needs to be a similar collective call for consumer protection laws and a monitoring agency in the market for voice-activated technology.
