Would you be willing to equip your bedroom or living room with an internet-connected microphone that could record and send all your conversations to the data-hungry server of a giant tech company or to a random person in your contact list? That is basically the privacy and security risk you’re taking when you bring home an Amazon Echo, Google Home or other smart speaker.
Since the introduction of the Echo in 2014, smart speakers have moved from a niche domain for geeks and gadget freaks to an inherent part of the lives of tens of millions of people in the U.S. and across the world. Thanks to advances in artificial intelligence and natural language processing (NLP), smart speakers provide us with a hands-free and easy-to-use interface to interact with computers and accomplish tasks that previously required a display and input devices such as a mouse and keyboard.
The convenience and benefits of smart speakers are obvious, but like every other technology they come with their own tradeoffs, highlighted by the many stories that have raised—and exaggerated—concerns about the security and privacy implications of having a smart speaker in your home. Here’s what you need to know.
Smart speakers are always listening
Smart speakers become activated with a “wake word.” For the Echo, it’s “Alexa,” and for the Google Home, it’s “OK Google.” After hearing the wake word, the smart speaker starts analyzing whatever comes after it. But to catch the wake word, smart speakers have to keep their microphones active at all times, which is why they’re called “always listening” devices.
This has raised concerns about Amazon and Google listening to and storing all your conversations, especially after stories surfaced in which Alexa recorded and shared users’ voices without being ordered to do so. However, while smart speakers’ “always listening” mode is a genuine privacy issue, it’s often exaggerated.
Echo and Google Home must send voice commands to their cloud servers because the AI algorithms that analyze and process them require computing power the devices themselves don’t have. But the device doesn’t send anything to the cloud until the wake word triggers it. In fact, Google and Amazon would be overwhelmed with useless data if their smart speakers were recording everything they hear all day long.
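In simplified terms, the “always listening” loop works like a gate: audio is buffered and discarded on the device until the on-device wake-word detector fires, and only then does anything stream to the cloud. Here is a minimal Python sketch of that idea; the text frames, the `detect_wake_word` matcher, and the buffer size are all illustrative assumptions, not how the actual firmware works.

```python
# Simplified sketch of the "always listening" gate: audio frames are
# buffered locally and discarded unless the wake-word detector fires.
from collections import deque

WAKE_WORD = "alexa"

def detect_wake_word(frame: str) -> bool:
    # Real devices run a small on-device speech model; we just match
    # text for illustration.
    return WAKE_WORD in frame.lower()

def listen(frames, buffer_size=3):
    """Yield only the audio captured after the wake word."""
    buffer = deque(maxlen=buffer_size)   # rolling buffer, never uploaded
    triggered = False
    for frame in frames:
        if triggered:
            yield frame                  # only now does audio leave the device
        elif detect_wake_word(frame):
            triggered = True
        else:
            buffer.append(frame)         # overwritten locally, sent nowhere

# Everything before the wake word stays on the device:
captured = list(listen(["chat", "more chat", "Alexa", "what time is it"]))
# captured == ["what time is it"]
```

The key design point is the fixed-size rolling buffer: pre-wake-word audio is continuously overwritten rather than accumulated, which is why constant listening doesn’t imply constant uploading.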
However, this doesn’t mean that a smart speaker, which is basically a computer packed with a microphone and an internet connection, doesn’t have the capability to record and store your conversations in the cloud. In fact, if it’s hacked, or if it malfunctions, that’s exactly what will happen.
But then again, the same threats apply to your phone, which is also a computer with a microphone (and a camera and GPS) and connected to the internet, and you always carry it with you instead of letting it sit on a table in your living room.
Data stored in the cloud
Both Google and Amazon keep a copy of every voice command you send their smart speakers in the cloud. They do so to “improve their services.” This means that if someone gets hold of your phone, they’ll be able to go through your recorded conversations by accessing the Amazon or Google account associated with your smart speaker. Or if the police serve a warrant, the law of the land and the manufacturer’s devotion to user privacy will determine whether they’ll get access to voice recordings stored in the cloud.
But again, this is basically no different from someone gaining access to your email account and reading through your emails. As with all online accounts, using two-factor authentication and strong passwords is an effective measure against unwanted access to your recordings. However, in contrast to email and messaging services, which offer privacy tools such as PGP and strong end-to-end encryption, using a smart speaker is predicated on letting the manufacturer collect and process your voice.
Users can also go through the accounts linked to their smart speakers and delete their recording history, though doing so will probably degrade the device’s performance.
Unwanted triggering of commands
Smart speakers are pretty decent at answering queries for information such as the time, the weather and appointments. But the real convenience they bring to consumers’ lives is the accomplishment of tasks. Alexa and Home support thousands of applications that handle tasks such as setting alarms, playing music, placing orders, setting appointments and more. They’re also capable of controlling IoT devices such as smart door locks, air conditioners, coffee makers, fridges, toasters and a bunch of other useless stuff.
What this means is that anyone who’s within hearing range of your smart speaker will be able to send it commands to perform functions. All they need to do is say the magic word. Of course, this can happen if someone breaks into your home (in which case you’ll have bigger problems than your Amazon Echo being used without your permission). But what if your smart speaker were close enough to a window for someone outside to order it to unlock the door?
Both Echo and Home have also shown that a person doesn’t necessarily need to be in their vicinity to activate them. Smart speakers will take commands from any device that can play an audio file containing the wake word. Last year, Burger King ran a TV commercial that asked Google to explain what a Whopper is. Tests showed that when a Google Home was within earshot of a TV playing the commercial, it would start describing the Whopper.
Another episode involved a 6-year-old kid who accidentally (or intentionally, maybe?) ordered an expensive dollhouse and four pounds of cookies while playing with the Amazon Echo in her family’s home. Afterwards, a local morning show covered the story, and the anchor’s remark about Alexa ordering dollhouses triggered even more unwanted orders (and refunds) on viewers’ devices. This shows how smart speakers can cause innocent (and sometimes expensive) accidents.
Beyond accidents, however, there are real security implications to the remote activation of smart speakers. For instance, a hacker could lure a victim to a malicious website that plays an audio file containing a command for Alexa or Google Home. Given the number of functions the devices can perform, there are many ways this capability can be put to malicious use, such as unlocking doors, making money transfers and more.
Smart speakers usually have settings that add security checks to functions such as shopping. They also have settings that link profiles and functions to specific voices. Users who care about their security should activate those options, or avoid using smart speakers for critical tasks altogether.
One of the creepier security threats to smart speakers is what is known as “adversarial attacks,” in which malicious actors send commands to the devices by exploiting weaknesses in the AI algorithms that power them. The way deep learning algorithms and deep neural networks analyze and process audio differs fundamentally from the way humans do. With meticulous work, a malicious actor can craft an audio file that sends a hidden command to a smart speaker while sounding like ordinary music to human ears.
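The core idea behind adversarial attacks can be shown with a toy example: a tiny, near-inaudible perturbation that is aligned with the model’s internal weights can flip its decision, even though a human would hear essentially the same audio. The sketch below is purely illustrative, using a made-up one-layer “speech model” and random data, not any real product’s algorithm.

```python
# Toy illustration of an adversarial audio perturbation (not a real
# attack on any product): a tiny change to the input flips a simple
# classifier's decision. The model and "audio" are both made up.
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=1000)                # toy "speech model": one linear unit

def classify(audio):
    """Return 1 if the model 'hears' a command, else 0."""
    return int(audio @ w > 0)

# Benign audio, nudged firmly onto the "no command" side of the boundary.
audio = rng.normal(size=1000)
audio -= w * (audio @ w) / (w @ w)       # remove the command-like component
audio -= 0.001 * w / np.linalg.norm(w)   # small push to the "silent" side

# FGSM-style step: a tiny per-sample nudge in the direction the model
# is most sensitive to. At eps=0.01 the change is ~1% of the signal's
# amplitude, yet it flips the classification.
eps = 0.01
adversarial = audio + eps * np.sign(w)

print(classify(audio), classify(adversarial))   # 0 vs 1
```

The asymmetry is the whole trick: the perturbation is negligible sample-by-sample, but because every sample pushes in the model’s most sensitive direction, the effects add up across the input, which is loosely how hidden voice commands can hide inside innocuous-sounding audio.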
Adversarial attacks against smart speakers are still at the proof-of-concept stage, and there hasn’t yet been a real-world case of an Echo or Home being compromised in this manner. But it may only be a matter of time before hackers find ways to put them to destructive use. Unfortunately, there’s not much users can do about this; it will be up to manufacturers to harden their devices and minimize the risk of their AI algorithms being exploited to harm their customers.
We often misunderstand and exaggerate the security and privacy implications of smart speakers. Where privacy and personal information are concerned, the security threats of smart speakers run parallel to those of other services we’ve been using for decades. The appearance and methods may differ, but the nature is the same.
However, what makes smart speaker security especially important is the access these devices have to our physical world and daily lives. As we increasingly trust smart speakers to accomplish tasks on our behalf in our homes, cars and offices, we must also be wary of who else will be able to do the same.
- This article was originally published on Tech Talks. Read the original article here.