Since releasing its Echo smart speakers, powered by the Alexa AI assistant, Amazon has been dogged by alarmist, clickbait reports that the devices listen to and record users' conversations.
But Amazon has always claimed that Alexa only starts recording when the wake word, 'Alexa', is spoken.
How, then, to explain recent reports from Danielle, a woman from Portland, Oregon, that a private conversation between her and her husband was recorded and mysteriously ended up in the inbox of her husband's colleague?
Danielle discovered this had happened after receiving an alarming call from her husband's colleague, insisting, “Unplug your Alexa devices right now, you’re being hacked.”
He then proved it by relaying the (truly hack-worthy) conversation about hardwood flooring that had taken place between the couple.
“I felt invaded,” Danielle told KIRO-TV. “A total privacy invasion. Immediately, I said, ‘I’m never plugging that device in again because I can’t trust it.’”
Amazon first responded by simply confirming that this chain of events had taken place. So how does this tally with Amazon's claims that the device is not always snooping on our conversations?
Amazon has now produced an explanation that relies on the device mishearing background speech as a series of commands, with Alexa's spoken responses going unheard by the couple.
The company says Alexa was initially triggered by a word that sounded like 'Alexa', then interpreted subsequent speech as 'send message' followed by the name of a contact, in response to her (presumably inaudible) confirmation prompts.
Although this chain of events does seem improbable, it would provide an explanation that keeps Amazon in the clear regarding Alexa's eavesdropping.
So are devices like Alexa and rival product Google Home really always listening?
Drawing from all the available information, the short answer is no. Both devices only listen and record conversations after a 'wake word' is spoken, and even then these requests can be deleted from your history.
There are two elements to this story: the conduct of Amazon and Google regarding if and when their devices listen to and record conversations within earshot, and the threat that any conversations that are recorded could be acquired by hackers.
Google is clear that it does not record any ambient conversations around the Home device. The official data security and privacy support page states: "No. Google Home listens in short (a few seconds) snippets for the hotword. Those snippets are deleted if the hotword is not detected, and none of that information leaves your device until the hotword is heard.
"When Google Home detects that you've said "Ok Google," the LEDs on top of the device light up to tell you that recording is happening, Google Home records what you say, and sends that recording (including the few-second hotword recording) to Google in order to fulfill your request. You can delete those recordings through My Activity anytime."
Amazon is similarly clear in its Alexa and Alexa Device FAQ: "Amazon Echo and Echo Dot use on-device keyword spotting to detect the wake word. When these devices detect the wake word, they stream audio to the Cloud, including a fraction of a second of audio before the wake word."
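The mechanism both FAQs describe can be sketched in a few lines: audio is held in a short on-device buffer and discarded unless the wake word is detected, at which point the buffer (the "fraction of a second" before the wake word) plus subsequent audio is streamed. This is a minimal, illustrative Python sketch; the frame-level string matching and buffer size are assumptions for demonstration, not how the real devices implement keyword spotting.

```python
from collections import deque

WAKE_WORD = "alexa"
PREROLL_FRAMES = 3  # short on-device buffer of audio *before* the wake word

def stream_session(frames, detect=lambda frame: WAKE_WORD in frame):
    """Simulate on-device keyword spotting.

    Nothing leaves the 'device' until the wake word is detected; then the
    pre-roll buffer plus all following frames are 'streamed to the cloud'.
    """
    preroll = deque(maxlen=PREROLL_FRAMES)
    streamed = []
    listening = False
    for frame in frames:
        if listening:
            streamed.append(frame)          # wake word already heard: stream
        elif detect(frame):
            listening = True
            streamed.extend(preroll)        # include audio just before the wake word
            streamed.append(frame)
        else:
            preroll.append(frame)           # held briefly, dropped if no wake word
    return streamed

# Ambient chatter before the wake word is discarded, apart from the pre-roll:
print(stream_session(["a", "b", "c", "d", "alexa play", "music"]))
# With no wake word at all, nothing is streamed:
print(stream_session(["just", "ambient", "chat"]))
```

The key point the sketch illustrates is that the pre-wake-word buffer is bounded and overwritten continuously, so ambient conversation is not accumulated.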
You can also review voice interactions by visiting the history section in the settings of the Alexa App or the settings in the Google Home app. Users can even listen to audio sent to the cloud, delete voice clips and enter feedback.
It's important to note, however, that deleting your history will adversely affect the assistant's ability to be 'smart', as both devices use this data to tailor responses and learn your habits, improving the more you use them.
Simply put, if you don't want Alexa to listen, press the microphone mute button until it turns red, which indicates that the device is not listening or streaming data to the cloud, even if you say the wake word. Also, anything that is recorded is stored fairly securely by Amazon and Google and is encrypted in transit.
The truth is that neither Google nor Amazon would sell many of these devices if they were glorified wiretaps.
Of course, as Apple can attest, no company can hold off hackers completely. Research by MWR InfoSecurity successfully compromised an Amazon Echo by exploiting a vulnerability in the device to turn it into a 'wiretap' without affecting its overall functionality.
The vulnerability can only be leveraged through a physical attack, however, like an actual wiretap. The 2017 editions of the Amazon Echo and Echo Dot are also not vulnerable to this physical attack.
A statement from MWR reads: "By removing the rubber base at the bottom of the Amazon Echo, the research team could access the 18 debug pads and directly boot into the firmware of the device, via an external SD card, and install persistent malware without leaving any physical evidence of tampering. This gained them remote root shell access and enabled them to access the 'always listening' microphones."
Amazon responded to the research with some fairly typical steps: "Customer trust is very important to us. To help ensure the latest safeguards are in place, as a general rule, we recommend customers purchase Amazon devices from Amazon or a trusted retailer and that they keep their software up-to-date."
I've had multiple conversations with owners of these devices, as well as paranoid smartphone owners, who are convinced that they are being targeted by advertisements based purely on ambient conversations they have had around their favourite devices.
Facebook even felt compelled to issue a short denial of this practice - entitled Facebook Does Not Use Your Phone's Microphone for Ads or News Feed Stories - back in June 2016.
The important thing to note here for sceptics is that Google and Amazon stand to lose far more in consumer confidence and potential fines than they stand to gain by monitoring all of your conversations and tweaking their advertising algorithms. The risk-reward equation simply doesn't add up.
Originally published in TechWorld by Scott Carey - May 25, 2018, with additional reporting by Laurie Clarke.