Amazon’s Echo voice-activated assistant has recently been the subject of high-profile technology news stories about activating when its owners never intended it to. An Oregon couple found out that their Echo, whose assistant consumers know as Alexa, had recorded their conversation and sent it to a person on their contacts list. They learned of it not from Alexa, but from the contact who received the recording.
Awake and Recording…
The story illustrates how safeguards can effectively be bypassed when Alexa believes it has been told to do something. The couple didn’t intend to activate Alexa, didn’t intend the Echo to do anything, and didn’t intend for any recording to be sent. According to National Public Radio, they were discussing hardwood floors, a topic that had nothing to do with the contact.
Essentially, Alexa thought it heard the wake word that activated it and began recording; thought it heard a command to send a message; and thought it heard a name as the recipient. The couple actually said none of these things.
Not only that, but Alexa asked out loud whether the message should be sent to the recipient, to verify the command. The couple apparently didn’t hear that question, yet Alexa thought it heard an affirmative response.
There is no word yet on what could have caused such a cascade of misunderstandings. The couple own a number of Echo devices and have unplugged them all.
But it’s not the first time something similar has occurred. Owners of a comparable device, Google Home, know well that a Burger King commercial asking “Ok, Google, what is the Whopper burger?” can wake up their devices with “Google” and cause them to answer.
To be fair, in the Whopper burger case a voice is asking a question that the device quite naturally interprets as addressed to it. None of the voice-activated home devices can recognize specific voices.
In a closer parallel, a North Carolina man found that his device had made a recording and sent it to his insurance agent, all without his knowledge.
Do users run the risk of privacy invasions?
Violating Privacy?
These stories raise a security question. How secure are voice-activated devices — and does their security matter?
For its part, Amazon responded immediately, indicating that the Echo would be redesigned to avoid the flaws that allowed it to misinterpret the conversation.
And many people, such as a columnist for Gizmodo, see the devices as so friendly and convenient that security questions are beside the point.
But the larger issue is that always-on microphones may become a security problem for some people at some point in the future.
Perhaps people will need to decide for themselves in the future whether the convenience and pleasure of a voice-activated device outweighs any potential security concerns.
Or people may decide, as the Oregon couple did, to unplug the devices until their privacy and reliable operation can be better guaranteed.
Potential security concerns do exist. Because the devices can’t recognize individual voices, guests in your home could conceivably ask a voice-activated device for private information, such as your savings balances, if your accounts have been linked to the system.
Another possibility? It is conceivable that, in the future, law enforcement agencies could remotely activate a suspect’s device to overhear conversations.
What would your balance between convenience and security be?