Toymaker Mattel recently announced that privacy concerns led it to stop plans to sell a voice-activated device modeled on Amazon’s Alexa and targeted to children and parents.
The device, named Aristotle, was first announced at a trade show in early 2017, according to Fortune. It was planned to serve as a versatile voice-activated wireless assistant and internet of things (IoT) center, complete with a camera, lights, and a microphone.
For children, the device could serve as a nightlight, play songs, teach a foreign language, or instill manners by refusing to respond until asked with a “please.” For parents, it was designed to be a helpful data collector, providing information on nap times and alerts when diapers were running low.
While its abilities were similar to those of products marketed to adults, such as Amazon’s Alexa and Google Home, Aristotle met with a firestorm of criticism from the beginning from organizations such as the Campaign for a Commercial-Free Childhood.
The monitoring of data was a central concern. Critics were also highly concerned that children would be conditioned from babyhood to think constant surveillance by a device was a normal part of daily life, and that Aristotle would cause their early bonds to be with computerized devices rather than with people.
A letter released by the Campaign for a Commercial-Free Childhood and a partner organization, signed by 15,000 people, referred to the potential young customers as “guinea pigs for AI experiments.” The planned product’s camera, which could enable surveillance inside the home, added to these concerns.
A Sign of Things to Come?
As the industry publication Mobile Marketer points out, the incident is a significant one for companies developing digital assistants, artificial intelligence (AI)-related products, and products that can be construed to monitor customers as well as to provide them with goods and services. Digital assistants are marketed to adults as helpful tools, able to order products and check the weather. But while they perform these tasks, they also collect data on queries and consumption patterns for use by corporations.
As a result, many companies are approaching or entering a new digital frontier, one in which delight in the products is conjoined with, or supplanted by, concerns about privacy and how the data will be used.
As the Mattel incident shows, people may react differently to products depending on the demographic target. Complaints about Aristotle were primarily driven by the fact that its target audiences were babies and children, who cannot turn off the device’s intrusion.
Concern also centered on Mattel’s combining of toys with aggressive, corporate-focused sales techniques. Aristotle was going to be marketed as an educational toy while also generating reminders that could prompt parents to place steady orders for diapers.
While Mattel’s toys, such as Barbie dolls and Hot Wheels, are wildly popular with children, the company has been accused in the past of overly aggressive marketing where children are concerned. In 2016, for example, it was accused of collecting information on children and sharing the data with third parties, all without parents’ permission. It paid $200,000 to the New York Attorney General’s office as a result of this accusation.
Mattel’s Aristotle is likely to stand as a harbinger of things to come for digital companies, as consumers and marketers work out what is acceptable in digital tools and what is not.