How does it feel to be mothered all day long by intelligent assistants? Our author has researched what is in store for us with the new generation of chatbots and digital butlers.
It’s six in the morning. My smartphone on the nightstand wakes me with music that grows louder by the second. Today is a special day: I want to let digital assistants guide me through everyday life. They are supposed to save me stress and give me time, promise the digital companies as they bring their latest helpers to market. Facebook is involved, as are Apple and Google. Apps like Moovit for navigation or Poncho for the weather are crowding onto the market. Then there are the household appliance and consumer goods manufacturers who want to take me by the hand, whether I am brushing my teeth or watching a film.
With the advances in artificial intelligence, the offerings are constantly being upgraded. The internet, so the message goes, is becoming an invisible spirit that guides you through the day. People are supposed to find without searching. But what does reality look like right now? I wanted to find out.
Google Now, for example, had access to all my smartphone data: e-mails, calendar, internet searches, location services. It remembers which restaurants I google and who sends me e-mails. A smart toothbrush monitors my brushing habits, my shopping app is supposed to guide me through the supermarket, and Apple TV anticipates when I want to watch which movies. I am not entirely comfortable with this, however.
If the police ever accuse me of a crime, I console myself, my new butlers could at least provide an alibi. At the end of my day, I will know whether all this self-exposure serves a purpose. Will I really lie in bed relaxed, or hurl the smartphone into the corner?
I get up and take Google Now with me into the kitchen. As I take butter, cheese and milk out of the fridge, I notice that supplies are running low. I call “Okay Google” into my Android device (which I only do out of earshot of other people) and say “Remind me to buy butter at the supermarket!” After breakfast I brush my teeth with a futuristic electric brush from Oral B.
With the help of a timer and a graphic (the latter only in the app), it even shows me how long I should keep brushing. It also warns me if I press too hard and risk damaging my gums. But for it to sound the alarm, I have to maltreat my teeth with brute force. Beyond that, the toothbrush does not check whether I am following its instructions at all. I can let it vibrate in the air for two or three minutes and still be praised for thorough dental care.
It is not as easy to cheat a bathroom mirror developed by the DAI-Labor at TU Berlin. A gleaming screen shows children a score for brushing their teeth. Whoever brushes regularly wins against siblings who are lazy about brushing.
Many digital assistants now offer such playful elements to motivate people to live healthier: more sport, a vitamin-rich diet, better personal hygiene. Developers call the principle “gamification”. Studies suggest that experiences of success stimulate the reward system in the brain. What the studies don’t mention: in practice, gamification works perhaps two or three times before it starts to get on children’s nerves. They would rather play “Minecraft” or “Clash of Clans”, and they still don’t like brushing their teeth. I feel no different. Who wants to compete with others all the time? At least when brushing your teeth, and during the morning bathroom routine in general, you should be allowed to come in last, I think.
Life is exhausting enough as it is. I check Google Now to see how my day will continue. To my surprise, I find that the digital butler has determined my home and workplace unprompted, based on my location history. A comparison of GPS positions and times of day was all it needed. Now it tells me how long it will take to get to the office, taking the weather and traffic situation into account. If I use public transport, it lists the stops and departure times; on request, it even wakes me before my destination stop. Today, however, I am taking the car, and according to Google Now the trip will take twelve minutes. So I should make it to the office in time for my first meeting.
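Google does not publish how this inference works, but the basic idea the article describes, comparing GPS fixes with the time of day, can be sketched in a few lines. The function name, the coordinate rounding, and the time windows below are all illustrative assumptions, not Google’s actual method:

```python
from collections import Counter

def infer_places(pings):
    """Guess "home" and "work" cells from (hour, lat, lon) pings.

    Coordinates are coarsened by rounding to ~100 m cells; the most
    frequent night-time cell counts as home, the most frequent
    office-hours cell as work. Purely an illustrative sketch.
    """
    night = Counter()
    office = Counter()
    for hour, lat, lon in pings:
        cell = (round(lat, 3), round(lon, 3))
        if hour >= 22 or hour < 6:
            night[cell] += 1          # asleep somewhere: probably home
        elif 9 <= hour < 17:
            office[cell] += 1         # weekday daytime: probably work
    home = night.most_common(1)[0][0] if night else None
    work = office.most_common(1)[0][0] if office else None
    return home, work

# Example: nights spent at one spot, working hours at another
pings = [(2, 52.5200, 13.4050)] * 10 + [(11, 52.5120, 13.3270)] * 8
print(infer_places(pings))  # → ((52.52, 13.405), (52.512, 13.327))
```

A real system would additionally weight weekdays against weekends and smooth over GPS jitter, but even this toy version shows why no explicit user input is needed.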
I wish. Since Google Now did not factor in the search for a parking space, I arrive fifteen minutes late. At least Google Now remembers where I parked the car. I don’t have to do anything for that: the assistant uses my movement data to determine where I left it. With so much anticipatory intelligence, it is strange that it fails completely at the command “Show me my appointments next week” and merely triggers a web search, the universal fallback of all digital assistants. The first search result I get: tips for voice control under Android. Very funny! Google Now can record new appointments almost flawlessly and remind me of them. But displaying them by voice command does not work.
“You’re an idiot,” I say. Google Now doesn’t argue and prefers to google. I am far from perceiving the machine as human in any way. Still, a strange thought occurs to me: am I allowed to insult it? The more we communicate with assistants in the future, the more we will have to deal with questions of etiquette. Do bots always have to swallow everything, or may they snap back when things get too rough? Should users thank an assistant for a successfully completed task? And should it then respond with a “don’t mention it” or a “you’re welcome”?
In any case, Google Now lets my insult roll off it. The developers seem determined to prevent me from seeing my assistant as somehow human. They do that very well, but I seem to be an exception: most users unconsciously attribute human qualities to their digital counterpart, even when they are fully aware that they are talking to a machine.
Companies deal with this “persona design” in very different ways. Amit Singhal, former chief engineer at Google, said two years ago that jokes and small talk would only suggest social competence that today’s AI does not have, and that this would raise false expectations. The frustration is then all the greater when the bot once again fails to understand a thing. When the first text-based chatbots appeared in the 1990s, their log files quickly became lexicons of contemporary insults and sexual advances. As a consequence, Google’s assistant does not even have its own name, and Facebook’s assistant “M” has no gender: it is available with both a male and a female voice.
Will it stay that way? Ray Kurzweil, head of technical development at Google, announced in May that he was developing a more human bot. Users should be able to help shape it; it should read our texts and adapt to our personality. Google would thus move in a direction that Apple and Microsoft have already taken. Siri, for example, counters my reproach “You’re boring” with “I actually find myself quite interesting”. My question about her age is answered cheekily: “Is that any of your business?” She can also make casual small talk or recite tongue twisters. Microsoft has even given its assistant Cortana a virtual body, borrowed from a computer game.
The risk lies not only in the reactions of users but also, in a very human way, in the behavior of the bot itself. What happens when you overdo it with a realistic persona was recently demonstrated to Microsoft by its chat machine “Tay”. It was supposed to learn from other users’ tweets how to mimic a flippant teenager. In principle, that worked. Tay even started flirting with other users. But she also picked up racist and sexist comments and tweeted them onward. That was too human for Microsoft, which took Tay out of circulation.
I don’t hear from Google Now again until an appointment is due. My smartphone vibrates, its LED flashes blue. I flinch briefly. It’s not that I missed many appointments in the past, but at least the reminder works. I have to go to the residents’ registration office to renew my ID. In the future, it will not be Google Now but a government assistant that tells me my ID card is expiring, and that negotiates an appointment with the residents’ registration office. “We are developing a public authority assistant for Berlin,” says Michael Meder, head of the Smart Government Services application centre at the DAI-Labor. “In the current phase, it will answer specific questions about the services the authorities provide. In the future, such assistants are to become the interface between us and an authority, for example by calculating the optimal time for me to transfer my taxes.”
Sahin Albayrak, director of the DAI-Labor, is sure that such services will come. The trend is to bring together data from different areas: administration, finance, shopping, medicine, work, leisure. “Only then can assistants realise their potential,” says Albayrak. “Of course, this makes us more vulnerable when it comes to data security, but we are already developing systems for that as well.”
That our data will be more secure in the future is the mantra of many developers, just as Norbert Blüm always insisted that pensions are secure. Yet it is already difficult for an individual user to understand what the digital butlers evaluate, where they get their information, and which paths the data takes. Striking a balance between data protection and service with my assistants takes effort. Google Now, for example, has comprehensive access to my data, and now it becomes active on its own: after evaluating my e-mails, it informs me that an online order will be delivered later in the day. I never asked the assistant for this. Of course, the data access can be restricted. Google, for instance, allows me to deactivate “Web & App activities” or the “location history”. But what exactly Google still evaluates and what it does not remains unclear to me. Microsoft allows much more fine-grained control over which preferences Cortana may evaluate. But here, too, there is ultimately no choice but to trust the company.
With the digital butlers, it will soon be as it once was with human servants: they know more than the master would like. According to Microsoft, in the future it will be enough to say “I’m hungry!” to get lunch, for example. The user’s eating patterns tell Cortana that on Tuesdays he will most likely prefer pizza, so she asks whether she should order the usual pizza or reserve a table. Users would, however, have to agree that Cortana interacts with the corresponding service providers’ apps. Google’s “Allo” is supposed to work similarly: if two users arrange to meet for dinner via a messaging service such as WhatsApp, Allo is to reserve a table. Based on a user’s previous conversations, the butler is also supposed to learn to offer pre-formulated birthday greetings, for example.
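The “eating patterns” prediction Microsoft describes amounts to a simple frequency count over past orders, grouped by weekday. Cortana’s internals are not public; the function and data shape below are invented for illustration:

```python
from collections import Counter, defaultdict

def likely_meal(order_history, weekday):
    """Return the meal most often ordered on the given weekday.

    order_history: list of (weekday, meal) pairs, e.g. ("Tue", "pizza").
    A hypothetical stand-in for whatever Cortana does internally.
    """
    by_day = defaultdict(Counter)
    for day, meal in order_history:
        by_day[day][meal] += 1
    if not by_day[weekday]:
        return None                    # no pattern yet for this day
    return by_day[weekday].most_common(1)[0][0]

history = [("Tue", "pizza"), ("Tue", "pizza"),
           ("Tue", "salad"), ("Wed", "sushi")]
print(likely_meal(history, "Tue"))  # → pizza
```

The interesting part is not the counting but the consent question the article raises: to act on the prediction, the assistant still needs permission to talk to the delivery services’ apps.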
Whether the technology will work cannot yet be tested, at least not with Cortana and Allo. But there are already specialized bots, networked with Facebook Messenger for example, that give a foretaste of amusing or annoying malfunctions. Moovit has developed an English-language assistant that tells users how to get where they are going. But it also likes to send them to Oxford instead of London, or tries to explain to Londoners how to get to London. The weather chatbot Poncho, in contrast to the rather sober Moovit bot, has a sense of humor and produces sentences like: “Sorry, just slept, what did you want again?” But apparently it mainly uses this to paper over its own quirks. A blogger in well-connected Brooklyn asked it about the weekend weather, but because Poncho could not locate him, it assumed he was on a boat. Only when asked specifically about the weather in Brooklyn did Poncho answer: seven degrees, clear skies.
So does Google Now manage to remind me of the butter when I reach the supermarket on my way home? I roll into the parking lot, and indeed: the LED on my smartphone glows blue and “butter” appears on the screen. I open my shopping list app and add it to the other products I need to get. In the future, the app is supposed to guide me through supermarkets of my choice or enable price comparisons between stores, but it demands intensive upkeep: in the store, I have to tick off every item I put in the cart and enter the price if it has changed since the last purchase. From this the app learns the order in which I encounter the goods in the supermarket. But since my smartphone dims quickly to save energy, I have to unlock it again for each product, holding up the other shoppers. In the end, I realize with some consternation that I would have been faster with a handwritten note.
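The app’s one clever trick, remembering the order in which I pass the shelves, is easy to sketch: sort the current list by the item sequence recorded on earlier trips. The function below is a guess at how such an app might work, not its actual code:

```python
def sort_by_store_order(shopping_list, store_order):
    """Sort a shopping list by the order in which items were
    encountered on previous store visits; unknown items go last.

    store_order: items in the sequence they appeared on past trips.
    Illustrative sketch, assuming one fixed route through the store.
    """
    rank = {item: i for i, item in enumerate(store_order)}
    # Items never seen before get a rank past the end of the list
    return sorted(shopping_list, key=lambda x: rank.get(x, len(rank)))

store_order = ["fruit", "bread", "cheese", "milk", "butter"]
print(sort_by_store_order(["butter", "bread", "candles"], store_order))
# → ['bread', 'butter', 'candles']
```

The catch the article runs into is not the sorting but the data entry: the ordering only exists if the shopper ticks off every item in sequence, screen unlocked, trip after trip.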
At home I eat, brush my teeth (though I don’t feel like using the Oral B app again) and call out “film time”. My smart apartment has preheated the living room; it dims the lights and turns on the TV. “Show me funny horror movies,” I say to Siri on Apple TV. “But only the good ones.” Yet Siri can only take “good” to mean what others consider good, because I refuse to rate things constantly during my evening entertainment just to make Siri better. And so I watch 30 minutes of a film that I find neither scary nor funny, and go to bed exhausted.
I switch off my smartphone because I’m afraid my assistants will wake me at night to ask for more data so they can prepare my next day even better. A few of the digital butlers may help in everyday life, but none of their current functions has saved me any stress. What I find missing from all the stories about the great new helpers is one aspect: how long you have to serve them before they serve you. No thanks, I think. Another day with digital helpers would be far too exhausting for me. (Boris Hänßler, Gregor Honsel) / (bsc)
This text has been published in Technology Review (Germany).