Life is exhausting enough

How does it feel to be taken care of all day long by intelligent assistants? What’s in store for us with the new generation of chatbots and digital butlers?

It’s six in the morning. My smartphone on the bedside table wakes me up with music that gets louder by the second. Today is a special day: I want to let digital assistants guide me through everyday life. They are supposed to save me stress and give me time – or so promise the digital companies pushing their latest helpers onto the market. Facebook is just as much a part of this as Apple and Google, and apps such as Moovit for navigation or Poncho for the weather are crowding in as well. In addition, there are household appliance and consumer goods manufacturers who want to take me by the hand, whether I am brushing my teeth or picking a movie.

With the advances in artificial intelligence, the offerings are constantly being upgraded. The Internet, the message goes, is becoming an invisible spirit that guides you through the day. Users are supposed to find without searching. But what does reality look like at present? That is what I wanted to find out.

Google Now, for example, has access to all my smartphone data – e-mails, calendar, internet searches, location services. It remembers which restaurants I google and whom I receive e-mails from. A smart toothbrush monitors my brushing habits, my shopping app is supposed to guide me through the supermarket, and Apple TV knows in advance when I want to watch which movies. I am not entirely comfortable with this, however.

If the police ever accuse me of a crime – I tell myself, putting a positive spin on things – my new butlers could at least provide an alibi. At the end of my day, I will know whether this self-exposure also serves its actual purpose. Will I really lie relaxed in bed – or hurl the smartphone into the corner?

I get up and take Google Now with me into the kitchen. As I take butter, cheese and milk out of the fridge, I notice that supplies are running low. I call “Okay Google” into my Android device (which I only do out of earshot of other people) and say: “Remind me to buy butter at the supermarket!” After breakfast I brush my teeth with a futuristic electric brush from Oral B.

It even shows me how long I should brush my teeth with the help of a timer and a bitmap – the latter only in the app that comes with it. It also detects if I press too hard and damage my gums. In order for it to sound the alarm, however, I have to maltreat my teeth with brute force. Otherwise the toothbrush does not check whether I am following the instructions. I can let it vibrate in the air for two to three minutes and still get praise for extensive dental care.

It’s not that easy to cheat on a bathroom mirror developed by the DAI-Labor of the TU Berlin. A gleaming screen shows children a score for brushing their teeth. Anyone who brushes regularly wins against siblings who are lazy about brushing.

Such playful elements are now built into many digital assistants to motivate people to lead healthier lives: more sports, vitamin-rich nutrition, better personal hygiene. Developers call the principle gamification. Studies suggest that experiences of success stimulate the reward system in the brain. What the studies conceal: in practice, gamification works about two or three times, after which, experience shows, it gets on children’s nerves. They would rather play “Minecraft” or “Clash of Clans” and still don’t like brushing their teeth. I don’t feel any different. Who wants to compete with others all the time? At least when brushing your teeth – and during the morning routine in general – you should be allowed to come in last, I think.

Life is exhausting enough as it is. I check Google Now to see how my day will continue. To my surprise, I realize that the digital butler has determined my home and work locations unasked, based on where I spend my time. A comparison of GPS data and time of day was enough for it. Now it tells me how long it takes to get to the office, taking the weather and traffic situation into account. If I use public transport, it tells me the stops and departure times. If desired, it even wakes me before my destination stop. Today, however, I will take the car and, according to Google Now, I will need twelve minutes for the trip. So I should be in the office on time for my first meeting.

If only. Since Google Now didn’t account for the search for a parking space, I’m fifteen minutes late. At least Google Now remembers where I parked the car. I don’t have to do anything; the assistant uses my movements to determine where I left it. With so much anticipatory intelligence, it’s strange that it completely fails at the command “Show me my appointments next week” and merely triggers a web search – the universal fallback of all digital assistants. The first search result I get: tips for voice control under Android. Very funny! Google Now can record new appointments almost without errors and remind me of them. But displaying them by voice doesn’t work.

“You are an idiot,” I say. Google Now does not engage in any discussion and prefers to google. I am far from perceiving the machine as human in any way. Nevertheless, a strange thought crosses my mind: May I insult it? The more we communicate with assistants in the future, the more we will be concerned with questions of etiquette: Do bots always have to swallow everything, for example, or may they snap back when things get too rough? Should users thank an assistant for a successfully completed task? And should it then respond with a “don’t mention it” or “you’re welcome”?

In any case, Google Now lets my insult roll right off. The developers seem determined to keep me from seeing my assistant as in any way human. They succeed very well – although I seem to be an exception. Most users unconsciously attribute human qualities to their digital counterpart, even when they are fully aware that they are talking to a machine.

Companies handle this “persona design” very differently. Amit Singhal, former chief engineer at Google, said two years ago that jokes and small talk would only suggest social capabilities that today’s AI does not have – raising false expectations. And the frustration is all the greater when the bot is once again at a loss. When the first text-based chatbots appeared in the 1990s, their log files quickly became lexicons of contemporary insults and sexual advances. As a result, Google’s assistant does not even have a name of its own, and Facebook’s assistant “M” has no gender: it exists with both a male and a female voice.

Will it stay that way? Ray Kurzweil, head of technical development at Google, announced in May that he would develop a more human bot. Users should be able to help shape it; it should read our texts and adapt to our personality. Google would thus move in a direction that Apple and Microsoft have already taken. Siri, for example, counters my complaint “You’re boring” with “I actually find myself quite interesting”. She answers the question about her age cheekily with: “Is that any of your business?” She also knows how to chat casually or recite tongue twisters. Microsoft has even given its assistant Cortana a virtual body, borrowed from a computer game.

The risk lies not only in the reaction of the user, but also in the behavior of the bot itself. What happens when you overdo a realistic persona was recently demonstrated to Microsoft by its chat machine “Tay”. It was supposed to learn from other users’ tweets how to mimic a silly teenager. In principle, that worked. Tay even started flirting with other users. But it also picked up racist and sexist comments and tweeted them onward. That was too human for Microsoft, which took Tay out of circulation.

I only hear from Google Now again when an appointment comes up. My smartphone vibrates, its LED flashes blue. I flinch briefly. It’s not that I missed many appointments in the past, but at least the reminder works. I have to go straight to the residents’ registration office to renew my ID. In the future, it won’t be Google Now but a government assistant that tells me my ID card is expiring – and negotiates an appointment with the residents’ registration office. “We are developing a public authority assistant for Berlin,” says Michael Meder, head of the Smart Government Services application center at DAI-Labor. “In the current phase, it will answer specific questions about the services provided by the authorities. In the future, such assistants are to become the interface between us and an authority, for example by calculating the optimal time for me to pay my taxes.”

Sahin Albayrak, director of the DAI-Labor, is sure that such services will come. The trend is towards merging data from different areas: administration, finance, shopping, medicine, work, leisure. “Only then can assistants realize their full potential,” says Albayrak. “Of course, this makes us more vulnerable when it comes to data security, but we’re already developing systems for that as well.”

That our data will be more secure in the future is the mantra of many developers – just as Norbert Blüm used to insist that pensions were secure. Yet it is already difficult for individual users to understand what the digital butlers are evaluating, where they are getting their information and which routes the data is taking. Mastering the balancing act between data protection and service with my assistants takes effort. Google Now, for example, to which I have given comprehensive access to my data, now acts on its own initiative. After analyzing my e-mails, it points out that I should receive an online order later in the day. I never asked the assistant to keep track of that. Of course, data access can also be restricted. Google, for example, allows me to deactivate “Web & App Activity” or “Location History”. But what exactly Google still evaluates and what it does not is not clear to me. Microsoft allows much more detailed settings for which preferences Cortana may evaluate. But here, too, there is ultimately no choice but to trust the company.

With the digital butlers it will soon be as it was with human servants: they know more than their masters would like. According to Microsoft, in the future it will be enough to say “I’m hungry!” From patterns in his eating behavior, Cortana will infer that the user most likely prefers pizza on Tuesdays, and will immediately offer to order the usual pizza or reserve a table for him. Users would have to agree, however, to Cortana interacting with the corresponding apps of service providers. Google’s “Allo”, a messaging service similar to WhatsApp, is supposed to work along the same lines: if two users arrange to meet for dinner via the messenger, Allo should reserve a table. In addition, the butler is supposed to learn from a user’s previous conversations to offer pre-formulated birthday greetings, for example.

Whether the technology will work cannot yet be tested, at least not with Cortana and Allo. But there are already specialized bots that hook into Facebook Messenger, for example, and provide a foretaste of amusing or annoying malfunctions. Moovit has developed an English-language assistant that explains the route to a destination. But it also happily sends users to Oxford instead of London, or tries to explain to Londoners how to get to London. The weather chatbot Poncho, in contrast to the rather sober Moovit bot, has a sense of humor and produces sentences like: “Sorry, I was just sleeping, what did you want again?” Apparently, though, the humor merely papers over similar quirks. A blogger in well-connected Brooklyn asked it about the weather for the weekend, but because Poncho couldn’t locate him, it assumed he was on a boat. Only when asked specifically about the weather in Brooklyn did Poncho answer: seven degrees, clear sky.

So does Google Now manage to remind me of the butter from the supermarket on my way home? I roll into the parking lot – and indeed: the LED on my smartphone glows blue and “butter” appears on the screen. I open my shopping list app and add it to the other products I need to get. In the future, the app is supposed to guide me directly through supermarkets of my choice or let me compare prices between stores – but it demands intensive upkeep: in the store, I have to check off every item I put in the cart and enter the price if it has changed since my last purchase. The app remembers the order in which I encounter the goods in the supermarket. But since my smartphone’s screen dims quickly to save energy, I have to unlock it again for each product, holding up the other shoppers. In the end, I realize with consternation that a handwritten note would have been faster.

At home I eat, brush my teeth – though I don’t feel like using the Oral B app again – and call out “movie time”. My smart apartment has preheated the living room, dims the lights and turns on the TV. “Show me funny horror movies,” I say to Siri on Apple TV. “But only the good ones.” By “good”, however, Siri understands only what others consider good, because I refuse to rate things constantly during an evening’s entertainment just to improve Siri. And so I watch 30 minutes of a film that I find neither scary nor funny, and go to bed exhausted.

I switch off my smartphone because I’m afraid my assistants will wake me at night and ask for more data to prepare my next day even better. A few of the digital butlers may help in everyday life, but none of their current functions have spared me any trouble. What I find missing from all the stories about the great new helpers is one aspect: how long it takes to serve them before they serve me. No thanks, I think. Another day with digital helpers – that’s far too exhausting for me.

This text has been published in Technology Review.