Artificial intelligence: your personal assistant is not stupid, it is poorly designed

by bold-lichterman

Despite the indisputable advances of giants such as Google, Apple, Facebook, and Amazon, the functions offered by the new “artificial voices” and other chatbots still fall far short of our everyday conversational habits. The low usage rates of voice assistants such as Siri, Google Now, and Cortana illustrate this clearly.

Are personal assistants doomed to be disappointing? What do these new forms of artificial intelligence still lack? What levers could make interactions with users easier?

Research in emotional design has shown that the user experience fragments into a cluster of decisive micro-moments: a background that changes color, a feature that lets you reply to a message without changing pages, a dynamic favicon that lets you follow a page’s status from its tab, a button that grows larger at a late hour… Micro-moments are the visible or invisible details of a user experience that create surprise and strengthen our attachment to a brand. For designer Dan Saffer, these micro-details, beyond the major features, make it possible to create “artificial empathy” for objects, brands, or services.

If we apply this principle to artificial intelligence, several types of interaction deserve to be refined. Here are three that seem essential.

1. The question of rhythm

Time is of the essence in managing a user experience. Numerous studies show that consumers connect with their smartphones more and more often, but that their visits are getting shorter and shorter. For very immediate needs such as buying, communicating, learning, finding, or doing, the response time of artificial intelligence has been carefully engineered, but its nuance has been forgotten.

With quick but inadequate responses, speech interrupted in the middle of a dictation, and half-typed messages, Siri does not seem to be “listening” to users. Yet, as in language, the question of rhythm, specific to each person and different in each culture, is essential to communication.

On digital interfaces, simple visual systems already exist to signify time, such as progress bars, loading spinners, or sound signals. It could be interesting to transfer these sound and visual codes to better manage the spaces of dialogue with artificial voices. Why not let users define their own time windows when dictating an email or text message, or when asking a question?
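As a minimal sketch of that idea, an assistant could let each user set the pause threshold that decides when a dictation counts as finished, instead of imposing one cut-off on everyone. The class and parameter names below are hypothetical, purely for illustration:

```python
class DictationSession:
    """Toy end-of-utterance detector: the user picks how long a pause
    must last before the assistant treats the dictation as finished."""

    def __init__(self, pause_threshold_s=2.0):
        # User-defined "time window": a longer threshold suits
        # slower or more hesitant speakers.
        self.pause_threshold_s = pause_threshold_s
        self.last_speech_time = None

    def on_speech(self, timestamp_s):
        # Called each time a chunk of speech is detected.
        self.last_speech_time = timestamp_s

    def is_finished(self, now_s):
        # The dictation only ends once the configured silence has elapsed.
        if self.last_speech_time is None:
            return False
        return (now_s - self.last_speech_time) >= self.pause_threshold_s


# A patient user asks for a 3-second pause before the cut-off.
session = DictationSession(pause_threshold_s=3.0)
session.on_speech(10.0)           # speech heard at t = 10 s
print(session.is_finished(11.0))  # → False: only 1 s of silence so far
print(session.is_finished(13.5))  # → True: 3.5 s of silence
```

Real assistants rely on far more sophisticated speech-endpointing models, but even this toy version shows how a single user-adjustable parameter would give back control over the rhythm of the exchange.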

2. Clear and intuitive principles of interaction

We need to quickly understand the possibilities of interaction that we can have with a machine or an object. Don Norman, author of The Design of Everyday Things, speaks of “affordance” to designate this capacity of an object or an interface to suggest its own use.

For artificial intelligence, this readability is still lacking. It is easy to start a conversation with a robot, but hard to understand how it interprets input and what its real scope of action is. Should we use full sentences or keywords? Is it more effective to write in capitals? Should we speak to it in English?

A journalist from Rue89, for example, experienced this with HealthTap, a half-robot, half-doctor chatbot, when she told it she was “excited by shoes”. The program automatically redirected her to answers about foot odor. This kind of trial and error is a bad signal for users, who can quickly get annoyed and give up.

Why not clearly state the chatbot’s operating principles, at the beginning of or during the conversation, rather than camouflaging its limitations?

3. Self-learning

For psychiatrist Serge Tisseron, who studies the mechanisms and dangers of artificial empathy, the attachment one might feel for a robot is comparable to what one feels for an animal. A parallel can be drawn with the success of Tamagotchis, the little games of the 1990s that let you raise a virtual pet.

The notion of self-learning therefore seems central to our interaction with artificial intelligence. If people can teach things to a machine, they will spend more time on it and invest in it. They will take some pride in it. They may even develop new mental reflexes and change structurally, as adults do when addressing a child.

Presumably, the giants of artificial intelligence have chosen to limit this possibility to prevent abuse, because the results can be surprising and counter-intuitive, as with Tay, the Microsoft Twitter bot that malicious Internet users manipulated into posting sexist and racist content.

While the right safeguards for these ethical questions have yet to be found, many micro-interactions could already be refined: language and intonation, the nature of the tasks requested, the user journey, and so on.

What will artificial intelligence look like with all of these micro-moments? Is there a middle ground between a mechanical outsourcing of tasks and the creation of a creepy technological superego?

Clarisse Moisand

After graduating from Sciences Po in 2008 and a first professional experience in Asia, Clarisse Moisand founded WEDO Studios, a consulting agency in innovation and experience design, upon her return in early 2011. Since its creation, WEDO Studios has supported large groups, start-ups, and organizations in the design, development, and implementation of innovative projects.

Passionate about social sciences and digital technology, Clarisse works on new global innovation strategies that combine the functional with the emotional. She helped develop “design thinking” and “service design” in France as a professor at ESSEC Business School, and she is regularly invited to speak at specialized conferences in France and abroad.

Photo credit: Fotolia