
Chatbots, Personal Assistants & The Future of Artificial Intelligence and UX

10 min read · Posted September 15, 2016 (on Chatbot News Daily)

Image credit: Michael Dain, CC

Some people compare the bot revolution to the rise of the GUI or the smartphone. Major players are investing in new and revolutionary ways to automate and redefine interaction. But what are chatbots, exactly, and how are they already reshaping the world we live in?

Artificial Intelligence: Old Players, New Rules

Artificial Intelligence — at least as a beautiful and scary concept inhabiting some pioneer’s head — is old news.

Ada Lovelace, a countess, mathematician and writer (quite a trio, if you ask me), created the first ever algorithm, written for Charles Babbage's Analytical Engine, in 1842. That's almost a hundred years before Alan Turing came up with a test to determine if you are talking to a machine or a person, and 150 years before neural networks finally became a reality.

Although bots and conversational interfaces have been in the public eye for a while, their exponential growth only turned into an unstoppable trend thanks to the massive amounts of data and hardware available today. As a result, bots are not necessarily a response to concrete user demands; they are driven by a sort of tech vision. This vision is pushed by both the usual giants and the little players, because programming a chatbot (not necessarily a good one) is actually relatively easy.

In August 2015, Facebook launched its virtual assistant M, a technology that combines artificial intelligence and human helpers. Amazon has turned a few heads with its newest kid, Echo, too. But it doesn't stop there. Thanks to Microsoft, there's a gigantic swarm of Azure bots coming this way, and 30% of organic Google searches are already processed via A.I.

As messaging platforms turn into OSs, apps become redundant and fewer people install them. The newest interface seems to be the conversational interface, a completely new model for interacting with online services. How did we get here, what does "here" exactly mean, and where are we heading next?

Funny but Simple: The Stateless Bot

There are several ways of classifying bots; I personally like Alex Bunardzic's, which I read about here on Medium.

A stateless bot is one that has no memory, no storage. Sort of like a server, where each request made to it disappears after being processed (if a server were to keep track of each and every request, it would soon collapse. So would a bot!). Consequently, each message exchanged with a stateless bot can be considered a new interaction.
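To make the idea concrete, here is a minimal sketch of a stateless handler in Python. The behaviour (swapping "boy" for "bot", in the spirit of the Twitter joke bots mentioned below) and the function name are invented for illustration:

```python
import re

def handle_message(text):
    """Process one incoming message and return a reply, or None.

    No database, no session, no memory: the next call starts from
    scratch, which is exactly what makes the bot stateless.
    """
    if re.search(r"\bboy\b", text, re.IGNORECASE):
        return re.sub(r"\bboy\b", "bot", text, flags=re.IGNORECASE)
    return None  # nothing worth replying to

print(handle_message("That boy can dance"))  # -> "That bot can dance"
print(handle_message("Unrelated message"))   # -> None
```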

A Twitter bot that picks up “I’m not racist but” tweets

If you are looking for useful stateless bots, just take a look at Twitter. Highlights include earthquake alert systems, museum networks that post random works of art, and a gazillion bots made for the sake of lols (one that swaps the word "boy" for "bot" and one that makes jokes using Google's autocomplete feature are among my favourites).

But not all stateless bots are made for our amusement. ELIZA, created in 1966, is a perfect example of a stateless bot that, unexpectedly, served a bit of a social purpose. What made ELIZA so successful, one could argue, was the fact that she/it acted like a non-directive psychotherapist in an initial psychiatric interview, asking open-ended, introspective questions.

Although ELIZA used simple pattern-matching techniques like parsing and substitution of words, she was so believable that Weizenbaum's secretary asked to be left alone with her, so she could ask questions freely. ELIZA was even deemed useful for treating mild psychological symptoms.
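The whole trick fits in a few lines. Here is a toy, ELIZA-style exchange in Python: pattern matching plus pronoun reflection, with no memory between turns. The patterns are invented for illustration and are not Weizenbaum's original script:

```python
import re

REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]

def reflect(fragment):
    """Flip first-person words to second person ('my' -> 'your', etc.)."""
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in fragment.split())

def eliza(utterance):
    """Answer a single utterance; nothing is remembered between calls."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please, go on."  # the classic open-ended fallback

print(eliza("I feel ignored by my brother"))
# -> "Why do you feel ignored by your brother?"
```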

See, having no storage doesn't mean something can't be gold.

Useful when Limited: The Semi-Stateful Bot

Howdy, one of the most popular Slack bots

A second type of bot is the semi-stateful one. Think Slack bots, those great little co-workers that can automate asynchronous communication, plan meetings or collect lunch orders (if you have no idea what Slack is, or how bots became its cool new kid, you can check out one of its most-used bots over here).

What characterises these bots is that they are terribly effective for a limited scope of tasks. We can safely assume that semi-statefuls will bloom in the next few years: they are easy to set up (they usually work out of the box) and quickly turn into game-changers in the workplace when it comes to communicating more effectively. When used for onboarding, for example, they can simplify the gargantuan task of jumping into a project and having to go through endless boring documentation.
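For contrast with the stateless sketch above, here is a minimal semi-stateful bot in Python. It remembers just enough to finish one narrow job, collecting lunch orders, and nothing more. The commands and in-memory storage are invented for illustration and don't correspond to any real Slack API:

```python
orders = {}  # the bot's entire "memory": user -> lunch order

def handle(user, text):
    """Handle one message within the narrow 'lunch orders' task."""
    if text.startswith("order "):
        orders[user] = text[len("order "):]
        return "Got it, {}: {}".format(user, orders[user])
    if text == "summary":
        return "\n".join("{}: {}".format(u, o) for u, o in orders.items()) or "No orders yet."
    return "Try 'order <dish>' or 'summary'."

print(handle("ana", "order falafel wrap"))  # Got it, ana: falafel wrap
print(handle("tom", "order ramen"))         # Got it, tom: ramen
print(handle("ana", "summary"))             # one line per remembered order
```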

Semi-stateful bots are not exclusive to the office, of course. Another area that can greatly benefit from them is healthcare. Their effectiveness is proven: a bot can send reminders, track medicine or food intake, answer medical questions and act as a general health coach, and rock at it. But, of course, bots can also be used for evil. Social network spamming and profile cloning are unavoidable dangers, as technology has no real morals of its own.

Now, I bet you don’t think of him as a bot. But he is definitely one:

Clippy makes everyone uncomfortable.

When Bots go Wrong

You could argue that Microsoft Office's Clippy was a bad UX idea from the get-go: enabled by default, it opened with the program every. Single. Time. And then it would ask if you needed help writing a letter whenever you typed the word "Dear". However, one of the biggest reasons Clippy failed was that stakeholders paid little attention to what researchers were saying about what an assistant like this should look like. Had he been more human and humorous, history could have been different.

This brings to attention a very important fact: we like to treat computers like people. We use the same standards of politeness, gender stereotypes, teamwork and reciprocity with them as we do with our fellow biological peers. Now, how human should we make our bots? Even if we could make them fully human, there's a limit we should be aware of. This limit (or rather, range) is called the Uncanny Valley, and it marks the point at which human-likeness in machines turns into creepiness:

The dip in the graph represents the uncanny valley. Source: Heinakroon

What This Means for UX

We know from several studies (and a bunch of funny personal anecdotes) that most users build a relationship with their bots. We also know that interfaces are getting more and more conversational.

In conversational UIs, personality is the new UX. The entire app experience is potentially reduced to a few lines of text, with interesting consequences: on the one hand, microcopy becomes king. On the other, this king poses some serious challenges. More and more companies are hiring writers and comedians to script engaging bots; it's not only about what you say, but how you say it, and what the person at the other end actually feels.

Which takes us to supervised AI, something that is, at this point, both useful and convenient. We are still far from a truly conversational exchange (we will see why), but we still want to represent our brand fairly. What we see in ventures like Facebook's M is a teaming of people and bots, working together to create a solid system. The bot gathers the information for an eventual contact with a human rep. As the person responds, the machine learns.

Stateful Bots: Machine Learning and Why Language is Such a Difficult Nut to Crack

In the movie "Her", Samantha is an OS capable of communicating and reasoning

I will keep this short because I could actually write about machine learning and neural networks for years (so let’s not get me started.)

What, then, is machine learning? Basically, if you want to teach a machine (a program) how to recognise cats in pictures, you need to feed it a ton of photos of cats and not-cats. After a while, if you give the same machine an image, any image, it should tell you whether it contains a cat.
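As a rough sketch of that cat example, here is a tiny classifier built with scikit-learn. The feature vectors are randomly generated stand-ins for real, labelled images, so the numbers are purely illustrative:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Pretend each image has already been reduced to a 64-number feature vector
# (pixel intensities, edge histograms, etc.). Here we fake that step.
rng = np.random.default_rng(0)
cat_features = rng.normal(loc=1.0, size=(200, 64))       # "cat" examples
not_cat_features = rng.normal(loc=-1.0, size=(200, 64))  # "not-cat" examples

X = np.vstack([cat_features, not_cat_features])
y = np.array([1] * 200 + [0] * 200)  # 1 = cat, 0 = not-cat

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("Is this a cat?", bool(model.predict(X_test[:1])[0]))
print("Accuracy on images it has never seen:", model.score(X_test, y_test))
```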

It sounds a bit un-glamorous, but it’s actually fascinating. Machines can learn things, recognise patterns and make decisions in a human-like way. They do perfectly fine with objects and faces, yet there’s one thing the machine can’t do just yet: Converse.

Human language is a marvel. Truly. It’s a compact and effective system, but it has one unavoidable condition: It relies on the assumption of intelligence and a common social and physical world.

How do you think most machines would respond to the following question?

The trophy would not fit in the brown suitcase because it was too big. What was too big?

  1. The trophy
  2. The suitcase

Unless the machine knows the size of both objects and the relationship between them, it will get stuck trying to make sense of the grammar and may well give the wrong answer. Of course, a more powerful machine should be able to consider these variables, but you can already see the complexity hidden in such a simple question.

Bots need to speak and think like us, but bots don't live in a physical world like we do, and they certainly don't have years to learn it the way humans do.

For bots to be like Samantha from Her, machines also need to understand how humans work on an emotional level. Which, let's agree, is a fugly hot mess for us as well. This means detecting and analysing emotions, extracting concepts from dialogue and showing empathy.

Currently, the only way to have a conversational interface that works is by using a combination of symbolic processing and machine learning. We feed it some basic rules (think grammar rules, for example, or social rules, ethics, whatever floats your bot... I mean, boat), and then let the bot deal with the almost-infinite number of cases that spawn from them. Imagine providing photos of cats, but where "cats" means every single topic humans (and potentially non-humans) can refer to. Easy, right?
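As a hedged sketch of that hybrid idea in Python: hand-written symbolic rules handle the cases we can enumerate, and a small learned classifier deals with the rest. The rules, intents and training phrases below are invented for illustration:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# 1. Symbolic layer: explicit, auditable rules we wrote by hand.
RULES = {
    "thanks": "You're welcome!",
    "bye": "Goodbye! Talk soon.",
}

# 2. Statistical layer: a tiny intent classifier trained on example phrases.
training_phrases = [
    "what's the weather like", "is it going to rain today",
    "book me a table for two", "reserve a restaurant tonight",
]
intents = ["weather", "weather", "booking", "booking"]

classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(training_phrases, intents)

def reply(utterance):
    lowered = utterance.lower()
    for keyword, canned_answer in RULES.items():  # rules win when they apply
        if keyword in lowered:
            return canned_answer
    intent = classifier.predict([utterance])[0]   # otherwise, let the model guess
    return "(routing to the '{}' skill)".format(intent)

print(reply("thanks a lot!"))           # the rule fires
print(reply("will it rain tomorrow?"))  # the classifier guesses 'weather'
```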

What Now?

Image credit: Mark Strozier

In machine learning as in life, the more you know, the more you don’t know. Where would you start if you had to teach a fully-capable being the entire experience of being human?

And even if you manage to, what abilities would you give your bot? Would you talk to it, or use it? Science fiction has been exploring the topic for a century, yet we play around with A.I. like it's some fun, harmless hobby. But, truly, what are the dangers of bots used in combination with captology? Should military robots have autonomous lethal firing power?

As "What does your bot do?" seems to be the opening question to any interaction around Silicon Valley, we should not forget to ask ourselves what the purpose of bots should be. The ultimate purpose.

I believe they can help us become better, happier humans. Not saying they should, just… they could.

Tagged with Artificial Intelligence, Bots, Chatbots, Language, Machine Learning

Yisela Alvarez Trentini lives and works in Frankfurt am Main building useful things. You should follow her on Twitter