3 pitfalls when designing an AI assistant

I've got something to tell you.

I hate making the same mistake twice. You too, right?

It sounds cheesy, like my clickbait title, but in this industry, you have to be comfortable with managing mistakes.

Designing an AI assistant is like building a toy for a baby: you can't predict every way it will be used. And trust me, after years of iteration, I'm still impressed by human nature when it comes to interacting with a machine. Very creative.

Today, you will learn 3 of my secrets for patching common pitfalls, from the research phase to the iteration phase of the design process.

Let's delight our users, shall we?

Painting in the dark: no training data

Your job as a conversational designer is to close the gap between intent and action. Users formulate the same intent in a great variety of ways; thanks to machine learning, it's getting simpler to detect an intention without having to know every single exact formulation, but you still have to train the model on patterns and language styles.

The larger your scope, the bigger your training set, and the harder it will be to maintain. For a customer support assistant, which covers 80% of my projects (where the money is right now, btw), you can have 100+ intents to handle and thousands of utterances in your training set to maintain so that you avoid confusion and disparity between intents.
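To make that maintenance work concrete, here's a minimal sketch (my own illustration using scikit-learn, not any specific bot platform): train a toy intent classifier and surface the intent pairs that get mistaken for each other on held-out utterances, which is exactly where confusion and disparity creep in.

```python
from collections import Counter

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict
from sklearn.pipeline import make_pipeline

# Tiny toy training set: two utterances per intent.
utterances = [
    "where is my package", "track my order please",
    "cancel my order", "stop my order from shipping",
    "i want my money back", "how do i get a refund",
]
intents = [
    "track_order", "track_order",
    "cancel_order", "cancel_order",
    "refund", "refund",
]

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)

# Cross-validated predictions reveal which intents get mistaken
# for each other on utterances the model didn't train on.
predicted = cross_val_predict(model, utterances, intents, cv=2)

confusions = Counter(
    (truth, guess)
    for truth, guess in zip(intents, predicted)
    if truth != guess
)
for (truth, guess), count in confusions.most_common():
    print(f"{truth} confused with {guess}: {count} utterance(s)")
```

On a real project, the same idea scales to those 100+ intents: run it after every training-set change and review the top confused pairs.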

A common pitfall is to start an AI project without a dataset. At this point, it's a meme to me, since it happens every time.

Let me introduce the next big enemy: human bias.

As a conversation designer, you will have to train your model. To work around the missing dataset, you will often reach for anything that comes close to a user utterance:

  • Google searches: Too short, not in a conversational style
  • E-mail logs: Too long but in a conversational style

Neither is optimal, and you will have to rewrite them to get a quality set. So you will decide that it would be faster to write those utterances all by yourself, maybe with the help of the client.

Congrats, you've opened the door to your enemy: human bias. Bad model performance, and a degraded experience down the line.

If your conversational project is all about detecting intent and you have little to no research material to train your model, I recommend starting with a dumb bot, but in a smart way.

Design a bot with one mission: listen to the user's intent and redirect it to the primary resource. Add value where possible by offering a shortcut to a human operator, estimated response times, or additional resources.
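Here's a minimal sketch of that dumb-but-smart bot, with hypothetical copy, a placeholder help-center URL, and an in-memory list standing in for a real datastore. The one real job is capturing every raw utterance verbatim:

```python
from datetime import datetime, timezone

# Hypothetical copy and placeholder URL: adapt to your project.
FALLBACK_REPLY = (
    "I'm routing you to a human agent (current wait: ~5 min). "
    "Meanwhile, our help center may have your answer: https://example.com/help"
)

utterance_log = []  # stand-in for a real datastore

def handle_turn(user_id: str, text: str) -> str:
    # 1. Capture the utterance verbatim: this is the bot's real mission,
    #    because it becomes tomorrow's training data.
    utterance_log.append({
        "user": user_id,
        "text": text,
        "at": datetime.now(timezone.utc).isoformat(),
    })
    # 2. Redirect to the primary resource and offer a human shortcut.
    return FALLBACK_REPLY

print(handle_turn("u42", "my invoice is wrong, who do i talk to?"))
```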

Why will making something dumb help you build a smart assistant?

  • Real end-user data, in the right medium: you will collect one of the best training sets possible
  • Focus on impact: you will know what users actually ask, so you won't have to guess the main use cases to cover
  • Acknowledging that AI isn't magic: it will also educate your client that you need quality data to design smart interfaces.

For the last part, you will thank me later.

Forced conversational style

One of the critical tips everybody gives is to have a compelling onboarding: very explanatory, with a clear tone of voice, maybe some interaction that hints to users that we're here to have a conversation.

It's good advice, of course, until you meet your end-user.

"Why he said this?" "The assistant clearly said that he couldn't do that"... That's what I hear when we're user testing or monitoring conversations.

Problem? Users don't read, they scan.

This well-known web design principle is in full effect on conversational interfaces too.

We've also seen that users start typing and even send messages without waiting for your beautifully crafted onboarding to finish. They don't have to be kind to a bot; they have jobs to be done. Time is the main asset.

So the challenge is to do our job (educating users on what they can do and how, establishing the tone of voice) while respecting the user's job: getting an answer.

Solution? Split-testing and delayed onboarding.

Since every cohort of users is different, there's no magic trick that fits all. A/B testing is your ally.

Don't put too much pressure on your intuition; test radical choices: a short onboarding to monitor users' intentions under minimal guidance, or visuals instead of words in a carousel.
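A minimal sketch of how such a split test could be wired, with hypothetical variant copy: hashing the user ID keeps each user on the same onboarding variant across sessions, so you can compare outcomes per variant later.

```python
import hashlib

# Hypothetical onboarding copy for the two variants under test.
VARIANTS = {
    "A": "Hi! I'm the support assistant. Ask me anything about your order.",
    "B": "Hi! Just type your question.",  # radical choice: minimal guidance
}

def onboarding_variant(user_id: str) -> str:
    # Deterministic bucketing: the same user always lands in the same variant.
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 2
    return "A" if bucket == 0 else "B"

variant = onboarding_variant("u42")
print(variant, "->", VARIANTS[variant])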

Then, consider delaying your onboarding, or I should rather say, stretching it over multiple turns.

  • Start with a minimalistic welcome message: just enough guidance to prevent off-scope intents.
  • Then get your users to discover additional use cases/features during the following turns, as sketched below.
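A minimal sketch of that stretched onboarding, with hypothetical tips and copy: one capability is revealed per turn instead of front-loading everything into the welcome message.

```python
# Hypothetical per-turn tips; None means "no tip on this turn".
TURN_TIPS = [
    None,  # turn 1: the welcome message already carries the guidance
    "Tip: you can also track a parcel, just paste the tracking number.",
    "Tip: type 'agent' at any time to reach a human.",
]

def reply_with_tip(turn_index: int, answer: str) -> str:
    # Append at most one onboarding tip to the answer for this turn.
    tip = TURN_TIPS[turn_index] if turn_index < len(TURN_TIPS) else None
    return f"{answer}\n\n{tip}" if tip else answer

print(reply_with_tip(1, "Your order ships tomorrow."))
```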

But what if my user quits the conversation just after the first turn?

  • If the user got their answer: congrats, you've done your job right.
  • If the user had a bad experience: try adding more guidance to your welcome message and iterate on your fallback experience.

Back your intuition

Every designed creation (every creation, then?) is composed of design decisions.

Those are backed by a multitude of factors: experience, intuition, data, shared best practices.

Since we still haven't mastered UX for bots, you will rely, for the most part, on your intuition and the data you generate.

Intuition is a great way to start, since it flatters our ego when things work the way we intended. But you have to acknowledge the mental models and biases that are common pitfalls for every UX project.

Data is king, but it demands time and effort. Despite its inherent qualities, it's still not a priority for stakeholders, so you have to battle to get buy-in.

The third pitfall is that we don't do enough to anticipate our hypotheses, our design decisions.

Since there's a shortage of best practices backed by years of data, intuition is at the forefront. And at this game, every stakeholder will participate with their own.

Adopt the right mindset, track your hypotheses before launching, and focus on impact: not every decision needs to be tracked from the start.

So often, I've been trapped because I hadn't tracked the right event, so I had no data to conclude anything, and we kept navigating in the dark.
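A minimal sketch of what "track your hypotheses before launching" could look like (hypothetical hypothesis and event names): every tracked event must belong to a declared hypothesis, so you never launch a design decision without the data you'll need for a verdict.

```python
import json
from datetime import datetime, timezone

# Hypothetical hypothesis registry, declared before launch.
HYPOTHESES = {
    "H1": {
        "claim": "A shorter welcome message reduces off-scope first utterances",
        "events": ["welcome_shown", "first_utterance", "fallback_triggered"],
    },
}

def track(event: str, hypothesis: str, **payload) -> None:
    # Refuse events no hypothesis declared: if it isn't tied to a
    # question you want answered, you probably don't need it yet.
    assert event in HYPOTHESES[hypothesis]["events"], f"untracked event: {event}"
    # In production this would feed your analytics pipeline;
    # printing keeps the sketch self-contained.
    print(json.dumps({
        "event": event,
        "hypothesis": hypothesis,
        "at": datetime.now(timezone.utc).isoformat(),
        **payload,
    }))

track("welcome_shown", "H1", variant="B")
track("first_utterance", "H1", in_scope=True)
```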

Plus, document your findings so that you can share them with your team and apply them as best practices for the next sprint/project.

Conclusion

We designers have to make sure our mental model of how an assistant should act is in phase with users' expectations. I often see a romanticized version of bots, straight from a sci-fi movie. Get your UX basics right; a large portion of UX principles applies to bot design.

We also have to contend with a trust debt, thanks to early initiatives that didn't deliver on users' expectations: from flip phones' bad dictation features to the first automated customer services with 3D avatars.

Don't forget to share your findings with the community: our industry needs you and your expertise. Don't be so humble; I'm sure you have something to share. Join a community and start interacting now!
