Rasa: Implementing Error Flows and Fallback Strategies
In our conversations with others, we are accustomed to dealing with communication breakdowns. When difficulties arise, we negotiate meaning with the people we are communicating with, drawing on a whole body of background knowledge and context to make assumptions about intentions and meaning. We also have a repertoire of strategies to express ourselves and to help others express themselves.
Conversational agents need to handle communication breakdowns gracefully, as well. We have to prepare our agents before deploying them (and while deployed) to understand what people are saying and respond appropriately. We can also guide our bot’s conversation partners by prompting them towards responses with specific vocabulary or syntax. Even so, problems may still come up. People express themselves in different, sometimes surprising, ways. Additionally, those using our agent may not know how to accomplish what they want and may make mistakes we haven’t accounted for.
Rasa provides tools to implement different fallback policies to handle such difficulties in communication. We will look at a few examples in a simple bot to get a better understanding of how to approach fallbacks and error flows in Rasa.
For this project, we will just use the base project Rasa creates for us.
rasa init
We can talk about our mood with this bot. For our purposes today, we will say “okayish.”
> Your input -> Hi
> Hey! How are you?
> Your input -> Okayish
>
Nothing. The bot has no response. Let’s try some other inputs.
> Your input -> Hi
> Hey! How are you?
> Your input -> Okayish
> Your input -> okay
> Your input -> I’m okay
> Your input -> Doing okay
>
A user would probably have left by now. They might think the bot doesn’t work or even that they did something wrong. Regardless, if a user leaves, the bot probably cannot accomplish its purpose. Fallback strategies provide a way to keep the user engaged, help them through the problem, and accomplish whatever goal they initially had.
A good message here should probably
1) tell the user that the bot thinks something went wrong
2) suggest how the user might move forward
We’ll add a simple message that accomplishes these to our conversation. In domain.yml, add the following lines to the responses section:
utter_default:
- text: "I didn't quite get that. Could you rephrase?"
After we save and retrain Rasa (i.e., rasa train), the bot performs a bit better with our conversation. It gives the user a clue that rephrasing their response might solve the problem. When trying some different responses, though, our bot says the same thing. If a user gets to this point, they are going to quickly get tired of trying and move on.
Your input -> hi
Hey! How are you?
Your input -> okayish
I didn't quite get that. Could you rephrase?
Your input -> okay
I didn't quite get that. Could you rephrase?
Your input ->
To create a more robust response, we need to use actions. First, though, we need to comment out utter_default in domain.yml. It will conflict with the other strategies.
# utter_default:
# - text: "I didn't quite get that. Could you rephrase?"
We are going to create another fallback message using actions. Actions in Rasa let us write custom Python logic and hook into the platform’s underlying functionality. In domain.yml, we need to add an entry for the action we will create.
actions:
- action_default_fallback
Also, in rules.yml, we need to add a rule to call our action when the bot doesn’t recognize an intent well. nlu_fallback is not something that we explicitly create. Rasa has already set this up for us.
- rule: Simple Fallback
  steps:
  - intent: nlu_fallback
  - action: action_default_fallback
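For the nlu_fallback intent to be predicted at all, the NLU pipeline needs a FallbackClassifier component. The config.yml that rasa init generates typically includes one already; if yours doesn’t, the entry looks like this (0.3 is a commonly used threshold, not a required value):

```yaml
pipeline:
  # ... other NLU components ...
  - name: FallbackClassifier
    threshold: 0.3
    ambiguity_threshold: 0.1
```

Whenever the top intent’s confidence falls below the threshold, the classifier replaces it with nlu_fallback, which is what our rule matches on.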
Now, we need to build action_default_fallback in actions.py.
from typing import Any, Dict, List, Text

from rasa_sdk import Action, Tracker
from rasa_sdk.executor import CollectingDispatcher
from rasa_sdk.events import UserUtteranceReverted, ConversationPaused


class ActionDefaultFallback(Action):
    def name(self) -> Text:
        return "action_default_fallback"

    def run(
        self,
        dispatcher: CollectingDispatcher,
        tracker: Tracker,
        domain: Dict[Text, Any],
    ) -> List[Dict[Text, Any]]:
        dispatcher.utter_message(text="I'm having trouble understanding. Could you rephrase that?")
        return [ConversationPaused(), UserUtteranceReverted()]
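A quick operational note: custom actions run in a separate action server process. If the bot never seems to reach our action, check that the action endpoint in endpoints.yml is uncommented:

```yaml
action_endpoint:
  url: "http://localhost:5055/webhook"
```

Then start the server with rasa run actions in a second terminal before starting rasa shell.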
Our conversation with our bot will actually be very similar to what we just experienced. Our message has changed slightly, but the fallback message just repeats.
Your input -> hi
Hey! How are you?
Your input -> okayish
I’m having trouble understanding. Could you rephrase that?
Your input -> okay
I’m having trouble understanding. Could you rephrase that?
Your input ->
However, because we are using an action, we could actually implement other strategies instead of just repeating the message. For example, we could write a function that would facilitate a handoff to a human agent.
In our case, though, we will develop a 2-stage fallback strategy based on Rasa’s documentation.
Rasa provides the 2-stage fallback for us. We don’t need to necessarily make it ourselves. Instead, we just need to set up our actions and rules a particular way to trigger it properly. The previous action – action_default_fallback – is just one step.
The name “Two-Stage Fallback” seems somewhat inaccurate because the full process can run through three to five steps. First, when the agent can’t tell what the user wants, it responds with a list of the intents with the highest confidence scores. If the user indicates that none match, Rasa uses utter_ask_rephrase to give them a chance to explain their goal another way. If the breakdown persists, Rasa again offers a list of intents based on the rephrased response. Finally, if the problem still can’t be resolved, action_default_fallback is called. We could terminate the conversation at that point if we wanted.
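To make the escalation order concrete, here is a minimal sketch (plain Python, not Rasa code) that traces which steps run depending on whether the user affirms one of the suggested intents at each stage:

```python
def two_stage_fallback(first_affirmed, rephrase_affirmed):
    """Trace one fallback episode; the booleans say whether the user
    picked a suggested intent at each affirmation stage."""
    steps = ["action_default_ask_affirmation"]      # stage 1: suggest likely intents
    if first_affirmed:
        return steps                                # resolved; conversation continues
    steps.append("utter_ask_rephrase")              # stage 2: ask the user to rephrase
    steps.append("action_default_ask_affirmation")  # suggest intents again
    if rephrase_affirmed:
        return steps                                # resolved on the second try
    steps.append("action_default_fallback")         # final step: give up gracefully
    return steps

# The longest path runs through all four steps:
print(len(two_stage_fallback(False, False)))  # 4
```

This is only a model of the control flow; Rasa’s built-in action_two_stage_fallback handles the actual state tracking for us.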
Before developing our second action, let’s fix up action_default_fallback, since we now know it will come last. Change the message to “I don’t think that I can help.” Also, remove UserUtteranceReverted(); the remaining ConversationPaused() event pauses the conversation, so the agent ignores any further dialogue.
class ActionDefaultFallback(Action):
    def name(self) -> Text:
        return "action_default_fallback"

    def run(
        self,
        dispatcher: CollectingDispatcher,
        tracker: Tracker,
        domain: Dict[Text, Any],
    ) -> List[Dict[Text, Any]]:
        dispatcher.utter_message(text="I don't think that I can help.")
        return [ConversationPaused()]
Now, we have to add the other action and update our fallback rule.
Comment out the simple response in rules.yml.
# - rule: Simple Fallback
# steps:
# - intent: nlu_fallback
# - action: action_default_fallback
Add the 2-stage fallback rule to rules.yml.
- rule: Two-Stage Fallback
  steps:
  - intent: nlu_fallback
  - action: action_two_stage_fallback
  - active_loop: action_two_stage_fallback
In domain.yml, we need to update the actions section with the new action we will create: action_default_ask_affirmation.
actions:
- action_default_fallback
- action_default_ask_affirmation
Finally, we need to add the other action. This is a fairly large action with several important components, which are explained below.
class ActionDefaultAskAffirmation(Action):
    def name(self):
        return "action_default_ask_affirmation"

    def run(self, dispatcher, tracker, domain):
        intents = tracker.latest_message["intent_ranking"][1:]
        message = "Sorry, I'm still having trouble. I've found the following things that seem relevant. What would you like to do?"
        buttons = [
            {
                "title": intent["name"],
                "payload": "/{}".format(intent["name"]),
            }
            for intent in intents
        ]
        buttons.append({
            "title": "Something else…",
            "payload": "/out_of_scope",
        })
        dispatcher.utter_message(text=message, buttons=buttons)
        return []
This returns the list of intents ranked by confidence score, skipping the first entry (the nlu_fallback intent itself). You can adjust how many intent options display: [1:] returns all of the remaining intents, while [1:2] would return only the single highest-confidence intent.
intents = tracker.latest_message["intent_ranking"][1:]
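To see what the slice does, here is a standalone example with a made-up intent_ranking payload shaped like the one the tracker returns (the names and scores are invented for illustration):

```python
# Hypothetical ranking; index 0 is nlu_fallback itself, so we skip it.
intent_ranking = [
    {"name": "nlu_fallback", "confidence": 0.78},
    {"name": "affirm", "confidence": 0.12},
    {"name": "mood_great", "confidence": 0.06},
    {"name": "greet", "confidence": 0.04},
]

all_suggestions = [i["name"] for i in intent_ranking[1:]]   # every real intent
top_suggestion = [i["name"] for i in intent_ranking[1:2]]   # only the best one

print(all_suggestions)  # ['affirm', 'mood_great', 'greet']
print(top_suggestion)   # ['affirm']
```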
The message variable will appear above the intents. It is the prompt that asks users which intent matches best with what they want to do.
message = " Sorry, I’m still having trouble. I’ve found the following things that seem relevant. What would you like to do?"
We can add specific intents to the list. In our case, we are adding an “out of scope” intent that, from the bot’s perspective, means the user’s utterance is not something it understands. A user certainly doesn’t want to be told they are “out of scope,” so we have given the button the friendlier text “Something else…” If we were limiting intents in a more complex app, we might want to include a main menu or submenu option here.
buttons.append({
    "title": "Something else…",
    "payload": "/out_of_scope"
})
With this setup, the chatbot works much better. The assistant progressively escalates the fallback, offering us chances to change paths or alter our response.
Your input -> hi
Hey! How are you?
Your input -> okayish
? Sorry, I’m still having trouble. I’ve found the following things that seem relevant. What would you like to do? 8: Something else… (/out_of_scope)
I'm sorry. I didn't understand. Could you rephrase?
Your input -> okay
? Sorry, I’m still having trouble. I’ve found the following things that seem relevant. What would you like to do? 8: Something else… (/out_of_scope)
I don't think that I can help.
Your input -> okay
Much like with “out of scope,” you might want to rename your intents into more user-friendly text for the options. We can make a dictionary that maps the intents to the button text we want: the intent name is the key, and the value is the button text. Let’s also limit the options to the single highest-confidence intent by changing [1:] to [1:2]; with the “Something else…” button appended, the user then sees two choices in total.
class ActionDefaultAskAffirmation(Action):
    def name(self):
        return "action_default_ask_affirmation"

    def run(self, dispatcher, tracker, domain):
        intents = tracker.latest_message["intent_ranking"][1:2]
        message = "Sorry, I'm still having trouble. I've found the following things that seem relevant. What would you like to do?"
        intent_options = {
            "greet": "Greet me",
            "goodbye": "End our chat",
            "affirm": "Say 'Yes'",
            "deny": "Say 'No'",
            "mood_great": "Good mood",
            "mood_unhappy": "Unhappy mood",
            "bot_challenge": "Test me",
        }
        buttons = [
            {
                "title": intent_options[intent["name"]],
                "payload": "/{}".format(intent["name"]),
            }
            for intent in intents
        ]
        buttons.append({
            "title": "Something else…",
            "payload": "/out_of_scope",
        })
        dispatcher.utter_message(text=message, buttons=buttons)
        return []
Not all of the intent_options will display on every error; only the ones with the highest confidence will. As mentioned before, you can limit how many options display. Even though we’ve limited the intents, we still need to map all of them to button text, because we don’t know in advance which ones will match a user’s utterance.
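One defensive tweak worth considering: if a new intent is ever added to the domain but not to intent_options, the dictionary lookup raises a KeyError at runtime. Using .get() with the raw intent name as the default avoids the crash (a sketch; the mapping below is a shortened version of ours):

```python
intent_options = {
    "greet": "Greet me",
    "goodbye": "End our chat",
}

def button_title(intent_name):
    # Fall back to the raw intent name if we forgot to map it,
    # so an unmapped intent shows an ugly label instead of crashing.
    return intent_options.get(intent_name, intent_name)

print(button_title("greet"))       # Greet me
print(button_title("mood_great"))  # mood_great (unmapped, but no KeyError)
```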
Now, action_default_ask_affirmation will display options with clearer wording for users.
Your input -> hi
Hey! How are you?
Your input -> okayish
? Sorry, I’m still having trouble. I’ve found the following things that seem relevant. What would you like to do?
» 1: Say 'Yes' (/affirm)
2: Something else… (/out_of_scope)
Type out your own message...
Rasa’s built-in functionality allows a lot of flexibility to customize a fallback sequence that best fits the needs of your assistant. Having such a policy may help users work through difficulties quickly to accomplish whatever goal they initially came to the bot with. This ultimately means that the chatbot will be more effective at accomplishing its own purpose.
Helpful Resources
Rasa Documentation – Fallback and Human Handoff
Rasa Blog – Failing Gracefully with Rasa