Overview
The Bot Testing feature is a critical quality assurance tool within the Workflow module that helps you ensure the reliability and accuracy of your chatbot's conversational abilities. Its primary purpose is to automatically check for "intent clashes"—scenarios where a user's query could mistakenly trigger the wrong conversation flow because two or more intents have overly similar training phrases.
By simulating user inputs, this tool analyzes the content you've uploaded to the chatbot and identifies potential conflicts that could negatively impact the user experience. It allows you to proactively find and fix issues where the bot might get confused between, for example, a user asking for their "account balance" versus their "last account statement."
The feature provides flexibility by allowing you to test individual intents, select groups of top intents for faster checks, and filter the tests based on the last modification date. The results are presented in a clear report that can be exported via email for further analysis.
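The console performs the clash analysis automatically, but the underlying idea is easy to sketch. Below is a minimal illustration using a simple token-overlap (Jaccard) measure and a made-up 0.6 threshold; the function names and the similarity model are assumptions for illustration only, not the platform's actual algorithm:

```python
from itertools import combinations

def jaccard(a: str, b: str) -> float:
    """Token-overlap similarity between two phrases (0.0 to 1.0)."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb)

def find_clashes(intents: dict, threshold: float = 0.6) -> list:
    """Return pairs of intents whose most similar training phrases exceed the threshold."""
    clashes = []
    for (name_a, phrases_a), (name_b, phrases_b) in combinations(intents.items(), 2):
        score = max(jaccard(p, q) for p in phrases_a for q in phrases_b)
        if score >= threshold:
            clashes.append((name_a, name_b, round(score, 2)))
    return clashes

intents = {
    "account_balance": ["what is my account balance", "show my account balance"],
    "account_statement": ["show my last account statement", "what is my account statement"],
}
print(find_clashes(intents))  # flags the balance/statement pair as a clash
```

The two phrase sets above differ by a single word, so any reasonable matcher would struggle to keep them apart; that is exactly the kind of conflict the Bot Testing report surfaces.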
How to Use Bot Testing
This guide explains the process of using the Bot Testing tool to validate your chatbot's intents.
1 Navigating to the Feature
From the main console menu, navigate to Workflow.
Click on the Bot Testing tab in the secondary navigation bar.
2 Configuring and Running a Test
The Bot Testing interface lets you configure the test parameters.
Select Intents to Test: You can choose how to run the test:
Test Individual Intents: Select one or more specific intents from a list to check for clashes among a small, targeted group.
Test Top Intents: Choose this option for a faster test that focuses on the most frequently used or recently modified intents.
Start the Test: Once you have selected the intents, click Start Testing (or the similarly labeled button) to initiate the process. The system then analyzes the training phrases of the selected intents to identify potential overlaps.
(Optional) Stop Test: You have the option to stop a test in progress if needed.
3 Reviewing Results and Taking Action
After the test completes, the results are displayed on screen, highlighting any intents found to be clashing.
The report will show which intents are in conflict and provide insights into why they are clashing (e.g., similar keywords or phrasing).
Export Report: You can choose to have the detailed test report exported to your email for offline review and record-keeping.
Fixing Clashes: Based on the report, navigate back to the Flows tab, open the conflicting intents, and edit their Training Questions to be more distinct and specific.
Common Examples and Use Cases
Post-Content Upload: After you have created several new flows or used "Extract Web FAQs" to add dozens of intents, run a Bot Test to ensure the new content doesn't conflict with existing flows.
Debugging User Issues: A user reports that when they ask "how do I apply for a loan?", the bot sometimes provides information about checking loan status. Running a Bot Test on the "apply for loan" and "check loan status" intents can quickly identify the source of this confusion.
Pre-Deployment Sanity Check: Before publishing your changes from the Development environment to Production, running a comprehensive Bot Test is a crucial final step to catch any regressions or new conflicts.
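The loan scenario above shows why clashes happen: the two requests share most of their wording. A rough token-overlap comparison (a stand-in for whatever matching the platform actually performs, with made-up sample phrases) makes the overlap visible:

```python
def overlap(a: str, b: str) -> float:
    """Fraction of shared tokens between two phrases (Jaccard similarity)."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb)

apply_phrases = ["how do i apply for a loan", "i want to apply for a loan"]
status_phrases = ["how do i check my loan status", "what is the status of my loan"]

# Print every cross-intent pair with its similarity score
for p in apply_phrases:
    for q in status_phrases:
        print(f"{overlap(p, q):.2f}  {p!r} vs {q!r}")
```

The highest-scoring pair ("how do i apply for a loan" vs "how do i check my loan status") shares the entire question opener plus the keyword "loan", which is precisely the confusion the user reported.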
Best Practices
Test Iteratively: Don't wait until you've built the entire bot to run a test. Test small, related groups of intents as you build them. This makes it much easier to find and fix clashes.
Make Training Phrases Distinct: If a clash is detected, the best solution is to make the training phrases for the conflicting intents more unique. Add context-specific keywords to differentiate them.
Use the "Last Modified" Filter: Regularly test the intents that have been recently changed. This ensures that recent edits haven't unintentionally created new problems with older, stable intents.
Regularly Audit Your Bot: Schedule a full Bot Test periodically (e.g., once a month) to maintain the overall health and accuracy of your chatbot as it evolves.
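To see why adding context-specific keywords works, compare phrase overlap before and after a rewrite. The `overlap` helper and the sample phrases below are illustrative assumptions, not platform behavior:

```python
def overlap(a: str, b: str) -> float:
    """Fraction of shared tokens between two phrases (Jaccard similarity)."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb)

# Before: two intents phrased almost identically
before = ("show my account balance", "show my account statement")
# After: context-specific keywords added to each intent's phrasing
after = ("show my current available balance", "download my monthly statement document")

print(f"before: {overlap(*before):.2f}")  # high overlap, likely clash
print(f"after:  {overlap(*after):.2f}")   # low overlap, distinct intents
```

Even a small rewrite that swaps shared filler words for intent-specific vocabulary drops the overlap sharply, which is the effect you want to see when you rerun the Bot Test after editing Training Questions.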
Common Mistakes
Never Testing: Building a bot with hundreds of intents without ever running a clash test is a primary cause of poor and unpredictable bot performance.
Ignoring Test Results: Running a test and seeing the clash warnings but not taking action to resolve them. The tool is only effective if its results are used to improve the bot.
Creating Vague Intents: Building intents with very generic, single-word training phrases (like "help," "info," or "details") is a guaranteed way to create clashes with more specific intents.
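Single-word training phrases clash so reliably because their entire vocabulary is contained inside the phrases of more specific intents. A small sketch (the `is_contained` check and sample phrases are hypothetical, not a platform API) shows the problem:

```python
def is_contained(generic: str, phrase: str) -> bool:
    """True if every token of the generic phrase appears in the specific phrase."""
    return set(generic.lower().split()) <= set(phrase.lower().split())

specific_phrases = [
    "help me reset my password",
    "i need info about my account details",
]

# Each generic one-word phrase is swallowed whole by a specific phrase
for generic in ["help", "info", "details"]:
    hits = [p for p in specific_phrases if is_contained(generic, p)]
    print(generic, "is contained in", hits)
```

Because a phrase like "help" is a subset of nearly every support question, the bot has no signal to prefer the generic intent over the specific one, guaranteeing the kind of clash the Bot Testing report flags.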