bottest.ai changelog

Improved Evaluation Accuracy

cover image

We've reworked how all Evaluations are performed across bottest.ai for improved accuracy and reliability. Now it's easier than ever to go from consistent, high-quality Evaluations to direct, actionable feedback for your team!

Introducing Frameworks

Success Criteria no longer drive Evaluations; they've been replaced with Evaluation Frameworks. These Frameworks can be modified in your Suite settings, giving you full transparency into the steps and process used to determine whether an Evaluation passes or fails. You can also provide feedback directly in the editor to regenerate a new version of the Framework.

If any Evaluations pass or fail when they shouldn't, you now have the option to quickly provide feedback on the Framework (which will generate a new version incorporating that feedback) and re-evaluate all other Evaluations in the Test or Suite Run with the new Framework you just generated.

Categorizing Suite Run Results

Customers have told us that it's hard to quickly see the common failure reasons across large Suite Runs with hundreds of Evaluations. You can now view an intelligent grouping of results in the Overview of Results section of each generated Suite Run Report!

Coming soon, we are going to add an integration with Linear for automatic Issue creation based on these groupings!


Bulk CSV Import of Tests

cover image

You can now import Tests from CSV straight into bottest.ai, single-turn and multi-turn alike!

This new feature is located in the Modify Suites modal, right below the existing button to "Add a new blank Suite." Simply follow the on-screen instructions and upload your CSV to have bottest.ai do the hard work of recording for you.

We use the structure and format of existing recordings in your Bot to replicate the data for each generated Test. However, since we aren't actually recording these generated Tests, they won't have any Baselines. The first Test Run (or Suite Run) that is performed on a newly generated Test will determine its first Baseline.
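
As a rough illustration, a small script like the one below could be used to prepare an import file containing both single-turn and multi-turn Tests. The column names used here (test_name, turn_number, user_message, expected_behavior) are hypothetical and only meant to show the general shape of the data; the exact format bottest.ai expects is described in the on-screen instructions of the import flow.

    import csv

    # Hypothetical column layout for illustration only -- follow the
    # on-screen instructions in the Modify Suites modal for the real format.
    FIELDS = ["test_name", "turn_number", "user_message", "expected_behavior"]

    rows = [
        # Single-turn Test: one row per Test
        {"test_name": "Refund policy", "turn_number": 1,
         "user_message": "What is your refund policy?",
         "expected_behavior": "Explains the 30-day refund window"},
        # Multi-turn Test: several rows sharing a test_name, ordered by turn_number
        {"test_name": "Order lookup", "turn_number": 1,
         "user_message": "Where is my order?",
         "expected_behavior": "Asks for the order number"},
        {"test_name": "Order lookup", "turn_number": 2,
         "user_message": "It's order #12345",
         "expected_behavior": "Reports the shipping status for that order"},
    ]

    # Write the rows out as a CSV ready to upload.
    with open("tests.csv", "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        writer.writeheader()
        writer.writerows(rows)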


Execute Suite Runs in the Cloud + Scheduling

cover image

We're excited to announce two powerful new features that give you more flexibility and control over your Test automation: Cloud Suite Runs and Scheduled Suite Runs.


Cloud Suite Runs

Leaving behind the limitations of browser-based testing, you can now execute your Test Suites directly in our cloud infrastructure. To do so, simply press the "Run Suite" button in the main Dashboard and select the Cloud option. This gives you the benefits of:

  • Freeing up your local device's resources, as you no longer need to use our Chrome extension to open tabs to execute Tests.

  • The ability to benchmark your chatbot's performance with accurate user load testing (see the sketch after this list). You can set how many parallel executions should happen at once and ensure your chatbot can withstand a specific number of concurrent users. You can configure this option in the Suite Settings.

  • Improved stability and reliability. Oftentimes, the replaying process will fail due to one-off issues on your local device (such as your browser crashing, running out of memory, or generally not being able to handle all of the concurrent Tests replaying at once). Running Tests in the cloud avoids these local failure points, increasing the stability and reliability of executions.
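
To make the parallel execution setting concrete, here is a minimal load-testing sketch of what N concurrent users look like from your chatbot's point of view. The chatbot call is stubbed out with a randomized sleep; this is purely an illustration of the concurrency idea, not bottest.ai's actual execution engine or API.

    import asyncio
    import random
    import time

    CONCURRENT_USERS = 10  # analogous to the parallel executions setting in Suite Settings

    async def simulated_chat_turn() -> float:
        """Stand-in for sending one message to the chatbot and awaiting its reply."""
        start = time.perf_counter()
        await asyncio.sleep(random.uniform(0.2, 1.0))  # replace with a real request in practice
        return time.perf_counter() - start

    async def main() -> None:
        # Fire all simulated users at once, the way parallel Test executions do.
        latencies = await asyncio.gather(
            *(simulated_chat_turn() for _ in range(CONCURRENT_USERS))
        )
        print(f"{CONCURRENT_USERS} concurrent users, worst latency {max(latencies):.2f}s")

    asyncio.run(main())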


Scheduled Suite Runs

You can now run your Suites on an automated schedule of your choosing, customizing each Suite + Environment combination to run daily, weekly, or monthly! These scheduled Suite Runs are executed in the Cloud automatically, completely hands-off and hassle-free. You can configure this for your Suites in the Suite Runs Dashboard by clicking the "Schedule Suite Run" button.

Note that if your chatbot requires authentication to access, you will need to set up Auto-Login for cloud Suite Runs to work properly.

Coming soon, we are looking to add email/Slack/etc. notifications upon the completion of a scheduled Suite Run, along with an attached PDF version of the generated report. If this seems useful to you, help us prioritize this feature here!



Auto-Login for Environments

cover image

We've received feedback that one of the biggest pain points when testing with bottest.ai is that users forget to log in to their chatbot before running a Test or Suite, causing the Test to error.

You can now configure Auto-Login from the Edit Environments modal! Simply record your log-in flow (making sure to follow the instructions and understand the limitations).

The next time your Test is blocked by the chatbot login page, bottest.ai will automatically replay your login recording and then restart running all of your Tests!

In the future, look out for the ability to run Tests in the cloud, which will require Auto-Login to be configured for any Environment you want to run there.


Multi-Language Support & Improvements to Variants

cover image

We've updated the Variants view to give you more control and customization, as well as the ability to easily test your chatbot across any number of languages!

When editing a Test, you will now see a settings icon near where you modify the Variant count. Clicking the icon opens a new view where you can:

  • View existing Variants

  • Modify existing Variants

  • Delete existing Variants

  • Create new Variants

When creating new Variants, you can specify how many Variants you want to create, the language they should be generated in, and any additional custom criteria for the generation process.

Now, when you execute the Test, all Variants for that Test will be run, and the results can be viewed in the Test Results modal just as always.


Revamped Suite Run View

cover image

Since the initial beta release of our product, we've received some amazing feedback on bottest.ai about what's working and what's needed to make the product even better.

We're releasing a new dashboard specifically for Suite Runs, which can be accessed from the side panel (it's right below Dashboard). From this new view, you can look at your previous Suite Runs, their corresponding Test Run statuses, and easily navigate to the generated Suite Run report.

We're working hard on a couple of additional key features that will be coming soon, including:

  1. Auto-login

  2. Running Tests from the cloud

  3. Pre-packaged Security Tests and Quality Tests

  4. Webhook and CI/CD Integration

  5. Multi-Language Testing Support

  6. Scheduling of Suite Runs

As always, we really appreciate your feedback on what would make the product better. You can add features to our roadmap and help us prioritize the ones you want most in our feedback portal.


Initial release

cover image

The initial release of the beta version of bottest.ai!

Record and replay Tests from the browser using the Chrome Extension. Gather powerful analytics in the Reports and Analytics Dashboards.

You can read more about our motivation for chatbot testing here.