
Dynamically create test cases with Robot Framework

In Robot Framework, there isn’t an obvious built-in way to create a list of tests to execute dynamically. I recently faced a case where I wanted to do this, and happily Bryan Oakley (blog, twitter, github) was able to help me through the problem. I’ve seen a few people with similar problems so thought it would be useful to document the solution.

Use the subheadings to skip down to the solution if you don’t want the backstory.

Why would I want to do this?

Normally I’m against too much “magic” in test automation. I don’t like to see expected values calculated or constructed with a function that’s just as likely to have bugs as the app being tested, for example. I’ve seen tests with assertions wrapped in for loops that never check whether we actually made more than zero assertions. I’ve seen helper functions with an if/else that accepts two variations of similar behaviour, so the test passes but I can’t tell which of the two cases it thinks it found, or whether that was the intended one. When you write a test case you should know what you’re expecting, so expect it. Magic should not be trusted.

But sometimes I need a little magic.

The problem I had was that I wanted to check that some background code was executing properly every time the user selected an option from a list, but the items in that list could be changed by another team at any time. It wasn’t sufficient to check that one of the items worked, or to check a series of fake items, because I wanted to know that the actual configuration of each item in the real list was consistent with what our code expected. I’m basically testing the integration, but I would summarize it like this: “I want to test that our code properly handles every production use case.”

Importantly, though, I don’t just care that at least one item failed; I care how many items failed and which ones. That’s the difference between looping over every item within a single test case and executing a new test case for each one. Arguably this is just a reporting problem, and certainly I could drill down into the reports if I did this all with a loop in one test case, but I would rather have the most relevant info front and center.

The standard (unmaintainable) solution

Robot Framework does provide a way of using Test Templates and for-loops to accomplish something like this: given a list, it can run the same test on each item in the list. For 10 items, the report will tell you 10 passed, 10 failed, or somewhere in between. This works well if you know in advance which items you need to test:

*** Settings ***
Test Template    Some test keyword

*** Test Cases ***
Test every item
    :FOR    ${i}    IN RANGE    10
    \    ${i}

This runs Some test keyword ten times, using the numbers 0 to 9 as arguments; you’d define that keyword to click on the item at the given index and make whatever assertions you need to make. Of course, as soon as the list changes to 9 or 11 items, this will either fail or silently skip items. To get around this, I added a teardown step to count the number of items in the list and issue a failure if it didn’t match the expected count. Still not great.

The reporting still leaves a bit to be desired, as well. It’s nicer to list out each case with a descriptor, like so:

*** Test Cases ***
Apples     0
Oranges    1
Bananas    2

We get a nice report that tells us that Apples passed but Oranges and Bananas failed. Now I can easily find which thing failed without counting items down the list, but you can see that this is even more of a maintenance nightmare. As soon as the order changes, my report is lying to me.

A failed intermediate option

When I brought this question up in the Robot Framework Slack user group, Bryan suggested I look into using Robot’s visitor model and pre-run modifiers. Immediately this was over my head. I’m not a comp-sci person, so this was the first I had heard of the visitor pattern, but being someone who always wants to learn, I promptly went down a Wikipedia rabbit hole of new terminology. The basic idea here, as I understand it, is to write a modifier that changes a test suite when it starts. Bryan provided this example:

from robot.api import SuiteVisitor

class MyVisitor(SuiteVisitor):

    def __init__(self):
        pass
    
    def start_suite(self, suite):
        for i in range(3):
            tc = suite.tests.create(name='Dynamic Test #%s' % i)
            tc.keywords.create(name='Log', args=['Hello from test case #%s' % i])


# to satisfy robot requirement that the class and filename
# are identical
visitor = MyVisitor

This would be saved in a file called “visitor.py”, and then used when executing the suite:

robot --prerunmodifier visitor.py existing_suite.robot

I ran into problems getting this working, and I didn’t like that the pre-run modifier would apply to every suite I was running. This was just one thing I wanted to do among many other tests. I didn’t want to have to isolate this from everything else to be executed in its own job.
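
In principle, the modifier could limit its own scope by checking which suite it is visiting. A minimal sketch, where the suite name ‘Existing Suite’ is purely an assumed placeholder:

from robot.api import SuiteVisitor

class MyVisitor(SuiteVisitor):

    def start_suite(self, suite):
        # Only modify the one suite we care about; any other suite
        # passes through untouched.
        if suite.name != 'Existing Suite':
            return
        for i in range(3):
            tc = suite.tests.create(name='Dynamic Test #%s' % i)
            tc.keywords.create(name='Log', args=['Hello from test case #%s' % i])

visitor = MyVisitor

But that still keeps the logic tied to the command line rather than to the test data itself.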

My next step to make this more flexible was to adapt this code into a custom python keyword. That way, I could call it from a specific suite setup instead of every suite setup. The basic idea looked like this:

tc = BuiltIn()._context.suite.tests.create(name="new test")
tc.keywords.create(...)

but I couldn’t get past a TypeError being thrown from the first line, even if I was willing to accept the unsupported use of _context. While I was trying to debug that, Bryan suggested a better way.

Solution: Adding test cases with a listener

For this, we’re still going to write a keyword that uses suite.tests.create() to add test cases, but make use of Robot’s listener interface to plug into the suite setup (and avoid _context). Again, this code comes courtesy of Bryan Oakley, though I’ve changed the name of the class:

from __future__ import print_function
from robot.running.model import TestSuite


class DynamicTestCases(object):
    ROBOT_LISTENER_API_VERSION = 3
    ROBOT_LIBRARY_SCOPE = 'TEST SUITE'

    def __init__(self):
        self.ROBOT_LIBRARY_LISTENER = self
        self.current_suite = None

    def _start_suite(self, suite, result):
        # save current suite so that we can modify it later
        self.current_suite = suite

    def add_test_case(self, name, kwname, *args):
        """Adds a test case to the current suite

        'name' is the test case name
        'kwname' is the keyword to call
        '*args' are the arguments to pass to the keyword

        Example:
            add_test_case  Example Test Case  
            ...  log  hello, world  WARN
        """
        tc = self.current_suite.tests.create(name=name)
        tc.keywords.create(name=kwname, args=args)

# To get our class to load, the module needs to have a class
# with the same name as the module. This makes that happen:
globals()[__name__] = DynamicTestCases

This is how Bryan explained it:

It uses a couple of rarely used robot features. One, it uses listener interface #3, which passes actual objects to the listener methods. Second, it uses this listener as a library, which lets you mix both a listener and keywords in the same file. Listener methods begin with an underscore (eg: `_start_suite`), keywords are normal methods (eg: `add_test_case`). The key is for `start_suite` to save a reference to the current suite. Then, `add_test_case` can use that reference to add tests to the current suite.

Once this was imported into my test suite as a library, I was able to write a keyword that would define the test cases I needed on suite setup:

Setup one test for each item
    ${numItems}=    Get number of items listed
    :FOR    ${i}    IN RANGE    ${numItems}
    \     Add test case    Item ${i}
    \     ...              Some test keyword    ${i}

The first line of the keyword gets the number of items available (using a custom keyword for brevity), saving us the worry of what happens when the list grows or shrinks; we always test exactly what is listed. The FOR loop then adds one test case to the suite for each item. In the reports, we’ll see the tests listed as “Item 0”, “Item 1”, etc, and each one will execute the keyword Some test keyword with each integer as an argument.

I jazzed this up a bit further:

Setup one test for each item
    ${numItems}=    Get number of items listed
    ${items}=       Get webelements    ${itemXpath}
    :FOR    ${i}    IN RANGE    ${numItems}
    \   ${itemText}=    Set variable
    \   ...             ${items[${i}].get_attribute("text")}
    \   Add test case   Item ${i}: ${itemText}
    \   ...             Some test keyword    ${i}

By getting the text of the WebElement for each item, I can set a more descriptive name. With this, my report will have test cases named “Item 0: Apple”, “Item 1: Orange”, etc. Now the execution report will tell me at a glance how many items failed the test, and which ones, without having to count indices or drill down further to identify the failing item.

The one caveat to this is that Robot will complain if you have a test suite with zero test cases in it, so you still need to define one test case even if it does nothing.

*** Settings ***
Library        DynamicTestCases
Suite setup    Setup one test for each item

*** Test cases ***
Placeholder test
    Log    Placeholder test required by Robot Framework

*** Keywords ***
Setup one test for each item
    ...

You cannot, unfortunately, use that dummy test to run the keyword that adds the other test cases. By the time we start executing tests, it’s too late to add more to the suite.

Since implementing the DynamicTestCases library, my suite has no longer been plagued with failures caused only by another team doing their job. I’m now testing exactly what is listed at any given moment, no more and no less. My reports actually give me useful numbers on what is happening, and they identify specifically where problems were arising. I still have some safety checks in place on teardown to ensure that I don’t fail to test anything at all, but these have not flagged a problem in weeks.

As long as there’s a good use case for this kind of magic, I hope it is useful to others as well.

20 Comments

  1. Pekka Klärck

    @ September 24, 2018, 05:27

    Thanks for the blog post! Nice to see some of the “magical” features of Robot Framework being used.

    It’s a bit unfortunate that you need to have the dummy test there and it cannot be used for adding more tests. To be more precise, it ought to be possible to add new tests by using test.parent.create(), but those tests won’t be executed due to how executed tests are iterated. That could be changed, but I’m afraid the change isn’t exactly trivial.

    Unless you need to create different tests based on information available only during execution, I’d probably still try using pre-run modifiers. With them you could simply have dummy tests that act as markers/templates indicating which suites should be processed. The modifier could then read information from the dummy test, create new tests based on that, and also remove the dummy test altogether.
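
    A minimal sketch of that marker idea (the tag name and generated data are placeholders, and this assumes suite.tests accepts plain list assignment):

    from robot.api import SuiteVisitor

    class ExpandMarkers(SuiteVisitor):

        def start_suite(self, suite):
            # Hypothetical convention: tests tagged 'marker' act as templates.
            markers = [t for t in suite.tests if 'marker' in t.tags]
            for marker in markers:
                for i in range(3):  # real data would come from the marker test
                    tc = suite.tests.create(name='%s #%d' % (marker.name, i))
                    tc.keywords.create(name='Log', args=['Generated from marker'])
            # Remove the marker tests so only the generated ones run.
            suite.tests = [t for t in suite.tests if 'marker' not in t.tags]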

    Finally, if you haven’t already, it would be great if you could share this interesting blog post on Slack and possibly also on robotframework-users mailing list and elsewhere.

  2. chaman bharti

    @ February 5, 2019, 12:09

    Hi Gregory,

    I had landed in the same situation as described by you in this post: I wanted to create the test cases dynamically, based on the available test cases from an xlsx file. Thanks a lot, I could use your blog and was able to do it very easily. Keep posting. Cheers.

    Regards
    Chaman Bharti

  3. binelbinel

    @ June 7, 2019, 10:54

    Hi Gregory Paciga,

    I’ve come across a similar situation in my company where I want to dynamically create many tests based on Data on an Excel or CSV file.

    I think your technique will definitely help me achieve this.

    Keep up the good work.

    Regards

  4. A Robot User

    @ January 20, 2020, 07:45

    I was able to use test templates and for-loops with dynamic ranges. All I had to do was pre-fetch and store all available options in a list in the suite setup, then loop over this list with the template. It will still show up as a single test case in the report, but the individual iterations are separated and you can check which failed and which did not. So it is still not a solution if you want totally new test cases for each option. Anyway, here is the example:

    *** Settings ***
    Suite Setup    Get All Available Options
    Library        SeleniumLibrary

    *** Test Cases ***
    Check All Available Option
        [Template]    Check Option
        FOR    ${option}    IN    @{ALL OPTIONS}
            ${option}
        END

    *** Keywords ***
    Get All Available Options
        ${ALL OPTIONS}=    Get webelements    ${itemXpath}
        Set Suite Variable    ${ALL OPTIONS}

    Check Option
        [Arguments]    ${item}
        Log    ${item}

    • robot-is-ok-i-guess

      @ July 8, 2020, 03:52

      That was my first approach as well. The notable difference is that robot lumps all tests into one test case, and the result says “1 passed / 1 failed”. It’s a subtle difference, but optics do matter sometimes.

  5. No one

    @ April 1, 2020, 09:14

    That is exactly what I was looking for! It is a pity that Robot Framework doesn’t support this as a built-in feature. I am going to use this right away. Thank you!

  6. aistikas

    @ May 7, 2020, 09:40

    Thanks a lot for this helpful post! Btw, I noticed that you can filter out the dummy test case in start_suite() by using a tag:


    Placeholder test case
        [Tags]    placeholder
        No Operation

    def _start_suite(self, suite, result):
        suite.filter(excluded_tags="placeholder")
        self.current_suite = suite

    Before this, I tried to exclude the test case by using “-e placeholder” in the command line which didn’t work, but the above solution works ok for some reason.

    • Gregory Paciga

      @ May 10, 2020, 16:11

      This looks like a nice trick, I like that you can set it up to have the library automatically ignore these. The only thing I would worry about if this method were packaged in a library is conflicting with other tag conventions that might be used in the suite. Still, I might try this myself!

  7. Diego Curtino

    @ June 5, 2020, 05:07

    This is really useful. I was wondering if it’s possible to tag the generated test cases. It’s something that I really need.

    • Gregory Paciga

      @ June 5, 2020, 11:40

      Yes, basically it’s just tc.tags.add(). The way I do it is by adding return tc to the add_test_case() method and adding a new method to the library:

      
      def add_tags(self, test, *tags):
          test.tags.add(tags)
      

      and then in the .robot files:

      
      ${tc}=  Add test case  Example test  Log  Hello!
      Add tags  ${tc}  First Tag  Second Tag
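
      For completeness, the modified add_test_case would look something like this (a sketch against the pre-4.0 API used in the post):

      def add_test_case(self, name, kwname, *args):
          tc = self.current_suite.tests.create(name=name)
          tc.keywords.create(name=kwname, args=args)
          # Returning the test lets other keywords modify it afterwards.
          return tc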
      
      • Anton

        @ August 18, 2022, 12:22

        Awesome! tc.tags.add() is exactly what I was looking for.
        I just wanted you to know that your answer is still helping people.

        Thanks!

  8. sdub0800

    @ August 18, 2020, 13:58

    I’m using the code you have provided to dynamically generate test cases. Any thoughts on how you would make these dynamically generated tests run a certain test setup? (In the method provided in the post, these tests won’t follow a specific Test Setup or Test Teardown if specified in the robot file.)

    • Gregory Paciga

      @ August 19, 2020, 17:16

      Interesting observation, I guess I’ve never used this with setup/teardown. My guess is that you could attach setup/teardown to the tc object somehow, but I don’t actually know how these are represented in the code.

      The workaround is to have the test itself start by resetting whatever needs to be reset, and then using a suite teardown to do the teardown from the last test. Not elegant, but might work.
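
      In Robot Framework 4.0 and later, the running model does expose setup and teardown on the test object, so something like this might work (an untested sketch; the keyword names are placeholders):

      def add_test_case(self, name, kwname, *args):
          tc = self.current_suite.tests.create(name=name)
          tc.body.create_keyword(name=kwname, args=args)
          # Assumed RF 4.0+ API: configure setup/teardown on the test itself.
          tc.setup.config(name='Some Setup Keyword')
          tc.teardown.config(name='Some Teardown Keyword')
          return tc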

    • Anton

      @ August 18, 2022, 12:26

      Hi,

      I am using a keyword with [Teardown] so it cleans up even if the test fails. Unfortunately I am not aware of any kind of keyword setup.

  9. Matan Bach

    @ September 13, 2020, 05:11

    Hey, that’s exactly what I needed for my work!
    Thanks a lot!!
    Using this library I managed to create a robot test suite that can create test cases infinitely, using a recursive keyword. The stopping condition is a Ctrl+C press.

    For example:

    *** Keywords ***
    Test cases loop
        Add test case    My Test
        ...    Run Keywords
        ...    Some Test Logic
        ...    Test cases loop

    *** Test Cases ***
    Another Test Name
        Test cases loop

    It seems to work pretty well, but still I would like to ask for your opinion about this solution… I’m a bit afraid that something spooky happens behind the scenes.

    And one more question, is there any reason you are importing print_function from future and TestSuite from model?

  10. Claudio

    @ April 19, 2021, 08:13

    In Robot Framework 4.0, keywords.create was deprecated, so you need to change the Python class by just replacing the line below:

    tc.keywords.create(name=kwname, args=args)    # deprecated in 4.0
    tc.body.create_keyword(name=kwname, args=args)
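
    If the same library needs to run on both older and newer Robot Framework versions, a version-agnostic sketch (assuming hasattr is a fair way to detect the newer model) could be:

    def add_test_case(self, name, kwname, *args):
        tc = self.current_suite.tests.create(name=name)
        if hasattr(tc, 'body'):   # Robot Framework 4.0+
            tc.body.create_keyword(name=kwname, args=args)
        else:                     # pre-4.0
            tc.keywords.create(name=kwname, args=args)
        return tc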

  11. antti

    @ January 4, 2024, 11:33

    Man, I just have to comment that this solution is brilliant. I have the exact same issue with the RF template clumping all iterations into one test case, which is really not ideal. Now I can not only have the iterations as individual tests, but I can also adjust my Jenkins pipeline to automatically retry failed cases. Thank you!

    • antti

      @ January 9, 2024, 03:40

      Anyone reading this: one solution to make the --rerunfailed option work is to use the --runemptysuite option along with it. Without the latter the rerun will not work, as your suite does not contain the dynamically created test cases.

      I chose to include a second test case in the s1-level suite called “Rerun failed”, which has its own setup, as you don’t want to rerun the overall suite setup again. I also tagged these two static tests with different tags, so I can run the whole thing in one Jenkins pipeline: first the placeholder test, and then the rerun if there are failed tests.

      *** Settings ***
      Library    ../tools/CaseGeneratorListener.py

      Suite Setup    Run Keywords
      ...    Setup one test for each item
      ...    Common setup
      Suite Teardown    Common teardown

      *** Test Cases ***
      Placeholder test
          [Tags]    placeholder
          ${X}    Get Length    ${LISTA}
          Pass Execution If    ${X} == 0    0 rows to be checked today.

      Rerun failed
          [Setup]    Common setup
          [Tags]    rerun
          Log    message
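
      For reference, the two-pass invocation being described would look something like this (file and directory names assumed):

      robot --output output.xml tests/
      robot --rerunfailed output.xml --runemptysuite tests/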
