In a previous article, we listed five requirements for efficient black-box test automation.
They were as follows:

  • describe the behavior of your tests
  • store your test data (input, expected output, and test execution results)
  • execute your tests on specific hardware/software
  • control and monitor the test execution
  • analyze the test results and monitor/report quality trends

In this post, we would like to share some thoughts on the first item: how to describe the behavior of your tests.


Black-box software testing can be pictured this way: the software under test (SUT) is an opaque box with a set of entry points (a graphical user interface, a command line interface, an API, etc.) that the tester uses to produce outputs that let them check the behavior of the software. Automating those tests means writing additional software that uses those entry points to check those behaviors automatically. So, as with any other software, there is a whole spectrum of possibilities for writing it, involving different levels of programming skill. Those possibilities can be grouped into three main types: a full-fledged language, a domain-specific language, and a proprietary (packaged) testing solution.


Full-Fledged Language

The full language option means that you use a generic programming language to write your tests. Contrary to unit tests, which must be written in the same language as the SUT, black-box tests can be written in any language, since the coupling between the SUT and the tests is low (you won’t call actual functions/methods of the SUT but rather its access/entry points). Nevertheless, many people argue that a single language should be used. This can work in some organizations, but it won’t be relevant or possible in many contexts. For example, if the SUT itself is written in several languages (e.g. a Java backend and a JavaScript front-end), which one should be chosen to write the black-box tests? Another example is when the product is written in a low-level language, like C, or a functional one, like OCaml: would it really make sense to write black-box tests in OCaml? Finally, if there is a separate automation team, it might be assigned automation on several products written in different languages. In that case, choosing a single language (potentially different from all the product ones) could be wise: testing libraries could be shared more widely, and testers could easily switch from one project to another.

Whatever language is chosen, this “full language” option will require test engineers with programming skills. This can be challenging in organizations where testers are more business experts (finance, health care, communication, etc.) than programming wizards. The upside of this choice is that your team benefits from all the advantages of a full-fledged language (great IDEs, static code analysis, and StackOverflow answers!). Another benefit is that software engineers might be more willing to engage in writing automated tests if it is done in a way similar to how the SUT itself is written.

Note that today, Python has become one of the most popular programming languages, especially among testers. Its initial simplicity makes it possible for business people to dip their toes into programming, but even a simple test can still look like “real” programming, as you can see in this example:

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys

# Open Firefox and load the Python home page
driver = webdriver.Firefox()
driver.get("http://www.python.org")
assert "Python" in driver.title

# Type a query into the search box and submit it
elem = driver.find_element(By.NAME, "q")
elem.clear()
elem.send_keys("pycon")
elem.send_keys(Keys.RETURN)

# Check that the search returned at least one result
assert "No results found." not in driver.page_source
driver.close()

So for those not at ease with that kind of code, a Domain Specific Language might be the right option, as we will explain next.


Using a Domain Specific Language

The second possibility for automating your tests is to use a Domain Specific Language (DSL). Here the domain is “testing.” The idea is that you don’t need a full language to write your tests, but only a subset focused on what typical tests need. The most famous example of such a DSL is Gherkin, used by Cucumber. The language is built around a handful of keywords, the core ones being Scenario, Given, When, Then, and And. Those statements allow for creating BDD-style tests that look like this:

Scenario: User clicks the About link
  Given I am on the homepage
  When I click the link to About page
  Then I should see the About page

Another widespread DSL used for testing is the language used by Robot Framework. This framework has a larger set of keywords and comes with a set of libraries for all sorts of tasks (file operations, date/time manipulation, etc.). However, it also allows writing simple BDD-style tests à la Cucumber:

*** Test Cases ***
Addition
    Given calculator has been cleared
    When user types "1 + 1"
    And user pushes equals
    Then result is "2"

So Cucumber and Robot Framework code looks much simpler, right? Of course, there is no magic. At some point, someone will need to program keywords like “I should see the About page” or “user pushes equals”. This task will be performed in Python, Java, Ruby, etc., which leads us back to square one and the need for a full language. The selling point of such tools is that their DSL offers a layer that can be used by non-technical people (e.g. business QA, product managers). This is a key success factor for many organizations where some team members are in charge of the automation libraries (actually coding, in Python for example) and other team members use only the simpler DSL so they don’t have to learn a real programming language. Those frameworks can therefore act as automation enablers and are great to kick off automation. After a while, as testers feel the need to write more complex use cases and gain confidence in their programming skills, they might hit the limitations of those DSLs. In such a case, it is not uncommon to switch back to a full language and drop those frameworks.
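
To make this concrete, here is a minimal sketch of what that glue code could look like for the Gherkin scenario above, using behave and Selenium in Python. The BASE_URL value and the way the browser is attached to the context (normally done in behave’s environment.py) are assumptions for illustration only:

from behave import given, when, then
from selenium.webdriver.common.by import By

# Hypothetical base URL of the application under test
BASE_URL = "https://example.com"

@given("I am on the homepage")
def step_open_homepage(context):
    # context.browser is assumed to be created in environment.py (e.g. webdriver.Firefox())
    context.browser.get(BASE_URL)

@when("I click the link to About page")
def step_click_about_link(context):
    context.browser.find_element(By.LINK_TEXT, "About").click()

@then("I should see the About page")
def step_check_about_page(context):
    assert "About" in context.browser.title

Someone still has to write and maintain this Python layer; the DSL only hides it from the people writing the scenarios.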


Proprietary Solution

The third possibility is to use a solution provided by a testing software vendor. The most famous type is “record and playback” for GUI-driven testing. With such a solution, the tester hits the “record” button, browses the software under test via the GUI following the use case, and the scenario is recorded for later replay. This is as brittle as it sounds and has been criticized for a long time. The only hope of making such a test robust is to tweak the code that is generated behind the scenes and make it less sensitive to changes in the interface. In any case, as we discussed in a previous article, the GUI is generally not the right entry point to automate black-box tests.
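
To illustrate the kind of tweak involved, here is a sketch assuming a Selenium-based recorder and a hypothetical “submit-search” element id. The recorded absolute XPath breaks as soon as the page layout changes, while targeting a stable attribute survives most cosmetic changes:

from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Firefox()
driver.get("https://example.com")  # hypothetical application under test

# What a recorder typically generates: an absolute XPath tied to the current page layout
# driver.find_element(By.XPATH, "/html/body/div[2]/div/form/input[3]").click()

# Hand-tweaked version: target a stable attribute instead (the id is assumed to exist)
driver.find_element(By.ID, "submit-search").click()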

There are other testing solutions that do not target the GUI layer. For example, the solution could drive the command line interface or the REST API of your SUT. Such a solution can be a desktop product or a SaaS service, and it can allow you to build robust automated tests without having to use a full programming language. You will need to find a solution that can interact with the interfaces offered by your SUT.
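
To give an idea of what such an API-level check covers, here is a minimal hand-written equivalent in Python using the requests library; the endpoint and the expected JSON payload are purely hypothetical, and a packaged solution would typically let you describe this kind of check without writing any code:

import requests

# Hypothetical status endpoint of the SUT's REST API
response = requests.get("https://sut.example.com/api/v1/status", timeout=10)

# Check the HTTP status code and a field of the JSON payload
assert response.status_code == 200
assert response.json().get("status") == "ok"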


Which Solution to Choose?

To decide which solution to use to describe your tests, the main factor to consider might be your current team profiles and skills. If there are no Test Engineers on your team and software engineers are going to be the ones in charge of automating black-box tests, then you probably want to go with the full-language solution. This way, developers can manage the whole “testing stack” (from unit tests to end-to-end tests) from their IDE and use the programming language of their choice. If your team has Test Engineers with strong business knowledge but low technical skills, you might go with a packaged solution so that testers are able to write automated tests with as little programming as possible. The final case is larger organizations where there are both Test Engineers (business inclined) and Software Engineers in Test (SETs) with more technical knowledge. In such contexts, the SETs are in charge of building the automation infrastructure, and a DSL approach could make sense: the SETs write the code behind the keywords of the DSL, and the Test Engineers use those keywords in a simple-to-use DSL to write business-oriented tests.

Team profiles and skills | Solution to describe your black-box tests
Software developers in charge of automation | Full-Fledged Language
Test engineers with few technical skills in charge of automation | Proprietary Solution
Test engineers in charge of automation, backed by software engineers building testing libraries | Domain Specific Language

Keep in mind that describing the tests is only one part of the solution; you will also need to fulfill the other four requirements (storing, executing, monitoring, analyzing). Those other requirements may have an impact on the language/tool you choose. For example, if you need to use an API to connect to the service where your test data (input and reference data) is stored, you might choose a language for which that API is available, or a tool that is able to integrate with that service. We will discuss those other requirements in greater detail in a future article.

