When a set of tests becomes large and needs to be launched more regularly, software engineering teams feel the need to automate those tests. White-box tests, like unit tests, are written in a programming language within a testing framework, which makes them easy to automate: launching them is just a matter of executing a command in an IDE or a shell. Functional tests are another beast. They depend a lot on the type of Software Under Test (SUT). An image analysis software and a social network mobile app won’t be functionally tested the same way, hence the automation tool requirements will differ.
In this article, we will discuss how you can automate functional tests for the engine of software, similar to the image analysis product we used as a reference in our previous post. That type of software can be tested with black-box tests that access the SUT below the UI (aka subcutaneous tests). Those test cases are mostly data driven: input and expected output. For a more detailed description of that kind of functional testing, see our article How to Approach Functional Testing.
To switch from manual tests to automated ones, you will need to:
- describe the behavior of your tests.
- store your test data (input, expected output, and test execution results).
- execute your tests on specific hardware/software.
- control and monitor test execution.
- analyze test results and monitor/report quality trends.
Each of those elements raises some challenges. Let’s review them one by one.
Describing the Behavior of Your Tests
You will need to describe the steps of your functional tests. For a data-driven SUT, the scenario should follow this pattern:
- Load some data that will be used as input for the SUT.
- Launch the SUT with some arguments (using input data) and save the output data.
- Load some data that will be used as expected output of the SUT.
- Compare the expected data with the actual output data from the SUT.
- Return a status depending on the previous comparison.
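To make this concrete, here is a minimal Python sketch of the five steps above. It assumes a hypothetical layout where each test case is a directory holding an `input.json` and an `expected.json`, and where the SUT is invoked through a caller-supplied function:

```python
import json
from pathlib import Path
from typing import Callable

def run_functional_test(case_dir: Path, run_sut: Callable[[Path], str]) -> bool:
    """Execute one data-driven test case.

    `case_dir` is assumed to contain input.json and expected.json
    (a hypothetical layout); `run_sut` launches the SUT on the input
    file and returns its raw output.
    """
    # Step 1: load the input data that will be fed to the SUT.
    input_file = case_dir / "input.json"
    # Step 2: launch the SUT on that input and capture the actual output.
    actual = json.loads(run_sut(input_file))
    # Step 3: load the reference (expected) output.
    expected = json.loads((case_dir / "expected.json").read_text())
    # Steps 4-5: compare and return the verdict.
    return actual == expected
```

Passing the SUT launcher in as a function keeps the scenario logic independent of how the engine is actually invoked (library call, command line, etc.).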
To express those steps, you could use a generic language like Java, C#, or Python, along with an associated testing framework. Another option is to use a Domain Specific Language (DSL) like the ones offered by Cucumber or Robot Framework. Both solutions offer near-infinite possibilities but require specific programming skills. The risk here is to embrace those “infinite” possibilities and build up a custom, in-house framework on top of the one provided by those languages.
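As an illustration of the “generic language plus testing framework” route, here is a sketch using Python’s built-in unittest framework; the `analyze` function and the case catalogue are stand-ins for the real engine and its data:

```python
import unittest

# Hypothetical catalogue of data-driven cases: (input, expected output).
CASES = [
    ({"pixels": [0, 255]}, {"count": 2}),
    ({"pixels": []}, {"count": 0}),
]

def analyze(data):
    """Stand-in for the real engine call, reached below the GUI
    (in practice this would be a library call or a CLI invocation)."""
    return {"count": len(data["pixels"])}

class EngineFunctionalTests(unittest.TestCase):
    def test_data_driven_cases(self):
        # One logical test fans out over the whole case catalogue.
        for inp, expected in CASES:
            with self.subTest(input=inp):
                self.assertEqual(analyze(inp), expected)
```

The framework then provides launching, reporting, and failure isolation for free, which is exactly the part teams tend to rebuild when they go fully custom.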
Another option is to use commercial testing solutions that offer codeless/scriptless functional automation. However, most of those solutions automate via the GUI, which is not what you are looking for: as mentioned before, you want to build tests below the GUI, and few vendors offer such features.
Storing All Your Test-related Data
Your functional tests need some input and expected output data. This data may be heavy (image, sounds, video, etc.) and numerous. In addition, this data will evolve along with the SUT in several ways:
- The format of the data (input or output) for the V2 of your SUT might differ from the one used in V1.
- The content of the data will also change (reference data used as expected output in V1 could be different in V2).
This means that you will need to keep the different versions of those data. Hence, the total amount of data you will need to store may end up being substantial.
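One common way to let V1 and V2 data coexist is a content-addressed, version-keyed layout. The sketch below is a hypothetical layout, with the local filesystem standing in for whatever storage backend you choose:

```python
import hashlib
import shutil
from pathlib import Path

def store_versioned(store_root: Path, sut_version: str,
                    name: str, src: Path) -> Path:
    """Store one test-data file keyed by SUT version and content hash,
    so that V1 and V2 reference data can live side by side.
    (Hypothetical layout; a real system would also track metadata
    and retention policies.)"""
    digest = hashlib.sha256(src.read_bytes()).hexdigest()[:12]
    dest_dir = store_root / sut_version / name / digest
    dest_dir.mkdir(parents=True, exist_ok=True)
    target = dest_dir / src.name
    if not target.exists():  # content-addressed: identical data stored once
        shutil.copy2(src, target)
    return target
```

Content addressing also deduplicates data that did not change between versions, which helps keep the total storage volume in check.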
To store this data, there are essentially three possibilities:
- Add your data to your SCM (e.g. git). This works fine when using text data of reasonable size, but when using very big files or binary ones, then the SCM is not well-suited.
- Invest in private infrastructure, and add more storage capacity as needed. This means RAID drives, a high-throughput LAN, a UPS, regular backups, etc. However, this hardware requirement is only the tip of the iceberg. The biggest effort will be to customize or create a software solution that fits your organizational and versioning needs. Building your own storage facility requires a lot of software and hardware maintenance, which could become a distraction from your automation project.
- Use a cloud-based infrastructure. There are some very good services available today. However, those services are mostly technical bricks that need to be connected to the rest of your automation solution, which requires specific expertise that you may be reluctant to invest in.
Moreover, each test execution will generate data that you might want to keep for a short time (e.g. for debugging) or forever (e.g. for a public release). You’ll have to implement a complex solution to manage this data automatically in order to optimize your disk space usage.
Executing Your Tests
Having your tests automated means that you will need at least one computer that you can use to execute your tests. In fact, you will probably need several of them! As mentioned before, there are many tests, and each test can have a long duration. The total time of execution for all of the tests might just be too long to be tolerable. Since the tests are independent, launching them in parallel is the obvious choice, but this means that you will need a cluster of machines.
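Since the tests are independent, fanning them out is straightforward; here is a minimal Python sketch using a worker pool on one machine (a remote executor would apply the same idea across a cluster; the commands are placeholders for real test launches):

```python
import subprocess
from concurrent.futures import ThreadPoolExecutor

def run_case(cmd):
    """Run one independent test case as a subprocess and report pass/fail."""
    result = subprocess.run(cmd, capture_output=True)
    return result.returncode == 0

def run_campaign(cases, workers=4):
    # Independent tests can simply be fanned out across a pool of
    # workers; results come back in the order the cases were given.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(run_case, cases))
```

With long-running, CPU-bound tests, the bottleneck is the machines themselves, which is exactly why the cluster question below arises.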
Even if parallelism is not required, you may have to test the SUT on several platforms:
- Different hardware (cores, processor, memory etc.)
- Different OS (Windows, Linux, MacOS etc. and potentially each of them in several versions)
Whatever the situation, you may end up needing a small server farm. As with the data storage facility, you face a choice between self-hosted servers and online services. Since the advent of services providing virtual machines online (AWS paving the way for many others), renting has become a more obvious choice than managing in-house servers, which can be a pain that is not worth the price for most organizations. In the case of functional tests, you might need many machines, and their use could be intermittent, with peaks during phases where tests need to be launched more often; choosing an online solution is almost a no-brainer. However, if, after your first successful experiments, you decide to invest in such a solution, you will need to hire or train specialists for administration and cost optimization. And as we mentioned for online storage services, connecting this infrastructure to your automation solution might not be as easy as hoped.
Controlling and Monitoring Test Execution
The next requirement for your automation solution is a tool from which you can launch and follow the execution of the tests. In software engineering, such launch tasks are commonly assigned to a Continuous Integration (CI) server. If you already use a CI server (e.g. to build your product), then you can choose to use it to manage the execution of your functional tests. However, the capabilities of CI servers in terms of functional tests might be limited. Typically, the main features are: launching a job, following the standard output during the execution, and showing a simple graph of the results.
If you do not currently use a CI server (and don’t need to), you might end up writing your own dashboard with your own set of scripts and schedules. As is normal with custom tools, it will start simple and easy, but as the requirements of the team using the tool mature, it will become more complicated to maintain.
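Such a home-grown tool often starts as little more than a script appending results to a log and printing a summary, as in this sketch (the JSON-lines format and field names are assumptions):

```python
import json
import time
from pathlib import Path

def record_campaign(log: Path, campaign: str, passed: int, failed: int) -> None:
    """Append one campaign result to a JSON-lines log, the kind of
    ad-hoc persistence a home-grown dashboard usually starts with."""
    entry = {"campaign": campaign, "timestamp": time.time(),
             "passed": passed, "failed": failed}
    with log.open("a") as fh:
        fh.write(json.dumps(entry) + "\n")

def summarize(log: Path) -> str:
    """Render the latest campaign status, crude dashboard style."""
    entries = [json.loads(line) for line in log.read_text().splitlines()]
    last = entries[-1]
    return f"{last['campaign']}: {last['passed']} passed, {last['failed']} failed"
```

It works on day one; the maintenance burden appears later, when the team asks for history, filtering, notifications, and access control.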
Analyzing Test Results and Monitoring Quality Trends
The final piece of our automation solution is a dashboard where results and trends can be analyzed. The results of a campaign should be displayed in ways that allow you to drill down from the global status of a campaign to the individual tests that failed. For each test, you should also have access to the execution history so that you can easily identify when it started to fail. In case of such a failure, you should have access to the logs of the tests to help understand the root cause. When the failure is traced back to a regression in the SUT, the dashboard should allow you to tag the test as a “known failure,” and it should be connected to your issue-tracking software (e.g. Jira) so that you can automatically open a bug.
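Pinpointing when a test started to fail amounts to finding the start of its current failure streak in the campaign history; a sketch, assuming a simple (campaign, results) record shape:

```python
def first_failing_campaign(history, test_name):
    """Walk a test's execution history (campaigns ordered oldest to
    newest, each mapping test names to pass/fail) and return the
    campaign where the current failure streak started, or None.
    The record shape here is a hypothetical one."""
    started = None
    for campaign, results in history:
        if results.get(test_name) is False:
            if started is None:
                started = campaign  # streak begins here
        else:
            started = None  # a pass (or an absent result) resets the streak
    return started
```

This is the kind of query a dashboard answers in one click, but that takes real bookkeeping to support at scale.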
You would also expect to have access to general quality trends (percentage of success, test durations, etc.). Ideally, you would aggregate those functional test results with data coming from other testing activities (like unit tests, performance tests, etc.) to turn your current dashboard into a complete quality dashboard. If you are using a CI server, you will find some elements of reporting (often via plugins), but most often they are not in a central place in those solutions. Getting a global quality view is difficult without custom development.
There are many reasons you may produce test results or quality reports. For example, you may have to provide the quality status of a specific release to a customer, test results to external auditors in case of certified software, inform management of quality status, etc. Being able to customize your report, both in content and format, will soon become a must-have.
Challenges to Build a Functional Test Automation Solution
We have now seen the different requirements of an automation solution, which we can illustrate with our image analysis example:
- A language to express the test steps (load images onto the machine executing the SUT, run the program, check the output): generic language, DSL, or testing software
- An online storage facility for the management of all of the input, expected output, and test-generated data
- A service of virtual machines with enough computing power and RAM to execute the analysis (which could be resource-intensive): self-hosted or online
- A tool to control and monitor executions: CI server or custom development
- A dashboard to analyze test results and monitor quality trends: access to the actual output when it did not match the expected one, log of the execution, etc.
However, those different items will need to be connected together. The “tool” mentioned in the previous list should communicate with every other element of the system, and this is far from the simplest piece of work! For example, moving data from your storage facility to your execution infrastructure will need scripting. Similarly, saving the output data of a failed test from the execution machine back to the storage facility won’t be straightforward. In fact, all the challenges we listed are encompassed in an even bigger one: how much coding and scripting is needed to build and maintain a coherent test automation framework. In our next article, we will discuss actual solutions.
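For instance, shipping the actual output of a failed test back to storage could start as a sketch like this one, where a local copy stands in for the real transfer (e.g. to a bucket) and the `failures/` layout is an assumed convention:

```python
import shutil
from pathlib import Path

def archive_failed_output(test_name: str, actual_output: Path,
                          storage: Path) -> Path:
    """Copy the actual output of a failed test from the execution
    machine back to the storage facility for later inspection.
    (Local copy as a stand-in for the real transfer; the layout
    under failures/ is a hypothetical convention.)"""
    dest_dir = storage / "failures" / test_name
    dest_dir.mkdir(parents=True, exist_ok=True)
    return Path(shutil.copy2(actual_output, dest_dir))
```

Multiply this kind of glue by every pair of components in the list above, and the true size of the integration effort becomes apparent.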
If you are already looking for a functional test automation solution, check out what we propose and contact us to explore how it could fit your needs. And to stay updated with our product, follow us on Twitter and LinkedIn.