I’ve wanted to get some automated performance tests into my CI pipeline for a while. After finally finding time over the quiet holiday period, I wanted to share my experience of how to do it, as I found a lot of the material online was not as clear to follow as I would have liked.
I chose to use Taurus after seeing a presentation on it at Australian Testing Days 2018. It is a wrapper that runs on top of popular performance testing tools such as JMeter and Gatling, and makes it really easy to run a performance test: you simply create a config file and run it. I use TeamCity because that’s what my company uses, but it can easily be swapped out for another CI tool. I also wanted to use Docker, to avoid adding installation steps to all our build agents, and to make the test cleaner and more reliable by running it in a known, reproducible environment. (Docker lets you spin up a machine in a known state, with known programs and apps installed.)
1 – Create a git repo for your Taurus test
Start by setting up a simple git repo to contain your Taurus test file. There are a whole lot of variables you can set in the config file; I chose to keep it fairly basic while starting out. Here is my file (with a few details omitted):
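A minimal Taurus config in this shape looks roughly like the following. The `__UPPER_CASE__` tokens stand for the values TeamCity will fill in later; the exact token names are my own illustration:

```yaml
execution:
  - scenario: getting-started-load-test
    concurrency: __CONCURRENCY__   # e.g. 10 parallel users
    ramp-up: __RAMP_UP__           # e.g. 5s
    hold-for: __HOLD_FOR__         # e.g. 30s

scenarios:
  getting-started-load-test:
    requests:
      - url: __TEST_URL__
        method: GET
        headers:
          Authorization: __AUTH_TOKEN__

reporting:
  - module: passfail
    criteria:
      - fail>0%, stop as failed    # any failed request fails the run
  - module: blazemeter             # free tier keeps a results link for 7 days
```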
Let’s break down each section.
The execution section describes which test scenario I am going to run and the settings of my performance test. In this case, I am running the ‘getting-started-load-test’ scenario, with variables to be filled in by my TeamCity build later on (so they are easy to change, not hard-coded, and for privacy reasons too). Using example numbers, if I had:
- Concurrency: 10
- ramp-up: 5s
- hold-for: 30s
Then my test will start sending requests as defined in the getting-started scenario. Over the first 5 seconds it will ramp up from 1 to 10 parallel users making requests, then hold at 10 users for a further 30 seconds. (Breaking down that timeline: 0–5 seconds builds up from 1 to 10 users, then 5–35 seconds is the full 10 users, since hold-for starts counting once the ramp-up is complete.)
The scenario section lists the URL to request, along with any headers or request body needed. Again, I have used a placeholder for the authorization header.
The reporting section tells the tool what it should consider a failure when analysing the test results. Without this, every single request can fail and the test will still pass. There are many options for failure criteria, including average request time and the 90th percentile of response times, and you can even require that a condition holds for a certain duration, e.g. average response time > 1s for 7 seconds = fail.
In my case, I have gone for the simple option: if any request is not successful, stop running the test and mark it as failed.
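As a sketch, a criteria list covering both the simple rule and a duration-based rule could look like this (syntax per the Taurus passfail module; the exact thresholds are illustrative):

```yaml
reporting:
  - module: passfail
    criteria:
      - fail>0%, stop as failed           # any failed request fails the run
      - avg-rt>1s for 7s, stop as failed  # sustained slow responses fail it too
```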
I’ve also added a BlazeMeter module, which gives me a URL in my test results that I can open to see BlazeMeter graphs and stats analysing my test run in more detail. You can use the free version of BlazeMeter (the tool does this by default), which keeps your test results for 7 days; if you have a paid account you can pass your account token into the script and have your test results saved to your account.
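If you do have a token, hooking BlazeMeter up looks roughly like this (the `__BLAZEMETER_TOKEN__` placeholder is my own naming):

```yaml
modules:
  blazemeter:
    # token is only needed for a paid account; without it you
    # still get an anonymous results link, kept for 7 days
    token: __BLAZEMETER_TOKEN__

reporting:
  - module: blazemeter
```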
It will be handy to confirm that your script works on its own before adding the extra complexities of Docker or a CI build. So replace all your variables with real values. You can use “http://blazedemo.com” as the test url if you like. Then run the test by typing:
in a command line tool, assuming you have first installed Python, ‘pip’ and Taurus on your machine using the instructions here (or just run “pip install bzt”).
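The command itself is just the config file name passed to bzt (assuming the file is called getting-started-test.yml, the name referenced in the TeamCity step later):

```shell
bzt getting-started-test.yml
```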
If the test is successful, you should get output in your terminal similar to this:
2 – Setting up TeamCity to run your Test
Now we have our test running locally, the next challenge is to get it running in TeamCity (or your CI tool of choice), so it can run regularly or as part of your deployment process to help detect problems over time. I will assume basic knowledge of TeamCity for this explanation, which will also help keep this part transferable to other tools.
I’ll run through each of the build configurations:
1. Create a new build in TeamCity to run your tests
2. Attach the VCS root for the git repo you set up containing your test file, so the build has access to the .yml test file and knows when you have committed changes.
3. Add a build step which will run docker, choosing an image with Taurus installed, and pass in your .yml file to be run. (reference: Steps for using the Taurus Docker image on their website)
The build step to run a Docker image is remarkably simple. You want to add a command line build step, with script like this:
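The original snippet is omitted here; a sketch following the Taurus Docker image instructions would be:

```shell
#!/bin/bash
# --rm removes the container when the run finishes.
# -v mounts the build's working directory (our git checkout) into
# /bzt-configs, the folder where the Taurus image expects scripts.
# The final argument is the config file to run from that folder.
docker run --rm -v "$(pwd)":/bzt-configs blazemeter/taurus getting-started-test.yml
```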
The first line says we are using a bash script. The second line says we are going to run a Docker image. The parameters passed to this command say that we will clean up the container when we are done, mount the current working directory of the build (which contains the contents of our git repo, including our test .yml file) into the Docker container, into a folder called ‘bzt-configs’ (this is a requirement of the Taurus Docker image; it expects your scripts to be in that folder), and then run the ‘getting-started-test.yml’ file from that folder.
4. Add any triggers you want for when this build should run. I chose to run the build any time I commit a change to the test script in the git project (so I know straight away if a change I made caused the test to fail). I also run the tests every time we commit to master, and every day at 12.30pm over lunch, so we get continual feedback as well as feedback specific to a release. This schedule will be updated over time as we see fit.
5. The next important setting is Build Features. This is where I replace each of the variables in my .yml file with the actual test conditions and insert API keys.
You will want to add a “File Content Replacer” build feature for each variable in your file; these automatically run before any build steps. You can do a regex search or a straight text search. There are other ways to replace content in a file, e.g. a command line script; feel free to use whatever approach you are most comfortable with. I wanted to try out an inbuilt feature of TeamCity I hadn’t used before.
You will also notice that I have used an environment variable within TeamCity as the value, so that I can define all the parameters in one place, on the Parameters build configuration page.
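For illustration (the placeholder and parameter names are my own), one replacer entry might be configured as:

```
Process files:   getting-started-test.yml
Search pattern:  __CONCURRENCY__
Replace with:    %env.CONCURRENCY%
```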
6. Set your environment variables for each of the parameters you have now declared in your test script and in your variable replacement script like this:
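The original screenshot is omitted; using the same hypothetical names, the Parameters page would contain entries along these lines (keep secrets like tokens as password-type parameters):

```
env.CONCURRENCY   10
env.RAMP_UP       5s
env.HOLD_FOR      30s
env.TEST_URL      http://blazedemo.com
env.AUTH_TOKEN    (password-type parameter)
```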
7. Your last step is to make sure the build agents that run this build have Docker installed. Specify any agent requirements needed so that the build always runs on a build agent with Docker. (I didn’t set the agents up, but if you need to set up a build agent with Docker, I don’t imagine there is much more involved than taking a machine with an appropriate amount of RAM and CPU power, installing Docker, and away you go. I’m sure there are walkthroughs available online.)
You are now ready to go! Run your test build to make sure it’s all working as expected. You now have your very own performance test against an HTTP endpoint running in your CI pipeline, giving you feedback on your builds over time.
Next Steps / Still missing
Some of the things that are still to be explored include:
- Find a way to report on performance over time in TeamCity between builds. Probably needs a custom report tool extracting the data out of the build log.
- Add more failure criteria to the build regarding response time or other interesting factors to observe.
- Find the optimum balance of concurrent users, duration of test and ramp up time to adequately test the system, without adversely impacting other users of the system.
- Add more HTTP endpoints to test against, and explore whether this can/should be done within the same script or not. It could certainly be done from the same repo, just targeting a different file in the TeamCity build.
I’m interested to hear about anyone else’s experience with Taurus or other performance testing tools, or ways to improve the approach described here.