Jira, Confluence, Bitbucket performance testing Part 1


In this series of articles I want to talk about performance testing of Atlassian Jira, Confluence, and Bitbucket.

I will not discuss the methodology of performance testing; I will cover only the technical aspects of testing with the dc-app-performance-toolkit provided by Atlassian.

This toolkit lets you performance-test Atlassian products such as Jira, Confluence, and Bitbucket. I love this toolkit because you do not have to spend hours making it work; it works out of the box.

This toolkit uses Taurus, JMeter, and Selenium for testing.

You can use the toolkit for the following purposes:

  • If your company develops apps for the Atlassian Marketplace, you can use this toolkit for Data Center certification.
  • If your company uses Atlassian Jira, Confluence, or Bitbucket internally or externally, you can use this toolkit to test your current configuration with all your scripts and installed apps and, very importantly, on your own dataset. As mentioned in the documentation, this toolkit works only with certain versions of Jira, Confluence, and Bitbucket, but I believe it will work successfully on all recent versions of these products without modifications or with minor ones. I will show you how to modify the toolkit in this series of articles.

The steps needed for performance testing of Jira, Confluence, and Bitbucket are the same, which is why I will provide examples only for Jira performance testing. If something is specific to a product, I will mention it.

You will need git installed on your PC to reproduce the examples in this tutorial.


First you need to clone the toolkit with the following command:

git clone https://github.com/atlassian/dc-app-performance-toolkit.git

You will have the dc-app-performance-toolkit folder created. Move to this folder:

cd dc-app-performance-toolkit

Now you need to install all the dependencies and tools which are used for performance testing by the dc-app-performance-toolkit.

To accomplish this task, please follow the instructions in the path_to_dc_app_performance_toolkit/README.md file.

Config files

Before running performance tests you should read the documentation provided in the path_to_dc_app_performance_toolkit/doc folder.

The contents of the folder are:

  • In the root of this folder there are three md files (one for Jira, one for Confluence, and one for Bitbucket) which explain how to use this toolkit for Data Center certification.
  • Three folders: jira, confluence and bitbucket with information on how to run performance tests for each of the products.

Please read this documentation before running performance tests. I will focus only on important points that are not covered in the documentation.


Before running performance tests you should provide information about your instance in the jira.yml, confluence.yml, or bitbucket.yml file, which are located in the path_to_dc_app_performance_toolkit/app folder. These files are Taurus configuration files. You can find more information on Taurus configuration files here.

I will explain in detail the jira.yml file.


settings is a section in the Taurus configuration file. It contains top-level settings for Taurus. You can find more information on the settings section here.

  artifacts-dir: results/jira/%Y-%m-%d_%H-%M-%S

artifacts-dir is a path template where to save artifact files. Here is the list of artifact files:

  • bzt.log – log of bzt run.
  • error_artifacts – folder with screenshots and HTMLs of Selenium fails.
  • jmeter.err – JMeter errors log.
  • kpi.jtl – JMeter raw data.
  • pytest.out – detailed log of Selenium execution, including stacktraces of Selenium fails.
  • selenium.jtl – Selenium raw data.
  • results.csv – consolidated results of execution.
  • results_summary.log – detailed summary of the run.
  • jira.yml – the jira.yml file which was used for the test run.

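The artifacts-dir template uses standard strftime placeholders, so every run gets its own timestamped results folder. A minimal Python sketch of how such a template expands (the timestamp is made up for illustration):

```python
from datetime import datetime

# Expand the artifacts-dir template for a fixed, made-up timestamp;
# a real run would use the current time instead.
run_time = datetime(2020, 3, 14, 17, 3, 26)
artifacts_dir = run_time.strftime("results/jira/%Y-%m-%d_%H-%M-%S")
print(artifacts_dir)  # results/jira/2020-03-14_17-03-26
```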
  aggregator: consolidator

aggregator contains the module alias for the top-level results aggregator that collects results and passes them to reporters. You can read more about aggregators here.

  verbose: false

verbose lets you run Taurus in debug mode. We do not use the debug mode.


env sets environment variables. You can read more here.

    application_hostname: localhost   # Jira DC hostname without protocol and port e.g. test-jira.atlassian.com or localhost
    application_protocol: http      # http or https
    application_port: 2990            # 80, 443, 8080, 2990, etc
    application_postfix: /jira           # e.g. /jira in case of url like http://localhost:2990/jira
    admin_login: admin
    admin_password: admin

This set of parameters contains information about your Jira instance. I will test against my localhost Jira instance, so I have set all parameters accordingly. These parameters will be used in JMeter, Selenium, and other scripts.
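Taken together, these parameters describe the base URL of the instance under test. A small sketch (my own illustration) of how they combine:

```python
# jira.yml env parameters for a local test instance.
application_protocol = "http"
application_hostname = "localhost"
application_port = 2990
application_postfix = "/jira"

# The scripts address the instance at protocol://hostname:port + postfix.
base_url = (f"{application_protocol}://{application_hostname}"
            f":{application_port}{application_postfix}")
print(base_url)  # http://localhost:2990/jira
```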

    concurrency: 200
    test_duration: 45m

These parameters will be passed to the execution engine of Taurus. I will explain the meaning of these parameters later in the execution section.


    WEBDRIVER_VISIBLE: False

WEBDRIVER_VISIBLE sets the visibility of the Chrome browser during Selenium execution. We make the Chrome browser invisible.


JMETER_VERSION defines the version of JMeter that will be used for testing.

    allow_analytics: Yes            # Allow sending basic run analytics to Atlassian. These analytics help us to understand how the tool is being used and help us to continue to invest in this tooling. For more details please see our README.

services is a section in the Taurus configuration file. It provides information about services which perform actions before the test starts, after the test ends, or in parallel with the running test. You can read more about services here.

  - module: shellexec

Shell executor is used to perform additional shell commands at various test execution phases.

    prepare:
      - python util/environment_checker.py
      - python util/data_preparation/jira/prepare-data.py
    shutdown:
      - python util/jmeter_post_check.py
      - python util/jtl_convertor/jtls-to-csv.py kpi.jtl selenium.jtl
    post-process:
      - python util/analytics.py jira
      - python util/cleanup_results_dir.py

Prepare, shutdown and post-process are Taurus lifecycle stages. You can read more about Taurus lifecycle here. Each stage runs certain scripts. Here is a short description of each script:

  • util/environment_checker.py – checks the Python version. Throws an error if the Python version is wrong.
  • util/data_preparation/jira/prepare-data.py – prepares test data. We will look at it in detail later.
  • util/jmeter_post_check.py – checks that kpi.jtl exists. If this file does not exist, something went wrong with the JMeter testing.
  • util/jtl_convertor/jtls-to-csv.py kpi.jtl selenium.jtl – creates the results.csv file out of the kpi.jtl and selenium.jtl files. The results.csv file contains aggregated information from these files: the average, median, 90% line, maximum, and minimum execution times of the JMeter and Selenium tests, and some other metrics.
  • util/analytics.py jira – sends analytics to Atlassian. You can turn it off with the allow_analytics parameter.
  • util/cleanup_results_dir.py – removes temporary files generated during the test run.
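To give a feel for what the jtl-to-csv conversion computes, here is a rough sketch (my own simplification, not the toolkit's actual code) that aggregates per-label response times the way results.csv does; a .jtl file is essentially a CSV where each row carries a sample label and an elapsed time in milliseconds:

```python
import statistics
from collections import defaultdict

def aggregate(samples):
    """Compute summary metrics per sample label from (label, elapsed_ms) pairs."""
    by_label = defaultdict(list)
    for label, elapsed_ms in samples:
        by_label[label].append(elapsed_ms)
    summary = {}
    for label, times in by_label.items():
        times.sort()
        summary[label] = {
            "avg": statistics.mean(times),
            "median": statistics.median(times),
            "min": times[0],
            "max": times[-1],
        }
    return summary

# Three imaginary samples instead of a real kpi.jtl file.
metrics = aggregate([("view_issue", 120), ("view_issue", 180), ("search_jql", 300)])
print(metrics["view_issue"])
```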

execution is a section of the Taurus configuration file. Execution objects represent actual underlying tool executions. You can find more information here.

  - scenario: jmeter
    concurrency: ${concurrency}
    hold-for: ${test_duration}
    ramp-up: 3m

JMeter execution parameters. You can find more information here.

concurrency – the number of target concurrent virtual users. It means that JMeter will execute scripts emulating 200 simultaneous users.

ramp-up – the time to reach the target concurrency. In performance testing it is good practice to reach the target concurrency gradually.

hold-for – the time to hold the target concurrency. Once the target concurrency is reached, tests will keep running for this amount of time.
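The three parameters above describe a classic load profile: the user count grows linearly during ramp-up, then stays constant during hold-for. A small sketch (my own illustration, not Taurus code) of the target user count over time, using the defaults concurrency: 200 and ramp-up: 3m:

```python
def active_users(t_sec, concurrency=200, ramp_up_sec=180):
    """Target number of virtual users at second t: linear growth, then a plateau."""
    if t_sec >= ramp_up_sec:
        return concurrency
    return int(concurrency * t_sec / ramp_up_sec)

print(active_users(90))   # 100 - halfway through the 3m ramp-up
print(active_users(600))  # 200 - the hold-for plateau
```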

  - scenario: selenium
    executor: selenium
    runner: pytest
    hold-for: ${test_duration}

Selenium execution parameters. You can find more information here.

executor – the execution engine; here we use the selenium executor.

runner – test runner. We use pytest.

hold-for – time to hold target concurrency.


scenarios is a section of the Taurus configuration file. It provides parameters for all scenarios declared in the execution section.

    script: selenium_ui/jira_ui.py

script provides the path to the Selenium tests.

    # provides path to the JMeter project file
    script: jmeter/jira.jmx
      application_hostname: ${application_hostname}
      application_protocol: ${application_protocol}
      application_port: ${application_port}
      application_postfix: ${application_postfix}
      # Workload model
      # the number of actions per hour
      total_actions_per_hr: 54500
      # actions and the % of execution within one hour; the sum of all perc_ parameters must equal 100%
      perc_create_issue: 4
      perc_search_jql: 13
      perc_view_issue: 43
      perc_view_project_summary: 4
      perc_view_dashboard: 12
      perc_edit_issue: 4
      perc_add_comment: 2
      perc_browse_projects: 4
      perc_view_scrum_board: 3
      perc_view_kanban_board: 3
      perc_view_backlog: 6
      perc_browse_boards: 2
      perc_standalone_extension: 0 # By default disabled

script provides the path to the JMeter project file.

total_actions_per_hr sets the number of actions performed within one hour.

perc_ parameters set the percentage of execution of each operation per hour. The sum of all perc_ parameters must be equal to 100%.
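Since the toolkit derives per-action rates from these percentages, it is worth sanity-checking that they add up to exactly 100. A quick sketch using the default values above:

```python
# Default perc_ values from jira.yml; their sum must be exactly 100.
perc = {
    "create_issue": 4, "search_jql": 13, "view_issue": 43,
    "view_project_summary": 4, "view_dashboard": 12, "edit_issue": 4,
    "add_comment": 2, "browse_projects": 4, "view_scrum_board": 3,
    "view_kanban_board": 3, "view_backlog": 6, "browse_boards": 2,
    "standalone_extension": 0,
}
total = sum(perc.values())
print(total)  # 100

# With total_actions_per_hr = 54500, view_issue alone accounts for
# 54500 * 43 / 100 = 23435 actions per hour.
```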

    rtimes-len: 0 # CONFSRVDEV-7631 reduce sampling
    percentiles: [] # CONFSRVDEV-7631 disable all percentiles due to Taurus's excessive memory usage

modules is a section in the Taurus configuration file. This section contains a list of classes to load and the settings of these classes.

    version: ${JMETER_VERSION}
    detect-plugins: true
    memory-xmx: 8G  # allow JMeter to use up to 8G of memory
    plugins:
      - bzm-parallel=0.4
      - bzm-random-csv=0.6
      - jpgc-casutg=2.5
      - jpgc-dummy=0.2
      - jpgc-ffw=2.0
      - jpgc-fifo=0.2
      - jpgc-functions=2.1
      - jpgc-json=2.6
      - jpgc-perfmon=2.1
      - jpgc-prmctl=0.4
      - jpgc-tst=2.4
      - jpgc-wsc=0.3
      - tilln-sshmon=1.0
      - jpgc-cmd=2.2
      - jpgc-synthesis=2.2
    system-properties:
      server.rmi.ssl.disable: true
      java.rmi.server.hostname: localhost
      httpsampler.ignore_failed_embedded_resources: "true"

jmeter provides properties for the JMeter module. You can read more about JMeter properties here.

detect-plugins – the JMeter Plugins Manager can install the plugins required by your jmx file automatically. Yes, we want the required plugins installed automatically.

plugins – a list of JMeter plugins you want to use.

system-properties – system properties for JMeter. You can find more information on JMeter system properties here.

      # version of the ChromeDriver
      version: "80.0.3987.106" # Supports Chrome version 80. You can refer to http://chromedriver.chromium.org/downloads

selenium provides Selenium settings.

chromedriver – the version of ChromeDriver we will use for testing.

- data-source: sample-labels
  module: junit-xml

reporting is a section in the Taurus configuration file which provides analysis and reporting settings. We say that we want JUnit XML reporting. You can find more information here.

I have briefly explained the jira.yml file. Let's sum it up.

We have 6 sections in our jira.yml file:

settings – we define our settings here.

You need to change the following parameters in this section:

    application_hostname: localhost   # Jira DC hostname without protocol and port e.g. test-jira.atlassian.com or localhost
    application_protocol: http      # http or https
    application_port: 2990            # 80, 443, 8080, 2990, etc
    application_postfix: /jira           # e.g. /jira in case of url like http://localhost:2990/jira
    admin_login: admin
    admin_password: admin

Also you may want to change the following parameters:

concurrency: 5
test_duration: 5m

I changed these parameters to lower values just to make the example tests run faster. But in real life, if you change test_duration and concurrency, do not forget to adjust the ramp-up parameter in the execution section if needed.

services – we define here our scripts to run during Taurus lifecycle.

execution – we provide our test scenarios here. First we run jmeter, then we run Selenium.

scenarios – we describe in detail our two scenarios from the execution section (jmeter scenario and Selenium scenario). You may need to change the number of operations per hour and the percentage of execution of each operation in the jmeter scenario. It depends on how you use your Jira.

modules – we tell Taurus that we will use consolidator, jmeter and Selenium, and Taurus should make sure that these modules are available during our test runs.

reporting – we set parameters for reporting.

OK. We have changed all the parameters and are ready to execute tests.

Run test

We should activate the Python virtual environment (you can read how to do it in the README.md file in the root folder of dc-app-performance-toolkit), move to the path_to_dc_app_performance_toolkit/app folder, and run the following command:

bzt jira.yml

In my case I received the following error:

17:03:26 WARNING: Errors for python util/data_preparation/jira/prepare-data.py:
There are no software projects in Jira

And that is correct: my Jira is empty, which means there is no test data for performance testing. I will show you how to prepare test data in the next part of this tutorial.


4 thoughts on “Jira, Confluence, Bitbucket performance testing Part 1”

  1. Hello Alexey Matveev,

    Thank you for the detailed explanation of the performance testing process. I have one question though: do I need to purchase AWS services to run my performance tests on Jira? In the documentation they only talk about the AWS Quick Start for Jira.
    Thank you

    1. Hello Rym,
      Thank you for your question!
      You do not need to buy AWS or anything else. This toolkit will work with any Jira instance; it does not matter whether the instance is on AWS or on your own server. It even works if you run Jira on your localhost.
