End-of-Life (EoL)

Performance Benchmark

Details the Cortex XSOAR hardware specifications and requirements, and the benchmarking performance tests conducted in Cortex XSOAR labs.
Cortex XSOAR is designed to maximize performance and enable scalability. The benchmarking process is conducted annually to verify that the product continues to deliver the best possible experience and performance levels.
Cortex XSOAR performance is determined by compute, memory, and disk (I/O) performance. Each component can impact a different part of the system, so it is important to deploy Cortex XSOAR on infrastructure that meets all requirements.
The amount of data each incident holds can have a significant impact on system performance and disk usage. To achieve optimal performance and disk usage, we recommend that an incident be no larger than 0.5 MB.
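As a sketch of how that 0.5 MB guideline could be enforced before ingestion, the following hypothetical check (not part of the product) measures the serialized size of an incident payload:

```python
# Hypothetical pre-ingestion check: flag incidents larger than the
# recommended 0.5 MB to protect performance and disk usage.
import json

MAX_INCIDENT_BYTES = 512 * 1024  # 0.5 MB recommendation


def incident_size_ok(incident: dict) -> bool:
    """Return True if the serialized incident stays under the 0.5 MB limit."""
    return len(json.dumps(incident).encode("utf-8")) <= MAX_INCIDENT_BYTES


small = {"name": "Phishing report", "details": "Suspicious email"}
large = {"name": "Bulk dump", "details": "x" * (600 * 1024)}  # ~0.6 MB of raw data
```

Oversized incidents could then be trimmed (for example, by moving large attachments to external storage) before they are created.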

Disk Usage

The required disk space for each incident varies based on the number of integrations and the size and complexity of the playbook. We simulated the number of incidents and their respective size in the disk. For the simulation we used out-of-the-box integrations and an example phishing playbook (see Simulation Incidents and Required Disk Size table).
The incidents were generated using genuine phishing emails of various sizes, which averaged 4KB.
The values below show the disk space for incidents after ingestion and playbook runs, without Demisto data compression. A plain incident before ingestion and playbook run averages 3 KB in the file system; the exact size depends on the data received from the SIEM.
You should not compare disk usage tests between versions. Different tests are performed for each version.
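A back-of-envelope estimate can translate the per-incident averages above into total disk usage. The per-incident size after enrichment used here is purely illustrative; measure your own environment before capacity planning:

```python
# Rough uncompressed disk estimate from incident volume and average
# per-incident size. The 45 KB figure below is an illustrative assumption,
# not a documented value.
def estimate_disk_gb(num_incidents: int, kb_per_incident: float) -> float:
    """Estimate uncompressed disk usage in GB for a given incident volume."""
    return num_incidents * kb_per_incident / (1024 * 1024)


# e.g. 500,000 incidents at an assumed ~45 KB each after enrichment
est = estimate_disk_gb(500_000, 45)
```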
Simulation Incidents and Required Disk Size

Product Version    Required Disk Size
…                  22 GB / 90 GB / 225 GB
…                  27 GB / 108 GB / 270 GB

Performance Benchmark Test

The benchmark test was performed on dedicated virtual servers running Amazon Linux 2, using the recommended specifications. The tests used dedicated SSD hardware with provisioned IOPS, which guarantees consistent input/output operations per second rather than the variable IOPS rate common in cloud machines.
The virtual server specifications were:
  • 16 CPU cores
  • 32 GB RAM
  • 1 TB SSD (GP2)
  • Cortex XSOAR v5.5

Benchmark Process

The benchmarking process runs an automated test that performs batch incident mapping and classification, ingestion, and execution of a specified playbook. We measure each step in the incident lifecycle, as well as the total time from the first incident received to the last incident closed.
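The measurement loop can be sketched as follows. This is a minimal illustration, not the actual test harness; it runs incidents sequentially for clarity, whereas the real tests execute them in parallel, and the `ingest` and `run_playbook` callables are stand-ins:

```python
# Minimal sketch of a benchmark measurement loop: record per-step
# durations and the wall-clock span from first ingestion to last closure.
import time


def run_benchmark(incidents, ingest, run_playbook):
    timings = {"ingestion": [], "playbook": []}
    start = time.perf_counter()
    for incident in incidents:
        t0 = time.perf_counter()
        ingest(incident)  # step 1: ingest the incident
        timings["ingestion"].append(time.perf_counter() - t0)
        t1 = time.perf_counter()
        run_playbook(incident)  # step 2: run the playbook to completion
        timings["playbook"].append(time.perf_counter() - t1)
    timings["total"] = time.perf_counter() - start  # first received -> last closed
    return timings


# Dry run with no-op steps, just to show the shape of the results
results = run_benchmark(range(5), lambda i: None, lambda i: None)
```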
The same three tests were performed in two different environments: a single-server environment and a distributed database environment. Each environment contained only out-of-the-box integrations, scripts, and commands.
  • Ingest and run 50 incidents with an automated playbook to completion.
  • Ingest and run 100 incidents with an automated playbook to completion.
  • Ingest and run 500 incidents with an automated playbook to completion.
Incidents are ingested via HTTP REST request.
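A request of this kind might be assembled as below. The endpoint path (`POST /incident`), the `Authorization` API-key header, and the `createInvestigation` field follow the Cortex XSOAR REST API convention, but verify them against your server's API documentation; the incident type and key are placeholders:

```python
# Sketch of building an incident-creation request for the XSOAR REST API.
# The request is only constructed here, not sent.
import json


def build_create_incident_request(server_url: str, api_key: str, name: str) -> dict:
    return {
        "url": f"{server_url}/incident",          # assumed endpoint path
        "headers": {
            "Authorization": api_key,             # XSOAR API key
            "Content-Type": "application/json",
        },
        "body": json.dumps({
            "name": name,
            "type": "Phishing",                   # assumed incident type
            "createInvestigation": True,          # start the playbook immediately
        }),
    }


req = build_create_incident_request(
    "https://xsoar.example.com", "<API_KEY>", "Benchmark incident 1"
)
```

In the benchmark, requests like this are issued in a batch, one per incident.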
Each incident automatically triggers the default phishing playbook (complex), which performs the actions listed below. These actions run simultaneously across all incidents in each test.
  • Parse and process the email
  • Auto-run IOC extraction and reputation checks for all indicators
  • Extract attachments
  • Calculate incident severity based on IOCs
  • Notify users (administrators and the email sender) about the progress of the incident
  • Close the incident
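The steps above can be sketched as a linear pipeline. Every function here is a hypothetical stand-in for a playbook task, not product code:

```python
# Illustrative pipeline for the phishing playbook steps; all helpers
# below are trivial stand-ins for real playbook tasks.
def extract_iocs(email: dict) -> list:
    # Stand-in IOC extraction: pull URL-like tokens from the body
    return [w for w in email.get("body", "").split() if w.startswith("http")]


def check_reputation(indicator: str) -> str:
    # Stand-in reputation check
    return "suspicious" if "bad" in indicator else "clean"


def score_severity(reputations: dict) -> str:
    # Stand-in severity calculation based on IOC reputations
    return "high" if "suspicious" in reputations.values() else "low"


def notify(recipient, severity):
    pass  # placeholder for the user/sender notification step


def handle_phishing_incident(email: dict) -> dict:
    incident = {"email": email, "status": "open"}
    incident["indicators"] = extract_iocs(email)                       # IOC extraction
    incident["reputations"] = {
        i: check_reputation(i) for i in incident["indicators"]         # reputation checks
    }
    incident["attachments"] = email.get("attachments", [])             # extract attachments
    incident["severity"] = score_severity(incident["reputations"])     # severity from IOCs
    notify(email.get("sender"), incident["severity"])                  # notify users
    incident["status"] = "closed"                                      # close the incident
    return incident


result = handle_phishing_incident(
    {"body": "visit http://bad.example now", "sender": "a@b.com"}
)
```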

Benchmark Results

The results are the average time for each test.
The numbers specified were measured without data compression. Results may vary based on the machines' hardware specifications, system configuration, Docker version, and the type of actions performed.
Cortex XSOAR utilizes free memory and available resources, including caching and container management, to speed up system performance.
Single-server Tests
Number of incidents executed in parallel    Time to complete ingestion    Time to complete playbook
Distributed Database Tests
The environment that was tested consisted of four servers:
  • 1 application server
  • 1 main DB node
  • 2 DB nodes
Number of incidents executed in parallel    Time to complete ingestion    Time to complete playbook
