Elasticsearch System Requirements
There are several options for Elasticsearch deployment,
each with specific sizing requirements. This topic provides information about
the system requirements for implementing Cortex XSOAR with Elasticsearch.
Elasticsearch server
The information in the following table is per Elasticsearch
node, and assumes that the node is assigned all Elasticsearch node
roles (for example, master, data, and ingest).
Component | Dev Environment Minimum | Production Minimum |
---|---|---|
CPU | 8 CPU cores | 16 CPU cores |
Memory | 16 GB RAM | 32 GB RAM |
Storage | 250 GB SSD | 500 GB SSD with minimum 3k dedicated IOPS |
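As a rough pre-flight check, the following sketch compares a Linux host's CPU count, memory, and disk capacity against the production minimums in the table. It is a minimal illustration only, not an official sizing tool; the data path is an assumption (replace it with your actual Elasticsearch data directory), and dedicated IOPS cannot be verified this way.

```python
# Minimal sketch: compare a Linux host's resources against the production
# minimums listed above (16 CPU cores, 32 GB RAM, 500 GB SSD).
# The data path below is an assumption; adjust it for your deployment.
import os
import shutil

DATA_PATH = "/var/lib/elasticsearch"  # hypothetical Elasticsearch data directory

cpu_cores = os.cpu_count()

# Total memory in GB, read from /proc/meminfo (Linux only).
with open("/proc/meminfo") as f:
    mem_kb = int(next(line for line in f if line.startswith("MemTotal")).split()[1])
mem_gb = mem_kb / 1024 / 1024

disk_gb = shutil.disk_usage(DATA_PATH).total / 1024**3

print(f"CPU cores: {cpu_cores} (production minimum: 16)")
print(f"Memory:    {mem_gb:.1f} GB (production minimum: 32 GB)")
print(f"Disk:      {disk_gb:.1f} GB at {DATA_PATH} (production minimum: 500 GB SSD)")
# Note: the 3k dedicated IOPS requirement must be confirmed with your storage provider.
```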
The required Elasticsearch user permissions are described in the security guidelines.
You must ensure that latency between the Elasticsearch and Cortex XSOAR
servers, and between Elasticsearch servers, does not exceed
100 ms. Latency that exceeds 100 ms can cause serious performance
degradation.
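To sanity-check the 100 ms requirement, you can measure round-trip times between the hosts. The sketch below approximates latency with TCP connection setup time; the hostnames and ports are placeholders, and a standard tool such as ping provides similar information.

```python
# Rough latency check between hosts (hostnames and ports are placeholders).
# A TCP connect round trip approximates network latency; it should stay well
# under the 100 ms limit described above.
import socket
import statistics
import time

TARGETS = [
    ("elasticsearch-node1.example.com", 9200),  # hypothetical Elasticsearch node
    ("xsoar.example.com", 443),                 # hypothetical Cortex XSOAR server
]

def median_rtt_ms(host: str, port: int, samples: int = 5) -> float:
    """Return the median TCP connect time to host:port in milliseconds."""
    rtts = []
    for _ in range(samples):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=2):
            pass
        rtts.append((time.perf_counter() - start) * 1000)
    return statistics.median(rtts)

for host, port in TARGETS:
    rtt = median_rtt_ms(host, port)
    status = "OK" if rtt <= 100 else "TOO HIGH"
    print(f"{host}:{port} median RTT {rtt:.1f} ms ({status})")
```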
Supported Elasticsearch Versions
Cortex XSOAR officially supports Elasticsearch versions
7.4.x to 7.10, including minor versions. Other versions might still
work, but have not been tested with Cortex XSOAR.
Elasticsearch in the Cloud
Cortex XSOAR supports using Elasticsearch with all the
major cloud service providers: Amazon Web Services, Azure, and Google
Cloud Platform.
Amazon Web Services offers OpenSearch as a replacement
for Elasticsearch. Cortex XSOAR supports OpenSearch v1.0 (not for
multi-tenant architecture).
You can use Elasticsearch as a service provided by your cloud
provider, or install Elasticsearch on a server in the cloud.
The hardware requirements for Elasticsearch in the cloud are similar
to those listed above. To meet them with your cloud provider,
Cortex XSOAR recommends that you choose machine types based on how you intend to use each node.
For example:
- When the Elasticsearch server functions as a data node, we recommend that you use a storage optimized machine, such as the AWS i3.2xlarge machine. Alternatively, you can use a memory optimized machine, such as the AWS r3.2xlarge machine.
- When the Elasticsearch server is used for any other function (such as a master node), we recommend that you use a compute optimized machine, such as the AWS c4.2xlarge machine.
You can configure your cloud environment to work with different
regions provided that you can maintain the minimum latency requirements
noted above.
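To confirm which roles each node in your cluster actually carries, and therefore which machine profile from the list above applies to it, you can query the _cat/nodes API. The sketch below uses the official elasticsearch Python client; the cluster URL is a placeholder.

```python
# List each node's roles to confirm which nodes are data nodes (storage or
# memory optimized machines) and which serve other roles such as master
# (compute optimized machines). The cluster URL is a placeholder.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://elasticsearch.example.com:9200")  # hypothetical URL

# node.role is a compact string, for example "dim" = data, ingest, master.
for node in es.cat.nodes(format="json", h="name,node.role,heap.percent,cpu"):
    print(node["name"], node["node.role"],
          f"heap {node['heap.percent']}%", f"cpu {node['cpu']}%")
```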
General Configurations
We recommend that you implement the following Elasticsearch configurations
in Cortex XSOAR. The value of the shards and replica shards should match
the total number of Elasticsearch nodes that you have.
Set the number of shards for an index
This server configuration enables you to set the number of shards
for a specific index upon creation, where <common-indicator> is
the name of the index. The default is 1. To improve write performance, you can increase the number
of shards and decrease the number of replica shards.
elasticsearch.shards.<common-indicator>
Set the number of replica shards for an index
This server configuration enables you to set the number of replica
shards for a specific index upon creation, where <common-indicator> is
the name of the index. To increase search performance and data redundancy,
you should set the value to the number of Elasticsearch nodes that
you have. The default is 1.
elasticsearch.replicas.<common-indicator>
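As an illustration only: in a three-node cluster where the indicator index is named common-indicator, the two server configurations above might be set as key/value pairs such as elasticsearch.shards.common-indicator = 3 and elasticsearch.replicas.common-indicator = 3. After the index is created, you can verify the resulting settings directly against Elasticsearch. The sketch below uses the official elasticsearch Python client; the cluster URL and index name are assumptions.

```python
# Verify the shard and replica settings that an index was created with.
# The cluster URL and index name ("common-indicator") are assumptions; use
# the values that apply to your deployment.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://elasticsearch.example.com:9200")  # hypothetical URL

settings = es.indices.get_settings(index="common-indicator")
index_settings = settings["common-indicator"]["settings"]["index"]
print("shards:  ", index_settings["number_of_shards"])
print("replicas:", index_settings["number_of_replicas"])
```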
Maximum indicator capacity and disk usage comparison
The following table compares the maximum total indicator
capacity and disk usage for BoltDB and Elasticsearch. The maximum
indicator capacity value was determined when testing the system.
We recommend using Elasticsearch if you plan to
exceed at least one of the following maximum capacities for BoltDB.
The Cortex XSOAR indicators used to test the sizing requirements
did not contain a significant number of additional or custom
fields. The largest indicators we tested had 20 additional
or custom fields, each containing a random string of 1-16 characters, so
the indicators tested were approximately 0.5 KB in size. If you plan
to have more additional or custom fields for indicators, the maximum numbers
should be reduced.
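You can reproduce the approximate 0.5 KB figure with a short sketch that builds an indicator-like JSON document with 20 custom fields of random 1-16 character strings and measures its serialized size. The field names and structure here are illustrative, not the actual Cortex XSOAR indicator schema.

```python
# Rough illustration of the ~0.5 KB test indicator size described above:
# a JSON document with 20 custom fields, each a random 1-16 character string.
# The field names and structure are illustrative, not the real indicator schema.
import json
import random
import string

def random_string(max_len: int = 16) -> str:
    return "".join(random.choices(string.ascii_letters, k=random.randint(1, max_len)))

indicator = {
    "value": "198.51.100.7",       # example indicator value (documentation IP range)
    "indicator_type": "IP",
    **{f"customfield{i}": random_string() for i in range(20)},
}

size_bytes = len(json.dumps(indicator).encode("utf-8"))
print(f"Approximate serialized indicator size: {size_bytes} bytes (~{size_bytes / 1024:.2f} KB)")
```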
Benchmark | BoltDB | Elasticsearch |
---|---|---|
Maximum indicator capacity (total) | 5-7 million (Requires up to 10 seconds for a complex query) | 100 million (Requires approximately 40 seconds for a complex query) |
Disk usage | 5 million (~ 30 GB) | 100 million (~ 70 GB) |
If performance is poor, or you know in advance that you will
need more than the maximum number of indicators, you should consider
scaling BoltDB or moving to Elasticsearch. If you are already using
Elasticsearch, you can scale it as well. For both BoltDB and Elasticsearch,
you can scale by either adding engines for one or more feed integrations
or increasing the resources (CPU, RAM, Disk IOPS) of the Cortex XSOAR
server. For Elasticsearch, you can also increase the Elasticsearch
cluster size from 1 server to 2 or more servers.
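If you scale out the Elasticsearch cluster, a quick way to confirm that the additional data nodes have joined and that shards are allocated is the cluster health API. A minimal sketch, with a placeholder cluster URL, is shown below.

```python
# Minimal check that scaled-out Elasticsearch nodes have joined the cluster
# and that shards are allocated. The cluster URL is a placeholder.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://elasticsearch.example.com:9200")  # hypothetical URL

health = es.cluster.health()
print("status:            ", health["status"])            # green / yellow / red
print("data nodes:        ", health["number_of_data_nodes"])
print("active shards:     ", health["active_shards"])
print("unassigned shards: ", health["unassigned_shards"])
```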
Incident disk usage comparison
The following table compares the disk usage for BoltDB
and Elasticsearch.
Number of Incidents | BoltDB | Elasticsearch |
---|---|---|
10,000 | 28 GB | 13 GB |
40,000 | 112 GB | 52 GB |
100,000 | 280 GB | 130 GB |
Single feed fetch comparison
The following table compares the number of indicators,
time to ingestion, and disk usage for BoltDB and Elasticsearch.
Number of Indicators | Database | Time to Ingestion | Disk Usage |
---|---|---|---|
30k | BoltDB | 16s | 1.3 GB |
30k | Elasticsearch | 11s | 1.08 GB + 161 MB (Elasticsearch index) |
50k | BoltDB | 33s | 1.45 GB |
50k | Elasticsearch | 25s | 1.08 GB + 26.7 MB (Elasticsearch index) |
100k | BoltDB | 1m8s | 2.1 GB |
100k | Elasticsearch | 49s | 1.08 GB + 53 MB (Elasticsearch index) |
1M | BoltDB | 12m21s | 13.5 GB |
1M | Elasticsearch | 7m25s | 1.08 GB + 570 MB (Elasticsearch index) |
2M | BoltDB | 22m27s | 32 GB |
2M | Elasticsearch | 22m20s | 1.23 GB + 1 GB (Elasticsearch index) |