Multi-region Deployments

Deploying the Identity-Based Microsegmentation control plane across multiple regions ensures that your core services stay up if a region goes down: Aporeto fails over to a different region. All clusters have access to the same MongoDB, ensuring no interruption of the core services.
However, each cluster has its own time-series database. You won’t be able to access the flow logs, events, or reports of the original cluster from the failover cluster. Likewise, after restoring operations in the original cluster, you’ll have a gap in your flow logs, events, and reports.
To achieve multi-region availability, you can deploy two or more control planes to separate regional Kubernetes clusters in an active/passive fashion (only one control plane serves requests at any time).
The main idea is to have a DNS record, say https://api.aporeto.com, handled by a load balancer that updates the record to point to either the https://api.active.aporeto.com or the https://api.passive.aporeto.com platform. An auto-maintenance job then makes sure that only the platform the global record points to is active. The other platform returns HTTP status code 423, which means maintenance mode.
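For example, assuming the load balancer publishes api.aporeto.com as a CNAME to the currently active platform, you can check from any workstation which platform is active:
dig +short api.aporeto.com
# api.active.aporeto.com.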

Limitations

The other databases, which store caches, reports, and metrics, are not replicated across clusters. Thus, the flow logs, events, and reports of one cluster will not be accessible from the other cluster after failover.
Please note that MongoDB Atlas is not supported, because shard and user creation in Atlas is only possible through its REST API.

Requirements

  • Two Kubernetes clusters in different regions
  • Bring your own MongoDB database (BYODB) that will be used by all control planes
  • Voila installed on a VM or workstation

Configure Your Clusters

The very first step is to create two Kubernetes clusters, retrieve their kubeconfig files, and merge them if needed:
KUBECONFIG=./foo.yaml:./bar.yaml kubectl config view --merge --flatten > clusters.yaml
Export that file as KUBECONFIG with export KUBECONFIG=/path/to/clusters.yaml and follow the install documentation, but do not proceed with the installation yet.
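To confirm the merge, you can list the contexts from the merged file (the context names foo and bar follow the example above):
export KUBECONFIG=/path/to/clusters.yaml
kubectl config get-contexts
The output should show one entry per cluster, for example foo and bar.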
A voila environment can support several zones (that is, target clusters) for the multi-region deployment model described in this document. Zones are managed with a tool called mz, for multi zone.
In the following example, we will add a zone called passive.
When you add a new zone and no zone existed before, the current voila environment automatically creates a zone called default.

Configure Zones

Use the mz command to configure zones in the control plane:
mz -a <name> <context> <namespace>
where:
  • <name> is the zone name
  • <context> is the Kubernetes context to use
  • <namespace> is the namespace to create or use
Example:
mz -a passive bar aporeto-svcs
The output of mz list-zones would look like:
[DEFAULT] zone (active):
  * Kubernetes context: foo
  * Kubernetes namespace: aporeto-svcs
  * Voila configuration folder: /voila-env/./conf.d/DEFAULT
[PASSIVE] zone:
  * Kubernetes context: bar
  * Kubernetes namespace: aporeto-svcs
  * Voila configuration folder: /voila-env/./conf.d/PASSIVE
If you need to adjust a zone's context or namespace, you can edit the aporeto.yaml file and change the following section:
> vi aporeto.yaml
...
zones:
  - name: default
    k8s_context: foo
    k8s_namespace: aporeto-svcs
  - name: passive
    k8s_context: bar
    k8s_namespace: aporeto-svcs

MongoDB Setup

Your MongoDB setup must run version 4.x and use proper SSL certificates for the routers.

Requirements

A sharded setup with a minimum of two shards (you can add more to scale afterward), as sketched after this list:
  • data
  • reports
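For illustration, if the two shards are backed by replica sets named data and reports, they could be added from a mongos shell as follows (hostnames are placeholders; a shard's name defaults to its replica set name):
sh.addShard('data/data-rs-0.example.local:27018')
sh.addShard('reports/reports-rs-0.example.local:27018')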

How Authentication Works with the Identity-Based Microsegmentation Control Plane

The authentication mechanism MUST be authMechanism=MONGODB-X509.
At install time, an intermediate CA called the System CA is created from the CA root in voila.
That System CA is then used to issue the certificates described below.
When a service <service> is deployed, a call is made with the admin certificate to create the user for that service:
{ "_id" : "$external.CN=<service>,OU=root,O=system", "user" : "CN=<service>,OU=root,O=system", "db" : "$external", "roles" : [ { "role" : "read", "db" : "shared_zoning" }, { "role" : "readWrite", "db" : "<service>" }, ...[other roles] ], "mechanisms" : [ "external" ] }
Now when a service starts, it asks our PKI service for a client certificate signed by the System CA, with the subject CN=<service>,OU=root,O=system, and uses it to connect to the MongoDB URL.
The router certificate is verified as issued from the System CA, and the router can verify the client certificate as issued from the same CA.
From here we have two options to configure certificates:
  • Case one:
    Aporeto provides all certificates to be deployed on the external MongoDB
  • Case two:
    Aporeto provides the client CA, and the customer provides the MongoDB CA for connection to the routers
In both cases, the customer needs to provide the MongoDB URL to reach the routers and perform the proper sharding/zoning configuration.
Please note that the mongo binary is not in the voila container, but you can install it for testing with apk add mongodb. The tools available through mgos and the alerting/dashboards will not work with an external MongoDB.

Step by Step Configuration and Deployment

The customer needs to provide a MongoDB URL like:
mongodb-shard-router-0.externalfqdn.local:27017,mongodb-shard-router-1.externalfqdn.local:27017,mongodb-shard-router-2.externalfqdn.local:27017
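For reference, a full connection string built from that host list with the required X.509 options might look like the following (a sketch; the exact options each service appends may differ):
mongodb://mongodb-shard-router-0.externalfqdn.local:27017,mongodb-shard-router-1.externalfqdn.local:27017,mongodb-shard-router-2.externalfqdn.local:27017/?ssl=true&authSource=$external&authMechanism=MONGODB-X509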
An admin user on the $external database, such as:
{ "_id" : "$external.CN=mongodb-admin,OU=users,O=mongodb", "userId" : UUID("ae13462c-bdec-448a-ab7c-d68c0b5c464e"), "user" : "CN=mongodb-admin,OU=users,O=mongodb", "db" : "$external", "roles" : [ { "role" : "root", "db" : "admin" } ], "mechanisms" : [ "external" ] }
To create it, just open a mongo shell on your router and type:
db.getSiblingDB('$external').runCommand({ createUser: 'CN=mongodb-admin,OU=users,O=mongodb', roles: [ { role: 'root', db: 'admin' } ] });
And proper tagging on the replica sets composing the shards, with tags z0 and z1. For instance:
sh.addShardToZone('data','z0')
sh.addShardToZone('reports','z1')
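You can verify the assignment from a mongos shell; the zone tags are stored in the tags field of config.shards:
db.getSiblingDB('config').shards.find({}, { _id: 1, tags: 1 })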

Configure the Voila Environment

Disable the installation of MongoDB in the Kubernetes cluster for all zones:
mz set_value enabled false mongodb-shard override
If MongoDB is already installed, it will be removed when running the snap command.
Set the external MongoDB URL from a voila environment with:
set_value global.database.mongo.host mongo1:27017,mongo2:27017,mongo3:27017 override
where mongo1:27017,mongo2:27017,mongo3:27017 are the routers to reach.
This is a global setting, so you don't need to use the mz prefix command for it.
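You can confirm the setting afterward with get_value (the same helper used to read configuration values elsewhere in this document):
get_value global.database.mongo.host
# mongo1:27017,mongo2:27017,mongo3:27017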

Use Aporeto PKI to Generate the Router Certificate for the Mongo Router

The following steps will generate the router certificate to use on the hosts that compose the MongoDB URL (with the intermediate chain included).
Regenerate the mongo certificates with upconf regen-certs. The regenerated router certificate includes the following subject alternative names:
DNS:mongodb-shard-router, DNS:mongodb-shard-router.aporeto-svcs, DNS:*.mongodb-shard-router, DNS:*.mongodb-shard-router.aporeto-svcs, DNS:mongo-1, DNS:mongo-2, DNS:mongo-3, DNS:localhost, IP Address:127.0.0.1
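You can inspect the SANs of the regenerated certificate with openssl (the exact file name under certs/ depends on your setup):
openssl x509 -in certs/<router-cert>.pem -noout -text | grep -A1 'Subject Alternative Name'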
You need to configure your router with the certificates described below. For reference, if you need to craft a custom certificate for your router by hand, generate the mongo router certificate from voila:
tg cert \
  --auth-server \
  --algo rsa \
  --org aporeto \
  --org-unit 'service' \
  --name "external-mongodb" \
  --common-name "external-mongodb" \
  --pass "APASS" \
  --dns mongo1 \
  --dns mongo2 \
  --dns mongo3 \
  --signing-cert certs/ca-signing-system-cert.pem \
  --signing-cert-key certs/ca-signing-system-key.pem \
  --signing-cert-key-pass "$(get_value global.certs.ca.system.pass)"
This will output:
INFO[0000] certificate key pair created cert=external-mongodb-cert.pem key=external-mongodb-key.pem
Concatenate them to create a full certificate:
cat certs/external-mongodb-key.pem certs/external-mongodb-cert.pem > certs/external-mongodb-cert-full.pem
Then use those certs on your MongoDB routers.
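For reference, a mongos net section using those files might look like the following (a sketch with illustrative paths; MongoDB 4.2+ also accepts the equivalent net.tls options):
net:
  port: 27017
  ssl:
    mode: requireSSL
    PEMKeyFile: /etc/mongo/external-mongodb-cert-full.pem
    PEMKeyFilePassword: "APASS"
    CAFile: /etc/mongo/ca-chain-system.pem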
Then you can try to connect to mongo manually with:
mongo \
  --host mongo1:PORT,mongo-one-fqdn:PORT,mongo2:PORT,mongo3:PORT \
  --ssl \
  --sslCAFile certs/ca-chain-system.pem \
  --sslPEMKeyFile certs/mongodb-admin-full.pem \
  --sslPEMKeyPassword "$(get_value global.certs.mongodb.admin.pass)" \
  --username "CN=mongodb-admin,OU=users,O=mongodb" \
  --authenticationDatabase '$external' \
  --authenticationMechanism 'MONGODB-X509'

Use Customer Certificate Authority to Connect to the Routers

Drop the customer router certificate authority into the /certs folder:
mkdir -p /certs
cp custom-ca.pem /certs/mongodb-custom-ca.pem
Run upconf; this will display a message like:
Using provided Custom CA for mongodb database with CN:xxx
You need to configure your router with certs/ca-chain-system.pem as the client certificate authority (used for client certificate authentication).
Then you can try to connect to mongo manually with:
mongo \
  --host mongo1:PORT,mongo-one-fqdn:PORT,mongo2:PORT,mongo3:PORT \
  --ssl \
  --sslCAFile certs/mongodb-custom-ca.pem \
  --sslPEMKeyFile certs/mongodb-admin-full.pem \
  --sslPEMKeyPassword "$(get_value global.certs.mongodb.admin.pass)" \
  --username "CN=mongodb-admin,OU=users,O=mongodb" \
  --authenticationDatabase '$external' \
  --authenticationMechanism 'MONGODB-X509'

Configure and Deploy the Control Plane on Your Zones

Set the Proper URL Needed for All Zones

Set options for the default zone:
mz -z default set_value global.public.api https://active-api.aporeto.com override
mz -z default set_value global.public.ui https://active-ui.aporeto.com override
mz -z default set_value global.public.monitoring https://active-monitoring.aporeto.com override
Set options for the passive zone:
mz -z passive set_value global.public.api https://passive-api.aporeto.com global override
mz -z passive set_value global.public.ui https://passive-ui.aporeto.com global override
mz -z passive set_value global.public.monitoring https://passive-monitoring.aporeto.com global override
If needed, install the metrics server in each cluster:
mz k apply -n kube-system -f https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.3.6/components.yaml
At this point, you can customize the settings of one or all zones, add your own certificates for public endpoints, and so on.

Deploy Services

Then for each zone run:
mz -z default doit
mz -z passive doit
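Once both runs complete, you can check the pod status per zone (assuming the mz k wrapper accepts a zone flag the same way the other mz subcommands do):
mz -z default k get pods -n aporeto-svcs
mz -z passive k get pods -n aporeto-svcs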

Deploy the Auto-maintenance Job

The auto-maintenance job makes sure that only one platform is active at a time. Given that you are using a DNS-based load balancer, the job checks which platform the global record points to and keeps only that one active.
Deploy the job on your zones as follows:
. <(mz -e default) auto-maintenance job -a https://global.aporeto.com | k apply -f -
. <(mz -e passive) auto-maintenance job -a https://global.aporeto.com | k apply -f -
At this point, the job will monitor which platform is the active one by querying the provided endpoint and turn the non-active one into maintenance mode.
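A platform in maintenance mode can be recognized from the outside by its 423 status code; for example, using the passive endpoint configured earlier:
curl -s -o /dev/null -w '%{http_code}\n' https://passive-api.aporeto.com
# 423 while the zone is in maintenance mode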
