Install Satellite
1. Prerequisites
- You have an account on Levo.ai
- The compatibility script (from step 1) indicates that the Linux host you want to instrument with the Sensor is compatible.
2. Planning
a. Placement of the Satellite
There are two options for Satellite placement. Choose whichever works best for you.
Placement Type | Pros | Cons |
---|---|---|
Same Host/Cluster as Sensor | No separate host/cluster to provision or manage. | Satellite consumes resources on the host/cluster where your application workloads are located. This might lead to resource contention based on traffic load. |
Dedicated Host/Cluster | Avoids resource contention with your application workloads. | Requires a dedicated host/cluster. |
b. Copy Authorization Key from Levo.ai
The Satellite uses an authorization key to access Levo.ai. Follow instructions below to copy the key.
- Log in to Levo.ai.
- Click on your user profile.
- Click on User Settings.
- Click on Keys on the left navigation panel.
- Click on Get Satellite Authorization Key.
- Copy your authorization key. This key is required in a subsequent step below.
3. Follow instructions for your platform
- Install on Kubernetes
- Install on Linux host via Docker Compose
- Install in AWS using Levo Satellite AMI
Install on Kubernetes
Prerequisites
- Kubernetes version >= v1.18.0
- Helm v3 installed and working.
- The Kubernetes cluster API endpoint should be reachable from the machine where you are running Helm.
- kubectl access to the cluster, with cluster-admin permissions.
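You can quickly verify these prerequisites before proceeding. The commands below are a minimal, Levo-agnostic sketch using standard kubectl and Helm invocations:
# Confirm Helm v3 and cluster reachability
helm version --short
kubectl version
# Confirm the current context has cluster-admin level access
kubectl auth can-i '*' '*' --all-namespaces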
1. Set up environment variables
export LEVOAI_AUTH_KEY=<'Authorization Key' copied earlier>
2. Add the levoai Helm repo
helm repo add levoai https://charts.levo.ai && helm repo update
3. Create the levoai namespace & install the Satellite
If locating Satellite on the same cluster alongside Sensor
helm upgrade --install -n levoai --create-namespace \
--set global.levoai_config_override.onprem-api.refresh-token=$LEVOAI_AUTH_KEY \
levoai-satellite levoai/levoai-satellite --force
If locating Satellite on a dedicated cluster
You will need to expose the Satellite via either a LoadBalancer or a NodePort, such that it is reachable by Sensors running in other clusters. Please modify the below command appropriately.
# Please modify this command template and choose either 'LoadBalancer' or 'NodePort', prior to execution
helm upgrade --install -n levoai --create-namespace \
--set global.levoai_config_override.onprem-api.refresh-token=$LEVOAI_AUTH_KEY \
--set levoai-collector.service.type=<LoadBalancer | NodePort> \
levoai-satellite levoai/levoai-satellite --force
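For example, a filled-in version of the template above that exposes the Collector via a LoadBalancer would look like this (substitute NodePort if that suits your environment better):
# Example: expose the Collector service as a LoadBalancer
helm upgrade --install -n levoai --create-namespace \
  --set global.levoai_config_override.onprem-api.refresh-token=$LEVOAI_AUTH_KEY \
  --set levoai-collector.service.type=LoadBalancer \
  levoai-satellite levoai/levoai-satellite --force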
4. Verify connectivity with Levo.ai
a. Check Satellite health
The Satellite comprises four subcomponents: 1) levoai-collector, 2) levoai-rabbitmq, 3) levoai-satellite, and 4) levoai-tagger.
Wait a couple of minutes after the install, and check the health of the components by executing the following:
kubectl -n levoai get pods
If the Satellite is healthy, you should see output similar to below. Don't worry about the restarts of the levoai-tagger pod.
NAME READY STATUS RESTARTS AGE
levoai-collector-848fb4fff9-gv8g9 1/1 Running 0 4m8s
levoai-rabbitmq-0 1/1 Running 0 4m8s
levoai-satellite-54956ccb89-5s4h2 1/1 Running 0 4m8s
levoai-tagger-799db4d9cc-89jm8 1/1 Running 3 (4m8s ago) 4m8s
b. Check connectivity
Execute the following to check for connectivity health:
# Please specify the actual pod name for levoai-tagger below
kubectl -n levoai logs <levoai-tagger pod name> | grep "Ready to process; waiting for messages."
If connectivity is healthy, you will see output similar to below.
{"level": "info", "time": "2022-06-07 08:07:22,439", "line": "rabbitmq_client.py:155", "version": "fc628b50354bf94e544eef46751d44945a2c55bc", "module": "/opt/levoai/e7s/src/python/levoai_e7s/satellite/rabbitmq_client.py", "message": "Ready to process; waiting for messages."}
Please contact support@levo.ai if you notice health/connectivity related errors.
5. Note down Host:Port information
If locating Satellite on the same cluster alongside Sensor
The Collector can now be reached by the Sensors running in the same cluster at levoai-collector.levoai:4317. Please note this, as it will be required to configure the Sensor.
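If you want to confirm that this address resolves from inside the cluster, a quick check like the one below should work (a sketch using a throwaway busybox pod; adjust the namespace and image to your environment):
# Resolve the Collector service DNS name from a temporary pod
kubectl run -n levoai dns-check --rm -it --restart=Never --image=busybox -- nslookup levoai-collector.levoai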
If locating Satellite on a dedicated cluster
Run the below command and note the external address/port of the Collector service. This will be required to configure the Sensor.
kubectl get service levoai-collector -n levoai
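If you chose the LoadBalancer service type, you can also extract the external address directly (a sketch; on some cloud providers the field is .ip rather than .hostname):
# Print the external hostname of the Collector service
kubectl -n levoai get service levoai-collector -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'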
Please proceed to install the Sensor.
Install on Linux host via Docker Compose
Prerequisites
- Docker Engine version 18.03.0 and above
- Admin privileges on the Docker host
- 'docker-compose' installed, if 'docker compose' is not supported on your OS
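You can verify these prerequisites with standard Docker commands (a minimal sketch, not Levo-specific):
# Confirm the Docker Engine version and Compose availability
docker --version
docker compose version || docker-compose --version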
1. Download Docker Compose file
Levo provides pre-built Docker images for the Satellite that can be installed via Docker Compose.
2. Install Satellite
Execute the following from the directory where the Docker Compose file was downloaded.
(export LEVOAI_AUTH_KEY=<'Authorization Key' copied earlier>; docker compose pull && docker compose up -d)
If docker compose ... complains with "docker: 'compose' is not a docker command.", you can try docker-compose instead.
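In that case, the equivalent invocation with the standalone binary would look like this (a sketch, assuming docker-compose is on your PATH):
(export LEVOAI_AUTH_KEY=<'Authorization Key' copied earlier>; docker-compose pull && docker-compose up -d)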
3. Verify connectivity with Levo.ai
a. Check Satellite health
The Satellite comprises four subcomponents: 1) levoai-collector, 2) levoai-rabbitmq, 3) levoai-satellite, and 4) levoai-tagger.
Wait a couple of minutes after the install, and check the health of the components by executing the following:
docker ps -f name=levoai
If the Satellite is healthy, you should see output similar to below.
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2b32cd6b9ced levoai/collector:stable "/usr/local/bin/levo…" 10 seconds ago Up 8 seconds 0.0.0.0:4317->4317/tcp, 9411/tcp levoai-collector
06f3c597cad0 levoai/satellite:stable "gunicorn --capture-…" 10 seconds ago Up 9 seconds 0.0.0.0:9999->9999/tcp levoai-satellite
89026034c567 levoai/satellite:stable "python -OO /opt/lev…" 10 seconds ago Up Less than a second levoai-tagger
f74524d02fbd bitnami/rabbitmq:3.10 "/opt/bitnami/script…" 10 seconds ago Up 9 seconds 5551-5552/tcp, 0.0.0.0:4369->4369/tcp, 5671/tcp, 0.0.0.0:5672->5672/tcp, 0.0.0.0:15672->15672/tcp, 0.0.0.0:25672->25672/tcp, 15671/tcp levoai-rabbitmq
b. Check connectivity
Execute the following to check for connectivity health:
docker logs levoai-tagger | grep "Ready to process; waiting for messages."
If connectivity is healthy, you will see output similar to below.
{"level": "info", "time": "2022-06-07 08:07:22,439", "line": "rabbitmq_client.py:155", "version": "fc628b50354bf94e544eef46751d44945a2c55bc", "module": "/opt/levoai/e7s/src/python/levoai_e7s/satellite/rabbitmq_client.py", "message": "Ready to process; waiting for messages."}
4. Note down Host:Port information
The Collector now runs in a container, and is reachable on the host via port 4317 (on all the host's network interfaces).
Please note down either the host's IP address or domain name. The Sensor will be configured to communicate with the Collector at <Host's IP|Domain-Name>:4317.
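To double-check the port mapping on the host, you can inspect the Collector container with standard tooling (a sketch; ss may need to be installed on minimal hosts):
# Show the published ports of the Collector container
docker port levoai-collector
# Or confirm the host is listening on 4317
ss -ltn | grep 4317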
Please proceed to install the Sensor.
Install in AWS using Levo Satellite AMI
1. Open the EC2 Launch Wizard and select the Levo Satellite AMI
Levo provides pre-built AMIs for the Satellite. You can launch an EC2 instance using the AMI in the AWS region where you wish to install the Satellite.
2. EC2 Configuration
Pick the following appropriately for your instance. Make sure that this instance is reachable from the eBPF sensors running in your VPC.
- Instance Name & tags
- Key pair
- The security group
  - Make sure to add rules to allow HTTPS traffic.
  - Allow UDP port 4789 if you are using traffic mirroring.
  - Example AWS CLI commands for these rules are sketched after this list.
- Disk storage. Choose at least 40 GB.
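If you prefer to add the security group rules from the command line, the AWS CLI calls would look roughly like this (a sketch; the security group ID and CIDR below are placeholders for your own values):
# Allow HTTPS traffic to the Satellite (placeholder group ID and CIDR)
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 443 --cidr 10.0.0.0/16
# Allow VXLAN (UDP 4789) if you are using traffic mirroring
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol udp --port 4789 --cidr 10.0.0.0/16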
3. Add User Metadata to the EC2 instance
Under Advanced details > User Data, add the following (set LEVOAI_AUTH_KEY to the 'Authorization Key' copied earlier):
#!/bin/bash
echo "LEVOAI_AUTH_KEY=xxx" > /opt/levoai/.levoenv
sudo /opt/levoai/levo_satellite.sh start >> satellite-start.log 2>&1
# Uncomment the following line to enable the traffic mirroring listener
# sudo /opt/levoai/levo_traffic_listener.sh start >> traffic-listener-start.log 2>&1
Traffic Mirroring
To use the traffic mirroring setup, uncomment the last line of the user data script. Check Other Installation Options for configuring traffic mirroring using the Levo CLI.
4. Launch the EC2 instance
Satellite services should start automatically once the EC2 instance is initialized.
5. Verify the Satellite services
To check logs, debug and manage the Satellite services, you can SSH into the VM and use the following commands.
- Stop the Satellite:
sudo /opt/levoai/levo_satellite.sh stop
- Start the Satellite:
sudo /opt/levoai/levo_satellite.sh start
- Upgrade the Satellite:
sudo /opt/levoai/levo_satellite.sh upgrade
- Check the services:
sudo docker ps
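To check logs for a specific component, standard docker logs commands work; for example (a sketch):
# Follow the tagger logs (substitute levoai-satellite, levoai-collector, etc. as needed)
sudo docker logs -f levoai-tagger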
6. Verify connectivity with Levo.ai
a. Check Satellite health
The Satellite comprises four subcomponents: 1) levoai-collector, 2) levoai-rabbitmq, 3) levoai-satellite, and 4) levoai-tagger.
Wait a couple of minutes after the install, and check the health of the components by executing the following:
sudo docker ps -f name=levoai
If the Satellite is healthy, you should see output similar to below.
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2b32cd6b9ced levoai/collector:stable "/usr/local/bin/levo…" 10 seconds ago Up 8 seconds 0.0.0.0:4317->4317/tcp, 9411/tcp levoai-collector
06f3c597cad0 levoai/satellite:stable "gunicorn --capture-…" 10 seconds ago Up 9 seconds 0.0.0.0:9999->9999/tcp levoai-satellite
89026034c567 levoai/satellite:stable "python -OO /opt/lev…" 10 seconds ago Up Less than a second levoai-tagger
f74524d02fbd bitnami/rabbitmq:3.10 "/opt/bitnami/script…" 10 seconds ago Up 9 seconds 5551-5552/tcp, 0.0.0.0:4369->4369/tcp, 5671/tcp, 0.0.0.0:5672->5672/tcp, 0.0.0.0:15672->15672/tcp, 0.0.0.0:25672->25672/tcp, 15671/tcp levoai-rabbitmq
b. Check connectivity
Execute the following to check for connectivity health:
sudo docker logs levoai-tagger 2>&1 | grep "Ready to process; waiting for messages."
If connectivity is healthy, you will see output similar to below.
{"level": "info", "time": "2022-06-07 08:07:22,439", "line": "rabbitmq_client.py:155", "version": "fc628b50354bf94e544eef46751d44945a2c55bc", "module": "/opt/levoai/e7s/src/python/levoai_e7s/satellite/rabbitmq_client.py", "message": "Ready to process; waiting for messages."}
7. Note down Host:Port information
The Collector now runs in a container, and is reachable on the host via port 4317 (on all the host's network interfaces).
Please note down either the host's IP address or domain name. The Sensor will be configured to communicate with the Collector at <Host's IP|Domain-Name>:4317.
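Before configuring the Sensor, you may want to confirm the Collector port is reachable from the host where the Sensor will run (a sketch, assuming netcat is installed; substitute your Satellite's address):
# Test TCP reachability of the Collector from a Sensor host
nc -vz <Host's IP|Domain-Name> 4317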
Please proceed to install the Sensor.