Satellite on Kubernetes
Setup
Prerequisites
Before installing the Levo Satellite on Kubernetes, ensure you have:
- Kubernetes version >= v1.18.0
- Helm v3 installed and configured
- The Kubernetes cluster API endpoint is reachable from the machine running Helm
- kubectl access to the cluster with cluster-admin permissions
- Minimum system requirements:
  - CPU: At least 4 cores
  - RAM: At least 8 GB
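A quick way to confirm the version prerequisites before proceeding (standard kubectl and Helm commands):
# Client and cluster versions; the server version should be v1.18.0 or newer
kubectl version
# Helm must report a v3.x client
helm version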
1. Create a Kubernetes Secret with your Levo Authorization Key. The levoai namespace must exist before the secret can be created in it; if it does not yet, create it with kubectl create namespace levoai.
kubectl create secret generic levoai-satellite \
-n levoai \
--from-literal=refresh-token="<Authorization Key>"
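If you want to confirm the secret was created in the expected namespace, a quick check:
kubectl -n levoai get secret levoai-satellite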
2. Create RabbitMQ Authentication Secret
RABBITMQ_ERLANG_COOKIE=$(openssl rand -base64 24)
kubectl create secret generic levoai-rabbitmq-auth \
-n levoai \
--from-literal=rabbitmq-username="<username>" \
--from-literal=rabbitmq-password="<password>" \
--from-literal=rabbitmq-erlang-cookie=$RABBITMQ_ERLANG_COOKIE
3. Add the Levo Helm Repository
helm repo add levoai https://charts.levo.ai && helm repo update levoai
4. Create the levoai Namespace and Install the Satellite
If Locating the Satellite on the Same Cluster Alongside the Sensor
helm upgrade --install -n levoai --create-namespace \
levoai-satellite levoai/levoai-satellite
You may need to set a different Levo base URL for the Satellite if your SaaS/dashboard account was created in the India region.
For example, if you access the Levo dashboard at app.india-1.levo.ai, the installation command is:
helm upgrade --install -n levoai --create-namespace \
--set global.levoai_config_override.onprem-api.url="https://api.india-1.levo.ai" \
levoai-satellite levoai/levoai-satellite
If Locating the Satellite on a Dedicated Cluster
You will need to expose the Satellite via either a LoadBalancer or NodePort so that it is reachable by Sensors running in other clusters. Modify the command below appropriately:
# Please modify this command template and choose either 'LoadBalancer' or 'NodePort' prior to execution.
# For accounts in the India region, also append: --set global.levoai_config_override.onprem-api.url="https://api.india-1.levo.ai"
helm upgrade --install -n levoai --create-namespace \
--set haproxy.service.type=<LoadBalancer | NodePort> \
levoai-satellite levoai/levoai-satellite
If RabbitMQ Persistence Needs to Be Disabled
Set the rabbitmq.persistence.enabled property to false:
# For accounts in the India region, also append: --set global.levoai_config_override.onprem-api.url="https://api.india-1.levo.ai"
helm upgrade --install -n levoai --create-namespace \
--set rabbitmq.persistence.enabled=false \
levoai-satellite levoai/levoai-satellite
Kubernetes-Related Customizations
Add Tolerations, Affinity and Node Selectors
Tolerations, Affinity and Node Selectors for the Satellite pods may be specified via helm values. For example:
tolerations:
  - key: node-role.kubernetes.io/control-plane
    operator: Exists
    effect: NoSchedule
  - key: "devops"
    operator: "Equal"
    value: "dedicated"
    effect: "NoSchedule"
nodeSelector:
  kubernetes.io/hostname: "mavros"
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: topology.kubernetes.io/zone
              operator: In
              values:
                - antarctica-east1
                - antarctica-west1
    preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 1
        preference:
          matchExpressions:
            - key: another-node-label-key
              operator: In
              values:
                - another-node-label-value
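These scheduling overrides are passed like any other Helm values. A minimal sketch, assuming the snippet above is saved as scheduling-values.yml (the file name is illustrative):
# Apply the scheduling overrides while installing or upgrading the Satellite
helm upgrade --install -n levoai --create-namespace \
-f ./scheduling-values.yml \
levoai-satellite levoai/levoai-satellite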
5. Verify connectivity with Levo.ai
a. Check Satellite Health
The Satellite consists of five components: levoai-collector, levoai-ion, levoai-rabbitmq, levoai-satellite, and levoai-tagger.
Wait a couple of minutes after the installation, and check the health of the components by executing the following:
kubectl -n levoai get pods
If the Satellite is healthy, you should see output similar to the following:
NAME READY STATUS RESTARTS AGE
levoai-collector-5b54df8dd6-55hq9 1/1 Running 0 5m0s
levoai-ion-669c9c4fbc-vsmmr 1/1 Running 0 5m0s
levoai-rabbitmq-0 1/1 Running 0 5m0s
levoai-satellite-8688b67c65-xppbn 1/1 Running 0 5m0s
levoai-tagger-7bbf565b47-b572w 1/1 Running 0 5m0s
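Instead of polling manually, you can block until every pod in the namespace reports Ready (standard kubectl; the 5-minute timeout is an arbitrary choice):
kubectl -n levoai wait --for=condition=Ready pods --all --timeout=300s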
b. Check Connectivity
Verify connectivity to Levo.ai by running:
# Please specify the actual pod name for levoai-tagger below
kubectl -n levoai logs <levoai-tagger pod name> | grep "Ready to process; waiting for messages."
If connectivity is healthy, you will see output similar to the following:
{"level": "info", "time": "2022-06-07 08:07:22,439", "line": "rabbitmq_client.py:155", "version": "fc628b50354bf94e544eef46751d44945a2c55bc", "module": "/opt/levoai/e7s/src/python/levoai_e7s/satellite/rabbitmq_client.py", "message": "Ready to process; waiting for messages."}
Please contact support@levo.ai if you notice health/connectivity related errors.
6. Note the Host and Port Information
If Locating the Satellite on the Same Cluster Alongside the Sensor
The Satellite can now be reached by Sensors running in the same cluster at levoai-haproxy.levoai:80. Record this information—you'll need it to configure the Sensor.
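To sanity-check that address from inside the cluster, you can run a throwaway curl pod against it; any HTTP status code in the output (even 404) indicates the Satellite's proxy is reachable. The curlimages/curl image is just one convenient choice:
# Prints the HTTP status code returned by the Satellite proxy, then removes the pod
kubectl -n levoai run levo-conn-test --rm -i --restart=Never \
--image=curlimages/curl -- \
curl -s -o /dev/null -w "%{http_code}\n" http://levoai-haproxy.levoai:80/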
If Locating the Satellite on a Dedicated Cluster
Run the command below and note the external address/port of the Collector service. You'll need this to configure the Sensor:
kubectl get service levoai-collector -n levoai
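If the Collector is exposed via a LoadBalancer, the external address and port can also be extracted with jsonpath; for a NodePort service, use a node's address plus the nodePort value instead. A sketch:
# External hostname or IP assigned to the Collector (LoadBalancer only)
kubectl -n levoai get service levoai-collector \
-o jsonpath='{.status.loadBalancer.ingress[0].hostname}{.status.loadBalancer.ingress[0].ip}{"\n"}'
# Port exposed by the Collector service
kubectl -n levoai get service levoai-collector -o jsonpath='{.spec.ports[0].port}{"\n"}'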
7. Optionally, Enable Authentication for Satellite APIs
Add the configuration below to a values.yml file to enable authentication for Satellite APIs using a unique key.
Refer to Accessing Organization IDs for instructions on fetching your Organization ID:
global:
  levoai_config_override:
    onprem-api:
      org-id: <your-org-id>
haproxy:
  satelliteAuthnEnabled: true
Install the Satellite using this values.yml file:
helm upgrade --install -n levoai --create-namespace \
-f ./values.yml \
levoai-satellite levoai/levoai-satellite
Alternatively, you can pass the org-id and satelliteAuthnEnabled values as arguments to the helm command:
helm upgrade --install -n levoai --create-namespace \
--set global.levoai_config_override.onprem-api.org-id=<your-org-id> \
--set haproxy.satelliteAuthnEnabled=true \
levoai-satellite levoai/levoai-satellite
8. Optionally, Access the Satellite Through a CNAME and HTTPS
Add the configuration below to a values.yml file to add an ingress route for Satellite APIs, allowing access through a CNAME and HTTPS:
haproxy:
  ingress:
    enabled: true
    hostname: <Your CNAME>
    ingressClassName: haproxy
    pathType: Prefix
    extraPaths:
      - path: /*
        pathType: Prefix
        backend:
          service:
            name: levoai-haproxy
            port:
              number: 80
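After installing with these values, you can confirm the ingress object exists and note the address your CNAME record should point at (the exact columns depend on your ingress controller):
kubectl -n levoai get ingress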
Using a Private Docker Registry for Kubernetes Installations (Optional)
If you want to use a private Docker registry for the Satellite installation, refer to Using a Private Docker Registry for Kubernetes Installations.
Using a Custom Ingress Controller (Optional)
If you want to use a custom ingress controller, contact support@levo.ai for assistance.
Next Steps: Install Traffic Capture Sensors
Proceed to install Traffic Capture Sensors to deploy sensors in your environment.
Satellite Lifecycle Management
Upgrade Satellite
- Optionally, update the Authorization Key secret (if expired or changed):
kubectl create secret generic levoai-satellite \
-n levoai \
--from-literal=refresh-token="<Authorization Key>" \
--dry-run=client -o yaml | kubectl apply -f -
Alternatively, you can update the secret in place using the following command:
kubectl -n levoai patch secret levoai-satellite \
--type='json' \
-p='[{"op": "replace", "path": "/data/refresh-token", "value":"'$(echo -n "<Authorization Key>" | base64)'"}]'
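To verify that the new key was stored, you can decode it back out of the secret; note this prints the token to your terminal, so treat the output as sensitive:
kubectl -n levoai get secret levoai-satellite \
-o jsonpath='{.data.refresh-token}' | base64 --decode; echo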
- Optionally, update the RabbitMQ auth secret (if changed):
RABBITMQ_ERLANG_COOKIE=$(openssl rand -base64 24)
kubectl create secret generic levoai-rabbitmq-auth \
-n levoai \
--from-literal=rabbitmq-username="<username>" \
--from-literal=rabbitmq-password="<password>" \
--from-literal=rabbitmq-erlang-cookie=$RABBITMQ_ERLANG_COOKIE \
-o yaml --dry-run=client | kubectl replace -f -
Alternatively, you can update only specific keys of the secret using the following command:
kubectl patch secret levoai-rabbitmq-auth \
-n levoai \
--type='merge' \
-p "{\"data\": { \
\"rabbitmq-password\": \"$(echo -n '<password>' | base64)\", \
\"rabbitmq-erlang-cookie\": \"$(echo -n \"$RABBITMQ_ERLANG_COOKIE\" | base64)\" \
}}"
- Update and upgrade the Helm installation:
# Update helm repo and upgrade installation
helm repo update levoai
helm upgrade -n levoai \
levoai-satellite levoai/levoai-satellite
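You can confirm the new release revision and chart version afterwards with standard Helm commands:
helm list -n levoai
helm status levoai-satellite -n levoai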
Uninstall Satellite
helm uninstall levoai-satellite -n levoai
After running the above command, wait until all Satellite pods have been terminated, and then run the following command to delete the rabbitmq PersistentVolumeClaim. Deleting the PVC also deletes the corresponding PersistentVolume.
kubectl delete pvc data-levoai-rabbitmq-0 -n levoai
If the kubectl delete pvc command gets stuck, run the following command to remove the PVC's finalizers, then retry the deletion:
kubectl patch pvc data-levoai-rabbitmq-0 -p '{"metadata":{"finalizers":null}}' -n levoai
Change the Authorization Key used to communicate with Levo.ai
- Uninstall the Satellite.
- Reinstall the Satellite with the new Authorization Key (a condensed sketch of the sequence follows).
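A condensed sketch of that sequence, reusing the commands shown earlier on this page:
# 1. Remove the existing release
helm uninstall levoai-satellite -n levoai

# 2. Recreate the secret with the new Authorization Key
kubectl create secret generic levoai-satellite \
-n levoai \
--from-literal=refresh-token="<New Authorization Key>" \
--dry-run=client -o yaml | kubectl apply -f -

# 3. Reinstall the Satellite
helm upgrade --install -n levoai --create-namespace \
levoai-satellite levoai/levoai-satellite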
Change the Minimum Number of URLs for API Endpoint Detection
To detect an API endpoint, the Satellite waits for at least 10 URLs to match that endpoint's URL pattern. This threshold may cause delays in detecting API endpoints when there is insufficient load.
To adjust this threshold:
- Navigate to the Levo dashboard
- Click Settings in the left navigation bar
- Under the API Discovery tab, update Min. URLs per Pattern to your desired number
- Wait at least 5 minutes for the Satellite to apply the change
List Satellite Pods
kubectl -n levoai get pods | grep -E '^levoai-collector|^levoai-ion|^levoai-rabbitmq|^levoai-satellite|^levoai-tagger'
Tail Logs of a Specific Pod
kubectl -n levoai logs -f <pod name>
Troubleshooting
Tagger Errors
The Tagger component sends API endpoint metadata to Levo.ai. API Observability will not function if the Tagger is in an errored state.
The sample output below from kubectl get pods shows the Tagger in an errored state:
NAME READY STATUS RESTARTS AGE
levoai-collector-848fb4fff9-gv8g9 1/1 Running 0 64s
levoai-rabbitmq-0 0/1 Running 0 64s
levoai-satellite-54956ccb89-5s4h2 1/1 Running 0 64s
levoai-tagger-799db4d9cc-89jm8 0/1 Error 1 (14s ago) 64s
Below are common error scenarios:
Authentication Errors
The Tagger component authenticates with Levo.ai using the Authorization Key. If the Tagger is unable to authenticate, it will error out.
Check for authentication errors in the Tagger logs:
kubectl -n levoai logs <levoai-tagger-pod-id> | grep "Exception: Failed to refresh access token"
If there are exception messages, you have an incorrect or stale Authorization Key. Please contact support@levo.ai for further assistance.
Connectivity Errors
Check for connectivity errors in the Tagger logs:
kubectl -n levoai logs <levoai-tagger-pod-id> | grep "ConnectionRefusedError: [Errno 111] Connection refused"
If there are exception messages, the Tagger is unable to connect to dependent services. It typically establishes a connection after 3-4 retries. Please contact support@levo.ai if the errors persist.
Enable Debug Logging
Add the following helm option to enable debug logging for the Satellite components.
helm upgrade --install -n levoai --create-namespace \
--set global.levoai.log_level="DEBUG" \
levoai-satellite levoai/levoai-satellite
This will enable detailed debugging logs for all satellite components, including Tagger, Collector, Ion, and Satellite.
The available log levels are INFO, DEBUG, WARNING, and ERROR; the default is INFO.
Share Satellite logs with support@levo.ai
Make the get_levoai_satellite_logs.sh script executable and run it; it collects the Satellite logs into /tmp/levoai_satellite_logs_%date-time%.tar.gz, which you can then share with support@levo.ai:
chmod +x get_levoai_satellite_logs.sh
./get_levoai_satellite_logs.sh
Need Help?
For further assistance, please reach out to support@levo.ai.