Set up a Ponos agent on a Kubernetes cluster
Requirements
This documentation assumes you have:
- a running Kubernetes cluster
- local access to that cluster using kubectl and a local kube configuration
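You can confirm both requirements from your workstation before going further:

```shell
# Sanity check: kubectl is configured and can reach the cluster
kubectl cluster-info
# List the nodes to confirm the cluster is responding
kubectl get nodes
```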
Resources
Role
The Ponos agent needs to run with specific authorizations to manage batch Jobs, view their logs and check on persistent volume claims.
You can either use a privileged Kubernetes user to run the agent, or use the ClusterRole described below.
```yaml
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: ponos-rights
rules:
  # Manage jobs
  - verbs:
      - get
      - list
      - watch
      - create
    apiGroups:
      - batch
    resources:
      - jobs
      - jobs/status
  - verbs:
      - list
    apiGroups: ['']
    resources:
      # List PVC to validate the configuration
      - persistentvolumeclaims
      # List Pods to read their logs
      - pods
  # Read logs of a specific pod
  - verbs:
      - get
    apiGroups: ['']
    resources:
      - pods/log
---
kind: ServiceAccount
apiVersion: v1
metadata:
  name: ponos-agent
  namespace: default
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: ponos-agent
subjects:
  - kind: ServiceAccount
    name: ponos-agent
    namespace: default
roleRef:
  kind: ClusterRole
  name: ponos-rights
  apiGroup: rbac.authorization.k8s.io
```
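Assuming the three manifests above are saved in a single file (the rbac.yml filename below is only an example), you can apply them and then check the resulting permissions with kubectl:

```shell
# Apply the ClusterRole, ServiceAccount and ClusterRoleBinding
kubectl apply -f rbac.yml
# Verify that the service account is allowed to create batch jobs
kubectl auth can-i create jobs.batch \
  --as=system:serviceaccount:default:ponos-agent
```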
Volume
A persistent volume claim with access mode ReadWriteMany must be created before running the agent.
The name of this claim must also be set in the configuration described below (as kubernetes.pvc_name).
Ideally you’ll use a storage class compatible with network usage (such as object storage or NFS) so that all pods in the cluster can access the same volume.
```yaml
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ponos-data
  namespace: default
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: local-path
  resources:
    requests:
      storage: 100Gi
```
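Once the manifest above is applied, you can check the state of the claim (the exact status depends on your storage class; some provisioners only bind on first use):

```shell
# The claim should eventually report a Bound status
kubectl get pvc ponos-data -n default
```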
Configuration
The setting kubernetes.pvc_name must be the same name as the persistent volume claim described above.
Note: the agent does not yet support Kubernetes secrets, so you’ll need to expose the farm seed in the configuration for now.
```yaml
kind: ConfigMap
apiVersion: v1
metadata:
  name: ponos-config
data:
  agent.yml: |
    url: https://<ARKINDEX_URL>
    farm_id: <PONOS_FARM_ID>
    seed: <PONOS_FARM_SEED>
    data_dir: /data
    kubernetes:
      pvc_name: ponos-data
```
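As an alternative to writing the manifest by hand, the same ConfigMap can be generated directly from a local agent.yml file (the local path is hypothetical; adjust it to where you keep the configuration):

```shell
# Create the ConfigMap from a local configuration file;
# the file name becomes the key inside the ConfigMap
kubectl create configmap ponos-config --from-file=agent.yml
```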
Deployment
The deployment ties all the previous elements together and runs the ponos-agent in a single container. It’s important that only a single agent runs at any given time to avoid conflicts.
Both the configuration and the data PVC are mounted as volumes. A few environment variables must be set with the downward API:
- the spec.nodeName as the PONOS_NODE_NAME environment variable,
- the metadata.uid as the PONOS_POD_ID environment variable.
The latest stable version (replaced by X.Y.Z below) will be provided by Teklia.
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ponos-deployment
  labels:
    type: deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      type: pods
  template:
    metadata:
      name: ponos-agent
      labels:
        type: pods
        name: ponos-agent
    spec:
      serviceAccountName: ponos-agent
      volumes:
        - name: config-volume
          configMap:
            name: ponos-config
        - name: data-volume
          persistentVolumeClaim:
            claimName: ponos-data
      containers:
        - name: ponos-agent
          image: registry.gitlab.teklia.com/arkindex/ponos-agent:X.Y.Z
          volumeMounts:
            - name: config-volume
              mountPath: /etc/ponos/
            - name: data-volume
              mountPath: /data
          env:
            # Used to identify on the Arkindex instance
            - name: PONOS_NODE_NAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            # Used to identify the current pod in singleton detection
            - name: PONOS_POD_ID
              valueFrom:
                fieldRef:
                  fieldPath: metadata.uid
```
As the pods are labeled with a name (ponos-agent), you can track the deployment state and its logs with the following commands:
```shell
kubectl describe pods -l name=ponos-agent
kubectl logs -l name=ponos-agent -f
```
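You can also wait for the deployment to finish rolling out; the command below blocks until the agent pod is ready:

```shell
# Blocks until the single replica is available, or fails on timeout
kubectl rollout status deployment/ponos-deployment
```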
K3S support
The git repository of the project contains all the necessary files to quickly start an agent on a K3S cluster.
```shell
kubectl apply -f k3s/cluster_role.yml -f k3s/deployment.yml -f k3s/volume.yml
```
Set up a private SSL CA certificate for Ponos tasks
If your Kubernetes cluster uses a private SSL certificate authority to host your services (including the Arkindex instance and the S3 provider), the Kubernetes jobs created by the Ponos agent will also need that CA certificate to validate SSL connections.
You can use a combination of kubernetes.extra_volumes and kubernetes.extra_environment to mount a persistent volume claim containing a CA cert bundle, then use it in all new jobs.
It’s your responsibility to create and manage the persistent volume containing a valid CA cert bundle matching your infrastructure specification. We’ll assume it’s available as ca-bundle.crt at the root of a persistent volume claim named my-ca-bundle.
You can then configure the Ponos agent as follows:
```yaml
kubernetes:
  # ... Other configuration
  extra_volumes:
    - name: company-ca
      pvc_name: my-ca-bundle
      mount_path: /ssl
      read_only: true
  extra_environment:
    AWS_CA_BUNDLE: /ssl/ca-bundle.crt
    REQUESTS_CA_BUNDLE: /ssl/ca-bundle.crt
```
Both environment variables must point to the mounted ca-bundle.crt file inside the containers; they configure the Python requests library and the Amazon Boto library (used for S3) respectively.
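To check that the bundle is usable from within a task, you can run a command like the following inside a task container (the URL placeholder matches the agent configuration above):

```shell
# Should succeed without certificate errors when the CA bundle
# mounted at /ssl matches the authority signing your services
curl --cacert /ssl/ca-bundle.crt https://<ARKINDEX_URL>
```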