Rotating Kubernetes credentials
When Lenses runs SQL processors in Kubernetes mode and is deployed outside the cluster, it needs access to a kubeconfig file. Depending on how your cluster handles authentication, the token inside the kubeconfig may expire, in which case the next time Lenses tries to access the cluster it will fail. The kubeconfig file is refreshed every time a user runs a command such as kubectl get namespaces, so one way to keep it valid is a small script that runs such a command at regular intervals. On its side, Lenses uses the lenses.kubernetes.config.reload.interval=30000 configuration option, which specifies how often it re-reads the token from the kubeconfig.
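For reference, a minimal sketch of how the relevant settings could look; the environment-variable name in the second part is an assumption based on the usual Lenses convention of upper-casing the dotted option name:

# lenses.conf
lenses.sql.execution.mode=KUBERNETES
lenses.kubernetes.config.file="/kubeconfig.yaml"
lenses.kubernetes.config.reload.interval=30000

# equivalent docker-compose environment variable (assumed mapping)
LENSES_KUBERNETES_CONFIG_RELOAD_INTERVAL: 30000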
The following sections provide simple reference examples using Google Cloud and AWS Kubernetes together with Docker Compose.
Google Kubernetes + docker-compose
Prerequisites:
- A Google Kubernetes Engine cluster.
- A Google service account in the same Google project with the roles/container.developer role.
- A manually constructed kubeconfig (cluster name, endpoint, user, etc.) that matches the Kubernetes cluster and service account. The official documentation for this is listed in the References; it is usually enough to delete users.user.auth-provider.config from a standard Google kubeconfig. A sketch of these steps follows the list.
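As a sketch, assuming gcloud and yq v4 are installed and using placeholder names (my-project, a lenses-sql service account, my-cluster, europe-west1-b), the prerequisites could be prepared like this:

# Grant the service account the container.developer role (placeholder names)
gcloud projects add-iam-policy-binding my-project \
  --member="serviceAccount:lenses-sql@my-project.iam.gserviceaccount.com" \
  --role="roles/container.developer"

# Create the key file that is mounted as gsa-key.json in the compose example below
gcloud iam service-accounts keys create gsa-key.json \
  --iam-account="lenses-sql@my-project.iam.gserviceaccount.com"

# Write a kubeconfig for the cluster and drop the cached auth-provider config
KUBECONFIG=kubeconfig.yaml gcloud container clusters get-credentials my-cluster --zone europe-west1-b
yq e -i 'del(.users[].user.auth-provider.config)' kubeconfig.yaml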
The Docker Compose example below sets up the necessary Kafka infrastructure containers, Lenses, and an additional ‘sidecar’ container. The sidecar is responsible for “rotating” the kubeconfig file with a simple Bash script, providing Lenses with valid credentials for the Kubernetes integration.
version: '3'
services:
  # Sidecar that keeps the kubeconfig token fresh by calling kubectl in a loop
  kubectl:
    image: google/cloud-sdk
    command: bash -c "
      apt update && apt install -y wget;
      wget https://github.com/mikefarah/yq/releases/download/v4.7.1/yq_linux_amd64 -O /usr/bin/yq && chmod +x /usr/bin/yq;
      while true;
      do
        echo Updating kubeconfig with fresh token;
        kubectl get namespace;
        echo Token:;
        yq e '.users[].user.auth-provider.config.access-token' kubeconfig.yaml;
        sleep 3s;
      done;"
    environment:
      KUBECONFIG: /kubeconfig.yaml
      GOOGLE_APPLICATION_CREDENTIALS: /gsa-key.json
    volumes:
      - ./kubeconfig.yaml:/kubeconfig.yaml
      - ./gsa-key.json:/gsa-key.json

  lenses:
    image: lensesio/lenses:4.3
    depends_on:
      - kafka-1
    ports:
      - 3030:3030
    environment:
      LENSES_SQL_EXECUTION_MODE: KUBERNETES
      LENSES_KUBERNETES_CONFIG_FILE: /kubeconfig.yaml
      LICENSE_URL: <licence URL>
      LENSES_PORT: 3030
      LENSES_KAFKA_BROKERS: PLAINTEXT://kafka-1:9092
      LENSES_ZOOKEEPER_HOSTS: "[{url: \"zookeeper-1:2181\"}]"
    volumes:
      - ./kubeconfig.yaml:/kubeconfig.yaml

  zookeeper-1:
    image: confluentinc/cp-zookeeper:6.0.1
    environment:
      ZOOKEEPER_SERVER_ID: 1
      ZOOKEEPER_CLIENT_PORT: 2181

  kafka-1:
    hostname: kafka-1
    image: confluentinc/cp-kafka:6.0.1
    depends_on:
      - zookeeper-1
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka-1:9092
      KAFKA_ZOOKEEPER_CONNECT: zookeeper-1:2181
      KAFKA_AUTO_CREATE_TOPICS_ENABLE: "false"
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_JMX_PORT: 9581
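With kubeconfig.yaml and gsa-key.json placed next to the compose file, the stack starts as usual:

$ docker-compose up -d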
AWS + docker-compose
To generate a kubeconfig file according to the AWS documentation, you have to use aws-cli:
$ aws eks --region <region-code> update-kubeconfig --name <cluster_name>
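For the sample cluster used in the rest of this section (region eu-central-1, cluster name token-refresh), this becomes:

$ aws eks --region eu-central-1 update-kubeconfig --name token-refresh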
A sample of the generated kubeconfig can be seen below:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data:
    ....
kind: Config
preferences: {}
users:
- name: arn:aws:eks:eu-central-1:303519801974:cluster/token-refresh
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - --region
      - eu-central-1
      - eks
      - get-token
      - --cluster-name
      - token-refresh
      command: aws
The token refresh mechanism relies on a different aws-cli command, aws eks get-token. Each token is valid for approximately 15 minutes:
$ date -u
Wed 28 Apr 2021 09:40:00 AM UTC

$ aws eks --region eu-central-1 get-token --cluster-name token-refresh | jq .
{
  "kind": "ExecCredential",
  "apiVersion": "client.authentication.k8s.io/v1alpha1",
  "spec": {},
  "status": {
    "expirationTimestamp": "2021-04-28T09:54:02Z",
    "token": <long token string>
  }
}
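Because the kubeconfig uses the exec credential plugin, any client that reads it, kubectl or Lenses alike, runs aws eks get-token transparently whenever a fresh token is needed. A quick way to confirm this works is:

$ kubectl --kubeconfig ~/.kube/config get namespaces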
Adding the AWS cli tool to the Lenses image will help us later in our deployment:

FROM lensesio/lenses:4.3
RUN apt-get update && apt-get -y install unzip
RUN curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip" && unzip awscliv2.zip && ./aws/install
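The compose file below references this image as lenses-aws, so build it with that tag (the tag name is only a convention used in this example):

$ docker build -t lenses-aws .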
With the following docker-compose.yml, Lenses sources the kubeconfig file, and every time it needs access to AWS the aws-cli tool already included in the image is invoked and the token is refreshed automatically.
lenses:
  image: lenses-aws
  hostname: lenses
  container_name: lenses
  depends_on:
    - zookeeper
    - kafka1
    - schema-registry
  ....
    LENSES_SQL_EXECUTION_MODE: KUBERNETES
    LENSES_KUBERNETES_CONFIG_FILE: /tmp/kube/config
    AWS_ACCESS_KEY_ID: your-key
    AWS_SECRET_ACCESS_KEY: your-secret
  volumes:
    - ~/.kube/config:/tmp/kube/config
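Since the exec plugin shells out to aws inside the Lenses container, the credentials passed via AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY (or any other credential source the AWS CLI supports) must remain valid; with that in place no separate sidecar container is needed for this setup.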