Deploying Vault with etcd backend in Kubernetes

Jack Lei, Jun 29

I needed a secrets management tool that is highly available, works on-premise, is cloud-native, runs on low resources, and is easily orchestrated to make my life easier. Is that so much to ask for?

Kubernetes is easily the most popular container orchestration platform, proven in production both in the public cloud and on-premise.
HashiCorp Vault has little real competition as a secrets management tool, especially considering we don’t want to lock ourselves in to a cloud vendor.
HashiCorp recommends using Consul’s key-value store as Vault’s storage backend.
Unfortunately, Consul bundles all of its features together.
The official Vault Reference Architecture recommends an m5.large on AWS or an n1-standard-4 on GCE.
No thank you, just give me the kv store.
I’ll go with etcd, a simple and reliable key-value store.
I want this build to be as close as possible to what I would deploy to production.
That means I need to think about the lifecycle of the tools.
That’s where Operators come in.
Operators oversee the installation, updates, and lifecycle management of the applications (and their associated services) they run across the cluster.
That is awesome.
We will be using CoreOS etcd Operator and BanzaiCloud Vault Operator.
Prerequisites

A few things are assumed before we can start deploying our storage backend and Vault.
A Kubernetes cluster. You can always spin up a local Kubernetes cluster with minikube; installation instructions can be found in the minikube documentation.
```shell
minikube start
```

Helm Tiller initialized:

```shell
helm init --upgrade --wait
kubectl create clusterrolebinding tiller-cluster-admin --clusterrole=cluster-admin --serviceaccount=kube-system:default
```

Installation

The Vault operator includes an option to install the etcd operator.
Nothing is wrong with that implementation, but I prefer to deploy the storage backend myself.
etcd

The CoreOS etcd-operator Helm chart is stable, the etcd operator is maintained, and it supports the latest version of etcd.
Create a file named etcd_operator_values.yaml with the following contents. These are the Helm chart values for the etcd operator.
```yaml
# etcdOperator
etcdOperator:
  image:
    repository: quay.io/coreos/etcd-operator
    tag: v0.9.3
# backup spec
backupOperator:
  image:
    repository: quay.io/coreos/etcd-operator
    tag: v0.9.3
  spec:
    storageType: S3
    s3:
      s3Bucket:
      awsSecret:
# restore spec
restoreOperator:
  image:
    repository: quay.io/coreos/etcd-operator
    tag: v0.9.3
  spec:
    s3:
      # The format of "path" must be: "<s3-bucket-name>/<path-to-backup-file>"
      # e.g: "mybucket/etcd.backup"
      path:
      awsSecret:
## etcd-cluster specific values
etcdCluster:
  name: etcd-cluster
  size: 3
  version: 3.3.13
  image:
    repository: quay.io/coreos/etcd
    tag: v3.3.13
  enableTLS: false
  # TLS configs
  tls:
    static:
      member:
        peerSecret: etcd-peer-tls
        serverSecret: etcd-server-tls
      operatorSecret: etcd-client-tls
```

At the time of this write-up, v3.3.13 is the latest stable release of etcd.
I will cover backing up and restoring in another write-up.
Install the chart.

```shell
helm upgrade --install etcd-operator stable/etcd-operator -f etcd_operator_values.yaml
```

It should take a few seconds; you can run helm status etcd-operator to check.
Once the etcd operator is running, deploy the etcd cluster.
```shell
cat <<EOF | kubectl apply -f -
apiVersion: "etcd.database.coreos.com/v1beta2"
kind: "EtcdCluster"
metadata:
  name: "example-etcd-cluster"
spec:
  size: 3
  version: "3.3.13"
EOF
```

Verify that the etcd pods are running with kubectl get po -l app=etcd.
At this point, you are done configuring etcd for Vault.
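For reference, the address Vault will later use for its storage backend is simply the client Service the operator creates (named after the EtcdCluster resource) on port 2379. A minimal sketch of how that address is formed, assuming the default service naming:

```shell
# The etcd operator exposes a client Service named after the EtcdCluster
# resource; Vault's storage address is that Service's DNS name on port 2379.
CLUSTER_NAME=example-etcd-cluster
ETCD_ADDR="http://${CLUSTER_NAME}:2379"
echo "$ETCD_ADDR"
# prints: http://example-etcd-cluster:2379
```

From inside the cluster you can confirm connectivity with a one-off pod, e.g. kubectl run --rm -it etcd-test --image=busybox -- wget -qO- http://example-etcd-cluster:2379/version (the pod name here is made up for the example).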
Vault

The CoreOS Vault operator was the go-to while it was maintained, but that stopped over a year ago. The BanzaiCloud Vault operator has shown great potential.
Let’s begin. Deploy the Vault operator without the included etcd operator configuration. You may need to override the image tag to use the latest version; replace <latest-tag> below with the current release.

```shell
helm upgrade --install \
  --set image.tag=<latest-tag> \
  --set etcd-operator.enabled=false \
  vault-operator banzaicloud-stable/vault-operator
```

Verify that the operator is running with helm status vault-operator.
Then deploy the service account, role, and role bindings.
```shell
kubectl apply -f https://raw.githubusercontent.com/banzaicloud/bank-vaults/master/operator/deploy/rbac.yaml
```

In case the URL changes, use the following. Contents copied from the BanzaiCloud BankVaults rbac.yaml.
```shell
cat <<EOF | kubectl apply -f -
kind: ServiceAccount
apiVersion: v1
metadata:
  name: vault
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: vault-secrets
rules:
  - apiGroups:
      - ""
    resources:
      - secrets
    verbs:
      - "*"
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: vault-secrets
roleRef:
  kind: Role
  name: vault-secrets
  apiGroup: rbac.authorization.k8s.io
subjects:
  - kind: ServiceAccount
    name: vault
---
# This binding allows the deployed Vault instance to authenticate clients
# through Kubernetes ServiceAccounts (if configured so).
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: vault-auth-delegator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
  - kind: ServiceAccount
    name: vault
    namespace: default
EOF
```

Deploy the Vault cluster.
Make sure to specify the correct address for etcd; api_addr will be set to 127.0.0.1 for testing.
```shell
cat <<EOF | kubectl apply -f -
apiVersion: "vault.banzaicloud.com/v1alpha1"
kind: "Vault"
metadata:
  name: "vault"
spec:
  size: 2
  image: vault:1.1.3
  bankVaultsImage: banzaicloud/bank-vaults:latest

  # Specify the ServiceAccount where the Vault Pod and the Bank-Vaults
  # configurer/unsealer is running
  serviceAccount: vault

  # Specify how many nodes you would like to have in your etcd cluster
  # NOTE: -1 disables automatic etcd provisioning
  etcdSize: -1

  # Support for distributing the generated CA certificate Secret to other namespaces.
  # Define a list of namespaces or use ["*"] for all namespaces.
  caNamespaces:
    - "vswh"

  # Describe where you would like to store the Vault unseal keys and root token.
  unsealConfig:
    kubernetes:
      secretNamespace: default

  # A YAML representation of a final vault config file.
  # See https://www.vaultproject.io/docs/configuration/ for more information.
  config:
    storage:
      etcd:
        address: http://example-etcd-cluster:2379
        ha_enabled: "true"
    listener:
      tcp:
        address: "0.0.0.0:8200"
        tls_cert_file: /vault/tls/server.crt
        tls_key_file: /vault/tls/server.key
    api_addr: https://127.0.0.1:8200
    telemetry:
      statsd_address: localhost:9125
    ui: true
EOF
```

Verify the Vault pods are up with kubectl get po -l app=vault.
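It can take a little while for the pods to pass their health checks; a small sketch that blocks until they report Ready (the timeout value is an arbitrary choice):

```shell
# Wait for all pods labeled app=vault to become Ready (up to 3 minutes).
kubectl wait --for=condition=Ready pod -l app=vault --timeout=180s
```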
Installation is complete; now let’s get into the castle.

Let’s get in

Retrieve the keys to the castle: the root token, the unseal keys, and the generated certificate.
Get the decoded root token and save it to the VAULT_TOKEN variable.
```shell
export VAULT_TOKEN=$(kubectl get secrets vault-unseal-keys -o json | jq -r '.data["vault-root"]' | base64 --decode)
echo $VAULT_TOKEN
```

The unseal keys are not needed, but here is how you can retrieve them, still base64-encoded.
```shell
kubectl get secrets vault-unseal-keys -o json | jq -r '.data | to_entries[] | select(.key | startswith("vault-unseal")) | .value'
```

Get the Vault certificate, save it to disk, and set the VAULT_CACERT variable.
```shell
kubectl get secrets vault-tls -o json | jq -r '.data["ca.crt"]' | base64 --decode | sudo tee /etc/vault/tls/ca.pem
export VAULT_CACERT=/etc/vault/tls/ca.pem
```

Connect to Vault

For simplicity’s sake, open a new terminal and port-forward the vault-0 pod.
```shell
kubectl port-forward vault-0 8200:8200
```

You will need the Vault command line tool.
If you don’t have it, download it.
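The Vault CLI reads its connection settings from environment variables. With the port-forward in place (and VAULT_CACERT already pointing at the saved CA certificate), point the CLI at the local endpoint:

```shell
# Vault CLI address for the port-forwarded instance.
export VAULT_ADDR=https://127.0.0.1:8200
echo "$VAULT_ADDR"
# prints: https://127.0.0.1:8200
```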
Try out vault status.
You should get something like this.
```
> vault status
Key             Value
---             -----
Seal Type       shamir
Initialized     true
Sealed          false
Total Shares    5
Threshold       3
Version         1.1.3
Cluster Name    vault-cluster-f98b1f48
Cluster ID      68028c10-0f18-691d-56fa-8c046459cae1
HA Enabled      true
HA Cluster      https://172.17.0.17:8201
HA Mode         active
```

Alternatively, you can navigate to https://127.0.0.1:8200 in a browser.
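With the root token exported, a quick smoke test is to write and read back a secret. This is a hedged sketch: it assumes VAULT_ADDR, VAULT_CACERT, and VAULT_TOKEN are set as above, and the mount path and key names are made up for the example.

```shell
# Mount a KV v2 secrets engine at secret/ (errors harmlessly if one is
# already mounted there).
vault secrets enable -path=secret kv-v2

# Write a throwaway secret, then read it back.
vault kv put secret/demo password=s3cr3t
vault kv get -field=password secret/demo
```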
Clean up

```shell
kubectl delete Vault vault
kubectl delete -f https://raw.githubusercontent.com/banzaicloud/bank-vaults/master/operator/deploy/rbac.yaml
helm delete --purge vault-operator
kubectl delete EtcdCluster example-etcd-cluster
helm delete --purge etcd-operator
```

Next Steps

Vault is up and running; the next step is to configure Vault to authenticate clients and serve secrets.
Depending on your infrastructure and your application design, this process can vary.
Here is my opinionated approach implementing the Kubernetes authentication method and database secrets engine.
Vault: Kubernetes Auth and Database Secrets Engine
Implementation details for authenticating services to Vault to retrieve dynamic credentials.