So, you’ve got your Kubernetes cluster up and running, and now it’s time to expose workloads to the outside world in a secure fashion. We’ll use the following tools:
- External-DNS — Configures external DNS servers (AWS Route53, Google CloudDNS, Azure DNS and others) for Kubernetes Ingresses and Services.
- OAuth Proxy — A reverse proxy that authenticates users, to keep strangers out of the way!
- Nginx-Ingress — An ingress controller that routes external traffic to services inside the cluster.
- Kube-Lego — Uses Let’s Encrypt to create valid SSL certificates for your workloads.
So, let’s get to it…
# Make sure you are using the correct Kubernetes context
kubectl config current-context
# Create a namespace that will be used during this tutorial
kubectl create ns ops-tools
# Clone Kubernetes charts
git clone https://github.com/helm/charts.git
# If helm isn't installed yet, install it first (see the Helm docs)
# RBAC Note:
If your cluster has RBAC enabled (highly recommended), make sure to init Helm as follows, and deploy your charts with RBAC set to true:
kubectl -n kube-system create sa tiller
kubectl create clusterrolebinding tiller --clusterrole cluster-admin --serviceaccount=kube-system:tiller
helm init --service-account tiller
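For reference, the tiller service account and cluster role binding created by the two kubectl commands above correspond to this manifest sketch (same names and namespace as the commands):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system
```

Applying this file with kubectl apply -f has the same effect as the imperative commands, and it can be kept in version control.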
External DNS automatically configures DNS records in your DNS server based on service/ingress annotations. Awesome!
In order to use this chart, you’ll need to grant it permissions for creating DNS records in your cloud provider DNS (See: Deploying to a cluster)
Once you’ve edited charts/stable/external-dns/values.yaml to use your own cloud provider & credentials, run the following to install the chart to your K8S cluster:
helm install stable/external-dns --name external-dns --namespace ops-tools
# You can verify it's up by running:
kubectl --namespace=ops-tools get pods -l "app=external-dns,release=external-dns"
We will use the below during the installation of Nginx-Ingress-Controller so that the LoadBalancer address created by the cloud provider will be registered in our DNS server.
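As a sketch, this is what the annotation looks like on a LoadBalancer Service that External-DNS watches (the service name, ports and hostname are placeholders):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app
  annotations:
    # External-DNS sees this annotation and creates the matching DNS record
    # pointing at the LoadBalancer address the cloud provider assigns
    external-dns.alpha.kubernetes.io/hostname: my.company.com
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
```

The same annotation also works on Ingress resources.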
An OAuth proxy is a reverse proxy that provides authentication with Google, GitHub, and other providers. This will keep strangers out of the way of your cluster workloads.
First you’ll have to configure an OAuth client ID and secret. If you are using Google, navigate to Google’s Developers Console and follow these steps:
- Click on ‘Create credentials’ and choose ‘OAuth client ID’.
- Choose ‘Web application’.
- Give it a name and copy the Client ID and Client Secret into the chart’s values.yaml file.
- Configure ‘Authorized redirect URIs’ with the DNS name that will be configured by the External-DNS chart, e.g. “https://my.company.com/oauth2/callback”.
- Create a cookie secret using the python command below and paste it as the value of cookieSecret.
- Replace the email-domain value with your company domain. This way only members of “yourcompany.com” will be allowed to access your cluster workloads.
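On Python 3 (where print is a function), the cookie secret can be generated like this, assuming python3 is on your PATH:

```shell
# Generate a random 16-byte value and base64-encode it
# for use as the oauth2-proxy cookieSecret
python3 -c 'import os,base64; print(base64.b64encode(os.urandom(16)).decode())'
```

The output is a 24-character base64 string; paste it into values.yaml as-is.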
# Values.yaml file (Relevant lines only)
# Oauth client configuration specifics
config:
  # OAuth client ID
  clientID: "<your-client-id>"
  # OAuth client secret
  clientSecret: "<your-client-secret>"
  # Create a new secret with the following command
  # python3 -c 'import os,base64; print(base64.b64encode(os.urandom(16)).decode())'
  cookieSecret: "<your-cookie-secret>"
extraArgs:
  email-domain: yourcompany.com
Time to deploy:
helm install stable/oauth2-proxy --name oauth-proxy --namespace ops-tools
Nginx Ingress Controller
helm install --set controller.service.annotations."external-dns\.alpha\.kubernetes\.io/hostname"=my.company.com stable/nginx-ingress --name nginx-ingress --namespace ops-tools
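The backslashes in the command above escape the dots inside the annotation key so Helm doesn’t treat them as nested keys. If you prefer, the same setting can live in a values file; a sketch, with my.company.com as the placeholder hostname:

```yaml
# nginx-ingress-values.yaml
controller:
  service:
    annotations:
      # External-DNS registers the LoadBalancer address under this hostname
      external-dns.alpha.kubernetes.io/hostname: my.company.com
```

Then install with helm install -f nginx-ingress-values.yaml stable/nginx-ingress --name nginx-ingress --namespace ops-tools.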
Before moving on to the next step, make sure “my.company.com” was configured successfully in your DNS server by External-DNS. You can do so using the dig command. If your “ANSWER SECTION” contains the DNS record’s TTL and IP, you are good to go (be patient, it may take a few moments until the record propagates):
; <<>> DiG 9.9.7-P3 <<>> my.company.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 23994
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 1280
;; QUESTION SECTION:
;my.company.com. IN A
;; ANSWER SECTION:
my.company.com. 300 IN A <ingress controller loadbalancer IP>
;; Query time: 175 msec
;; SERVER: 192.168.43.1#53(192.168.43.1)
;; WHEN: Sun Mar 25 08:21:31 IDT 2018
;; MSG SIZE rcvd: 61
You can also verify that External-DNS recognized your ingress annotation by looking at its logs:
kubectl logs -n ops-tools external-dns-<POD_SUFFIX>
time="2018-03-25T05:19:28Z" level=info msg="Updating A record named 'my' to '<Loadbalancer IP>' for Azure DNS zone 'company.com'."
time="2018-03-25T05:19:29Z" level=info msg="Updating TXT record named 'my' to '"heritage=external-dns,external-dns/owner=default"' for Azure DNS zone 'company.com'."
Kube-Lego automatically requests certificates for Kubernetes Ingress resources from Let’s Encrypt. Note that kube-lego is in maintenance mode and cert-manager is its designated successor; I’ll update this post once I do the migration to cert-manager myself.
You’ll need to edit two values in the kube-lego chart’s values.yaml:
config:
  ## Email address to use for registration with Let's Encrypt
  LEGO_EMAIL: my@email.com
  ## Let's Encrypt API endpoint
  ## Production: https://acme-v01.api.letsencrypt.org/directory
  ## Staging: https://acme-staging.api.letsencrypt.org/directory
  LEGO_URL: https://acme-staging.api.letsencrypt.org/directory
Replace LEGO_EMAIL with your own email address.
Change LEGO_URL to the ‘Production’ URL in order to fetch a valid certificate or ‘Staging’ if you are just testing stuff.
SUPER IMPORTANT NOTE: kube-lego keeps retrying to fetch a certificate on failure, and after a few retries it will hit Let’s Encrypt’s rate limits. To prevent this from happening, use the ‘Staging’ URL first, and switch to ‘Production’ only once everything works.
Now that we’ve got all of these great tools armed and ready, let’s deploy the Kubernetes Dashboard to our cluster and secure it behind our OAuth proxy.
There is a Kubernetes Dashboard Helm chart available under charts/stable/kubernetes-dashboard.
Here is the relevant ingress section from the chart’s values.yaml file:
ingress:
  ## If true, Kubernetes Dashboard Ingress will be created.
  enabled: true
  ## Kubernetes Dashboard Ingress annotations
  annotations:
    kubernetes.io/tls-acme: "true"
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/auth-url: "https://my.company.com/oauth2/auth"
    nginx.ingress.kubernetes.io/auth-signin: "https://my.company.com/oauth2/start"
    nginx.ingress.kubernetes.io/secure-backends: "true"
  ## Kubernetes Dashboard Ingress path
  path: /
  ## Kubernetes Dashboard Ingress hostnames
  ## Must be provided if Ingress is enabled
  hosts:
    - my.company.com
  ## Kubernetes Dashboard Ingress TLS configuration
  ## Secrets must be manually created in the namespace
  tls:
    - secretName: ops-tls
      hosts:
        - my.company.com
Let’s break down the annotations above:
kubernetes.io/tls-acme: “true” triggers kube-lego to generate a certificate for your workload.
kubernetes.io/ingress.class: nginx makes our nginx-ingress-controller aware of this ingress object and responsible for routing traffic to it.
nginx.ingress.kubernetes.io/auth-url and nginx.ingress.kubernetes.io/auth-signin are used by nginx-ingress-controller to route traffic through our oauth-proxy pod, thus securing Kubernetes Dashboard access to authenticated my.company.com users only.
nginx.ingress.kubernetes.io/secure-backends: “true” is needed when your Kubernetes Dashboard is configured to listen over HTTPS (this is the default in the latest version, v1.8.3).
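Putting it together, the chart renders an Ingress object roughly like this sketch (hostname and secret name follow the examples in this post; the backend service name and port are assumptions based on the chart’s defaults). You can hand-write a similar object for any workload you want to protect:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: kubernetes-dashboard
  namespace: ops-tools
  annotations:
    # kube-lego fetches a certificate for the TLS hosts below
    kubernetes.io/tls-acme: "true"
    # handled by our nginx-ingress-controller
    kubernetes.io/ingress.class: nginx
    # route every request through oauth2-proxy for authentication
    nginx.ingress.kubernetes.io/auth-url: "https://my.company.com/oauth2/auth"
    nginx.ingress.kubernetes.io/auth-signin: "https://my.company.com/oauth2/start"
    # the dashboard backend listens over HTTPS
    nginx.ingress.kubernetes.io/secure-backends: "true"
spec:
  rules:
    - host: my.company.com
      http:
        paths:
          - path: /
            backend:
              serviceName: kubernetes-dashboard
              servicePort: 443
  tls:
    - hosts:
        - my.company.com
      secretName: ops-tls
```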
Now you can go ahead and deploy Kubernetes dashboard:
helm install stable/kubernetes-dashboard --name kubernetes-dashboard --namespace ops-tools
Your Kubernetes Dashboard should now be available at https://my.company.com, protected by OAuth.
This is one of the methods we use to expose and protect our Kubernetes workloads here at Alcide. You can create a similar ingress object with the annotations above for every workload you’d like to expose and protect using OAuth. You can read more on Kubernetes Security Best Practices.