Networking basics in the Azure Kubernetes Service

Azure Kubernetes Service (AKS) provides two network plugin options: Kubenet, which was the first available option, and the Azure CNI (Advanced Networking).
The Azure CNI is the only networking option that provides support for capabilities like VNet peering and network policies, so most enterprise scenarios will require using the Azure CNI.

There’s a really comprehensive guide to the Azure CNI here: https://docs.microsoft.com/en-us/azure/aks/configure-azure-cni
This post is intended to serve as an example, and to emphasize several of the points in the official documentation.

Using an existing virtual network


I have a /23 VNet that offers 445 addresses. As each pod on each node takes an IP address from the virtual network, it’s pretty important to realise the limitations of using a small virtual network for your clusters. Where the virtual network is peered with other networks, including your on-prem network, this can often mean you’ll end up needing a larger network address range than you first thought.
From the Azure CNI documentation, there’s a pretty handy formula you can plug into Excel to start seeing how many nodes/pods your address space will support (where A2 is the node count and B2 is the maximum pods per node):
=(A2+1) + ((A2+1) * B2)

Nodes   Max pods per node   Addresses required
50      30                  1581
10      100                 1111
6       62                  441
So you can see that in my /23 VNet, a suitable combination is 6 nodes with a maximum pod capacity of 62 per node.
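If you’d rather not open Excel, the same sum can be done straight from the shell. This is just a quick sketch of the formula above, using the node and pods-per-node values from my example:

nodes=6
pods_per_node=62
# (nodes + 1) + ((nodes + 1) * max pods per node) = addresses required
echo $(( (nodes + 1) + (nodes + 1) * pods_per_node ))
# prints 441, which fits inside the /23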

Once you’ve done this, you can create your cluster. The important piece to note at this point is:
The service address range is a set of virtual IPs (VIPs) that Kubernetes assigns to internal services in your cluster.
Therefore, when you select the service address range, you need to ensure it won’t overlap with any other IP ranges you may use, e.g. the ranges of your peered or on-premises networks.
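As a rough sketch, this is how those ranges might be supplied when creating the cluster with the Azure CLI. The resource group, cluster name, subnet ID and CIDR values below are placeholders for illustration; the service CIDR is deliberately chosen so it doesn’t overlap the VNet:

az aks create \
  --resource-group my-aks-rg \
  --name my-aks-cluster \
  --network-plugin azure \
  --vnet-subnet-id /subscriptions/<subscription-id>/resourceGroups/my-network-rg/providers/Microsoft.Network/virtualNetworks/my-vnet/subnets/aks-subnet \
  --node-count 6 \
  --max-pods 62 \
  --service-cidr 10.2.0.0/24 \
  --dns-service-ip 10.2.0.10 \
  --docker-bridge-address 172.17.0.1/16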

After the cluster has been created and you’ve provisioned some pods/services, you’ll see the IP addresses used as per the following:
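One way to see them is with kubectl; pod IPs come from the VNet subnet, while services get their VIPs from the service address range:

kubectl get pods --all-namespaces -o wide   # the IP column shows addresses taken from the VNet subnet
kubectl get svc --all-namespaces            # the CLUSTER-IP column shows VIPs from the service address range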

Helm namespaces

I’ve been creating my own Helm chart for an application. I initially tried to shoe-horn the namespace into values.yaml and then pull it through into the service and deployment YAML files. It turns out that isn’t a good way to do it, which is kind of obvious when you think about it: it’s a common scenario for an application to exist multiple times in the same cluster, separated by namespace.

When you install the chart for the first time, provide the name of a namespace that already exists.
Here are the steps from creating a new Helm chart to deploying and then upgrading it:

helm create nginx-webapp
cd nginx-webapp/
#tweak the values.yaml
#create the target namespace up front
kubectl create namespace whatthehack-webapps2
#install the release into that namespace (Helm 2 syntax)
helm install --name nginx-webapp . --namespace whatthehack-webapps2
#make a change to the application, then roll it out
helm upgrade nginx-webapp .
#check the service has landed in the expected namespace
kubectl get svc --all-namespaces
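Because the namespace is supplied at install time rather than baked into the chart, the same chart can then be installed again, side by side, into a different namespace. The namespace and release name here are purely hypothetical:

kubectl create namespace whatthehack-webapps3
helm install --name nginx-webapp-2 . --namespace whatthehack-webapps3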

Create Kubernetes Cluster in Azure

ACS

Azure has two container service offerings: ACS and AKS.
ACS was the first to be released; it gives a choice of orchestrators but is little more than an ARM template with no management capability. These are some of the issues that AKS addresses. I’m confident that when AKS is Generally Available, ACS will become deprecated. Until that point, however, I like to stay with the GA container service.

I have a shell script that creates my cluster with my optimal “cheapo” settings. It’s probably worth noting that this config is pretty slow and not great at taking load tests, but hey, you get what you pay for.

I usually kick this off in the Azure Cloud Shell, and I pass in a single parameter, which is the name of the resource group.
The reasons for the script are as follows (a sketch of it is shown after the list):
1) I want to consistently add tags to my resource group for automation.
2) I use a service principal to access Azure which has a much lower set of permissions. At the point of creation I want it to automatically be given Contributor access.
3) I want the cluster to be small, and sized to be cheap.
4) I want the SSH credentials zipped and ready for me to download to other clients to access the cluster. I do this partly so I can easily get away from the Cloud Shell and its aggressive timeouts. It’s probably worth saying that this is a sledgehammer approach; I could just go into the ~/.kube/ directory and copy out the specific kube config file.
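
Here’s a minimal sketch of what such a script could look like, covering the four points above. The region, VM size, agent count, service principal variables and file paths are assumptions for illustration; the real script may well differ:

#!/bin/bash
# Usage: sh Create-ACS.sh <resource-group-name>
RG=$1
LOCATION=westeurope                        # assumed region
SP_APP_ID="<service-principal-app-id>"     # the low-privilege service principal
SP_SECRET="<service-principal-secret>"

# 1) Create the resource group with tags for automation
az group create --name $RG --location $LOCATION --tags environment=lab autoShutdown=true

# 2) Give the service principal Contributor access on the new resource group
az role assignment create --assignee $SP_APP_ID --role Contributor --resource-group $RG

# 3) Create a small, cheap Kubernetes cluster with ACS
az acs create --resource-group $RG --name $RG-k8s \
  --orchestrator-type kubernetes \
  --agent-count 1 --agent-vm-size Standard_B2s \
  --service-principal $SP_APP_ID --client-secret $SP_SECRET \
  --generate-ssh-keys

# 4) Pull the credentials locally and zip them for download from the Cloud Shell
az acs kubernetes get-credentials --resource-group $RG --name $RG-k8s
zip k8s-credentials.zip ~/.kube/config ~/.ssh/id_rsa ~/.ssh/id_rsa.pub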

Hope this proves useful.

sh Create-ACS.sh MyK8S