# AKS Engine Quickstart Guide
AKS Engine (`aks-engine`) generates ARM (Azure Resource Manager) templates and deploys them via ARM to Microsoft Azure cloud environments. The input to the `aks-engine` command line tool is a cluster definition JSON file (referred to throughout the docs interchangeably as either "API model", "cluster config", or "cluster definition") which describes the desired cluster configuration, including enabled or disabled features, for both the control plane running on "master" VMs and one or more node pools.
## Prerequisites
The following prerequisites are required:
- An Azure Subscription
- The Azure CLI
## Install the `aks-engine` command line tool

Binary downloads for the latest version of AKS Engine are available on GitHub. Download the package for your operating system, and extract the `aks-engine` binary (and optionally add it to your `$PATH` for more convenient CLI usage).

You can also install `aks-engine` using gofish by running `gofish install aks-engine`. You can install gofish by following the instructions for your OS.
On macOS, you can install `aks-engine` with Homebrew by running `brew install Azure/aks-engine/aks-engine`. You can install Homebrew by following these instructions.

On Windows, you can install `aks-engine.exe` via Chocolatey by executing the command `choco install aks-engine`. You can install Chocolatey by following these instructions. You can also install `aks-engine.exe` via Scoop by executing the command `scoop install aks-engine`. You can install Scoop by following these instructions.
On Linux, if you prefer, you can install `aks-engine` via the install script:
$ curl -o get-akse.sh https://raw.githubusercontent.com/Azure/aks-engine/master/scripts/get-akse.sh
$ chmod 700 get-akse.sh
$ ./get-akse.sh
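Whichever installation method you used, you can confirm the binary is installed and on your `$PATH` by printing its version (the version reported will depend on the release you downloaded):

$ aks-engine version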
If you would prefer to build `aks-engine` from source, or if you're interested in contributing to AKS Engine, see the developer guide for more information.
## Completion

`aks-engine` supports bash completion. To enable this, add the following to your `.bashrc` or `~/.profile`:

source <(aks-engine completion)
## Deploy your First Cluster

`aks-engine` reads a cluster definition which describes the size, shape, and configuration of your cluster. This guide uses the default configuration: a control plane with one master VM and a single node pool with two Linux nodes, exemplified here. If you would like to change the configuration, edit `examples/kubernetes.json` before continuing.

The `aks-engine deploy` command automates the creation of an Azure resource group to contain cluster resources and an SSH keypair to connect to a control plane VM on your behalf. If you need more control or are interested in the individual steps, see the "Long Way" section below.
NOTE: AKS Engine creates a cluster; it doesn't create an Azure Kubernetes Service (AKS) resource. Clusters that you create using the `aks-engine` command (or ARM templates generated by the `aks-engine` command) won't show up as AKS resources, for example when you run `az aks list`. The resultant resource group + IaaS will be entirely under your own control and management, and unknown to AKS or any other Azure service.

After the cluster is deployed, the `scale`, `addpool`, `update`, and `upgrade` commands may be used to make updates to your cluster, with some conditions (the scale, addpool, update, and upgrade docs enumerate these conditions).
### Gather Information
- The subscription in which you would like to provision the cluster. This is a UUID which can be found with `az account list -o table`:
$ az account list -o table
Name                                             CloudName    SubscriptionId                        State    IsDefault
-----------------------------------------------  -----------  ------------------------------------  -------  -----------
Contoso Subscription                             AzureCloud   51ac25de-afdg-9201-d923-8d8e8e8e8e8e  Enabled  True
- Proper access rights within the subscription; especially the right to create and assign service principals to applications
- A `dnsPrefix` which forms part of the hostname for your cluster (e.g. staging, prodwest, blueberry). In this example we are not building a private cluster (declared by assigning a `true` value to `properties.orchestratorProfile.kubernetesConfig.privateCluster.enabled` in your API model: see this example), so the value of `dnsPrefix` must produce a unique fully-qualified domain name DNS record composed of `<value of dnsPrefix>.<value of location>.cloudapp.azure.com`. Depending on the uniqueness of your `dnsPrefix`, it may be a good idea to pre-check the availability of the resultant DNS record prior to deployment; a quick way to do that is sketched after this list. (Also see the `--auto-suffix` option below if having to do this pre-check is onerous and you don't care about having a randomly named cluster.)
  - NOTE: The `location` value may be omitted in your cluster definition JSON file if you are deploying to Azure Public Cloud; it will be automatically inferred during ARM template deployment as equal to the location of the resource group at the time of resource group creation. Also NOTE that the ".cloudapp.azure.com" FQDN suffix example above assumes an Azure Public Cloud deployment. When you provide a `location` value that maps to a non-public cloud, the FQDN suffix will be concatenated appropriately for that supported cloud environment, e.g., ".cloudapp.chinacloudapi.cn" for mooncake (Azure China Cloud), or ".cloudapp.usgovcloudapi.net" for usgov (Azure Government Cloud).
- Choose a location to provision the cluster, e.g. `westus2`.
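As an informal pre-check of the FQDN (a sketch only, using the example `dnsPrefix` and `location` from this guide; any DNS lookup tool works), you can try to resolve the name that would be created:

$ nslookup contoso-apple.westus2.cloudapp.azure.com

An NXDOMAIN ("can't find") response suggests the name is not currently in use; a successful resolution means you should choose a different `dnsPrefix`.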
### Deploy

For this example, the subscription id is `51ac25de-afdg-9201-d923-8d8e8e8e8e8e`, the DNS prefix is `contoso-apple`, and the location is `westus2`.
First, we need to log in to Azure:
$ az login
Note, we have launched a browser for you to login. For old experience with device code, use "az login --use-device-code"
You have logged in. Now let us find all the subscriptions to which you have access...
Finally, run `aks-engine deploy` with the appropriate arguments:
$ aks-engine deploy --dns-prefix contoso-apple \
--resource-group contoso-apple \
--location westus2 \
--api-model examples/kubernetes.json \
--auto-suffix
INFO[0000] No subscription provided, using selected subscription from azure CLI: 51ac25de-afdg-9201-d923-8d8e8e8e8e8e
INFO[0003] Generated random suffix 5f776b0d, DNS Prefix is contoso-apple2-5f776b0d
WARN[0005] Running only 1 control plane VM not recommended for production clusters, use 3 or 5 for control plane redundancy
INFO[0011] Starting ARM Deployment contoso-apple-1877721870 in resource group contoso-apple. This will take some time...
INFO[0273] Finished ARM Deployment (contoso-apple-1877721870). Succeeded
`aks-engine` creates a new resource group automatically from the `--resource-group` value passed into the `aks-engine deploy` statement, if that resource group doesn't already exist. A resource group is a container that holds related resources for an Azure solution. In Azure, you can organize related resources such as storage accounts, virtual networks, and virtual machines (VMs) into resource groups. AKS Engine takes advantage of that organizational model to place all Kubernetes cluster resources into a dedicated resource group.
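For example, once the deployment finishes you can list the IaaS resources that landed in that resource group (the resource group name here is the `contoso-apple` example from above; the output will vary with your configuration):

$ az resource list --resource-group contoso-apple -o table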
`aks-engine` will generate ARM templates, SSH keys, and a kubeconfig (a specification that may be used as input to the `kubectl` command to establish a privileged connection to the Kubernetes apiserver; see here for more documentation), and then persist those as local files under a child directory in the relative path `_output/`. Because we used the `--auto-suffix` option, AKS Engine created the cluster configuration artifacts under the child directory `contoso-apple-5f776b0d`:
$ ls _output/contoso-apple-5f776b0d/
apimodel.json     azuredeploy.parameters.json  client.crt      etcdpeer0.crt   kubeconfig
apiserver.crt     azureuser_rsa                client.key      etcdpeer0.key   kubectlClient.crt
apiserver.key     ca.crt                       etcdclient.crt  etcdserver.crt  kubectlClient.key
azuredeploy.json  ca.key                       etcdclient.key  etcdserver.key
Access the new cluster by using the kubeconfig generated for the cluster's location. This example used `westus2`, so the kubeconfig is located at `_output/contoso-apple-5f776b0d/kubeconfig/kubeconfig.westus2.json`:
$ export KUBECONFIG=_output/contoso-apple-5f776b0d/kubeconfig/kubeconfig.westus2.json
$ kubectl cluster-info
Kubernetes master is running at https://contoso-apple-5f776b0d.westus2.cloudapp.azure.com
CoreDNS is running at https://contoso-apple-5f776b0d.westus2.cloudapp.azure.com/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Metrics-server is running at https://contoso-apple-5f776b0d.westus2.cloudapp.azure.com/api/v1/namespaces/kube-system/services/https:metrics-server:/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
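You can also verify that the expected VMs registered as nodes; with the default configuration used in this guide, that should be one control plane VM and two Linux nodes (node names and versions will differ in your cluster):

$ kubectl get nodes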
The files saved to the `_output/contoso-apple-5f776b0d/` directory (using our example) are required for any future cluster operations using the `aks-engine` CLI. Store them somewhere safe and reliable!
Administrative note: By default, the directory where aks-engine stores cluster configuration (`_output/contoso-apple-5f776b0d` above) won't be overwritten as a result of subsequent attempts to deploy a cluster using the same `--dns-prefix`. To re-use the same resource group name repeatedly, include the `--force-overwrite` command line option with your `aks-engine deploy` command. On a related note, include an `--auto-suffix` option to append a randomly generated suffix to the dns-prefix to form the resource group name, for example if your workflow requires a common prefix across multiple cluster deployments. Using the `--auto-suffix` pattern appends a compressed timestamp to ensure a unique cluster name (and thus ensure that each deployment's configuration artifacts will be stored locally under a discrete `_output/<resource-group-name>/` directory).
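As an illustration of the options described above, a repeat deployment into the same resource group and DNS prefix might look like this (a sketch only, reusing the example values from this guide):

$ aks-engine deploy --dns-prefix contoso-apple \
    --resource-group contoso-apple \
    --location westus2 \
    --api-model examples/kubernetes.json \
    --force-overwrite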
Note: If the cluster is using an existing VNET, please see the Custom VNET feature documentation for additional steps that must be completed after cluster provisioning.
## AKS Engine the Long Way

This example uses the more traditional method of generating raw ARM templates, which are submitted to Azure using the `az deployment group create` command.

For this example, we will use the same information as before: the subscription id is `51ac25de-afdg-9201-d923-8d8e8e8e8e8e`, the DNS prefix is `contoso-apple-5eac6ed8` (note the manual use of a unique string suffix to better ensure uniqueness), and the location is `westus2`.
We will also need an SSH RSA key for SSH access to the cluster VMs. Use the following articles to create your SSH RSA key, or see the sketch after this list:
- Windows - https://www.digitalocean.com/community/tutorials/how-to-create-ssh-keys-with-putty-to-connect-to-a-vps
- Mac and Linux - https://help.github.com/articles/generating-a-new-ssh-key-and-adding-it-to-the-ssh-agent/
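If you'd rather generate the keypair directly from a terminal, here is a minimal sketch (the file path is an arbitrary choice for this example; any RSA keypair works):

$ ssh-keygen -t rsa -b 4096 -f ~/.ssh/contoso-apple_rsa -N ""
$ cat ~/.ssh/contoso-apple_rsa.pub

The contents of the `.pub` file are what you will paste into the cluster definition in a later step.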
Next, we'll create a resource group to demonstrate building a cluster into a resource group that already exists. (Note: we recommend you use this resource group only for your Kubernetes cluster resources, and use one dedicated resource group per cluster.)
$ az group create --name contoso-apple-5eac6ed8 --location westus2
{
"id": "/subscriptions/51ac25de-afdg-9201-d923-8d8e8e8e8e8e/resourceGroups/contoso-apple-5eac6ed8",
"location": "westus2",
"managedBy": null,
"name": "contoso-apple-5eac6ed8",
"properties": {
"provisioningState": "Succeeded"
},
"tags": null
}
In this example, we'll create a service principal to demonstrate that authentication option for establishing a privileged connection between the Kubernetes runtime and Azure APIs. Normally, we recommend that you use the managed identity configuration (the default), which uses service principals generated from the VM identity itself, rather than maintaining your own service principals. More documentation about managed identity is here.
$ az ad sp create-for-rbac --role="Contributor" --scopes="/subscriptions/51ac25de-afdg-9201-d923-8d8e8e8e8e8e/resourceGroups/contoso-apple-5eac6ed8"
{
"appId": "47a62f0b-917c-4def-aa85-9b010455e591",
"displayName": "azure-cli-2019-01-11-22-22-06",
"name": "http://azure-cli-2019-01-11-22-22-06",
"password": "26054d2b-799b-448e-962a-783d0d6f976b",
"tenant": "72f988bf-86f1-41af-91ab-2d7cd011db47"
}
Make a note of the `appId` and `password` fields, as we will be providing them in the next step.
Edit the simple Kubernetes cluster definition and fill out the required values (a scripted sketch of these edits appears below):

- `properties.masterProfile.dnsPrefix`: in this example we're using "contoso-apple-5eac6ed8"
- `properties.linuxProfile.ssh.publicKeys[0].keyData`: must contain the public portion of the SSH key we generated; this will be associated with the `adminUsername` value found in the same section of the cluster definition (e.g. "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABA....")
- Add a new `properties.servicePrincipalProfile` JSON object:
  - `properties.servicePrincipalProfile.clientId`: this is the service principal's appId UUID or name from earlier
  - `properties.servicePrincipalProfile.secret`: this is the service principal's randomly generated password from earlier
Optional: attach to an existing virtual network (VNET). Details here
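If you prefer a scripted edit, the following is a minimal sketch using `jq` (not part of the AKS Engine tooling; hand-editing `examples/kubernetes.json` works just as well). It assumes `jq` is installed, reuses the example appId/password from above and the SSH key path generated earlier, and writes to an arbitrarily named output file:

$ jq --arg dns "contoso-apple-5eac6ed8" \
     --arg key "$(cat ~/.ssh/contoso-apple_rsa.pub)" \
     --arg appId "47a62f0b-917c-4def-aa85-9b010455e591" \
     --arg secret "26054d2b-799b-448e-962a-783d0d6f976b" \
     '.properties.masterProfile.dnsPrefix = $dns
      | .properties.linuxProfile.ssh.publicKeys[0].keyData = $key
      | .properties.servicePrincipalProfile = {clientId: $appId, secret: $secret}' \
     examples/kubernetes.json > kubernetes-contoso.json

If you go this route, pass `kubernetes-contoso.json` to `aks-engine generate` in the next step instead of `examples/kubernetes.json`.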
### Generate the Templates
The generate command takes a cluster definition and outputs a number of templates which describe your Kubernetes cluster. By default, `generate` will create a new directory named after your cluster nested in the `_output` directory. If your dnsPrefix was `contoso-apple-5eac6ed8`, your cluster templates would be found in `_output/contoso-apple-5eac6ed8/`.
Run `aks-engine generate examples/kubernetes.json`
The `generate` command lets you override values from the cluster definition file without having to update the file. You can use the `--set` flag to do that:
aks-engine generate --set linuxProfile.adminUsername=myNewUsername,masterProfile.count=3 clusterdefinition.json
The `--set` flag only supports JSON properties under `properties`. You can also work with arrays, like the following:
aks-engine generate --set agentPoolProfiles[0].count=5,agentPoolProfiles[1].name=myPoolName clusterdefinition.json
- To enable the optional network policy enforcement using calico, you have to set the parameter during this step according to this guide
- To enable the optional network policy enforcement using cilium, you have to set the parameter during this step according to this guide
- To enable the optional network policy enforcement using antrea, you have to set the parameter during this step according to this guide
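As a hedged illustration of the pattern for any of the three options above (the linked guides are authoritative; this sketch assumes the network policy is configured via the `networkPolicy` field under `orchestratorProfile.kubernetesConfig` in the API model):

$ aks-engine generate --set orchestratorProfile.kubernetesConfig.networkPolicy=calico examples/kubernetes.json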
Now we can deploy the files `azuredeploy.json` and `azuredeploy.parameters.json` using either the Azure CLI or PowerShell.
Using the CLI:
$ az deployment group create \
--name "contoso-apple-k8s" \
--resource-group "contoso-apple-5eac6ed8" \
--template-file "./_output/contoso-apple-5eac6ed8/azuredeploy.json" \
--parameters "./_output/contoso-apple-5eac6ed8/azuredeploy.parameters.json"
When your ARM template deployment completes, it should return some JSON output and a `0` exit code. You now have a Kubernetes cluster with the (mostly complete) set of default configurations.
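If you want to confirm the deployment state explicitly, you can query it with the Azure CLI, using the deployment name and resource group from the command above (expected output: `Succeeded`):

$ az deployment group show \
    --name "contoso-apple-k8s" \
    --resource-group "contoso-apple-5eac6ed8" \
    --query properties.provisioningState \
    --output tsv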
export KUBECONFIG=_output/contoso-apple-5eac6ed8/kubeconfig/kubeconfig.westus2.json
Now you're ready to start using your Kubernetes cluster with `kubectl`!
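For a quick smoke test (the deployment name and image here are arbitrary examples, not part of the guide):

$ kubectl get nodes -o wide
$ kubectl create deployment hello-nginx --image=nginx
$ kubectl get pods -l app=hello-nginx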