
SQL Server in Kubernetes Cluster using KOPS

May 24, 2019 by Ranga Babu

In this article, we will review how to create a Kubernetes cluster in AWS using KOPS, provision Elastic Block Store (EBS) as persistent volume to store the database files and deploy SQL Server in the K8s cluster.

Here are the step-by-step instructions to configure the K8s cluster in AWS using KOPS.

Creating a K8s cluster in AWS using KOPS

Log in to the AWS console, click on Services, and search for EC2. Click on EC2 (Virtual Servers in the Cloud).

[Image: EC2 service in AWS]

In the EC2 Dashboard, click on Launch Instance and select the Ubuntu server with the t2.micro size.

[Image: launch EC2 Ubuntu instance]

Configure the instance details, storage, and security groups, and launch the instance using a new key pair (or an existing key pair if you already have one).

Create an IAM role with the below policies and assign the role to the Ubuntu instance you created above. This role is used to create the Kubernetes cluster resources.
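Per the KOPS documentation for AWS, the role needs the AmazonEC2FullAccess, AmazonRoute53FullAccess, AmazonS3FullAccess, IAMFullAccess, and AmazonVPCFullAccess managed policies. As an equivalent CLI sketch (the role name kops-ec2-role is a hypothetical example):

```shell
# Managed policies required by KOPS (per the kops AWS docs).
# "kops-ec2-role" is a hypothetical role name - use your own.
for policy in AmazonEC2FullAccess AmazonRoute53FullAccess \
              AmazonS3FullAccess IAMFullAccess AmazonVPCFullAccess; do
  aws iam attach-role-policy \
    --role-name kops-ec2-role \
    --policy-arn "arn:aws:iam::aws:policy/${policy}"
done
```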

To create an IAM role, click on Services and search for IAM. Click on IAM (Manage User Access and Encrypted Keys).


In the IAM console, click on Roles (1), then click on Create Role (2).

[Image: Create role in IAM AWS]

Select EC2 under “Choose the service that will use the role” and click on Next: Permissions.

[Image: service that will use the IAM role]

Select the above-mentioned policies, click on Next, and review. Enter the role name and click on Create Role. Now navigate to the EC2 Dashboard, select the Ubuntu instance you created above, and choose Right-click -> Instance Settings -> Attach/Replace IAM Role. Select the IAM role you created above and click on Apply.

[Image: Kubernetes cluster - assign role to Ubuntu instance]

To connect to the Ubuntu instance, we must download PuTTY from putty.org and install it. After installing PuTTY, open PuTTYgen, click on Load, select the .pem file (key pair) that was used to launch the Ubuntu instance, and click on Save private key.

Now open PuTTY and enter the hostname. To find the hostname of the Ubuntu instance, navigate to the EC2 dashboard, select the instance, and copy the public DNS as shown in the below image.

[Image: public DNS of EC2 Ubuntu instance]

[Image: hostname in PuTTY]

Click on Auth (1). Browse to the private key you created in the above step (2). Click on Open (3).

[Image: authentication in PuTTY]

Log in as the ubuntu user.

[Image: login to Ubuntu instance to create Kubernetes cluster]

Switch to the superuser using the below command.
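On Ubuntu, that is:

```shell
sudo su -
```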

Install the AWS CLI using the below commands. The AWS Command Line Interface is a tool to configure and manage AWS services from the command line.
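One common way to install it on Ubuntu is via the apt package (a sketch; the package name awscli is assumed to be available in the distribution's repository):

```shell
# Install the AWS CLI from the Ubuntu package repository
sudo apt-get update
sudo apt-get install -y awscli
```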

Once we install the AWS CLI, we need to install the Kubernetes command-line tool (kubectl) on the Ubuntu instance, which is used to run commands against the K8s cluster. Use the below commands to download the latest version and install kubectl.
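A typical install, following the pattern documented in the Kubernetes docs of that era (a sketch; requires network access):

```shell
# Download the latest stable kubectl binary and put it on the PATH
curl -LO "https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl"
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin/kubectl
```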

Configure the AWS CLI using the below command. Leave the access key ID and secret access key blank, as we are using the IAM role attached to the Ubuntu EC2 instance. Input the default region of your choice and an output format such as json.
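The configure step is interactive; the values shown in the comments below are examples (the region ap-south-1 matches the zone used later in the article):

```shell
aws configure
# AWS Access Key ID [None]:     <leave blank - the attached IAM role is used>
# AWS Secret Access Key [None]: <leave blank>
# Default region name [None]:   ap-south-1
# Default output format [None]: json
```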

[Image: AWS CLI configure]

We need to download and install KOPS on the EC2 Ubuntu instance. KOPS is used to create a Kubernetes cluster on Amazon Web Services. Use the below commands to download and install KOPS.
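The usual install pattern from the kops releases page looks like this (a sketch; requires network access):

```shell
# Fetch the latest kops release binary and install it
curl -LO "https://github.com/kubernetes/kops/releases/download/$(curl -s https://api.github.com/repos/kubernetes/kops/releases/latest | grep tag_name | cut -d '"' -f 4)/kops-linux-amd64"
chmod +x kops-linux-amd64
sudo mv kops-linux-amd64 /usr/local/bin/kops
```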

Now, create a private hosted zone in Route 53. To create a hosted zone, click on Services and search for Route 53. Select Route 53 (Scalable DNS and Domain Registration).

[Image: Route 53 in AWS]

Click on Create Hosted Zone, enter the domain name, and select Private Hosted Zone for Amazon VPC as the type. Select the VPC ID and click on Create.

[Image: private hosted zone in Route 53 AWS]

Now we need to create an S3 bucket. This S3 bucket will hold the K8s cluster configuration. To create the S3 bucket and set the environment variable, execute the below commands in the console.
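A sketch of those commands; the bucket name ranga-kops-state is a made-up example and must be globally unique:

```shell
# Create the S3 bucket that stores the cluster state
aws s3 mb s3://ranga-kops-state --region ap-south-1

# Tell kops where its state store lives
export KOPS_STATE_STORE=s3://ranga-kops-state
```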

Create an SSH key using the below command.
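A minimal sketch; kops picks up ~/.ssh/id_rsa.pub by default, so the key is generated at the default path (only if one does not already exist):

```shell
# Generate an SSH key pair for the cluster nodes
mkdir -p ~/.ssh
[ -f ~/.ssh/id_rsa ] || ssh-keygen -t rsa -b 4096 -N "" -f ~/.ssh/id_rsa
```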

Execute the below command to create the Kubernetes cluster configuration, which will be stored in the S3 bucket created above. This only creates the cluster configuration, not the cluster itself.
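Assuming the hosted zone ranga.com and the state bucket from earlier (names, counts, and instance sizes below are examples), the configuration step looks like:

```shell
# Generate the cluster configuration in the S3 state store (no resources yet)
kops create cluster \
  --name=ranga.com \
  --state=s3://ranga-kops-state \
  --zones=ap-south-1b \
  --dns private \
  --node-count=2 \
  --node-size=t2.micro \
  --master-size=t2.micro
```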

Create the cluster by executing the below command. This will create the cluster in the zone “ap-south-1b” with the cluster name “ranga.com”.
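Applying the stored configuration actually provisions the resources (state bucket name is the example from above):

```shell
kops update cluster ranga.com --state=s3://ranga-kops-state --yes
```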

Once you execute the above command, it creates all the necessary resources required for the cluster. Now execute the validate command to validate the cluster.
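The validation command (cluster and bucket names are the examples used above):

```shell
kops validate cluster --name ranga.com --state=s3://ranga-kops-state
```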

[Image: K8s cluster validation - KOPS]

It takes some time to create all the cluster resources, so execute the same command again after a few minutes. Once validation succeeds and you see “your cluster is ready”, list the nodes using the below command.
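Listing the nodes:

```shell
kubectl get nodes
```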

Creating Persistent Volume Claim

Once your Kubernetes cluster is set up and ready, we need to create a persistent volume and volume claim to store the database files. As we created the K8s cluster on Amazon Web Services, we will create the persistent volume using AWS EBS.

Use the below code to create a manifest file directly on the Ubuntu server for creating the persistent volume and volume claim.
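A sketch of such a manifest, written via a heredoc: the storage class dynamically provisions an EBS gp2 volume, and the claim name dbvolumeclaim is what the deployment refers to later (the storage class name and size are examples):

```shell
# Write the storage class + volume claim manifest to dbvclaim.yaml
cat <<'EOF' > dbvclaim.yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: mssql-storage
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: dbvolumeclaim
spec:
  storageClassName: mssql-storage
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 8Gi
EOF
```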

If you get parsing errors due to special characters when you create the .yaml file directly on the Ubuntu server, open Notepad on your local machine, paste the above code, save it as dbvclaim.yaml, and upload the dbvclaim.yaml file to the S3 bucket using the S3 console. Then, on the Ubuntu instance, execute the below command to download the same file from the S3 bucket to the Ubuntu server.
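The download step, assuming the example bucket name from earlier:

```shell
# Copy the manifest back from the S3 bucket to the current directory
aws s3 cp s3://ranga-kops-state/dbvclaim.yaml .
```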

Now apply the manifest file using kubectl to create the persistent volume and volume claim on the Kubernetes cluster.
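The apply command (dbvclaim.yaml is the filename used above):

```shell
kubectl apply -f dbvclaim.yaml
```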

After executing the above command, it creates a persistent volume with a random name and a volume claim named “dbvolumeclaim”.

Deploying SQL Server container in K8s cluster in AWS

Before deploying SQL Server in the K8s cluster created in AWS using KOPS, we need to grant permission to create a load balancer to the role attached to the master node in the cluster. Navigate to the IAM console and click on the role associated with the master node. In my case, it is masters.ranga.com. Click on Attach policies.

[Image: Attach policies in AWS]

Select ElasticLoadBalancingFullAccess and click on Attach policies. This policy allows the master node to create a load balancer and assign a public IP to the service.
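An equivalent CLI sketch (the role name masters.ranga.com is the one from the article; yours will match your cluster name):

```shell
aws iam attach-role-policy \
  --role-name masters.ranga.com \
  --policy-arn arn:aws:iam::aws:policy/ElasticLoadBalancingFullAccess
```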

Create the SA password in the Kubernetes cluster, which will be used in the SQL Server deployment. Your password should meet the password policy requirements; otherwise, the deployment fails and the pod shows a “CrashLoopBackOff” status.
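A sketch using a generic secret; the secret name mssql and the password value are examples (the example password satisfies SQL Server's complexity rules):

```shell
# Store the SA password as a Kubernetes secret
kubectl create secret generic mssql \
  --from-literal=SA_PASSWORD="MyC0m9l&xP@ssw0rd"
```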

Create a manifest file which will be used for deploying the SQL Server container image. Replace the claimName value with the name of your persistent volume claim. You can create the .yaml file directly on the server, or upload it to S3 from your local machine and download it back to the Ubuntu server.
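A sketch of the deployment plus a LoadBalancer service, again written via a heredoc. The image tag, labels, and object names are examples; the secret name matches the one created above, and claimName must match your volume claim:

```shell
# Write the SQL Server deployment + service manifest to sqldeployment.yaml
cat <<'EOF' > sqldeployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mssql-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mssql
  template:
    metadata:
      labels:
        app: mssql
    spec:
      containers:
      - name: mssql
        image: mcr.microsoft.com/mssql/server:2017-latest
        ports:
        - containerPort: 1433
        env:
        - name: ACCEPT_EULA
          value: "Y"
        - name: SA_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mssql
              key: SA_PASSWORD
        volumeMounts:
        - name: mssqldb
          mountPath: /var/opt/mssql
      volumes:
      - name: mssqldb
        persistentVolumeClaim:
          claimName: dbvolumeclaim
---
apiVersion: v1
kind: Service
metadata:
  name: mssql-deployment
spec:
  selector:
    app: mssql
  ports:
  - protocol: TCP
    port: 1433
    targetPort: 1433
  type: LoadBalancer
EOF
```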

Apply the manifest file using kubectl to create a deployment in the K8s cluster.
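Assuming the manifest was saved as sqldeployment.yaml:

```shell
kubectl apply -f sqldeployment.yaml
```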

Once you execute the above command, it creates a deployment with the name mssql-deployment in the Kubernetes cluster, and a pod is created with SQL Server running in it. Execute the below command to get the status of the pod.
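Checking the pod status:

```shell
kubectl get pods
```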

[Image: pod status in K8s cluster]

Once the container is created, the status of the pod changes to Running. To see the details of the pod, execute the below command, replacing the name of the pod.
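The describe command, with the pod name as a placeholder:

```shell
# Substitute the pod name reported by "kubectl get pods"
kubectl describe pod <pod-name>
```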

In case of any errors during deployment, use the below command, replacing the pod name with the name of your pod, to get the logs.
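The logs command, again with a placeholder pod name:

```shell
kubectl logs <pod-name>
```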

To find the public IP of the SQL Server, execute the below command. This command lists all the available services in the Kubernetes cluster with the service name, internal IP, and external IP.
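Listing the services:

```shell
kubectl get svc
```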

[Image: public IP of SQL Server service in K8s cluster]

To connect to the SQL Server, open SQL Server Management Studio, input the copied external IP, and enter the password of the SA login which you created.

Deleting the Cluster using KOPS

Execute the below command to delete the K8s cluster using KOPS. Replace ranga.com with the name of your K8s cluster. This will delete all the resources created by KOPS. Before executing this command, you need to remove the “ElasticLoadBalancingFullAccess” policy that you attached manually to the role associated with the master node.
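The delete command (cluster and bucket names are the examples used throughout):

```shell
kops delete cluster ranga.com --state=s3://ranga-kops-state --yes
```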

Once you execute the above command, it takes a few minutes to delete the Kubernetes cluster, and the message “Deleted cluster: <cluster name>” is displayed at the end.

[Image: deleting K8s cluster]

Ranga Babu