INTRODUCTION
The purpose of this blog is to walk you through:
- Deploying a Kubernetes cluster in a private network using kops in HA mode, where HA mode includes:
  - Multi-AZ deployment of master and worker nodes, where each master node, apart from hosting its control-plane services, also runs an etcd cluster member.
  - An Auto Scaling group for each master and worker node group, to maintain its desired count in case of any hardware failure.
- Deploying a bastion host using kops for accessing the private cluster nodes.
- Deploying the Kubernetes dashboard with root access.
- Editing master and worker nodes.
Kops is a command line utility to create, destroy, and manage Kubernetes clusters on AWS infrastructure in a clean and automated way. It also provides support for other cloud providers such as GCE and VMware vSphere, but that support is still in the testing phase and has not been declared officially stable.
Kops makes use of an S3 bucket (the "state store") to store and maintain revisions of the Kubernetes cluster configuration.
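Once a cluster has been created (as we do below), you can inspect the state kops keeps there. A minimal sketch, assuming the bucket name k8s-repo used throughout this post:

# List the objects kops stores in the state bucket.
aws s3 ls s3://k8s-repo/ --recursive | head

# Ask kops which clusters are registered in this state store.
kops get clusters --state s3://k8s-repo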
Prerequisites
In order to get started, you need to install the following tools.
Install kops
curl -LO https://github.com/kubernetes/kops/releases/download/$(curl -s https://api.github.com/repos/kubernetes/kops/releases/latest | grep tag_name | cut -d '"' -f 4)/kops-linux-amd64
chmod +x kops-linux-amd64
sudo mv kops-linux-amd64 /usr/local/bin/kops
Install kubectl
curl -Lo kubectl https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin/kubectl
Install aws-cli
# Install pip for python3 (recommended).
sudo apt-get install python3-pip

# Install awscli using the pip3 command.
pip3 install --upgrade --user awscli

# Export the following path in the .profile file present in the user's home directory.
export PATH=/home/ubuntu/.local/bin:$PATH

# Save the file and verify the installation by checking the aws-cli version.
aws --version
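Before moving on, it is worth confirming that all three tools are installed and on the PATH:

# Each command should print a version rather than "command not found".
kops version
kubectl version --client
aws --version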
1. Deploy kubernetes cluster
Export the following variables in the .profile file.
# Load access and secret key.
export AWS_ACCESS_KEY_ID=
export AWS_SECRET_ACCESS_KEY=

# DNS settings: the k8s.local suffix tells kops to use gossip-based cluster DNS instead of our own domain's DNS.
export NAME=k8s-cluster.k8s.local

# KOPS_STATE_STORE is used to refer to the bucket name in kops commands.
export KOPS_STATE_STORE=s3://k8s-repo
Create the S3 bucket with the following AWS CLI command.
aws s3api create-bucket --bucket k8s-repo --region us-east-1
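Since kops keeps revisions of the cluster configuration in this bucket, it is also worth enabling S3 versioning (as the kops documentation recommends) so earlier revisions can be recovered:

# Enable versioning so previous revisions of the cluster state can be restored.
aws s3api put-bucket-versioning --bucket k8s-repo --versioning-configuration Status=Enabled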
Enter the following command to declare the Kubernetes cluster configuration.
kops create cluster \
  --cloud aws \
  --authorization rbac \
  --topology private \
  --networking kopeio-vxlan \
  --master-count 3 \
  --master-size t2.micro \
  --master-zones us-east-1a,us-east-1b,us-east-1c \
  --master-volume-size=10 \
  --node-count 2 \
  --node-size t2.micro \
  --node-volume-size=10 \
  --zones us-east-1a,us-east-1b,us-east-1c \
  --ssh-public-key ~/.ssh/id_rsa.pub \
  --state ${KOPS_STATE_STORE} \
  ${NAME}
Where:
- --cloud represents the cloud provider.
- --authorization represents the authorization mode at the API server.
- --topology represents the public/private mode of cluster node networking.
- --networking represents the Container Network Interface (CNI) provider.
- --master-zones represents the list of availability zones where master nodes are deployed in HA mode.
- --zones represents the list of availability zones where worker nodes are deployed in HA mode.
- --ssh-public-key represents the public key to be imported into the cluster nodes for SSH access.
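At this point nothing has been created in AWS yet; kops has only written the cluster spec to the state store. You can inspect the generated spec, or dry-run the deployment, before applying anything:

# Inspect the generated cluster spec stored in the state store.
kops get cluster ${NAME} -o yaml

# Without --yes, kops update performs a dry run and only prints the planned changes.
kops update cluster ${NAME}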
Enter the following commands to deploy the Kubernetes cluster on AWS infrastructure.
# Deploy the kubernetes cluster configuration.
kops update cluster $NAME --yes

# Validate the kubernetes cluster.
kops validate cluster $NAME
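Validation can take a few minutes while the instances boot. Once it passes, kops has already written credentials for the cluster to your kubeconfig, so a quick sanity check from the workstation:

# Re-export the kubeconfig for the cluster if needed.
kops export kubecfg ${NAME}

# All three masters and both worker nodes should report Ready.
kubectl get nodes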
2. Deploy bastion instance to access cluster instances
# Declare the bastion server configuration.
kops create instancegroup bastions --role Bastion --subnet utility-us-east-1a --name ${NAME}

# Deploy the bastion server configuration on AWS infrastructure.
kops update cluster ${NAME} --yes

# Get the endpoint of the bastion's load balancer.
aws elb --output=table describe-load-balancers --region us-east-1 | grep DNSName.\*bastion | awk '{print $4}'
SSH to the bastion server, assuming the endpoint obtained from the last command is bastion-k8s-cluster-k8s-l-vj88hv-150112292.us-east-1.elb.amazonaws.com.
# Start an ssh-agent session.
eval `ssh-agent -s`

# Add the private key id_rsa to the ssh-agent session.
ssh-add ~/.ssh/id_rsa

# SSH to the bastion server with agent forwarding enabled.
ssh -A admin@bastion-k8s-cluster-k8s-l-vj88hv-150112292.us-east-1.elb.amazonaws.com

# From the bastion, SSH to any cluster node.
ssh admin@<node-name>

# Example:
ssh admin@ip-172-20-57-84.ec2.internal
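Alternatively, OpenSSH's -J (ProxyJump) flag can reach a cluster node through the bastion in a single command; a sketch using the example endpoints above:

# Jump through the bastion straight to a cluster node in one command (OpenSSH 7.3+).
ssh -J admin@bastion-k8s-cluster-k8s-l-vj88hv-150112292.us-east-1.elb.amazonaws.com admin@ip-172-20-57-84.ec2.internal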
3. Deploy dashboard UI with root access
# Enter the following command to deploy the Dashboard UI on the cluster.
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta4/aio/deploy/recommended.yaml
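Confirm the dashboard pods are up before proceeding:

# The dashboard and its metrics scraper should both reach Running state.
kubectl -n kubernetes-dashboard get pods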
Create a file named dashboard-rootuser.yaml with the following content, then execute kubectl apply -f dashboard-rootuser.yaml to create a ServiceAccount named root-user for accessing the Kubernetes dashboard.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: root-user
  namespace: kubernetes-dashboard
Create a file named rootuser-role.yaml with the following content, then execute kubectl apply -f rootuser-role.yaml to bind a role (i.e. grant permissions) to the ServiceAccount named root-user.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: root-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: root-user
  namespace: kubernetes-dashboard
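A quick check that both objects were created:

# The ServiceAccount and its cluster-admin binding should both be listed.
kubectl -n kubernetes-dashboard get serviceaccount root-user
kubectl get clusterrolebinding root-user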
Execute the following command to get the token that is required to log in to the Kubernetes dashboard.
kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep root-user | awk '{print $1}')
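Note that this relies on the token secret that older Kubernetes versions create automatically for each ServiceAccount. On Kubernetes 1.24 and later such secrets are no longer auto-created, so on a newer cluster you would mint a token explicitly instead:

# On Kubernetes 1.24+, request a short-lived token for the ServiceAccount.
kubectl -n kubernetes-dashboard create token root-user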
Execute the following command to expose the Kubernetes dashboard on the localhost network.

kubectl proxy

Then open the following URL (the standard proxy path for the kubernetes-dashboard namespace) in the browser and log in with the token obtained above:

http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/
4. Edit master and worker nodes
# Optional: set the default editor.
export EDITOR=nano
Edit worker nodes
# Edit your node instance group.
kops edit ig --name=${NAME} nodes

# Try editing the number of worker nodes or upgrading t2.micro to t2.small.
# Save your configuration.

# Finally, apply the change to your cluster.
kops update cluster ${NAME} --yes
kops rolling-update cluster --yes
Edit your master node
# Edit your master node instance group in availability zone us-east-1a.
kops edit ig --name=${NAME} master-us-east-1a

# Try editing the number of master nodes or upgrading t2.micro to t2.small.
# Save your configuration.

# Finally, apply the change to your cluster.
kops update cluster ${NAME} --yes
kops rolling-update cluster --yes
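Running the rolling update without --yes first is a safe habit: it performs a dry run and only prints which nodes would be replaced.

# Dry run: show which instance groups need a rolling update without touching them.
kops rolling-update cluster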
# Edit cluster-wide settings (rather than a single instance group) with:
kops edit cluster ${NAME}