vSphere with Tanzu (VKS) integration with NSX-T Part-3
In the first part of this series, we enabled vSphere with Tanzu on a compute cluster, allowing developers to use the cluster to run K8s and container-based applications. vSphere with Tanzu leverages NSX-T as the networking solution, vSAN as the storage solution, and so on.
In the second part, we successfully created our first namespace, called namespace-01. We provided the necessary permissions, storage policies, VM classes, and resource limits. We also added a content library for our VM classes.
In this part, we will provision our first vSphere Pod in the previously created “namespace-01”. We will also observe the changes in the NSX-T and vSphere environments and try to access the application from the outside world.
Diagram
To deploy Kubernetes on the vSphere cluster, we have provisioned three ESXi hosts. These hosts are connected to a Distributed Switch (vDS) on uplink-1 (vmnic0) and uplink-2 (vmnic1) and are prepared for NSX-T consumption. The compute cluster is already enabled for vSphere HA (default settings) and DRS (fully automated). An Edge node, “SA-EDGE-01”, is already deployed, as is a Tier-0 gateway (K8s-Tier-0) connected to the physical environment via BGP. All routes are redistributed to the physical router. For storage, a vSAN (OSA) datastore has been created to provide storage to the K8s Pods.

In the first blog, we enabled Workload Management via the vSphere Client. A Supervisor cluster of three control plane VMs has been deployed, and a VIP is assigned to this cluster. The NSX-T native load balancer is used to distribute traffic to the cluster.
In the second blog, we deployed the namespace and granted the necessary permissions for our developer team to access it. Using this “namespace-01”, we will deploy our first vSphere Pod.
NB: In an upcoming part, we will check the prerequisites required to enable Workload Management for the VI workload domain via SDDC Manager.
Configuration
Before we continue, we need to access our namespace from a Linux machine using kubectl (the K8s command-line tool). To do so, we log in to our Linux machine and connect to the VIP of our Supervisor cluster, providing the developer team credentials:
“kubectl vsphere login --server=192.168.30.33 --insecure-skip-tls-verify”
192.168.30.33 is our Supervisor VIP address, which we already validated in the first and second blogs. After a successful login, the devops01 user has access to namespace-01.

To switch to our namespace, we use “kubectl config use-context namespace-01”. As our devops01 user already has access to the namespace, the context switch succeeds without issue.
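For reference, the full login and context-switch sequence looks like the following. The VIP and the namespace come from this series; the username format (devops01@vsphere.local) is an assumption, so adjust it to your SSO domain.

kubectl vsphere login --server=192.168.30.33 \
  --vsphere-username devops01@vsphere.local \
  --insecure-skip-tls-verify
kubectl config use-context namespace-01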
Now is the right time to work on our YAML file. We need to provide the necessary configuration to create our first vSphere Pod.

In the above snapshot, we are validating the content of the deployment YAML file. The file includes the Deployment details and the Pod template, summarized in the list below (a hedged reconstruction of the file follows the list).
kind -> Deployment.
Name of the deployment -> nginx-deployment.
Label imposed on the deployment -> nginx.
Replicas -> 1.
Match labels of the Pod -> nginx.
In the section below, we define the template for the Pod.
Labels -> nginx (imposed on the Pod).
Name of the container -> nginx.
Registry hosting the container image -> 172.20.10.30.
Name of the image -> nginx:1.16.
Port on which the nginx application is exposed -> 80.
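Since the snapshot of the file is not reproduced here, below is a hedged reconstruction of what nginx-deployment.yaml would look like given the values above. The label key “app” and the exact image path under the 172.20.10.30 registry are assumptions; adjust them to your environment.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx                         # label imposed on the deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx                       # must match the Pod template labels
  template:
    metadata:
      labels:
        app: nginx                     # label imposed on the Pod
    spec:
      containers:
      - name: nginx
        image: 172.20.10.30/nginx:1.16 # registry path is an assumption
        ports:
        - containerPort: 80            # port the nginx application listens on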
After reviewing the configuration, we create the K8s Deployment from nginx-deployment.yaml. It will create one vSphere Pod, which exposes the application on port 80.
To create the deployment, run “kubectl create -f nginx-deployment.yaml”.

The deployment is successfully created. To verify the creation of the vSphere Pod, run “kubectl get pods”.
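Since the screenshot is not reproduced here, the output will look roughly like the following; the generated Pod name suffix and the age are illustrative only.

NAME                                READY   STATUS    RESTARTS   AGE
nginx-deployment-5c689d88bb-qh2x7   1/1     Running   0          36s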

Let’s carry out a last check to confirm the creation of the K8s components (Pod, Deployment, ReplicaSet, and so on). This check can be performed using “kubectl get all”.

Let’s verify our newly created vSphere Pod in the vSphere Client under Workload Management. Navigate to Workload Management > Namespaces > namespace-01 > Compute > vSphere Pods.

To verify the deployment, navigate to Workload Management > Namespaces > namespace-01 > Compute > Deployments.

As a final verification, let’s review the details of our nginx Pod in the vSphere UI.
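The introduction mentioned accessing the application from the outside world. With NSX-T as the networking provider, exposing the Deployment through a Service of type LoadBalancer causes the NSX-T native load balancer to allocate a VIP from the ingress range. A minimal sketch, assuming the label key “app” from the Deployment reconstruction above (the Service name and file name are also assumptions):

apiVersion: v1
kind: Service
metadata:
  name: nginx-service       # name is an assumption
spec:
  type: LoadBalancer        # NSX-T allocates an external VIP for this Service
  selector:
    app: nginx              # selects the Pods created by the Deployment above
  ports:
  - port: 80                # port exposed on the VIP
    targetPort: 80          # containerPort of the nginx container

After applying it with “kubectl create -f nginx-service.yaml”, “kubectl get svc” shows the external IP through which the application can be reached.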

Summary
In the first part of this series, we enabled vSphere with Tanzu on a compute cluster, allowing developers to use the cluster to run K8s and container-based applications. vSphere with Tanzu leverages NSX-T as the networking solution, vSAN as the storage solution, and so on.
In the second part, we successfully created our first namespace, called namespace-01. We also provided the necessary permissions, storage policies, VM classes, and resource limits, and added a content library for our VM classes.
In this part, we successfully deployed our first vSphere Pod in the namespace created in the earlier part.
NB: Before setting up our first TKG cluster, we will provision vSphere with Tanzu using NSX ALB. So, in the next blog, we will configure Workload Management using a vDS and an external load balancer.
In upcoming parts, we will provision our first TKG cluster, which will allow us to use the Harbor repository. Along with it, we will deploy our first application using K8s. We will also verify the requirements from SDDC Manager to enable VMware Kubernetes services on the VI workload domain.