vSphere with Tanzu (VKS) integration with NSX-T Part-5
In the first part of this series, we enabled vSphere with Tanzu on a compute cluster. This allowed developers to use the cluster to run K8s and container-based applications. vSphere with Tanzu leverages NSX-T as the networking solution, vSAN as the storage solution, and so on.
In the second part, we successfully created our first namespace, called namespace-01. We provided the necessary permissions, storage policies, VM classes, and resource limits, and also added a content library to the namespace.
In the third part, we successfully deployed our first vSphere Pod in the namespace created earlier. The vSphere Pod is deployed using NSX-T and is a single-zone deployment.
In the fourth part, we explored the deployment of vSphere with Tanzu (Workload Management) using an external load balancer, VMware NSX Advanced Load Balancer, together with the vSphere networking construct vDS (vSphere Distributed Switch).
In this part of the blog, we will deploy our first Tanzu Kubernetes cluster. The cluster will be created on top of the namespace built earlier in this series, and it will use the external load balancer and the vSphere logical networking construct (vSphere Distributed Switch).
Diagram
To enable Kubernetes on the vSphere cluster (VKS), we have deployed a cluster with five ESXi hosts. These hosts are connected to dvs-SA-Datacenter (vDS) on uplink-1 (vmnic0), uplink-2 (vmnic1), uplink-3 (vmnic2), and uplink-4 (vmnic3). All ESXi hosts in the cluster are part of the same vDS. This vDS is configured with four port groups: “pa-sa-infra-management”, “pa-sa-vmotion”, “pa-sa-tanzu-management”, and “pa-sa-tanzu-workload”.
Portgroup “pa-sa-infra-management (172.20.10.0/24)” will host all management VMs, including the AVI Controller, Service Engines, NSX-T Manager, and vCenter. The same portgroup is also used by the ESXi hosts for management.
Portgroup “pa-sa-tanzu-management (172.20.12.0/24)” will host the VIP network. A VIP will be assigned to the supervisor cluster from this network, and the VIPs are hosted on the AVI Service Engines.
Portgroup “pa-sa-tanzu-workload (192.168.150.0/24)” will be used on the supervisor cluster. All supervisor VMs will get an IP address from this network.
The AVI Controller (NSX-ALB) is already deployed and connected to “pa-sa-infra-management”. The default cloud and Service Engine group are already configured, and a certificate has already been provisioned and installed on the NSX-ALB controller.
Workload Management will deploy SEs in two-arm mode on the compute cluster provisioned for vSphere with Tanzu. Each SE will have three interfaces: one in “pa-sa-infra-management”, a second in “pa-sa-tanzu-management”, and a third in “pa-sa-tanzu-workload”.
The diagram below depicts the different networks consumed by Workload Management, along with the placement of the SEs on the compute cluster.
NB: Only four ESXi hosts are shown in the diagram below; our lab has five ESXi hosts.

Configuration
To set up a TKG cluster (Tanzu Kubernetes cluster), we must prepare a YAML manifest for it. This file specifies the cluster name, the Kubernetes version, the number of control plane and worker nodes, and other settings.

In the above image, we are creating our first TKG cluster in the namespace “namespace-01” created earlier.
We need to incorporate the below details in the YAML file; a sample manifest built from these values is shown after the table.
Parameter | Value
Name of the TKG cluster | tkc-01
Namespace | namespace-01
Service CIDR | 198.51.100.0/12
Pod CIDR (cidrBlocks) | 192.0.2.0/16
serviceDomain | Cluster service domain name
Kubernetes version | v1.23.8
Number of control plane nodes | 1
Number of worker nodes | 2
OS for control plane and worker nodes | Ubuntu
VM class | best-effort-large
Default storage class and storage class | Sp-Tanzu
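For reference, here is a minimal sketch of what tkc-01.yaml could look like using the values above. It assumes the v1alpha3 TanzuKubernetesCluster API; the TKR name, the Ubuntu OS-selection annotation, the node pool name, the lowercase storage class name sp-tanzu, and the service domain cluster.local are placeholders that must match what is published in your environment and content library.

apiVersion: run.tanzu.vmware.com/v1alpha3
kind: TanzuKubernetesCluster
metadata:
  name: tkc-01
  namespace: namespace-01
  annotations:
    # Placeholder: selects the Ubuntu node OS image for the chosen release
    run.tanzu.vmware.com/resolve-os-image: os-name=ubuntu
spec:
  topology:
    controlPlane:
      replicas: 1
      vmClass: best-effort-large
      storageClass: sp-tanzu               # assumed lowercase storage class name
      tkr:
        reference:
          name: v1.23.8---vmware.2-tkg.2   # placeholder TKR name for K8s v1.23.8
    nodePools:
    - name: worker-nodepool-01             # hypothetical node pool name
      replicas: 2
      vmClass: best-effort-large
      storageClass: sp-tanzu
  settings:
    storage:
      defaultClass: sp-tanzu
    network:
      serviceDomain: cluster.local         # placeholder service domain
      services:
        cidrBlocks: ["198.51.100.0/12"]
      pods:
        cidrBlocks: ["192.0.2.0/16"]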
After reviewing our TKG YAML file, we can deploy the TKG cluster using either the kubectl or the Tanzu command-line interface.
In order to install the Tanzu command-line interface for use with the “Supervisor Cluster”, we need to follow the below steps.
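As a rough sketch, the steps look like the following, assuming a Linux jump host and a supervisor control plane VIP of 172.20.12.10 (a placeholder address from our VIP network); the CLI bundle file name and its internal paths are placeholders that vary by version.

# Download the kubectl vSphere plugin from the supervisor control plane VIP
# (172.20.12.10 is a placeholder; substitute your supervisor VIP).
curl -k -o vsphere-plugin.zip https://172.20.12.10/wcp/plugin/linux-amd64/vsphere-plugin.zip
unzip vsphere-plugin.zip
sudo install bin/kubectl bin/kubectl-vsphere /usr/local/bin/

# Install the Tanzu CLI binary from the VMware-provided bundle
# (file name and path below are placeholders and vary by CLI version).
tar -xvf tanzu-cli-bundle-linux-amd64.tar.gz
sudo install cli/core/*/tanzu-core-linux_amd64 /usr/local/bin/tanzu
tanzu version
tanzu plugin sync   # on newer CLI versions, installs/updates the cluster plugins

# Log in to the supervisor and switch to the namespace created earlier.
kubectl vsphere login --server=172.20.12.10 \
  --vsphere-username administrator@vsphere.local \
  --insecure-skip-tls-verify \
  --tanzu-kubernetes-cluster-namespace namespace-01
kubectl config use-context namespace-01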

To begin the deployment of the TKG cluster, we use “kubectl create -f tkc-01.yaml”.

To view the progress of the TKG cluster deployment, we can use “kubectl get cluster -n namespace-01”.
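A few additional kubectl checks can help while the cluster is coming up; the resource names here follow the values used above, and “tanzukubernetescluster” is the supervisor CRD backing our cluster object.

kubectl get tanzukubernetescluster tkc-01 -n namespace-01        # overall phase and ready status
kubectl describe tanzukubernetescluster tkc-01 -n namespace-01   # events and node status details
kubectl get virtualmachines -n namespace-01                      # control plane and worker VMs being provisioned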

Let’s validate the provisioning of the TKG cluster using the Tanzu command-line interface.
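Assuming the Tanzu CLI installed earlier is pointed at the supervisor context, commands along these lines can be used (flags may vary slightly between CLI versions):

tanzu cluster list                          # lists workload clusters visible in the current context
tanzu cluster get tkc-01 -n namespace-01    # detailed status, including control plane and worker readiness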

The deployment takes approximately 20 minutes to bring up the Kubernetes cluster on the vSphere namespace.

The TKG cluster has been created successfully.
Let’s view the health of the control plane nodes of our TKG cluster using the Tanzu command-line interface. In the below screenshot, we used “tanzu cluster machinehealthcheck control-plane get tkc-01”.


In the below screenshot, we confirm the health of the worker nodes deployed in our TKG cluster. To check their health, we used “tanzu cluster machinehealthcheck node get tkc-01”.
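As a cross-check with kubectl, we can also log in to the new cluster itself and confirm that all nodes report Ready (172.20.12.10 is again a placeholder for the supervisor control plane VIP):

kubectl vsphere login --server=172.20.12.10 \
  --vsphere-username administrator@vsphere.local \
  --insecure-skip-tls-verify \
  --tanzu-kubernetes-cluster-namespace namespace-01 \
  --tanzu-kubernetes-cluster-name tkc-01
kubectl config use-context tkc-01
kubectl get nodes -o wide    # expect one control plane node and two worker nodes in Ready state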


Let’s check the status of our first TKG cluster in the vSphere Client. Navigate to Inventory > SA-Datacenter > Namespaces > namespace-01 > tkc-01.

In the above screenshot, three VMs are deployed: one control plane VM and two worker nodes.
Summary
In the first part of this series, we enabled vSphere with Tanzu on a compute cluster. This allowed developers to use the cluster to run K8s and container-based applications. vSphere with Tanzu leverages NSX-T as the networking solution, vSAN as the storage solution, and so on.
In the second part, we successfully created our first namespace, called namespace-01. We provided the necessary permissions, storage policies, VM classes, and resource limits, and also added a content library to the namespace.
In the third part, we successfully deployed our first vSphere Pod in the namespace created in an earlier part.
In the fourth part, we enabled vSphere with Tanzu on a compute cluster using an external load balancer (NSX-ALB) and a vDS (vSphere Distributed Switch).
In this part, we successfully provisioned our first TKG cluster in the previously created namespace “namespace-01”, which runs on top of our supervisor cluster “sa-supervisor-01”.
In upcoming parts, we will provision different supervisor services such as Contour, ExternalDNS, and Harbor. We will also deploy our first application using K8s and verify the requirements from SDDC Manager to allow VMware Kubernetes services on a VI workload domain.