vSphere with Tanzu (VKS) integration with NSX-T Part-1
Introduction
In this blog series, we will discuss how “vSphere with Tanzu” integrates with NSX-T. To use the functionality of NSX-T, we will activate Workload Management on the vSphere cluster, which provisions Kubernetes services on the compute cluster. vSphere with Tanzu uses NSX-T for networking, vSAN for storage, and ESXi for compute, providing a Kubernetes platform on which customers can develop their container-based applications.
NSX-T provides networking capabilities such as load balancing, security, switching, and routing. We will discuss these in more detail throughout the series.
There are two solutions through which we can deploy Kubernetes (vSphere with Tanzu) on a vSphere cluster. The first uses NSX-T together with its native load-balancing solution. The second uses a vSphere Distributed Switch with an external load balancer such as AVI or HAProxy; that approach is out of scope for this blog.
Diagram
To deploy Kubernetes on the vSphere cluster, we have provisioned three ESXi hosts. These hosts are connected to a vSphere Distributed Switch (vDS) via uplink-1 (vmnic0) and uplink-2 (vmnic1) and are prepared for NSX-T consumption. The compute cluster is already enabled for vSphere HA (default settings) and DRS (fully automated). An Edge node, “SA-EDGE-01“, is already deployed, and a Tier-0 gateway (K8s-Tier-0) has been created. It is connected to the physical environment via BGP, and all routes are redistributed to the physical router. For storage, a vSAN (OSA) datastore has been created to provide storage to the K8s Pods.

To set up Kubernetes services on the vSphere cluster, a supervisor cluster of three VMs will be deployed. To access the supervisor cluster, a VIP will be provisioned on the NSX-T native load balancer.
Prerequisites
To provision K8s on a vSphere cluster, certain prerequisites have to be configured. We discuss the requirements in the section below.
- The cluster should be configured for DRS (fully automated) and vSphere HA (default settings).
- NSX-T Managers must be configured, and the ESXi hosts must be prepared for NSX-T consumption.
- An Edge cluster should be provisioned, and a Tier-0 gateway must be created using that Edge cluster.
- The Tier-0 gateway must be connected to the physical devices via either BGP or static routing (a quick API-based check of the BGP sessions is sketched after this list).
- Storage policies have to be created for the supervisor cluster and the Kubernetes nodes (worker and control plane nodes). In our scenario, we are using the default storage policies.
- A content library has to be created to host the Tanzu Kubernetes Grid templates.
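Before running the wizard, it is worth confirming that the Tier-0 BGP sessions to the physical router are established. Below is a minimal sketch that queries the NSX-T Policy API; the Manager FQDN (nsxmgr.vclass.local), the credentials, the Tier-0 ID (K8s-Tier-0), and the locale services ID (default) are all assumptions for this lab and may differ in your environment.

```python
# Minimal sketch: check BGP neighbor state on the Tier-0 via the NSX-T
# Policy API. All hostnames, credentials, and IDs are lab assumptions.
import requests

NSX = "https://nsxmgr.vclass.local"   # assumed NSX-T Manager FQDN
AUTH = ("admin", "VMware1!VMware1!")  # assumed lab credentials
BASE = f"{NSX}/policy/api/v1/infra/tier-0s/K8s-Tier-0/locale-services/default"

# List the BGP neighbors configured on the Tier-0 gateway.
neighbors = requests.get(f"{BASE}/bgp/neighbors", auth=AUTH, verify=False).json()

for n in neighbors.get("results", []):
    # Query the realized status of each neighbor; an "ESTABLISHED"
    # connection state confirms the session to the physical router is up.
    # (Exact field names can vary slightly between NSX-T versions.)
    status = requests.get(f"{BASE}/bgp/neighbors/{n['id']}/status",
                          auth=AUTH, verify=False).json()
    print(n.get("neighbor_address"), "->", status.get("connection_state"))
```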
Configuration
We have discussed the prerequisites. Now it is time to enable K8s on the vSphere compute cluster. To start the configuration, we will navigate to Menu > Workload Management.

Let’s get started with provisioning vSphere with Tanzu. In the next step, we will select a vCenter along with a networking solution and click “Next“. In this series, we are covering only NSX-T.

In the next step, we will select the compute cluster on which we want to configure Kubernetes. In our scenario, we will select “SA-Compute-01“.

In the step below, we need to select the resource allocation, which determines the size of the supervisor cluster VMs. Then, click “Next“.

In the step below, we need to provide storage policies for the Control Plane Node, Ephemeral Disks, and Image Cache. We have selected “K8s Storage Policy“, which is the default storage policy.

We also need to provide the management network details, including the distributed port group and the IP addresses for the Supervisor VMs’ management interfaces.

In the step below, we need to provide details such as the distributed switch, Edge cluster, and API server endpoint FQDN. The FQDN should resolve to the VIP address of the supervisor cluster. We also need to specify the DNS server details and the Service, Pod, Ingress, and Egress CIDRs. A quick way to sanity-check these ranges for overlaps is shown below.
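The wizard rejects overlapping ranges, so it helps to validate the CIDRs before submitting. The snippet below is a quick sanity check using only the Python standard library; the CIDR values are assumed lab inputs (the Ingress range here would contain the eventual control plane VIP, 192.168.30.33).

```python
# Sanity-check that the wizard's CIDR inputs do not overlap.
import ipaddress
from itertools import combinations

cidrs = {
    "Pod CIDR":     "10.244.0.0/21",     # assumed default
    "Service CIDR": "10.96.0.0/24",      # assumed default
    "Ingress CIDR": "192.168.30.32/27",  # assumed lab value
    "Egress CIDR":  "192.168.30.64/27",  # assumed lab value
}

nets = {name: ipaddress.ip_network(cidr) for name, cidr in cidrs.items()}
for (a, net_a), (b, net_b) in combinations(nets.items(), 2):
    state = "OVERLAP" if net_a.overlaps(net_b) else "ok"
    print(f"{a} vs {b}: {state}")
```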

In the second-to-last step, we need to specify the “Content Library“ details. We select the Kubernetes content library we created earlier; creating it is not in the scope of this series.


In the final step, we need to review the configuration and click on “Finish“.

If the configuration provided in the earlier steps is correct, it will take 10-15 minutes to deploy the supervisor cluster on the compute cluster.
Verification
After the configuration, we need to verify the logical objects that Workload Management has created in vCenter and NSX-T.
In the step below, we confirm that the compute cluster “SA-Compute-01” has been enabled for K8s. The VIP for the control plane is “192.168.30.33“, and the FQDN “vspherek8s.vclass.local“ resolves to it.
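To double-check that DNS record outside the UI, here is a tiny sketch using only the Python standard library, with the FQDN and VIP taken from this lab:

```python
# Confirm the supervisor FQDN resolves to the load-balancer VIP.
import socket

fqdn, expected_vip = "vspherek8s.vclass.local", "192.168.30.33"
resolved = socket.gethostbyname(fqdn)
print(f"{fqdn} -> {resolved}")
assert resolved == expected_vip, "DNS record does not match the VIP"
```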

Now, we will assign the appropriate license to the supervisor cluster, as shown in the step below. To assign the license, we need to navigate to “Menu > Administration > Licenses > Assets > Supervisor clusters > Assign Licenses“.

The license has been applied to the supervisor cluster successfully.

In NSX-T Manager, let’s verify that the VIP for the supervisor cluster has been created. To verify it, navigate to Networking > Networking Services > Load Balancing > Virtual Services.
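The same check can be scripted against the NSX-T Policy API. The sketch below lists the load-balancer virtual servers; the Manager FQDN and credentials are, again, assumptions for this lab.

```python
# Minimal sketch: list load-balancer virtual servers via the NSX-T
# Policy API instead of the UI. FQDN and credentials are lab assumptions.
import requests

NSX = "https://nsxmgr.vclass.local"   # assumed NSX-T Manager FQDN
AUTH = ("admin", "VMware1!VMware1!")  # assumed lab credentials

vips = requests.get(f"{NSX}/policy/api/v1/infra/lb-virtual-servers",
                    auth=AUTH, verify=False).json()
for v in vips.get("results", []):
    # Expect an entry whose ip_address matches the supervisor VIP
    # (192.168.30.33 in this lab).
    print(v.get("display_name"), v.get("ip_address"), v.get("ports"))
```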

NB: Other NSX-T components are also created, such as segments, a Tier-1 gateway (connected to the Tier-0 gateway), and distributed firewall rules. We will talk about them in the next part of this series.
In vCenter, let’s verify that a new Namespaces resource pool has been created and that the three supervisor VMs are part of it. To check, navigate to Hosts & Clusters > Data Center > Clusters > Namespaces.

Let’s browse to the VIP or FQDN of the supervisor cluster.
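As a scripted alternative to the browser, a simple HTTPS probe confirms the landing page is reachable. The supervisor uses a self-signed certificate in this lab, hence verify=False.

```python
# Probe the supervisor landing page (the same page reached by browsing
# to the VIP or FQDN). verify=False because the lab cert is self-signed.
import requests

resp = requests.get("https://vspherek8s.vclass.local", verify=False)
print(resp.status_code)  # 200 indicates the landing page is up
```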

Summary
In this part of the series, we enabled vSphere with Tanzu on a compute cluster, which allows developers to use the cluster to run K8s and container-based applications. vSphere with Tanzu leverages NSX-T as the networking solution, vSAN as the storage solution, and so on.
In the upcoming parts, we will set up vSphere Pods and create our first namespace, including resource allocation and RBAC on the namespace.
We will then build our first container-based application using vSphere with Tanzu and observe the resulting changes in the vSphere Client, NSX-T, and storage.