vSphere with Tanzu (VKS) integration with NSX-T Part-4
In the first part of this series, we enabled vSphere with Tanzu on a compute cluster, allowing developers to use this cluster to run K8s and container-based applications. vSphere with Tanzu leverages NSX-T as the networking solution, vSAN as the storage solution, and so on.
In the second part, we successfully created our first namespace, called namespace-01. We also provided the necessary permissions, storage policies, VM classes, and resource limits, and we added a content library along with our VM classes.
In the last part, we successfully deployed our first vSphere Pod in the namespace created earlier. Note that NSX-T is mandatory for deploying vSphere Pods.
In this part, we will explore deploying vSphere with Tanzu (Workload Management) using an external load balancer, VMware NSX Advanced Load Balancer, together with the vSphere networking construct vDS (vSphere Distributed Switch).
Diagram
To enable Kubernetes on the vSphere cluster (VKS), we have deployed a cluster with five ESXi hosts. These hosts are connected to dvs-SA-Datacenter (vDS) on uplink-1 (vmnic0), uplink-2 (vmnic1), uplink-3 (vmnic2), and uplink-4 (vmnic3). All ESXi hosts in the cluster are part of the same vDS. This vDS is configured with four port groups: “pa-sa-infra-management”, “pa-sa-vmotion”, “pa-sa-tanzu-management”, and “pg-sa-tanzu-workload”.
Port group “pa-sa-infra-management” (172.20.10.0/24) will host all management VMs, including the AVI Controller, Service Engines, NSX-T Manager, and vCenter. The same port group is also used by the ESXi hosts.
Port group “pa-sa-tanzu-management” (172.20.12.0/24) will host the VIP network. A VIP will be assigned to the Supervisor cluster from this network, and the network will be hosted on the AVI Service Engines.
Port group “pa-sa-tanzu-workload” (192.168.150.0/24) will be used by the Supervisor cluster. All Supervisor VMs will get an IP address from this network.
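For anyone who wants to double-check this layout before enabling Workload Management, the short pyVmomi sketch below lists the port groups attached to dvs-SA-Datacenter. The vCenter FQDN and credentials are placeholders for this lab; substitute your own.

```python
# Minimal pyVmomi sketch: list the port groups on dvs-SA-Datacenter.
# The vCenter FQDN and credentials below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()                 # lab only: self-signed certificate
si = SmartConnect(host="sa-vcsa-01.vclass.local",      # hypothetical vCenter FQDN
                  user="administrator@vsphere.local",
                  pwd="***",                           # placeholder password
                  sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.DistributedVirtualSwitch], True)
    for dvs in view.view:
        if dvs.name == "dvs-SA-Datacenter":
            for pg in dvs.portgroup:                   # DistributedVirtualPortgroup objects
                print(pg.name)
finally:
    Disconnect(si)
```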
The AVI Controller (NSX-ALB) is already deployed and connected to “pa-sa-infra-management”. The default cloud and Service Engine group are already configured, and a certificate has already been provisioned and installed on the NSX-ALB controller.
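A quick way to confirm these prerequisites is to query the AVI REST API. The sketch below is a minimal example, assuming the controller IP used in this lab and a placeholder admin password; it lists the clouds and Service Engine groups known to the controller.

```python
# Minimal sketch against the AVI (NSX-ALB) REST API: list clouds and SE groups.
# Controller IP is the lab value; the admin password is a placeholder.
import requests
import urllib3

urllib3.disable_warnings()                              # lab controller uses a self-signed cert
AVI = "https://172.20.10.58"
s = requests.Session()
s.verify = False
s.post(f"{AVI}/login", json={"username": "admin", "password": "***"}).raise_for_status()

for obj in ("cloud", "serviceenginegroup"):
    for item in s.get(f"{AVI}/api/{obj}").json().get("results", []):
        print(obj, "->", item["name"])
```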
Workload Management will deploy the SEs in two-arm mode on the compute cluster provisioned for vSphere with Tanzu. Each SE will have three interfaces: one in “pa-sa-infra-management”, one in “pa-sa-tanzu-management”, and one in “pa-sa-tanzu-workload”.
The diagram below depicts the different networks consumed by Workload Management, along with the placement of the SEs on the compute cluster.
NB: Only four ESXi hosts are shown in the diagram below; our lab, however, has five ESXi hosts.

Configuration
A content library and tag-based storage policies are already configured. These vSphere constructs will be used while deploying the Supervisor cluster.
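If you prefer to verify these prerequisites from a script, the hedged sketch below uses the vSphere Automation REST API to list the configured content libraries. The endpoint paths shown are those of recent vSphere releases (older releases use the /rest/... equivalents), and the vCenter FQDN and credentials are placeholders.

```python
# Hedged sketch: list content libraries via the vSphere Automation REST API.
# Paths (/api/session, /api/content/libraries) are for recent vSphere releases;
# older releases use the /rest/... equivalents. FQDN/credentials are placeholders.
import requests
import urllib3

urllib3.disable_warnings()
VC = "https://sa-vcsa-01.vclass.local"                   # hypothetical vCenter FQDN
s = requests.Session()
s.verify = False
token = s.post(f"{VC}/api/session", auth=("administrator@vsphere.local", "***")).json()
s.headers["vmware-api-session-id"] = token

for lib_id in s.get(f"{VC}/api/content/libraries").json():
    lib = s.get(f"{VC}/api/content/libraries/{lib_id}").json()
    print(lib.get("name"), lib.get("type"))
```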
Now it is time to enable Workload Management for the compute cluster. To start, navigate to vSphere Client > Workload Management and click “Get Started”.

This time we are using a vDS with an external load balancer, so in the next step we need to select vDS.

Next, we need to provide the location where the Supervisor VMs will be deployed; in our lab, it is SA-Cluster-01. In this step, we also provide details such as the name of the Supervisor cluster and the name of the zone.

In the next step, we need to supply the storage policy that the Supervisor cluster VMs will use. We have configured a tag-based storage policy.

In the next step, we need to provide details about the external load balancer. NSX-ALB is already deployed in our environment.

The following details need to be provided about the external load balancer:
Name (should be DNS resolvable): sa-nsxalb-01 (already deployed)
Load Balancer Type: NSX Advanced Load Balancer
NSX-ALB IP address: 172.20.10.58:443
Username: admin, along with the password
Finally, we need to provide the NSX-ALB certificate, which can be fetched from the AVI Controller. Navigate to Templates > Security > SSL/TLS Certificates, click sa-nsxalb-01, and copy the contents of the certificate. Paste it into Server Certificate under Load Balancer (Workload Management). Refer to the above screenshot.
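Instead of copying the certificate from the AVI UI, the same PEM text can be pulled directly from the controller with a few lines of Python, assuming the controller presents the portal certificate (sa-nsxalb-01) on port 443:

```python
# Fetch the PEM certificate presented by the NSX-ALB controller on 443 so it can
# be pasted into the "Server Certificate" field of the Workload Management wizard.
import ssl

pem = ssl.get_server_certificate(("172.20.10.58", 443))
print(pem)
```

The output should match the certificate shown under Templates > Security > SSL/TLS Certificates.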

Now, let's continue with the next step and fill in the details for the Management Network. In this step, provide the network mode and the network name (dPG), along with the starting IP address, subnet mask, gateway, DNS server, and NTP server details.
NSX-ALB will assign the VIP IP addresses from this network, and the AVI Controller will manage the address allocation for it. We will check this during validation.
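The wizard reserves a block of five consecutive addresses starting from the Starting IP (three control plane VMs, one floating IP, and one reserved for upgrades). The small sketch below uses illustrative values to check that this block stays inside the management subnet entered above; substitute the subnet and Starting IP you actually used.

```python
# Illustrative sanity check: the five consecutive management addresses consumed
# by the Supervisor, starting from the Starting IP, must stay inside the subnet.
import ipaddress

mgmt_net = ipaddress.ip_network("172.20.10.0/24")      # illustrative management subnet
start_ip = ipaddress.ip_address("172.20.10.150")       # hypothetical Starting IP
block = [start_ip + i for i in range(5)]               # 3 control plane VMs + 1 floating + 1 upgrade
assert all(ip in mgmt_net for ip in block), "Starting IP block leaves the management subnet"
print([str(ip) for ip in block])
```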

Finally, we need to configure the workload network. In this section, we provide the network mode (Static), details of the internal network for K8s services, and the port group, IP address range, gateway, subnet mask, and so on.
The NSX-ALB SEs will have one interface in this network, and all Supervisor VMs will get an IP address from it. The AVI Controller will maintain the inventory of the allocated IP range shown in the screenshot below.
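One easy mistake at this step is overlapping the workload network with the internal Kubernetes services CIDR. The sketch below uses the lab workload subnet and a placeholder services CIDR; adjust both to the values entered in the wizard.

```python
# Check that the workload network does not overlap the internal K8s services CIDR.
# 10.96.0.0/23 is a placeholder for whatever services CIDR was entered in the wizard.
import ipaddress

workload_net = ipaddress.ip_network("192.168.150.0/24")   # pa-sa-tanzu-workload
services_cidr = ipaddress.ip_network("10.96.0.0/23")      # placeholder services CIDR
print("overlap!" if workload_net.overlaps(services_cidr) else "no overlap, safe to proceed")
```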

Now we need to select the control plane VM size and assign the FQDN for the VIP address allocated to the Supervisor cluster (vSphere IaaS control plane). Finally, we review our configuration and click “Finish”.
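Since this FQDN is expected to resolve to the Supervisor VIP, a quick DNS check before clicking Finish can save a retry; the hostname below is hypothetical.

```python
# Quick DNS check: the FQDN assigned to the Supervisor VIP should already resolve.
import socket

print(socket.gethostbyname("supervisor.vclass.local"))    # hypothetical FQDN; should print the VIP
```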

Validation
Deployment of the Supervisor cluster takes around 15 minutes. After the configuration completes successfully, it is time to check our environment. Navigate to Inventory > Workload Management > Supervisors. The image below shows the Supervisor cluster IP address (configured on the SE) along with the K8s version.

Let's navigate to Inventory > Workload Management > Supervisors > Summary. This shows the full statistics, including capacity, namespace details, and related objects. The image below also gives information about the K8s version.
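The same K8s version can also be read from the Supervisor API itself. The sketch below assumes you have already logged in with the vSphere Plugin for kubectl (kubectl vsphere login --server=<Supervisor VIP>), which writes a kubeconfig context for the Supervisor.

```python
# Read the Kubernetes version from the Supervisor API server.
# Assumes a kubeconfig context created by: kubectl vsphere login --server=<Supervisor VIP>
from kubernetes import client, config

config.load_kube_config()                  # uses the current context from the login above
version = client.VersionApi().get_code()   # queries the /version endpoint
print(version.git_version)
```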

Let's verify on the AVI Controller. Two VIPs with different IP addresses are configured, each with different application ports on the AVI SEs.
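This verification can also be scripted against the AVI REST API. The sketch below lists the virtual services created for the Supervisor along with their ports and VIP addresses; the controller IP is the lab value, the password a placeholder, and the field names follow the AVI API and may vary slightly between controller versions.

```python
# List AVI virtual services (name + ports) and VIPs created for the Supervisor.
import requests
import urllib3

urllib3.disable_warnings()
AVI = "https://172.20.10.58"
s = requests.Session()
s.verify = False
s.post(f"{AVI}/login", json={"username": "admin", "password": "***"}).raise_for_status()

for vs in s.get(f"{AVI}/api/virtualservice").json().get("results", []):
    ports = [svc.get("port") for svc in vs.get("services", [])]
    print("VS :", vs["name"], "ports:", ports)

for vsvip in s.get(f"{AVI}/api/vsvip").json().get("results", []):
    addrs = [v["ip_address"]["addr"] for v in vsvip.get("vip", [])]
    print("VIP:", vsvip["name"], addrs)
```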

In parts 1 and 2 of the series, we already discussed checking the IP of the Supervisor cluster and creating our namespace.
Summary
In the first part of this series, we enabled vSphere with Tanzu on a compute cluster, allowing developers to use this cluster to run K8s and container-based applications. vSphere with Tanzu leverages NSX-T as the networking solution, vSAN as the storage solution, and so on.
In the second part, we successfully created our first namespace, called namespace-01. We also provided the necessary permissions, storage policies, VM classes, and resource limits, and we added a content library along with our VM classes.
In the third part, we successfully deployed our first vSphere Pod in the namespace created earlier.
In this part, we enabled vSphere with Tanzu on a compute cluster using an external load balancer (NSX-ALB) and a vDS (a vSphere networking construct).
In upcoming parts, we will provision our first TKG cluster, which will allow us to enable the Harbor registry, and we will deploy our first application using K8s. We will also verify the requirements from SDDC Manager to allow VMware Kubernetes Services on a VI workload domain.