vSphere with Tanzu (VKS) integration with NSX-T Part-5

In the first part of this series, we enabled vSphere with Tanzu on a compute cluster, allowing developers to use this cluster to run K8s and container-based applications. vSphere with Tanzu leverages NSX-T as the networking solution, vSAN as the storage solution, and so on.

In the second part, we successfully created our first namespace, called namespace-01, and configured the necessary permissions, storage policies, VM classes, and resource limits. We also added a content library alongside our VM classes.

In the third part, we successfully deployed our first vSphere Pod in the namespace created in the earlier part. The vSphere Pod was deployed using NSX-T in a single-zone deployment.

In the fourth part, we explored deploying vSphere with Tanzu (Workload Management) with an external load balancer, VMware NSX Advanced Load Balancer, together with the vSphere networking construct vDS (vSphere Distributed Switch).

In this part of the blog, we will deploy our first Tanzu Kubernetes cluster. The cluster will be created in the namespace set up earlier in this series, which uses the external load balancer and the vSphere logical networking construct (vSphere Distributed Switch).

Diagram

To enable Kubernetes on the vSphere cluster (VKS), we have deployed a cluster with five ESXi hosts. These hosts are connected to dvs-SA-Datacenter (vDS) on uplink-1 (vmnic0), uplink-2 (vmnic1), uplink-3 (vmnic2), and uplink-4 (vmnic3). All ESXi hosts in the cluster are part of the same vDS. This vDS is configured with four port groups: “pa-sa-infra-management”, “pa-sa-vmotion”, “pa-sa-tanzu-management”, and “pa-sa-tanzu-workload”.

Portgroup “pa-sa-infra-management” (172.20.10.0/24) will be used to host all management VMs, including the AVI Controller, Service Engines, NSX-T Manager, and vCenter. The same portgroup will also be used by the ESXi hosts.

Portgroup “pa-sa-tanzu-management” (172.20.12.0/24) will be used to host the VIP network. A VIP from this network will be assigned to the Supervisor Cluster, and the VIP will be hosted on the AVI Service Engines.

Portgroup “pa-sa-tanzu-workload” (192.168.150.0/24) will be used by the Supervisor Cluster; all Supervisor VMs will get an IP address from this network.

The AVI Controller (NSX-ALB) is already deployed and connected to “pa-sa-infra-management”. The default cloud and Service Engine group are already configured, and a certificate has been provisioned and installed on the NSX-ALB Controller.

Workload Management will deploy the SEs in two-arm mode on the compute cluster provisioned for vSphere with Tanzu. Each SE will have three interfaces: one in “pa-sa-infra-management”, a second in “pa-sa-tanzu-management”, and a third in “pa-sa-tanzu-workload”.

The diagram below depicts the different networks consumed by Workload Management, along with the placement of the SEs on the compute cluster.

NB: Only four ESXi hosts are shown in the diagram below; in our lab, we have five ESXi hosts.

Configuration

To set up a TKG (Tanzu Kubernetes Grid) cluster, we must prepare the TKG YAML file. This file specifies, among other things, the number of control plane and worker nodes.

In the above image, we are creating our first TKG cluster in the previously created namespace “namespace-01”.

We need to incorporate the below details into the YAML file; a sketch of the resulting manifest follows the list.

Name of the TKG cluster: tkc-01
Namespace: namespace-01
Service CIDR: 198.51.100.0/12
Pod cidrBlocks: 192.0.2.0/16
serviceDomain: the service domain name
Version of K8s: v1.23.8
Number of control plane nodes: 1
Number of worker nodes: 2
OS for control plane and worker nodes: Ubuntu
VM class: best-effort-large
Default storage class and storage class: Sp-Tanzu
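
For reference, below is a minimal sketch of what such a manifest could look like, assuming the v1alpha2 TanzuKubernetesCluster API and the values listed above. The TKR (Tanzu Kubernetes release) name, the Kubernetes-compatible storage class name, and the way the Ubuntu node OS is selected vary between vSphere releases, so treat this as illustrative rather than exact.

```
# Hypothetical tkc-01.yaml matching the values listed above.
# v1alpha2 TanzuKubernetesCluster API assumed; Ubuntu node OS selection
# depends on the TKR/API version in your environment and is not shown here.
cat <<'EOF' > tkc-01.yaml
apiVersion: run.tanzu.vmware.com/v1alpha2
kind: TanzuKubernetesCluster
metadata:
  name: tkc-01
  namespace: namespace-01
spec:
  topology:
    controlPlane:
      replicas: 1                          # number of control plane nodes
      vmClass: best-effort-large
      storageClass: sp-tanzu               # Kubernetes-compatible name of the Sp-Tanzu policy
      tkr:
        reference:
          name: v1.23.8---vmware.2-tkg.2   # assumed TKR name; list yours with 'kubectl get tkr'
    nodePools:
    - name: worker-pool-1
      replicas: 2                          # number of worker nodes
      vmClass: best-effort-large
      storageClass: sp-tanzu
  settings:
    storage:
      defaultClass: sp-tanzu
    network:
      services:
        cidrBlocks: ["198.51.100.0/12"]
      pods:
        cidrBlocks: ["192.0.2.0/16"]
      serviceDomain: cluster.local         # replace with your service domain
EOF
```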

After reviewing our TKG YAML file, we can deploy the TKG cluster using either kubectl or the Tanzu command-line interface.

In order to install the Tanzu command-line interface for the “Supervisor Cluster”, we need to follow the steps below.
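
As a rough sketch, the Kubernetes CLI tools (kubectl plus the kubectl vSphere plugin) are downloaded from the Supervisor control plane landing page, and the Tanzu CLI is obtained from VMware Customer Connect or, on newer releases, from the Supervisor itself. After that, logging in and switching to our namespace looks roughly like this; the Supervisor VIP and username below are placeholders.

```
# Placeholder values: replace the Supervisor VIP and the vSphere user with your own.
kubectl vsphere login --server=172.20.12.10 \
  --vsphere-username administrator@vsphere.local \
  --insecure-skip-tls-verify

# Switch to the vSphere namespace created earlier in this series.
kubectl config use-context namespace-01
```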

To begin the deployment of the TKG cluster, we use “kubectl create -f tkc-01.yaml”.

To view the progress of the TKG cluster deployment, we can use “kubectl get cluster -n namespace-01”.
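
Putting the create and watch steps together, the flow looks roughly like this; the short resource name tkc for TanzuKubernetesCluster is assumed to be available on the Supervisor.

```
kubectl create -f tkc-01.yaml

# Watch the cluster objects until they report a running/ready phase.
kubectl get cluster -n namespace-01
kubectl get tanzukubernetescluster -n namespace-01   # short name: tkc
kubectl describe tkc tkc-01 -n namespace-01          # events and node details
```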

Let’s validate the provisioning of the TKG cluster using the Tanzu command-line interface.
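
Assuming the Tanzu CLI is logged in to the Supervisor, commands along these lines can confirm the provisioning; flag names may vary slightly between CLI versions.

```
# List workload clusters and show detailed status for tkc-01.
tanzu cluster list
tanzu cluster get tkc-01 --namespace namespace-01
```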

The deployment takes approximately 20 minutes to bring up the Kubernetes cluster in the vSphere namespace.

The TKG cluster has been created successfully.

Let’s view the health of our TKG cluster’s control plane and worker nodes using the Tanzu command-line interface. In the below screenshot, we used “tanzu cluster machinehealthcheck control-plane get tkc-01” to check the control plane.

In the below screenshot, we confirm the health of the worker nodes deployed in our TKG cluster using “tanzu cluster machinehealthcheck node get tkc-01”.

Let’s check the status of our first TKG cluster in the vSphere Client. Navigate to Inventory > SA-Datacenter > Namespaces > namespace-01 > tkc-01.

In the above screenshot, three VMs are deployed: one control plane VM and two worker nodes.

Summary

In the first part of this series, we enabled vSphere with Tanzu on a compute cluster, allowing developers to use this cluster to run K8s and container-based applications. vSphere with Tanzu leverages NSX-T as the networking solution, vSAN as the storage solution, and so on.

In the second part, we successfully created our first namespace, called namespace-01, and configured the necessary permissions, storage policies, VM classes, and resource limits. We also added a content library alongside our VM classes.

In the third part, we successfully deployed our first vSphere Pod in the namespace created in the earlier part.

In the fourth part, we enabled vSphere with Tanzu on a compute cluster using an external load balancer (NSX-ALB) and a vDS (vSphere logical networking construct).

In this part, we successfully provisioned our first TKG cluster in the previously created namespace “namespace-01”, which sits on top of our Supervisor Cluster “sa-supervisor-01”.

In upcoming parts, we will provision different Supervisor Services such as Contour, ExternalDNS, Harbor, and so on. Along with that, we will deploy our first application using K8s. We will also verify the requirements from SDDC Manager to allow VMware Kubernetes Service (VKS) on a VI workload domain.
