
vSphere with Tanzu (VKS) integration with NSX-T Part-4


In the first part of this series, we enabled vSphere with Tanzu on a compute cluster, which allowed developers to use this cluster to run K8s and container-based applications. vSphere with Tanzu leverages NSX-T as the networking solution, vSAN as the storage solution, and so on.

In the second part, we successfully created our first namespace, called namespace-01, and provided the necessary permissions, storage policies, VM classes, and resource limits. We also added a content library to the namespace.

In the third part, we successfully deployed our first vSphere Pod in the namespace created earlier. Note, however, that NSX-T is mandatory for deploying vSphere Pods.

In this part, we will explore deploying vSphere with Tanzu (Workload Management) using an external load balancer, VMware NSX Advanced Load Balancer (NSX-ALB), together with the vSphere networking construct vDS (vSphere Distributed Switch).

Diagram

To enable Kubernetes on the vSphere cluster (VKS), we have deployed a cluster with five ESXi hosts. These hosts are connected to dvs-SA-Datacenter (vDS) via uplink-1 (vmnic0), uplink-2 (vmnic1), uplink-3 (vmnic2), and uplink-4 (vmnic3). All ESXi hosts in the cluster are part of the same vDS. This vDS is configured with four port groups: “pa-sa-infra-management“, “pa-sa-vmotion“, “pa-sa-tanzu-management“, and “pg-sa-tanzu-workload“.

Port group “pa-sa-infra-management (172.20.10.0/24)“ will be used to host all management VMs, including the AVI controller, Service Engines, NSX-T Manager, and vCenter. The same port group will also be used by the ESXi hosts.

Port group “pa-sa-tanzu-management (172.20.12.0/24)“ will be used to host the VIP network. A VIP from this network will be assigned to the Supervisor Cluster. This network will be hosted on the AVI Service Engines.

Port group “pa-sa-tanzu-workload (192.168.150.0/24)“ will be used on the Supervisor Cluster. All Supervisor VMs will get an IP address from this network.
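If you prefer to double-check this layout from the CLI, the port groups can be listed with govc. This is a rough sketch, not part of the original walkthrough; the GOVC_URL, GOVC_USERNAME, and GOVC_PASSWORD values below are placeholders for your own vCenter.

    # Point govc at vCenter (placeholder values for this lab)
    export GOVC_URL=https://vcsa-01.lab.local
    export GOVC_USERNAME=administrator@vsphere.local
    export GOVC_PASSWORD='VMware1!'
    export GOVC_INSECURE=1

    # List all network objects (port groups) and filter the ones described above
    govc find / -type n | grep -E 'pa-sa-|pg-sa-'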

The AVI controller (NSX-ALB) is already deployed and connected to “pa-sa-infra-management“. The default cloud and Service Engine group are already configured, and a certificate has already been provisioned and installed on the NSX-ALB controller.

Workload Management will deploy the SEs in two-arm mode on the compute cluster provisioned for vSphere with Tanzu. The SEs will have three interfaces: one in “pa-sa-infra-management“, a second in “pa-sa-tanzu-management“, and a third in “pa-sa-tanzu-workload“.

The diagram below depicts the different networks consumed by Workload Management, along with the placement of the SEs on the compute cluster.

NB: Only four ESXi hosts are shown in the diagram below; in our lab, however, we have five ESXi hosts.

Configuration

A content library and tag-based storage policies are already configured. These vSphere constructs will be used while deploying the Supervisor Cluster.
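These constructs can also be double-checked from the CLI before starting the wizard. A sketch only, assuming govc is configured as in the earlier snippet:

    # List the content libraries registered in vCenter
    govc library.ls

    # List the tag categories and tags backing the tag-based storage policy
    govc tags.category.ls
    govc tags.ls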

Now it is the right time to start enabling Workload Management for the compute cluster. To start, navigate to vSphere Client > Workload Management and click “Get Started“.

This time we are using vDS and an external load balancer, so in the next step we need to select vDS as the networking stack.

Now we need to provide the location where the Supervisor VMs will be deployed. In our lab, it's SA-Cluster-01. In this step, we also need to provide details like the name of the Supervisor Cluster and the name of the Zone.

In the step below, we need to supply the storage policy that the Supervisor Cluster VMs will use. We have configured a tag-based storage policy.

In the next step, we need to provide details about the external load balancer. NSX-ALB is already deployed in our environment.

The below details need to be provided for the external load balancer:

Name (should be DNS resolvable) —> sa-nsxalb-01 (already deployed), Load Balancer Type —> NSX Advanced Load Balancer, NSX-ALB IP address —> 172.20.10.58:443, Username —> admin, and the Password.
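Before moving on, it is worth confirming that the controller name resolves and that the API endpoint is reachable. A quick sketch using the values listed above:

    # The controller name must be DNS resolvable
    nslookup sa-nsxalb-01

    # The controller API/UI should answer on 443 (expect an HTTP status code back)
    curl -k -o /dev/null -w '%{http_code}\n' https://172.20.10.58:443/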

At last, we need to provide the NSX-ALB certificate details, which can be fetched from the AVI controller. Navigate to Templates > Security > SSL/TLS Certificates, click on sa-nsxalb-01, and copy the contents of the certificate. Paste it into Server Certificate under Load Balancer (Workload Management). Refer to the above screenshot.
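As an alternative to copying the certificate from the AVI UI, the certificate the controller presents on 443 can be pulled with openssl (a sketch, assuming the controller is reachable at 172.20.10.58 as above). The PEM output is what gets pasted into the Server Certificate field:

    # Fetch the controller certificate and print it in PEM format
    openssl s_client -connect 172.20.10.58:443 -servername sa-nsxalb-01 </dev/null 2>/dev/null \
      | openssl x509 -outform PEM

    # Optional: inspect the subject, issuer, and validity dates before pasting
    openssl s_client -connect 172.20.10.58:443 </dev/null 2>/dev/null \
      | openssl x509 -noout -subject -issuer -dates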

Now, let's continue with the next step and fill in details related to the Management Network. In this step, provide details like the Network Mode and Network name (dPG). Also include the Starting IP address, subnet mask, gateway, DNS server, and NTP server details.
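The Supervisor control plane consumes five consecutive IPs beginning at the Starting IP address, so it is worth checking that nothing is already using them. A rough sketch with a placeholder starting IP; replace it with the address you enter in the wizard:

    # Placeholder: the Starting IP entered in the Management Network step
    START=172.20.10.150
    PREFIX=$(echo "$START" | cut -d. -f1-3)
    LAST=$(echo "$START" | cut -d. -f4)

    # Ping the five consecutive addresses; anything that answers is already in use
    for i in $(seq "$LAST" $((LAST + 4))); do
      ping -c1 -W1 "$PREFIX.$i" >/dev/null 2>&1 \
        && echo "$PREFIX.$i is already responding" \
        || echo "$PREFIX.$i looks free"
    done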

NSX-ALB will assign VIP addresses from this network; the AVI controller handles the address assignment. We will check this in the validation section.

At last, we need to configure the workload network. In this section, we need to provide information like the Network Mode —> Static, as well as details of the internal network for K8s Services. Additionally, include the Port Group, IP address range, Gateway, Subnet mask, and so on.

The NSX-ALB SE will have one interface in this network. All Supervisor VMs will get an IP address from this network. The AVI controller will maintain the inventory of the allocated IP range, as shown in the below screenshot.

Now we need to select the Control Plane VM size. We also need to assign the FQDN for the VIP address; this address is allocated to the Supervisor Cluster (vSphere IaaS control plane). At last, we need to review our configuration and click “Finish“.
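Before clicking “Finish“, make sure the FQDN assigned to the VIP already resolves to an address inside the VIP network (172.20.12.0/24 in this lab). The FQDN below is a hypothetical placeholder:

    # Placeholder: the FQDN assigned to the Supervisor VIP in the wizard
    SUPERVISOR_FQDN=supervisor.lab.local
    nslookup "$SUPERVISOR_FQDN"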

Validation

Deployment of the Supervisor Cluster will take around 15 minutes. After the configuration completes successfully, it is the right time to check our environment. Navigate to Inventory > Workload Management > Supervisors. The below image depicts the Supervisor Cluster IP address (configured on the SE) along with the K8s version.

Let's navigate to Inventory > Workload Management > Supervisors > Summary. This will show the full statistics, including capacity, namespace details, and related objects. The below image also gives information about the K8s version.
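The Supervisor can also be checked from the CLI using the kubectl vSphere plugin. This is a sketch; the server value and username below are placeholders for this lab:

    # Placeholder: the Supervisor VIP or FQDN assigned during enablement
    SUPERVISOR=supervisor.lab.local

    # Log in to the Supervisor and switch to its context
    kubectl vsphere login --server="$SUPERVISOR" \
      --vsphere-username administrator@vsphere.local \
      --insecure-skip-tls-verify
    kubectl config use-context "$SUPERVISOR"

    # Confirm the control plane nodes, K8s version, and namespaces
    kubectl get nodes -o wide
    kubectl get namespaces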

Let's verify on the AVI controller. Two VIPs with different IP addresses are configured, and they have different application ports configured on the AVI SE.
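The same virtual services can also be listed through the AVI REST API. A minimal sketch assuming basic auth with the admin account against the controller at 172.20.10.58; the password is a placeholder and jq is used only for readability:

    # List the virtual services and the application ports they expose
    curl -sk -u admin:'REPLACE_ME' https://172.20.10.58/api/virtualservice \
      | jq '.results[] | {name: .name, ports: [.services[].port]}'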

In parts 1 and 2 of this series, we already discussed checking the IP of the Supervisor Cluster and creating our namespace.

Summary

In the first part of this series, we enabled vSphere with Tanzu on a compute cluster, which allowed developers to use this cluster to run K8s and container-based applications. vSphere with Tanzu leverages NSX-T as the networking solution, vSAN as the storage solution, and so on.

In the second part, we successfully created our first namespace, called namespace-01, and provided the necessary permissions, storage policies, VM classes, and resource limits. We also added a content library to the namespace.

In the third part, we successfully deployed our first vSphere Pod in the namespace created in the earlier part.

In this part, we enabled vSphere with Tanzu on a compute cluster using an external load balancer (NSX-ALB) and a vDS (a vSphere networking construct).

In upcoming parts, we will provision our first TKG cluster, which will allow us to enable the Harbor registry, and we will deploy our first application on K8s. We will also verify the requirements from SDDC Manager for allowing VMware Kubernetes Services on a VI workload domain.
