
vSphere with Tanzu (VKS) integration with NSX-T Part-1

Introduction

In this blog series, we will discuss how “vSphere with Tanzu” integrates with NSX-T. To use the NSX-T functionality, we will activate Workload Management on the vSphere cluster, which provisions Kubernetes services on the compute cluster. vSphere with Tanzu uses NSX-T for networking, vSAN for storage, and ESXi for compute, providing a Kubernetes platform on which customers can develop their container-based applications.

NSX-T provides networking capabilities such as load balancing, security, switching, and routing. We will discuss these in more detail as the series progresses.

There are two solutions for deploying Kubernetes (vSphere with Tanzu) on a vSphere cluster. The first uses NSX-T, which provides the NSX-T native load balancing solution. The second uses a vSphere Distributed Switch together with an external load balancer such as NSX Advanced Load Balancer (AVI) or HAProxy; that approach is out of scope for this blog.

Diagram

To deploy Kubernetes on the vSphere cluster, we have provisioned three ESXi hosts. These hosts are connected to the DSwitch (vDS) on uplink-1 (vmnic0) and uplink-2 (vmnic1) and are prepared for NSX-T consumption. The compute cluster is already enabled for vSphere HA (default settings) and DRS (fully automated). An Edge node “SA-EDGE-01“ and a Tier-0 gateway (K8s-Tier-0) are already deployed; the Tier-0 gateway is connected to the physical environment via BGP, and all routes are redistributed to the physical router. For storage, a vSAN (OSA) datastore has been created to provide storage to the K8s Pods.
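
Before enabling Workload Management, it is worth confirming that the Tier-0 BGP peering is actually established. A quick way to do this, sketched below, is from the Edge node CLI; the router name and VRF ID follow our lab and will differ in your environment.

    get logical-routers              # note the VRF ID of the Tier-0 service router (SR-K8s-Tier-0)
    vrf 1                            # enter that VRF context (the ID is environment-specific)
    get bgp neighbor summary         # the neighbors should be in an Established state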

To set up Kubernetes services on the vSphere cluster, a Supervisor Cluster of three VMs will be deployed. To access the Supervisor Cluster, a VIP will be provisioned on the NSX-T native load balancer.

Prerequisites

To provision K8s on the vSphere cluster, certain prerequisites have to be configured. We discuss these requirements below.

  1. The cluster should be configured for DRS (Fully Automated) and vSphere HA (default settings).
  2. NSX-T Managers must be configured, and ESXi hosts must be prepared for NSX-T consumption.
  3. An Edge cluster should be provisioned, and a Tier-0 gateway must be created using that Edge cluster.
  4. The Tier-0 gateway must be connected to the physical devices via either BGP or static routing.
  5. A storage policy has to be created for the Supervisor Cluster and the Kubernetes nodes (worker and control plane nodes). In our scenario, we are using the default storage policies.
  6. A content library has to be created to host the Tanzu Kubernetes Grid templates (see the sketch after this list).
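
For reference, the content library is typically created as a subscribed library that points to the VMware-hosted Tanzu Kubernetes releases repository. The values below are only a sketch; the library name is our own, and the subscription URL should be confirmed against the current VMware documentation.

    Name:              Kubernetes (any name; it is selected later in the wizard)
    Type:              Subscribed content library
    Subscription URL:  https://wp-content.vmware.com/v2/latest/lib.json
    Download content:  Immediately (or when needed, to save datastore space)
    Storage:           vSAN datastore (or any datastore reachable by the cluster)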

Configuration

We have discussed the prerequisites; now it is time to enable K8s on the vSphere compute cluster. To start the configuration, we will navigate to Menu > Workload Management.

Let’s get started provisioning vSphere with Tanzu. In the first step, we will select a vCenter along with the networking solution and click “Next“. In this series, we are covering only NSX-T.

In the next step, we will select the compute cluster on which we want to configure Kubernetes. In our scenario, we will select “SA-Compute-01“.

In the next step, we need to select the resource allocation (the control plane size: Tiny, Small, Medium, or Large), which determines the size of the Supervisor Cluster VMs. Then, click “Next“.

In the next step, we need to provide storage policies for the Control Plane Nodes, Ephemeral Disks, and Image Cache. We have selected “K8s Storage Policy”, which is the default storage policy.

We also need to provide the management IP address details, including the distributed port group used by the Supervisor VMs’ management interface.
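
The management network needs a block of five consecutive free IP addresses starting from the address entered in the wizard (one per Supervisor VM, one floating IP, and one reserved for upgrades). The values below are purely illustrative; the port group name is hypothetical, and the addresses must come from your own management subnet.

    Network (port group):  DPortGroup-Mgmt           # hypothetical port group name
    Starting IP address:   172.20.10.160             # .160 to .164 must be free
    Subnet mask:           255.255.255.0
    Gateway:               172.20.10.1
    DNS / NTP servers:     172.20.10.10 / 172.20.10.10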

In the next step, we need to provide details such as the distributed switch, Edge cluster, and API server endpoint FQDN. The FQDN should resolve to the VIP address of the Supervisor Cluster. We also need to specify the DNS server details and the Service, Pod, Ingress, and Egress CIDR ranges.
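
As a rough guide, the Service and Pod CIDRs are non-routable ranges used internally (the wizard pre-fills defaults that can usually be kept), while the Ingress and Egress CIDRs must be routable ranges that NSX-T advertises via the Tier-0 gateway. The values below are illustrative and chosen to match our lab’s VIP of 192.168.30.33; replace them with ranges from your own environment.

    Service CIDR:   10.96.0.0/23       # internal only; used for Kubernetes ClusterIP services
    Pod CIDR:       10.244.0.0/20      # internal only; carved into per-namespace segments
    Ingress CIDR:   192.168.30.32/27   # routable; load balancer VIPs (the Supervisor VIP comes from here)
    Egress CIDR:    192.168.30.64/27   # routable; SNAT addresses for namespace egress traffic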

In the second-to-last step, we need to specify the “Content Library” details. Here we select the Kubernetes content library created earlier (creating it is not in the scope of this series).

In the final step, we need to review the configuration and click on “Finish“.

If the configuration provided in the earlier steps is correct, it will take 10-15 minutes to deploy the Supervisor Cluster on the compute cluster.

Verification

After the configuration completes, we need to verify the logical objects that Workload Management has created in vCenter and NSX-T.

First, we confirm that the compute cluster “SA-Compute-01” has been enabled for K8s. The VIP for the control plane is “192.168.30.33“, and the FQDN “vspherek8s.vclass.local“ resolves to this VIP.
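
A quick sanity check can be run from any workstation that can reach this network; the FQDN and IP below are from our lab.

    nslookup vspherek8s.vclass.local   # should return 192.168.30.33
    curl -k https://192.168.30.33      # the Supervisor landing page should answer on port 443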

Now, we will assign the appropriate license to the Supervisor Cluster. To assign the license, we need to navigate to “Menu > Administration > Licenses > Assets > Supervisor Clusters > Assign Licenses“.

The license has been applied to the Supervisor Cluster successfully.

In the NSX-T Manager, let’s verify that the VIP for the Supervisor Cluster has been created. To verify this, navigate to Networking > Networking Services > Load Balancing > Virtual Servers.
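
The same check can be made against the NSX-T Policy API. The sketch below assumes admin credentials and a manager FQDN of nsx-manager.vclass.local; adjust both for your environment.

    curl -k -u admin 'https://nsx-manager.vclass.local/policy/api/v1/infra/lb-virtual-servers' \
      | grep -E '"display_name"|"ip_address"'   # the Supervisor kube-apiserver VIP should be listed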

NB: Other NSX-T components are also created, such as segments, a Tier-1 gateway (connected to the Tier-0 gateway), and distributed firewall rules. We will talk about them in the next part of this series.

In vCenter, let’s verify that a new Namespaces resource pool has been created and that the three Supervisor VMs are part of it. Navigate to Hosts & Clusters > Data Center > Clusters > Namespaces.

Let’s browse to the VIP or FQDN of the Supervisor Cluster.
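
The landing page offers the CLI tools bundle (kubectl plus the vSphere plugin for kubectl). Once downloaded, a first login looks like the sketch below; the username is our lab’s SSO administrator and the server is the Supervisor FQDN.

    kubectl vsphere login --server=vspherek8s.vclass.local \
      --vsphere-username administrator@vsphere.local --insecure-skip-tls-verify
    kubectl config get-contexts        # a context for the Supervisor is added to the kubeconfig
    kubectl get nodes                  # lists the Supervisor control plane nodes and the ESXi hosts (as agent nodes)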

Summary

In this part of the series, we enabled vSphere with Tanzu on a compute cluster, which allows developers to use the cluster to run K8s and container-based applications. vSphere with Tanzu leverages NSX-T as the networking solution, vSAN as the storage solution, and so on.

In the upcoming parts, we will set up vSphere Pods and our first Namespace, including resource allocation and RBAC on the Namespace.

We will also build our first container-based application using vSphere with Tanzu and then observe the resulting changes in the vSphere Client, NSX-T, and storage.
