
vSphere with Tanzu (VKS) integration with NSX-T Part-6

In this series of blogs, we have seen how to deploy various Tanzu components: enabling vSphere with Tanzu using NSX-T, enabling vSphere with Tanzu using an external load balancer (AVI in this case), and creating our first namespace on vSphere with Tanzu. To give granular control over the namespace, we restricted access to it, capped its resource limits, and so on.

In the last part, we deployed our first K8s cluster on a namespace and gave the developers access so that they can build their applications on it.

In this part of the series, we will provision our first Supervisor Service, called Contour. Contour is an ingress controller that provides both L4 and L7 load-balancing services. Contour is deployed on the control node and deploys its data plane, Envoy, on the worker nodes.

There are multiple ingress controllers available in the market, such as NGINX, AKO (AVI Kubernetes Operator), and so on. In our lab, however, we will be using Contour for L4 and L7 proxy services.
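To show what consuming Contour looks like once it is running, below is a minimal sketch of Contour's own HTTPProxy resource. The FQDN and the backing Service name ("web-app") are hypothetical placeholders, not objects from this lab.

```yaml
# Minimal Contour HTTPProxy (sketch). The FQDN and the backing
# Service "web-app" are hypothetical placeholders for this lab.
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: web-app-proxy
  namespace: default
spec:
  virtualhost:
    fqdn: web-app.lab.local   # hypothetical hostname pointing at the Envoy VIP
  routes:
    - conditions:
        - prefix: /
      services:
        - name: web-app       # hypothetical ClusterIP Service
          port: 80
```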

Diagram

To enable Kubernetes on the vSphere cluster (VKS), we have deployed a cluster with five ESXi hosts. These hosts are connected to dvs-SA-Datacenter (vDS) on uplink-1 (vmnic0), uplink-2 (vmnic1), uplink-3 (vmnic2), and uplink-4 (vmnic3). All ESXi hosts in the cluster are part of the same vDS. This vDS is configured with four port groups: “pa-sa-infra-management”, “pa-sa-vmotion”, “pa-sa-tanzu-management”, and “pa-sa-tanzu-workload”.

Port group “pa-sa-infra-management” (172.20.10.0/24) will be used to host all management VMs, including the AVI Controller, the Service Engines, NSX-T Manager, and vCenter. The same port group will also be used by the ESXi hosts.

Port group “pa-sa-tanzu-management” (172.20.12.0/24) will host the VIP network. The supervisor cluster will be assigned a VIP from this network, which is hosted on the AVI Service Engines.

Port group “pa-sa-tanzu-workload” (192.168.150.0/24) will be used by the supervisor cluster. All supervisor VMs will get an IP address from this network.

The AVI Controller (NSX-ALB) is already deployed and connected to “pa-sa-infra-management”. The default cloud and Service Engine group are already configured, and a certificate has already been provisioned and installed on the NSX-ALB Controller.

Workload Management will deploy SEs in two-arm mode on the compute cluster provisioned for vSphere with Tanzu. Each SE will have three interfaces: one in “pa-sa-infra-management”, a second in “pa-sa-tanzu-management”, and a third in “pa-sa-tanzu-workload”.

The diagram below depicts the different networks consumed by Workload Management, along with the placement of the SEs on the compute cluster.

NB: Only four ESXi hosts are shown in the diagram below, although our lab has five.

With Contour as the ingress controller, the Envoy daemon set is deployed on the worker nodes, while the brain, the Contour controller, sits on the control-plane node. The diagram below illustrates the architecture of Contour in Kubernetes.

The above image is taken from https://projectcontour.io/docs/v1.17.1/architecture/
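Beyond the diagram, the same control-plane/data-plane split can be observed from the command line. The namespace and object names below follow the upstream Contour quick-start; a Supervisor Service install uses its own generated svc-contour-… namespace instead, as we will see later.

```sh
# Upstream Contour quick-start layout (names may differ in other installs):
kubectl get deployment contour -n projectcontour   # control plane, 2 replicas by default
kubectl get daemonset envoy -n projectcontour      # data plane, one Envoy pod per worker node
```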

Configuration

Supervisor Services in Tanzu are deployed in two stages. In the first stage, the service is enabled (registered) at the vCenter level. After that, the service is installed on the supervisor cluster. The steps below cover the configuration of both stages.

In vCenter, navigate to Workload Management > Services and click the “Discover and download available services” icon. In this section, we can discover all the available services that can be deployed on vSphere with Tanzu.

We have downloaded the contour.yaml file and will now enable the service at the vCenter level. Click on Add a New Service.

In the above screenshot, we have to provide details such as the contour.yaml file and the name of the service; the Service ID will be generated automatically.

Let’s verify the configured services in vCenter. (Follow the same approach to enable other services.) We will talk about Harbor and external-dns in upcoming blogs.

In the image below, let’s enable the Contour service on the supervisor cluster we deployed earlier.

Right-click the Contour service and click “Install Contour on Supervisors”.

Select the Tanzu version, select the supervisor, and provide the Contour data values as per the screenshot below.
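For reference, a minimal sketch of what the Contour data values might contain is shown below. The exact keys come from the values schema bundled with the downloaded service definition, so treat this as illustrative rather than authoritative.

```yaml
# Illustrative Contour data values (verify against the schema
# shipped with the downloaded service definition):
envoy:
  service:
    type: LoadBalancer   # expose Envoy through the AVI-provided load balancer
```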

After providing the required data values to start the Contour service, click “OK”. Two replicas of Contour will be deployed, and Envoy will be deployed on each worker node as a daemon set.

After activating the Contour service on our supervisor cluster, let’s verify that the Contour and Envoy pods have been created on their respective nodes.

Navigate to Inventory > Namespaces > namespace-01 > svc-contour-cxxxx

We have successfully deployed our Contour pods, along with the Envoy pods, in our namespace (namespace-01).
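The same verification can be done from the command line after logging in to the supervisor with the kubectl vSphere plugin; “svc-contour-cxxxx” below stands for the actual generated namespace name shown in the UI.

```sh
# Log in to the supervisor first, e.g.:
#   kubectl vsphere login --server=<supervisor-VIP> --vsphere-username <user>
# Then list the Contour and Envoy pods in the generated service namespace
# ("svc-contour-cxxxx" is a placeholder for the real generated name):
kubectl get pods -n svc-contour-cxxxx -o wide
```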

Summary

In this series of blogs, we have seen how to deploy various Tanzu components: enabling vSphere with Tanzu using NSX-T, enabling vSphere with Tanzu using an external load balancer (AVI in this case), and creating our first namespace on vSphere with Tanzu. To give granular control over the namespace, we restricted access to it, capped its resource limits, and so on.

In the last part, we deployed our first K8s cluster on a namespace and gave the developers access so that they can build their containerized applications on it.

In this part of the series, we successfully provisioned our first Supervisor Service, called Contour. Contour is an ingress controller that provides both L4 and L7 proxy services; it is deployed on the control node and deploys its data plane, Envoy, on the worker nodes.
