vSphere with Tanzu (VKS) integration with NSX-T Part-6

In this series of blogs, we have seen how to deploy various Tanzu components. This includes enabling vSphere with Tanzu using NSX-T, as well as vSphere with Tanzu using an external load balancer, AVI in this case. We also covered creating our first namespace on vSphere with Tanzu and, to give granular control over the namespace, we restricted access to it, capped its resource limits, and so on.
In the last part, we deployed our first K8s cluster in the namespace and gave the developers access so that they can build their applications on it.
In this part of the series, we will provision our first Supervisor Service, Contour. Contour is an ingress controller that provides both L4 and L7 load-balancing services. Contour's control plane is deployed on the control plane node, and it deploys its data plane, Envoy, on the worker nodes.
There are multiple ingress controllers available in the market, such as NGINX, AKO (AVI Kubernetes Operator), and so on, but in our lab we will use Contour for L4 and L7 proxy services.
Diagram
To enable Kubernetes on the vSphere cluster (VKS), we have deployed a cluster with five ESXi hosts. These hosts are connected to dvs-SA-Datacenter (vDS) on uplink-1 (vmnic0), uplink-2 (vmnic1), uplink-3 (vmnic2), and uplink-4 (vmnic3). All ESXi hosts in the cluster are part of the same vDS. This vDS is configured with four port groups: “pa-sa-infra-management”, “pa-sa-vmotion”, “pa-sa-tanzu-management”, and “pa-sa-tanzu-workload”.
Port group “pa-sa-infra-management” (172.20.10.0/24) will be used to host all management VMs, including the AVI Controller, Service Engines, NSX-T Manager, and vCenter. The same port group will also be used by the ESXi hosts.
Port group “pa-sa-tanzu-management” (172.20.12.0/24) will host the VIP network. A VIP will be assigned to the supervisor cluster from this network, and the VIPs will be hosted on the AVI Service Engines.
Port group “pa-sa-tanzu-workload” (192.168.150.0/24) will be used by the supervisor cluster. All supervisor VMs will get an IP address from this network.
The AVI Controller (NSX-ALB) is already deployed and connected to “pa-sa-infra-management”. The default cloud and Service Engine group are already configured, and a certificate has already been provisioned and installed on the NSX-ALB Controller.
Workload Management will deploy SEs in two-arm mode on the compute cluster provisioned for vSphere with Tanzu. Each SE will have three interfaces: one in “pa-sa-infra-management”, one in “pa-sa-tanzu-management”, and one in “pa-sa-tanzu-workload”.
The below diagram depicts the different networks consumed by Workload Management, along with the placement of the SEs on the compute cluster.
NB: Only four ESXi hosts are shown in the diagram below; our lab actually has five.

With Contour as the ingress controller, the Envoy DaemonSet is deployed on the worker nodes, while the Contour control plane (the brain) runs on the control plane node. The below diagram illustrates the architecture of Contour in Kubernetes.
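To make this concrete, here is a minimal, hypothetical example of how an application could later be published through Contour. The namespace, host name, and Service name (web) are assumptions for illustration; Contour's control plane translates this HTTPProxy resource into configuration for the Envoy DaemonSet, which does the actual proxying:

```yaml
# Hypothetical HTTPProxy sketch: exposes a Service named "web"
# through Contour. Contour watches this resource and programs
# Envoy on the worker nodes accordingly.
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: web-proxy          # hypothetical name
  namespace: namespace-01
spec:
  virtualhost:
    fqdn: web.lab.local    # hypothetical FQDN
  routes:
    - conditions:
        - prefix: /
      services:
        - name: web        # hypothetical backend Service
          port: 80
```

HTTPProxy is Contour's own CRD; the standard Kubernetes Ingress resource is also supported.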

The above image is taken from https://projectcontour.io/docs/v1.17.1/architecture/
Configuration
Supervisor Services in Tanzu are deployed in two stages. In the first stage, the service is registered at the vCenter level. Once registered, the service can be installed on the supervisor cluster. The steps below cover both stages.
In vCenter, navigate to Workload Management > Services and click the “Discover and download available services” icon. In this section, we can discover all the services available for deployment on vSphere with Tanzu.

We have downloaded the contour.yaml file and will now register the service at the vCenter level. Click on Add a New Service.

In the above screenshot, we have to provide details such as the contour.yaml file and the name of the service; the Service ID is generated automatically.
Let’s verify the configured services in vCenter. (Follow the same approach to add other services.) We will cover Harbor and external-dns in upcoming blogs.
In the below image, let’s enable the Contour service on our previously deployed supervisor cluster.
Right-click the Contour service and click “Install Contour on Supervisors”.

Select the Tanzu version, select the supervisor, and provide the Contour data values as per the below screenshot.

After providing the required data values to start the Contour service, click “OK”. Two replicas of Contour will be deployed, and Envoy will be deployed on each worker node as a DaemonSet.
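For reference, the data values are a small YAML document pasted into the install dialog. The exact keys depend on the Contour package version shipped with your Tanzu release, so treat the following as a sketch rather than a definitive file; it reflects the common shape of the Contour package data values (Contour replica count and the Envoy service type, which here maps to a VIP served by NSX-ALB):

```yaml
# Sketch of Contour data values; keys may differ per package version.
infrastructure_provider: vsphere
contour:
  replicas: 2          # two Contour control plane replicas, as deployed above
envoy:
  service:
    type: LoadBalancer # VIP is provisioned by NSX-ALB from the VIP network
```

Leaving values at their defaults is usually sufficient for a lab; the replica counts and service type are the settings most commonly adjusted.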

After activating the Contour service on our supervisor cluster, let’s verify that the Contour and Envoy pods have been created on their respective nodes.
Navigate to Inventory > Namespaces > namespace-01 > svc-contour-cxxxx

We have successfully deployed our Contour pods, along with the Envoy pods, in our namespace (namespace-01).
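With Contour running, developers can expose their workloads through the standard Kubernetes Ingress API as well. Below is a minimal, hypothetical sketch (the host name and backend Service web are assumptions) showing how a deployed application could be routed through the Contour ingress class:

```yaml
# Hypothetical Ingress sketch routed through Contour.
# "contour" is the ingress class served by the Contour control plane.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress        # hypothetical name
  namespace: namespace-01
spec:
  ingressClassName: contour
  rules:
    - host: web.lab.local  # hypothetical host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web  # hypothetical backend Service
                port:
                  number: 80
```

Traffic to the host arrives at the Envoy VIP and is forwarded to the backend Service's pods.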
Summary
In this series of blogs, we have seen how to deploy various Tanzu components. This included enabling vSphere with Tanzu using NSX-T, as well as vSphere with Tanzu using an external load balancer, AVI in this case. We also created our first namespace on vSphere with Tanzu and, to give granular control over the namespace, restricted access to it, capped its resource limits, and so on.
In the last part, we deployed our first K8s cluster in the namespace and gave the developers access so that they can build their containerized applications on it.
In this part of the series, we have successfully provisioned our first Supervisor Service, Contour. Contour is an ingress controller that provides both L4 and L7 proxy services; its control plane is deployed on the control plane node, and its data plane, Envoy, runs on the worker nodes.