
vSphere with Tanzu (VKS) integration with NSX-T Part-7

In this series of blogs, we have seen how to deploy various Tanzu components: enabling vSphere with Tanzu using NSX-T, enabling vSphere with Tanzu using an external load balancer (AVI in our case), and creating our first namespace on vSphere with Tanzu. To give granular control over the namespace, we restricted access to it, capped its resource limits, and so on.

Along with that, we deployed our first K8s cluster on the namespace and gave the developers access so that they can build their applications on it.

In the last part of this series, we provisioned our first Supervisor Cluster service, Contour. Contour is an ingress controller that provides both L4 and L7 load-balancing services. Contour runs on the control plane node and deploys its data plane, Envoy, on the worker nodes.

In this part of the series, we will deploy the Harbor repository. It will host the container images needed to deploy our application on the TKG cluster.

Diagram

To enable Kubernetes on the vSphere cluster (VKS), we have deployed a cluster with five ESXi hosts. These hosts are connected to dvs-SA-Datacenter (vDS) on uplink-1 (vmnic0), uplink-2 (vmnic1), uplink-3 (vmnic2), and uplink-4 (vmnic3). All ESXi hosts in the cluster are part of the same vDS. This vDS is configured with four port groups: “pa-sa-infra-management”, “pa-sa-vmotion”, “pa-sa-tanzu-management”, and “pa-sa-tanzu-workload”.

Portgroup “pa-sa-infra-management” (172.20.10.0/24) will be used to host all management VMs, including the AVI controller, Service Engines, NSX-T Manager, and vCenter. The same portgroup will also be used by the ESXi hosts.

Portgroup “pa-sa-tanzu-management” (172.20.12.0/24) will host the VIP network. A VIP will be assigned to the Supervisor Cluster from this network, and the network will be hosted on the AVI Service Engines.

Portgroup “pa-sa-tanzu-workload” (192.168.150.0/24) will be used by the Supervisor Cluster. All Supervisor VMs will get an IP address from this network.

The AVI controller (NSX-ALB) is already deployed and connected to “pa-sa-infra-management”. The default cloud and Service Engine group are already configured, and a certificate has already been provisioned and installed on the NSX-ALB controller.

Workload Management will deploy SEs in two-arm mode on the compute cluster provisioned for vSphere with Tanzu. Each SE will have three interfaces: one in “pa-sa-infra-management”, a second in “pa-sa-tanzu-management”, and a third in “pa-sa-tanzu-workload”.

The diagram below depicts the different networks consumed by Workload Management, along with the placement of the SEs on the compute cluster.

NB: Only four ESXi hosts are shown in the diagram below; our lab actually has five.

Configuration

To set up the Harbor repository, we need to enable the service on the vCenter used for Workload Management.

In vCenter, navigate to Workload Management > Services and click the “Discover and download available services” icon. In this section, we can discover all the available services that can be deployed on vSphere with Tanzu.

Click on Add New Service. Enabling the service requires a YAML file for the Harbor repository, which can be downloaded from the Discover and download available services page.

NB: Other services, such as Certificate Management, Local configuration interface, and external-dns, can be provisioned in a similar fashion.

After activating the Harbor service on vCenter, we need to enable the Harbor repository service on the Supervisor cluster. To activate the service, click on the Harbor service and then click “Install on Supervisors”.

Now it is time to provide values to the Harbor repository. We used the data values below while enabling the Harbor service.
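As a rough reference, a minimal data-values file for the Harbor service looks something like the following. This is a sketch: the FQDN and password are placeholders from our lab, and the exact key names and schema depend on the Harbor service version, so always consult the values schema shipped alongside the service YAML you downloaded.

```yaml
# Hypothetical minimal Harbor data values -- verify the key names
# against the schema bundled with your Harbor service YAML.
hostname: harbor.vmbeans.com        # FQDN used to reach the registry
harborAdminPassword: "VMware1!"     # initial admin password (placeholder)
```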

It will take around 2-5 minutes to deploy the necessary pods and deployments. Services and a namespace will be provisioned for Harbor on the Supervisor cluster.
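The rollout can be watched from a Supervisor cluster context with kubectl. A quick sketch of the checks, assuming you are already logged in to the Supervisor; the Harbor namespace name shown here is a placeholder, since the service creates one per environment:

```shell
# Find the namespace the Harbor service created (name varies per environment)
kubectl get namespaces | grep -i harbor

# Check that all Harbor pods reach the Running state
# (replace svc-harbor-domain-c9 with the namespace found above)
kubectl get pods -n svc-harbor-domain-c9

# Confirm the LoadBalancer service received a VIP for the Harbor FQDN
kubectl get svc -n svc-harbor-domain-c9
```

Once every pod is Running and the LoadBalancer service shows an external IP, the deployment is complete.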

After the successful deployment of the Harbor repository, we can log in to Harbor using the configured FQDN, username, and password.

Let’s create a new project called my_project and download the CA certificate so that we can log in via the Docker CLI.

Navigate to My_Project > Repositories and click on “Registry Certificate”.

Let’s log in to the Harbor repository from a Linux console. To do so, we copy the earlier downloaded CA certificate onto the Linux machine.

Make a directory: “sudo mkdir /etc/docker/certs.d/harbor.vmbeans.com”

Copy the CA certificate to the above directory and log in to the Harbor repository.
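Putting those steps together, here is a sketch of the commands, assuming the lab FQDN harbor.vmbeans.com and that the certificate downloaded from the Harbor UI is saved as ca.crt in the current directory:

```shell
# Docker trusts a per-registry CA certificate placed under
# /etc/docker/certs.d/<registry-fqdn>/
sudo mkdir -p /etc/docker/certs.d/harbor.vmbeans.com

# Copy the CA certificate downloaded from the Harbor UI
sudo cp ca.crt /etc/docker/certs.d/harbor.vmbeans.com/ca.crt

# Log in with the Harbor admin (or project member) credentials
docker login harbor.vmbeans.com
```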

After successful authentication, we can upload container images to the Harbor repository.

The container image of our application is already available on our system; let’s verify it by running “docker images”. We will then tag the image using “docker tag” and push it with “docker push”.
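The tag-and-push sequence looks roughly like this; the image name myapp is a placeholder for your own application image:

```shell
# Confirm the application image exists locally
docker images

# Tag the local image for the Harbor project (my_project);
# Harbor image paths take the form <fqdn>/<project>/<repo>:<tag>
docker tag myapp:latest harbor.vmbeans.com/my_project/myapp:latest

# Push the tagged image into Harbor
docker push harbor.vmbeans.com/my_project/myapp:latest
```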

Let’s verify in Harbor whether the Docker image is available. To do so, navigate to Harbor > My_Project > Repositories.

Summary

In this series of blogs, we have seen how to deploy various Tanzu components: enabling vSphere with Tanzu using NSX-T, enabling vSphere with Tanzu using an external load balancer (AVI in our case), and creating our first namespace on vSphere with Tanzu. To give granular control over the namespace, we restricted access to it, capped its resource limits, and so on. We also deployed our first K8s cluster on the namespace and gave the developers access so they can build their containerized applications.

In the previous part, we provisioned our first Supervisor Cluster service, Contour, an ingress controller that provides both L4 and L7 load-balancing services; Contour runs on the control plane node and deploys its data plane, Envoy, on the worker nodes.

In this part of the series, we deployed the Harbor repository and pushed our container images to it.

In the upcoming parts, we will set up our first application using the container images available in Harbor. We will also provision a TKG cluster on our Workload Domain using SDDC Manager, see how the TKG cluster is upgraded, and learn how to carry out backup and restore using Velero.
