vSphere with Tanzu (VKS) integration with NSX-T Part-3

In the first part of this series, we enabled vSphere with Tanzu on a compute cluster. This allowed developers to use the cluster to run K8s and container-based applications. vSphere with Tanzu leverages NSX-T as the networking solution, vSAN as the storage solution, and so on.

In the second part, we successfully created our first namespace called namespace-01. We provided the necessary permissions, storage policies, VM classes, and resource limits. We also added a content library to the namespace.

In this part, we will provision our first vSphere Pod on the previously created “namespace-01“. We will observe the resulting changes in the NSX-T and vSphere environments, and we will also try to access the application from the outside world.

Diagram

To deploy Kubernetes on the vSphere cluster, we have provisioned three ESXi hosts. These hosts are connected to a Distributed Switch (vDS) on uplink-1 (vmnic0) and uplink-2 (vmnic1) and are prepared for NSX-T consumption. The compute cluster is already enabled for vSphere HA (default settings) and DRS (fully automated). An edge node “SA-EDGE-01“ is already deployed, along with a Tier-0 gateway (K8s-Tier-0) that is connected to the physical environment via BGP; all routes are redistributed to the physical router. For storage, a vSAN (OSA) datastore has been created to provide storage to the K8s Pods.

In the first blog, we already enabled Workload Management via the vSphere Client. A Supervisor cluster of three control plane VMs has been deployed, and a VIP is assigned to this cluster. The NSX-T native load balancer is used to distribute traffic to this cluster.

In the second blog, we deployed the namespace and granted the necessary permissions for our developer team to access it. Using this “namespace-01“, we will deploy our first vSphere Pod.

NB: In an upcoming part, we will check the prerequisites required to enable Workload Management for a VI workload domain via SDDC Manager.

Configuration

Before we continue, we need to access our namespace from a Linux machine using kubectl (the K8s command-line tool) with the vSphere plugin. We log in to our Linux machine and then connect to the VIP of our Supervisor cluster using the developer team credentials.

kubectl vsphere login --server=192.168.30.33 --insecure-skip-tls-verify

192.168.30.33 is our Supervisor VIP address, which we already validated in the first and second blogs. After a successful login, the devops01 user has access to namespace-01.

To switch to our namespace, we use “kubectl config use-context namespace-01“. As our devops01 user already has access to the namespace, the context switch succeeds without issue.
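Taken together, the login and context-switch steps look like the following sketch. The --vsphere-username flag is an assumption here; if it is omitted, the plugin prompts for credentials interactively.

kubectl vsphere login --server=192.168.30.33 --insecure-skip-tls-verify --vsphere-username devops01@vsphere.local
kubectl config get-contexts        # list the contexts (namespaces) available to this user
kubectl config use-context namespace-01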

Now is the right time to work on our YAML file, which holds the configuration needed to create our first vSphere Pod.

In the above snapshot, we are validating the content of the deployment YAML file. The file includes the Deployment details and the Pod template, summarized below; a sketch of the full file follows the list.

kind —> Deployment.

Name of the deployment —> nginx-deployment.

Label imposed on the deployment —> nginx.

Replicas —> 1.

Match labels of the Pod —> nginx.

In the section below, we define the template for the Pod.

Labels —> nginx (imposed on the Pod).

Name of the container —> nginx.

Registry hosting the container image —> 172.20.10.30.

Name of the image —> nginx:1.16.

Port on which the Nginx application listens —> 80.
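Putting these fields together, a minimal sketch of nginx-deployment.yaml could look like the following. The exact registry path is an assumption; it may include a project prefix, so adjust it to match your private registry.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        # Registry path is an assumption; adjust to your private registry layout
        image: 172.20.10.30/nginx:1.16
        ports:
        - containerPort: 80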

After reviewing the configuration, we create the K8s Deployment from nginx-deployment.yaml. It will create one vSphere Pod, exposed on port 80.

To create the deployment, run “kubectl create -f nginx-deployment.yaml“.

The deployment is successfully created. To verify the creation of the vSphere Pod, run “kubectl get pods“.

Let’s carry out a last check to confirm the creation of the K8s components such as the Pod, the Deployment, and so on. This check can be performed using “kubectl get all“.
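To reach the application from the outside world, as promised at the start of this part, the usual approach is a Service of type LoadBalancer, which the Supervisor realizes through the NSX-T native load balancer. A minimal sketch, with the Service name and file name being assumptions:

apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: LoadBalancer
  selector:
    app: nginx        # matches the label on the Pods created by nginx-deployment
  ports:
  - port: 80
    targetPort: 80

After creating it with “kubectl create -f nginx-service.yaml“, “kubectl get svc“ shows the external IP that NSX-T allocates from the ingress CIDR configured during Workload Management enablement.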

Let’s verify our newly created vSphere Pod in the vSphere Client under Workload Management. Navigate to Workload Management > Namespaces > namespace-01 > Compute > vSphere Pods.

To verify the deployment, navigate to Workload Management > Namespaces > namespace-01 > Compute > Deployments.

As a final verification, let’s review the details of our Nginx Pod in the vSphere UI.

Summary

In the first part of this series, we enabled vSphere with Tanzu on a compute cluster. This allowed developers to use the cluster to run K8s and container-based applications. vSphere with Tanzu leverages NSX-T as the networking solution, vSAN as the storage solution, and so on.

In the second part, we successfully created our first namespace called namespace-01. We provided the necessary permissions, storage policies, VM classes, and resource limits. We also added a content library to the namespace.

In this part, we successfully deployed our first vSphere Pod on the namespace created in the earlier part.

NB: Before setting up our first TKG cluster, we will provision vSphere with Tanzu using NSX-ALB. So, in the next blog, we will configure Workload Management using a vDS and an external load balancer.

In upcoming parts, we will provision our first TKG cluster, which will allow us to use the Harbor registry. Along with it, we will deploy our first application using K8s. We will also verify the requirements from SDDC Manager to allow VMware Kubernetes Services on a VI workload domain.
