
 VCF 9 Launched! 

Quick Takeaways. #4CuttheClutter. I am going to keep it simple: Compute, Storage, Network, VCF Operations, VCF Automation.


(1/4) COMPUTE

1. NSX VPCs in vSphere UI - NSX VPCs are now natively integrated with vCenter. Admins can create and manage NSX VPCs and networks directly from the vSphere UI/API/CLI, removing the need to switch between the NSX and vSphere consoles. Even the network topology is visible through vCenter. [Refresher: VPCs are self-contained and cannot communicate with each other by default. They provide simple networking consumption for users while remaining suited to large multi-tenant environments. Going forward, VPCs will be the core building blocks of VMware's 'Public Cloud Experience in your Private Cloud'.]
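A rough sketch of what VPC creation through the vCenter API could look like. The endpoint path and payload field names below are illustrative assumptions, not the documented VCF 9 API surface:

```python
# Hypothetical sketch: creating an NSX VPC via the vCenter REST API.
# Field names and the endpoint path are assumptions for illustration only.
import json

def build_vpc_payload(name: str, cidr: str) -> dict:
    """Assemble a minimal VPC creation request body."""
    return {
        "name": name,
        "private_ips": [cidr],       # self-contained address space
        "ip_address_type": "IPV4",
    }

payload = build_vpc_payload("dev-team-vpc", "10.10.0.0/16")
# A client would POST this body to a vCenter endpoint such as
# /api/vcenter/network/vpcs (illustrative path only):
request_body = json.dumps(payload)
```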


2. NVMe Memory Tiering - You can now use NVMe devices as memory. DRAM and NVMe capacity are combined and presented as a single memory pool to workloads. From VCF 9, NVMe is used as a secondary memory tier on ESXi hosts. This increases the memory pool, offloads DRAM, allows better VM/VDI consolidation, and reduces DRAM cost.
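A back-of-envelope sketch of how the tier expands the host memory pool. The default DRAM:NVMe ratio used here (NVMe tier sized up to 100% of DRAM) is an assumption for illustration; check the sizing guidance in the docs for the real defaults:

```python
# Illustrative arithmetic for NVMe memory tiering (ratio is an assumption).
def effective_memory_gb(dram_gb: float, nvme_tier_gb: float,
                        max_ratio: float = 1.0) -> float:
    """Memory presented to workloads: DRAM plus the NVMe tier,
    with the NVMe tier capped at max_ratio * DRAM."""
    usable_nvme = min(nvme_tier_gb, dram_gb * max_ratio)
    return dram_gb + usable_nvme

# A host with 512 GB DRAM and a 1 TB NVMe tier device:
total = effective_memory_gb(512, 1024)   # NVMe capped at 512 GB
```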


3. Increased VM Scale - Support for up to 960 vCPUs and 16TB RAM per VM. Works with AMD Turin/Venice and Intel Sapphire Rapids. Suits high-density and SAP HANA-sized workloads.


4. ESX Live Patching - Critical patches can be applied to ESXi hosts without a reboot or VM evacuation. The host stays operational, reducing the downtime required for patching.


5. Faster vMotion for GenAI workloads - vMotion speed for GPU VMs improved by 6x with parallel TCP, 100Gbps support, and Intel QAT offload. vGPU profiles migrate faster, lowering downtime for production AI/ML workloads.


6. Unified VCF SDK and OpenAPI 3.0 - Automation is now much easier; previously we had fragmented APIs and SDKs. With a single unified SDK you get consistent client-side API bindings and common tooling across all VCF services.
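A sketch of what consistent client-side bindings can look like once every service is described by one OpenAPI 3.0 spec. The base URL scheme and session handling below are illustrative assumptions, not the shipped SDK:

```python
# Hypothetical shared-session helper a generated VCF SDK could build on.
class VcfSession:
    """One auth/header convention reused by every service client."""
    def __init__(self, host: str, token: str):
        self.base_url = f"https://{host}/api"
        self.headers = {
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        }

    def url(self, path: str) -> str:
        """Join the base URL with a service path."""
        return f"{self.base_url}/{path.lstrip('/')}"

s = VcfSession("vcf.example.local", "abc123")
hosts_url = s.url("/hosts")   # every service call shares one convention
```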


7. Minimal Supervisor Deployment - The Supervisor can now be brought up with minimal configuration, unlike before, when the load balancer, workload networks, etc. had to be pre-defined. Advanced options such as the load balancer, scaling, and workload networks can be added later. Faster initial deployment, less Day 0 overhead.


8. VM Service Enhancements - Consumers can now deploy VMs from ISOs uploaded by VI admins to the associated content library. VM CPU/memory can now be changed by re-assigning the VM Class; previously a new VM had to be redeployed for resource adjustments. VADP workflows are supported for backup/restore. In short, resource changes and image-based deploys are possible without redeploying the VM.
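The VM Class re-assignment can be pictured as a simple spec change on the VM manifest. The manifest shape below loosely follows the VM Operator VirtualMachine CRD, but the API version and field names are assumptions to be checked against the VCF 9 docs:

```python
# Illustrative resize via VM Class re-assignment (no redeploy).
# Manifest fields are assumptions modelled on the VM Operator CRD.
def resize_vm(manifest: dict, new_class: str) -> dict:
    """Return a copy of the VM manifest pointing at a different VM Class."""
    return {**manifest, "spec": {**manifest["spec"], "className": new_class}}

vm = {
    "apiVersion": "vmoperator.vmware.com/v1alpha2",
    "kind": "VirtualMachine",
    "metadata": {"name": "app-vm-01"},
    "spec": {"className": "best-effort-small", "imageName": "ubuntu-22.04"},
}
bigger = resize_vm(vm, "best-effort-large")  # spec change only
```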


9. Non-Disruptive Cert Renewal and TLS 1.3 Default - vCenter/ESXi certificates can be rotated without service restarts and with no impact to ongoing sessions; no downtime for certificate replacement. vSphere now uses TLS 1.3 by default for all communications, with the option to revert to TLS 1.2 for legacy apps.
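For a feel of what a TLS 1.3 floor means on the client side, here is the same policy expressed with Python's standard `ssl` module (this mirrors the vSphere default; it is not vSphere configuration):

```python
# Pinning a client to TLS 1.3, analogous to vSphere's new default.
import ssl

ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_3   # refuse anything older

# Legacy-app fallback, analogous to reverting vSphere to TLS 1.2:
legacy_ctx = ssl.create_default_context()
legacy_ctx.minimum_version = ssl.TLSVersion.TLSv1_2
```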


10. FIPS Compliance - All cryptography is FIPS 140-2/3 certified by default. Meets compliance requirements for regulated environments.


11. Storage Cluster Traffic Separation - Previously, vSAN had a single network for both compute and storage, meaning guest VM I/O and vSAN I/O shared the same network. Now, in a disaggregated vSAN architecture, where one cluster provides vSAN storage and another cluster consumes it, you can separate the traffic so that back-end vSAN I/O stays within the ToR switches and only front-end traffic goes to the spine. vSAN storage cluster performance stays on par with HCI, with better cost efficiency. Caveat: this separation is only available when a cluster is deployed using the vSAN storage cluster deployment option, not when the vSAN HCI deployment option is selected.

 

12. vSAN File Services Scalability - vSAN File Services provides file services on a per-cluster basis using any vSAN cluster. With vSAN 9, the maximum number of shares per vSAN cluster increases to 500 (from 100), supporting more NFS exports for multi-tenancy and per-project or per-app file shares. Note that the limit for SMB shares, used by Windows systems, remains at 100. [Supported: NFS v3/v4.1, SMB v2/v3.] The increase applies only to clusters running ESA.
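The two limits interact (SMB counts toward the overall cap but also has its own), which a small check function makes explicit. The validation logic is mine, encoding the limits quoted above:

```python
# Encodes the vSAN 9 File Services limits stated above:
# up to 500 shares per ESA cluster in total, SMB still capped at 100.
TOTAL_SHARE_LIMIT = 500
SMB_SHARE_LIMIT = 100

def shares_within_limits(nfs_shares: int, smb_shares: int) -> bool:
    """True if the requested share counts fit the per-cluster limits."""
    return (smb_shares <= SMB_SHARE_LIMIT
            and nfs_shares + smb_shares <= TOTAL_SHARE_LIMIT)

shares_within_limits(400, 100)   # fits: 500 total, SMB at its cap
shares_within_limits(300, 150)   # rejected: SMB over 100
```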

 

13. Degraded Device Handling for ESA: vSAN ESA auto-detects failing disks using latency and congestion metrics and automatically moves data off suspect devices, with no manual action required. If auto-remediation is not possible, alerts are raised. This reduces downtime risk and admin intervention.
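A toy sketch of latency-based suspect detection in the spirit of this feature. The threshold and the averaging heuristic are invented for illustration; the real heuristics are internal to vSAN:

```python
# Toy latency-based degraded-device detection (threshold is an assumption).
from statistics import mean

def is_suspect(latencies_ms: list, threshold_ms: float = 50.0) -> bool:
    """Flag a device whose average recent I/O latency exceeds the threshold."""
    return len(latencies_ms) > 0 and mean(latencies_ms) > threshold_ms

healthy = [2.1, 3.0, 2.8, 4.2]
failing = [80.0, 120.5, 95.3, 110.0]
# is_suspect(healthy) -> no action; is_suspect(failing) -> evacuate data
```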

 

14. vSAN-to-vSAN Replication - An extension of the vSAN Data Protection introduced in 5.2. With vSAN 9, vSAN ESA datastores can replicate snapshots to each other (asynchronously, via vSphere Replication) using Live Recovery as an add-on. Here the snapshot is taken at the target, unlike array-based replication, which snapshots at the source. You get a complete copy of the VM at both sites, not just the delta changes, which is beneficial in many use cases.


15. Compute-only in Stretched vSAN Clusters: Compute-only nodes can now be stretched with a vSAN storage cluster, with synchronous VM/data replication across sites. This enables site-level failover for both compute and storage, protecting against full-site or single-host failures.
