VCF 9 Launched!
Quick Takeaways. #4CuttheClutter I am going to keep it simple: Compute, Storage, Network, VCF Operations, VCF Automation.
(1/4) COMPUTE
1. NSX VPCs in vSphere UI - NSX VPCs are now natively integrated into vCenter. Admins can create and manage NSX VPCs and networks directly from the vSphere UI/API/CLI, removing the need to switch between the NSX and vSphere consoles. Even the network topology is visible through vCenter. [Refresher: VPCs are self-contained and cannot communicate with each other by default. They give users simple networking consumption while remaining suited to large multi-tenant environments. Going forward, VPCs will be the core building blocks of VMware's 'Public Cloud Experience in your Private Cloud'.]
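To make the API angle concrete, here is a minimal sketch of creating a VPC in the NSX policy-API style. The endpoint host, path, payload fields, and credentials are all assumptions for illustration; check the VCF 9 API reference for the exact resource names.

```python
# Hedged sketch: create/update an NSX VPC in the policy-API style.
# Path and payload fields are assumptions - verify against your docs.
import requests

ENDPOINT = "nsx.example.com"          # assumption: NSX Manager (or its VCF 9 proxy)
session = requests.Session()
session.auth = ("admin", "changeme")  # assumption: basic auth for brevity
session.verify = False                # lab only; use CA-signed certs in production

vpc_spec = {
    "display_name": "team-a-vpc",
    "description": "Self-service VPC for team A",
}

# PATCH in the policy-API style is idempotent: create or update.
resp = session.patch(
    f"https://{ENDPOINT}/policy/api/v1/orgs/default/projects/team-a/vpcs/team-a-vpc",
    json=vpc_spec,
)
resp.raise_for_status()
print("VPC created/updated:", resp.status_code)
```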
2. NVMe Memory Tiering - You can now use NVMe devices as memory. DRAM and NVMe are combined and presented to workloads as a single memory pool. Starting with VCF 9, NVMe serves as a secondary memory tier on ESXi hosts. This increases the memory pool, offloads colder pages from DRAM, allows better VM/VDI consolidation, and reduces DRAM cost.
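A back-of-the-envelope sizing sketch. The 1:1 NVMe:DRAM ratio below is an assumption for illustration; the actual ratio is configurable and hardware-dependent, so check the VCF 9 docs for supported values.

```python
# Rough capacity math for memory tiering, under an assumed tier ratio.
def effective_host_memory_gb(dram_gb: float, nvme_tier_gb: float,
                             max_ratio: float = 1.0) -> float:
    """DRAM plus the usable slice of the NVMe tier, capped by the ratio."""
    usable_nvme = min(nvme_tier_gb, dram_gb * max_ratio)
    return dram_gb + usable_nvme

# Example: a host with 1 TB DRAM and a 1 TB NVMe tier device
print(effective_host_memory_gb(1024, 1024))  # -> 2048.0 GB addressable
```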
3. Increased VM Scale - Support for up to 960 vCPUs and 16TB RAM per VM. Works with AMD Turin/Venice and Intel Sapphire Rapids. Suits high-density and SAP HANA-sized workloads.
4. ESX Live Patching - Critical patches can be applied to ESXi hosts without a reboot or VM evacuation. The host stays operational, reducing the downtime required for patching.
5. Faster vMotion for GenAI workloads - vMotion speed for GPU VMs is improved 6x with parallel TCP streams, 100 Gbps support, and Intel QAT offload. vGPU profiles migrate faster, lowering downtime for production AI/ML workloads.
6. Unified VCF SDK and OpenAPI 3.0 - Automation gets much easier. Before, we had fragmented APIs and SDKs; now a single unified SDK gives you consistent client-side API bindings and tooling across the VCF components, with the APIs published as OpenAPI 3.0 specs.
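As a taste of what consistent REST consumption looks like, here is a sketch using two long-standing vSphere Automation endpoints (/api/session and /api/vcenter/vm); the unified SDK's generated bindings wrap exactly this kind of plumbing. Hostname and credentials are placeholders.

```python
# Session-token auth plus a simple inventory call against vCenter's REST API.
import requests

VCENTER = "vcenter.example.com"  # placeholder hostname

# Create a session token (vSphere Automation API)
token = requests.post(
    f"https://{VCENTER}/api/session",
    auth=("administrator@vsphere.local", "changeme"),
    verify=False,  # lab only
).json()

# Use the token for subsequent calls, e.g. list VMs
vms = requests.get(
    f"https://{VCENTER}/api/vcenter/vm",
    headers={"vmware-api-session-id": token},
    verify=False,  # lab only
).json()
for vm in vms:
    print(vm["name"], vm["power_state"])
```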
7. Minimal Supervisor Deployment - The Supervisor can now be brought up with minimal config, unlike before, when we had to pre-define the load balancer, workload networks, etc. Advanced options like LB, scaling, and workload networks can be added later. Faster initial deployment, less Day 0 overhead.
8. VM Service Enhancements - Consumers can now deploy VMs from ISOs uploaded by VI admins to the associated content library. VM CPU/memory can be changed by re-assigning the VM Class; previously we had to redeploy a new VM for resource adjustments. VADP workflows are supported for backup/restore. So resource changes and image-based deployments are possible without redeploying the VM.
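Since VM Service VMs are Kubernetes custom resources, a class re-assignment is just a patch to the VirtualMachine object. A hedged sketch follows, assuming the vmoperator.vmware.com CRD group; verify the exact API version and field names in your Supervisor with kubectl explain virtualmachine.spec.

```python
# Hedged sketch: resize a VM Service VM by re-assigning its VM Class.
# Group/version and field names are assumptions - check your Supervisor CRDs.
from kubernetes import client, config

config.load_kube_config()  # assumes a kubeconfig for your Supervisor namespace
api = client.CustomObjectsApi()

api.patch_namespaced_custom_object(
    group="vmoperator.vmware.com",
    version="v1alpha2",          # assumption: version varies by release
    namespace="team-a",          # placeholder namespace
    plural="virtualmachines",
    name="web-01",               # placeholder VM name
    body={"spec": {"className": "best-effort-large"}},  # the new VM Class
)
```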
9. Non-Disruptive Cert Renewal and TLS 1.3 Default - vCenter/ESXi certificates can be rotated without service restarts, with no impact to ongoing sessions and no downtime for certificate replacement. vSphere now uses TLS 1.3 by default for all communications; you can revert to TLS 1.2 if needed for legacy apps.
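A quick way to verify what your endpoints actually negotiate, using only the Python standard library (hostname is a placeholder):

```python
# Report the TLS version negotiated with a vCenter or ESXi endpoint.
import socket
import ssl

def negotiated_tls(host: str, port: int = 443) -> str:
    ctx = ssl.create_default_context()
    ctx.check_hostname = False       # lab only: skip hostname verification
    ctx.verify_mode = ssl.CERT_NONE  # lab only: skip cert validation
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.version()     # e.g. 'TLSv1.3'

print(negotiated_tls("vcenter.example.com"))
```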
10. FIPS Compliance - All cryptography is FIPS 140-2/140-3 certified by default, meeting compliance requirements for regulated environments.
11. Storage Cluster Traffic Separation - Previously, vSAN used one network for both compute and storage, meaning guest VM I/O and vSAN I/O shared the same network. Now, in a disaggregated vSAN architecture where one cluster provides vSAN storage and another cluster consumes it, you can separate the traffic so that back-end vSAN I/O stays within the ToR switches and only front-end traffic goes to the spine. vSAN storage cluster performance stays on par with HCI, with better cost efficiency. Caveat: the separation feature is only available when you deploy a vSAN cluster using the vSAN storage cluster deployment option, not when the vSAN HCI deployment option is selected.
12. vSAN File Services Scalability - vSAN File Services provides file services on a per-cluster basis using any vSAN cluster. With vSAN 9, the maximum number of shares per vSAN cluster increases to 500 (was 100), supporting more NFS exports for multi-tenant, per-project, or per-app file shares. Note that the limit for SMB shares (used by Windows systems) remains at 100. [Supported: NFS v3/v4.1, SMB v2/v3.] Applies only to clusters running ESA.
13. Degraded Device Handling for ESA - vSAN ESA auto-detects failing disks using latency and congestion metrics and automatically moves data off suspect devices; no manual action required. If auto-remediation is not possible, alerts are raised. Reduces downtime risk and admin intervention.
14. vSAN-to-vSAN Replication - An extension of the vSAN Data Protection introduced in VCF 5.2. With vSAN 9, vSAN ESA datastores can replicate snapshots between each other asynchronously via vSphere Replication, using VMware Live Recovery as an add-on. The snapshot is performed at the target, unlike array-based replication, which snapshots at the source. You get a complete copy of the VM at both sites, not just the delta changes, which is beneficial in many use cases.
15. Compute-only Nodes in Stretched vSAN Clusters - You can now stretch compute-only nodes with a vSAN storage cluster. Supports synchronous VM/data replication across sites and enables site-level failover for both compute and storage, protecting against full-site or single-host failures.