
 VCF 9 Launched! 

Quick Takeaways. #4CuttheClutter I am going to keep it simple: Compute, Storage, Network, VCF Operations, VCF Automation. 


(1/4) COMPUTE

1. NSX VPCs in vSphere UI - NSX VPCs are now natively integrated into vCenter. Admins can create and manage NSX VPCs and networks directly from the vSphere UI/API/CLI, removing the need to switch between the NSX and vSphere consoles. Even the network topology is visible through vCenter. [Refresher: VPCs are self-contained and cannot communicate with each other by default. They provide simple networking consumption for users while being suited to large multi-tenant environments. Going forward, VPCs will be the core building block of VMware's 'Public Cloud Experience in your Private Cloud'.] 
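To make the consumption model concrete, here is a minimal sketch of what creating a VPC programmatically could look like. The endpoint path and field names below are illustrative assumptions, not the documented VCF 9 API; only the session-header pattern is standard vCenter REST behaviour.

```python
# Hypothetical sketch: field names and endpoint are assumptions for
# illustration, not the actual VCF 9 VPC API.

def build_vpc_payload(name: str, project: str, cidr: str) -> dict:
    """Assemble a request body for creating an NSX VPC (illustrative)."""
    return {
        "name": name,
        "project": project,
        # Each VPC is self-contained with its own private address space,
        # isolated from other VPCs by default.
        "private_cidrs": [cidr],
    }

payload = build_vpc_payload("dev-vpc", "team-a", "10.10.0.0/16")
# A client would POST this to something like
#   https://{vcenter}/api/vcenter/network/vpcs   (hypothetical path)
# carrying a vmware-api-session-id header.
```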


2. NVMe Memory Tiering - You can now utilise NVMe devices as memory. DRAM and NVMe are combined and presented to workloads as a single memory pool. From VCF 9, NVMe is used as a secondary memory tier on ESXi hosts. This increases the memory pool, offloads DRAM, allows better VM/VDI consolidation, and reduces DRAM cost. 
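A quick back-of-the-envelope sketch of how the tier grows the pool. The 1:1 DRAM:NVMe ratio used here is just an illustration; the actual tier sizing is configurable per host.

```python
# Illustrative arithmetic only: how an NVMe tier enlarges the memory
# pool presented to workloads. Ratio values are assumptions.

def tiered_memory_gb(dram_gb: float, nvme_ratio: float) -> float:
    """Total memory seen by workloads: DRAM plus an NVMe tier sized
    as a ratio of DRAM (1.0 = NVMe tier equal to DRAM size)."""
    return dram_gb * (1 + nvme_ratio)

# 1 TiB of DRAM with a same-sized NVMe tier doubles the pool.
print(tiered_memory_gb(1024, 1.0))
```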


3. Increased VM Scale - Support for up to 960 vCPUs and 16TB RAM per VM. Works with AMD Turin/Venice and Intel Sapphire Rapids. Suits high-density and SAP HANA-sized workloads.
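A small sketch of validating a requested monster-VM size against the new per-VM maxima. The helper function is illustrative, not a vSphere API; with pyvmomi the same numbers would go into a reconfigure spec such as `vim.vm.ConfigSpec(numCPUs=..., memoryMB=...)`.

```python
# Illustrative limits check against the VCF 9 per-VM maxima
# (960 vCPUs, 16 TB RAM). Not a vSphere API.

VCF9_MAX_VCPUS = 960
VCF9_MAX_MEM_MB = 16 * 1024 * 1024  # 16 TB expressed in MB

def valid_vm_size(num_cpus: int, memory_mb: int) -> bool:
    """True if the requested size fits within the new per-VM maxima."""
    return 1 <= num_cpus <= VCF9_MAX_VCPUS and memory_mb <= VCF9_MAX_MEM_MB

print(valid_vm_size(960, 16 * 1024 * 1024))  # largest supported VM
```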


4. ESX Live Patching - Critical patches can be applied to ESXi hosts without a reboot or VM evacuation. The host stays operational, reducing the downtime required for patching.


5. Faster vMotion for GenAI workloads - vMotion speed for GPU VMs improved by 6x with parallel TCP, 100Gbps support, and Intel QAT offload. vGPU profiles migrate faster, lowering downtime for production AI/ML workloads.
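To see why the bandwidth matters for large vGPU VMs, here is a rough transfer-time calculation. The VM size and link-efficiency figure are illustrative assumptions, not benchmark numbers from the release.

```python
# Illustrative arithmetic: time to copy a VM's memory during vMotion
# at a given line rate. Efficiency factor is an assumption covering
# protocol overhead; real migration time also depends on dirty-page rate.

def copy_seconds(mem_gb: float, gbps: float, efficiency: float = 0.8) -> float:
    """Seconds to move mem_gb of memory over a link running at gbps
    with the given effective efficiency."""
    gigabits = mem_gb * 8
    return gigabits / (gbps * efficiency)

# A hypothetical 640 GB vGPU VM over 100 Gbps at 80% efficiency.
print(round(copy_seconds(640, 100)))  # 64
```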


6. Unified VCF SDK and OpenAPI 3.0 - This makes automation much easier; previously we had fragmented APIs and SDKs. With a single unified SDK you get consistent client-side API bindings and common tooling across all the VCF components, with the APIs described in OpenAPI 3.0. 
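As a sketch of the consistent client-side pattern, the vCenter REST API uses a create-session-then-token flow: POST to `/api/session` with basic auth, then carry the returned token in a `vmware-api-session-id` header on every call. The helpers below only build the headers; wiring them into an actual HTTP client is left out.

```python
# Sketch of the vCenter REST session pattern. Header names follow the
# vCenter REST API; the flow comments describe the standard usage.

import base64

def basic_auth_header(user: str, password: str) -> dict:
    """Header for the initial POST /api/session call."""
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return {"Authorization": f"Basic {token}"}

def session_header(session_id: str) -> dict:
    """Subsequent requests carry the session id, not credentials."""
    return {"vmware-api-session-id": session_id}

# Flow with any HTTP client:
#   POST https://{vcenter}/api/session  with basic_auth_header(...)
#   -> response body is the session id string
#   then e.g. GET /api/vcenter/vm with session_header(sid)
```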


7. Minimal Supervisor Deployment - The Supervisor can now be brought up with minimal config, unlike previous releases where we had to pre-define load balancers, workload networks, etc. Advanced options like LB, scaling, and workload networks can be added later. Faster initial deployment, less Day 0 overhead.


8. VM Service Enhancements - Consumers can now deploy VMs from ISOs uploaded by VI admins to the associated content library. VM CPU/memory can now be changed by re-assigning the VM Class; previously a new VM had to be redeployed for resource adjustments. VADP workflows are supported for backup/restore. In short, resource changes and image-based deploys are possible without redeploying the VM.
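The VM Service consumption model is Kubernetes-style: a VirtualMachine manifest where resizing means pointing `className` at a different VM Class rather than redeploying. The field names below follow the vm-operator CRD; treat the exact `apiVersion` and class names as assumptions that vary per release.

```python
# Sketch of a VM Service manifest as a plain dict. apiVersion and
# class/image names are assumptions for illustration.

def vm_manifest(name: str, vm_class: str, image: str) -> dict:
    return {
        "apiVersion": "vmoperator.vmware.com/v1alpha1",  # may differ per release
        "kind": "VirtualMachine",
        "metadata": {"name": name},
        "spec": {
            "className": vm_class,   # swap this to change CPU/memory
            "imageName": image,
            "powerState": "poweredOn",
        },
    }

small = vm_manifest("web-01", "best-effort-small", "ubuntu-22.04")
# "Resize" by re-applying the same VM with a bigger class:
large = {**small, "spec": {**small["spec"], "className": "best-effort-large"}}
```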


9. Non-Disruptive Cert Renewal and TLS 1.3 Default - vCenter/ESXi certificates can be rotated without service restarts: no impact to ongoing sessions and no downtime for certificate replacement. vSphere now uses TLS 1.3 by default for all communications, and can revert to TLS 1.2 if needed for legacy apps. 
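On the client side you can mirror the new server default by pinning TLS 1.3 as the minimum protocol version; this is a generic Python `ssl` sketch, not VCF-specific tooling. Relaxing `minimum_version` to `TLSv1_2` would be the equivalent of the legacy fallback.

```python
# Sketch: require TLS 1.3 when connecting to an endpoint, matching
# the new VCF 9 server-side default.

import ssl

def make_tls13_context() -> ssl.SSLContext:
    ctx = ssl.create_default_context()
    ctx.minimum_version = ssl.TLSVersion.TLSv1_3
    return ctx

ctx = make_tls13_context()
print(ctx.minimum_version)  # TLSVersion.TLSv1_3
```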


10. FIPS Compliance - All cryptography is FIPS 140-2/3 certified by default. Meets compliance requirements for regulated environments.

(2/4) STORAGE

11. Storage Cluster Traffic Separation - Previously, vSAN had a single network for both compute and storage, meaning guest VM I/O and vSAN I/O went through the same network. Now, in a disaggregated vSAN architecture, where one cluster provides vSAN storage and another cluster consumes it, you can separate the traffic so that back-end vSAN I/O stays within the ToR switch and only front-end traffic goes to the spine. The vSAN storage cluster's performance stays on par with HCI, with better cost efficiency. Caveat: the separation feature is only available when a vSAN cluster is deployed using the vSAN storage cluster deployment option; it is not available when the vSAN HCI deployment option is selected.

 

12. vSAN File Services Scalability - vSAN File Services provides file services on a per-cluster basis using any vSAN cluster. With vSAN 9, the maximum number of shares per vSAN cluster has increased to 500 (was 100), supporting more NFS exports for multi-tenancy, project, or app file shares. Note, however, that the limit for SMB shares used by Windows systems remains at 100. [Supported: NFS v3/v4.1, SMB v2/v3.] Applicable only to clusters running ESA.
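The interaction of the two caps is easy to get wrong, so here is a tiny sketch encoding them: 500 shares total per cluster, with SMB still limited to 100 of those. The helper is illustrative, not a vSAN API.

```python
# Illustrative check of the vSAN 9 File Services per-cluster limits
# (ESA clusters): 500 shares total, SMB capped at 100.

MAX_SHARES = 500
MAX_SMB_SHARES = 100

def within_limits(nfs_shares: int, smb_shares: int) -> bool:
    return (smb_shares <= MAX_SMB_SHARES
            and nfs_shares + smb_shares <= MAX_SHARES)

print(within_limits(400, 100))  # True: at both caps exactly
print(within_limits(350, 101))  # False: SMB cap exceeded
```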

 

13. Degraded Device Handling for ESA - vSAN ESA auto-detects failing disks using latency and congestion metrics and automatically moves data off suspect devices; no manual action is required. If auto-remediation is not possible, alerts are raised. Reduces downtime risk and admin intervention.
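Conceptually this is sustained-anomaly detection: a device is only suspect when its metrics stay bad across consecutive samples, so one latency spike does not trigger an evacuation. The sketch below is an assumption-laden illustration of that idea, not the actual vSAN algorithm or its thresholds.

```python
# Conceptual sketch only: flag a device as degraded when latency stays
# above a threshold for several consecutive samples. Threshold and
# streak length are illustrative assumptions, not vSAN's values.

DEGRADED_LATENCY_MS = 50
CONSECUTIVE_SAMPLES = 3

def is_degraded(latency_samples_ms: list[float]) -> bool:
    streak = 0
    for latency in latency_samples_ms:
        streak = streak + 1 if latency > DEGRADED_LATENCY_MS else 0
        if streak >= CONSECUTIVE_SAMPLES:
            return True  # would trigger data evacuation off the device
    return False

print(is_degraded([5, 80, 90, 95, 4]))  # True: three bad samples in a row
print(is_degraded([5, 80, 4, 90, 6]))   # False: spikes, but no streak
```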

 

14. vSAN-to-vSAN Replication - An extension of the vSAN Data Protection introduced in 5.2. With vSAN 9, vSAN ESA datastores can replicate snapshots asynchronously between each other (via vSphere Replication) using VMware Live Recovery as an add-on. Here the snapshot is taken at the target, unlike array-based replication, which snapshots at the source. You get a complete copy of the VM at both sites, not just the delta changes, which is beneficial in many use cases. 


15. Compute-only Nodes in Stretched vSAN Clusters - You can now stretch compute-only nodes together with a vSAN storage cluster. Supports synchronous VM/data replication across sites, enabling site-level failover for both compute and storage and protecting against full-site or single-host failures.
