Key Updates and Recommendations:
1. Host Considerations:
   - BIOS Settings: Adjusting the BIOS configuration of the physical servers that host ESXi is emphasized as a way to reduce latency.
   - Enhanced vMotion Compatibility (EVC): Disabling EVC is recommended where it is not required, since EVC masks newer CPU features that latency-sensitive workloads could otherwise exploit.
   - vMotion and Distributed Resource Scheduler (DRS): Careful scheduling of vMotion and DRS activities is advised to minimize disruptions to latency-sensitive applications.
   - Advanced Settings: New insights are provided on settings such as disabling action affinity, increasing ring buffer sizes, enabling SplitRx and SplitTx, and disabling queue pairing to optimize performance (see the configuration sketch after this list).
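To make the host-level advanced settings above more concrete, here is a minimal sketch of how one such option could be applied programmatically with the pyVmomi SDK. It is illustrative rather than taken from the white paper: the vCenter and host names are placeholders, and the option key shown (a queue-pairing control) is an assumption that should be verified against the paper and the specific ESXi build before use.

```python
# Minimal sketch (assumptions): apply an ESXi host advanced option via pyVmomi.
# Host names, credentials, and the option key below are placeholders/assumptions;
# verify the exact key and value against the white paper before changing anything.
import ssl
from pyVim.connect import SmartConnect, Disconnect

si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="changeme",
                  sslContext=ssl._create_unverified_context())
try:
    content = si.RetrieveContent()
    # Locate the ESXi host by DNS name (placeholder name).
    host = content.searchIndex.FindByDnsName(dnsName="esxi01.example.com",
                                             vmSearch=False)
    adv = host.configManager.advancedOption

    # Read the current option so the replacement value keeps the declared type,
    # then disable queue pairing (assumed key; confirm on your ESXi version).
    opt = adv.QueryOptions(name="Net.NetNetqRxQueueFeatPairEnable")[0]
    opt.value = type(opt.value)(0)
    adv.UpdateOptions(changedValue=[opt])
finally:
    Disconnect(si)
```

The same QueryOptions/UpdateOptions pattern applies to any other host advanced option the paper calls out.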
2. Virtual Machine (VM) Considerations:
   - Rightsizing VMs: Sizing VMs appropriately for the host hardware and configuring the correct virtual topology (vTopology) are emphasized as essential.
   - Virtual Hardware Version: Using the latest virtual hardware version is recommended to take advantage of performance improvements.
   - Hot-Add Features: Disabling CPU and memory hot-add features may yield performance gains for latency-sensitive workloads, in part because enabling CPU hot-add disables virtual NUMA (vNUMA).
   - Latency Sensitivity Setting: Enabling the latency sensitivity feature on a per-VM basis is discussed as a method to reduce latency (see the sketch after this list).
   - Network Interface: VMXNET3 network adapters are recommended because of their performance benefits.
   - Transmit (Tx) and Receive (Rx) Balancing: Strategies are presented to enhance network performance by balancing Tx and Rx workloads and affinitizing the VM’s Tx thread.
   - NUMA Node Association: Associating VMs with specific Non-Uniform Memory Access (NUMA) nodes is advised to optimize memory access times.
   - DirectPath I/O and SR-IOV: For scenarios that require minimal latency, the use of DirectPath I/O or Single Root I/O Virtualization (SR-IOV) is suggested.
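Several of the per-VM recommendations above (disabling hot-add, enabling latency sensitivity, and per-vNIC advanced options) map onto a single VM reconfiguration call. The sketch below, again using pyVmomi, assumes the connection object si from the previous example; the VM name and the ethernet0.ctxPerDev key are placeholders/assumptions, and the latency sensitivity setting’s reservation prerequisites should be confirmed against the white paper.

```python
# Minimal sketch (assumptions): per-VM latency tuning via pyVmomi.
# Assumes "si" is an existing vCenter connection (see the host example above);
# the VM name and the ethernet0.ctxPerDev key are placeholders/assumptions.
from pyVmomi import vim

content = si.RetrieveContent()
vm = content.searchIndex.FindByDnsName(dnsName="latency-vm.example.com",
                                       vmSearch=True)

spec = vim.vm.ConfigSpec()
# Disable CPU/memory hot-add; CPU hot-add also disables vNUMA when left enabled.
spec.cpuHotAddEnabled = False
spec.memoryHotAddEnabled = False
# Per-VM latency sensitivity = high (note: requires a full memory reservation
# and, in practice, a full CPU reservation as well).
spec.latencySensitivity = vim.LatencySensitivity(level="high")
# Example advanced vNIC option (assumed key): dedicate a Tx thread to vNIC 0.
spec.extraConfig = [vim.option.OptionValue(key="ethernet0.ctxPerDev", value="1")]

task = vm.ReconfigVM_Task(spec=spec)  # returns a Task; poll or wait as needed
```

ReconfigVM_Task returns a vSphere task object; a real script would wait for it to complete and check the result before moving on.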
3. Networking Enhancements:
   - Enhanced Datapath: For Network Functions Virtualization (NFV) workloads, enabling the enhanced datapath feature is recommended to improve packet processing efficiency.
   - Data Processing Units (DPUs): Offloading vSphere services to DPUs, also known as SmartNICs, is introduced as a strategy to reduce CPU overhead and latency.
These updates reflect VMware’s commitment to providing robust support for latency-sensitive applications in virtualized environments. By implementing the recommended configurations and leveraging the new features in vSphere 8.x, organizations can achieve significant performance improvements for their critical workloads.
For a detailed exploration of these recommendations and additional insights, refer to the full technical white paper, “Performance Tuning for Latency-Sensitive Workloads: VMware vSphere 8.”