SIOC is considered one of the finest additions to the VMware feature stack. In this blog we will discuss SIOC and its impact on disk usage. Before discussing SIOC, we should understand the relevance of per-VM 'Disk Shares' in VMware. The disk share concept is simple and much like Memory and CPU shares: when there is a resource constraint, the host throttles the disk usage of its VMs by adjusting their disk queue depth based on their share values. If all the VMs have the same disk share, control over the disk is split equally among them; a VM with a higher share value gets precedence over the others.
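To make this concrete, here is a minimal PowerCLI sketch of raising one VM's disk shares above its neighbours'. The VM name 'App01' is a placeholder, and the example assumes an existing PowerCLI connection to vCenter:

# Hypothetical VM name 'App01'; give its first virtual disk 2000 custom shares
$vm = Get-VM -Name "App01"
$disk = Get-HardDisk -VM $vm | Select-Object -First 1
$config = Get-VMResourceConfiguration -VM $vm
Set-VMResourceConfiguration -Configuration $config -Disk $disk -DiskSharesLevel Custom -NumDiskShares 2000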
Simple and effective, right? Then why did VMware introduce a new feature called Storage I/O Control (SIOC)? The answer is simple: 'Disk Shares' arbitrates between the VMs on a single host, whereas SIOC becomes relevant at the cluster level. SIOC is configured per datastore and is disabled by default. Once enabled, SIOC becomes active based on the datastore's latency. The current latency of the datastore is stored in a hidden file on the datastore called .iormstats.sf. The default latency threshold is 30 ms, which is of course configurable, and it can be tuned to the underlying storage technology: for example, below 20 ms for SSD and a higher value for SATA. Note that SIOC requires a VMware Enterprise Plus license.

A datastore-wide disk scheduler (the PARDA control algorithm) runs on each host sharing that datastore. This scheduler triggers the 'Latency Computation' and 'Window Size Computation' algorithms. Latency Computation detects whether SIOC needs to throttle queues to ensure each VM gets its fair share; Window Size Computation calculates what the queue depth should be for your host. Once PARDA has calculated the queue depth, the Local Host Scheduler takes over from there. The queue depth defaults to 32 and is adjusted to the value calculated by PARDA, which never drops below 4.

VMware recommends enabling this feature for better VM performance. It is not at all advisable to leave a datastore unconfigured with SIOC when other datastores on the same physical array are configured with it, because workloads on the non-SIOC datastore can then consume array resources unchecked while SIOC throttles the rest.
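Enabling SIOC and tuning the threshold can be done from the vSphere Client or, as a minimal sketch, with PowerCLI; the datastore name 'DS01' is a placeholder:

# Hypothetical datastore 'DS01'; enable SIOC and lower the congestion threshold for SSD-backed storage
Get-Datastore -Name "DS01" | Set-Datastore -StorageIOControlEnabled $true -CongestionThresholdMillisecond 20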
Pictorial representation of the SIOC scenario:

Suppose we have three VMs with equal disk shares of 1000 each: two VMs on one ESXi host and the third VM on another. With SIOC disabled, each host throttles its own VMs independently, so even though all VMs are configured with equal shares, the VM on the second ESXi host enjoys a greater slice of the datastore during a resource constraint: it has its host's device queue to itself, while the other two VMs split theirs.
SIOC disabled
With SIOC enabled, VMs across hosts enjoy a fair share: all three VMs receive an equal portion of the datastore's throughput, as their identical share values dictate.
Issue

VMware P2V conversion of a Windows 2003 server fails with the below error:
FAILED: Unable to create a VSS snapshot of the source volume(s). Error code: 2147754774 (0x80042316)

Cause
This is a known Microsoft issue and occurs when the Windows Volume Shadow Copy Service (VSS) is in a broken state.
Resolution
Re-register Windows Volume Shadow Copy Service (VSS) to resolve this issue.
Steps to re-register VSS (From Microsoft article):
1. Click Start, click Run, type cmd, and then click OK.
2. Type the following commands at a command prompt. Press ENTER after you type each command.
cd /d %windir%\system32
net stop vss
net stop swprv
regsvr32 ole32.dll
regsvr32 oleaut32.dll
regsvr32 vss_ps.dll
vssvc /register
regsvr32 /i swprv.dll
regsvr32 /i eventcls.dll
regsvr32 es.dll
regsvr32 stdprov.dll
regsvr32 vssui.dll
regsvr32 msxml.dll
regsvr32 msxml3.dll
regsvr32 msxml4.dll
Note: The last command may not run successfully.
3. Rerun the converter.
Note: This article is not for use with Windows Vista, Windows Server 2008, or later operating systems. Starting with Windows Vista and Windows Server 2008, Windows component installation is manifest based. If you try to manually register specific components, such as those described in this "Resolution" section, on the operating systems mentioned in this note, unexpected results may occur that may require reinstalling Windows to resolve.
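Before rerunning the converter, it can be worth confirming that VSS is healthy again. This quick check is not part of the Microsoft article; it simply restarts the service and lists the VSS writers, all of which should report a 'Stable' state with no errors:

net start vss
vssadmin list writers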
Issue

While performing vMotion, the operation fails at 14% with the below error:

vMotion migration [-1062731490:1419235061251156] failed to create a connection with remote host: The ESX hosts failed to connect over the VMotion network
Migration [-1062731490:1419235061251156] failed to connect to remote host <Destination vMotion IP> from host: Network unreachable
The vMotion failed because the destination host did not receive data from the source host on the vMotion network. Please check your vMotion network settings and physical network configuration and ensure they are correct.
Resolution
I've already penned a post on VMware vMotion failing at 14%; this blog is an extended version of that post. If none of the steps mentioned in my previous post helped you, then you are on the right page.
Check whether vMotion is selected on multiple VMkernel NICs in the ESXi host, as shown below. Unless you have deliberately set up multi-NIC vMotion, vMotion should be ticked on only one VMkernel adapter per host; if it is enabled on several adapters on different networks, the hosts may attempt the connection over an interface that cannot reach the destination.
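One quick way to audit this is a PowerCLI query that lists every VMkernel adapter on the host together with its vMotion flag; the host name 'esx01.lab.local' is a placeholder:

# Hypothetical host name; shows each VMkernel NIC, its IP, and whether vMotion is ticked
Get-VMHostNetworkAdapter -VMHost "esx01.lab.local" -VMKernel | Select-Object Name, IP, VMotionEnabled

If more than one adapter unexpectedly shows VMotionEnabled as True, untick vMotion on all but the intended one and retry the migration. You can also test reachability from the source host's ESXi shell with vmkping -I vmkN <destination vMotion IP>, which forces the ping out of the specified VMkernel interface.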