Not sure the SolidCP forum is the best place to hold an IOPS discussion, but here are my 2 cents:

In my experience, IOPS depend a lot on your hardware: even the type of SSD you buy (consumer, DC models, etc.) matters, and then RAID, RAID cards, cache, and so on all play a role.

Unless you're testing single disks of a specific generation and type, I don't think you can average out IOPS in general...

But for all intents and purposes: just set what you think is best for your hardware and customers, make sure you test it, and run a few overload tests (multiple VMs under full load while watching the node as a whole, etc.) to determine what works best for your hardware and environment :)

At my own company we run these tests for each new server build (different hardware = new tests).

Regards,

Marco

---

I would say Hyper-V IOPS != standard IOPS

---

The basics are simple: RAID introduces a write penalty. The question, of course, is how many IOps you need per volume and how many disks that volume should contain to meet the requirements. First, the disk types and the number of IOps. Keep in mind I've tried to keep values on the safe side:

Disk type   IOPS
SSD         6000
15K         175
10K         125
7200        75
5400        50

So how did I come up with these numbers? I bought a bunch of disks, measured the IOps several times, used several brands and calculated the average… well sort of. I looked it up on the internet and took 5 articles and calculated the average and rounded the outcome.

Many have asked where these numbers came from. Like I said, it's an average of theoretical numbers. In the comments there's a link to a ZDNet article which I used as one of the sources. ZDNet explains what the theoretical maximum number of IOps for a disk is. In short: it is based on the average seek time plus half the time a single rotation takes. These two values added up give the time an average IO takes. There are 1000 milliseconds in every second, so divide 1000 by this value and you have a theoretical maximum number of IOps. Keep in mind, though, that this is based on random IO. With sequential IO these numbers will of course be different on a single drive.
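The calculation above can be sketched in a few lines. The average seek times below are illustrative assumptions, not measurements:

```python
# Theoretical max random IOPS for a spinning disk, as described above:
# average IO time = average seek time + half of one rotation.

def max_iops(avg_seek_ms: float, rpm: int) -> float:
    half_rotation_ms = (60_000 / rpm) / 2   # one full rotation in ms, halved
    avg_io_ms = avg_seek_ms + half_rotation_ms
    return 1000 / avg_io_ms                 # IOs that fit into 1000 ms

# A 15K disk with an assumed ~3.8 ms average seek:
print(round(max_iops(3.8, 15000)))  # 172, close to the 175 in the table
```

With a 7200 RPM disk and an assumed ~8.5 ms seek, the same formula lands near the table's 75.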

So what if I add these disks to a raid group:

For “read” IOps it’s simple, RAID Read IOps = Sum of all Single Disk IOps.

For “write” IOps it is slightly more complicated as there is a penalty introduced:

RAID type   IO penalty   IOPS (15K disk)
RAID-0      1            175
RAID-1      2            85
RAID-5      4            40
RAID-6      6            30
RAID-DP     2            85

So how do we factor this penalty in? It's simple: for RAID-5, for instance, every single write requires 4 backend IOs. That's the penalty introduced when selecting a specific RAID type. This also means that although you may think you have enough spindles in a single RAID set, you might not, due to the introduced penalty and the ratio of writes to reads.

I found a formula and tweaked it a bit so that it fits our needs:

(TOTAL IOps × %READ) + ((TOTAL IOps × %WRITE) × RAID penalty)

So for RAID-5 and for instance a VM which produces 1000 IOps and has 40% reads and 60% writes:

(1000 × 0.4) + ((1000 × 0.6) × 4) = 400 + 2400 = 2800 IOs

The 1000 IOps this VM produces actually result in 2800 IOs on the backend of the array. Makes you think, doesn't it?
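The formula and the worked example translate directly into code; the spindle count at the end is a rough illustration using the 15K figure from the table:

```python
# Backend IOs generated on the array for a given frontend load, using the
# formula above: reads pass through, writes are multiplied by the RAID
# write penalty.

def backend_iops(total_iops: float, read_pct: float, write_pct: float,
                 penalty: int) -> float:
    return (total_iops * read_pct) + (total_iops * write_pct) * penalty

# The RAID-5 example from the text: 1000 IOps, 40% read, 60% write:
print(backend_iops(1000, 0.4, 0.6, 4))  # 2800.0
# Dividing by per-disk IOPS gives a rough spindle count: 2800 / 175 = 16 disks.
```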

1000 IOPS ~ 5 MB/s (± 2 MB/s)

2000 IOPS ~ 10 MB/s (± 2 MB/s)

etc.
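The rough IOPS-to-throughput conversion above depends entirely on the average IO size; the "1000 IOPS ~ 5 MB/s" rule of thumb implies small random IOs of roughly 4-5 KB. A sketch:

```python
# Throughput = IOPS x IO size. The IO size assumption dominates the result.

def throughput_mb_s(iops: int, io_size_kb: float) -> float:
    return iops * io_size_kb / 1024

print(throughput_mb_s(1000, 4))   # ~3.9 MB/s with 4 KB IOs
print(throughput_mb_s(1000, 64))  # 62.5 MB/s with 64 KB IOs
```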

Whether it's HDDs, SSDs, or mixed (RAM & SSD & HDD) doesn't matter.

---

Well, 50 max IOPS ~ 50 KB/s :) That will be extremely slow for clients.

---

Set-VMHardDiskDrive -VMName $vmName -MaximumIOPS 50

Specifies the maximum normalized I/O operations per second (IOPS) for the hard disk. Hyper-V calculates normalized IOPS as the total size of I/O per second divided by 8 KB.
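The 8 KB normalization described above means large IOs consume more of the configured budget than their count suggests. A sketch of the accounting, assuming (as Microsoft's description implies) that IOs of 8 KB or less count as one normalized IO and larger IOs are divided into 8 KB units:

```python
import math

# Hyper-V-style normalized IO accounting: an IO of 8 KB or less counts as
# one normalized IO; a larger IO counts as its size divided into 8 KB units.

def normalized_ios(io_size_kb: float) -> int:
    return max(1, math.ceil(io_size_kb / 8))

print(normalized_ios(4))   # 1 -- small IOs still count as one unit
print(normalized_ios(64))  # 8 -- one 64 KB IO uses 8 of a 50-IOPS budget
```

So with `-MaximumIOPS 50`, a VM doing 64 KB IOs is throttled to roughly 6 actual IOs per second.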

---

Get-VMNetworkAdapter -ComputerName PROLIANTDL380 -VMName * | Select VMName, VmqWeight
