Hyper-V Bandwidth Limit
That is interesting, as other ZDNet articles give different figures based on different disks, etc., as in the response to a query below:
The basics are simple: RAID introduces a write penalty. The question, of course, is how many IOPS you need per volume and how many disks the volume should contain to meet those requirements. First, the disk types and their IOPS. Keep in mind I’ve tried to keep the values on the safe side:
So how did I come up with these numbers? I bought a bunch of disks, measured the IOps several times, used several brands and calculated the average… well sort of. I looked it up on the internet and took 5 articles and calculated the average and rounded the outcome.
Many asked where these numbers came from. Like I said, it’s an average of theoretical numbers. In the comments there’s a link to a ZDNet article which I used as one of the sources. ZDNet explains what the theoretical maximum number of IOPS is for a disk. In short, it is based on the “average seek time” plus half the time a single rotation takes. These two values added together give the time an average IO takes. There are 1000 milliseconds in a second, so divide 1000 by this value and you have a theoretical maximum number of IOPS. Keep in mind, though, that this is based on “random” IO. With sequential IO these numbers will of course be different on a single drive.
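As a quick sanity check of that method, here is a small sketch; the 3.7 ms average seek time is an assumed, typical 15k RPM value, not a figure from the article:

```python
# Theoretical max random IOPS for a single disk, per the method above.
rpm = 15000
avg_seek_ms = 3.7                        # assumed average seek time (ms)

half_rotation_ms = (60_000 / rpm) / 2    # half of one rotation, in ms
avg_io_ms = avg_seek_ms + half_rotation_ms
iops = 1000 / avg_io_ms                  # 1000 ms per second / ms per IO

print(round(half_rotation_ms, 1))        # 2.0 (ms)
print(round(iops))                       # 175 (IOPS)
```

With these assumed inputs the result lands on the 175 IOPS used for a 15k disk below.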
So what if I add these disks to a RAID group?
For “read” IOPS it’s simple: RAID read IOPS = sum of all single-disk IOPS.
For “write” IOPS it is slightly more complicated, as a penalty is introduced:
RAID type   IO penalty   IOPS (15k disk)
RAID-0      0            175
RAID-1      2            85
RAID-5      4            40
RAID-6      6            30
RAID-DP     2            85
So how do we factor this penalty in? Well, it’s simple: for RAID-5, for instance, every single write requires 4 IOs. That’s the penalty introduced when selecting a specific RAID type. This also means that although you may think you have enough spindles in a single RAID set, you might not, due to the introduced penalty and the ratio of writes to reads.
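To see how the penalty maps to the per-disk write figures in the table, here is a small sketch; treating RAID-0 as a penalty of 1 (i.e. no penalty) is my assumption, and the table above rounds more conservatively than plain division does:

```python
# Effective random-write IOPS per disk once the RAID write penalty is applied.
single_disk_iops = 175   # the 15k-disk figure from the table above
write_penalty = {"RAID-0": 1, "RAID-1": 2, "RAID-5": 4, "RAID-6": 6, "RAID-DP": 2}

for raid, penalty in write_penalty.items():
    # Each front-end write costs `penalty` back-end IOs.
    print(f"{raid}: ~{single_disk_iops / penalty:.0f} write IOPS per disk")
```

The results (175, ~88, ~44, ~29, ~88) approximately match the table; the author kept the published values on the safe side.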
I found a formula and tweaked it a bit so that it fits our needs:
(TOTAL IOPS × %READ) + ((TOTAL IOPS × %WRITE) × RAID penalty)
So for RAID-5 and, for instance, a VM which produces 1000 IOPS with 40% reads and 60% writes:
(1000 × 0.4) + ((1000 × 0.6) × 4) = 400 + 2400 = 2800 IOs
The 1000 IOPS this VM produces actually result in 2800 IOs on the back end of the array. That makes you think, doesn’t it?
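The formula and the worked example above can be sketched as:

```python
def backend_iops(total_iops, read_frac, write_frac, raid_penalty):
    """Front-end IOPS -> back-end IOs, per the formula above."""
    return total_iops * read_frac + (total_iops * write_frac) * raid_penalty

# The worked example: 1000 IOPS, 40% reads, 60% writes, RAID-5 (penalty 4)
print(backend_iops(1000, 0.4, 0.6, 4))   # 2800.0
```

For an all-read workload the penalty drops out and the back end sees the same 1000 IOs.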
We got those numbers from Windows Resource Monitor -> Disk (Read/Write); it only shows B/s.
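Since Resource Monitor reports only bytes per second, converting to IOPS requires an assumed average IO size. A minimal sketch, where the 64 KiB default is purely an assumption; measure your workload’s real average transfer size (e.g. the perfmon counter “Avg. Disk Bytes/Transfer”):

```python
def bytes_per_sec_to_iops(bps, avg_io_size_bytes=64 * 1024):
    # avg_io_size_bytes is an assumption, not a measured value; the real
    # average transfer size of your workload can differ by an order of magnitude.
    return bps / avg_io_size_bytes

print(bytes_per_sec_to_iops(6_553_600))  # 100.0 with the assumed 64 KiB IO size
```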
I would say Hyper-V IOPS != standard IOPS.
Not exactly sure if SolidCP is the right place to hold the IOPS discussion, but my 2 cents:
In my experience IOPS depends a lot on your hardware, even the type of SSD you buy (consumer, DC models, etc.); then there are RAID levels, RAID cards, cache, and so on.
Unless you're testing single disks of a specific generation and type, I don't think you can average out IOPS in general...
But for all intents and purposes: just set what you think is best for your hardware and customers. Make sure you test it; do a few overload tests (multiple VMs at full load, checking the node as a whole, etc.) to determine what is best for your hardware and environment 🙂
At my own company we test this for each new server build (diff hardware = new tests).
Here is a PowerShell script that tells you what your VMs are doing. You can set the sample rate and the time to process, and it will give you an idea of what is really being used on the VMs (bandwidth, etc.). This will give you a good idea of which IOPS are being used over time; I would assume a longer sample period and a larger number of samples will produce better averages.