Check Disk Speed

This guide tests the C: drive. To test a different drive, change the drive letter in the commands below.

Install DISKSPD

Open PowerShell as Administrator and run the following commands to download and extract the DISKSPD executable:

# Create the working folder if it doesn't already exist
New-Item -ItemType Directory -Force -Path C:\Temp | Out-Null

# Download the DISKSPD release archive and extract it
$client = New-Object System.Net.WebClient
$client.DownloadFile("https://github.com/microsoft/diskspd/releases/download/v2.0.21a/DiskSpd.zip","C:\Temp\DiskSpd-2.0.21a.zip")
Expand-Archive -LiteralPath C:\Temp\DiskSpd-2.0.21a.zip -DestinationPath C:\Temp\DISKSPD


Execute a test

  1. Open PowerShell as Administrator.
  2. Change to the C:\Temp\DISKSPD directory.
  3. Change to the directory for your CPU type (amd64 is for 64-bit systems).
  4. Run the tool with the parameters that fit your test, using the example below and the parameter table as a guide:
C:\Temp\DISKSPD\amd64\diskspd.exe [PARAMETERS] [PATH_FOR_TEST_FILE] > [OUTPUT_FILE.txt]

Parameters

Param   Use
------  ----------------------------------------------------------------------
-c50G   Creates a test file of 50 GB; a file this large exceeds the cache on a storage controller
-d120   Test duration in seconds
-r      Random read/write operations (to test sequential I/O, use -s)
-t2     Number of threads; this should equal the number of CPU cores
-w40    Ratio of write to read operations, 40% writes / 60% reads (0 would be 100% reads)
-o32    Number of outstanding I/O requests per target per thread, also known as the queue depth; in this case, 32 were used to stress the CPU
-b4k    Block size in bytes, KiB, MiB, or GiB; in this case, a 4K block size was used to simulate a random I/O test
-Suw    Disables software caching and hardware write caching
-L      Measures latency statistics
-D      Captures IOPS statistics, such as standard deviation, in intervals of milliseconds (per thread, per target)
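To make the parameter choices concrete, the table above can be turned into a command line programmatically. This is a minimal Python sketch; the defaults and paths mirror this guide's example and are otherwise just placeholders:

```python
# Assemble a DISKSPD command line from the parameter choices in the table above.
# The executable and file paths mirror this guide and are placeholders otherwise.
def build_diskspd_command(size="50G", duration=120, threads=2,
                          write_pct=40, queue_depth=32, block="4k"):
    params = [
        f"-c{size}",         # test file size (should exceed controller cache)
        f"-d{duration}",     # test duration in seconds
        "-r",                # random I/O (use -s for sequential)
        f"-t{threads}",      # threads; match the number of CPU cores
        f"-w{write_pct}",    # write percentage (0 = 100% reads)
        f"-o{queue_depth}",  # outstanding I/Os per target per thread
        f"-b{block}",        # block size
        "-Suw",              # disable software and hardware write caching
        "-L",                # measure latency statistics
        "-D",                # capture IOPS standard-deviation statistics
    ]
    exe = r"C:\Temp\DISKSPD\amd64\diskspd.exe"
    target = r"C:\Temp\DiskSpd\diskpsdtmp.dat"
    return " ".join([exe, *params, target])

print(build_diskspd_command())
```

Adjusting the keyword arguments (for example, `threads` to match your core count) regenerates the corresponding command line.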

To count how many cores your system has, open PowerShell as Administrator and run the following command (on newer Windows versions where the WMIC tool has been removed, Get-CimInstance Win32_Processor returns the same information):

WMIC CPU Get DeviceID,NumberOfCores,NumberOfLogicalProcessors

Example command for a system with 2 CPU cores:

C:\Temp\DISKSPD\amd64\diskspd.exe -c50G -d120 -r -t2 -w40 -o32 -b4k -Suw -L -D C:\Temp\DiskSpd\diskpsdtmp.dat > C:\Temp\DiskSpd\TestResults.txt


Read the results

Input parameters

This section describes the command you ran, the input parameters, and additional details about the test run.

Command Line: C:\Temp\DISKSPD\amd64\diskspd.exe -c50G -d300 -r -t2 -w40 -o32 -b4k -Suw -L -D C:\Temp\DiskSpd\diskpsdtmp.dat
Input parameters:
timespan:   1
duration: 300s
warm up time: 5s
cool down time: 0s
measuring latency
gathering IOPS at intervals of 1000ms
random seed: 0
path: 'C:\Temp\DiskSpd\diskpsdtmp.dat'
think time: 0ms
burst size: 0
software cache disabled
hardware write cache disabled, writethrough on
performing mix test (read/write ratio: 60/40)
block size: 4096
using random I/O (alignment: 4096)
number of outstanding I/O operations: 32
thread stride size: 0
threads per file: 2
using I/O Completion Ports
IO priority: normal

CPU utilization

This section highlights information such as the test time, number of threads, number of available processors, and the average utilization of every CPU core during the test. In this case, there are two CPU cores that averaged around 19.25% usage.

System information:
computer name: 3025531-ILJNOGX
start time: 2021/03/20 14:13:57 UTC

Results for timespan 1:
*******************************************************************************
actual test time:        300.00s
thread count:                2
proc count:                2
CPU |  Usage |  User  |  Kernel |  Idle
-------------------------------------------
   0|  21.05%|   3.77%|   17.28%|  78.95%
   1|  17.44%|   4.10%|   13.34%|  82.56%
-------------------------------------------
avg.|  19.25%|   3.94%|   15.31%|  80.75%
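As a sanity check, the reported average can be recomputed from the per-core figures with a few lines of Python (the usage values are copied from the output above):

```python
# Cross-check the average CPU usage reported by DISKSPD.
# Per-core Usage percentages copied from the output above.
core_usage = [21.05, 17.44]

avg_usage = sum(core_usage) / len(core_usage)
print(f"avg. usage: {avg_usage}%")  # ~19.245, reported as 19.25%
```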

I/O

This section has three subsections. The first highlights the overall performance data, including both read and write operations. The second and third split the read and write operations into separate categories.

In this example, you can see that the total I/O count was 183562 during the 300-second duration. Thus, IOPS = 183562 / 300 = 611.87. The average latency was 105.703 milliseconds, and the throughput was 2.39 MiB/s.

The last three columns show the standard deviation of IOPS at 188.38 (from -D parameter), the standard deviation of the latency at 209.933 milliseconds (from -L parameter), and the file path.
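The arithmetic above can be verified directly from the totals in the output. This is a short Python sketch with the figures copied from the "Total IO" line:

```python
# Verify IOPS and throughput from the DISKSPD totals.
# Figures copied from the "total:" line of the Total IO table.
total_bytes = 751869952   # total bytes transferred
total_ios = 183562        # total I/O operations
duration_s = 300          # actual test time in seconds

iops = total_ios / duration_s
throughput_mib_s = total_bytes / duration_s / (1024 * 1024)

print(f"IOPS: {iops:.2f}")                          # 611.87
print(f"Throughput: {throughput_mib_s:.2f} MiB/s")  # 2.39
```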

Total IO
thread |       bytes     |     I/Os     |    MiB/s   |  I/O per s |  AvgLat  | IopsStdDev | LatStdDev |  file
------------------------------------------------------------------------------------------------------------------
     0 |       372932608 |        91048 |       1.19 |     303.49 |  106.631 |      93.75 |   209.421 | C:\Temp\DiskSpd\diskpsdtmp.dat (50GiB)
     1 |       378937344 |        92514 |       1.20 |     308.38 |  104.790 |      95.16 |   210.431 | C:\Temp\DiskSpd\diskpsdtmp.dat (50GiB)
------------------------------------------------------------------------------------------------------------------
total:         751869952 |       183562 |       2.39 |     611.87 |  105.703 |     188.38 |   209.933
Read IO
thread |       bytes     |     I/Os     |    MiB/s   |  I/O per s |  AvgLat  | IopsStdDev | LatStdDev |  file
------------------------------------------------------------------------------------------------------------------
     0 |       223830016 |        54646 |       0.71 |     182.15 |   97.043 |      57.10 |   183.355 | C:\Temp\DiskSpd\diskpsdtmp.dat (50GiB)
     1 |       227659776 |        55581 |       0.72 |     185.27 |   95.409 |      58.07 |   192.262 | C:\Temp\DiskSpd\diskpsdtmp.dat (50GiB)
------------------------------------------------------------------------------------------------------------------
total:         451489792 |       110227 |       1.44 |     367.42 |   96.219 |     114.15 |   187.901
Write IO
thread |       bytes     |     I/Os     |    MiB/s   |  I/O per s |  AvgLat  | IopsStdDev | LatStdDev |  file
-----------------------------------------------------------------------------------------------------------------
     0 |       149102592 |        36402 |       0.47 |     121.34 |  121.023 |      38.35 |   242.654 | C:\Temp\DiskSpd\diskpsdtmp.dat (50GiB)
     1 |       151277568 |        36933 |       0.48 |     123.11 |  118.908 |      38.67 |   234.436 | C:\Temp\DiskSpd\diskpsdtmp.dat (50GiB)
------------------------------------------------------------------------------------------------------------------
total:         300380160 |        73335 |       0.95 |     244.45 |  119.958 |      75.89 |   238.553

Latency percentile analysis

This last section details the latency percentiles per operation type, from the minimum value to the maximum value.

This section is important because it determines the “quality” of your IOPS. It reveals how many of the I/O operations were able to achieve a certain latency value. It’s up to you to decide the acceptable latency for that percentile.

Moreover, the “nines” refer to the number of nines in the percentile. For example, “3-nines” is equivalent to the 99.9th percentile. The number of nines exposes how few I/O operations the latency value at that percentile is based on. Eventually, you’ll reach a point where it no longer makes sense to take the latency values seriously. In this case, you can see that the latency values remain nearly constant after “4-nines.” At that point, the latency value is based on only a handful of I/O operations out of the 183562 total operations.
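To see why the highest percentiles stop being meaningful, you can compute how many operations each "nines" level is based on. This is a short Python sketch using the total operation count from this run:

```python
# How many I/O operations lie beyond each "nines" percentile?
# Total operation count taken from the Total IO table above.
total_ios = 183562

for nines in range(3, 7):
    percentile = "99." + "9" * (nines - 2)   # 3-nines = 99.9th, 4-nines = 99.99th, ...
    ops_beyond = total_ios * 10 ** -nines    # operations slower than this percentile
    print(f"{nines}-nines ({percentile}th percentile): "
          f"~{ops_beyond:.1f} operations beyond it")
```

Once fewer than one operation lies beyond a percentile, the reported latency is simply the slowest single I/O, which is why the values flatten out toward the maximum.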

total:
  %-ile |  Read (ms) | Write (ms) | Total (ms)
----------------------------------------------
    min |      0.106 |      1.712 |      0.106
   25th |     63.165 |     93.500 |     93.230
   50th |     94.454 |    107.310 |     94.667
   75th |    110.149 |    128.357 |    110.401
   90th |    154.995 |    155.108 |    155.068
   95th |    155.865 |    155.629 |    155.788
   99th |    157.109 |    204.520 |    161.125
3-nines |    339.325 |   3953.982 |   2610.925
4-nines |   9250.069 |   9392.223 |   9343.528
5-nines |  10640.059 |  10844.819 |  10843.920
6-nines |  10642.105 |  10844.819 |  10844.819
7-nines |  10642.105 |  10844.819 |  10844.819
8-nines |  10642.105 |  10844.819 |  10844.819
9-nines |  10642.105 |  10844.819 |  10844.819
    max |  10642.105 |  10844.819 |  10844.819
Last modified July 21, 2024: update (e2ae86c)