
t/one-core.sh: Adding script to run the one-core io benchmark #1272

Merged: 1 commit into axboe:master on Sep 21, 2021

Conversation

ErwanAliasr1 (Contributor)

Associated with fio, the t/io_uring test is used to compute the max IOPS a
single core can achieve.

Jens has published the procedure he uses several times, but trying to
reproduce that setup is error-prone: it's easy to miss a setting and get
a different result.

This script provides a common setup to make these runs reproducible.

From the fio directory, execute it as follows:

[user@fio] t/one-core.sh /dev/nvme0n1 [other drives]
system: Checking configuration
Warning: For better performance, you should enable nvme poll queues by setting nvme.poll_queues=32 on the kernel command line
nvme0n1: set none as io scheduler
nvme0n1: iostats set to 1. Disabling.
nvme0n1: nomerge set to 0. Disabling.
system: End of system checks
io_uring: Running taskset -c 0,12 t/io_uring -b512 -d128 -c32 -s32 -p1 -F1 -B1 -n4 /dev/nvme0n1
    [...]
IOPS=731008, BWPS=356 MB IOS/call=32/31, inflight=(108 127 126 106)

This script will take care of the following items:

  • nvme poll queues
  • io scheduler
  • iostats
  • nomerge
  • finding the logical cores running on the first physical core
  • calling t/io_uring with the proper parameters in 512-byte mode

Signed-off-by: Erwan Velu <erwanaliasr1@gmail.com>
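
The sysfs knobs listed above boil down to a handful of one-line writes. The following is a minimal sketch of that per-drive tuning, assuming a hypothetical /dev/nvme0n1; it is not the actual t/one-core.sh, which also handles multiple drives passed on the command line:

    #!/bin/bash
    # Sketch of the per-drive tuning described in the commit message.
    DEV=nvme0n1

    # Use the no-op io scheduler to avoid scheduling overhead.
    echo none > /sys/block/$DEV/queue/scheduler

    # Disable iostats accounting on the device.
    echo 0 > /sys/block/$DEV/queue/iostats

    # Disable request merging (2 = never attempt merges).
    echo 2 > /sys/block/$DEV/queue/nomerges

    # Poll queues are allocated by the nvme driver at load time; a script
    # can only warn if none were reserved via nvme.poll_queues= on boot.
    cat /sys/module/nvme/parameters/poll_queues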

@ByteHamster (Contributor)

Thanks, I think that will help people who also try to reproduce the measurements. Some thoughts:

  • How about also setting /sys/block/nvmeX/queue/io_poll to 1? I don't remember if that got set to 1 because of the boot parameter or if I did that manually.
  • How about setting the CPU frequency governor to performance? echo "performance" | tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor
  • When iterating over the CPUs, how about also making sure that they are available in /sys/devices/system/node/nodeX/cpuY, where X=/sys/block/nvmeZ/device/numa_node (see the sketch after this list)? Unfortunately, I can't test this currently because the machine is booted with NUMA disabled.
  • How about naming the file something like t/io_uring_one_core.sh? That makes it clearer that it belongs to t/io_uring, but could also be annoying because it will always show up in tab completion.
  • Maybe a little note could help, too? "You are running kernel uname -r. Make sure to use the latest version for best results." It won't directly set anything, but it could sensitize users of the script.
  • Not really clean, but lspci -vv | grep -P "[0-9a-f]{2}:[0-9a-f]{2}\.[0-9a-f]|LnkSta:|LnkCap:" | grep -A 2 -i -P "(nvme|nvm express)" would display the used and maximum PCIe link width and speed of each SSD, again to sensitize users.
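
For the NUMA and link-status points, a rough sketch of what such checks could look like, using the paths from the suggestions above (DEV is an example name, not part of the PR):

    #!/bin/bash
    # Sketch of the suggested NUMA locality check: compare the drive's
    # node against the CPUs we intend to pin the benchmark to.
    DEV=nvme0n1
    node=$(cat /sys/block/$DEV/device/numa_node)

    # CPUs attached to that node; a numa_node of -1 means no NUMA info.
    cat /sys/devices/system/node/node${node}/cpulist

    # PCIe link status vs. capability of each SSD, as suggested above.
    lspci -vv | grep -P "[0-9a-f]{2}:[0-9a-f]{2}\.[0-9a-f]|LnkSta:|LnkCap:" \
        | grep -A 2 -i -P "(nvme|nvm express)"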

@sitsofe (Collaborator)

@ErwanAliasr1 Could you push this script through Shellcheck and see what you think of its comments?


@ErwanAliasr1 (Contributor Author)

> @ErwanAliasr1 Could you push this script through Shellcheck and see what you think of its comments?

Fixed my IDE setup; only minor stuff. I'll push an update.

@ErwanAliasr1 (Contributor Author)

Just pushed a new version that sets the proper CPU scaling and idle governors and sets io_poll to 1
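
As a rough sketch, assuming the standard sysfs paths, those three changes could look like the following (not necessarily the script's exact commands):

    #!/bin/bash
    # Sketch of the governor and polling setup mentioned above.

    # CPU frequency governor: performance, on every core.
    echo performance | tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor

    # cpuidle governor: menu.
    echo menu > /sys/devices/system/cpu/cpuidle/current_governor

    # Enable polled IO on the namespace (requires nvme poll queues).
    echo 1 > /sys/block/nvme0n1/queue/io_poll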

@ErwanAliasr1 (Contributor Author)

@ByteHamster Just pushed a version that prints the nvme config like:
nvme0n1: MODEL=KCD6XVUL3T20 FW=GPK1 serial=11E0A016TX1R PCI=0000:44:00.0@16.0 GT/s PCIe IRQ=63 NUMA=4 CPUS=16-19,48-51

Hope you'll like it
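
Every field in that summary line is available from sysfs; a hedged sketch of where it can be sourced (not necessarily how the script assembles it, and DEV is an example):

    #!/bin/bash
    # Sketch: gather the per-drive summary fields from sysfs.
    DEV=nvme0n1
    ctrl=/sys/block/$DEV/device   # nvme controller (e.g. nvme0)
    pci=$ctrl/device              # underlying PCI device

    echo "$DEV: MODEL=$(cat $ctrl/model) FW=$(cat $ctrl/firmware_rev)" \
         "serial=$(cat $ctrl/serial)" \
         "PCI=$(basename $(readlink -f $pci))@$(cat $pci/current_link_speed)" \
         "IRQ=$(cat $pci/irq) NUMA=$(cat $pci/numa_node)" \
         "CPUS=$(cat $pci/local_cpulist)"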

Associated with fio, the t/io_uring test is used to compute the max IOPS a
single core can achieve.

Jens has published the procedure he uses several times, but trying to
reproduce that setup is error-prone: it's easy to miss a setting and get
a different result.

This script provides a common setup to make these runs reproducible.

From the fio directory, execute it as follows:

	[user@fio] t/one-core.sh /dev/nvme0n1 [other drives]
        ##################################################:
	system: CPU: AMD EPYC 7502P 32-Core Processor
	system: MEMORY: 2933 MT/s
	system: KERNEL: 5.10.35-1.el8.x86_64
	nvme0n1: MODEL=Samsung SSD 970 EVO Plus 2TB FW=2B2QEXM7 serial=S59CNM0R417706B PCI=0000:01:00.0@8.0 GT/s PCIe IRQ=64 NUMA=0 CPUS=0-23
	nvme0n1: set none as io scheduler
	nvme0n1: iostats set to 1.
	nvme0n1: nomerge set to 0.
	Warning: For better performance, you should enable nvme poll queues by setting nvme.poll_queues=32 on the kernel command line
        ##################################################:

	io_uring: Running taskset -c 0,12 t/io_uring -b512 -d128 -c32 -s32 -p1 -F1 -B1 -n4 /dev/nvme0n1
        [...]
	IOPS=731008, BWPS=356 MB IOS/call=32/31, inflight=(108 127 126 106)

This script will take care of the following items:
- nvme poll queues
- io scheduler
- iostats
- io_poll
- nomerge
- finding the logical cores running on the first physical core
- cpu frequency governor on performance
- cpu idle governor on menu
- calling t/io_uring with the proper parameters in 512-byte mode
- reporting the nvme & pci configuration

Signed-off-by: Erwan Velu <erwanaliasr1@gmail.com>
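
The "first physical core" discovery that produces the taskset mask (0,12 in the example above) can be read straight from the CPU topology; a minimal sketch:

    #!/bin/bash
    # Sketch: the logical siblings of the first physical core, i.e. the
    # CPUs sharing core 0 (e.g. "0,12" on a 12-core/24-thread part).
    cpus=$(cat /sys/devices/system/cpu/cpu0/topology/thread_siblings_list)

    # Pin t/io_uring to those CPUs, parameters as in the example above.
    taskset -c "$cpus" t/io_uring -b512 -d128 -c32 -s32 -p1 -F1 -B1 -n4 /dev/nvme0n1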
@ErwanAliasr1 (Contributor Author)

Just added the processor, memory speed, and kernel version

@ErwanAliasr1 (Contributor Author)

@axboe unless you see other stuff to add here, I think this PR is ready

@axboe axboe merged commit d913730 into axboe:master Sep 21, 2021