This tutorial explains how the deadline scheduler works, what its tunable parameters do, and how they can be adjusted or disabled on your system.
To set the stage: why would you want to disable front merges? When a new request arrives, the deadline scheduler checks whether it is contiguous on disk with a request already in the queue and, if so, merges the two into one larger request to reduce seeks. A back merge happens when the new request starts where an existing one ends; a front merge happens when the new request ends where an existing one begins. Sequential workloads, such as reading large files from disk, almost always produce back merges, so the extra lookup for front-merge candidates is usually wasted CPU work. If you know your workload will rarely (or never) generate front merges, you can turn the lookup off and save those cycles.
So how do you disable them? Easy! `front_merges` is a per-device sysfs attribute of the scheduler, not a kernel command-line option or a sysctl, so you simply write 0 to it (as root; replace sda with your device):
# Disable front-merge lookups for /dev/sda.
# The iosched directory only exists while deadline (or mq-deadline) is the
# active scheduler on this device.
echo 0 > /sys/block/sda/queue/iosched/front_merges
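A quick sanity check never hurts: confirm that deadline (or mq-deadline on recent kernels) is actually the active scheduler for the device, then read the attribute back. The device name sda is just an example.
# List the available schedulers; the active one is shown in square brackets.
cat /sys/block/sda/queue/scheduler
# Confirm the new setting took effect (should print 0).
cat /sys/block/sda/queue/iosched/front_merges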
One catch: sysfs settings do not survive a reboot, and this is not a sysctl, so a file under /etc/sysctl.d won't help. A common way to make the change persistent is a udev rule that rewrites the attribute whenever the device appears (the file name below is just an example):
# /etc/udev/rules.d/60-front-merges.rules
# Write 0 to front_merges whenever sda is added or changed. Assumes the
# deadline (or mq-deadline) scheduler is already active on the device.
ACTION=="add|change", KERNEL=="sda", ATTR{queue/iosched/front_merges}="0"
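To apply the rule without rebooting, you can ask udev to reload its rules and replay a change event for block devices (standard udevadm subcommands, run as root):
# Reload udev rules and re-trigger "change" events for block devices
# so the new rule takes effect immediately.
udevadm control --reload-rules
udevadm trigger --subsystem-match=block --action=change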
That’s it! Your deadline scheduler will no longer spend time looking for front-merge candidates when new requests arrive. Keep expectations modest, though: this saves a little CPU per request rather than delivering a dramatic latency win, and back merges (the common case for sequential reads) still happen exactly as before.
The deadline scheduler also has other tunable parameters that you might want to tweak depending on your needs. They live in the same per-device directory as `front_merges` (/sys/block/<device>/queue/iosched/); see the example after the list:
– `read_expire` (in ms): The soft deadline for reads, i.e. the maximum time a read request should sit in the queue before the scheduler prioritizes dispatching it. The default is 500ms. If your workload is latency-sensitive on the read side, you can lower it, for example to 100ms, at some cost in overall throughput. Note that this is a target the scheduler tries to honor, not a hard real-time guarantee.
– `write_expire` (in ms): The same idea for writes: how long a write request may sit in the queue before it is considered expired and prioritized for dispatch. The default is 5000ms (5 seconds), which reflects the scheduler's usual bias toward reads and is fine for most workloads. If write latency matters to you (a busy database, say), you can lower it, for example to 1000ms.
– `fifo_batch`: The maximum number of requests dispatched in a single batch once the scheduler has picked a direction (reads or writes) to service. The default is 16. Larger batches favor throughput, because the disk streams more contiguous work between direction switches, at the cost of latency; smaller batches tighten latency at the cost of throughput. A throughput-oriented workload might raise it to 32 or 64, while a latency-sensitive one might lower it.
– `writes_starved`: How many read batches may be dispatched in a row before the scheduler is forced to service some writes. The default is 2, which gives reads a moderate preference. Raise it to favor reads even more aggressively, or lower it to 1 if write latency matters as much as read latency; values below 1 are not meaningful.
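Here is a small shell sketch that dumps the current values for one device and then applies a latency-leaning profile. The device name sda and the chosen numbers are illustrative, not recommendations; run it as root.
# Print the current deadline tunables for sda.
for f in read_expire write_expire fifo_batch writes_starved front_merges; do
    printf '%s = %s\n' "$f" "$(cat /sys/block/sda/queue/iosched/$f)"
done
# Example latency-leaning profile: shorter deadlines, smaller batches.
echo 100  > /sys/block/sda/queue/iosched/read_expire
echo 1000 > /sys/block/sda/queue/iosched/write_expire
echo 8    > /sys/block/sda/queue/iosched/fifo_batch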
The Deadline I/O Scheduler is a powerful tool for improving performance and reducing latency in your Linux system, but it’s not always the best choice for every workload. By disabling front merges and tweaking its tunable parameters, you can optimize your system for specific use cases and achieve better results.
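And if you decide it isn't the right fit for a given device, switching schedulers is the same kind of sysfs write (again, sda is only an example, and the target scheduler must be built into or loaded by your kernel):
# The active scheduler is shown in square brackets.
cat /sys/block/sda/queue/scheduler
# Switch to another available scheduler, e.g. bfq or none.
echo bfq > /sys/block/sda/queue/scheduler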
Now go out there and experiment with different settings! And if you have any questions or feedback, feel free to leave a comment below.