Tuning System Performance

Installing and enabling tuned


A minimal Red Hat Enterprise Linux 8 installation includes and enables the tuned package by
default. To install and enable the package manually:

[root@host ~]# yum install tuned

[root@host ~]# systemctl enable --now tuned

Selecting a Tuning Profile


The Tuned application provides profiles divided into the following categories:
• Power-saving profiles
• Performance-boosting profiles
The performance-boosting profiles include profiles that focus on the following aspects:
• Low latency for storage and network
• High throughput for storage and network
• Virtual machine performance
• Virtualization host performance
Table 3.1. Tuning Profiles Distributed with Red Hat Enterprise Linux 8

Tuned Profile            Purpose
balanced                 Ideal for systems that require a compromise between power saving and performance.
desktop                  Derived from the balanced profile. Provides faster response of interactive applications.
throughput-performance   Tunes the system for maximum throughput.
latency-performance      Ideal for server systems that require low latency at the expense of power consumption.
network-latency          Derived from the latency-performance profile. It enables additional network tuning parameters to provide low network latency.
network-throughput       Derived from the throughput-performance profile. Additional network tuning parameters are applied for maximum network throughput.
powersave                Tunes the system for maximum power saving.
oracle                   Optimized for Oracle database loads based on the throughput-performance profile.
virtual-guest            Tunes the system for maximum performance if it runs on a virtual machine.
virtual-host             Tunes the system for maximum performance if it acts as a host for virtual machines.

The tuned-adm command is used to change settings of the tuned daemon. The tuned-adm
command can query current settings, list available profiles, recommend a tuning profile for
the system, change profiles directly, or turn off tuning.
A system administrator identifies the currently active tuning profile with tuned-adm active.

[root@host ~]# tuned-adm active


Current active profile: virtual-guest

The tuned-adm list command lists all available tuning profiles, including both built-in
profiles and custom tuning profiles created by a system administrator.

[root@host ~]# tuned-adm list
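
The listing can also be filtered to check quickly whether a particular profile is available before switching to it (a sketch; the profile name is only an example):

[root@host ~]# tuned-adm list | grep throughput-performance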

Use tuned-adm profile profilename to switch the active profile to a different one that
better matches the system's current tuning requirements.

[root@host ~]# tuned-adm profile throughput-performance

[root@host ~]# tuned-adm active
Current active profile: throughput-performance

The tuned-adm command can recommend a tuning profile for the system. This mechanism is
used to determine the default profile of a system after installation.
[root@host ~]# tuned-adm recommend
virtual-guest

To revert the setting changes made by the current profile, either switch to another profile or
deactivate the tuned daemon. Turn off tuned tuning activity with tuned-adm off.

[root@host ~]# tuned-adm off

[root@host ~]# tuned-adm active
No current active profile.
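
To stop the daemon itself rather than only its tuning activity, the same systemctl mechanism used to enable it can be reversed (a sketch of the deactivation mentioned above):

[root@host ~]# systemctl disable --now tuned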

Verify that the tuned package is installed, enabled, and started.


Use yum to confirm that the tuned package is installed.

[student@servera ~]$ yum list tuned


...output omitted...
Installed Packages
tuned.noarch                2.10.0-15.el8                @anaconda
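
An equivalent quick check is to query the RPM database directly (a sketch; the package name and version match the output above):

[student@servera ~]$ rpm -q tuned
tuned-2.10.0-15.el8.noarch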

The systemctl is-enabled tuned; systemctl is-active tuned command displays its enablement and run state.

[student@servera ~]$ systemctl is-enabled tuned; systemctl is-active tuned
enabled
active

List the available tuning profiles and identify the active profile. If sudo prompts for a
password, enter student after the prompt.

[student@servera ~]$ sudo tuned-adm list

Change the current active tuning profile to powersave, then confirm the results. If sudo
prompts for a password, enter student after the prompt.
Change the current active tuning profile.
[student@servera ~]$ sudo tuned-adm profile powersave

Confirm that powersave is the active tuning profile.

[student@servera ~]$ sudo tuned-adm active


Current active profile: powersave

Influencing Process Scheduling


Relative Priorities
The relative priority of a process is called its nice value. Linux organizes nice values into 40 different levels of niceness that can be assigned to any process.

The nice level values range from -20 (highest priority) to 19 (lowest priority). By default,
processes inherit their nice level from their parent, which is usually 0. Higher nice levels
indicate less priority (the process easily gives up its CPU usage), while lower nice levels
indicate a higher priority (the process is less inclined to give up the CPU). If there is no
contention for resources, for example, when there are fewer active processes than available
CPU cores, even processes with a high nice level will still use all available CPU resources
they can. However, when there are more processes requesting CPU time than available
cores, the processes with a higher nice level will receive less CPU time than those with a
lower nice level.
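
A minimal sketch of this behavior pins two CPU-bound jobs to the same core so that they must compete, then compares their CPU share (taskset and sha1sum are used here only as an example workload):

[user@host ~]$ taskset -c 0 nice -n 0 sha1sum /dev/zero &
[user@host ~]$ taskset -c 0 nice -n 10 sha1sum /dev/zero &
[user@host ~]$ ps -o pid,ni,pcpu,comm -C sha1sum
[user@host ~]$ pkill sha1sum

After a few seconds, the job started with nice level 0 should show a noticeably higher %CPU than the job started with nice level 10.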

Displaying Nice Levels with Top

Use the top command to interactively view and manage processes. The default configuration displays two columns of interest for nice levels and priorities. The NI column displays the process nice value and the PR column displays its scheduled priority. In the top interface, the nice level maps to an internal system priority queue: for processes using the normal scheduling policy, the PR value is the nice level plus 20. For example, a nice level of -20 maps to 0 in the PR column, and a nice level of 19 maps to a priority of 39 in the PR column.
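
A quick way to see this mapping without the interactive interface is to run top in batch mode against a single low-priority process (a sketch; sleep is only a placeholder workload):

[user@host ~]$ nice -n 19 sleep 600 &
[user@host ~]$ top -b -n 1 -p "$!" | grep "$!"
[user@host ~]$ kill "$!"

The PR column in the matched line should read 39 for the nice level of 19.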

The nice command can be used by all users to start commands with a default or higher nice
level. Without options, the nice command starts a process with the default nice value of 10.
The following example starts the sha1sum command as a background job with the default
nice level and displays the process's nice level:

[user@host ~]$ nice sha1sum /dev/zero &


[1] 3517
[user@host ~]$ ps -o pid,comm,nice 3517
PID COMMAND NI
3517 sha1sum 10

Use the -n option to apply a user-defined nice level to the starting process:

[user@host ~]$ nice -n 15 sha1sum &

[1] 3521

[user@host ~]$ ps -o pid,comm,nice 3521

PID COMMAND NI

3521 sha1sum 15

Changing the Nice Level of an Existing Process


The nice level of an existing process can be changed using the renice command. This
example uses the PID identifier from the previous example to change from the current nice
level of 15 to the desired nice level of 19.

[user@host ~]$ renice -n 19 3521


3521 (process ID) old priority 15, new priority 19

The top command can also be used to change the nice level of an existing process. From within the top interactive interface, press r, then enter the PID to be changed and the new nice level.

Determine the number of CPU cores on servera and then start two instances of the sha1sum
/dev/zero & command for each core.
Use grep to parse the number of existing virtual processors (CPU cores) from the
/proc/cpuinfo file.

[student@servera ~]$ grep -c '^processor' /proc/cpuinfo


Use a looping command to start multiple instances of the sha1sum /dev/zero & command.
Start two per virtual processor found in the previous step. In this example, that would be four
instances. The PID values in your output will vary from the example.

[student@servera ~]$ for i in $(seq 1 4); do


sha1sum /dev/zero &
done
[1] 2643
[2] 2644
[3] 2645
[4] 2646
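
The instance count can also be derived from the processor count instead of being hard-coded (a sketch reusing the grep command from the previous step):

[student@servera ~]$ cores=$(grep -c '^processor' /proc/cpuinfo)
[student@servera ~]$ for i in $(seq 1 $((2 * cores))); do
sha1sum /dev/zero &
done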

Verify that the background jobs are running for each of the sha1sum processes.

[student@servera ~]$ jobs


[1] Running sha1sum /dev/zero &
[2] Running sha1sum /dev/zero &
[3]- Running sha1sum /dev/zero &
[4]+ Running sha1sum /dev/zero &

Use the ps and pgrep commands to display the percentage of CPU usage for each sha1sum
process.

[student@servera ~]$ ps u $(pgrep sha1sum)

Terminate all sha1sum processes, then verify that there are no running jobs.
Use the pkill command to terminate all running processes with the name pattern sha1sum.

[student@servera ~]$ pkill sha1sum

Verify that there are no running jobs.

[student@servera ~]$ jobs


Start multiple instances of sha1sum /dev/zero &, then start one additional instance of
sha1sum /dev/zero & with a nice level of 10. Start at least as many instances as the system
has virtual processors. In this example, 3 regular instances are started, plus another with the
higher nice level.
Use looping to start three instances of sha1sum /dev/zero &.

[student@servera ~]$ for i in $(seq 1 3); do


sha1sum /dev/zero &
done

[1] 1947
[2] 1948
[3] 1949

Use the nice command to start the fourth instance with a nice level of 10.

[student@servera ~]$ nice -n 10 sha1sum /dev/zero &


[4] 1953

Use the ps and pgrep commands to display the PID, percentage of CPU usage, nice value,
and executable name for each process. The instance with the nice value of 10 should display
a lower percentage of CPU usage than the other instances.

[student@servera ~]$ ps -o pid,pcpu,nice,comm $(pgrep sha1sum)

Use the sudo renice command to lower the nice level of a process from the previous step.
Note the PID value from the process instance with the nice level of 10. Use that process PID
to lower its nice level to 5.

[student@servera ~]$ sudo renice -n 5 1953
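
Repeating the earlier ps invocation confirms the change (a sketch reusing the example PID from above):

[student@servera ~]$ ps -o pid,pcpu,nice,comm 1953

The NI column for PID 1953 should now read 5.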

Use the tuned-adm profile_info command to confirm that the active profile is the balanced
profile.

[student@serverb ~]$ sudo tuned-adm profile_info


Two processes on serverb are consuming a high percentage of CPU usage. Adjust each
process's nice level to 10 to allow more CPU time for other processes.

Determine the top two CPU consumers on serverb. The top CPU consumers are listed last
in the command output. CPU percentage values will vary.

[student@serverb ~]$ ps aux --sort=pcpu

Identify the current nice level for each of the top two CPU consumers.

[student@serverb ~]$ ps -o pid,pcpu,nice,comm $(pgrep sha1sum;pgrep md5sum)

Use the sudo renice -n 10 2967 2983 command to adjust the nice level for each process to
10. Use PID values identified in the previous command output.

[student@serverb ~]$ sudo renice -n 10 2967 2983

Verify that the current nice level for each process is 10.

[student@serverb ~]$ ps -o pid,pcpu,nice,comm $(pgrep sha1sum;pgrep md5sum)
