Wednesday, September 8, 2010
controlling cpu usage part 9: Setting capped-cpu for a Zone
Introduced in Solaris 10 05/08, CPU caps allow a fine-grained division of CPU resources. The administrator can allocate CPU resources in 1% increments of a single CPU, from 1% of a single CPU up to the total number of CPUs in the system.
The capped-cpu resource type has a single property, ncpus, which holds the CPU allocation for the zone. It is expressed in units of a CPU, so 0.01 is 1% of a single CPU, 1 is one full CPU and 3.75 is three and three-quarter CPUs.
If there are multiple CPUs in the system the allocation can come from any CPU, so multi-threaded code can still run threads in parallel if the scheduler so allocates.
However, unlike pools, there is no dynamic balancing. When capped-cpu is in effect the CPU resources are statically divided: unused CPU cycles in one capped zone are not made available to other capped zones.
To set a zone to use 18% of a single CPU we would enter the following.
# zonecfg -z test0z1
zonecfg:test0z1> add capped-cpu
zonecfg:test0z1:capped-cpu> set ncpus=0.18
zonecfg:test0z1:capped-cpu> end
zonecfg:test0z1> exit
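zonecfg also accepts its subcommands as a single command-line argument, so the same cap can be set non-interactively. A sketch, assuming the zone test0z1 already exists:

```shell
# Non-interactive equivalent of the interactive session above
zonecfg -z test0z1 "add capped-cpu; set ncpus=0.18; end; commit"
```

This form is convenient in provisioning scripts where an interactive zonecfg session is not practical.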
One important point to remember is that a single-threaded process cannot utilize more than a single CPU irrespective of the value of the capped-cpu resource.
You can check the performance of a capped-cpu zone using the prstat -Z command.
The percentage (of the global CPU resource) used by each zone is shown in that zone's summary line.
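Note that prstat -Z reports CPU as a percentage of the whole machine, while ncpus is expressed in units of a single CPU, so the two must be converted to compare them. A quick sanity check of the arithmetic (the 2-CPU system is an assumption for illustration):

```shell
# global% = ncpus / total_cpus * 100
# ncpus is kept as hundredths of a CPU so plain integer math works
ncpus_hundredths=18   # corresponds to ncpus=0.18
total_cpus=2          # hypothetical 2-CPU system
echo "cap as seen by prstat -Z: $((ncpus_hundredths / total_cpus))%"
```

So a zone capped at 0.18 of a CPU on a 2-CPU machine should never show more than about 9% in the prstat -Z summary line.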
Labels:
Performance,
Tuning,
Zones
Monday, September 6, 2010
controlling cpu usage part 8: Adding Pools to a Zone
Zones are able to use the pool subsystem directly. When a zone is defined it can be associated with a named pool by setting the zone's pool property to the name of an existing pool.
# zonecfg -z zone set pool=pool_web
Multiple zones may share the same pool. In this case each zone should set the cpu-shares resource type to arbitrate the relative use of CPU between the zones in the pool.
# zonecfg -z test0z1 set cpu-shares=20
# zonecfg -z test0z2 set cpu-shares=30
Solaris 10 11/06 introduces the concept of anonymous pools. These are pools created by a zone when it boots, for the exclusive use of that zone, via the dedicated-cpu resource type. The dedicated-cpu resource type has two properties: ncpus, which gives the number of CPUs to put into the created pool (or a range of CPUs if a dynamic pool is desired), and importance, which sets the pool.importance property of the pool for use as a tie-breaker by poold.
# zonecfg -z test0z1
zonecfg:test0z1> add dedicated-cpu
zonecfg:test0z1:dedicated-cpu> set ncpus=1-3
zonecfg:test0z1:dedicated-cpu> set importance=10
zonecfg:test0z1:dedicated-cpu> end
zonecfg:test0z1> commit
zonecfg:test0z1> exit
Whenever the zone boots, the zoneadmd daemon will create a pool and assign the zone to that pool.
Note that setting the dedicated-cpu resource on a zone means that the pool cannot be shared between multiple zones.
The scheduling class for a zone can also be set explicitly through the scheduling-class property:
# zonecfg -z test0z1
zonecfg:test0z1> set scheduling-class=FSS
zonecfg:test0z1> commit
zonecfg:test0z1> exit
Note: The pools system must already be configured on the system before the dedicated-cpu resource type is used by the zone. If the pools system is not configured, any attempt to boot the zone will result in an error from zoneadm.
If there are not enough resources to create the pool, an attempt to boot results in a fatal error and the boot fails.
# zoneadm -z test0z1 boot
zoneadm: zone 'test0z1': libpool(3LIB) error: invalid configuration
zoneadm: zone 'test0z1': dedicated-cpu setting cannot be instantiated
zoneadm: zone 'test0z1': call to zoneadmd failed
When the zone is booted a temporary pool called SUNWtmp_zonename is created.
pool SUNWtmp_test0z1
int pool.sys_id 4
boolean pool.active true
boolean pool.default false
int pool.importance 10
string pool.comment
boolean pool.temporary true
pset SUNWtmp_test0z1
pset SUNWtmp_test0z1
int pset.sys_id 1
boolean pset.default false
uint pset.min 1
uint pset.max 3
string pset.units population
uint pset.load 1991
uint pset.size 1
string pset.comment
boolean pset.temporary true
cpu
int cpu.sys_id 0
string cpu.comment
string cpu.status on-line
The dedicated-cpu resource type creates a pool for the exclusive use of this zone; the zone has exclusive access to the CPUs in that pool. For that reason the cpu-shares resource type has no meaning when a dedicated-cpu resource type is also defined: the zone always holds 100% of the shares in the processor set, and so always has the entire processor set to itself irrespective of the number of shares.
Labels:
Performance,
pools,
Tuning,
Zones
Thursday, September 2, 2010
controlling cpu usage part 7: Pools
Dynamic pools were introduced in Solaris 10. A pool binds resources, such as a processor set, into a persistent entity. A pool lets us name that binding and assign resource controls, such as the scheduler, on a persistent basis.
Pools can also be used by projects via the project.pool attribute in /etc/project.
By default, if the pools system is enabled using SMF, a default processor set is created and attached to a default pool. This configuration can be viewed using the poolcfg -dc info command.
# poolcfg -dc info
poolcfg: cannot load configuration from /dev/poolctl: Facility is not active
# svcadm enable svc:/system/pools:default
# poolcfg -dc info
system default
string system.comment
int system.version 1
boolean system.bind-default true
string system.poold.objectives wt-load
pool pool_default
int pool.sys_id 0
boolean pool.active true
boolean pool.default true
int pool.importance 1
string pool.comment
pset pset_default
pset pset_default
int pset.sys_id -1
boolean pset.default true
uint pset.min 1
uint pset.max 65536
string pset.units population
uint pset.load 481
uint pset.size 2
string pset.comment
cpu
int cpu.sys_id 1
string cpu.comment
string cpu.status on-line
cpu
int cpu.sys_id 0
string cpu.comment
string cpu.status on-line
To configure pools on a system you must create a configuration file. By default this file should be named /etc/pooladm.conf so that it is loaded automatically at boot time. The easiest way of creating the file is to configure the running system as desired and then save it with the pooladm -s command.
# pooladm -s /etc/pooladm.conf
The following example saves the current kernel state as /etc/pooladm.conf, and then uses poolcfg to create a new pool called pool_web which contains one processor set pset_web which has one CPU.
# pooladm -s /etc/pooladm.conf
# poolcfg -c 'create pool pool_web'
# poolcfg -c 'create pset pset_web (uint pset.min = 1; uint pset.max = 4)'
# poolcfg -c 'associate pool pool_web (pset pset_web)'
# pooladm -c
# pooladm -s
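The pools facility itself can also be enabled and disabled directly with pooladm, as an alternative to the SMF service shown earlier. A sketch of the typical cycle:

```shell
pooladm -e      # enable the pools facility
pooladm -c      # instantiate /etc/pooladm.conf into the kernel
pooladm         # with no options, print the running configuration
```

pooladm -d disables the facility again; -c with no filename uses /etc/pooladm.conf.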
We can then display the resultant configuration.
# poolcfg -dc info
system default
string system.comment
int system.version 1
boolean system.bind-default true
string system.poold.objectives wt-load
pool pool_web
int pool.sys_id 1
boolean pool.active true
boolean pool.default false
int pool.importance 1
string pool.comment
pset pset_web
pool pool_default
int pool.sys_id 0
boolean pool.active true
boolean pool.default true
int pool.importance 1
string pool.comment
pset pset_default
pset pset_web
int pset.sys_id 1
boolean pset.default false
uint pset.min 1
uint pset.max 4
string pset.units population
uint pset.load 0
uint pset.size 1
string pset.comment
cpu
int cpu.sys_id 0
string cpu.comment
string cpu.status on-line
pset pset_default
int pset.sys_id -1
boolean pset.default true
uint pset.min 1
uint pset.max 65536
string pset.units population
uint pset.load 0
uint pset.size 1
string pset.comment
cpu
int cpu.sys_id 1
string cpu.comment
string cpu.status on-line
Note that the default pool and the default pset have their default property set to true.
We can also define other resource properties for a pool, for example setting the pool.scheduler property on the pool.
The following example sets the FSS scheduler for the pool pool_web and commits the change.
# poolcfg -c 'modify pool pool_web (string pool.scheduler="FSS")'
# pooladm -c
# poolcfg -dc info
system default
string system.comment
int system.version 1
boolean system.bind-default true
string system.poold.objectives wt-load
pool pool_web
int pool.sys_id 1
boolean pool.active true
boolean pool.default false
string pool.scheduler FSS
int pool.importance 1
string pool.comment
pset pset_web
...
As the load in one processor set increases, the number of CPUs in that set is increased by taking CPUs from other sets. The pset.min and pset.max properties of the processor set constrain the minimum and maximum number of CPUs that the set can hold.
If there is a tie for resources, the pool.importance property is used as a tie-breaker.
To enable dynamic pools the svc:/system/pools/dynamic:default service must be enabled. This starts the poold daemon, which performs the dynamic modification of the processor sets on the system.
# ps -eaf|grep poold
root 20334 3948 0 12:23:51 pts/4 0:00 grep poold
# svcadm enable svc:/system/pools/dynamic:default
# ps -eaf|grep poold
root 20423 3948 0 12:24:55 pts/4 0:00 grep poold
root 20422 1 0 12:24:53 ? 0:00 /usr/lib/pool/poold
Labels:
Performance,
pools,
Tuning
Tuesday, August 31, 2010
controlling cpu usage part 6: Creating and Using Processor Sets
Processor sets extend the idea of CPU binding to a more general relationship. With processor sets a number of CPUs are collected together into a set. These CPUs are effectively fenced off from the rest of the system: normal threads cannot use them. This differs from processor bindings, where the CPUs remain available to non-bound threads.
Processor sets should only be used on legacy systems that are currently using processor sets. All new installations should use pools, as they have greater flexibility.
The following example creates an empty processor set, assigns CPU id 0 to the newly created set, then binds the current shell to the newly created set.
We then query the processor sets for the details of bound processes before destroying the bindings.
Finally the processor set itself is deleted.
# psrset -c
created processor set 1
# psrset -a 1 0
processor 0: was not assigned, now 1
# psrset -b 1 $$
process id 18219: was not bound, now 1
# psrset
user processor set 1: processor 0
# psrset -Q 1
process id 18329: 1
process id 18219: 1
# psrset -U
# psrset -Q 1
# psrset
user processor set 1: processor 0
# psrset -d 1
removed processor set 1
# psrset
Labels:
Performance,
psrset,
Tuning
Friday, August 27, 2010
controlling cpu usage part 5: Binding a Process to a Processor
Processor binding is the forced locking of a process onto a particular CPU. The nominated process, or threads within a process, are only executed by the specified CPU. All process binding is performed through the pbind command. To bind all the threads in a process, pbind is called with the -b option and the CPU to bind to.
# psrinfo
0 on-line since 06/11/2010 12:18:49
1 on-line since 06/11/2010 12:18:51
# echo $$
16587
# pbind -b 1 $$
process id 16587: was not bound, now 1
# sh
# echo $$
18219
# pbind -q
process id 18220: 1
process id 16587: 1
process id 18219: 1
All the threads of the specified process are bound. Also, processor bindings are inherited by any new threads or processes, so any child processes are likewise bound to the same CPU.
To remove the bindings for a process the -u option to pbind is used; the -U option removes all bindings.
# pbind -u 18219
process id 18219: was 1, now not bound
# pbind -U
# pbind -q
Binding a process or a thread to a CPU does not prohibit that CPU from being used by other threads.
It can, however, be used to limit the maximum amount of CPU that a process, or group of processes, can use to a single CPU.
Labels:
pbind,
Performance,
Tuning
Thursday, August 26, 2010
controlling cpu usage part 4: The Fair Share Scheduler
The Fair Share Scheduler (FSS) is an alternative scheduling class. It is not used by default and must be explicitly enabled. The FSS guarantees that a minimum proportion of the machine's CPU resources is made available to each holder of shares, in proportion to the number of shares held.
The absolute quantity of shares is not important. Any number that is in proportion with the desired CPU entitlement can be used.
To configure projects, the /etc/project file needs to be modified to specify the number of shares granted to each project, and the /etc/user_attr file needs to be modified to assign each user to a project.
To define two users, u1 and u2, with u1 having twice the CPU resources of u2, the entries in /etc/user_attr and /etc/project would be similar to the following:
# egrep 'u[12]' /etc/passwd
u1:x:1000:1::/export/home/u1:/bin/sh
u2:x:1001:1::/export/home/u2:/bin/sh
# egrep 'u[12]' /etc/user_attr
u1::::type=normal;project=u1
u2::::type=normal;project=u2
# egrep 'u[12]' /etc/project
u1:1000:User 1:u1::project.cpu-shares=(privileged,20,none)
u2:1001:User 2:u2::project.cpu-shares=(privileged,10,none)
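When both projects are busy, each project's entitlement is simply its own shares divided by the total shares in use. For the 20/10 split above (integer arithmetic, values taken from the example):

```shell
# Entitlement under FSS contention: own shares / total active shares
u1=20; u2=10
total=$((u1 + u2))
echo "u1 entitlement: $((100 * u1 / total))%"
echo "u2 entitlement: $((100 * u2 / total))%"
```

So u1 is guaranteed roughly two-thirds of the CPU and u2 one-third; if one project is idle, the other may use the idle capacity as well.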
To determine the project of the current process the ps command may be used, and the prctl command will show the number of shares.
# ps -o project= -p $$
user.root
# su - u1
$ ps -o project= -p $$
u1
$ prctl -t privileged -n project.cpu-shares -i pid $$
process: 1444: -sh
NAME                PRIVILEGED    VALUE    FLAG    ACTION    RECIPIENT
project.cpu-shares
                    privileged       20        -    none             -
To change the scheduling class of a running process you can use the priocntl command.
# priocntl -s -c FSS -i pid <pid> # Change one process
# priocntl -s -c FSS -i class TS # Change everything currently in TS
# priocntl -s -c FSS -i zoneid 1 # Change all processes in zone ID 1
# priocntl -s -c FSS -i pid 1 # Change init (special case)
To examine the shares granted to a process (or zone) use the prctl command.
# prctl -t privileged -n zone.cpu-shares -i zoneid 1 # Shares for zone ID 1
To modify the number of shares granted to a zone we can use the -r option to prctl. This change only lasts until the next reboot.
# prctl -r -v 10 -t privileged -n zone.cpu-shares -i zoneid 1  # Change number of shares to 10
To change the default scheduling class, so that on the next and subsequent reboots all processes use FSS by default, we can use the dispadmin command.
# dispadmin -d FSS
Labels:
dispadmin,
Performance,
Projects,
Tuning,
Zones
Wednesday, August 25, 2010
controlling cpu usage part 3: Manipulating the dispatch parameter tables
Each scheduling class maintains a set of tables in the kernel. These are used to control aspects of the scheduling class. These tables may be manipulated by the dispadmin command:
# dispadmin -l
CONFIGURED CLASSES
==================
SYS (System Class)
TS (Time Sharing)
FX (Fixed Priority)
RT (Real Time)
IA (Interactive)
Changing the Scheduler
Solaris comes with six defined scheduling classes. Four of these are provided for user threads: time sharing (TS), interactive (IA), fixed priority (FX) and fair share scheduling (FSS). The other two are the system class (SYS), for kernel threads, and real time (RT).
If there are multiple processor sets in use then each processor set can theoretically use a different scheduling class. This is only practical when using the pool subsystem, which allows scheduling class to be specified per pool.
Time Sharing/Interactive Scheduling Classes
The time sharing and interactive classes use the same algorithm; the difference between them is that the interactive class attempts to give a slight priority boost to the foreground process.
The two classes are driven by a table which has entries for:
- ts_quantum - length of the time slice, in units of RES
- ts_tqexp - priority to move the thread to when its quantum expires
- ts_slpret - priority to move the thread to when it returns from a sleep
- ts_maxwait - maximum number of seconds to wait for CPU before the priority is changed
- ts_lwait - priority to move the thread to when maxwait expires
# dispadmin -g -c TS
# Time Sharing Dispatcher Configuration
RES=1000
# ts_quantum ts_tqexp ts_slpret ts_maxwait ts_lwait PRIORITY LEVEL
200 0 50 0 50 # 0
200 0 50 0 50 # 1
200 0 50 0 50 # 2
200 0 50 0 50 # 3
200 0 50 0 50 # 4
200 0 50 0 50 # 5
200 0 50 0 50 # 6
200 0 50 0 50 # 7
200 0 50 0 50 # 8
200 0 50 0 50 # 9
...
160 0 51 0 51 # 10
160 1 51 0 51 # 11
160 2 51 0 51 # 12
160 3 51 0 51 # 13
160 4 51 0 51 # 14
...
40 40 58 0 59 # 50
40 41 58 0 59 # 51
40 46 58 0 59 # 56
40 47 58 0 59 # 57
40 48 58 0 59 # 58
20 49 59 32000 59 # 59
To change the dispatch parameter table for the TS and IA classes, create a new table in a file and load this file into the running kernel:
# dispadmin -c TS -g > new_table
# ( edit new_table )
# dispadmin -c TS -s new_table
The new table takes effect immediately; no reboot is required. However, the change only lasts for the lifetime of the current boot. To make it effective on subsequent boots, dispadmin -c TS -s new_table has to be run from an initialization script on each boot. It is recommended that this be placed after the single-user milestone is reached, so that the system can still be booted to single-user mode if the table turns out to be incorrect.
Labels:
dispadmin,
Performance,
Tuning
Monday, August 23, 2010
controlling cpu usage part 2: CPU Usage Limit in the Shell
The shell ulimit command can be used to check or set the CPU limit for any subsequently created children, and their descendants. The -t option to ulimit sets the amount of CPU time, in seconds, that a process may use before it is sent a SIGXCPU signal by the kernel. The default is unlimited (no CPU time limit).
# su - user
user $ ulimit -t
unlimited
user $ sh
user $ ulimit -t
unlimited
user $ ulimit -t 10
user $ date; while : ; do : ; done; date
Friday, 5 September 2008 3:34:56 PM EST
Cpu Limit Exceeded (core dumped)
user $
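The same mechanism can be sketched portably: lower the limit in a subshell, burn CPU, and watch the subshell be killed by SIGXCPU. An exit status above 128 means death by signal (128 plus the signal number, which varies by platform):

```shell
# Child is limited to 2 seconds of CPU time; the kernel delivers
# SIGXCPU when the limit is exceeded, killing the busy loop.
( ulimit -t 2
  while : ; do : ; done ) 2>/dev/null
echo "exit status: $?"   # > 128 indicates the child died from a signal
```

Note that ulimit only lowers the limit for the subshell and its children; the parent shell's limit is untouched, which is why the demonstration above uses ( ... ).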
Labels:
Performance,
Tuning
Friday, August 20, 2010
controlling cpu usage part 1: Introduction
CPU usage can be controlled in a number of different ways. The possible choices as of Solaris 10 05/08 are:
- We can set a CPU usage limit in the shell
- We can manipulate the dispatch parameter kernel tables
- We can use different schedulers, such as the FSS (Fair Share Scheduler)
- We can bind a process to a CPU
- We can use processor sets
- We can create pools, which combine scheduler changes and processor sets
- We can set a capped-cpu resource control for Solaris Containers (zones)
In the coming weeks I will discuss these options in a little detail, in the hope that you can improve performance or better tune aspects of your environment.
Labels:
Performance,
Tuning