HMC-related commands
Change the HMC's date
Set the HMC date and time, in the format MMDDhhmm[[CC]YY][.ss]:
# chhmc -c date -s modify --datetime 09121621 --timezone Europe/Paris
The Customize Date/Time request completed successfully. Please reboot the HMC.
# date
Fri Sep 12 16:21:03 CEST 2014
Check NTP service configuration
This should return enable along with the NTP servers' IP addresses. You can also grep for
synchronization information in the /var/log/ntp logfile.
# lshmc -r -Fxntp,xntpserver
enable,"127.127.1.0,[your_NTP_server]"
# grep -i sync /var/log/ntp |tail -2
4 Apr 15:47:20 ntpd[7024]: synchronized to 10.16.158.1, stratum 2
4 Apr 15:47:20 ntpd[7024]: kernel time sync status change 2001
Configure and Activate NTP service
(only one NTP server can be added per command, sorry folks... see the sketch below for several)
# chhmc -c xntp -s add -a [your_NTP_server]
# chhmc -c xntp -s enable
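If you have several NTP servers, a minimal sketch (with hypothetical server names) is simply to repeat the add step before enabling the service:
# chhmc -c xntp -s add -a ntp1.example.com
# chhmc -c xntp -s add -a ntp2.example.com
# chhmc -c xntp -s enable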
IBM Service Agent
This displays the status of the IBM Service Agent and the e-mail notification configuration (if any) for events
detected by ESA:
hscroot@hmc# lssacfg -t email
status=enabled,smtp_server=10.245.192.12,smtp_server_port=25,\
"[email protected]/ESA.All, \
[email protected]/ESA.CallHomeOnly"
> You can change (or enable) these parameters with the chsacfg command:
HMC V7:
hscroot@hmc # chsacfg -t email -o setsmtp -h 10.99.99.99 -p 25 -a\
nagios/ESA.CallHomeOnly
hscroot@hmc # lssacfg -t email
status=enabled,smtp_server=10.240.122.96,smtp_server_port=25,\
email_addresses=nagios/ESA.CallHomeOnly/ESA.All
HMC V8:
hscroot@hmc # chsacfg -t email -o setsmtp -h 10.99.99.99 -p 25
hscroot@hmc # chsacfg -t email -o add -a "nagios/ESA.All"
Then you can test it:
# chsacfg -t callhomeserver -o test
Test beginning
Edge_Gateway_1:129.42.56.189:443(esupport.ibm.com)::Failed
Edge_Gateway_2:129.42.56.189:443(esupport.ibm.com)::Failed
Edge_Gateway_3:129.42.54.189:443(esupport.ibm.com)::Failed
Edge_Gateway_4:129.42.54.189:443(esupport.ibm.com)::Failed
Edge_Gateway_5:129.42.60.189:443(esupport.ibm.com)::Failed
Edge_Gateway_6:129.42.60.189:443(esupport.ibm.com)::Failed
Testing Completed
List/activate utilization data (useful for nmon or lpar2rrd, or for the performance monitoring built into
HMC V8 and later)
# lslparutil -m [managed_system] -r all
time=02/10/2014 15:22:30,event_type=sample,resource_type=sys,sys_time=02/10/2014 \
15:22:30,primary_state=Started,detailed_state=None,configurable_sys_proc_units=102.0,\
configurable_sys_mem=3481600,curr_avail_sys_proc_units=55.85,\
curr_avail_5250_cpw_percent=0.0,curr_avail_sys_mem=2158848,sys_firmware_mem=101888,\
proc_cycles_per_second=512000000
# chlparutil -r config -m CPU-FRM12012B-P795-SOCSIL-DAL -s 60
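To switch on 60-second sampling for every managed system attached to the HMC, here is an untested sketch reusing the same lssyscfg loop pattern used elsewhere on this page (adjust the sample rate to your needs):
# lssyscfg -r sys -Fname | while read f; do chlparutil -r config -m $f -s 60; done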
On/off CoD
Listing information about billing details, then about the resources available:
# lscod -t bill -m [managed_system] -c onoff -r mem \
-Factivated_onoff_resources,avail_resources_for_onoff,hist_expired_resource_days,\
hist_unreturned_resource_days,collection_date
0000,2744,00000705,00000000,2014-02-10
# lscod -t cap -m [managed_system] -r [mem|proc] -c onoff
mem_onoff_state=Available,activated_onoff_mem=0,avail_mem_for_onoff=2809856,\
unreturned_onoff_mem=0,onoff_request_mem_days_left=0,onoff_mem_day_hours_left=0,\
onoff_mem_days_avail=30976
Listing the history of On/Off CoD operations:
# lscod -t hist -m [managed_system]
time_stamp=01/27/2014 19:24:43,entry=HSCL0317 On/Off CoD memory request period expired.
time_stamp=01/27/2014 19:23:59,entry=HSCL0316 On/Off CoD processor request period expired.
time_stamp=01/26/2014 18:41:24,"entry=HSCL0301 CUoD processor activation code entered,
number of processors: 44, index: 0000."
time_stamp=01/26/2014 18:35:12,"entry=HSCL0302 CUoD memory activation code entered, GB of
memory: 1704, index: 0000."
time_stamp=01/25/2014 18:58:46,"entry=HSCL030F On/Off CoD memory activated, GB of memory:
352, number of days: 2."
time_stamp=01/25/2014 18:58:17,"entry=HSCL030E On/Off CoD processors activated, number of
processors: 38, number of days: 2."
time_stamp=08/21/2013 13:54:51,"entry=HSCL0303 On/Off processor enablement code entered,
maximum number of processor days: 3420."
time_stamp=08/21/2013 13:54:11,"entry=HSCL0304 On/Off memory enablement code entered,
maximum number of memory days: 31680."
time_stamp=07/22/2013 14:29:45,entry=HSCL0316 On/Off CoD processor request period expired.
time_stamp=07/20/2013 14:03:48,"entry=HSCL030E On/Off CoD processors activated, number of
processors: 19, number of days: 2."
time_stamp=07/20/2013 10:07:10,"entry=HSCL0302 CUoD memory activation code entered, GB of
memory: 512, index: 0000."
time_stamp=06/26/2013 13:25:45,entry=HSCL0317 On/Off CoD memory request period expired.
time_stamp=06/26/2013 12:48:26,entry=HSCL0316 On/Off CoD processor request period expired.
time_stamp=06/25/2013 13:12:09,"entry=HSCL030F On/Off CoD memory activated, GB of memory:
1, number of days: 1."
time_stamp=06/25/2013 12:35:03,"entry=HSCL030E On/Off CoD processors activated, number of
processors: 1, number of days: 1."
time_stamp=04/08/2013 08:08:40,"entry=HSCL0303 On/Off processor enablement code entered,
maximum number of processor days: 360."
time_stamp=04/08/2013 08:07:38,"entry=HSCL0304 On/Off memory enablement code entered,
maximum number of memory days: 999."
Virtual Ethernet settings
A quick and easy way to view which VLANs are served by your virtual switches:
# lshwres -r virtualio --rsubtype vswitch -m [managed_system] -F
ETHERNET0(Default),none
VSWITCH1,"715,190,2,745,191,999,4094"
VSWITCH2,"223,163,131,1272,1273,1099,999,4094"
VSWITCH3,"1271,131,24,164,224,225,163,84,999,4094"
> Please note that if you use 802.1Q, the PVID is displayed as well; it is not to be counted among your actual
active VLANs, and neither is the control channel (4094 in my example).
Generally, the PVID for a SEA is 999, as shown in my example.
Get the names of virtual switches and their associated VLAN IDs
# lshwres -r virtualio --rsubtype vswitch -m [managed_system]
ETHERNET0(Default),"vlan_ids=182,181,82,81,172,171,112"
Get the list of virtual switches per server
# lssyscfg -r sys -Fname |while read f; do switch=`lshwres -r virtualio --rsubtype vswitch
-m $f`; echo "$f";echo "$switch";echo; done
A more useful one-liner: VLAN distribution by VIO server
# lshwres -r virtualio --rsubtype eth -m [managed_system] --level lpar
-Fis_trunk,lpar_name,addl_vlan_ids |grep "^1"| sed -e 's/^1,//g' -e 's/,\"/:/g' -e
's/\"$//g'
VIO1:24,84,131,163,164,224,225,1271
VIO2:833,836,843,846
VIO3:56,1271
You can see here that we do not display the PVID (we could, with the port_vlan_id attribute; see the sketch
below), and that we filter on trunk adapters only, so we avoid displaying the control channels.
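For reference, here is a sketch of the same one-liner with the PVID added through the port_vlan_id attribute; the output line is illustrative only, built from the VIO1 example above:
# lshwres -r virtualio --rsubtype eth -m [managed_system] --level lpar \
-Fis_trunk,lpar_name,port_vlan_id,addl_vlan_ids | grep "^1"
1,VIO1,999,"24,84,131,163,164,224,225,1271"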
Shared pools
Check whether your pSeries is capable of multiple shared pools:
# lssyscfg -r sys -m [managed_system] -Factive_lpar_share_idle_procs_capable
> Returns 1 if multiple shared pools are supported, 0 if not.
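To check this capability on every frame managed by the HMC at once, a quick sketch combining the command above with the usual lssyscfg loop:
# lssyscfg -r sys -Fname | while read f; do echo -n "$f: "; lssyscfg -r sys -m $f \
-Factive_lpar_share_idle_procs_capable; done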
List the shared pools active on the frame and the LPARs associated with them:
# lshwres -r procpool -m [managed_system] -Fname,lpar_names
DefaultPool,"VIO1,VIO2"
shp_oracle,"my_db_lpar1,my_db_lpar2"
shp_app,"my_app_lpar"
Creating a new shared pool
You cannot actually create a shared pool; you just rename one of the existing ones (pool IDs 0 through 63),
don't ask me why.
Before doing so, check the existing shared pools and spot a pool ID that isn't allocated, like 4 in my example
below:
# chhwres -r procpool -m [managed_system] -o s --poolid 4 -a
"new_name=MyNewPool,max_pool_proc_units=2"
Changing a shared pool's maximum size to 10 cores:
# chhwres -r procpool -m [managed_system] -o s --poolname my_sharedpool -a
max_pool_proc_units=10
Changing the current shared pool for an LPAR:
# chhwres -r procpool -m [managed_system] -o s -p [lpar_name] -a
"shared_proc_pool_name=[target_sharepool_name]"
Changing the shared pool defined in an LPAR's profile:
# chsyscfg -r prof -m [managed_system] -i
"lpar_name=[lpar_name],name=[profile_name],shared_proc_pool_name=[target_sharepool_name]"
Hardware-related commands
Getting the firmware level of a managed system:
# lslic -m [managed_system] -t sys -Fcurr_ecnumber_primary:activated_level
01AM730:99
Getting the firmware level of all your managed systems:
# lssyscfg -r sys -Fname |while read f
do
model=`lssyscfg -r sys -Fname,type_model |grep $f`
echo -n "$model: ";lslic -m $f -t sys -Fcurr_ecnumber_primary:activated_level
done
frame,9117-MMC: 01AM740:100
frame,9117-MMD: 01AM760:51
frame,9117-MMD: 01AM760:68
frame,8202-E4C: 01AL740:152
frame,9117-MMD: 01AM760:68
frame,9117-MMD: 01AM760:68
frame,9117-MMD: 01AM760:51
frame,8231-E1D: 01AL770:90
frame,8202-E4D: 01AL770:90
frame,8408-E8D: 01AM770:48
frame,8202-E4C: 01AL740:152
frame,8231-E1D: 01AL770:48
frame,8231-E1D: 01AL770:90
frame,9117-MMD: 01AM760:68
frame,8231-E1D: 01AL770:90
frame,8202-E4D: 01AL770:90
frame,8231-E1D: 01AL770:90
frame,9117-MMC: 01AM740:100
Rebuild a managed system whose configuration state is Incomplete
hscroot@hmc~> lssyscfg -r sys -Fname,state |grep compl
192.168.128.17,Incomplete
hscroot@hmc~> chsysstate -m 192.168.128.17 -r sys -o rebuild
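If several managed systems show up as Incomplete, a sketch to rebuild them all (sed is used instead of awk, which is not available to hscroot):
# lssyscfg -r sys -Fname,state | grep -i incomplete | sed 's/,.*//' | \
while read f; do chsysstate -m $f -r sys -o rebuild; done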
Listing all I/O slots of a pSeries, sorted by device type and counted:
/!\ To do this, you need to execute the awk|sort|uniq part from an AIX system, because you can't use awk
as hscroot on an HMC (I really should ask IBM why, one day; it is so useful). See the sketch after the output below.
# lshwres -m [managed_system] -r io --rsubtype slot\
-F drc_name:bus_id:description| awk -F: '{print $NF}' | sort |uniq -c
24 8 Gigabit PCI Express Dual Port Fibre Channel Adapter
21 Empty slot
38 Ethernet controller
17 PCI-E SAS Controller
On my beautiful p795, I have 21 empty slots, 24 NPIV-capable Fibre Channel adapters, 17 SAS
controllers and 38 FCoE cards.
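Since awk is not available in the hscroot restricted shell, one way to run the whole pipeline in a single step is from an AIX (or Linux) box with SSH access to the HMC; a sketch assuming key-based authentication:
# ssh hscroot@[hmc] "lshwres -m [managed_system] -r io --rsubtype slot -F drc_name:bus_id:description" | \
awk -F: '{print $NF}' | sort | uniq -c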
If you wish to have more thorough information (like the physical location), you just have to remove the
awk|sort|uniq part, which gives this kind of output (much more verbose; note the null value, which indicates
the unassigned I/O cards):
# lshwres -m [managed_system] -r io --rsubtype slot \
-F drc_name:bus_id:description:lpar_name
U5803.001.9SS05BI-P1-C8:617:8 Gigabit PCI Express Dual Port Fibre Channel Adapter:null
U5803.001.9SS05BI-P1-C9:618:Ethernet controller:null
U5803.001.9SS05BI-P1-C10:619:Ethernet controller:null
U5803.001.9SS05BI-P1-C4:612:Ethernet controller:null
[...]
U5803.001.9SS05BI-P2-C3:1042:Empty slot:null
U5803.001.9SS05BI-P2-C7:1048:Ethernet controller:VIO2
U5803.001.9SS05BI-P2-C8:1049:Ethernet controller:VIO1
U5803.001.9SS05BI-P2-C9:1050:8 Gigabit PCI Express Dual Port Fibre Channel Adapter:VIO2
U5803.001.9SS05BI-P2-C5:1045:PCI-E SAS Controller:VIO1
LPAR-related actions
Power on an LPAR:
# chsysstate -m [managed_system] -o on -r lpar -n [lpar_name] -f
[profile_name]
Shutting down an LPAR:
# chsysstate -m [managed_system] -r lpar -n [lpar_name] -o shutdown [--immed]
Monitor the boot LED code of a dumping/starting LPAR:
# lsrefcode -r lpar -m [managed_system] -F
lpar_name,lpar_id,refcode,fru_call_out_loc_codes --filter "lpar_names=LPAR1"
LPAR1,35,0555,FSCK ERROR
Boot an LPAR in maintenance mode (run from the NIM master):
# nim -o maint_boot -a spot=spot53TL9SP4 -a boot_client=no -a open_console=no [LPAR]
Rename an LPAR:
# chsyscfg -m [managed_system] -r lpar -i "name=my_lpar_name,new_name=my_new_lpar_name"
Add 1 EC to an LPAR which has 0.5 (DLPAR operation):
# chhwres -r proc -m [managed_system] -p [LPAR] -o a --procunits 1
Add 1 EC to an LPAR which has 0.5 (profile operation):
# chsyscfg -m [managed_system] -r prof -i
"name=[profile],lpar_name=[lpar],desired_proc_units=1.5"
Removing a virtual FC adapter from a VIO server (DLPAR operation):
# chhwres -r virtualio -m [managed_system] -o r --id [VIO_id] --rsubtype fc -s [slot_ID]
Removing a virtual FC adapter on a VIO server (profile operation):
# chsyscfg -r prof -m [managed_system] -i
"lpar_id=[VIO_id],name=[profile_name],virtual_fc_adapters-=[virtual-slot-number]/[client-or-server]/
[remote-lpar-ID]/[remote-lpar-name]/[remote-slot-number]/[wwpns]/[is-required]"
User-related commands
Listing existing HMC users
# lshmcusr -F
root,hmcsuperadmin,root,99999,ALL:,local,,1,1,0,0,15,0,0,,md5,0
lpar2rrd,hmcviewer,HMC User,99999,ALL:,local,,0,1,0,0,15,0,0,,md5,0
hscpe,hmcpe,HMC User,99999,ALL:,local,,0,1,0,0,15,0,0,,md5,0
hscroot,hmcsuperadmin,HMC Super User,99999,ALL:,local,,1,1,0,0,15,0,0,,md5,0
Changing a user's password
# chhmcusr -u hscroot -t passwd
Listing access rights to LPARs for a custom role
# lsaccfg -t resourcerole --filter "resourceroles=my_custom_role"
name=resourceroleext,"resources=lpar:root/ibmhscS1_0|123*9119-FHB*84Z99B4|
IBMHSC_Partition,lpar:root/ibmhscS1_0|14*9119-FHB*84Z99B4|IBMHSC_Partition"
Other: getting to know and check the HMC's connectivity from the client (LPAR) side
# lsrsrc IBM.MCP
Resource Persistent Attributes for IBM.MCP
resource 1:
MNName = "10.10.10.10"
NodeID = 8729111498266952156
KeyToken = "HMC1"
IPAddresses = {"10.10.10.1"}
ConnectivityNames = {"10.10.10.10"}
HMCName = "7042CR6*9999DAC"
HMCIPAddr = "10.10.10.3"
HMCAddIPs = "10.10.10.5"
HMCAddIPv6s = ""
ActivePeerDomain = ""
NodeNameList = {"lpar1"}
resource 2:
MNName = "10.10.10.10"
NodeID = 1943424031094794948
KeyToken = "HMC2"
IPAddresses = {"10.10.10.2"}
ConnectivityNames = {"10.10.10.10"}
HMCName = "7042CR6*9999DBC"
HMCIPAddr = "10.10.10.4"
HMCAddIPs = "10.10.10.6"
HMCAddIPv6s = ""
ActivePeerDomain = ""
NodeNameList = {"lpar2"}
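As a complement, the RMC connection to the HMCs can also be checked from the LPAR with the rmcdomainstatus script shipped with RSCT; a minimal sketch:
# /usr/sbin/rsct/bin/rmcdomainstatus -s ctrmc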
Links
Hardware Management Console Related technical information
HMC V7 R7.8.0 Command Line specification