1. PowerHA Version
There is no specific command to get the PowerHA version. However, the version of the cluster.es.server.rte fileset reflects the PowerHA version:
# lslpp -Lqc cluster.es.server.rte | cut -d: -f3
6.1.0.3
If you don't trust the above method to determine the PowerHA version, you could also ask the cluster manager:
# lssrc -ls clstrmgrES | egrep '^local node vrmf|^cluster fix level'
local node vrmf is 6103
cluster fix level is "3"
2. Program Paths
The paths to the cluster commands are not included in the default PATH. It makes sense to extend the default PATH to include the cluster directories:
# export PATH=$PATH:/usr/es/sbin/cluster:/usr/es/sbin/cluster/utilities:/usr/es/sbin/cluster/sbin:/usr/es/sbin/cluster/cspoc
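To make the change permanent you could append the same export line to root's profile; a minimal sketch, assuming root's home directory is the AIX default (/):
# echo 'export PATH=$PATH:/usr/es/sbin/cluster:/usr/es/sbin/cluster/utilities:/usr/es/sbin/cluster/sbin:/usr/es/sbin/cluster/cspoc' >> /.profile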
Dynamic Reconfiguration (CSPOC)
1. Extend a Volume Group
One or more PVs can be added to an existing volume group. Since it is not guaranteed that the hdisks are numbered the same way across all nodes, you need to specify a reference node with the "-R" switch:
nodeA# /usr/es/sbin/cluster/sbin/cl_extendvg -cspoc -n'nodeA,nodeB' -R'nodeA' VolumeGroup hdiskA hdiskB hdisk...
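For illustration, assuming a two-node cluster (nodeA, nodeB), a volume group named datavg and two new disks hdisk4 and hdisk5 (all placeholder names), followed by a quick check that the disks really ended up in the VG:
nodeA# /usr/es/sbin/cluster/sbin/cl_extendvg -cspoc -n'nodeA,nodeB' -R'nodeA' datavg hdisk4 hdisk5
nodeA# lsvg -p datavg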
2. Reduce a Volume Group
nodeA# /usr/es/sbin/cluster/sbin/cl_reducevg -cspoc -n'nodeA,nodeB' -R'nodeA' VolumeGroup hdiskA hdiskB hdisk...
3. Add a Filesystem to an Existing VG
nodeA# /usr/es/sbin/cluster/sbin/cl_mklv -cspoc -n'nodeA,nodeB' -R'nodeA' -y'LVName' -t'jfs2' -e'x' -u'2' -s's' VolumeGroup LPs hdiskA hdiskB
It is recommended to use the narrowest upper bound possible to keep the mirror consistent after a filesystem extension.
You could also use a map file to tell the command how to set up your LV:
nodeA# /usr/es/sbin/cluster/sbin/cl_mklv -cspoc -n'nodeA,nodeB' -R'nodeA' -y'LVName' -t'jfs2' -m MapFile VolumeGroup LPs
nodeA# /usr/es/sbin/cluster/cspoc/cli_chlv -e x -u'2' -s's' LVName
The format of the map file is
hdiskA:PP1
hdiskB:PP1
:
hdiskC:PP2
hdiskD:PP2
First put in all mappings for mirror copy 1, then add all mappings for mirror copy 2. The number of entries per mirror copy has to equal the number of LPs given on the command line. Be careful!
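As an illustration only (disk names and PP numbers are made up), a map file for an LV with 2 LPs, mirror copy 1 on hdisk4/hdisk5 and mirror copy 2 on hdisk6/hdisk7, could look like this:
hdisk4:110
hdisk5:111
hdisk6:110
hdisk7:111
The matching cl_mklv call would then be given 2 LPs on the command line.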
Then we create the filesystem on top of the newly created LV:
nodeA# /usr/es/sbin/cluster/sbin/cl_crfs -v'jfs2' -m'/mountpoint' -d'/dev/LVName' -A'n' -a'logname=INLINE'
4. Increase a Filesystem
If there are still enough free PPs in the existing volume group, the filesystem can be extended with a standard AIX command:
nodeA# chfs -a size=512G /mountpoint
Be sure that superstrictness is set and that the upper bound is set correctly.
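A quick way to check the free space, the upper bound and the strictness with standard AIX commands, assuming a volume group named datavg and a logical volume named datalv01 (placeholder names):
nodeA# lsvg datavg | grep 'FREE PPs'
nodeA# lslv datalv01 | egrep 'UPPER BOUND|EACH LP'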
In case you need to add new disks to the VG (see 1. Extend a Volume Group) in order to extend the filesystem, you have to extend the underlying LV first:
nodeA# /usr/es/sbin/cluster/sbin/cl_extendlv -u'8' -m'MapFile' LVName LPs
nodeA# chfs -a size=512G /mountpoint
The upper bound has to be adapted to the new number of PVs. The map file must only contain the additional mappings.
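As an example, assuming the VG was just extended with hdisk6 and hdisk7 and the LV grows by one LP with two mirror copies (disk names and PP numbers are placeholders), the map file would contain only the new PPs, copy 1 first, then copy 2:
hdisk6:1
hdisk7:1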
5. Mirror a Logical Volume
# /usr/es/sbin/cluster/sbin/cl_mklvcopy -R'NODE' -e'x' LVName 2 hdiskA hdiskB hdisk...
Since it is not guaranteed that the hdisks are numbered the same way across all nodes, you need to specify a reference node.
6. Remove a Logical Volume Mirror
# /usr/es/sbin/cluster/sbin/cl_rmlvcopy -R'NODE' LVName 1 hdiskA hdiskB hdisk...
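To verify the result you can check the copy count and the physical placement with standard AIX commands; assuming a logical volume named datalv01 (placeholder name):
# lslv datalv01 | grep COPIES
# lslv -m datalv01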
Resource Group Management
1. Move a Resource Group to another Node
# /usr/es/sbin/cluster/utilities/clRGmove -g RG -n NODE -m
It is also possible to move multiple resource groups in one go:
# /usr/es/sbin/cluster/utilities/clRGmove -g "RG1,RG2,RG3" -n NODE -m
2. Bring a Resource Group Down
# /usr/es/sbin/cluster/utilities/clRGmove -g RG -n NODE -d
3. Bring a Resource Group Up
# /usr/es/sbin/cluster/utilities/clRGmove -g RG -n NODE -u
Cluster Information
1. Where Can I Find the Log Files?
Historically the cluster's main log file could be found under "/tmp/hacmp.out". But nowadays the location is configurable. If you don't know where to look, run:
# /usr/es/sbin/cluster/utilities/cllistlogs
/var/hacmp/log/hacmp.out
/var/hacmp/log/hacmp.out.1
/var/hacmp/log/hacmp.out.2
2. Where Can I Find the Application Start/Stop Scripts?
# /usr/es/sbin/cluster/utilities/cllsserv
AppSrv1 /etc/cluster/start_appsrv1 /etc/cluster/stop_appsrv1
AppSrv2 /etc/cluster/start_appsrv2 /etc/cluster/stop_appsrv2
3. Show the Configuration of a Particular Resource Group
# /usr/es/sbin/cluster/utilities/clshowres -g RG
If you are interested in the configuration of all resource groups you can use clshowres without any option:
# /usr/es/sbin/cluster/utilities/clshowres
4. Cluster IP Configuration
# /usr/es/sbin/cluster/utilities/cllsif
Cluster State
1. Cluster State
The most widely known tool to check the cluster state is probably clstat:
# /usr/es/sbin/cluster/clstat -a
The switch "-a" forces clstat to run in terminal mode rather to than open an X window.
The same information can be obtained with
# /usr/es/sbin/cluster/utilities/cldump
2. The Cluster Manager
If for whatever reason SNMP does not allow you to use clstat or cldump, you can still ask the cluster manager about the state of your cluster:
# lssrc -ls clstrmgrES
Current state: ST_STABLE
sccsid = "@(#)36 1.135.1.101 src/43haes/usr/sbin/cluster/hacmprd/main.C, hacmp.pe, 53haes_r610 11/16/10 06:18:14"
i_local_nodeid 0, i_local_siteid -1, my_handle 1
ml_idx[1]=0 ml_idx[2]=1
There are 0 events on the Ibcast queue
There are 0 events on the RM Ibcast queue
CLversion: 11
local node vrmf is 6103
cluster fix level is "3"
The following timer(s) are currently active:
Current DNP values
DNP Values for NodeId - 1 NodeName - barney
PgSpFree = 16743822 PvPctBusy = 3 PctTotalTimeIdle = 90.121875
DNP Values for NodeId - 2 NodeName - betty
PgSpFree = 16746872 PvPctBusy = 0 PctTotalTimeIdle = 97.221894
3. Where are the Resources Currently Active?
# /usr/es/sbin/cluster/utilities/clRGinfo
-----------------------------------------------------------------------------
Group Name Group State Node
-----------------------------------------------------------------------------
RES_GRP_01 ONLINE barney
OFFLINE betty
RES_GRP_02 ONLINE betty
OFFLINE barney