or:
# lsrsrc IBM.ManagementServer
resource 1:
        Name             = "xx.xx.xx.xx"
        Hostname         = "xx.xx.xx.xx"
        ManagerType      = "HMC"
        LocalHostname    = "xx.xx.xx.xx"
        ClusterTM        = "9078-160"
        ClusterSNum      = ""
        ActivePeerDomain = ""
        NodeNameList     = {"hostname"}
After confirmation from the Application and DBA teams that the applications and databases are down, the UNIX administrators need to perform the following steps.
1) Remove the alternate, i.e. the old copy of rootvg (altinst_rootvg), to free hdisk1
2) Add hdisk1 to rootvg
3) Mirror rootvg's hdisk0 onto hdisk1
4) Create the boot device
5) Rebuild the bootlist
6) Add hdisk1 and hdisk0 to the bootlist and reboot the server to disable the quorum
7) Check the status of the server and check the boot log for any errors
8) Rearrange the bootlist with hdisk0 and hdisk1
9) Reboot with hdisk0
First, complete a status check:
Hostname:/> lspv
hdisk0          005db36d2f1e67e5          rootvg          active
hdisk1          005db36d8792541c          altinst_rootvg  active
Hostname:/> lsvg -p rootvg
rootvg:
PV_NAME    PV STATE    TOTAL PPs    FREE PPs    FREE DISTRIBUTION
hdisk0     active      542          337         109..08..03..108..109
Hostname:/> bootinfo -b
hdisk0
Hostname:/> lsvg
rootvg
altinst_rootvg
issvg
datavg
Now proceed with the activity:
Hostname:/> alt_rootvg_op -X altinst_rootvg    #This removes altinst_rootvg; hdisk1 is now accessible
Bootlist is set to the boot disk: hdisk0
Hostname:/> lsvg
rootvg
issvg
datavg
Hostname:/> lsvg -o
datavg
issvg
rootvg
Hostname:/> lspv
hdisk0          005db36d2f1e67e5          rootvg          active
hdisk1          005db36d8792541c          None
Hostname:/> extendvg -f rootvg hdisk1 #Add hdisk1 to rootvg; I had to use the -f force option
Hostname:/> lsvg -p rootvg
rootvg:
PV_NAME    PV STATE    TOTAL PPs    FREE PPs    FREE DISTRIBUTION
hdisk0     active      542          337         109..08..03..108..109
hdisk1     active      542          542         109..108..108..108..109
Hostname:/> mirrorvg rootvg
0516-1124 mirrorvg: Quorum requirement turned off, reboot system for this
        to take effect for rootvg.
0516-1126 mirrorvg: rootvg successfully mirrored, user should perform
        bosboot of system to initialize boot records.  Then, user must modify
        bootlist to include:  hdisk0 hdisk1.
You have mail in /usr/spool/mail/root
Hostname:/> lsvg -p rootvg
rootvg:
PV_NAME    PV STATE    TOTAL PPs    FREE PPs    FREE DISTRIBUTION
hdisk0     active      542          337         109..08..03..108..109
hdisk1     active      542          353         109..24..03..108..109
Hostname:/> lsvg -l rootvg
rootvg:
LV NAME      TYPE      LPs   PPs   PVs  LV STATE      MOUNT POINT
hd5          boot      1     2     2    closed/syncd  N/A
hd6          paging    32    64    2    open/syncd    N/A
hd8          jfslog    1     2     2    open/syncd    N/A
hd4          jfs       1     2     2    open/syncd    /
hd2          jfs       101   202   2    open/syncd    /usr
hd9var       jfs       12    24    2    open/syncd    /var
hd3          jfs       24    48    2    open/syncd    /tmp
hd1          jfs       14    28    2    open/syncd    /home
hd10opt      jfs       2     4     2    open/syncd    /opt
dump_disk0   sysdump   16    16    1    open/syncd    N/A
lv01         jfs       1     2     2    open/syncd    /var/adm/perfmgr
Hostname:/> bosboot -a
Boot image is 30577 512 byte blocks.
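Note (my addition, not part of the original transcript): depending on the AIX level you may also want to point bosboot at the new mirror disk explicitly, so that hdisk1 definitely receives a fresh boot image:
Hostname:/> bosboot -ad /dev/hdisk1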
Hostname:/> bootlist -m normal hdisk0 hdisk1
Hostname> bootlist -m normal hdisk1 hdisk0    #Put hdisk1 first so the next boot tests the new mirror copy
Hostname> reboot
Hostname> bootinfo -b
hdisk1
Hostname:/> alog -o -t boot | less
Search for your disk (hdisk1 for me) in the boot log:
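One way to do that (my addition; adjust the disk name to whatever you just booted from) is to filter the log instead of paging through it:
Hostname:/> alog -o -t boot | grep -i hdisk1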
Hostname> bootlist -m normal hdisk0 hdisk1
Hostname> reboot
Hostname> bootinfo -b
hdisk0
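As a final sanity check (my addition, not part of the original write-up), you can confirm that both copies of every logical volume are in sync before calling the activity done:
Hostname:/> lsvg rootvg | grep -i stale     #STALE PPs should be 0
Hostname:/> lsvg -l rootvg                  #every LV should show .../syncd, not .../stale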
http://www.computers-it.com/aix/aix_remirror_rootvg_after_migration_inst.php
By default, the system dump device in AIX is paging space (/dev/hd6).
# sysdumpdev -l
primary              /dev/hd6
secondary            /dev/sysdumpnull
copy directory       /var/adm/ras
forced copy flag     TRUE
always allow dump    TRUE
dump compression     ON

# lsvg -l rootvg
rootvg:
LV NAME    TYPE      LPs   PPs   PVs  LV STATE      MOUNT POINT
hd5        boot      1     2     2    closed/syncd  N/A
hd6        paging    12    24    2    open/syncd    N/A
hd8        jfs2log   1     2     2    open/syncd    N/A
hd4        jfs2      1     2     2    open/syncd    /
hd2        jfs2      19    38    2    open/syncd    /usr
hd9var     jfs2      1     2     2    open/syncd    /var
hd3        jfs2      5     10    2    open/syncd    /tmp
hd1        jfs2      1     2     2    open/syncd    /home
hd10opt    jfs2      8     16    2    open/syncd    /opt
This can create a problem, since the system will not automatically reboot after a crash; instead it will prompt for instructions on what to do with the dump. Luckily, it is very easy to change these settings and allocate a dedicated logical volume for storing the system dump.
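For context (my addition): when the dump device is paging space, the dump has to be copied into the copy directory during the next boot, and the forced copy flag decides whether the boot stops and prompts if that copy fails. You can view or change both with sysdumpdev:
# sysdumpdev -l                 #shows the copy directory and forced copy flag
# sysdumpdev -d /var/adm/ras    #forced copy flag FALSE: ignore a failed copy and keep booting
# sysdumpdev -D /var/adm/ras    #forced copy flag TRUE: prompt for external media if the copy fails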
First you need to know how big the sysdump logical volume should be.
# sysdumpdev -e
Estimated dump size in bytes: 483393536
So, in this case the LV should be at least 460MB. If you take a closer look at the logical volume output above, you'll notice that every value in the 'PPs' column is twice the value in the 'LPs' column. That can only mean our root volume group is mirrored, so we will actually need 460MB x 2. Let's check whether our root volume group has enough free space.
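A quick way to double-check that figure (my addition) is to convert the estimate to megabytes in the shell:
# echo $((483393536 / 1024 / 1024))
461
That is just over 460MB, which is where the figure above comes from.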
# lsvg rootvg
VOLUME GROUP:       rootvg                   VG IDENTIFIER:  00cb8a0c00004c000000010bfabee774
VG STATE:           active                   PP SIZE:        128 megabyte(s)
VG PERMISSION:      read/write               TOTAL PPs:      542 (69376 megabytes)
MAX LVs:            256                      FREE PPs:       411 (52608 megabytes)
LVs:                8                        USED PPs:       131 (16768 megabytes)
OPEN LVs:           7                        QUORUM:         1
TOTAL PVs:          2                        VG DESCRIPTORS: 3
STALE PVs:          0                        STALE PPs:      0
ACTIVE PVs:         2                        AUTO ON:        yes
MAX PPs per VG:     32512
MAX PPs per PV:     1016                     MAX PVs:        32
LTG size (Dynamic): 256 kilobyte(s)          AUTO SYNC:      no
HOT SPARE:          no                       BB POLICY:      relocatable
It seems we have about 51GB free, which is more than enough. We also found out that there are 2 physical volumes in this volume group. Let's check the names of these PVs; we will need them soon.
# lsvg -p rootvg
rootvg:
PV_NAME    PV STATE    TOTAL PPs    FREE PPs    FREE DISTRIBUTION
hdisk1     active      271          207         54..29..18..54..52
hdisk2     active      271          204         54..26..18..54..52
Now we are going to create two logical volumes, one on each PV, and set one as the primary dump device and the other as the secondary dump device. The reason for doing this is that we don't want to mirror the sysdump device, but we still need two copies in case one of the hard drives fails.
In this example the physical partition size is 128MB, so we should allocate 4 PPs (512MB) for each new logical volume. Let's start.
# echo $((128*4))
512
# mklv -t sysdump -y sysdump1 rootvg 4 hdisk1
# mklv -t sysdump -y sysdump2 rootvg 4 hdisk2
# lsvg -l rootvg
rootvg:
LV NAME    TYPE      LPs   PPs   PVs  LV STATE      MOUNT POINT
hd5        boot      1     2     2    closed/syncd  N/A
hd6        paging    12    24    2    open/syncd    N/A
hd8        jfs2log   1     2     2    open/syncd    N/A
hd4        jfs2      1     2     2    open/syncd    /
hd2        jfs2      19    38    2    open/syncd    /usr
hd9var     jfs2      1     2     2    open/syncd    /var
hd3        jfs2      5     10    2    open/syncd    /tmp
hd1        jfs2      1     2     2    open/syncd    /home
hd10opt    jfs2      8     16    2    open/syncd    /opt
sysdump1   sysdump   4     4     1    closed/syncd  N/A
sysdump2   sysdump   4     4     1    closed/syncd  N/A
Logical volumes sysdump1 and sysdump2 are created. Now, let's change the system dump settings. First the primary device.
# sysdumpdev -Pp /dev/sysdump1
And now the secondary.
# sysdumpdev -Ps /dev/sysdump2
Let's check if it's applied.
# sysdumpdev -l
primary              /dev/sysdump1
secondary            /dev/sysdump2
copy directory       /var/adm/ras
forced copy flag     TRUE
always allow dump    TRUE
dump compression     ON
Everything seems fine. All we have to do now is to wait for a system to crash to test our new settings. :o)
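One extra check you can do right away (my addition, not from the original write-up) is to make sure the estimated dump still fits into the new 512MB devices:
# sysdumpdev -e                          #estimated dump size in bytes
# lslv sysdump1 | grep -E "PP SIZE|PPs"  #LV size = PPs x PP size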
http://www.miljan.org/wiki/index.php/How_to_Change_Default_System_Dump_Device_in_AIX
Paging-related commands
Command | Description |
---|---|
chps | Changes paging space attributes |
lsps | Lists paging space attributes |
mkps | Creates a paging space |
rmps | Removes a paging space (and the LV on which it resides) |
swapon | Activates a paging space |
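For reference, here is roughly how these are used (my examples; the sizes are arbitrary, and paging00 is the space shown in the lsps output further down):
# lsps -s                       #summary of total paging space and percent used
# lsps -a                       #list each paging space individually
# mkps -a -n -s 4 rootvg        #create a 4-LP paging space, activate now and at every restart
# chps -s 4 paging00            #grow paging00 by 4 logical partitions
# swapon /dev/paging00          #activate an inactive paging space
# rmps paging00                 #remove it (the space must be inactive first)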
Like most modern UNIXes, AIX systems must be rebooted in order to fully deactivate paging space. The full process is as follows:
# sysdumpdev -l
primary              /dev/dumplv
secondary            /dev/sysdumpnull
copy directory       /var/adm/ras
forced copy flag     TRUE
always allow dump    TRUE
dump compression     OFF
# sysdumpdev -P -p ${new_dump_dev}    # Sets the new dump device
# lsps -a
Page Space Physical Volume Volume Group Size %Used Active Auto Type
paging00 hdisk3 rootvg 4096MB 1 yes yes lv
hd6 hdisk2 rootvg 4096MB 1 yes yes lv
Modifying default paging space
Anytime you're messing with the default paging space (other than increasing the size), you're going to have to supply a temporary default paging space. The checklist that follows shows all the steps required to reduce the size of the default paging lv, hd6.
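To give a rough feel for what "supplying a temporary default paging space" looks like (my sketch, not the author's checklist; the size is arbitrary and the exact procedure depends on the AIX level), the first moves are usually along these lines:
# mkps -a -n -s 16 rootvg    #temporary paging space (created as e.g. paging00), active now and at restart
# chps -a n hd6              #do not activate hd6 at the next restart
# lsps -a                    #verify the new space is active and hd6 shows Auto=no
After the reboot, hd6 is inactive and can be recreated at the smaller size; the checklist covers the full sequence.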