Tuesday, 26 March 2013

Filesystem


A file system is a hierarchical tree structure of files and directories. Some tasks are performed more efficiently on a file system than on each directory within the file system. For example, you can back up, move, or secure an entire file system.

File systems are associated with devices (logical volumes) represented by special files in /dev. When a file system is mounted, the logical volume and its contents are connected to a directory in the hierarchical tree structure. You can access both local and remote file systems using the mount command.

AIX supports these file system types:
JFS      Journaled File System which exists within a Logical Volume on disk
JFS2     Enhanced Journaled File System which exists within a Logical Volume on disk
CDRFS    CD-ROM File System on a Compact Disc
NFS      Network File System accessed across a network
UDF      Universal Disk Format (DVD ROM media)
GPFS     General Parallel Filesystem
SMBFS    Server Message Block Filesystem (cifs_fs, samba share)

All of the information about the file systems is centralized in the /etc/filesystems file. Most of the file system maintenance commands take their defaults from this file. The file is organized into stanzas: each stanza is named after a file system's mount point, and its contents are attribute-value pairs specifying the characteristics of that file system.

/tmp:                           <-- names the directory where the file system is normally mounted
        dev      = /dev/hd3     <-- for local mounts, identifies the block special file where the file system resides
                                    for remote mounts, it identifies the file or directory to be mounted
        vfs      = jfs2         <-- specifies the type of mount. For example, vfs=nfs
        log      = /dev/hd8     <-- full path name of the filesystem logging logical volume (only for jfs and jfs2)
        mount    = automatic    <-- used by the mount command to determine whether this file system should be mounted by default
        type     = nas          <-- several file systems can be mounted by giving the value as an argument to the -t flag (mount -t nas)
        check    = false        <-- used by the fsck command to determine the default file systems to be checked
        vol      = /tmp         <-- used by the mkfs command when initializing the label on a new file system
        free     = false        <-- it is there because of unix traditions only (It is totally ignored by any and all AIX commands)
                                    (df command in traditional UNIX would use it to determine which file systems to report)
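The stanza format above is easy to query with standard tools. A minimal sketch that extracts one stanza with awk, run against a sample copy (the path /tmp/filesystems.sample and its contents are made up for the demo; on AIX you would point it at /etc/filesystems itself):

```shell
# Create a small sample in /etc/filesystems stanza format
cat > /tmp/filesystems.sample <<'EOF'
/tmp:
        dev      = /dev/hd3
        vfs      = jfs2
        mount    = automatic

/home:
        dev      = /dev/hd1
        vfs      = jfs2
        mount    = true
EOF

# Print only the attributes of the /home stanza:
# set a flag on the stanza header, clear it on the next unindented line
awk '/^\/home:/{f=1; next} /^[^ \t]/{f=0} f' /tmp/filesystems.sample
```

The same one-liner works for any stanza name; only indented attribute lines between the chosen header and the next header are printed.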

For the option mount, these are valid entries: automatic, true, false, removable, and readonly:
automatic    fs is to be mounted at boot; this is usually used for system-defined file systems.
true         mount -a (mount all) will mount this file system.
false        mount will only occur when the file system is specified as an argument to the mount command, or the type is used for mount.

The asterisk (*) is the comment character used in the /etc/filesystems file.
To remove a file system's entry from /etc/filesystems: imfs -x -l <lvname>


System-Created File Systems in AIX
The six standard file systems in AIX Versions 5 and higher are /, /home, /usr, /proc, /tmp, and /var. Each of these file systems is always associated with a logical volume name:

Logical Volume          File System or Description
------------------------------------------------------
hd1                     /home    (users' home dir)   
hd2                     /usr     (operating system commands, libraries and application programs)
hd3                     /tmp     (temporary space for all users)
hd4                     /        (critical files for system operations, programs that complete the boot process)
hd5                     <boot logical volume>
hd6                     <primary paging space>
hd8                     <primary JFS or JFS2 log>
hd9var                  /var     (variable spool and log files)
hd10opt                 /opt     (freeware programs)
/proc                   /proc    (pseudo fs kept in memory to support threads)

------------------------

Superblock
In a JFS, the superblock is the first addressable block (and a backup at the thirty-first addressable block) on a file system. It is 4096 bytes in size. The superblock is very important because a file system cannot be mounted if the superblock is corrupted. This is why there is a secondary or backup superblock at block 31. The superblock contains the following: size of the filesystem, number of datablocks in the fs, state of the fs...

# dd count=1 bs=4k skip=31 seek=1 if=/dev/hd4 of=/dev/hd4     <--this will restore the superblock from block #31
# fsck -p <fs>                                                <--this also restores the superblock from the backup at block #31
# dumpfs /usr                                                 <--shows the superblock, i-node map, and disk map information
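The skip/seek mechanics of the dd restore can be rehearsed safely on an ordinary scratch file instead of a raw logical volume (the /tmp/disk.img path and the marker string are made up; note that on a plain file conv=notrunc is required, which the raw-device command above does not need):

```shell
# Build a 32-block (4 KB each) scratch "disk" and stamp a marker into block #31,
# standing in for the backup superblock
dd if=/dev/zero of=/tmp/disk.img bs=4k count=32 2>/dev/null
printf 'BACKUP-SUPERBLOCK' | dd of=/tmp/disk.img bs=4k seek=31 conv=notrunc 2>/dev/null

# Same shape as the real restore: read block #31 (skip=31), write it to block #1 (seek=1)
dd count=1 bs=4k skip=31 seek=1 if=/tmp/disk.img of=/tmp/disk.img conv=notrunc 2>/dev/null

# Verify: block #1 now carries the marker copied from block #31
dd if=/tmp/disk.img bs=4k skip=1 count=1 2>/dev/null | head -c 17
```

skip= counts blocks on the input side, seek= on the output side, which is why the one command can copy within the same device.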

------------------------

i-node (index node)
A file system has a fixed number of i-nodes that are located following the superblock. i-nodes contain information about files, including the location of the data on the disk. They contain all of the identifying information about files (file type, size, permissions, user/group/owner, create/modification and last access dates) except for the file name, which is stored in the directory, and the contents of the file, which are stored in the data blocks. Each file or directory has an i-node associated with it. AIX reserves a number of i-nodes for files and directories every time a file system is created, and if all the available inodes are used, no more files can be created, even if the fs has free space.

------------------------

jfslog
AIX uses a journaled file system, meaning that certain i-node information is stored in a transaction log during writes. Its real value is in maintaining the integrity of the file system. Journaled file systems enable faster system reboots after system crashes. Each volume group has a jfslog logical volume that is automatically created when the first file system is created in that volume group. The jfslog ensures the integrity of a file system by immediately writing all meta-data information to itself. Meta-data is information about the file system, such as changes to the i-nodes and the free lists. The jfslog keeps track of what is supposed to happen to the file system and whether it gets done. You are allowed to have a separate log for each filesystem.

(If a jfslog has been created manually, the logform command should be used to activate it as the jfslog for that vg.)



------------------------

Special or device files
A special file, sometimes called a device file, is associated with a particular hardware device or other resource of the computer system. AIX uses them to provide file I/O access to character and block device drivers. Special files are distinguished from other files by a "c" or "b" stored in their i-nodes, and they are located under the /dev directory. Character and block I/O requests are performed by issuing a read or write request on the device file:
- Character device file: character devices (tape drives, tty devices) perform sequential, character-oriented I/O.
- Block device file: block devices (such as disks) are capable of random, block-oriented I/O.

mknod: creates new special files (the file type (c or b) must be specified; the major and minor numbers are written into the i-node)

------------------------

Directories
The directory entry contains an index number associated with the file's i-node, the file name....
Every well formed directory contains the entries: . and ..
- .  : points to the i-node of the directory itself
- .. : points to the i-node of the parent directory

Because directory entries contain file names paired with i-nodes, every directory entry is a link.

------------------------

Links
A link is a connection between a file name and an i-node. The i-node number actually identifies the file, not the file name. By using links, any i-node or file can be known by many different names.
-hard link: Hard links can be created only between files that are in the same fs.
(when the last hard link is removed , the i-node and its data are deleted)
           
# ls -li: (bello is a hard link, and link count 2 shows it)
4 -rw-r--r--    2 root     system            0 Jul  8 23:27 bello

-symbolic link: a separate file that points to another path name, so it can reference data in other filesystems as well.

# ls -li: (bello is a sym. link, and  the first character "l" shows this)
lrwxrwxrwx    1 root     system           15 Jul  8 23:30 bello -> /test_fs1/hello
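The two link types can be reproduced in any scratch directory; a minimal sketch (the /tmp/linkdemo path and file names are made up):

```shell
mkdir -p /tmp/linkdemo
echo hello > /tmp/linkdemo/hello
ln -f  /tmp/linkdemo/hello /tmp/linkdemo/bello   # hard link: same i-node, link count rises to 2
ln -sf hello /tmp/linkdemo/sello                 # symbolic link: its own i-node, stores the target name
ls -li /tmp/linkdemo                             # hello and bello share one i-node number; sello shows "l"
```

Removing hello afterwards would leave bello (and the data) intact, while sello would dangle, since it refers to the name, not the i-node.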

------------------------

NameFS (Name File System)
A NameFS is a pseudo-file system that lets you access a directory through two different path names. With a NameFS you can create a second path and you don't have to change permissions, copy, move, rename or even touch the original file system.

You could use a Name File System to set up a directory with:
-an alternate path (so it’s like a shortcut)
-different permissions (e.g. when some applications or users should have read-only access)
-other mount attributes such as Direct I/O (dio) or Concurrent I/O (cio)

How to setup NameFS:

1. mkdir -p /shortcut                                <--create a dir, which will be the mount point for the new Name file System

2.:
mount -v namefs /some/long/path /shortcut            <--with this you get access to the files via 2 paths
mount -v namefs -o ro /data/report /data_reports     <--with this you can make a read-only mount of a dir
mount -v namefs -o cio /db2/W01/redo /db2redo        <--with this you can mount a dir with CIO to improve I/O

------------------------

crfs ...                       creates a file system (crfs creates lv as well; mkfs will create an fs over an already created lv)
crfs -v jfs2 -d W01origlogAlv -m /oracle/W01/origlogA -A yes -p rw -a options=cio -a agblksize=512
               
mount                          displays information about all the currently mounted file systems
mount dir_name                 mount the file system according to the entry in /etc/filesystems
mount lv_name dir_name         mount the file system from a different lv than the one in /etc/filesystems
umount dir_name                umount the filesystem
mount -a (or mount all)        mounts all the file systems at one time

lsfs                           displays the characteristics of file systems
lsfs -q                        more detailed info about the fs (lv size...) (it queries the superblock)
                               (-v: lists file systems of the given vfs type (jfs2, nfs); -u: lists file systems in the given mount group)
rmfs /test                     removes a file system
rmfs -r /test                  removes the mount point also
chfs -a size=+5G /shadowtemp   increases the fs by 5G (-5G shrinks it; size=5G sets the fs size to 5GB)
                               (if fs was reduced but space is not freed up defragfs could help)
chfs -a options='rw' /shadow   after this lsfs shows rw (rw is the default anyway)

imfs -x -l <lvname>            removes a file system's entry from /etc/filesystems


fsck                           checks file system consistency (should not run on a mounted file system)
defragfs /home                 improves or reports the status of contiguous space within a file system

ls -ldi <dir>                  shows inode number in the first column
istat /etc/passwd              display information regarding a particular inode (last updated, modified, accessed)
                               (updated: change to the inode itself (owner, permissions...); modified: change to the content of the file or dir)

df                             monitor file system growth
du dir_name                    (disk usage), to find which files are taking up the most space
du -sm * | sort -rn | head     lists the 10 largest dirs/files in MB (du -sk * is the same in KB)
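The du | sort | head pattern can be tried on a throwaway directory (the /tmp/dudemo path and file names are made up; dd just manufactures files of known size):

```shell
mkdir -p /tmp/dudemo
dd if=/dev/zero of=/tmp/dudemo/big.dat   bs=1024 count=200 2>/dev/null   # ~200 KB
dd if=/dev/zero of=/tmp/dudemo/small.dat bs=1024 count=10  2>/dev/null   # ~10 KB

# Largest entries first, sizes in KB; head limits the list to 10
( cd /tmp/dudemo && du -sk * | sort -rn | head )
```

sort -rn sorts the first column numerically in reverse, so the biggest space consumer always lands on the first line.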

skulker                        cleans up file systems by removing unwanted or obsolete files
fileplace <filename>           displays the placement of file blocks within logical or physical volumes; it shows whether a file is fragmented

fuser /etc/passwd              lists the process numbers of local processes using the /etc/passwd file
fuser -cux /var                shows which processes are using the given filesystem
fuser -cuxk /var               it will kill the above processes   
fuser -dV /tmp                 shows deleted but still open files (inodes) with the pids of the processes holding them open
                               (-V: verbose, shows the size of the files as well)
                               if we rm a file while a process holds it open, its space will not be freed up.
                               solution: kill the process, wait for the process to finish, or reboot the system
---------------------------------------

HOW TO FIND FILES AFTER A SPECIFIC DATE:

touch mmddhhmm filename        creates a reference file with a specific timestamp (modern syntax: touch -t MMDDhhmm filename)
find /var -xdev -newer filename -ls
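A portable rehearsal of the reference-file trick (the /tmp/newerdemo path and file names are made up; touch -t is used as it is accepted everywhere):

```shell
mkdir -p /tmp/newerdemo
touch -t 01010000 /tmp/newerdemo/marker        # reference file: Jan 1, 00:00 of this year
echo x > /tmp/newerdemo/recent.txt             # created just now, so newer than the marker

# List everything modified after the marker, staying on this filesystem
find /tmp/newerdemo -xdev -newer /tmp/newerdemo/marker -ls
```

Only recent.txt (and the directory itself, which was just modified) is listed; the marker is never newer than itself.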

---------------------------------------

CREATING FS with commands:

1. mkvg -y oravg -s 128 hdiskpower62                                 <--creates vg with 128MB pp
2. mklv -y oraloglv -t jfs2log -a e -r n oravg 1 hdiskpower62        <--creates loglv (-a: allocation (e:edge), -r: relocatable (n:no))
3. mklv -y oralv -t jfs2 -a e oravg 500 hdiskpower62                 <--creates lv (-a: allocation (e:edge))
4. crfs -v jfs2 -a logname=oraloglv -d oralv -m /ora                 <--creates fs with specified loglv (set auto mount if needed)
5. mount /ora                                                        <--mount fs
6. chown -R oracle.dba /ora                                          <--set owner/permissions

---------------------------------------

EXTENDING FS with commands:

1. extendvg oravg hdiskpower63                                       <--extends vg with hdisk
2. chlv -x 1024 oralv                                                <--set the maximum number of logical partitions if needed
3. extendlv oralv 20 hdiskpower63                                    <--extends the lv by 20 lps on the specified hdisk
4. lslv -m oralv                                                     <--check allocations if needed
5. lsfs -q /ora                                                      <--shows new size of the lv (copy value of 'lv size')
6. chfs -a size=146800640 /ora                                       <--use the 'lv size' value to enlarge fs
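The 'lv size' reported by lsfs -q and the size= value expected by chfs are both in 512-byte blocks, so the example value in step 6 can be sanity-checked with shell arithmetic (dividing by 2 converts blocks to KB):

```shell
# 146800640 blocks * 512 bytes = ? GB
blocks=146800640
echo "$(( blocks / 2 / 1024 / 1024 )) GB"    # blocks/2 = KB, then /1024 = MB, /1024 = GB
```

The result is 70 GB, i.e. the chfs in step 6 grows /ora to the full new size of the lv.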

--------------------------------------------

HOW TO CORRECT CORRUPTED FS:
1. fsck /fs1                <--checks fs consistency (a mounted fs can only be checked; repairing requires it to be unmounted)
2. umount /fs1              <--umounts the fs

If umount fails:
fuser -cux /fs1             <--shows processes running in the fs
fuser -kcux  /fs1           <--kills the above processes (kill -9 works as well) (inittab/respawn processes will come back again)
umount /fs1

3. fsck -y /fs1             <--corrects errors

--------------------------------------------

CANNOT UNMOUNT FILE SYSTEMS:

-files are open or a user is using a directory in the fs:
fuser                       <--determines the PIDs for all processes that have open references within the fs
kill                        <--these processes can be killed

-loaded kernel extension:
genkex                      <--reports all loaded kernel extension

-file systems are still mounted within that file system:
umount                      <--unmount the embedded (nested) file systems first!

-you can check processes using it with lsof:
lsof /home                  <--it shows the pids which should be terminated (kill <pid>)

--------------------------------------------

HOW TO COPY A FILESYSTEM TO A NEW VG:
the cplv command is useful as it copies at pp (lp) level, not file by file (with many files cplv is faster)

1. create vg if needed (loglv as well)                    <--we created bbvg, with log device: bbloglv
2. umount /domo                                           <--umount fs that you want to copy (/domo is the fs, domolv is its lv)
3. cplv -v bbvg domolv                                    <--copy domolv to bbvg (it will create a new lv there like fslv01)
4. chfs -a dev=/dev/fslv01 -a log=/dev/bbloglv /domo      <--points the fs to the new lv and to the log device created in the new vg
(chfs -a dev=/dev/fslv01 -a log=INLINE /domo              <--if we have inline log)
5. fsck -p /dev/fslv01                                    <--ensure fs integrity
6. mount /domo                                            <--mount fs

--------------------------------------------

HOW TO COPY A FILESYSTEM (in the same vg):

1. create temporary fs   
    extendvg macavg hdiskX hdiskY                          <--add storage to vg
    mkdir /maca_u10/macatmp                                <--create tmp mount point
    chown oramaca.dbamaca /maca_u10/macatmp                <--set rights   
    mklv -t jfs2 -y macatmplv macavg 1596 hdiskX hdiskY    <--create temp lv (-y: new lv name, 1596: number of lps)
                                                           (if no jfslog, then mklv -t jfs2log..., and logform)
    crfs -v jfs2 -d macatmplv -m /maca_u10/macatmp         <--create temp fs    (-d: device name, -m: mount point)
    check if everything is identical                       <--df, lsfs, mount

2. copy data to temporary fs       
    umount /maca_u10/macaoradata
    mount -r /maca_u10/macaoradata                         <--mount it read-only

    a.
    cp -prh /maca_u10/oradata/* /maca_u10/macatmp/         <--this is good if small number of files need to be copied
   
    or
   
    b.
    cd /maca_u10/macaoradata
    tar cvf - . | (cd <tempnewfs> && tar xvf -)            <--it copies everything from here to tempnewfs (file size limit? 6GB OK)

    ls -lR | wc -l                                         <--check if everything is identical, for both dir

3. rename lv and fs
    umount /maca_u10/oradata   
    umount /maca_u10/macatmp   
    chlv -n macaoradatalv_o macaoradatalv                  <--rename oradatalv to old (chlv -n newlv oldlv)
    chfs -m /maca_u10/oradata_o /maca_u10/oradata          <--rename oradata fs to old fs (chfs -m newmnt oldmnt)
    chlv -n macaoradatalv macatmplv                        <--rename tmplv to oradatalv   
    chfs -m /maca_u10/oradata /maca_u10/macatmp            <--rename tmp fs to oradata fs   
    fsck /maca_u10/oradata                                 <--fsck, before mount
    mount /maca_u10/oradata   
    chown oramaca.dbamaca /maca_u10/oradata                <--check (set) rights (if needed)   

4. create mirror   
    rmfs /maca_u10/oradata_o                               <--remove old fs (if space needed)
    rmdir /maca_u10/macatmp; rmdir /maca_u10/oradata_o     <--remove dirs: macatmp, oradata_o   
    mklvcopy macaoradatalv 2 hdiskX hdiskY                 <--mirror oradatalv (2: 2 copies)
    syncvg -l macaoradatalv                                <--synchronization

--------------------------------------------

BACKUP/RECREATE/RESTORE FILESYSTEM:

1. cd /filesystem
2. tar -cvf /tmp/filesystem.tar ./*                        <--it creates a backup of all the files in the fs
3. cd /
4. umount /filesystem
5. mkfs /filesystem
6. mount /filesystem
7. cd /filesystem
8. tar -xvf /tmp/filesystem.tar > /dev/null                <--restores the data (output redirected, as displaying is time consuming)
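The backup/recreate/restore round trip can be rehearsed with scratch directories standing in for the real filesystem (the /tmp/fsdemo paths and file contents are made up; mkfs is skipped since the demo uses plain directories):

```shell
mkdir -p /tmp/fsdemo /tmp/fsdemo.restore
echo "important data" > /tmp/fsdemo/data.txt

( cd /tmp/fsdemo         && tar -cf /tmp/fsdemo.tar . )   # back up everything relative to the fs root
( cd /tmp/fsdemo.restore && tar -xf /tmp/fsdemo.tar )     # restore into the (re-created) fs

cmp /tmp/fsdemo/data.txt /tmp/fsdemo.restore/data.txt && echo "restore OK"
```

Archiving with relative paths (cd into the fs first) is what makes the archive restorable into any mount point.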

--------------------------------------------

CHANGING THE LOG LOGICAL VOLUME:
If an fs is write-intensive, the log logical volume can cause an I/O bottleneck if it is placed on the same disk as the data.
(e.g. data-fs is located on hdisk2)

1. umount /data-fs                                         <--umount the fs for which you want to create the new log logical volume
2. mklv -t jfs2log -y datafsloglv datavg 1 hdisk1          <--create a new log logical volume
3. logform /dev/datafsloglv                                <--format the log
4. chfs -a log=/dev/datafsloglv /data-fs                   <--it modifies /etc/filesystems to contain the new settings
5. getlvcb -AT datalv                                      <--just for checking if the lvcb is updated
6. mount /data-fs                                          <--mount back the changed filesystem

--------------------------------------------

REMOVING A FILE WITH SPECIAL CHARACTERS:

1. find inode number
root@bb_lpar: /tmp/bb # ls -i
   49                <--the file name consists of special (non-printing) characters, so it appears empty
   35 commands              
   47 lps.txt


2.  remove file by inode number
root@bb_lpar: /tmp/bb # find . -inum 49 -exec rm '{}' \;
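The same technique works for any name that is awkward to pass to rm; a minimal rehearsal (the /tmp/inodedemo path and the file name are made up):

```shell
mkdir -p /tmp/inodedemo
touch -- '/tmp/inodedemo/-weird name'            # leading dash + space: hard to rm by name

ls -i /tmp/inodedemo                             # note the inode number in the first column
inum=$(ls -i /tmp/inodedemo | awk '{print $1; exit}')

# Remove the file by inode number instead of by name
find /tmp/inodedemo -inum "$inum" -exec rm '{}' \;
```

Because find hands rm a path starting with the directory prefix, the leading dash in the name can no longer be mistaken for an option.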

--------------------------------------------

FILESYSTEM CLEANUP HINTS:

find large files:
find . -xdev -size +4000000c -exec ls -l {} \;             <--it will list files larger than 4MB in the fs
find . -type f | xargs ls -s | sort -rn | head             <--the 10 largest files (if another fs is mounted underneath, it searches there too)
find . -type f -size +10000 | while read X ; do du -sm "$X" ; done | sort -n | tail -n 15        <--the 15 largest files
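A quick rehearsal of the -size test on a throwaway directory (the /tmp/bigdemo path and file names are made up; dd manufactures files of known size):

```shell
mkdir -p /tmp/bigdemo
dd if=/dev/zero of=/tmp/bigdemo/huge.dat bs=1024 count=5000 2>/dev/null   # ~5 MB
dd if=/dev/zero of=/tmp/bigdemo/tiny.dat bs=1024 count=1    2>/dev/null   # 1 KB

# +4000000c = larger than 4,000,000 bytes ('c' selects byte units); only huge.dat matches
find /tmp/bigdemo -xdev -size +4000000c -exec ls -l {} \;
```

Without the trailing c, find would count 512-byte blocks instead of bytes, which is a frequent source of surprise.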


/etc:

/etc/perf/daily/                                           <--xmdaily logs can be removed if not needed
                                                           (can be removed from inittab and xm processes can be killed)

who /etc/security/failedlogin                              <--lists failed logins
> /etc/security/failedlogin                                <--clears that file


/usr:

/usr/websm/pc_client                                       <--windows, exe files can be removed


/var:

/var/adm/wtmp:
who /var/adm/wtmp                                          <--shows the contents of that file
/usr/sbin/acct/fwtmp < /var/adm/wtmp | tail -5000 > /tmp/wtmp.asc   <--converts wtmp to ascii, saves last 5000 lines
/usr/sbin/acct/fwtmp -ic < /tmp/wtmp.asc > /var/adm/wtmp            <--converts back to original format
rm /tmp/wtmp.asc                                           <--delete the ascii file


/var/adm/cron/log:
> /var/adm/cron/log                                        <--this can be cleared
   
/var/spool/lpd:
stopsrc -s qdaemon                                         <--stops qdaemon
rm /var/spool/lpd/qdir/*                                   <--clears dir
rm /var/spool/lpd/stat/*                       
rm /var/spool/qdaemon/*
startsrc -s qdaemon                                        <--starts qdaemon

/var/spool/mail                                            <--unneeded mail files under this dir can be cleared as well

/var/adm/sulog                                             <--this file can be reduced (cleared) as well

--------------------------

Volume Group

VOLUME GROUP
When you install a system, the first volume group (VG) is created. It is called the rootvg. Your rootvg volume group is a base set of logical volumes required to start the system. It includes paging space, the journal log, boot data, and dump storage, each on its own separate logical volume.

A normal VG is limited to 32512 physical partitions. (32 physical volumes, each with 1016 partitions)
You can change it with: chvg -t4 bbvg (the factor is 4, which means: maximum partitions per PV: 4064 (instead of 1016), max disks: 8 (instead of 32))
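The chvg -t factor trades PVs for PPs: the per-VG total stays at 32512 while the per-PV limit is multiplied and the disk limit divided. A small arithmetic sketch for a normal VG (factor value 4 as in the example above):

```shell
t=4                                       # factor passed to chvg -t
echo "max PPs per PV: $(( 1016 * t ))"    # 1016 * 4 = 4064
echo "max PVs:        $(( 32 / t ))"      # 32 / 4   = 8
echo "total PPs:      $(( 1016 * t * 32 / t ))"   # product is unchanged: 32512
```

The same relation holds for big VGs, just starting from 128 PVs instead of 32.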


How do I know if my volume group is normal, big, or scalable?
Run the lsvg command on the volume group and look at the value for MAX PVs. The value is 32 for normal, 128 for big, and 1024 for scalable volume group.

VG type     Maximum PVs    Maximum LVs    Maximum PPs per VG    Maximum PP size
Normal VG     32              256            32,512 (1016 * 32)      1 GB
Big VG        128             512            130,048 (1016 * 128)    1 GB
Scalable VG   1024            4096           2,097,152               128 GB


If a physical volume is part of a volume group, it contains 2 additional reserved areas. One area contains both the VGSA and the VGDA, and it starts within the first 128 reserved sectors (blocks) of the disk. The other area is at the end of the disk, and is reserved as a relocation pool for bad blocks.

VGDA (Volume Group Descriptor Area)
It is an area on the hard disk (PV) that contains information about the entire volume group. There is at least one VGDA per physical volume, one or two copies per disk. It contains the physical volume list (PVIDs), the logical volume list (LVIDs), and the physical partition map (which maps lps to pps).

# lqueryvg -tAp hdisk0                                <--look into the VGDA (-A:all info, -t: tagged, without it only numbers)
Max LVs:        256
PP Size:        27                                    <--exponent of 2: 2^7 = 128MB
Free PPs:       698
LV count:       11
PV count:       2
Total VGDAs:    3
Conc Allowed:   0
MAX PPs per PV  2032
MAX PVs:        16
Quorum (disk):  0
Quorum (dd):    0
Auto Varyon ?:  1
Conc Autovaryo  0
Varied on Conc  0
Logical:        00cebffe00004c000000010363f50ac5.1   hd5 1       <--1: count of mirror copies (00cebff...c5 is the VGID)
                00cebffe00004c000000010363f50ac5.2   hd6 1
                00cebffe00004c000000010363f50ac5.3   hd8 1
                ...
Physical:       00cebffe63f500ee                2   0            <--2:VGDA count 0:code for its state (active, missing, removed)
                00cebffe63f50314                1   0            (The sum of VGDA count should be the same as the Total VGDAs)
Total PPs:      1092
LTG size:       128
...
Max PPs:        32512
-----------------------

VGSA (Volume Group Status Area)
The VGSAs are always present, but are used only with mirroring. They track the state of mirror copies, i.e. whether they are synchronized or stale. It is a per-disk structure, but stored twice on each disk.


Quorum
Non-rootvg volume groups can be taken offline and brought online by a process called varying on and varying off a volume group. The system checks the availability of all VGDAs for a particular volume group to determine if a volume group is going to be varied on or off.
When attempting to vary on a volume group, the system checks for a quorum of the VGDA to be available. A quorum is equal to 51 percent or more of the VGDAs available. If it can find a quorum, the VG will be varied on; if not, it will not make the volume group available.
Turning off the quorum does not allow a varyonvg without a quorum, it just prevents the closing of an active vg when it loses its quorum. (so a forced varyon may be needed: varyonvg -f VGname)

Turning it off (chvg -Qn VGname) takes effect immediately.
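The 51% rule can be made concrete with the usual two-disk layout, where one disk carries 2 VGDAs and the other 1 (3 in total, as in the lqueryvg output above). A rough arithmetic sketch:

```shell
total=3                                   # VGDAs in the VG: 2 on disk1 + 1 on disk2

left=2                                    # disk2 fails: its single VGDA is lost
[ $(( left * 100 / total )) -ge 51 ] && echo "disk2 lost: quorum held"

left=1                                    # disk1 fails instead: 2 VGDAs are lost
[ $(( left * 100 / total )) -ge 51 ] || echo "disk1 lost: quorum lost"
```

This is why, with two disks, losing the "wrong" disk (the one holding two VGDAs) takes the whole VG offline while losing the other does not.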


LTG (Logical Track Group)
LTG is the maximum disk I/O transfer size for a volume group.
At 5.3 and 6.1 AIX dynamically sets LTG size (calculated at each volume group activation).
LTG size can be changed with: varyonvg -M<LTG size>
(The chvg -L flag has no effect on volume groups created on 5.3 or later (it was used on 5.2).)
To display the LTG size of a disk: /usr/sbin/lquerypv -M <hdisk#>


lsvg                      lists the volume groups that are on the system
lsvg -o                   lists all volume groups that are varied on
lsvg -o | lsvg -ip        lists pvs of online vgs
lsvg rootvg               gives details about the vg (lsvg -L <vgname> does not wait for the lock release (useful during mirroring))
lsvg -l rootvg            info about all logical volumes that are part of a vg
lsvg -M rootvg            lists all PV, LV, PP details of a vg (PVname:PPnum LVname: LPnum :Copynum)
lsvg -p rootvg            display all physical volumes that are part of the volume group
lsvg -n <hdisk>           shows vg infos, but it is read from the VGDA on the specified disk (it is useful to compare it with different disks)

mkvg -s 2 -y testvg hdisk13    creates a volume group
    -s                    specify the physical partition size
    -y                    indicate the name of the new vg

chvg                      changes the characteristics of a volume group
chvg -u <vgname>          unlocks the volume group (if a command core dumping, or the system crashed and vg is left in locked state)
                          (Many LVM commands place a lock into the ODM to prevent other commands working on the same time.)
extendvg rootvg hdisk7    adds hdisk7 to rootvg (-f forces it: extendvg -f ...)
reducevg rootvg hdisk7    tries to delete hdisk7 (the vg must be varied on) (reducevg -f ... :force it)
                          (it will fail if the vg contains open logical volumes)
reducevg datavg <pvid>    reducevg command can use pvid as well (it is useful, if disk already removed from ODM, but VGDA still exist)


syncvg                    synchronizes stale physical partitions (varyonvg is better, because it first reestablishes the reservation, then syncs in the background)
varyonvg rootvg           makes the vg available (-f force a vg to be online if it does not have the quorum of available disks)
                          (varyonvg acts as a self-repair program for VGDAs, it does a syncvg as well)
varyoffvg rootvg          deactivate a volume group
mirrorvg -S P01vg hdisk1  mirroring rootvg to hdisk1 (checking: lsvg P01vg | grep STALE) (-S: background sync)
                          (mirrorvg -m rootvg hdisk1 <--m makes an exact copy, pp mapping will be identical, it is advised this way)
unmirrorvg testvg1 hdisk0 hdisk1 remove mirrors on the vg from the specified disks 

exportvg avg              removes the VG's definition from the ODM and /etc/filesystems (for ODM problems an exportvg followed by importvg will fix it)
importvg -y avg hdisk8    makes the previously exported vg known to the system (hdisk8, is any disk belongs to the vg)

reorgvg                   rearranges physical partitions within the vg to conform with the placement policy (outer edge...) for the lv.
                          (For this 1 free pp is needed, and the relocatable flag for lvs must be set to 'y': chlv -r...)

getlvodm -j <hdisk>       get the vgid for the hdisk from the odm
getlvodm -t <vgid>        get the vg name for the vgid from the odm
getlvodm -v <vgname>      get the vgid for the vg name from the odm

getlvodm -p <hdisk>       get the pvid for the hdisk from the odm
getlvodm -g <pvid>        get the hdisk for the pvid from the odm
lqueryvg -tcAp <hdisk>    get all the vgid and pvid information for the vg from the vgda (directly from the disk)
                          (you can compare the disk with odm: getlvodm <-> lqueryvg)


synclvodm <vgname>        synchronizes or rebuilds the lvcb, the device configuration database, and the vgdas on the physical volumes
redefinevg                it helps regain the basic ODM information if it is corrupted (redefinevg -d hdisk0 rootvg)
readvgda hdisk40          shows details from the disk

Physical Volume states (and quorum):
lsvg -p VGName            <--shows pv states (not devices states!)
    active                <--during varyonvg disk can be accessed
    missing               <--during varyonvg the disk cannot be accessed + quorum is available
                          (after disk repair varyonvg VGName will put in active state)
    removed               <--no disk access during varyonvg + quorum is not available --> you issue varyonvg -f VGName
                          (after force varyonvg in the above case, PV state will be removed, and it won't be used for quorum)
                          (to put back in active state, first we have to tell the system the failure is over:)
                          (chpv -va hdiskX, this defines the disk as active, and after that varyonvg will synchronize)


---------------------------------------

Mirroring rootvg (i.e after disk replacement):
1. disk replaced -> cfgmgr           <--it will find the new disk (i.e. hdisk1)
2. extendvg rootvg hdisk1            <--sometimes extendvg -f rootvg...
(3. chvg -Qn rootvg)                 <--only if quorum setting has not yet been disabled, because this needs a restart
4. mirrorvg -s rootvg                <--add mirror for rootvg (-s: synchronization will not be done)
5. syncvg -v rootvg                  <--synchronize the new copy (lsvg rootvg | grep STALE)
6. bosboot -a                        <--we changed the system so create boot image (-a: create complete boot image and device)
                                     (hd5 is mirrored, so no need to do it for each disk, i.e. bosboot -ad hdisk0)
7. bootlist -m normal hdisk0 hdisk1  <--set normal bootlist
8. bootlist -m service hdisk0 hdisk1 <--set bootlist when we want to boot into service mode
(9. shutdown -Fr)                    <--this is needed if quorum has been disabled
10.bootinfo -b                       <--shows the disk which was used for boot
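The whole sequence can be collected into a dry-run helper that just prints the commands for review (the `mirror_rootvg_plan` name is mine; the quorum/reboot steps are left out since they only apply when quorum was still enabled):

```shell
# Dry-run sketch of the rootvg mirroring steps above (echoes only;
# run the printed commands one by one on the AIX system):
mirror_rootvg_plan() {        # $1: existing disk, $2: newly added disk
  echo "extendvg rootvg $2"
  echo "mirrorvg -s rootvg"   # -s: defer synchronization
  echo "syncvg -v rootvg"     # synchronize the new copy
  echo "bosboot -a"           # rebuild the boot image
  echo "bootlist -m normal $1 $2"
  echo "bootlist -m service $1 $2"
}

mirror_rootvg_plan hdisk0 hdisk1
```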

---------------------------------------

Export/Import:
1. node A: umount all fs -> varyoffvg myvg
2. node A: exportvg myvg            <--ODM cleared
3. node B: importvg -y myvg hdisk3  <-- -y: vgname, if omitted a new vg will be created (if needed varyonvg -> mount fs)
if a filesystem with the same name already exists:
    1. umount the old one and mount the imported one with: mount -o log=/dev/loglv01 -V jfs2 /dev/lv24 /home/michael
    (these details have to be added to the mount command; they can be retrieved from the LVCB: getlvcb lv24 -At)

    2. vi /etc/filesystems, create a second stanza for the imported filesystems with a new mountpoint.
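The move between the two nodes can likewise be sketched as a dry-run helper (the `vg_move_plan` name is mine; it only echoes the commands from the steps above):

```shell
# Dry-run sketch of the export/import move above (echoes only):
vg_move_plan() {               # $1: VG name, $2: one of its disks on node B
  echo "node A: varyoffvg $1"  # after unmounting all of its filesystems
  echo "node A: exportvg $1"   # clears the VG from node A's ODM
  echo "node B: importvg -y $1 $2"
  echo "node B: varyonvg $1"   # then mount the filesystems if needed
}

vg_move_plan myvg hdisk3
```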

---------------------------------------

VG problems with ODM:

if varyoff possible:
1. write down the VG's name, major number, and one of its disks
2. exportvg VGname
3. importvg -V MajorNum -y VGname hdiskX

if varyoff not possible:
1. write down the VG's name, major number, and one of its disks
2. export the vg by the backdoor, using: odmdelete
3. re-import vg (may produce warnings, but works)
(it is not necessary to umount fs or stop processes)

---------------------------------------

Changing factor value (chvg -t) of a VG:

A Normal or a Big VG has the following limitations after creation:
MAX PPs per VG = MAX PVs * MAX PPs per PV

                   Normal       Big
MAX PPs per VG:    32512        130048
MAX PPs per PV:    1016         1016
MAX PVs:           32           128

If we want to extend the vg with a disk which is so large that it would hold more than 1016 PPs, we receive:
root@bb_lpar: / # extendvg bbvg hdisk4
0516-1162 extendvg: Warning, The Physical Partition Size of 4 requires the
        creation of 1024 partitions for hdisk4.  The limitation for volume group
        bbvg is 1016 physical partitions per physical volume.  Use chvg command
        with -t option to attempt to change the maximum Physical Partitions per
        Physical volume for this volume group.

If we change the factor value of the VG, then extendvg will be possible:
root@bb_lpar: / # chvg -t 2 bbvg
0516-1164 chvg: Volume group bbvg changed.  With given characteristics bbvg
        can include up to 16 physical volumes with 2032 physical partitions each.

Calculation:
Normal VG: 32/factor  = new value of MAX PVs
Big VG:    128/factor = new value of MAX PVs

-t   PPs per PV        MAX PV (Normal)    MAX PV (Big)
1    1016              32                 128
2    2032              16                 64
3    3048              10                 42
4    4064              8                  32
5    5080              6                  25
...

"chvg -t" can be used online either increasing or decreasing the value of the factor.

---------------------------------------

Changing Normal VG to Big VG:

If you have reached the MAX PV limit of a Normal VG and adjusting the factor (chvg -t) is no longer possible, you can convert it to a Big VG.
It is an online activity, but there must be free PPs on each physical volume, because the VGDA will be expanded on all disks:

root@bb_lpar: / # lsvg -p bbvg
bbvg:
PV_NAME           PV STATE          TOTAL PPs   FREE PPs    FREE DISTRIBUTION
hdisk2            active            511         2           02..00..00..00..00
hdisk3            active            511         23          00..00..00..00..23
hdisk4            active            1023        0           00..00..00..00..00

root@bb_lpar: / # chvg -B bbvg
0516-1214 chvg: Not enough free physical partitions exist on hdisk4 for the
        expansion of the volume group descriptor area.  Migrate/reorganize to free up
        2 partitions and run chvg again.

In this case we have to migrate 2 PPs from hdisk4 to hdisk3 (so 2 PPs will be freed up on hdisk4):

root@bb_lpar: / # lspv -M hdisk4
hdisk4:1        bblv:920
hdisk4:2        bblv:921

hdisk4:3        bblv:922
hdisk4:4        bblv:923
hdisk4:5        bblv:924
...

root@bb_lpar: / # lspv -M hdisk3
hdisk3:484      bblv:3040
hdisk3:485      bblv:3041
hdisk3:486      bblv:3042
hdisk3:487      bblv:1
hdisk3:488      bblv:2
hdisk3:489-511

root@bb_lpar: / # migratelp bblv/920 hdisk3/489
root@bb_lpar: / # migratelp bblv/921 hdisk3/490

root@bb_lpar: / # lsvg -p bbvg
bbvg:
PV_NAME           PV STATE          TOTAL PPs   FREE PPs    FREE DISTRIBUTION
hdisk2            active            511         2           02..00..00..00..00
hdisk3            active            511         21          00..00..00..00..21
hdisk4            active            1023        2           02..00..00..00..00

If we try changing to Big VG again, now it is successful:
root@bb_lpar: / # chvg -B bbvg
0516-1216 chvg: Physical partitions are being migrated for volume group
        descriptor area expansion.  Please wait.
0516-1164 chvg: Volume group bbvg2 changed.  With given characteristics bbvg2
        can include up to 128 physical volumes with 1016 physical partitions each.

If you check again, the freed-up PPs have been used:
root@bb_lpar: / # lsvg -p bbvg
bbvg:
PV_NAME           PV STATE          TOTAL PPs   FREE PPs    FREE DISTRIBUTION
hdisk2            active            509         0           00..00..00..00..00
hdisk3            active            509         17          00..00..00..00..17
hdisk4            active            1021        0           00..00..00..00..00

---------------------------------------

Changing Normal (or Big) VG to Scalable VG:

If you have reached the MAX PV limit of a Normal or a Big VG and adjusting the factor (chvg -t) is no longer possible, you can convert that VG to a Scalable VG. A Scalable VG allows a maximum of 1024 PVs and 4096 LVs, and a big advantage is that the maximum number of PPs applies to the entire VG and is no longer defined on a per-disk basis.

!!! Converting to a Scalable VG is an offline activity (varyoffvg), and there must be free PPs on each physical volume, because the VGDA will be expanded on all disks.

root@bb_lpar: / # chvg -G bbvg
0516-1707 chvg: The volume group must be varied off during conversion to
        scalable volume group format.

root@bb_lpar: / # varyoffvg bbvg
root@bb_lpar: / # chvg -G bbvg
0516-1214 chvg: Not enough free physical partitions exist on hdisk2 for the
        expansion of the volume group descriptor area.  Migrate/reorganize to free up
        18 partitions and run chvg again.


After migrating some LPs to free up the required PPs (in this case 18), changing to Scalable VG succeeds:
root@bb_lpar: / # chvg -G bbvg
0516-1224 chvg: WARNING, once this operation is completed, volume group bbvg
        cannot be imported into AIX 5.2 or lower versions. Continue (y/n) ?
...
0516-1712 chvg: Volume group bbvg changed.  bbvg can include up to 1024 physical volumes with 2097152 total physical partitions in the volume group.
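As with the procedures above, the conversion can be kept as a dry-run helper that only prints the commands (the `to_scalable_plan` name is mine; remember this conversion requires the VG to be varied off):

```shell
# Dry-run sketch of the Normal/Big -> Scalable conversion above (echoes only):
to_scalable_plan() {           # $1: VG name
  echo "varyoffvg $1"          # chvg -G refuses to run on a varied-on VG
  echo "chvg -G $1"
  echo "varyonvg $1"
}

to_scalable_plan bbvg
```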

---------------------------------------

0516-008 varyonvg: LVM system call returned an unknown error code (2). 
solution: export LDR_CNTRL=MAXDATA=0x80000000@DSA (check /etc/environment if LDR_CNTRL has a value, which is causing the trouble)

---------------------------------------

If VG cannot be created:
root@aix21c: / # mkvg -y tvg hdisk29
0516-1376 mkvg: Physical volume contains a VERITAS volume group.
0516-1397 mkvg: The physical volume hdisk29, will not be added to
the volume group.
0516-862 mkvg: Unable to create volume group.
root@aixc: / # chpv -C hdisk29        <--clears the owning volume manager from the disk; after this mkvg was successful

---------------------------------------

root@aix1: /root # importvg -L testvg -n hdiskpower12
0516-022 : Illegal parameter or structure value.
0516-780 importvg: Unable to import volume group from hdiskpower12.


For me the solution was:
there was no PVID on the disk; after adding it (chdev -l hdiskpower12 -a pv=yes) it was OK
--------------------------------------- 


Reorgvg log files, and how it works:

reorgvg activity is logged in lvmcfg:
root@bb_lpar: / # alog -ot lvmcfg | tail -3
[S 17039512 6750244 10/23/11-12:39:05:781 reorgvg.sh 580] reorgvg bbvg bb1lv
[S 7405650 17039512 10/23/11-12:39:06:689 migfix.c 168] migfix /tmp/.workdir.9699494.17039512_1/freemap17039512 /tmp/.workdir.9699494.17039512_1/migrate17039512 /tmp/.workdir.9699494.17039512_1/lvm_moves17039512
[E 17039512 47:320 reorgvg.sh 23] reorgvg: exited with rc=0

Fields of these lines:
S - Start, E - End; PID, PPID; TIMESTAMP

The E (end) line shows how long reorgvg was running (in seconds:milliseconds):
47:320 = 47s 320ms
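Pulling the return code and runtime out of such an "E" line is a one-liner; a sketch using the sample line above (the variable names are mine):

```shell
# Sketch: extract rc and runtime from the reorgvg "E" line shown above
# (the sample line stands in for real "alog -ot lvmcfg" output):
line='[E 17039512 47:320 reorgvg.sh 23] reorgvg: exited with rc=0'
runtime=$(echo "$line" | awk '{print $3}')   # 3rd field: 47:320 -> 47s 320ms
rc=$(echo "$line" | sed 's/.*rc=//')         # everything after "rc="
echo "runtime=$runtime rc=$rc"
```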


For a long-running reorgvg, you can check its status:

1. check the working dir of reorgvg
root@aixdb2: /root # alog -ot lvmcfg | tail -3 | grep workdir
[S 5226546 5288122 10/22/11-13:55:11:001 migfix.c 165] migfix /tmp/.workdir.4935912.5288122_1/freemap5288122 /tmp/.workdir.4935912.5288122_1/migrate5288122 /tmp/.workdir.4935912.5288122_1/lvm_moves5288122


2. check lvm_moves file in that dir (we will need the path of this file):
root@aixdb2: /root # ls -l /tmp/.workdir.4935912.5288122_1 | grep lvm_moves
-rw-------    1 root     system      1341300 Oct 22 13:55 lvm_moves5288122

(it contains all the LP migrations; reorgvg goes through this file line by line)


3. check the process of reorgvg:
root@aixdb2: /root # ps -ef | grep reorgvg
    root 5288122 5013742   0 13:52:16  pts/2  0:12 /bin/ksh /usr/sbin/reorgvg P_NAVISvg p_datlv

root@aixdb2: /root # ps -fT 5288122
 CMD
 /bin/ksh /usr/sbin/reorgvg P_NAVISvg p_datlv
 |\--lmigratepp -g 00c0ad0200004c000000012ce4ad7285 -p 00c80ef201f81fa6 -n 1183 -P 00c0ad021d62f017 -N 1565
  \--awk -F: {print "-p "$1" -n "$2" -P "$3" -N "$4 }

(lmigratepp shows: -g VGid -p SourcePVid -n SourcePPnumber -P DestinationPVid -N DestinationPPnumber)

lmigratepp shows the PP which is being migrated at this moment
(if you ask a few seconds later it will show the next PP being migrated; it works through the lvm_moves file)

4. check the line number of the PP which is being migrated at this moment:
(now the ps command in step 3 is extended with the content of the lvm_moves file)

root@aixdb2: /root # grep -n `ps -fT 5288122|grep migr|awk '{print $12":"$14}'` /tmp/.workdir.4935912.5288122_1/lvm_moves5288122
17943:00c0ad021d66f58b:205:00c0ad021d612cda:1259
17944:00c80ef24b619875:486:00c0ad021d66f58b:205

You can compare the above line numbers (17943, 17944) with the total number of lines in the lvm_moves file:
root@aixdb2: /root # cat /tmp/.workdir.4935912.5288122_1/lvm_moves5288122 | wc -l
   31536

It shows that out of 31536 LP migrations we are currently at 17943.

---------------------------------------

0516-304 : Unable to find device id 00080e82dfb5a427 in the Device
        Configuration Database.


If a disk has been deleted somehow (rmdev), but it was not removed from the VG:
root@bb_lpar: / # lsvg -p newvg
newvg:
PV_NAME           PV STATE          TOTAL PPs   FREE PPs    FREE DISTRIBUTION
hdisk2            active            31          20          06..00..02..06..06
0516-304 : Unable to find device id 00080e82dfb5a427 in the Device
        Configuration Database.
00080e82dfb5a427  removed           31          31          07..06..06..06..06


The VGDA still shows that the missing disk is part of the VG:
root@bb_lpar: / # lqueryvg -tAp hdisk2
...
Physical:       00080e82dfab25bc                2   0
                00080e82dfb5a427                0   4

The VGDA should be updated (on hdisk2), but that is only possible if the PVID is given to reducevg:
root@bb_lpar: / # reducevg newvg 00080e82dfb5a427

---------------------------------------

If you cannot access an hdiskpowerX disk, you may need to reset the reservation bit on it:

root@aix21: / # lqueryvg -tAp hdiskpower13
0516-024 lqueryvg: Unable to open physical volume.
        Either PV was not configured or could not be opened. Run diagnostics.

root@aix21: / # lsvg -p sapvg
PTAsapvg:
PV_NAME           PV STATE          TOTAL PPs   FREE PPs    FREE DISTRIBUTION
hdiskpower12      active            811         0           00..00..00..00..00
hdiskpower13      missing           811         0           00..00..00..00..00
hdiskpower14      active            811         0           00..00..00..00..00

root@aix21: / # bootinfo -s hdiskpower13
0


A possible solution could be emcpowerreset:
root@aix21: / # /usr/lpp/EMC/Symmetrix/bin/emcpowerreset fscsi0 hdiskpower13

(after this varyonvg will bring back the disk into active state)