Monday, December 3, 2012

GPFS on AIX

https://sites.google.com/site/rhdisk0/unix/aix/gpfs
Filesets

gpfs.base - GPFS File Manager
gpfs.msg.en_US - GPFS Server Messages - U.S. English
gpfs.docs.data - GPFS Server Manpages and Documentation
Path

GPFS commands are installed in a separate directory that is not in the default PATH:

export PATH=$PATH:/usr/lpp/mmfs/bin
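
To make this persistent across logins, the export line can be appended to a shell profile; a minimal sketch (assuming root reads /etc/profile, adjust to local conventions):

# echo 'export PATH=$PATH:/usr/lpp/mmfs/bin' >> /etc/profile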
Daemons, status


Check that the mmfsd daemon is running:
# ps -ef | grep mmfs

Show the cluster definition and member nodes:
# mmlscluster

Show the cluster configuration parameters:
# mmlsconfig
Log

# tail -f /var/adm/ras/mmfs.log.latest
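
Older entries rotate to timestamped files in the same directory; a quick way to scan all of them for problems (a sketch, the pattern list is only illustrative):

# egrep -i 'error|fail' /var/adm/ras/mmfs.log.*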
PV migration

Collect information

Dump/Save current configuration

# gpfs.snap
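
On the releases I have seen, the resulting archive lands under /tmp/gpfs.snapOut; the command prints the exact path when it finishes:

# ls -l /tmp/gpfs.snapOut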

List active mounts

# mmlsmount all
File system foobar is mounted on 2 nodes.

List storage pools

# mmlsfs all -P

File system attributes for /dev/foobar:
=====================================
flag value description
---- ---------------- -----------------------------------------------------
-P system Disk storage pools in file system


List disks in each filesystem

# mmlsfs all -d

File system attributes for /dev/foobar:
=====================================
flag value description
---- ---------------- -----------------------------------------------------
-d mycluster00nsd Disks in file system


List current NSDs (network shared disks)

# mmlsnsd -M

Disk name NSD volume ID Device Node name Remarks
---------------------------------------------------------------------------------------
mycluster00nsd 0AEC13994BFCEEF7 /dev/hdisk7 host1.mydomain.com
mycluster00nsd 0AEC13994BFCEEF7 - host3.mydomain.com (not found) directly attached

mmlsnsd: 6027-1370 The following nodes could not be reached:
host3.mydomain.com


List filesystem manager node(s)

# mmlsmgr
file system manager node
---------------- ------------------
foobar 10.111.11.111 (host2)

Cluster manager node: 10.111.11.111 (host2)

Show the state of GPFS daemons

- on the local node:

# mmgetstate

Node number Node name GPFS state
------------------------------------------
2 myhost2 active

- on all cluster members:

# mmgetstate -a

Node number Node name GPFS state
------------------------------------------
1 myhost1 active
2 myhost2 active
3 somexx1 unknown


Add new disk(s)

Configure new device
# lspv
# cfgmgr
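
Capturing the disk list before and after cfgmgr makes the new LUN easy to spot; a minimal sketch using temporary files:

# lspv > /tmp/lspv.before
# cfgmgr
# lspv > /tmp/lspv.after
# diff /tmp/lspv.before /tmp/lspv.after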

Verify new disk
# lspv
# lspath -l hdiskX
# errpt | head

Edit desc file
Disk name: hdiskX
Primary/Backup server: NSD server nodes; only needed when the disk is not directly attached to every node (leave empty for SAN-attached disks)
Disk usage: almost always 'dataAndMetadata'
Failure group: can be -1 when there are no other disks to fail over to; otherwise see the failure group column of 'mmdf $fs'
Desired name: arbitrary; 'clusternameXXnsd' is suggested
Storage pool(s): see 'mmlsfs all -P'
The contents of the file are similar to this:
#DiskName:PrimaryServer:BackupServer:DiskUsage:FailureGroup:DesiredName:StoragePool
hdisk7:::dataAndMetadata:-1:foobar00nsd:system
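
The file can be created in one step with a here-document; a sketch of the same example (the /tmp path is arbitrary):

# cat > /tmp/hdisk7.desc <<'EOF'
#DiskName:PrimaryServer:BackupServer:DiskUsage:FailureGroup:DesiredName:StoragePool
hdisk7:::dataAndMetadata:-1:foobar00nsd:system
EOF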

Configure new disk(s) as NSD
# mmcrnsd -F /path/to/hdisk7.desc

Check NSDs (-F lists free NSDs that do not yet belong to any filesystem)
# mmlsnsd; mmlsnsd -F

Add disk to FS using the transformed desc file (mmcrnsd rewrites the file: the original line is commented out and a converted line is inserted)
# mmadddisk foobar -F hdisk7.desc
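
It is worth confirming that the new disk is visible and usable; mmlsdisk should report it as ready/up, and mmdf shows per-disk capacity and failure group:

# mmlsdisk foobar
# mmdf foobar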

Remove old disk(s)

Note: mmdeldisk -r (used below) migrates the data off the removed disks and rebalances over the remaining ones as it goes; a separate mmrestripefs pass is only needed to rebalance at other times, as sketched below.
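
A rebalance sketch (mmrestripefs -b redistributes blocks across all disks of the filesystem and, like the deletion itself, generates heavy I/O):

# mmrestripefs foobar -b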

Task: Migrate data by deleting disks from GPFS

Example: migrating 4x 100 GB DS8100 LUNs to 1x 400 GB XIV LUN over 2x 4 Gb adapters, about 250 GB of net data, took 27 minutes on an idle system.

WARNING: high I/O!

# mmdeldisk foobar "gpfsXnsd;gpfsYnsd..." -r
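
When the command finishes, the filesystem should no longer reference the old disks and their NSDs should show up as free; a verification sketch:

# mmlsfs foobar -d
# mmlsnsd -F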

Reconfigure tiebreakers

WARNING: GPFS must be shut down on all nodes!

# mmshutdown -a
# mmchconfig tiebreakerDisks="foobar00nsd"
# mmstartup -a
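
A quick check that the change took effect and all nodes came back:

# mmlsconfig | grep -i tiebreaker
# mmgetstate -a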

Remove NSD and physical LUN
# mmdelnsd -p [NSD volume ID]
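
The NSD volume ID is the second column of the mmlsnsd -M output above; with that example it would look like the sketch below. (The -p form deletes by volume ID, useful when the disk itself is already gone; an NSD that is still reachable can be deleted by name, e.g. mmdelnsd "foobar00nsd".)

# mmdelnsd -p 0AEC13994BFCEEF7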

Record the old LUNs for the storage administrator, so that they can be removed from the zoning
# pcmpath query essmap ...

Delete AIX devices
# rmdev -dl hdiskX

Delete GPFS filesystem

Below, fs0 stands for the GPFS filesystem device name, as shown by mmlsfs:

Check for processes using the filesystem
# fuser -cux /mount/point
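
If processes are still holding the mount, AIX fuser can also terminate them; use with care, as -k sends the kill signal to each local process it reports:

# fuser -kcux /mount/point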

Unmount the filesystem on all cluster nodes
# mmumount fs0 -a

Delete filesystem

WARNING: Data is destroyed permanently!

# mmdelfs fs0
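
Afterwards the filesystem should be gone from the configuration and its NSDs listed as free, ready to be removed with mmdelnsd; a check sketch:

# mmlsfs all
# mmlsnsd -F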
