Friday, September 19, 2008

AIX, HP-UX, Solaris, Linux: most used commands on one page

(The one-page command table originally displayed here covered: Directory Mappings, User Creation, General Commands, Printers, TCP/IP, System Files, Disks/LVM Commands, Misc, Software, and Devices.)


Furthermore, if you need more information, the links below may help:

FAQ:

AIX: http://www.emerson.emory.edu/services/aix-faq/
HP-UX: http://www.faqs.org/faqs/hp/hpux-faq/
LINUX: http://en.tldp.org/FAQ/Linux-FAQ/index.html
SOLARIS: http://www.science.uva.nl/pub/solaris/solaris2/

ONLINE MANUAL :

AIX: http://publib.boulder.ibm.com/infocenter/pseries/v5r3/index.jsp
HP-UX: http://docs.hp.com/en/hpuxman_pages.html
LINUX: http://en.tldp.org/
SOLARIS: http://docs.sun.com/app/docs/prod/solaris.10

CERTIFICATIONS:

AIX: http://www-03.ibm.com/certify/
HP-UX: http://h10076.www1.hp.com/education/hpux-sysadmin.htm
LINUX: http://www.redhat.com/certification/rhce/
SOLARIS: http://www.sun.com/training/certification/solaris/scsa.xml

NOTE: Not all of the commands may be up to date. You can always use the above links to find the latest commands or information.

Tuesday, September 9, 2008

Outputs to capture before any activity/reboot

Necessary outputs for any system:

lsvg > lsvg.out

lsvg -o > lsvg-o.out
lspv > lspv.out
df -k > dfk.out
lsdev -C > lsdev.out
lssrc -a|grep active > lssrc.out
lslpp -L > lslpp.out
cat /etc/inittab > inittab.out
ifconfig -a > ifconfig.a.out
netstat -rn > netstatrn.out
powermt display > poweradapters.out
powermt display dev=all > powerhdisks.out
prtconf > prtconf.out
lsfs -q > lsfsq.out
lscfg -vp > lscfg.out
odmget Config_Rules > odmconfigrules.out
odmget CuAt > odmCuAt.out
odmget CuDep > odmCuDep.out
odmget CuDv > odmCuDv.out
odmget CuDvDr > odmCuDvDr.out
odmget PdAt > odmPdAt.out
odmget PdDv > odmPdDv.out
ps -ef | grep -i pmon > oracle.out
ps -ef > ps.out
ps -ef | wc -l > pswordct.out
ps -ef | grep -i smb > samba.out
who > who.out
last > last.out
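
These captures can be wrapped in a small script so nothing is forgotten. A minimal sketch (the directory /tmp/precheck is an arbitrary choice, and only the first few commands are repeated; extend it with the full list above):

#!/bin/ksh
# Sketch: capture pre-change outputs into a dated directory.
D=/tmp/precheck.`date +%Y%m%d`      # arbitrary location; adjust as needed
mkdir -p $D && cd $D || exit 1
lsvg > lsvg.out
lsvg -o > lsvg-o.out
lspv > lspv.out
df -k > dfk.out
lsdev -C > lsdev.out
prtconf > prtconf.out
# ...and so on for the remaining commands listed above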



If the system has a cluster, then also collect:

/usr/es/sbin/cluster/utilities/cldump > cldump.out
/usr/es/sbin/cluster/utilities/cltopinfo > cltopinfo.out
/usr/es/sbin/cluster/utilities/clshowres > clshowres.out
/usr/es/sbin/cluster/utilities/clfindres > clfindres.out
/usr/es/sbin/cluster/utilities/cllscf > cllscf.out
/usr/es/sbin/cluster/utilities/cllsif > cllsif.out
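
If you are scripting the collection as sketched above, the cluster utilities can be looped over in the same way (assumes the same $D output directory from the earlier sketch):

# Sketch: capture cluster state using the standard utilities path.
CLU=/usr/es/sbin/cluster/utilities
for c in cldump cltopinfo clshowres clfindres cllscf cllsif; do
    $CLU/$c > $D/$c.out 2>&1
done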

Frequently Asked Questions in HACMP

a. What characters should a hostname contain for HACMP configuration?
The hostname cannot contain the following characters: -, _, * or other special characters.

b. Can the Service IP and Boot IP be in the same subnet?

No. The service IP address and boot IP address cannot be in the same subnet; this is a basic requirement for HACMP cluster configuration. Verification does not allow the two addresses to be in the same subnet, and the cluster will not start.
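
As a worked example using the addresses from the configuration later in this document: with netmask 255.255.254.0 (/23), the boot address 10.73.70.155 falls in subnet 10.73.70.0/23 (10.73.70.0 through 10.73.71.255), while the service address 10.73.68.222 falls in subnet 10.73.68.0/23 (10.73.68.0 through 10.73.69.255). The two are therefore in different subnets, as required.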

c. Can multiple Service IP addresses be configured on a single Ethernet card?

Yes. Using the SMIT menu, a single Ethernet card can be configured to run multiple Service IP addresses; it only requires selecting the same network name for the specific Service IP addresses in the SMIT menu.

d. What happens when a NIC carrying the Service IP goes down?

When the NIC running the Service IP address goes down, HACMP detects the failure and fails the Service IP address over to an available standby NIC on the same node, or to another node in the cluster.

e. Can multiple Oracle Database instances be configured on a single node of an HACMP cluster?

Yes. Multiple database instances can be configured on a single node of an HACMP cluster. Each instance needs its own Service IP address, over which its Oracle listener runs, and hence its own resource group. This configuration is useful when a single Oracle database instance fails on one node and must be failed over to another node without disturbing the other running Oracle instances.

f. Can HACMP be configured in an Active-Passive configuration?

Yes. For an Active-Passive cluster configuration, do not configure any Service IP on the passive node. Also, for all the resource groups on the active node, specify the passive node as the next node in priority to take over in the event of a failure of the active node.

g. Can a file system mounted over NFS be used for disk heartbeat?

No. A volume mounted over NFS is a file system as far as AIX is concerned, and since a disk device is required for the enhanced concurrent capable volume group used for disk heartbeat, an NFS file system cannot be used to configure the disk heartbeat. One needs to present a disk device to the AIX hosts over the FCP or iSCSI protocol.
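
A quick way to confirm you are looking at a real disk device rather than an NFS mount (a sketch; hdisk names will vary per system):

lsdev -Cc disk          # disk devices known to AIX; candidates for heartbeat
mount | grep nfs        # NFS file systems; these cannot be used for heartbeat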

h. Which HACMP log files are available for troubleshooting?

The following log files can be used for troubleshooting:
1. /var/hacmp/clverify/current/<nodename>/* contains logs from the current execution of cluster verification.
2. /var/hacmp/clverify/pass/<nodename>/* contains logs from the last time verification passed.
3. /var/hacmp/clverify/fail/<nodename>/* contains logs from the last time verification failed.
4. /tmp/hacmp.out records the output generated by the HACMP event scripts as they execute.
5. /tmp/clstrmgr.debug contains time-stamped messages generated by HACMP clstrmgrES activity.
6. /tmp/cspoc.log contains messages generated by HACMP C-SPOC commands.
7. /usr/es/adm/cluster.log is the main HACMP log file. HACMP error messages and messages about HACMP-related events are appended to this log.
8. /var/adm/clavan.log keeps track of when each application managed by HACMP is started or stopped, and of when the node on which an application is running stops.
9. /var/hacmp/clcomd/clcomd.log contains messages generated by the HACMP cluster communication daemon.
10. /var/ha/log/grpsvcs.<filename> tracks the execution of internal activities of the grpsvcs daemon.
11. /var/ha/log/topsvcs.<filename> tracks the execution of internal activities of the topsvcs daemon.
12. /var/ha/log/grpglsm tracks the execution of internal activities of the grpglsm daemon.


If you are interested in a detailed study of HACMP, please read:

http://www.redbooks.ibm.com/redbooks/pdfs/sg246375.pdf

Tuesday, September 2, 2008

HACMP configuration

CONFIGURING NETWORK INTERFACE ADAPTERS
In our example, we have two NICs: one used as the cluster interconnect and the other as the bootable adapter. The service IP address will be activated on the bootable adapter after cluster services are started on the nodes. The following IP addresses are used in the setup:


NODE1: hostname – btcppesrv5
Boot IP address - 10.73.70.155 btcppesrv5
Netmask - 255.255.254.0
Interconnect IP address - 192.168.73.100 btcppesrv5i
Netmask - 255.255.255.0
Service IP address - 10.73.68.222 btcppesrv5sv
Netmask - 255.255.254.0

NODE2: hostname – btcppesrv6
Boot IP address - 10.73.70.156 btcppesrv6
Netmask - 255.255.254.0
Interconnect IP address - 192.168.73.101 btcppesrv6i
Netmask - 255.255.255.0
Service IP address - 10.73.68.223 btcppesrv6sv
Netmask - 255.255.254.0

EDITING CONFIGURATION FILES FOR HACMP
1. /usr/sbin/cluster/netmon.cf – All the IP addresses present in the network need to be entered in this file. Refer to Appendix for sample file.

2. /usr/sbin/cluster/etc/clhosts – All the IP addresses present in the network need to be entered in this file. Refer to Appendix for sample file.
3. /usr/sbin/cluster/etc/rhosts - All the IP addresses present in the network need to be entered in this file. Refer to Appendix for sample file.
4. /.rhosts – All the IP addresses present in the network with username (i.e. root) need to be entered in this file. Refer to Appendix for sample file.
5. /etc/hosts – All the IP addresses with their IP labels present in network need to be entered in this file. Refer to Appendix for sample file.

Note:
All the above mentioned files need to be configured on both the nodes of cluster.
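
For reference, an /etc/hosts built from the addresses listed above would contain entries like these (netmon.cf, clhosts and rhosts take the bare IP addresses, typically one per line):

10.73.70.155    btcppesrv5
10.73.70.156    btcppesrv6
192.168.73.100  btcppesrv5i
192.168.73.101  btcppesrv6i
10.73.68.222    btcppesrv5sv
10.73.68.223    btcppesrv6sv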



CREATING CLUSTER USING SMIT

This is a sample HACMP configuration that might require customization for your environment. This section demonstrates how to configure two AIX nodes, btcppesrv5 and btcppesrv6, into an HACMP cluster.

1. Configure two AIX nodes to allow the user root to use the rcp and remsh commands between themselves without having to specify a password.

2. Log in as user root on AIX node btcppesrv5.

3. Enter the following command to create an HACMP cluster.
# smit hacmp
Perform the following steps. These instructions assume that you are using the graphical user interface to SMIT (that is, smit -M). If you are using the ASCII interface to SMIT (that is, smit -C), modify these instructions accordingly.
a) Click Initialization and Standard Configuration.
b) Click Configure an HACMP Cluster and Nodes.
c) In the Cluster Name field, enter netapp.
d) In the New Nodes (via selected communication paths) field, enter btcppesrv5 and btcppesrv6.
e) Click OK.
f) Click Done.
g) Click Cancel.
h) Select the Exit > Exit SMIT Menu option.

4. Enter the following command to configure the heartbeat networks as private networks.
# smit hacmp
Perform the following steps.
a) Click Extended Configuration.
b) Click Extended Topology Configuration.
c) Click Configure HACMP Networks.
d) Click Change/Show a Network in the HACMP cluster.
e) Select net_ether_01 (192.168.73.0/24).
f) In the Network Attribute field, select private.
g) Click OK.
h) Click Done.
i) Click Cancel.
j) Select the Exit > Exit SMIT Menu option.

5. Enter the following command to configure Service IP Labels/Addresses.
# smit hacmp
Perform the following steps.
a) Click Initialization and Standard Configuration.
b) Click Configure Resources to Make Highly Available.
c) Click Configure Service IP Labels / Addresses.
d) Click Add a Service IP Label / Address.
e) In the IP Label / Address field, enter btcppesrv5sv.
f) In the Network Name field, select net_ether_02 (10.73.70.0/23). The Service IP label will be activated on the 10.73.70.0/23 network after the cluster service starts.
g) Click OK.
h) Click Done.
i) Click Cancel.
j) Similarly follow steps d) to h) for adding the second service IP label, btcppesrv6sv.
k) Select the Exit > Exit SMIT Menu option.

6. Enter the following command to create Empty Resource Groups with Node priorities.
# smit hacmp
Perform the following steps.
a) Click Initialization and Standard Configuration.
b) Click Configure HACMP Resource Groups.
c) Click Add a Resource Group.
d) In the Resource Group Name field, enter RG1.
e) In the Participating Nodes (Default Node Priority) field, enter btcppesrv5 and btcppesrv6. Resource group RG1 will come online on btcppesrv5 first when the cluster service starts; in the event of a failure, RG1 will be taken over by btcppesrv6, since the node priority for RG1 is assigned to btcppesrv5 first.
f) Click OK.
g) Click Done.
h) Click Cancel.
i) Similarly follow steps d) to h) for adding the second resource group, RG2, with node priority first assigned to btcppesrv6.
j) Select the Exit > Exit SMIT Menu option.

7. Enter the following command to make Service IP labels part of Resource Groups.
# smit hacmp
Perform the following steps.
a) Click Initialization and Standard Configuration.
b) Click Configure HACMP Resource Groups.
c) Click Change/Show Resources for a Resource Group (standard).
d) Select a resource Group from pick list as RG1.
e) In the Service IP Labels / Addresses field, enter btcppesrv5sv, since the btcppesrv5sv service IP label has to be activated on the first node, btcppesrv5.
f) Click OK.
g) Click Done.
h) Click Cancel.
i) Similarly follow steps c) to h) for adding the second Service IP label, btcppesrv6sv, in resource group RG2; the btcppesrv6sv service IP label will be activated on the second node, btcppesrv6.
j) Select the Exit > Exit SMIT Menu option.


VERIFYING AND SYNCHRONIZING CLUSTER USING SMIT

This section demonstrates how to verify and synchronize the nodes in an HACMP cluster. Verification and synchronization checks the HACMP configuration done on one node and then propagates it to the other node in the cluster. Whenever changes need to be made in the HACMP cluster, make them from a single node and synchronize them to the other nodes.
1. Log in as user root on AIX node btcppesrv5.
2. Enter the following command to verify and synchronize all nodes in the HACMP cluster.
# smit hacmp
Perform the following steps.
a) Click Initialization and Standard Configuration.
b) Click Verify and Synchronize HACMP Configuration.
c) Click Done.
d) Select the Exit > Exit SMIT Menu option.


STARTING CLUSTER SERVICES

This section demonstrates how to start an HACMP cluster on both the participating nodes.
1. Log in as user root on AIX node btcppesrv5.
2. Enter the following command to start the HACMP cluster.
# smit cl_admin
Perform the following steps.
a) Click Manage HACMP services.
b) Click Start Cluster Services.
c) In the Start Now, on System Restart or Both fields, select now.
d) In the Start Cluster Services on these nodes field, enter btcppesrv5 and btcppesrv6. The cluster services can be started on both the nodes simultaneously.
e) In the Startup Cluster Information Daemon field, select true.
f) Click OK.
g) Click Done.
h) Click Cancel.
i) Select the Exit > Exit SMIT Menu option.


STOPPING CLUSTER SERVICES

This section demonstrates how to stop an HACMP cluster on both the participating nodes.
1. Log in as user root on AIX node btcppesrv5.
2. Enter the following command to stop the HACMP cluster.
# smit cl_admin
Perform the following steps.
a) Click Manage HACMP services.
b) Click Stop Cluster Services.
c) In the Stop Now, on System Restart or Both fields, select now.
d) In the Stop Cluster Services on these nodes field, enter btcppesrv5 and btcppesrv6. The cluster services can be stopped on both the nodes simultaneously.
e) Click OK.
f) In the Are You Sure? Dialog box, click OK.
g) Click Done.
h) Click Cancel.
i) Select the Exit > Exit SMIT Menu option.


CONFIGURING DISK HEARTBEAT

For configuring disk heartbeating, it is required to create the enhanced concurrent capable volume group on both AIX nodes. To be able to use HACMP C-SPOC successfully, some basic IP-based topology must already exist, and the storage devices must have their PVIDs in both systems' ODMs. This can be verified by running the lspv command on each AIX node. If a PVID does not exist on a node, it is necessary to run
chdev -l <hdisk#> -a pv=yes
on each AIX node.

btcppesrv5#> chdev -l hdisk3 -a pv=yes
btcppesrv6#> chdev -l hdisk3 -a pv=yes

This will allow C-SPOC to match up the device(s) as known shared storage devices.
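
For example, on each node (the PVID value shown is illustrative only):

btcppesrv5#> lspv
hdisk3          00c1234567890abc        None

The same PVID should appear for the shared disk on both nodes; None in the third column simply means the disk is not yet assigned to a volume group.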
This demonstrates how to create Enhanced Concurrent Volume Group:

1. Log in as user root on AIX node btcppesrv5.

2. Enter the following command to create the enhanced concurrent VG.
# smit vg
Perform the following steps.
a) Click Add Volume Group.
b) Click Add an Original Group.
c) In the Volume group name field, enter heartbeat.
d) In the Physical Volume Names field, enter hdisk3.
e) In the Volume Group Major number field, enter 59. This must be a major number available on the AIX node; it can be found from the list offered in the field.
f) In the Create VG concurrent capable field, enter YES.
g) Click OK.
h) Click Done.
i) Click Cancel.
j) Select the Exit > Exit SMIT Menu option.

On AIX node btcppesrv5, check the newly created volume group using the lsvg command.
On the second AIX node, run importvg -V <major#> -y <vgname> <hdisk#> to import the volume group:

btcppesrv6#> importvg -V 59 -y heartbeat hdisk3

Now that the enhanced concurrent volume group is available on both AIX nodes, we will use HACMP's discovery method to find the disks available for heartbeat.

This demonstrates how to configure Disk heartbeat in HACMP:

1. Log in as user root on AIX node btcppesrv5.

2. Enter the following command to configure disk heartbeat.
# smit hacmp
Perform the following steps.
a) Click Extended Configuration.
b) Click Discover HACMP-related information from configured Nodes. This runs automatically and creates the /usr/es/sbin/cluster/etc/config/clvg_config file, which contains the information discovered.
c) Click Done.
d) Click Cancel.
e) Click Extended Configuration.
f) Click Extended Topology Configuration.
g) Click Configure HACMP communication Interfaces/Devices.
h) Click Add Communication Interfaces/Devices.
i) Click Add Discovered Communication Interfaces and Devices.
j) Click Communication Devices.
k) Select both the Devices listed in the list.
l) Click Done.
m) Click Cancel.
n) Select the Exit > Exit SMIT Menu option.

It is now necessary to add the volume group to an HACMP resource group and synchronize the cluster.

Enter the following command to create an empty resource group with different policies from the ones created earlier.
# smit hacmp
Perform the following steps.
a) Click Initialization and Standard Configuration.
b) Click Configure HACMP Resource Groups.
c) Click Add a Resource Group.
d) In the Resource Group Name field, enter RG3.
e) In the Participating Nodes (Default Node Priority) field, enter btcppesrv5 and btcppesrv6.
f) In the Startup policy field, enter Online On All Available Nodes.
g) In the Fallover Policy field, enter Bring Offline (On Error Node Only).
h) In the Fallback Policy field, enter Never Fallback.
i) Click OK.
j) Click Done.
k) Click Cancel.
l) Click Change/Show Resources for a Resource Group (Standard).
m) Select RG3 from the list.
n) In the Volume Groups field, enter heartbeat (the concurrent capable volume group created earlier).
o) Click OK.
p) Click Done.
q) Click Cancel.
r) Select the Exit > Exit SMIT Menu option.

Monday, September 1, 2008

Hacmp Installation

HACMP software installation :

Assuming that your cluster is well planned, the steps below cover the HACMP installation.


Checking for prerequisites

Once you have finished your planning worksheets, verify that your system meets the requirements of HACMP; many potential errors can be eliminated if you make this extra effort. HACMP V5.1 requires one of the following operating system components:

  • AIX 5L V5.1 ML5 with RSCT V2.2.1.30 or higher.
  • AIX 5L V5.2 ML2 with RSCT V2.3.1.0 or higher (recommended 2.3.1.1).
  • C-SPOC vpath support requires SDD 1.3.1.3 or higher.
For the latest information about prerequisites and APARs, refer to the README file that comes with the product and the following IBM Web page: http://techsupport.services.ibm.com/server/cluster/

The following AIX 5L base operating system (BOS) components are required for HACMP.



Install the RSCT (Reliable Scalable Cluster Technology) images before installing HACMP. Ensure that each node has the same version of RSCT.
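
A quick way to check both prerequisites on each node (a sketch; the fileset names are matched with a pattern):

oslevel -r                  # AIX maintenance level, e.g. 5200-02
lslpp -l "rsct.*"           # installed RSCT filesets and their levels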






HACMP SOFTWARE INSTALLATION

The HACMP software installation medium contains the HACMP enhanced scalability subsystem images, which provide the services for cluster membership, system management, configuration integrity and control, failover, and recovery. It also includes cluster status and monitoring facilities for programmers and system administrators.

To install the HACMP software on a server node from the installation medium:


1) Insert the CD into the CD-ROM drive and enter: smit install_all
SMIT displays the first Install and Update from ALL Available Software panel.

2) Enter the device name of the installation medium or install directory in the INPUT device / directory for software field and press Enter.

3) Enter field values as follows.




4) Keep the fields other than those mentioned above at their default values. When you are satisfied with the entries, press Enter.
5) SMIT prompts to confirm the selections.
6) Press Enter again to install.
7) After the installation completes, verify the installation as described in the section below.


To complete the installation after the HACMP software is installed:

1) Verify the software installation by using the AIX 5L command lppchk, and check the installed directories for the expected files. The lppchk command verifies that files for an installable software product (file set) match the Software Vital Product Data (SWVPD) database information for file sizes, checksum values, or symbolic links.
2) Run the commands lppchk -v and lppchk -c "cluster.*"
3) If the installation is OK, both commands should return nothing.
4) Reboot each HACMP cluster node.
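
A sketch of the verification in steps 2 and 3:

# both commands should produce no output if the installation is clean
lppchk -v
lppchk -c "cluster.*"
echo $?                     # 0 means the last check passed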

Some hacmp commands

Normally the commands starting with cl* are located in /usr/es/sbin/cluster/utilities/.

clstat - show cluster state and substate; needs clinfo.
cldump - SNMP-based tool to show cluster state
cldisp - similar to cldump, perl script to show cluster state.
cltopinfo - list the local view of the cluster topology.
clshowsrv -a - list the local view of the cluster subsystems.
clfindres (-s) - locate the resource groups and display status.
clRGinfo -v - locate the resource groups and display status.
clcycle - rotate some of the log files.
cl_ping - a cluster ping program with more arguments.
clrsh - cluster rsh program that takes cluster node names as arguments.
clgetactivenodes - which nodes are active?
get_local_nodename - what is the name of the local node?
clconfig - check the HACMP ODM.
clRGmove - online/offline or move resource groups.
cldare - sync/fix the cluster.
cllsgrp - list the resource groups.
clsnapshotinfo - create a large snapshot of the hacmp configuration.
cllscf - list the network configuration of an hacmp cluster.
clshowres - show the resource group configuration.
cllsif - show network interface information.
cllsres - show short resource group information.
cllsnode - list a node centric overview of the hacmp configuration.
lssrc -ls clstrmgrES - list the cluster manager state.
lssrc -ls topsvcs - show heartbeat information.
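
A common quick health check strings a few of these together (a sketch; the path assumes the default utilities directory):

PATH=$PATH:/usr/es/sbin/cluster/utilities
lssrc -ls clstrmgrES | grep -i state    # cluster manager state (e.g. ST_STABLE)
clRGinfo                                # where each resource group is online
cltopinfo                               # topology as this node sees it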

Hacmp log files

The HACMP software writes the messages it generates to the system console and to several log files. Each log file contains a different subset of the messages generated by the HACMP software. When viewed as a group, the log files provide a detailed view of all cluster activity.

Although the actual location of the log files on the system may seem scattered, this diversity provides information for virtually any HACMP event. Moreover, you can customize the location of the log files and specify the verbosity of the logging operations.

The following list describes the log files into which the HACMP software writes messages and the types of cluster messages they contain. The list also provides recommendations for using the different log files.

/usr/es/adm/cluster.log
Contains time-stamped, formatted messages generated
by HACMP scripts and daemons.

/tmp/hacmp.out
Contains time-stamped, formatted messages generated
by HACMP scripts on the current day. In verbose mode
(recommended), this log file contains a line-by-line record
of every command executed by the scripts, including the
values of all arguments to each command. An event
summary of each high-level event is included at the end of
each event’s details (similar to adding the -x option to a
shell script).

system error log
Contains time-stamped, formatted messages from all AIX
subsystems, including scripts and daemons.

/usr/es/sbin/cluster/history/cluster.mmddyyyy
Contains time-stamped, formatted messages generated
by HACMP scripts. The system creates a cluster history
file every day, identifying each file by its file name
extension, where mm indicates the month, dd indicates
the day, and yyyy the year.

/tmp/clstrmgr.debug
Contains time-stamped, formatted messages generated
by clstrmgrES activity. The messages are verbose. With
debugging turned on, this file grows quickly. You should
clean up the file and turn off the debug options as soon as
possible.

/tmp/cspoc.log
Contains time-stamped, formatted messages generated
by HACMP C-SPOC commands. The file resides on the
node that invokes the C-SPOC command.

/tmp/dms_loads.out
Stores log messages every time HACMP triggers the
deadman switch.

/tmp/emuhacmp.out
Contains time-stamped, formatted messages generated
by the HACMP Event Emulator. The messages are
collected from output files on each node of the cluster,
and cataloged together into the /tmp/emuhacmp.out log
file.

/var/hacmp/clverify/clverify.log
The file contains the verbose messages output by the
clverify utility. The messages indicate the node(s),
devices, command, and so on, in which any verification
error occurred.

/var/ha/log/grpsvcs, /var/ha/log/topsvcs, and /var/ha/log/grpglsm
Contains time-stamped messages in ASCII format. All
these files track the execution of the internal activities of
their respective daemons.
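
During an event or a test failover, the usual practice is to watch the main event log live, for example (the EVENT markers are the convention used by the event scripts):

tail -f /tmp/hacmp.out
# or, after the fact, pull out the event boundaries:
grep -e "EVENT START" -e "EVENT COMPLETED" /tmp/hacmp.out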

Hacmp history and evolution


IBM High Availability Cluster Multi-Processing goes back to the early 1990s. HACMP development started in 1990 to provide a high availability solution for applications running on RS/6000 servers.
We do not provide information about the very early releases, since those releases are no longer supported or in use; instead, we provide highlights of the most recent versions.

HACMP V4.2.2:
Along with HACMP Classic (HAS), this version introduced the enhanced
scalability version (ES) based on RSCT (Reliable Scalable Clustering
Technology) topology, group, and event management services, derived from
PSSP (Parallel Systems Support Program).


HACMP V4.3.X:
This version introduced, among other aspects, 32 node support for HACMP/ES,
C-SPOC enhancements, ATM network support, HACMP Task guides (GUI for
simplifying cluster configuration), multiple pre- and post- event scripts, FDDI
MAC address takeover, monitoring and administration support enhancements,
node by node migration, and AIX fast connect support.


HACMP V4.4.X:
New items in this version are integration with Tivoli®, application monitoring,
cascading with out fallback, C-SPOC enhancements, improved migration
support, integration of HA-NFS functionality, and soft copy documentation (HTML
and PDF).


HACMP V4.5:
In this version, AIX 5L is required, and there is an automated configuration
discovery feature, multiple service labels on each network adapter (through the
use of IP aliasing), persistent IP address support, 64-bit-capable APIs, and
monitoring and recovery from loss of volume group quorum.


HACMP V5.1:
This is the version that introduced major changes, from configuration
simplification and performance enhancements to changing HACMP terminology.
Some of the important new features in HACMP V5.1 were:
  • SMIT "Standard" and "Extended" configuration paths (procedures)
  • Automated configuration discovery
  • Custom resource groups
  • Non-IP networks based on heartbeating over disks
  • Fast disk takeover
  • Forced varyon of volume groups
  • Heartbeating over IP aliases
  • HACMP "classic" (HAS) has been dropped; now there is only HACMP/ES, based on IBM Reliable Scalable Cluster Technology
  • Improved security, using the cluster communication daemon (eliminating the need for the standard AIX "r" commands, and thus for the /.rhosts file)
  • Improved performance for cluster customization and synchronization
  • Normalization of HACMP terminology
  • Simplification of configuration and maintenance
  • Online Planning Worksheets enhancements
  • Heartbeat monitoring of service IP addresses/labels on takeover node(s)
  • Various C-SPOC enhancements
  • GPFS integration
  • Cluster verification enhancements
  • Improved resource group management

HACMP V5.2:
Starting July 2004, the new HACMP V5.2 added more improvements in
management, configuration simplification, automation, and performance areas.
Here is a summary of the improvements in HACMP V5.2:

  • Two-Node Configuration Assistant, with both SMIT menus and a Java™ interface (in addition to the SMIT "Standard" and "Extended" configuration paths).
  • File collections.
  • User password management.
  • Classic resource groups are not used anymore, having been replaced by custom resource groups.
  • Automated test procedures.
  • Automatic cluster verification.
  • Improved Online Planning Worksheets (OLPW), which can now import a configuration from an existing HACMP cluster.
  • Event management (EM) has been replaced by the resource monitoring and control (RMC) subsystem (standard in AIX).
  • Enhanced security.
  • Resource group dependencies.
  • Self-healing clusters.
HACMP V5.3:

In addition to Smart Assist for WebSphere, HACMP 5.3 provides new Smart Assist features:
  • DB2® 8.1, 8.2 EE
  • Oracle Application Server 10g (OAS)
Additional resource and resource group management features include:
  • Cluster-wide resource group location dependencies (including XD)
  • Distribution preference for the IP service aliases
Online Planning Worksheet functionality has been extended, and the OLPW configuration file format has been unified. In this version, it is possible to:
  • Clone a cluster from an existing "live" configuration
  • Extract cluster snapshot information to XML format for use with OLPW
Support for OEM volume groups and file systems has been added for Veritas Volume Manager (VxVM) and Veritas File System (VxFS).
It is also possible to generate and send SMS pager messages (e-mail format) when HACMP events occur.
Performance and usability have been improved using a new architecture for communication between Clinfo and the Cluster Manager.
Cluster Information (Clinfo) API versioning removes the requirement to recompile clients in the future.
Cluster verification facilities continue to grow to help customers prevent problems before they occur.

HACMP V5.4:
Release date: July 2006.

  • Web-based GUI
  • Nondisruptive HACMP cluster startup, upgrades, and maintenance

HACMP V5.4.1:

Release date: November 2007.
  • AIX Workload Partitions support (WPAR)
  • New GLVM monitoring
  • NFSv4 support improvements

For more knowledge on HACMP:

www.redbooks.ibm.com/redbooks/pdfs/sg246375.pdf (HACMP certification preparation)


A good IBM link for more on HACMP:
http://publib.boulder.ibm.com/infocenter/clresctr/vxrx/index.jsp?topic=/com.ibm.cluster.hacmp.doc/hacmpbooks.html