Working with the kernel

Friday, May 25, 2012

kctune
You can use this command to list all kernel parameters and to change them.

Listing kernel parameters and the current configuration:
# kctune

Changing a parameter:
# kctune parameter=134217728

You can also tell it to apply a change only after the next reboot; there are a few options to customize this behavior. Some parameters require a reboot before the change takes effect.
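As an aside, the value 134217728 above is 128 MiB. Here is a small sketch of how you might check a parameter's current value in a script; since we cannot assume an HP-UX box, it parses a captured sample of kctune-style output instead of running kctune itself, and the column layout shown is an assumption — check your own kctune output first:

```shell
# Sample kctune-style listing (layout is an assumption, not real output).
sample_output='Tunable          Value       Expression
maxdsiz          134217728   134217728
maxuprc          256         Default'

# Extract the current value (second column) of one parameter with awk.
param=maxdsiz
value=$(printf '%s\n' "$sample_output" | awk -v p="$param" '$1 == p { print $2 }')
echo "$param is currently $value"

# Convert bytes to MiB to sanity-check the number.
mib=$(( value / 1048576 ))
echo "$param = $mib MiB"
```

The same awk pattern works for any column-oriented listing, which is handy when you want to compare a parameter across several servers.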

kcmodule

The kcmodule command queries and changes the states of kernel modules in the currently running configuration or in a saved configuration.

There are five different kernel module states.

From man page:

unused The module is not used in any way.
static The module is statically bound into the kernel executable.
auto The module will be dynamically loaded into the kernel when something tries to use it.
loaded The module is dynamically loaded into the kernel.
best The module will be put into the state identified by the kernel module developer as its "best" state. Typically this will be auto, if supported by the module, otherwise loaded, if supported by the module, otherwise static. Note that a module in best state will inherit any changes that HP makes to the "best" state for a module, in a patch or a future release of HP-UX.

How to see all optional modules and their current states?
# kcmodule

How to check the current state?
# kcmodule module

How to change the state?
# kcmodule module=newState
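A sketch of scripting around these states: parse kcmodule-style output for one module, then classify whether a state change could apply without a reboot (static modules are bound into the kernel executable, so moving away from static means a kernel rebuild and reboot). The sample output layout is an assumption; on a real system you would run kcmodule directly.

```shell
# Sample kcmodule-style listing (layout is an assumption).
sample='Module     State   Cause
fcd        static  explicit
vxfs       loaded  auto'

# Look up the current state of one module.
module=vxfs
state=$(printf '%s\n' "$sample" | awk -v m="$module" '$1 == m { print $2 }')
echo "$module is $state"

# A static module is bound into the kernel executable, so changing it
# implies a rebuild/reboot; the dynamic states do not.
case "$state" in
  static) needs_reboot=yes ;;
  *)      needs_reboot=no ;;
esac
echo "reboot needed to change: $needs_reboot"
```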

Identifying a LUN from an HP EVA storage array

There are two good methods to do that:

Using the scsimgr command (11i v3):
# scsimgr get_attr -D /dev/rdisk/disk22 -a wwid

Search for the line below:

current = 0x600508d400101a2f00010000011c0000

Using evainfo:

I think since version 8 you can find it on the "HP StorageWorks Storage System Scripting Utility (SSSU)" CD. This script provides much useful information about the disks (including the serial number).
# evainfo -aP
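Matching the two outputs by eye is error-prone because the array side often shows the WWN in a different format. A sketch of normalizing both strings before comparing them (the evainfo-style dashed format here is a made-up assumption; the scsimgr line is the one from above):

```shell
# WWID line as printed by scsimgr (taken from the example above).
scsimgr_line='current = 0x600508d400101a2f00010000011c0000'
# Hypothetical dashed/uppercase WWN as an array tool might show it.
eva_wwn='6005-08D4-0010-1A2F-0001-0000-011C-0000'

# Normalize: take the value field, strip the 0x prefix and dashes,
# and force lowercase so the comparison is format-independent.
a=$(printf '%s\n' "$scsimgr_line" | awk '{ print $3 }' | sed 's/^0x//' | tr 'A-F' 'a-f')
b=$(printf '%s\n' "$eva_wwn" | tr -d '-' | tr 'A-F' 'a-f')

if [ "$a" = "$b" ]; then
  match=yes
else
  match=no
fi
echo "same LUN: $match"
```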

How to configure an NTP server?

Saturday, May 19, 2012

Enable the NTP server in the config file /etc/rc.config.d/netdaemons:
# vi /etc/rc.config.d/netdaemons
..
# this variable lists the NTP servers, separated by spaces.
export NTPDATE_SERVER='ntp.mycorp.com pool.ntp.org in.pool.ntp.org'
# This flag enables the NTP service to start with the server.
export XNTPD=1
# See the man pages for the options available.
export XNTPD_ARGS=

Setup the time zone in /etc/TIMEZONE file (look for TZ variable):
# vi /etc/TIMEZONE
..
# Use your time zone here; check the man pages for other time zone examples.
TZ=MST7MDT

Edit the ntp.conf configuration file:
# vi /etc/ntp.conf
..
# ntp servers used (polled) to obtain time
server unix-box-ntp
server delhi-ntp
# a peer relationship with another ntp server
peer delhi-noc-ntp
# driftfile: tracks the drift of the local clock
driftfile /etc/ntp.drift

Start the NTP service:
# /sbin/init.d/xntpd start

Check if xntpd service is running:
# ps -ef | grep xntpd

Verify if the NTP server is working fine:
# ntpq -p

For troubleshooting, check the syslog (/var/adm/syslog/syslog.log).
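In ntpq -p output, the peer the daemon has actually selected for synchronization is marked with a leading '*'. A sketch of checking that in a script; the sample output below is illustrative, not captured from a real server:

```shell
# Illustrative ntpq -p style output (made-up peers and timings).
ntpq_output='     remote           refid      st t when poll reach   delay   offset    disp
*ntp.mycorp.com  .GPS.            1 u   33   64  377    0.48    0.012     0.06
+pool.ntp.org    ntp.mycorp.com   2 u   37   64  377    1.12    0.100     0.12'

# A line starting with '*' means a sync peer has been selected.
if printf '%s\n' "$ntpq_output" | grep -q '^\*'; then
  synced=yes
else
  synced=no
fi
echo "synchronized: $synced"
```

If no line starts with '*', the daemon is running but not yet (or no longer) synchronized, which is exactly the case worth catching in monitoring.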

Adding a new node to a running cluster


1- Add the new node in the file /etc/cmcluster/cmclnodelist.
2- Get the most up-to-date ASCII configuration file.
# cmgetconf -v -c clustername /etc/cmcluster/cluster.ascii

3- Query all nodes, including the new node, in the cluster.
# cmquerycl -v -c cluster_name -C cluster.ascii -n node1 -n node2

4- Compare the ASCII files obtained from cmgetconf and cmquerycl.
5- Update the ASCII configuration file obtained from cmquerycl.
6- Check the new ASCII configuration file.
# cmcheckconf -v -C cluster.ascii

7- Compile and distribute the new binary cluster configuration file.
# cmapplyconf -v -C cluster.ascii

8- Start cluster services on the new node.
# cmrunnode -v newNode

9- Check the cluster status and the log file to validate.

# cmviewcl

# tail -100 /var/adm/syslog/syslog.log
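Steps 4 and 5 above boil down to a diff: the lines present in the cmquerycl file but not in the cmgetconf file are the new node's entries. A sketch with simplified, made-up file contents (real Serviceguard ASCII files are much larger):

```shell
# Simplified stand-ins for the two ASCII configuration files.
cat > /tmp/cluster.getconf <<'EOF'
CLUSTER_NAME mycluster
NODE_NAME node1
NODE_NAME node2
EOF
cat > /tmp/cluster.querycl <<'EOF'
CLUSTER_NAME mycluster
NODE_NAME node1
NODE_NAME node2
NODE_NAME node3
EOF

# Lines only in the cmquerycl file ('>' side of the diff) are the
# additions to carry over into the final configuration.
new_lines=$(diff /tmp/cluster.getconf /tmp/cluster.querycl | grep '^>' | sed 's/^> //')
echo "new entries: $new_lines"
```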

How to upgrade Serviceguard

Saturday, May 12, 2012


1. First, create a backup of the /etc/cmcluster directory:
# cp -pR /etc/cmcluster /etc/cmcluster.bck

2. Also save the current configuration with cmgetconf.

3. Halt ALL packages running on the first node, then halt the node:
# cmhaltpkg pkg1
# cmhaltpkg pkg2
# cmhaltnode node


4. Get the Serviceguard product name:
# swlist | grep -i service

5. Uninstall Serviceguard:
# swremove -x enforce_dependencies=false
Select the Serviceguard product in the list.

6. Install the new Serviceguard version using swinstall.

7. After installation, check the new version with swlist | grep -i service.

8. Start the node:
# cmrunnode node

9. Now validate.
Check the cluster status:
# cmviewcl -v
Check the cluster logs in /var/adm/syslog/syslog.log.

Once the node is validated, do the same for the other nodes.
Finally, start all packages.
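Steps 4 and 7 both read the Serviceguard product line from swlist; a sketch of pulling out the product name and version in a script, so the before/after versions can be compared automatically. The sample line below is a made-up placeholder, not a real swlist entry:

```shell
# Hypothetical swlist line for the Serviceguard bundle.
swlist_sample='  T1905CA    A.11.19.00   Serviceguard'

# First column is the product name, second is the revision.
product=$(printf '%s\n' "$swlist_sample" | awk '{ print $1 }')
version=$(printf '%s\n' "$swlist_sample" | awk '{ print $2 }')
echo "$product $version"
```

Capturing the version before the swremove and again after the swinstall gives a one-line confirmation that the upgrade actually happened on each node.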