Linux menu

Sunday, September 13, 2015

Package administration in Solaris 10


Administering Packages

    pkgadd        Installs software packages to the system

    pkgrm        Removes a package from the system

    pkginfo        Displays software package information

    pkgchk        Checks package installation state

    pkgtrans    Translates packages from one format to another


# more /var/sadm/install/contents    ==> to list all installed software packages


# pkginfo | more    ==> to display information about installed software packages

# pkginfo -l | more    ==> to view additional information

# pkginfo -l SUNWman    ==> to view information of a specific package

# pkginfo | wc -l    ==> to find how many packages are currently installed


To view information about packages that are located on the DVD


# pkginfo -d /cdrom/cdrom0/s0/Solaris_10/Product | more

# pkginfo -d /cdrom/cdrom0/Solaris_10/Product | more



Checking a Package Installation

The pkgchk command determines whether a package has been installed on the system. If the pkgchk command displays no message, the package was installed successfully and no changes have been made to any files or directories in the package.


# pkgchk SUNWladm    ==> to check the contents & attributes of a currently installed package

# pkgchk -v SUNWladm    ==> to list the files contained in a software package

# pkgchk -p /etc/shadow    ==> to find if the contents & attributes of a file have changed since it was installed with its software package

# pkgchk -l -p /usr/bin/showrev    ==> to list information about selected files that make up a package

# pkgchk -l -P showrev    ==> to find if a particular file is installed & to find the directory in which it is located

If the -p option is used, the full path must be typed to get information about the file. If the -P option is used, a partial path name can be used.
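For example, a hypothetical pair of checks on the same file (assuming the core package SUNWcsu delivers /usr/bin/ls):

# pkgchk -l -p /usr/bin/ls    ==> the full path is required with -p
# pkgchk -l -P bin/ls         ==> a partial path is enough with -P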



Adding a Software Package from DVD

Example: To transfer the SUNWvts software package from CD-ROM & install it

# cd /cdrom/cdrom0/Solaris_10/ExtraValue/CoBundled/SunVTS_6.0/Packages

# pkgadd -d . SUNWvts


Adding Packages by Using a Spool Directory

The pkgadd command, by default, looks in the /var/spool/pkg directory for any packages specified on the command line; this is also the default directory for packages that have been spooled but not installed.


To copy a package from the Solaris DVD into the /var/spool/pkg directory

# pkgadd -d /cdrom/cdrom0/s0/Solaris_10/Product -s spool SUNWauda 

The -s option with the keyword spool copies the package into the /var/spool/pkg directory by default.


# ls -al /var/spool/pkg        ==> to verify that the package exists in the spool directory

# pkgadd SUNWauda        ==> to add the package from the spool area

# pkgrm -s spool SUNWauda    ==> to remove a package from the spool area 



To use an alternative spooling directory


# pkgadd -d /cdrom/cdrom0/s0/Solaris_10/Product -s /export/pkg SUNWauda

# pkgrm -s /export/pkg SUNWauda        ==> to remove a package from the spool directory



Removing a Software Package


Caution: Be careful about the dependency warnings you receive when removing a package. The system allows you to remove a package even though it may be required by a different package.
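Before removing a package, one way to look for dependents (a sketch; the depend(4) file is only present if the package ships one) is:

# more /var/sadm/pkg/SUNWauda/install/depend    ==> P lines are prerequisites, R lines list packages that depend on this one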

# pkgrm SUNWauda



PACKAGE FORMATS

File system (or Directory) format: Multiple files and directories.

A package (SUNWrsc) in file system format:

# ls -ld SUNWrsc

# cd SUNWrsc
# ls -l


Data stream format: Single file.

A package in data stream format:

# ls -l SUNWrsc.pkg

# file SUNWrsc.pkg

# head SUNWrsc.pkg


Translating Package Formats

To translate a package from file system format in /var/tmp to data stream format

# pkgtrans /var/tmp /tmp/SUNWrsc.pkg SUNWrsc

The first argument is the directory in which the file system format package is stored; the second argument is the package data stream file to create; the third argument is the package to translate.

If no package name is given, a list of all packages in the directory is displayed.
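For instance (a sketch; pkgtrans prompts interactively in this case):

# pkgtrans /var/tmp /tmp/bundle.pkg    ==> displays a menu of all packages found in /var/tmp; answering "all" translates every one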


# pkgadd -d /tmp/SUNWrsc.pkg all    ==> to install packages in a data stream format


Example:

To create a data streamed package

# cd /cdrom/cdrom0/s0/Solaris*

# pkgtrans -s Product /var/tmp/stream.pkg SUNWzlib SUNWftpr SUNWftpu

# file /var/tmp/stream.pkg

# head -5 /var/tmp/stream.pkg

# pkgadd -d /var/tmp/stream.pkg

Solaris ZFS (Cheat sheet) reference II

ZFS Cheatsheet II

Directories and Files
error messages: /var/adm/messages and the system console
States
DEGRADED: One or more top-level devices are in the degraded state because they have gone offline. Sufficient replicas exist to keep functioning.
FAULTED: One or more top-level devices are in the faulted state because they have gone offline. Insufficient replicas exist to keep functioning.
OFFLINE: The device was explicitly taken offline by the "zpool offline" command.
ONLINE: The device is online and functioning.
REMOVED: The device was physically removed while the system was running.
UNAVAIL: The device could not be opened.
Scrubbing and Resilvering
Scrubbing: Examines all data to discover hardware faults or disk failures. Only one scrub may be running at a time, and you can also start a scrub manually.
Resilvering: The same concept as rebuilding or resyncing data onto new disks in an array. The smart thing resilvering does is that it does not rebuild the whole disk, only the data that is required (the data blocks, not the free blocks), thus reducing the time needed to resync a disk. Resilvering is automatic when you replace disks, etc. If a scrub is already running it is suspended until the resilvering has finished, and then the scrubbing continues.
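Progress of either operation can be watched from zpool status; a minimal sketch, assuming a pool named data01:

zpool scrub data01
zpool status data01    ## the scrub: line shows "in progress", percent done, or the completion time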
ZFS Devices
Disk: A physical disk drive
File: The absolute path of pre-allocated files/images
Mirror: Standard raid-1 mirror
Raidz1/2/3: Non-standard distributed parity-based software raid levels. One common problem called the "write hole" is eliminated because in raidz the data and the stripe are written simultaneously: if a power failure occurs in the middle of a write, you either have the data plus the parity or you don't. ZFS also supports self-healing: if it cannot read a bad block it will reconstruct it using the parity, and repair it or indicate that this block should not be used.
## You should keep the raidz array at a low power of two plus parity
raidz1 - 3, 5, 9 disks
raidz2 - 4, 6, 8, 10, 18 disks
raidz3 - 5, 7, 11, 19 disks

## the more parity bits, the longer it takes to resilver an array; standard mirroring does not have to recreate the parity,
## so it is quicker at resilvering
## raidz is more like raid3 than raid5 but does use parity to protect against disk failures
raidz/raidz1 - minimum of 3 devices (one parity disk), you can suffer a one-disk loss
raidz2       - minimum of 4 devices (two parity disks), you can suffer a two-disk loss
raidz3       - minimum of 5 devices (three parity disks), you can suffer a three-disk loss
Spare: Hard drives marked as "hot spare" for ZFS raid; by default hot spares are not used on a disk failure, you must turn on the "autoreplace" feature
Cache: Linux caching mechanisms use least recently used (LRU) algorithms, basically first in, first out (FIFO): blocks are moved in and out of cache. ZFS cache is different: it caches both least recently used (LRU) block requests and least frequently used (LFU) block requests; the cache device uses a level 2 adaptive replacement cache (L2ARC)
Log: There are two terminologies here
  • ZFS intent log (ZIL) - a logging mechanism where all the data to be written is stored, then later flushed as a transactional write; this is similar to a journaled filesystem (ext3 or ext4)
  • Separate intent log (SLOG) - a separate logging device that caches the synchronous parts of the ZIL before flushing them to the slower disk; it does not cache asynchronous data (asynchronous data is flushed directly to disk). If a SLOG exists, the ZIL will be moved to it rather than residing on platter disk; everything in the SLOG will always be in system memory. Basically the SLOG is the device and the ZIL is the data on the device
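Both device types can also be added to an existing pool rather than only at pool creation; a minimal sketch, assuming a pool named data01 and illustrative device names:

zpool add data01 log mirror c3t0d0 c4t0d0    ## attach a mirrored SLOG to the existing pool
zpool add data01 cache c5t0d0                ## attach an L2ARC cache device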
Storage Pools
displaying:
zpool list
zpool list -o name,size,altroot

## zdb can view the inner workings of ZFS (zdb has a number of options)
zdb <option> <pool>

Note: there are a number of properties that you can select; the default is: name, size, used, available, capacity, health, altroot

status:
zpool status

## show only errored pools with more verbosity
zpool status -xv

statistics:
zpool iostat -v 5 5

Note: use this command like you would iostat

history:
zpool history -il

Note: once a pool has been removed the history is gone
creating:
## you cannot shrink a pool, only grow it

## perform a dry run without actually performing the creation (notice the -n)
zpool create -n data01 c1t0d0s0

## presume that I created two files called /zfs1/disk01 and /zfs1/disk02 using mkfile
zpool create data01 /zfs1/disk01 /zfs1/disk02

## using a standard disk slice
zpool create data01 c1t0d0s0

## using a different mountpoint than the default /<pool name>
zpool create -m /zfspool data01 c1t0d0s0

## mirror and hot spare disk examples; hot spares are not used by default, turn on the "autoreplace" feature for each pool
zpool create data01 mirror c1t0d0 c2t0d0 mirror c1t0d1 c2t0d1
zpool create data01 mirror c1t0d0 c2t0d0 spare c3t0d0

## setting up a log device and mirroring it
zpool create data01 mirror c1t0d0 c2t0d0 log mirror c3t0d0 c4t0d0

## setting up a cache device
zpool create data01 mirror c1t0d0 c2t0d0 cache c3t0d0 c3t1d0

## you can also create raid pools (raidz/raidz1 - single parity, raidz2 - double parity, raidz3 - triple parity)
zpool create data01 raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0
destroying:
zpool destroy data01

## in the event of a disaster you can re-import a destroyed pool
zpool import -f -D -d /zfs1 data01
adding:
zpool add data01 c2t0d0

Note: make sure that you get this right, as zpool only supports the removal of hot spares and cache disks; for mirrors see attach and detach below
resizing:
## when replacing a disk with a larger one you must enable the "autoexpand" feature to be able to use the extended space;
## you must do this before replacing the first disk
removing:
zpool remove data01 c2t0d0

Note: zpool only supports the removal of hot spares and cache disks; for mirrors see attach and detach below
clearing faults:
zpool clear data01

## clearing a specific disk fault
zpool clear data01 c2t0d0
attaching (mirror):
## c2t0d0 is an existing disk that is not mirrored; by attaching c3t0d0 both disks become a mirror pair
zpool attach data01 c2t0d0 c3t0d0

detaching (mirror):
zpool detach data01 c2t0d0

Note: see the notes above on attaching
onlining:
zpool online data01 c2t0d0

offlining:
zpool offline data01 c2t0d0

## temporarily offline (will revert back after a reboot)
zpool offline -t data01 c2t0d0
replacing:
## replacing like for like
zpool replace data03 c2t0d0

## replacing with another disk
zpool replace data03 c2t0d0 c3t0d0
scrubbing:
zpool scrub data01

## stop a scrub in progress; check the scrub line using "zpool status data01" to see any errors
zpool scrub -s data01

Note: see the top of the table for more information about resilvering and scrubbing
exporting:
zpool export data01

## you can list exported pools using the import command
zpool import

importing:
## when using standard disk devices, e.g. c2t0d0
zpool import data01

## if using files in, say, the /zfs filesystem
zpool import -d /zfs

## importing a destroyed pool
zpool import -f -D -d /zfs1 data03
getting parameters:
zpool get all data01

Note: the SOURCE column denotes whether the value has been changed from its default; a dash in this column means it is a read-only value

setting parameters:
zpool set autoreplace=on data01

Note: use the command "zpool get all <pool>" to obtain a list of current settings
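A single property can also be queried instead of all of them; for example:

zpool get autoreplace data01    ## SOURCE reads "default" until the value has been set locally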
upgrade:
## list upgrade paths
zpool upgrade -v

## upgrade all pools
zpool upgrade -a

## upgrade specific pool, use "zpool get all <pool>" to obtain version number of a pool
zpool upgrade data01

## upgrade to a specific version
zpool upgrade -V 10 data01
Filesystem
displaying:
zfs list

## list different types
zfs list -t filesystem
zfs list -t snapshot
zfs list -t volume
zfs list -t all -r <zpool>

## recursive display
zfs list -r data01/oracle

## complex listing
zfs list -o name,mounted,sharenfs,mountpoint

Note: there are a number of attributes that you can use in a complex listing, so use the man page to see them all
creating:
## presuming I have a pool called data01, create a /data01/apache filesystem
zfs create data01/apache

## using a different mountpoint
zfs create -o mountpoint=/oracle data01/oracle

## create a volume - the device can be accessed via /dev/zvol/[rdsk|dsk]/data01/swap
zfs create -V 50mb data01/swap
swap -a /dev/zvol/dsk/data01/swap

Note: don't use a zfs volume as a dump device, it is not supported
destroying:
zfs destroy data01/oracle

## using the recursive options: -r = all children, -R = all dependents
zfs destroy -r data01/oracle
zfs destroy -R data01/oracle
mounting:
zfs mount data01

## you can create a temporary mount that expires after unmounting
zfs mount -o mountpoint=/tmpmnt data01/oracle

Note: all the normal mount options can be applied, e.g. ro/rw, setuid
unmounting:
zfs umount data01
share:
zfs share data01

## persist over reboots
zfs set sharenfs=on data01

## specific hosts
zfs set sharenfs="rw=@10.85.87.0/24" data01/apache
unshare:
zfs unshare data01

## persist over reboots
zfs set sharenfs=off data01
snapshotting:
## snapshotting is like taking a picture; delta changes are recorded to the snapshot when the original file system changes. To
## remove a dataset all previous snapshots have to be removed. You can also rename snapshots.
## You cannot destroy a snapshot if it has a clone.

## creating a snapshot
zfs snapshot data01@10022010

## renaming a snapshot
zfs rename data01@10022010 data01@keep_this

## destroying a snapshot
zfs destroy data01@10022010
rollback:
## by default you can only roll back to the latest snapshot; to roll back to an older one you must delete all newer snapshots
zfs rollback data01@10022010
cloning/promoting:
## clones are writable filesystems created from a snapshot; a dependency on the snapshot remains as long as the
## clone exists. A clone uses the data from the snapshot to exist; as you use the clone it consumes space separate from the snapshot.
## Clones cannot be created across zpools; you need to use send/receive, see the topics below.

## cloning
zfs clone data01@10022010 data01/clone
zfs clone -o mountpoint=/clone data01@10022010 data01/clone

## promoting a clone; this allows you to destroy the original file system that the clone is attached to
zfs promote data01/clone

Note: the clone must reside in the same pool
renaming:
## the dataset must be kept within the same pool
zfs rename data03/ora_disk01 data03/ora_d01

Note: you have two options:
-p creates all the non-existing parent datasets
-r recursively renames the snapshots of all descendant datasets (used with snapshots only)
compression:
## you enable compression by setting a property; compression values are on, off, lzjb, gzip, gzip-[1-9] and zle. Note that it only
## starts compressing data written after you turn it on; existing data will not be compressed
zfs set compression=lzjb data03/apache

## you can get the compression ratio
zfs get compressratio data03/apache
deduplication:
## you can save disk space using deduplication, which can be at file, block or byte level. For example, with file-level dedup each
## file is hashed with a cryptographic hashing algorithm such as SHA-256; if a file matches, we just point to the existing file
## rather than storing a new file. This is ideal for small files, but for large files a single character change means all the data
## has to be copied.

## block deduplication allows you to share all the same blocks in a file minus the blocks that are different; this allows sharing of
## the unique blocks on disk and referencing of shared blocks in RAM. However it may need a lot of RAM to keep track of which
## blocks are shared and which are not; even so, this is the preferred option over file or byte deduplication. Shared blocks are
## stored in what is called a "deduplication table"; the more deduplicated blocks, the larger the table. The table is read every
## time a block changes, so the table should be held in fast RAM; if you run out of RAM the table spills over onto disk.
## So how much RAM do you need? You can use the zdb command to check: take the "bp count"; it takes about 320 bytes of RAM
## for each deduplicated block in the pool, so in my case 288674 blocks means I would need about 92MB. For example, a 200GB
## pool would need about 670MB for the table. A good rule is to allow 5GB of RAM for every 1TB of disk.

## to see the blocks the dataset consumes
zdb -b data01

## to turn on deduplication
zfs set dedup=on data01/text_files

## to see the deduplication ratio
zfs get dedupratio data01/text_files

## to see a histogram of how many blocks are referenced how many times
zdb -DD <pool>
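As a rough check of the sizing rule above (a sketch, assuming the ~320 bytes per deduplicated block figure; the exact zdb output format varies by release):

zdb -b data01 | grep 'bp count'
## e.g. "bp count: 288674"; 288674 x 320 bytes is roughly 92,000,000 bytes, the ~92MB quoted above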
getting parameters:
## list all the properties
zfs get all data03/oracle

## get a specific property
zfs get setuid data03/oracle

## get a list of a specific property for all datasets
zfs get compression

Note: the SOURCE column denotes whether the value has been changed from its default; a dash in this column means it is a read-only value
setting parameters:
## set and unset a quota
zfs set quota=50M data03/oracle
zfs set quota=none data03/oracle

Note: use the command "zfs get all <dataset>" to obtain a list of current settings
inherit:
## set back to the default value
zfs inherit compression data03/oracle
upgrade:
## list the upgrade paths
zfs upgrade -v

## list all the datasets that are not at the current version
zfs upgrade

## upgrade a specific dataset
zfs upgrade -V <version> data03/oracle
send/receive:
## here is a complete example of a send and receive with an incremental update

## create some test files
mkfile -v 100m /zfs/master
mkfile -v 100m /zfs/slave

## create mountpoints
mkdir /master
mkdir /slave

## create the pools
zpool create master /zfs/master
zpool create slave /zfs/slave

## create the data filesystem
zfs create master/data

## create a test file
echo "created: 09:58" > /master/data/test.txt

## create a snapshot and send it to the slave; you could use SSH or tape to transfer to another server (see below)
zfs snapshot master/data@1
zfs send master/data@1 | zfs receive slave/data

## set the slave to read-only, otherwise you can cause data corruption; make sure you do this before accessing anything in the
## slave/data directory
zfs set readonly=on slave/data

## update the original test.txt file
echo "`date`" >> /master/data/test.txt

## create a second snapshot and send the differences; you may get an error message saying that the destination has been
## modified, which means you did not set slave/data to read-only (see above)
zfs snapshot master/data@2
zfs send -i master/data@1 master/data@2 | zfs receive slave/data
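To confirm the replication worked, a quick check along these lines (a sketch) can be run on the slave:

zfs list -t all -r slave       ## slave/data plus the received @1 and @2 snapshots should be listed
cat /slave/data/test.txt       ## contents should match the master copy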
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
## using SSH
zfs send master/data@1 | ssh backup_server zfs receive backups/data@1

## using a tape drive; you can also use cpio
zfs send master/data@1 > /dev/rmt/0
zfs receive slave/data2@1 < /dev/rmt/0
zfs rename slave/data slave/data.old
zfs rename slave/data2 slave/data

## you can also save incremental data
zfs send master/data@12022010 > /dev/rmt/0
zfs send -i master/data@12022010 master/data@13022010 > /dev/rmt/0

## using gzip to compress the snapshot
zfs send master/fs@snap | gzip > /dev/rmt/0
allow/unallow:
## display the permission sets and any user permissions
zfs allow master

## create a permission set
zfs allow -s @permset1 create,mount,snapshot,clone,promote master

## delete a permission set
zfs unallow -s @permset1 master

## grant a user permissions
zfs allow vallep @permset1 master

## revoke a user's permissions
zfs unallow vallep @permset1 master

Note: there are many permissions that you can set, so see the man page or just use the "zfs allow" command
quota/reservation:
## not strictly a command topic, but worth covering here: you can apply a quota to a dataset, and you can reduce that quota only
## if the quota has not already been exceeded; if you exceed the quota you get an error message. You also have reservations,
## which guarantee that a specified amount of disk space is available to the filesystem. Both are applied to datasets and their
## descendants (snapshots, clones).
## Newer versions of Solaris allow you to set group and user quotas.

## you can also use refquota and refreservation to manage space without accounting for disk space consumed by descendants
## such as snapshots and clones; generally you would set quota and reservation higher than refquota and refreservation
  • quota & reservation - properties used for managing disk space consumed by datasets and their descendants
  • refquota & refreservation - properties used for managing disk space consumed by datasets only

## set a quota
zfs set quota=100M data01/apache

## get a quota
zfs get quota data01/apache

## set up a user quota (use groupquota for groups)
zfs set userquota@vallep=100M data01/apache

## remove a user quota (use groupquota for groups)
zfs set userquota@vallep=none data01/apache

## list user quotas (use groupspace for groups); you can also list users with quotas, for example the root user
zfs userspace data01/apache
zfs get userused@vallep data01/apache
ZFS tasks
Replace failed disk:
# list the zpools and identify the failed disk
zpool list

# replace the disk (can use the same disk or a new disk)
zpool replace data01 c1t0d0
zpool replace data01 c1t0d0 c1t1d0

# clear any existing errors
zpool clear data01

# scrub the pool to check for any more errors (this depends on the size of the zpool as it can take a long time to complete)
zpool scrub data01

# you can now remove the failed disk in the normal way, depending on your hardware
Expand a pool's capacity:
# you cannot remove a disk from a pool but you can replace it with a larger disk
zpool replace data01 c1t0d0 c2t0d0
zpool set autoexpand=on data01
Install the boot block:
# the command depends on whether you are using a SPARC or an x86 system
sparc - installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c0t1d0
x86   - installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t1d0s0
Lost root password:
# you have two options to recover the root password

## option one
ok> boot -F failsafe
     when requested, follow the instructions to mount the rpool on /a
cd /a/etc
vi passwd|shadow
init 6

## option two
ok> boot cdrom|net -s    (you can boot from the network or cdrom)
zpool import -R /a rpool
zfs mount rpool/ROOT/zfsBE
cd /a/etc
vi passwd|shadow
init 6
Primary mirror disk in root is unavailable or fails:
# boot from the secondary mirror
ok> boot disk1

## offline and unconfigure the failed disk; there may be different options for unconfiguring a disk depending on the hardware
zpool offline rpool c0t0d0s0
cfgadm -c unconfigure c1::dsk/c0t0d0

# now you can physically replace the disk, reconfigure it and bring it online
cfgadm -c configure c1::dsk/c0t0d0
zpool online rpool c0t0d0s0

# let the pool know you have replaced the disk
zpool replace rpool c0t0d0s0

# if the replace above fails, detach and reattach the primary mirror
zpool detach rpool c0t0d0s0
zpool attach rpool c0t1d0s0 c0t0d0s0

# make checks
zpool status rpool

# don't forget to add the boot block (see above)
Resize swap area (and dump areas):
# you can resize the swap area if it is not being used; first record the size and whether it is in use
swap -l

# resize the swap area, first by removing it
swap -d /dev/zvol/dsk/rpool/swap
zfs set volsize=2G rpool/swap

# now activate the swap and check the size; if the -a option does not work then use the "swapadd" command
swap -a /dev/zvol/dsk/rpool/swap
swap -l

Note: if you cannot delete the original swap area because it is too busy, simply add another swap area; the same procedure is used for dump areas but using the "dumpadm" command
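A sketch of the equivalent dump-area procedure, assuming a standard ZFS root with the usual rpool/dump volume:

dumpadm                                 ## record the current dump device
zfs set volsize=2G rpool/dump           ## resize the dump volume
dumpadm -d /dev/zvol/dsk/rpool/dump     ## point the dump configuration back at the resized volume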

Thursday, September 10, 2015

Service Management Facility II

SMF-Service Management Facility administration II

Service Management Facility administration: In the last post we saw how to list services using the SMF commands; here we will see how to administer those services (FMRIs) using SMF commands. In Solaris operating system administration you should know how to start, stop and restart SMF services, and also how to troubleshoot a service that does not come online. At the end of the post we will see how to back up and restore the SMF repository. svcadm is the command used to administer SMF services.
Note: Solaris 10 still supports the legacy method of stopping and starting services using rc scripts.

From man page of svcadm,
bash-3.00# svcadm
Usage: svcadm [-v] [cmd [args ... ]]
        svcadm enable [-rst] <service> ...       - enable and online service(s)
        svcadm disable [-st] <service> ...       - disable and offline service(s)
        svcadm restart <service> ...             - restart specified service(s)
        svcadm refresh <service> ...             - re-read service configuration
        svcadm mark [-It] <state> <service> ...  - set maintenance state
        svcadm clear <service> ...               - clear maintenance state
        svcadm milestone [-d] <milestone>        - advance to a service milestone

        Services can be specified using an FMRI, abbreviation, or fnmatch(5)
        pattern, as shown in these examples for svc:/network/smtp:sendmail

        svcadm <cmd> svc:/network/smtp:sendmail
        svcadm <cmd> network/smtp:sendmail
        svcadm <cmd> network/*mail
        svcadm <cmd> network/smtp
        svcadm <cmd> smtp:sendmail
        svcadm <cmd> smtp
        svcadm <cmd> sendmail

1. How to enable an SMF service?
First, list the service you want to enable using the svcs command. Then use svcadm to enable the service and monitor its status using svcs.
Sometimes you need to use the complete FMRI (svc:/network/ipfilter:default).
bash-3.00# svcs ipfilter
STATE          STIME    FMRI
disabled       May_17   svc:/network/ipfilter:default
bash-3.00# svcadm enable ipfilter
bash-3.00# svcs ipfilter
STATE          STIME    FMRI
offline*       11:44:16 svc:/network/ipfilter:default
bash-3.00# svcs ipfilter
STATE          STIME    FMRI
online         11:44:31 svc:/network/ipfilter:default


2. How to disable an SMF service?
You can disable the service using the svcadm disable command and check the status using svcs.
bash-3.00# svcadm disable  svc:/network/ipfilter:default
bash-3.00# svcs  svc:/network/ipfilter:default
STATE          STIME    FMRI
disabled       11:50:00 svc:/network/ipfilter:default

3. How to enable/disable a service temporarily?
In SMF, we have an option to enable/disable a service temporarily. A temporary disable will not persist across a reboot. You can see the temporary flag in the enabled status.
bash-3.00# svcadm disable -t svc:/network/ipfilter:default
bash-3.00# svcs -l svc:/network/ipfilter:default
fmri         svc:/network/ipfilter:default
name         IP Filter
enabled      false (temporary)
state        disabled
next_state   none
state_time   Sat May 25 11:53:10 2013
logfile      /var/svc/log/network-ipfilter:default.log
restarter    svc:/system/svc/restarter:default
dependency   require_all/restart file://localhost/etc/ipf/ipf.conf (online)
dependency   require_all/none svc:/system/filesystem/usr (online)
dependency   require_all/restart svc:/network/pfil (online)
dependency   require_all/restart svc:/network/physical (online)
dependency   require_all/restart svc:/system/identity:node (online)
bash-3.00#

bash-3.00# svcadm enable -t svc:/network/ipfilter:default
bash-3.00#
bash-3.00# svcs -l svc:/network/ipfilter:default
fmri         svc:/network/ipfilter:default
name         IP Filter
enabled      true (temporary)
state        online
next_state   none
state_time   Sat May 25 12:12:11 2013
logfile      /var/svc/log/network-ipfilter:default.log
restarter    svc:/system/svc/restarter:default
dependency   require_all/restart file://localhost/etc/ipf/ipf.conf (online)
dependency   require_all/none svc:/system/filesystem/usr (online)
dependency   require_all/restart svc:/network/pfil (online)
dependency   require_all/restart svc:/network/physical (online)
dependency   require_all/restart svc:/system/identity:node (online)

4. How to restart a service?
bash-3.00# svcadm restart  svc:/network/ipfilter:default
bash-3.00# svcs svc:/network/ipfilter:default
STATE          STIME    FMRI
online         12:13:16 svc:/network/ipfilter:default
bash-3.00# date
Sat May 25 12:13:27 IST 2013

5. How to reload the configuration without restarting the service?
bash-3.00# svcadm refresh svc:/network/ipfilter:default
bash-3.00# svcs svc:/network/ipfilter:default
STATE          STIME    FMRI
online         12:14:29 svc:/network/ipfilter:default
bash-3.00#

6. How to mark a service as being in the maintenance state?
bash-3.00# svcadm mark -It maintenance svc:/network/ipfilter:default
bash-3.00# svcs svc:/network/ipfilter:default
STATE          STIME    FMRI
maintenance    12:16:01 svc:/network/ipfilter:default

7. How to clear a service from the maintenance state?
bash-3.00# svcs svc:/network/ipfilter:default
STATE          STIME    FMRI
maintenance    12:16:01 svc:/network/ipfilter:default
bash-3.00# svcadm clear svc:/network/ipfilter:default
bash-3.00# svcs svc:/network/ipfilter:default
STATE          STIME    FMRI
online         12:17:58 svc:/network/ipfilter:default

8. How to restore the SMF repository?
You can restore the complete SMF repository using the method below, but a reboot is needed for it to take effect.
Caution: the system may reboot immediately, depending on the options you provide.
bash-3.00# /lib/svc/bin/restore_repository

See http://sun.com/msg/SMF-8000-MY for more information on the use of
this script to restore backup copies of the smf(5) repository.

If there are any problems which need human intervention, this script will
give instructions and then exit back to your shell.

Note that upon full completion of this script, the system will be rebooted
using reboot(1M), which will interrupt any active services.

The following backups of /etc/svc/repository.db exist, from
oldest to newest:

manifest_import-20120628_141735
manifest_import-20120628_150827
manifest_import-20121214_201555
boot-20130107_130944
boot-20130204_213314
boot-20130322_230722
boot-20130517_200728

The backups are named based on their type and the time at which they were taken.
Backups beginning with "boot" are made before the first change is made to
the repository after system boot.  Backups beginning with "manifest_import"
are made after svc:/system/manifest-import:default finishes its processing.
The time of backup is given in YYYYMMDD_HHMMSS format.

Please enter either a specific backup repository from the above list to
restore it, or one of the following choices:

        CHOICE            ACTION
        ----------------  ----------------------------------------------
        boot              restore the most recent post-boot backup
        manifest_import   restore the most recent manifest_import backup
        -seed-            restore the initial starting repository  (All
                            customizations will be lost, including those
                            made by the install/upgrade process.)
        -quit-            cancel script and quit

Enter response [boot]:
After confirmation, the following steps will be taken:
svc.startd(1M) and svc.configd(1M) will be quiesced, if running.
/etc/svc/repository.db
    -- renamed --> /etc/svc/repository.db_old_20130525_122228
/etc/svc/repository-boot
    -- copied --> /etc/svc/repository.db
and the system will be rebooted with reboot(1M).
Proceed [yes/no]? y

Quiescing svc.startd(1M) and svc.configd(1M): done.
/etc/svc/repository.db
    -- renamed --> /etc/svc/repository.db_old_20130525_122228
/etc/svc/repository-boot
    -- copied --> /etc/svc/repository.db
The backup repository has been successfully restored.
Rebooting in 5 seconds.

9. How to revert a specific SMF service snapshot?
You can revert a specific service to an old snapshot using the following method.
bash-3.00# svccfg
svc:> select svc:/network/ipfilter:default
svc:/network/ipfilter:default> listsnap
initial
last-import
running
svc:/network/ipfilter:default> revert running
svc:/network/ipfilter:default> quit
bash-3.00# svcadm refresh svc:/network/ipfilter:default
bash-3.00# svcadm restart svc:/network/ipfilter:default
bash-3.00# svcs svc:/network/ipfilter:default
STATE          STIME    FMRI
disabled       12:24:17 svc:/network/ipfilter:default
bash-3.00# svcadm enable svc:/network/ipfilter:default
bash-3.00# svcs svc:/network/ipfilter:default
STATE          STIME    FMRI
online         12:32:17 svc:/network/ipfilter:default
bash-3.00#

Service Management Facility I

Solaris 10 Services

There is a new feature in the way Solaris 10 handles services; this feature is called the Service Management Facility (SMF).

Terminology

A fault managed resource identifier (FMRI) identifies a service:

            svc:/system/system-log:default

svc:                   Service type
/system/system-log     Name
:default               Instance
The repository is the source for all known services on the system; it imports the service manifest into the database and then never references the manifest again.
The manifest properties are the following and are contained in an XML file (see Appendix A):
  • Name of service
  • Number of instances
  • Start, stop and refresh methods
  • Property groups
  • Service model
  • Fault handling
  • Documentation template
A milestone is a predefined set of capabilities for a set of services, similar to a run level.
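For example, the standard milestone FMRIs can be used to take the system to a reduced service level and back (a sketch):

# svcadm milestone svc:/milestone/single-user:default     (drop to single-user services)
# svcadm milestone all                                    (return to all services)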

File locations
SMF log files: /var/svc/log
SMF log files (before /var is mounted): /etc/svc/volatile
SMF manifests: /var/svc/manifest/*
SMF methods: /lib/svc/method/*

Daemons
start svc daemon: svc.startd
svc configuration daemon: svc.configd

Service Commands 
Show the state of all services: svcs -a
Show detailed information: svcs -l
Show the dependencies: svcs -d
Show the dependents: svcs -D
Show the processes of a service: svcs -p
Explain why the service failed: svcs -x
Verbose information: svcs -v
Disable a service (stop): svcadm disable
Enable a service (start): svcadm enable
Restart a service: svcadm restart
Reread the config file (HUP): svcadm refresh
Put service into maintenance/degraded mode: svcadm mark
Take services to the desired milestone level: svcadm milestone
Show values for a given property: svcprop -p
Show details from a snapshot: svcprop -s
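As an example of the two svcprop commands above, using sendmail purely as an illustration (start/exec and general/enabled are standard property names):

# svcprop -p start/exec network/smtp:sendmail        (show the start method's exec string)
# svcprop -p general/enabled network/smtp:sendmail   (show whether the service is enabled)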

INETD services commands
List all configured inetd services: inetadm
Detailed information on an inetd service: inetadm -l
Enable an inetd service: inetadm -e
Disable an inetd service: inetadm -d

Administration

If a service fails to start
            # svcs -xv

To modify the properties of an inetd service
# inetadm -m spray bind_addr="192.168.0.1"
# inetadm -l spray

Repair a corrupt repository using the default repository

Script:
# /lib/svc/bin/restore_repository          (follow the prompts)

Manually:
# pstop `pgrep svc.startd`
# pkill svc.configd
# cp /etc/svc/repository.db /etc/svc/repository.bad

Global zone:
# cp /lib/svc/seed/global.db /etc/svc/repository.db
# reboot

Non-global zone:
# cp /lib/svc/seed/nonglobal.db /etc/svc/repository.db
# reboot    (only reboot the zone)

Note: all old repositories are in /etc/svc; you can use an old one in place of the global seed

Start service interactively during boot
ok> boot -m milestone=none        (login as root)
# svcadm milestone -t all         (enable all services)
# svcs -l                         (look for hanging services and check log files in /var/svc/log)
continue with normal boot procedures

Other boot commands are:
ok> boot -m verbose               (verbose output)
ok> boot -m debug                 (very verbose output)

Manifests

To check the integrity of a manifest xml file
       # /usr/bin/xmllint mysvc.xml
To import your service manifest           
       # /usr/sbin/svccfg -v import /var/svc/manifest/site/mysvc.xml                                                                 
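To capture the current configuration back out of the repository (mysvc being the same hypothetical service as above):
       # /usr/sbin/svccfg export mysvc > /var/tmp/mysvc.xml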
Appendix A – screen shots

List of services configured
# svcs
legacy_run      Mar_02    lrc:/etc/rc3_d/S80mipagent
legacy_run      Mar_02    lrc:/etc/rc3_d/S81volmgt
legacy_run      Mar_02    lrc:/etc/rc3_d/S82initsma
legacy_run      Mar_02    lrc:/etc/rc3_d/S84appserv
legacy_run      Mar_02    lrc:/etc/rc3_d/S90samba
online          Mar_02    svc:/system/svc/restarter:default
online          Mar_02    svc:/network/pfil:default
online          Mar_02    svc:/system/filesystem/root:default
online          Mar_02    svc:/network/loopback:default
online          Mar_02    svc:/milestone/name-services:default
offline         Mar_02    svc:/application/print/ipp-listener:default
offline         Mar_02    svc:/application/print/rfc1179:default

A detailed list of a service
# svcs -l sendmail
fmri          svc:/network/smtp:sendmail
name          sendmail SMTP mail transfer agent
enabled       true
state         online
next_state    none
state_time    Thu 03 Mar 2005 07:05:07 AM GMT
logfile       /var/svc/log/network-smtp:sendmail.log
restarter     svc:/system/svc/restarter:default
contract_id   216
dependency    require_all/refresh file://localhost/etc/mail/sendmail.cf (online)
dependency    require_all/refresh file://localhost/etc/nsswitch.conf (online)
dependency    optional_all/none svc:/system/filesystem/autofs (online)
dependency    require_all/none svc:/system/filesystem/local (online)
dependency    require_all/none svc:/network/service (online)
dependency    require_all/refresh svc:/milestone/name-services (online)
dependency    optional_all/refresh svc:/system/identity:domain (online)
dependency    optional_all/none svc:/system/system-log (online)


Saturday, September 5, 2015

Solaris ZFS (Cheat sheet) reference I

Pool Related Commands

# zpool create datapool c0t0d0    ==> create a basic pool named datapool
# zpool create -f datapool c0t0d0    ==> force the creation of a pool
# zpool create -m /data datapool c0t0d0    ==> create a pool with a different mount point than the default
# zpool create datapool raidz c3t0d0 c3t1d0 c3t2d0    ==> create a RAID-Z vdev pool
# zpool add datapool raidz c4t0d0 c4t1d0 c4t2d0    ==> add a RAID-Z vdev to pool datapool
# zpool create datapool raidz1 c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0    ==> create a RAID-Z1 pool
# zpool create datapool raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0    ==> create a RAID-Z2 pool
# zpool create datapool mirror c0t0d0 c0t5d0    ==> mirror c0t0d0 to c0t5d0
# zpool create datapool mirror c0t0d0 c0t5d0 mirror c0t2d0 c0t4d0    ==> disk c0t0d0 is mirrored with c0t5d0 and disk c0t2d0 is mirrored with c0t4d0
# zpool add datapool mirror c3t0d0 c3t1d0    ==> add a new mirrored vdev to datapool
# zpool add datapool spare c1t3d0    ==> add spare device c1t3d0 to datapool
# zpool create -n geekpool c1t3d0    ==> do a dry run on pool creation

Show Pool Information

# zpool status -x    ==> show pool status
# zpool status -v datapool    ==> show individual pool status in verbose mode
# zpool list    ==> show all the pools
# zpool list -o name,size    ==> show particular properties of all the pools (here, name and size)
# zpool list -Ho name    ==> show all pools without headers and columns

File-system/Volume related commands

# zfs create datapool/fs1    ==> create file system fs1 under datapool
# zfs create -V 1gb datapool/vol01    ==> create a 1 GB volume (block device) in datapool
# zfs destroy -r datapool    ==> destroy datapool and all datasets under it
# zfs destroy -fr datapool/data    ==> destroy the file system or volume (data) and all related snapshots

Set ZFS file system properties

# zfs set quota=1G datapool/fs1    ==> set a quota of 1 GB on filesystem fs1
# zfs set reservation=1G datapool/fs1    ==> set a reservation of 1 GB on filesystem fs1
# zfs set mountpoint=legacy datapool/fs1    ==> disable ZFS auto mounting and enable mounting through /etc/vfstab (see the example entry below)
# zfs set sharenfs=on datapool/fs1    ==> share fs1 as NFS
# zfs set compression=on datapool/fs1    ==> enable compression on fs1
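With mountpoint=legacy set, an /etc/vfstab entry along these lines (a sketch) performs the mount:

#device to mount  device to fsck  mount point  FS type  fsck pass  mount at boot  mount options
datapool/fs1      -               /data        zfs      -          yes            -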


Show file system info

# zfs list    ==> list all ZFS file systems
# zfs get all datapool    ==> list all properties of a ZFS file system

Mount/Umount Related Commands

# zfs set mountpoint=/data datapool/fs1    ==> set the mount point of file system fs1 to /data
# zfs mount datapool/fs1    ==> mount the fs1 file system
# zfs umount datapool/fs1    ==> unmount ZFS file system fs1
# zfs mount -a    ==> mount all ZFS file systems
# zfs umount -a    ==> unmount all ZFS file systems

ZFS I/O performance

# zpool iostat 2    ==> display ZFS I/O statistics every 2 seconds
# zpool iostat -v 2    ==> display detailed ZFS I/O statistics every 2 seconds

ZFS maintenance commands

# zpool scrub datapool    ==> run a scrub on all file systems under datapool
# zpool offline -t datapool c0t0d0    ==> temporarily offline a disk (until next reboot)
# zpool online datapool c0t0d0    ==> online a disk to clear the error count
# zpool clear datapool    ==> clear the error count (without needing to online/offline the disk)

Import/Export Commands

# zpool import    ==> list pools available for import
# zpool import -a    ==> import all pools found in the search directories
# zpool import -d <dir>    ==> search for pools with block devices not located in /dev/dsk
# zpool import -d /zfs datapool    ==> search for a pool with block devices created in /zfs
# zpool import oldpool newpool    ==> import a pool originally named oldpool under the new name newpool
# zpool import 3987837483    ==> import a pool using its pool ID
# zpool export datapool    ==> deport the ZFS pool named datapool
# zpool export -f datapool    ==> force the unmount and deport of a ZFS pool

Snapshot Commands

# zfs snapshot datapool/fs1@12jan2014    ==> create a snapshot named 12jan2014 of the fs1 filesystem
# zfs list -t snapshot    ==> list snapshots
# zfs rollback -r datapool/fs1@10jan2014    ==> roll back to 10jan2014 (recursively destroying intermediate snapshots)
# zfs rollback -rf datapool/fs1@10jan2014    ==> roll back, forcing unmount and remount
# zfs destroy datapool/fs1@10jan2014    ==> destroy the snapshot created earlier
# zfs send datapool/fs1@oct2013 > /geekpool/fs1/oct2013.bak    ==> take a backup of a ZFS snapshot locally
# zfs receive anotherpool/fs1 < /geekpool/fs1/oct2013.bak    ==> restore from the snapshot backup taken earlier
# zfs send datapool/fs1@oct2013 | zfs receive anotherpool/fs1    ==> combine the send and receive operations
# zfs send datapool/fs1@oct2013 | ssh node02 "zfs receive testpool/testfs"    ==> send the snapshot to a remote system node02

Clone Commands

# zfs clone datapool/fs1@10jan2014 datapool/fs1clone    ==> clone an existing snapshot (the clone must be a dataset in the same pool)
# zfs destroy datapool/fs1clone    ==> destroy the clone