Thursday, July 28, 2011

Shell script to monitor Server Space

Below is a script that monitors the disk space usage of a server. Save it as
"space.sh" and run it as "sh space.sh".

##### start of space.sh #####

hostname=`hostname`
date=`date`
threshold=50
df -h > space.txt
# number of lines in the "df -h" output (computed instead of hard-coded)
nooflines=`wc -l < space.txt`

echo "      DISK SPACE INFORMATION"
echo "=================================="
lno=2      # line 1 of space.txt is the df header, so start from line 2
while [ $lno -le $nooflines ]
do
  usep=`sed -n ${lno}p space.txt | awk '{print $5}' | cut -d'%' -f1`
  # the test fails quietly when Use% is not numeric (e.g. "-")
  if [ "$usep" -ge $threshold ] 2>/dev/null; then
    echo "Server Name     : $hostname"
    echo "Date            : $date"
    sed -n ${lno}p space.txt | awk '{print "Space Status    : file system " $6 " is " $5 " full"}'
    sed -n ${lno}p space.txt | awk '{print "Total Size      : " $2}'
    sed -n ${lno}p space.txt | awk '{print "Used Space      : " $3}'
    sed -n ${lno}p space.txt | awk '{print "Free Space      : " $4}'
    echo "Threshold Limit : $threshold%"
    echo " -------------------------------------"
  fi
  lno=`expr $lno + 1`
done
rm space.txt

##### end of space.sh #####

Sample output:
===============
bash-3.00$ sh space.sh
     
DISK SPACE INFORMATION
==================================
Server Name     : appikonda
Date            : Wed Jun 15 09:40:32 IST 2011
Space Status    : file system /export/home is 63% full
Total Size      : 43G
Used Space      : 27G
Free Space      : 16G
Threshold Limit : 50%
 -------------------------------------
Server Name     : appikonda
Date            : Wed Jun 15 09:40:32 IST 2011
Space Status    : file system /data/export/home is 97% full
Total Size      : 58G
Used Space      : 55G
Free Space      : 2.0G
Threshold Limit : 50%
 -------------------------------------
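The heart of space.sh is extracting the Use% column from "df -h" and comparing it to the threshold. The sketch below condenses that step into a single awk pass; the df lines are inlined sample data modeled on the output above, not a live df run, so it can be tried anywhere:

```shell
#!/bin/sh
# Condensed sketch of the space.sh check: strip the "%" from column 5 and
# report any file system at or above the threshold. The df output is
# inlined sample data here; space.sh reads real "df -h" output instead.
threshold=50
printf '%s\n' \
  'Filesystem            Size  Used Avail Use% Mounted on' \
  '/dev/dsk/c1t0d0s0      43G   27G   16G  63% /export/home' \
  '/dev/dsk/c1t0d0s3      58G   55G  2.0G   4% /data' |
awk -v t=$threshold 'NR > 1 { sub(/%/, "", $5); if ($5 + 0 >= t) print $6 " is " $5 "% full" }'
```

With the sample data above, only /export/home crosses the 50% threshold and gets reported.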

Disabling and Enabling SQL*Plus commands in Oracle 10g

1.       Place the scripts below as enable.sh, disable.sh, and list.sh in the "oracle" user's home directory.
2.       Run the disable.sh script as shown below to disable a command for a specified user.
>sh disable.sh TESTDB INSERT
------ here the INSERT command will be disabled for the TESTDB user.
3.       You can verify the result using the list.sh script.
>sh list.sh
USERID                         ATTRIBUTE                      CHAR_VALUE
------------------------------ ------------------------------ ------------------------------
TESTDB                           INSERT                         DISABLED

4.       Run the enable.sh script as shown below to re-enable a command for a specified user.
>sh enable.sh TESTDB INSERT

After logging in to SQL*Plus as the user, issue the command. If the command is disabled, the error below is shown:

>sqlplus testdb/testdb

TESTDB > insert

SP2-0544: Command "insert" disabled in Product User Profile


Disable script ( disable.sh ) :
=====================
#!/bin/sh
userid=$1
attribute=$2
if [ "$userid" != 'SYS' ] && [ "$userid" != 'SYSTEM' ]; then
  user_chk=`sqlplus -s / as sysdba << !
SET HEAD OFF FEEDBACK OFF;
SELECT count(*) FROM dba_users WHERE username='$userid';
EXIT
!`
  if [ $user_chk -gt 0 ]
  then
    sqlplus -S / as sysdba << EOF
insert into system.product_user_profile (product, userid, attribute, scope, numeric_value, char_value, date_value, long_value)
values ('SQL*Plus', '$userid', '$attribute', NULL, NULL, 'DISABLED', NULL, NULL);
commit;
exit;
EOF
  else
    echo "User id $userid does not exist in the database"
  fi
else
  echo "Product profile settings are not applicable for user $userid"
fi
#####  end of disable.sh  ####

Enable script ( enable.sh ) :
===================

#!/bin/sh
userid=$1
attribute=$2
if [ "$userid" != 'SYS' ] && [ "$userid" != 'SYSTEM' ]; then
  sqlplus -S / as sysdba << EOF
delete from system.product_user_profile where userid='$userid' and attribute='$attribute';
commit;
exit;
EOF
else
  echo "Product profile settings are not applicable for user $userid"
fi
##### end of enable.sh #######


List information ( list.sh ) :
====================

#!/bin/sh
sqlplus -S / as sysdba << EOF
col userid for a30
col attribute for a30
col char_value for a30
set lines 150
select userid, attribute, char_value from system.product_user_profile;
exit;
EOF
##### end of list.sh #######


shell script to transfer archivelogs to another server

The script below sends archive log files from one server to another, always keeping the 5 most recent files in the source location.

Save this code as movearch.sh and run it as 
" sh movearch.sh >>log.txt"

The variables at the top of the script are assumptions; set them to match your environment:

#!/bin/sh
TODAYD=`date '+%d%m%y_%HH%MM%SS'`
HOST='192.168.1.1'
USER='oracle'
PASSWD='oracle'
FILE=arch_dir_$TODAYD.tar.gz
SOURCEDIR='/arch/oracle/oradata/arch'
DESTDIR='/switchpool/pool/ARC_PRD_BKP'

echo "=============================================">>$SOURCEDIR/log.txt
echo " script execution started on `date` ">>$SOURCEDIR/log.txt
echo "=============================================">>$SOURCEDIR/log.txt

cnt=`ls $SOURCEDIR/*.arc 2>/dev/null | wc -l`
cnt1=`expr $cnt - 5`
# always keep the 5 most recent files in the source location
if [ $cnt1 -le 0 ]; then
echo "Fewer than 6 archive files found : nothing to move">>$SOURCEDIR/log.txt
exit 0
fi

mkdir $SOURCEDIR/arch_dir_$TODAYD
echo "Total files found : $cnt">>$SOURCEDIR/log.txt

# "ls -rt" lists oldest first, so "head" picks everything except the 5 newest
for i in `ls -rt $SOURCEDIR/*.arc | head -$cnt1`
do
mv $i $SOURCEDIR/arch_dir_$TODAYD
echo "$i">>$SOURCEDIR/log.txt
done
echo "Files gzip started : `date`">>$SOURCEDIR/log.txt
gzip $SOURCEDIR/arch_dir_$TODAYD/*.arc
echo "Files gzip finished : `date`">>$SOURCEDIR/log.txt
tar -cvf $SOURCEDIR/arch_dir_$TODAYD.tar $SOURCEDIR/arch_dir_$TODAYD
gzip $SOURCEDIR/arch_dir_$TODAYD.tar
echo "gzip $SOURCEDIR/arch_dir_$TODAYD.tar.gz completed : `date`">>$SOURCEDIR/log.txt
rm -r $SOURCEDIR/arch_dir_$TODAYD
echo "ftp starts : `date`">>$SOURCEDIR/log.txt
#########  FTP starts here ###########
ftp -n $HOST <<END_SCRIPT
quote USER $USER
quote PASS $PASSWD
lcd $SOURCEDIR
cd $DESTDIR
put $FILE
quit
END_SCRIPT
echo "ftp ends : `date`">>$SOURCEDIR/log.txt
echo "=============================================">>$SOURCEDIR/log.txt
echo " script execution completed on `date` ">>$SOURCEDIR/log.txt
echo "=============================================">>$SOURCEDIR/log.txt

###########  END OF SCRIPT ###########
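The "keep the 5 most recent" selection above hinges on "ls -rt" (oldest first) combined with "head". Here is a standalone sketch of just that logic, using a scratch directory and fixed timestamps so it runs anywhere; the file names are made up for illustration:

```shell
#!/bin/sh
# Demonstrates the selection logic from movearch.sh: of 8 files,
# the 3 oldest are selected for moving and the 5 newest stay put.
dir=`mktemp -d`
for n in 01 02 03 04 05 06 07 08; do
  # give each file a distinct, increasing modification time (day $n of Jan 2024)
  touch -t "202401${n}0900" "$dir/a$n.arc"
done
cnt=`ls $dir/*.arc | wc -l`
cnt1=`expr $cnt - 5`
if [ $cnt1 -gt 0 ]; then
  # "ls -rt" sorts oldest first, so "head" keeps everything except the 5 newest
  ls -rt $dir/*.arc | head -$cnt1
fi
rm -rf "$dir"
```

Running this prints the paths of a01.arc, a02.arc, and a03.arc, i.e. the three oldest files that movearch.sh would move into the staging directory.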

Standby database using RMAN backups

In Primary:
1)      Create a standby control file.

SQL>alter database create standby controlfile as '/Oracle/oracle/control01.ctl';

2)      Log in to the RMAN prompt.

$rman target sys/sys
RMAN> backup as compressed backupset incremental level=0 database maxsetsize=1g format '/Oracle/oracle/full_comp_prime_%t.bkp';

3)      SCP all the backup files to corresponding location in standby.

4)      Modify the pfile of primary as required and restart the DB.

In Secondary:
1)      Copy the backup files to the corresponding location.
2)      Modify the standby pfile as required  and start the DB in nomount stage.

SQL>startup pfile=initprime.ora nomount
3)      Mount the standby database.

SQL>alter database mount standby database;


Primary database Pfile:

*.db_files=230
*.db_name='prime'
*.instance_name='prime'
*.processes=100
*.sessions=100
*.sga_max_size=150m
*.sga_target=116m
*.undo_management='AUTO'
*.undo_retention=10800
*.undo_tablespace='UNDOTBS1'
*.background_dump_dest='/Oracle/oracle/oradata/data/admin/bdump'
*.control_files='/Oracle/oracle/oradata/data/oradata/control01.ctl'
*.core_dump_dest='/Oracle/oracle/oradata/data/admin/cdump'
*.user_dump_dest='/Oracle/oracle/oradata/data/admin/udump'

*.standby_file_management=AUTO
*.log_archive_format='%t_%r_%s.dbf'
*.remote_login_passwordfile='EXCLUSIVE'

*.FAL_CLIENT='153'
*.FAL_SERVER='152'
*.log_archive_dest_1='LOCATION=/Oracle/oracle/oradata/data/admin/arch'
*.log_archive_dest_2='service=152 lgwr sync'



*.log_file_name_convert='/Oracle/oracle/oradata/data/oradata/two','/Oracle/oracle/oradata/prime/two'
*.db_file_name_convert='/Oracle/oracle/oradata/data/oradata/one','/Oracle/oracle/oradata/prime/one','/Oracle/oracle/oradata/data/oradata/two','/Oracle/oracle/oradata/prime/two','/Oracle/oracle/oradata/data/oradata/three','/Oracle/oracle/oradata/prime/three'
*.standby_archive_dest='/Oracle/oracle/admin/prime/arch'


Standby database Pfile:
*.background_dump_dest='/Oracle/oracle/admin/prime/bdump'
*.control_files='/Oracle/oracle/oradata/prime/control01.ctl'
*.core_dump_dest='/Oracle/oracle/admin/prime/cdump'
*.db_files=230
*.db_name='prime'
*.instance_name='prime'
*.log_archive_dest_1='LOCATION=/Oracle/oracle/admin/prime/arch'
*.log_archive_format='%t_%r_%s.dbf'
*.processes=100
*.sessions=100
*.sga_max_size=150m
*.sga_target=125m
*.undo_management='AUTO'
*.undo_retention=10800
*.undo_tablespace='UNDOTBS1'
*.user_dump_dest='/Oracle/oracle/admin/prime/udump'
*.FAL_CLIENT='152'
*.FAL_SERVER='153'
*.remote_login_passwordfile='EXCLUSIVE'
*.standby_file_management='AUTO'
*.log_file_name_convert='/Oracle/oracle/oradata/data/oradata/two','/Oracle/oracle/oradata/prime/two'
*.db_file_name_convert='/Oracle/oracle/oradata/data/oradata/one','/Oracle/oracle/oradata/prime/one','/Oracle/oracle/oradata/data/oradata/two','/Oracle/oracle/oradata/prime/two','/Oracle/oracle/oradata/data/oradata/three','/Oracle/oracle/oradata/prime/three'
*.standby_archive_dest='/Oracle/oracle/admin/prime/arch'
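In the pfiles above, FAL_CLIENT, FAL_SERVER, and the service in log_archive_dest_2 refer to Oracle Net aliases named '152' and '153', which must resolve on both servers. A matching tnsnames.ora fragment might look like the sketch below; the host addresses, port, and service name are assumptions, not taken from the original configuration:

```
152 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.1.152)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = prime))
  )

153 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.1.153)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = prime))
  )
```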

Logminer in Oracle 10g

Introduction:
=============

LogMiner runs in an Oracle instance with the database either mounted or
unmounted. LogMiner uses a dictionary file, which is a special file that
indicates the database that created it as well as the time the file was
created. The dictionary file is not required, but is recommended. Without a
dictionary file, the equivalent SQL statements will use Oracle internal object
IDs for the object name and present column values as hex data.

For example, instead of the SQL statement:

    INSERT INTO emp(name, salary) VALUES ('John Doe', 50000);

LogMiner will display:

    insert into Object#2581(col#1, col#2) values (hextoraw('4a6f686e20446f65'),
    hextoraw('c306'));

Create a dictionary file by mounting a database and then extracting dictionary
information into an external file.

You must create the dictionary file from the same database that generated the
log files you want to analyze. Once created, you can use the dictionary file
to analyze redo logs.

When creating the dictionary, specify the following:

    * DICTIONARY_FILENAME to name the dictionary file.
    * DICTIONARY_LOCATION to specify the location of the file.

LogMiner analyzes redo log files from any version 8.0.x and later Oracle
database that uses the same database characterset and is running on the same
hardware as the analyzing instance.

Note: The LogMiner packages are owned by the SYS schema.  Therefore, if you
      are not connected as user SYS, you must include SYS in your call.  For
      example: 

          EXECUTE SYS.DBMS_LOGMNR_D.BUILD

To Create a Dictionary on the Oracle Database (9i and later)
====================================================
In 9i and later releases, the dictionary can be extracted either to a flat file or into the redo logs.

For example, enter the following to create the file dictionary.ora in /oracle/database:

1. Make sure to specify an existing directory that the PL/SQL procedure has
   permission to write to, by setting the initialization parameter
   UTL_FILE_DIR in the init.ora.
   For example, set the following to use /oracle/database:

     UTL_FILE_DIR =/oracle/database

   Be sure to shut down and restart the instance after adding UTL_FILE_DIR to the init.ora or spfile.

   If you do not reference this parameter, the procedure will fail.

2. Use SQL*Plus to mount and then open the database whose files you want to
   analyze. For example, enter:

     STARTUP

3. Execute the PL/SQL procedure DBMS_LOGMNR_D.BUILD. Specify both a file name
   for the dictionary and a directory pathname for the file. This procedure
   creates the dictionary file, which you should use to analyze log files.
   For example, enter the following to create file dictionary.ora in
   /oracle/database/:

Example:
-------------
SQL> EXECUTE DBMS_LOGMNR_D.BUILD('dictionary.ora', -
  2 '/oracle/database/', -
  3 OPTIONS => DBMS_LOGMNR_D.STORE_IN_FLAT_FILE);



To extract the database dictionary information to the redo logs, use the DBMS_LOGMNR_D.BUILD procedure with the STORE_IN_REDO_LOGS option and do not specify a filename or location.

Example:
-------------
SQL> EXECUTE DBMS_LOGMNR_D.BUILD ( -
  2 OPTIONS=>DBMS_LOGMNR_D.STORE_IN_REDO_LOGS);


Please note that to extract a dictionary to the redo logs, the database must be open, in ARCHIVELOG mode, and archiving must be enabled.
Also, to make sure that the redo logs contain information that will provide the most value to you, enable at least minimal supplemental logging:

SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;

Specifying Redo Logs for Analysis
=================================

Once you have created a dictionary file, you can begin analyzing redo logs.
Your first step is to specify the log files that you want to analyze using
the ADD_LOGFILE procedure. Use the following constants:

    * NEW to create a new list.
    * ADDFILE to add redo logs to a list.
    * REMOVEFILE to remove redo logs from the list.

To Use LogMiner:

1. Use SQL*Plus to start an Oracle instance, with the database either mounted
   or unmounted.

   For example, enter:

     STARTUP

2. Create a list of logs by specifying the NEW option when executing the
   DBMS_LOGMNR.ADD_LOGFILE procedure. For example, enter the following to
   specify /oracle/logs/log1.f:

     (INCLUDE THE '-' WHEN ENTERING THE FOLLOWING)

     EXECUTE DBMS_LOGMNR.ADD_LOGFILE( -
     LOGFILENAME => '/oracle/logs/log1.f', -
     OPTIONS => dbms_logmnr.NEW);

3. If desired, add more logs by specifying the ADDFILE option.

   For example, enter the following to add /oracle/logs/log2.f:

     (INCLUDE THE '-' WHEN ENTERING THE FOLLOWING)

     EXECUTE DBMS_LOGMNR.ADD_LOGFILE( -
     LOGFILENAME => '/oracle/logs/log2.f', -
     OPTIONS => dbms_logmnr.ADDFILE);

4. If desired, remove logs by specifying the REMOVEFILE option.

   For example, enter the following to remove /oracle/logs/log2.f:

     (INCLUDE THE '-' WHEN ENTERING THE FOLLOWING)

     EXECUTE DBMS_LOGMNR.ADD_LOGFILE( -
     LOGFILENAME => '/oracle/logs/log2.f', -
     OPTIONS => dbms_logmnr.REMOVEFILE);


Using LogMiner:
===============

Once you have created a dictionary file and specified which logs to analyze,
you can start LogMiner and begin your analysis. Use the following options to
narrow the range of your search at start time:


This option             Specifies
===========             =========

STARTSCN                The beginning of an SCN range.
ENDSCN                  The end of an SCN range.
STARTTIME               The beginning of a time interval.
ENDTIME                 The end of a time interval.
DICTFILENAME            The name of the dictionary file.


Once you have started LogMiner, you can make use of the following data
dictionary views for analysis:

This view               Displays information about
===================     ==================================================

V$LOGMNR_DICTIONARY     The dictionary file in use.
V$LOGMNR_PARAMETERS     Current parameter settings for LogMiner.
V$LOGMNR_LOGS           Which redo log files are being analyzed.
V$LOGMNR_CONTENTS       The contents of the redo log files being analyzed.


To Use LogMiner:
================

1. Issue the DBMS_LOGMNR.START_LOGMNR procedure to start the LogMiner utility.

   For example, to start LogMiner using /oracle/dictionary.ora, issue:

     (INCLUDE THE '-' WHEN ENTERING THE FOLLOWING)

     EXECUTE DBMS_LOGMNR.START_LOGMNR( -
     DICTFILENAME =>'/oracle/dictionary.ora');

   Optionally, set the STARTTIME and ENDTIME parameters to filter data by time.
   Note that the procedure expects date values: use the TO_DATE function to
   specify date and time, as in this example:

     (INCLUDE THE '-' WHEN ENTERING THE FOLLOWING)

     EXECUTE DBMS_LOGMNR.START_LOGMNR( -
     DICTFILENAME => '/oracle/dictionary.ora', -
     STARTTIME => to_date('01-Jan-1998 08:30:00', 'DD-MON-YYYY HH:MI:SS'), -
     ENDTIME => to_date('01-Jan-1998 08:45:00', 'DD-MON-YYYY HH:MI:SS'));

   Use the STARTSCN and ENDSCN parameters to filter data by SCN, as in this
   example:

     (INCLUDE THE '-' WHEN ENTERING THE FOLLOWING)

     EXECUTE DBMS_LOGMNR.START_LOGMNR( -
     DICTFILENAME => '/oracle/dictionary.ora', -
     STARTSCN => 100, -
     ENDSCN => 150);

2. View the output via the V$LOGMNR_CONTENTS view. LogMiner returns all rows
   in SCN order, which is the same order applied in media recovery.

   For example, the following query lists information about operations:

     SELECT operation, sql_redo FROM v$logmnr_contents;

     OPERATION SQL_REDO              
     --------- ----------------------------------------------------------
     INTERNAL     
     INTERNAL     
     START     set transaction read write; 
     UPDATE    update SYS.UNDO$ set NAME = 'RS0', USER# = 1, FILE# = 1, BLOCK# = 2450, SCNBAS =
     COMMIT    commit;                                                                        
     START     set transaction read write;                   
     UPDATE    update SYS.UNDO$ set NAME = 'RS0', USER# = 1, FILE# = 1, BLOCK# = 2450, SCNBAS =
     COMMIT    commit;                          
     START     set transaction read write;    
     UPDATE    update SYS.UNDO$ set NAME = 'RS0', USER# = 1, FILE# = 1, BLOCK# = 2450, SCNBAS =
     COMMIT    commit;                                                                         
     11 rows selected.


Analyzing Archived Redo Log Files from Other Databases:
=======================================================

You can run LogMiner on one database instance while analyzing redo log
files generated by a different database. To analyze archived redo log
files from other databases, LogMiner must:

* Access a dictionary file that is both created from the same database as the
  redo log files and created with the same database character set.
* Run on the same hardware platform that generated the log files, although it
  does not need to be on the same system.
* Use redo log files that can be applied for recovery from Oracle version 8.0
  and later.