Tuesday, 7 December 2010

Key points to consider for Oracle RAC/Cluster in Linux

1. Ensure the maximum open files limit (nofile) is set to 65k for Oracle Clusterware.
Reference
Oracle Bug # 8429716
Oracle Metalink Note id # 858279.1

If the limit is too low, you may see timeout errors like the ones below in the crsd.log file. A quick sketch of checking and raising the limit follows the log excerpt.
Important : Oracle Metalink Note id # 732086.1 & Bug 6196746 may misguide you here.

2010-12-07 03:14:54.776: [ CRSEVT][1494178112] CAAMonitorHandler :: 0:Action Script /u01/crs/oracle/product/crs/bin/racgwrap(check) timed out for

ora.dlp34db.vip! (timeout=60)
2010-12-07 03:14:54.776: [ CRSAPP][1494178112] CheckResource error for ora.dlp34db.vip error code = -2
2010-12-07 03:14:54.846: [ CRSEVT][1492076864] CAAMonitorHandler :: 0:Could not join /u01/crs/oracle/product/crs/bin/racgwrap(check)
category: 1234, operation: scls_process_join, loc: childcrash, OS error: 0, other: Abnormal termination of the child
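
A minimal sketch of checking and raising the limit, assuming the clusterware and database run as the "oracle" OS user and a standard /etc/security/limits.conf setup (adjust the user name to your environment); the clusterware stack needs a restart for the new value to take effect.

Check the current soft limit for the oracle user:

su - oracle -c 'ulimit -n'

Raise the soft and hard limits by adding the following to /etc/security/limits.conf (values follow the 65k recommendation above):

oracle soft nofile 65536
oracle hard nofile 65536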


2. Increase DIAGWAIT. This gives the clusterware enough time to dump diagnostic information before a node eviction,
which in turn helps in troubleshooting the root cause of the eviction.
Reference
Oracle Metalink Note id # 559365.1

crsctl set css diagwait 13 -force
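
You can verify the current setting afterwards (on 10g/11.1 clusterware) with:

crsctl get css diagwait

Note that, per the referenced note, the clusterware stack is normally shut down on all nodes before changing this value.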

3. Ensure HUGEPAGES are configured appropriately (a quick check follows the note list below).
Oracle Metalink Note Reference
Shell Script to Calculate Values Recommended Linux HugePages / HugeTLB Configuration [ID 401749.1]
HugePages on 64-bit Linux [ID 361468.1]
HugePages on Linux: What It Is... and What It Is Not... [ID 361323.1]
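
A quick way to see whether HugePages are reserved and actually in use (standard Linux interfaces, independent of the notes above):

grep -i hugepages /proc/meminfo
sysctl vm.nr_hugepages

The first shows the allocation and current usage; the second shows the kernel parameter (persisted in /etc/sysctl.conf) that reserves the pages.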

4. Additional settings to capture server-level core/crash information when diagnosing a server crash caused by Oracle Clusterware (a quick verification sketch follows the list).
a. Enable CSSD core by following Oracle Metalink Note 605449.1
b. Configure kdump by following Oracle Metalink Note 452067.1
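
A quick way to confirm both are in place, assuming RHEL-style service scripts (commands may differ on your distribution):

chkconfig --list kdump
service kdump status
su - oracle -c 'ulimit -c'

The first two confirm kdump is enabled at boot and currently running; the last confirms core dumps are not limited to zero for the clusterware owner.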


SQLID level Tracing Feature in 11g

From Oracle 11g onwards you can trace all sessions, filtered by a particular SQL ID or a set of SQL IDs.


For example

alter session/system set events 'sql_trace [sql:g3yc1js3g2689|7ujay4u33g337] wait=true|false, bind=true|false, plan_stat=never|first_execution|all_executions';

(Alternatively, level 12 can be used in place of the wait/bind options to capture both waits and binds.)

alter session/system set events 'sql_trace [sql:g3yc1js3g2689|7ujay4u33g337] off';

A trace file will be generated whenever that particular SQL ID is executed.
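
For example, to trace a single SQL ID (one of the hypothetical SQL IDs from the syntax above) across the whole instance, capturing waits and binds, and then switch it off:

alter system set events 'sql_trace [sql:g3yc1js3g2689] wait=true, bind=true';

alter system set events 'sql_trace [sql:g3yc1js3g2689] off';

The trace files appear in the usual trace destination (user_dump_dest, or the ADR trace directory in 11g).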

Keywords : SQL ID, sqlid trace

Monday, 22 February 2010

Solution for long running query against DBA_EXTENTS to find the corrupted segment

Problem

You are running a query against DBA_EXTENTS to find the segment information (most probably to identify a
CORRUPTED segment/object) for a particular FILE_ID and BLOCK_ID.

Query might look something like this.

select owner, segment_name, segment_type
from dba_extents
where file_id = &file_id
and &block_id between block_id and (block_id + blocks - 1);

(&file_id and &block_id are the file number and block number you are investigating.)

This query might run longer and sometimes you may lose your patience waiting for it.


Solution

  • Dump the block and extract the object id using the command below

alter system dump datafile <file_id> block <block_id>;
Eg. alter system dump datafile 1811 block 20000;

FYI, this command generates a trace file in the USER_DUMP_DEST directory.

  • Grep the object number from the trace file.

Locate the trace file and grep for the object id, which is shown as a HEX value.

grep "seg/obj" op01csdb01_ora_13143.trc

seg/obj: 0xe024a2 csc: 0x878.a5583a53 itc: 50 flg: E typ: 1 - DATA

  • Find the object information from DBA_OBJECTS

select owner, object_id, data_object_id, object_name, subobject_name
from dba_objects
where object_id = to_number('E024A2', 'xxxxxx')
or data_object_id = to_number('E024A2', 'xxxxxx');

Note : the count of 'x' characters in the format mask should equal the number of characters in the source HEX string.

Eg. E024A2 is a 6-character string, hence I used 'xxxxxx'.
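
A quick way to sanity-check the conversion before running the DBA_OBJECTS query (using the HEX value from the example above):

select to_number('E024A2', 'xxxxxx') as object_id from dual;

This returns 14689442, the decimal object/data_object id to look for.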

Keywords : query; dba_extents; long; slow; corrupt ; segment; object

Monday, 25 January 2010

How to create duplicate database from a primary when the FULL backup happens out of standby

Scenario

1. You want to create a DUPLICATE database from the primary PRODUCTION database to perform PITR. The duplicate/auxiliary instance is created on a different host.

2. The FULL/DYNAMIC backup of the primary production database is taken from the STANDBY.

Assumption

1. A backup of the production database controlfile exists.

Facts

In this scenario, I used the following commands to perform PITR recovery of the RETEK database.
1. Production database
a. op01rtdb01/2
b. primnode1/primnode2
2. Standby database
a. Op99rtdb01
b. standbynode1
c. Policy : rdc_oradb_qa
d. Netbackup master server : nbmstrnode1

Note : FULL database backup is performed out of op99rtdb01

3. Duplicate/Auxiliary database
a. op01drdb01
b. auxnode1

Commands

The general commands for this sort of PITR are listed below; directory creation and other preparatory steps are not covered here.
The key steps are:

1. Login to auxnode1 as “oracle” unix user

2. . oraenv (type op01drdb01)

3. Startup nomount

rman

connect target SYS/@op01rtdb01

Note : I am connecting only to the primary production database, though the FULL backup happens out of OP99RTDB01.

connect catalog rman10g/@op01emdb02

CONNECT AUXILIARY /

run
{
allocate channel t1 type 'SBT_TAPE' parms 'ENV=(NB_ORA_CLIENT=standbynode1,NB_ORA_CLASS=rdc_oradb_qa,NB_ORA_SERV=nbmstrnode1)';
allocate auxiliary channel t2 type 'SBT_TAPE' parms 'ENV=(NB_ORA_CLIENT=standbynode1,NB_ORA_CLASS=rdc_oradb_qa,NB_ORA_SERV=nbmstrnode1)';
allocate auxiliary channel t3 type 'SBT_TAPE' parms 'ENV=(NB_ORA_CLIENT=standbynode1,NB_ORA_CLASS=rdc_oradb_qa,NB_ORA_SERV=nbmstrnode1)';
allocate auxiliary channel t4 type 'SBT_TAPE' parms 'ENV=(NB_ORA_CLIENT=standbynode1,NB_ORA_CLASS=rdc_oradb_qa,NB_ORA_SERV=nbmstrnode1)';
set until time "to_date('2009-10-09 02:00:00','YYYY-MM-DD hh24:mi:ss')";
DUPLICATE TARGET DATABASE TO OP01DRDB
db_file_name_convert=('/u01/app/oracle/admin/oralink/op01rtdb/','/u01/app/oracle/admin/oralink/op01drdb/')
skip tablespace ARIDAT03,RTKIDX06,RTKIDXEX01,RTKIDXEX03,OUTLN01,RTKIDX02,RTKDAT02,ARIDAT02,RTKIDX08,RTKDATEX03,RTKDAT03,RTKIDX04,RTKIDX01,RTKDAT04,PMIDAT01,TOOLS01,ARIIDX02,
ARIIDX03,AUDDAT01,ARIDAT01,RTKDAT01,RTKDAT06,RTKIDX03,RTKIDXEX02,RTKIDX07,ARIIDX01,RTKIDX05,RTKDAT07,USERS01,RTKDAT05,RTKDATEX02,RTKDAT08
LOGFILE
GROUP 1 ('/u01/app/oracle/admin/oralink/op01drdb/op01drdb01_0101a.rdo','/u01/app/oracle/admin/oralink/op01drdb/op01drdb01_0101b.rdo') SIZE 511M,
GROUP 2 ('/u01/app/oracle/admin/oralink/op01drdb/op01drdb01_0201a.rdo','/u01/app/oracle/admin/oralink/op01drdb/op01drdb01_0201b.rdo') SIZE 511M,
GROUP 3 ('/u01/app/oracle/admin/oralink/op01drdb/op01drdb01_0301a.rdo','/u01/app/oracle/admin/oralink/op01drdb/op01drdb01_0301b.rdo') SIZE 511M REUSE;
}

Note : To get the values of the NB_ORA_CLIENT, NB_ORA_CLASS and NB_ORA_SERV channel parameters, go to op99rtdb01/standbynode1:/usr/openv/netbackup/logs/user_ops/dbext/logs,
grep for rdc_oradb_qa and cat any of the matching files.

standbynode1:[op99rtdb01]:/usr/openv/netbackup/logs/user_ops/dbext/logs>more 709.0.1264176793
Backup started Fri Jan 22 10:13:13 2010


10:14:11 INF - Backup id = standbynode1_1264176842
10:14:11 INF - Backup time = 1264176842
10:14:11 INF - Policy name = rdc_oradb_qa
10:14:11 INF - Snapshot = 0
10:14:12 INF - Frozen image = 0
10:14:12 INF - Backup copy = 0
10:14:12 INF - Master server = nbmstrnode1
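
Once the DUPLICATE completes, a quick sanity check on the auxiliary instance (a suggestion, not part of the original run block; run from SQL*Plus on auxnode1):

select name, dbid, open_mode, resetlogs_time from v$database;

RESETLOGS_TIME should line up with the SET UNTIL TIME used in the run block.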

Keywords : create ; auxiliary ; duplicate ; standby ; dataguard ; full ; backup

Thursday, 13 August 2009

Tune your Query using Oracle SQL Profile

SQL Profiles are preferred over stored OUTLINEs as they are simpler to use and manage.
You may find the steps below useful to get started with SQL Profiles. I used them to tune a single query, generating and applying a SQL Profile to improve its execution plan.

Note : Here I am generating a SQL Profile for one single SQL statement. If you want to cover multiple SQL statements, you need to create a SQL Tuning Set (STS) first and use it as the tuning-task input (a minimal sketch follows the list below):
a. Create a new STS
b. Load the STS
c. Select the STS to review the contents
d. Update the STS if necessary
e. Create a tuning task with the STS as input
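
A minimal sketch of steps (a) and (b), assuming a hypothetical STS named my_sts loaded from the cursor cache with statements parsed by the HR schema (adjust the filter to your workload):

DECLARE
sts_cursor DBMS_SQLTUNE.SQLSET_CURSOR;
BEGIN
-- (a) create a new, empty SQL Tuning Set
DBMS_SQLTUNE.CREATE_SQLSET(
sqlset_name => 'my_sts',
description => 'Statements to tune');
-- (b) load it from the cursor cache, filtered on the parsing schema
OPEN sts_cursor FOR
SELECT VALUE(p)
FROM TABLE(DBMS_SQLTUNE.SELECT_CURSOR_CACHE(
basic_filter => 'parsing_schema_name = ''HR''')) p;
DBMS_SQLTUNE.LOAD_SQLSET(
sqlset_name => 'my_sts',
populate_cursor => sts_cursor);
END;

The resulting STS can then be fed to the advisor by calling DBMS_SQLTUNE.CREATE_TUNING_TASK with sqlset_name => 'my_sts' instead of a single sql_text or sql_id.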

Overall Logical Steps to create a SQL Profile through SQL Tuning Advisor

1) Use the CREATE_TUNING_TASK function to create a tuning task for a single statement or a group of SQL statements.
2) The EXECUTE_TUNING_TASK procedure executes a previously created tuning task.
3) The REPORT_TUNING_TASK function displays the results of a tuning task.
4) Use the SCRIPT_TUNING_TASK function to create a SQL*Plus script which can then be executed to implement a set of Advisor recommendations.

1. Creating a SQL Tuning Task

Method 1

DECLARE
my_task_name VARCHAR2(30);
my_sqltext CLOB;
BEGIN
my_sqltext := 'SELECT /*+ ORDERED */ * ' || 'FROM employees e, locations l, departments d ' || 'WHERE e.department_id = d.department_id AND ' || 'l.location_id = d.location_id AND ' || 'e.employee_id < :bnd';
my_task_name := DBMS_SQLTUNE.CREATE_TUNING_TASK(
sql_text => my_sqltext,
bind_list => sql_binds(anydata.ConvertNumber(100)),
user_name => 'HR',
scope => 'COMPREHENSIVE',
time_limit => 1800,
task_name => 'my_sql_tuning_task',
description => 'Task to tune a query on a specified employee');
END;


Method 2

DECLARE
my_task_name VARCHAR2(30);
my_sqlid varchar2(30);
my_planhashvalue number;
BEGIN
my_sqlid := 'a59zbsyutxfg1';
my_planhashvalue := 343073668;
my_task_name := DBMS_SQLTUNE.CREATE_TUNING_TASK(

sql_id => my_sqlid,
plan_hash_value => my_planhashvalue,
scope => 'COMPREHENSIVE',
task_name => 'my_sql_tuning_task',
time_limit => 1800,
description => 'Task to tune a query');
END;


Method 3

DECLARE
my_task_name VARCHAR2(30);
my_beginsnap number;
my_endsnap number;
my_sqlid varchar2(30);
my_planhashvalue number;
BEGIN
my_sqlid := 'ffpyzcn9809s9';
my_planhashvalue := 2377894994;
my_beginsnap := 20134;
my_endsnap := 20136;
my_task_name := DBMS_SQLTUNE.CREATE_TUNING_TASK(
begin_snap => my_beginsnap,
end_snap => my_endsnap,
sql_id => my_sqlid,
plan_hash_value => my_planhashvalue,
scope => 'COMPREHENSIVE',
task_name => 'my_sql_tuning_task',
time_limit => 1800,
description => 'Task to tune a query');
END;


2. Executing a SQL Tuning Task

BEGIN
DBMS_SQLTUNE.EXECUTE_TUNING_TASK( task_name => 'my_sql_tuning_task' );
END;


3. Checking the Status of a SQL Tuning Task

SELECT status FROM USER_ADVISOR_TASKS WHERE task_name = 'my_sql_tuning_task';

4. Checking the Progress of the SQL Tuning Advisor

SELECT sofar, totalwork
FROM V$ADVISOR_PROGRESS
WHERE task_name = 'my_sql_tuning_task';


5. Displaying the Results of a SQL Tuning Task

SET LONG 9999999
SET LONGCHUNKSIZE 99999999
SET LINESIZE 200
SELECT DBMS_SQLTUNE.REPORT_TUNING_TASK( 'my_sql_tuning_task') FROM DUAL;

6. Accept SQL Profile

The above report may recommend a SQL Profile if the advisor finds one beneficial. It can then be accepted using:

execute dbms_sqltune.accept_sql_profile(task_name => 'my_sql_tuning_task', replace => TRUE);

7. How to manage (enable/disable) a SQL Profile

BEGIN
DBMS_SQLTUNE.ALTER_SQL_PROFILE(
name =>'SYS_SQLPROF_014845b914845b98',
attribute_name => 'STATUS',
value => 'DISABLED');
END;


BEGIN DBMS_SQLTUNE.ALTER_SQL_PROFILE(
name =>'SYS_SQLPROF_01487a4c1487a4cb',
attribute_name => 'STATUS',
value => 'ENABLED');
END;
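
To see which SQL Profiles exist, their status and the statements they apply to, query DBA_SQL_PROFILES:

select name, status, created, sql_text
from dba_sql_profiles
order by created;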



Keywords : Oracle; SQL profile ;sqlprofile ; tune; query

Saturday, 1 August 2009

Quick Commands to copy DATAFILE or ARCHIVELOG file from ASM to unix filesystem

In Oracle 10g,

RMAN Solution

I. To Copy Datafile from ASM to filesystem

run{
allocate channel c1 type disk;
copy datafile '+DATA01/datafile/dat01.577.687959027' to '/tmp/dat01.577.687959027';
}

II. To Copy archive log file from ASM to filesystem

backup as copy archivelog from sequence 23676 until sequence 23680 format '/tmp/archivelog/informatica%h_%e_resetlogs.arc';
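
Before running the copy, you may want to confirm that the sequence range is still registered; a quick check against V$ARCHIVED_LOG (thread 1 assumed here):

select sequence#, name, completion_time
from v$archived_log
where thread# = 1
and sequence# between 23676 and 23680;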

PLSQL Solution using DBMS_FILE_TRANSFER

Create a directory object to point the source

create or replace directory SDIR as '+DATA01/datafile/';

Note : In my example, the database datafile which I wanted to copy from ASM to the filesystem resided in '+DATA01/datafile/'

Create a directory object to point the destination

create or replace directory DDIR as '/tmp';

Copy the file

BEGIN
dbms_file_transfer.copy_file(source_directory_object =>'SDIR',
source_file_name => 'dat01.577.687959027',
destination_directory_object => 'DDIR',
destination_file_name => 'dat01.dbf');
END;

In Oracle 11g,

Apart from the above commands (which still work), the ASMCMD command-line utility has been enhanced with a "cp" copy command.

ASMCMD[+]>cp +DATA01/datafile/dat01.577.687959027 /tmp/dat01.dbf

Keywords : copy; file; asm; filesystem; datafile; archive; archivelog

Monday, 22 June 2009

Unit size in DBA_HIST_TBSPC_SPACE_USAGE

Question : What is the unit of the below columns in DBA_HIST_TBSPC_SPACE_USAGE?
a. tablespace_size
b. tablespace_maxsize
c. tablespace_usedsize

Answer : They are represented in number of blocks. In other words, to get the actual size from this view you need to multiply the values by DBA_TABLESPACES.BLOCK_SIZE. For example, a tablespace_usedsize of 12800 blocks in a tablespace with an 8 KB block size is 12800 * 8192 bytes = 100 MB.

I used this simple query (you can tune it if you see it running slow) to get a weekly tablespace growth report.

select /*+ parallel(c, 5) */
to_char(c.BEGIN_INTERVAL_TIME,'MON'),
to_char(c.BEGIN_INTERVAL_TIME,'WW'),
b.name,
round(max(a.tablespace_size*d.block_size)/1024/1024) tblsize_in_MB,
round(max(a.tablespace_maxsize*d.block_size)/1024/1024) totalsize_in_MB,
round(max(a.tablespace_usedsize*d.block_size)/1024/1024) usedsize_in_MB
from DBA_HIST_TBSPC_SPACE_USAGE a, v$tablespace b,
DBA_HIST_SNAPSHOT c, dba_tablespaces d
where a.tablespace_id=b.ts#
and a.snap_id=c.snap_id
and b.name=d.tablespace_name
and b.name in ('USERS')
group by to_char(c.BEGIN_INTERVAL_TIME,'MON'),
to_char(c.BEGIN_INTERVAL_TIME,'WW'),
b.name
order by 3,2,1;


Keywords : DBA_HIST_TBSPC_SPACE_USAGE, size, unit