Thursday, 6 December 2012
Use TRACE=1FF0300 to trace an expdp/impdp (Data Pump) job
expdp system/manager DIRECTORY=new_dir DUMPFILE=expdp_parallel_%U.dmp LOGFILE=expdp_full.log FULL=y PARALLEL=15 TRACE=1FF0300
The log output would look like this (in this example, a failing import run):
Copyright (c) 1982, 2011, Oracle and/or its affiliates. All rights reserved.
;;;
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Data Mining and Real Application Testing options
ORA-31694: master table "SYSTEM"."SYS_IMPORT_FULL_01" failed to load/unload
ORA-02354: error in exporting/importing data
ORA-39774: parse of metadata stream failed with the following error:
LPX-00210: expected '<' instead of 'ÿ'
ORA-06512: at "SYS.DBMS_SYS_ERROR", line 86
ORA-06512: at "SYS.KUPW$WORKER", line 8936
----- PL/SQL Call Stack -----
object line object
handle number name
0xfb472c90 20462 package body SYS.KUPW$WORKER
0xfb472c90 9028 package body SYS.KUPW$WORKER
0xfb472c90 4519 package body SYS.KUPW$WORKER
0xfb472c90 9739 package body SYS.KUPW$WORKER
0xfb472c90 1775 package body SYS.KUPW$WORKER
0xfa49efb8 2 anonymous block
1FF0300 enables a full trace (both the worker processes and the master control process).
400300 traces only the worker process(es); 80300 traces only the Master Control Process (MCP).
- For a Master Control Process trace file, the generated file name is SID_dmNUMBER_process_id.trc
- For a Worker Process trace file, the generated file name is SID_dwNUMBER_process_id.trc
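As an illustration, the trace files are written to the BACKGROUND_DUMP_DEST directory and can be located as below (the SID, trace numbers, PIDs, and diag path here are hypothetical, assuming a default 11g diag layout):
SQL> show parameter background_dump_dest
$ ls -ltr /u01/app/oracle/diag/rdbms/orcl/ORCL/trace | grep -E 'dm00|dw0'
ORCL_dm00_12345.trc   <-- Master Control Process trace
ORCL_dw01_12346.trc   <-- Worker process trace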
Tuesday, 6 November 2012
Workflow Mailer is not coming up (Oracle R12)
Issue:
The Workflow Mailer went down; attempts to restart it failed and it would not come up.
Error:
at oracle.apps.fnd.wf.mailer.MailerUtils.isSocketConnectableNew(MailerUtils.java:300)
at oracle.apps.fnd.wf.mailer.SMTPUtils.isValidOutbound(SMTPUtils.java:522)
at oracle.apps.fnd.wf.mailer.Mailer.validateParameterValues(Mailer.java:1238)
at oracle.apps.fnd.cp.gsc.SvcComponent.performValidateParameterValues(SvcComponent.java:233)
at oracle.apps.fnd.cp.gsc.SvcComponent.start(SvcComponent.java:314)
at oracle.apps.fnd.cp.gsc.SvcComponentContainer.handleComponentEvent(SvcComponentContainer.java:2212)
at oracle.apps.fnd.cp.gsc.SvcComponentContainer.onBusinessEvent(SvcComponentContainer.java:301)
at oracle.apps.fnd.wf.bes.DispatchThread.run(DispatchThread.java:57)
Finding:
Autoconfig has changed the Outbound Server Name
Solution:
Set Outbound Server name to test-mac.ABC.com
Note: the Inbound Server Name is the same as the Outbound Server Name in our case.
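Since the stack trace fails inside isSocketConnectableNew/isValidOutbound, it is worth confirming basic SMTP connectivity to the outbound server before restarting the mailer. A quick manual check (host name from this case; port 25 assumed):
$ telnet test-mac.ABC.com 25
# a "220" banner from the SMTP server confirms the outbound host/port is reachable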
Sunday, 7 October 2012
Startup Shutdown Scripts in Oracle R12
Description of the EBS application services start/stop scripts
a) adstrtal.sh / adstpall.sh
Master scripts to start and stop all components/services of the middle (application) tier. They use the Service Control API to act on every service that is enabled in the context file (SID_HOSTNAME.xml / CONTEXT_NAME.xml): adstrtal.sh starts them all, adstpall.sh stops them all.
b) adalnctl.sh
Script to start/stop the apps listener (FNDFS and FNDSM services). This listener runs out of the 10.1.2 ORACLE_HOME (i.e. the Forms & Reports home); its listener.ora is in the $INST_TOP/apps/$CONTEXT_NAME/ora/10.1.2/network/admin directory.
c) adapcctl.sh
Script to start/stop the Web Server (Oracle HTTP Server). This script uses opmn (Oracle Process Manager and Notification server).
d) adcmctl.sh
Script to start/stop the Concurrent Manager (this script in turn calls startmgr.sh).
e) adformsctl.sh
Script to start/stop the Forms OC4J from the 10.1.3 Oracle_Home. This script also uses opmnctl.
f) adformsrvctl.sh
Script to start Forms in socket mode. The default Forms connect method in R12 is servlet.
g) adoacorectl.sh
Script to start/stop the oacore OC4J in the 10.1.3 Oracle_Home, again via opmnctl (similar to adapcctl and adformsctl).
h) adoafmctl.sh
Script to start/stop the oafm OC4J in the 10.1.3 Oracle_Home, also via opmnctl.
i) jtffmctl.sh
Script to start/stop the One-to-One Fulfillment server.
j) mwactl.sh
Script to start/stop the MWA telnet server (MWA = Mobile Web Applications).
Reference: Starting up AS10g services in an E-Business Suite Release 12 environment (Doc ID 743518.1)
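For reference, the master scripts are normally run as the application OS user from $ADMIN_SCRIPTS_HOME with the APPS credentials (the password below is a placeholder):
$ cd $ADMIN_SCRIPTS_HOME
$ adstrtal.sh apps/<apps_password>    # start all enabled services
$ adstpall.sh apps/<apps_password>    # stop all services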
Sunday, 16 September 2012
How to Start/Stop EBS Middle tier Services - EBS R12
Sometimes a DBA needs to bounce only the middle tier services. Below are the steps:
a) adoacorectl.sh stop
b) adapcctl.sh stop
Wait a few seconds
c) adapcctl.sh start
d) adoacorectl.sh start
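A minimal sketch of the same bounce as a one-off script (the 30-second pause and the status checks are my additions, not part of the original steps):
$ adoacorectl.sh stop
$ adapcctl.sh stop
$ sleep 30
$ adapcctl.sh start
$ adoacorectl.sh start
$ adapcctl.sh status; adoacorectl.sh status   # verify both services came back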
Sunday, 9 September 2012
Sequence of adstpall.sh and adstrtal.sh scripts - EBS R12
The best option is to use the adstpall.sh/adstrtal.sh scripts to stop and start the application. But if for any reason you want to do it manually, the sequence of scripts is as follows:
adstpall.sh Sequence
a) jtffmctl.sh
b) adcmctl.sh
c) adoafmctl.sh
d) adformsctl.sh
e) adoacorectl.sh
f) adapcctl.sh
g) adalnctl.sh
h) adopmnctl.sh
adstrtal.sh Sequence
a) adopmnctl.sh
b) adalnctl.sh
c) adapcctl.sh
d) adoacorectl.sh
e) adformsctl.sh
f) adoafmctl.sh
g) adcmctl.sh
h) jtffmctl.sh
Wednesday, 5 September 2012
ORA-24247: network access denied by access control list (ACL)
Issue:
ERROR at line 1:
ORA-24247: network access denied by access control list (ACL)
ORA-06512: at "SYS.UTL_TCP", line 17
ORA-06512: at "SYS.UTL_TCP", line 246
ORA-06512: at "SYS.UTL_SMTP", line 115
ORA-06512: at "SYS.UTL_SMTP", line 138
ORA-06512: at "XXX", line 36
ORA-06512: at line 1
Changes:
The problem occurred after migrating from 10g to 11g.
Findings:
SQL> select owner,object_name,object_type,status from dba_objects where object_name='MAILSERVER_ACL';
no rows selected
Solution:
Step 1: Connect to the database and create a helper procedure as given below:
SQL> set serveroutput on
SQL> create or replace procedure mailserver_acl(
aacl varchar2,
acomment varchar2,
aprincipal varchar2,
aisgrant boolean,
aprivilege varchar2,
aserver varchar2,
aport number)
is
begin
begin
DBMS_NETWORK_ACL_ADMIN.DROP_ACL(aacl);
dbms_output.put_line('ACL dropped.....');
exception
when others then
dbms_output.put_line('Error dropping ACL: '||aacl);
dbms_output.put_line(sqlerrm);
end;
begin
DBMS_NETWORK_ACL_ADMIN.CREATE_ACL(aacl,acomment,aprincipal,aisgrant,aprivilege);
dbms_output.put_line('ACL created.....');
exception
when others then
dbms_output.put_line('Error creating ACL: '||aacl);
dbms_output.put_line(sqlerrm);
end;
begin
DBMS_NETWORK_ACL_ADMIN.ASSIGN_ACL(aacl,aserver,aport);
dbms_output.put_line('ACL assigned.....');
exception
when others then
dbms_output.put_line('Error assigning ACL: '||aacl);
dbms_output.put_line(sqlerrm);
end;
commit;
dbms_output.put_line('ACL commited.....');
end;
/
Procedure created.
SQL> show errors
No errors.
Step 2: create an ACL as given below:
SQL> begin
mailserver_acl(
'mailserver_acl.xml',
'ACL for used Email Server to connect',
'TEST_USER', -----Username sending mail
TRUE,
'connect',
'abc-smtp-int.MAIL_SERVER.com', ---Mail Server
25); ---Port of mail server
end;
/
ACL dropped.....
ACL created.....
ACL assigned.....
ACL commited.....
PL/SQL procedure successfully completed.
Also make sure the mail server has an entry in the hosts file.
ACL Views
The DBA_NETWORK_ACLS, DBA_NETWORK_ACL_PRIVILEGES and USER_NETWORK_ACL_PRIVILEGES views display the current ACL settings.
The DBA_NETWORK_ACLS view displays information about network and ACL assignments.
COLUMN host FORMAT A30
COLUMN acl FORMAT A30
SELECT host, lower_port, upper_port, acl FROM dba_network_acls;
HOST LOWER_PORT UPPER_PORT ACL
------------------------------ ---------- ---------- ------------------------------
10.1.10.* /sys/acls/test_acl_file.xml
192.168.2.3 80 80 /sys/acls/test_acl_file.xml
2 rows selected.
The DBA_NETWORK_ACL_PRIVILEGES view displays information about privileges associated with the ACL.
COLUMN acl FORMAT A30
COLUMN principal FORMAT A30
SELECT acl,principal,privilege,is_grant,
TO_CHAR(start_date, 'DD-MON-YYYY') AS start_date,
TO_CHAR(end_date, 'DD-MON-YYYY') AS end_date
FROM dba_network_acl_privileges;
ACL PRINCIPAL PRIVILE IS_GR START_DATE END_DATE
------------------------------ ------------------------------ ------- ----- ----------- -----------
/sys/acls/test_acl_file.xml TEST1 connect true 02-APR-2008
/sys/acls/test_acl_file.xml TEST2 connect false
2 rows selected.
The USER_NETWORK_ACL_PRIVILEGES view displays the current user's network ACL settings.
conn test1/test1@db11g
COLUMN host FORMAT A30
SELECT host, lower_port, upper_port, privilege, status
FROM user_network_acl_privileges;
HOST LOWER_PORT UPPER_PORT PRIVILE STATUS
------------------------------ ---------- ---------- ------- -------
10.1.10.* connect GRANTED
192.168.2.3 80 80 connect GRANTED
2 rows selected.
conn test2/test2@db11g
COLUMN host FORMAT A30
SELECT host, lower_port, upper_port, privilege, status
FROM user_network_acl_privileges;
HOST LOWER_PORT UPPER_PORT PRIVILE STATUS
------------------------------ ---------- ---------- ------- -------
10.1.10.* connect DENIED
192.168.2.3 80 80 connect DENIED
2 rows selected.
Checking Privileges
In addition to the ACL views, privileges can be checked using the CHECK_PRIVILEGE and CHECK_PRIVILEGE_ACLID functions of the DBMS_NETWORK_ACL_ADMIN package.
conn sys/password@db11g AS SYSDBA
SELECT DECODE(
DBMS_NETWORK_ACL_ADMIN.check_privilege('test_acl_file.xml', 'TEST1', 'connect'),
1, 'GRANTED', 0, 'DENIED', NULL) privilege
FROM dual;
PRIVILE
-------
GRANTED
1 row selected.
COLUMN acl FORMAT A30
COLUMN host FORMAT A30
SELECT acl,host,
DECODE(DBMS_NETWORK_ACL_ADMIN.check_privilege_aclid(aclid, 'TEST2', 'connect'),
1, 'GRANTED', 0, 'DENIED', NULL) privilege
FROM dba_network_acls;
PRIVILE
-------
DENIED
Monday, 3 September 2012
TNS-12557 TNS-12560 TNS-00527
Issue:
Not able to start the listener; it fails with the errors below.
Started with pid=15898
TNS-01150: The address of the specified listener name is incorrect
TNSLSNR for HPUX: Version 10.2.0.3.0 - Production on 17-AUG-2012 09:35:49
Copyright (c) 1991, 2006, Oracle. All rights reserved.
System parameter file is /usr/oracle102/network/admin/listener.ora
Log messages written to /usr/oracle102/network/log/listener.log
Trace information written to /usr/oracle102/network/trace/listener.trc
Trace level is currently 0
Started with pid=15966
Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=10.11.50.110)(PORT=1521)))
Error listening on: (ADDRESS=(PROTOCOL=ipc)(PARTIAL=yes)(QUEUESIZE=1))
No longer listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=10.1.50.226)(PORT=1521)))
TNS-12557: TNS:protocol adapter not loadable
TNS-12560: TNS:protocol adapter error
TNS-00527: Protocol Adapter not loadable
Findings:
The listener.log shows the same message.
Enable tracing on the listener
--------------------------------------
Add the entries below to listener.ora and try to start the listener.
TRACE_LEVEL_LISTENER = 16
TRACE_FILE_LISTENER = LIST_01.trc
TRACE_DIRECTORY_LISTENER = /oracle
TRACE_TIMESTAMP_LISTENER = ON
DIAG_ADR_ENABLED_LISTENER = OFF <== this parameter is required on 11g server versions in order to create a listener trace file
Issue verified from the listener trace :
--------------------------------------------------
[17-AUG-2012 11:17:27:333] nsgllsn: DH supported/allowed for the endpoint
[17-AUG-2012 11:17:27:333] nsgllsn: DH enabled for the endpoint
[17-AUG-2012 11:17:27:333] nsglhins: entry
[17-AUG-2012 11:17:27:333] nsglhins: exit
[17-AUG-2012 11:17:27:333] nslisten: entry
[17-AUG-2012 11:17:27:333] nsc2addr: entry
[17-AUG-2012 11:17:27:333] nsc2addr: (ADDRESS=(PROTOCOL=ipc)(PARTIAL=yes)(QUEUESIZE=1))
[17-AUG-2012 11:17:27:333] sntuscrt: entry
[17-AUG-2012 11:17:27:333] sntuscrt: illegal permission
[17-AUG-2012 11:17:27:333] sntuscrt: exit
[17-AUG-2012 11:17:27:333] nserror: entry
[17-AUG-2012 11:17:27:333] nserror: nsres: id=0, op=78, ns=12557, ns2=12560; nt[0]=527, nt[1]=515, nt[2]=-1; ora[0]=0, ora[1]=0, ora[2]=0
[17-AUG-2012 11:17:27:333] nsc2addr: error exit
[17-AUG-2012 11:17:27:333] nslisten: error exit
[17-AUG-2012 11:17:27:333] nsglhfre: entry
[17-AUG-2012 11:17:27:333] nsglhrem: entry
[17-AUG-2012 11:17:27:333] nsglhrem: entry
[17-AUG-2012 11:17:27:333] nsglhrem: exit
[17-AUG-2012 11:17:27:333] nsglhfre: Terminating listening endpoint: (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=prod.corp.abc.co.in)(PORT=1521)))
[17-AUG-2012 11:17:27:333] nsdisc: entry
[17-AUG-2012 11:17:27:333] nsclose: entry
[17-AUG-2012 11:17:27:333] nstimarmed: entry
[17-AUG-2012 11:17:27:333] nstimarmed: no timer allocated
[17-AUG-2012 11:17:27:333] nstimarmed: normal exit
[17-AUG-2012 11:17:27:333] nttctl: entry
[17-AUG-2012 11:17:27:333] nttctl: entry
[17-AUG-2012 11:17:27:333] nsdo: entry
[17-AUG-2012 11:17:27:333] nsdo: cid=1, opcode=98, *bl=0, *what=0, uflgs=0x40, cflgs=0x2
[17-AUG-2012 11:17:27:333] nsdo: rank=64, nsctxrnk=0
[17-AUG-2012 11:17:27:333] nsdo: nsctx: state=7, flg=0x4002, mvd=0
[17-AUG-2012 11:17:27:333] nsevunreg: entry
[17-AUG-2012 11:17:27:333] nsevunreg: cid=1, sgt=0, rdm=0
The listener trace shows the following error just before the failure:
[17-AUG-2012 11:17:27:333] sntuscrt: illegal permission ====> a folder/socket-file permission issue.
Cause
1) Ensure that /tmp/.oracle or /var/tmp/.oracle directory exists.
2) Confirm that the user who is trying to start the listener has adequate read and write permissions on the directory specified above. The permissions should be 777.
3) If the /tmp directory has reached full capacity, this would cause the listener to fail to write the socket files.
To implement the solution, please use the following example:
1. cd /var/tmp
2. Check whether the .oracle directory exists:
cd .oracle
3. If the directory does not exist, request the System Administrator create the directory and set the ownership as root:root with the permissions set to 01777
mkdir /var/tmp/.oracle
chmod 01777 /var/tmp/.oracle
chown root /var/tmp/.oracle
chgrp root /var/tmp/.oracle
4. Next, try starting the TNS Listener using the 'lsnrctl start <listener_name>' command.
If still the issue persists...
Check that the oracle user has read/write permission on /tmp. Oracle writes socket files under /tmp/.oracle and /var/tmp/.oracle, and the "illegal permission" error in the listener trace means exactly that it cannot write there. We also confirmed that the oracle user did not have read/write permission on /tmp.
Grant read/write permission on /tmp to the oracle user.
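A quick way to verify the socket directories and free space before restarting (paths from the note above; the write test is illustrative):
$ ls -ld /tmp /tmp/.oracle /var/tmp/.oracle
$ touch /tmp/.oracle/perm_test.$$ && rm /tmp/.oracle/perm_test.$$   # confirms the oracle user can write
$ df -k /tmp                                                        # confirm /tmp is not full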
Saturday, 1 September 2012
How to release undo space in oracle
Issue:
The database is not able to release UNDO space.
Findings:
a) Undo tablespace always shows 100% utilization
b) There is No error in the alert log
c) Querying v$undostat showed that TUNED_UNDORETENTION was much larger than UNDO_RETENTION, which is set to 54000.
Solution:
Set "_smu_debug_mode"=33554432 in the database.
This allows expired undo segments to be released and reused, and you should see the number of unexpired segments drop.
There is no negative impact from using this parameter; 33554432 is simply the fixed value that selects this mode.
This mode helps automatic undo tuning work effectively, and it helps a lot especially in cases where MAXQUERYLEN < TUNED_UNDORETENTION.
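A sketch of the checks and the fix in SQL*Plus (column names are from V$UNDOSTAT; the parameter setting is the one from the solution above):
SQL> SELECT MAX(tuned_undoretention), MAX(maxquerylen) FROM v$undostat;
SQL> SHOW PARAMETER undo_retention
SQL> SELECT status, COUNT(*) FROM dba_undo_extents GROUP BY status;
SQL> ALTER SYSTEM SET "_smu_debug_mode"=33554432;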
Thursday, 30 August 2012
Create SQL Profile
Sometimes we face an issue where a SQL statement's execution plan has changed. Using SQLT we can confirm whether the plan of a given SQL has changed or not.
If the plan has changed and we know the best plan, we can pin it with a SQL profile.
How to create SQL profile
Download SQLT from My Oracle Support (Metalink) and install it.
Once you have generated the SQLT report for the SQL ID and confirmed that the plan has changed, go to the sqlt directory.
cd sqlt/utl
conn / as sysdba
SQL> START coe_xfr_sql_profile.sql 1jfdhkb18cg4c 3818097359;
1jfdhkb18cg4c --> SQL ID
3818097359 --> plan hash value of the best plan, taken from the SQLT report
This generates a script that creates the SQL profile; run that script to create the profile.
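If AWR is licensed, the plan history for the statement can also be checked directly (sql_id from this example; the DBA_HIST views require the Diagnostics Pack):
SQL> SELECT DISTINCT plan_hash_value, TO_CHAR(timestamp, 'DD-MON-YY HH24:MI') plan_ts
  2  FROM dba_hist_sql_plan
  3  WHERE sql_id = '1jfdhkb18cg4c';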
Sunday, 15 July 2012
ORA-1078 ORA-1565 Errors while starting database with SPFILE on ASM/RAC (shared location)
Fact:
SQL> create pfile='/data01/pfile.ora' from spfile;
Made some changes in the pfile.
Started the database using the pfile.
Then created the spfile from the pfile using the command:
create spfile='/data01/oradata/ORCL/spfileORCL.ora' from pfile;
Then I started the database on Node 1, but got the error below.
Node 1:
SQL> startup
ORA-01078: failure in processing system parameters
ORA-01565: error in identifying file '/ices/data01/oradata/ORCL/spfileORCL.ora'
ORA-27041: unable to open file
HPUX-ia64 Error: 24: Too many open files
Additional information: 2
Node 2:
SYS@ORCL1>startup
ORA-03113: end-of-file on communication channel
Findings:
The first error states that parameter processing failed; the second that the spfile could not be identified or opened.
Yet I was able to open the database normally with the pfile, and the spfile was present on the shared location and accessible from both nodes.
The problem was that I had started the database with a pfile from a non-default location, while CREATE SPFILE ... FROM PFILE (with no path) read the default pfile in $ORACLE_HOME/dbs, which had different entries.
Solution:
Specify the pfile path explicitly when creating the spfile:
SQL> CREATE SPFILE='/data01/oradata/ORCL/spfileORCL.ora' from pfile='/data01/pfile.ora';
File created.
Now, retest the issue.
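On RAC, the per-node init file in $ORACLE_HOME/dbs is typically just a pointer to the shared spfile, something like this (an illustrative layout using the path from this post, not the author's actual file):
$ cat $ORACLE_HOME/dbs/initORCL1.ora
SPFILE='/data01/oradata/ORCL/spfileORCL.ora'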
Sunday, 17 June 2012
How to Reorganize Tables in Oracle
Table reorganization
After many changes to table data, logically sequential data may end up on non-sequential physical data pages, so the database manager must perform additional read operations to access it. Additional read operations are also required if a significant number of rows have been deleted. In such cases, you might consider reorganizing the table to match the index and to reclaim space.
A) Identify Tables needs to reorg:
1) table1
2) table2
3) table3
4) table4
B) Check free space
Free space required = 10 GB
Steps:
1)Take the Backup
a)Export backup
or
b)Full Backup
2) Record the sizes of the tables
select OWNER, SEGMENT_NAME, SEGMENT_TYPE, TABLESPACE_NAME, sum(BYTES/1024/1024) "size MB"
from dba_segments
where SEGMENT_NAME in ('table1', 'table2', 'table3', 'table4')
and owner = 'HR'
group by OWNER, SEGMENT_NAME, SEGMENT_TYPE, TABLESPACE_NAME
order by OWNER, SEGMENT_NAME, SEGMENT_TYPE, TABLESPACE_NAME;
3) Record the sizes of the indexes
select owner, segment_name, segment_type, tablespace_name, BYTES/1024/1024 "size MB"
from dba_segments
where segment_name in (select INDEX_NAME from dba_indexes
                       where table_name in ('table1', 'table2', 'table3', 'table4')
                       and owner = 'HR')
and owner = 'HR'
order by segment_name;
4)Check the available space in the Tablespaces
set pages 200
set lines 200
col n_extent format 999,999,999
ttitle 'Free Space by Tablespace'
col col1 format a28 heading "Tablespace"
col col2 format 999,999,999,999 heading "Bytes(KB)"
col col3 format 999,999,999 heading "Used(KB)"
col col4 format 999,999,999 heading "Free(KB)"
col col5 format 999.9 heading "Free(%)"
break on report
compute sum of col2 col3 col4 FREE_MAX on report
select
-- b.file_id "File #",
b.tablespace_name col1,
b.bytes / 1024 col2,
((b.bytes - sum(nvl(a.bytes,0)))) / 1024 col3,
(sum(nvl(a.bytes,0))) / 1024 col4,
(sum(nvl(a.bytes,0)) / (b.bytes)) * 100 col5,
max(a.bytes)/1024 FREE_MAX, count(*)
from sys.dba_free_space a,
(select tablespace_name, sum(bytes) bytes from sys.dba_data_files
group by tablespace_name ) b
-- where a.file_id(+) = b.file_id
where a.tablespace_name = b.tablespace_name
group by b.tablespace_name, b.bytes
order by 5;
SQL> set lines 200
SQL> col file_name for a60
SQL> col tablespace_name for a25
SQL> select file_name,tablespace_name,bytes/1024/1024,autoextensible from dba_data_files where tablespace_name='TEST';
!bdf | grep /data1
Check the availability of space and manage it as per the requirement.
5)Check the status of the objects
SELECT COUNT(*) FROM DBA_OBJECTS;
SELECT COUNT(*) FROM DBA_OBJECTS WHERE STATUS='VALID';
SELECT COUNT(*) FROM DBA_OBJECTS WHERE STATUS='INVALID';
select owner,object_type,status,count(*) from dba_objects where owner='HR' group by owner,object_type,status order by object_type;
SELECT OWNER,OBJECT_NAME,OBJECT_TYPE FROM dba_objects where status='INVALID' AND OWNER='HR';
6)Check the Count of the objects
select count(*) from HR.table1;
select count(*) from HR.table2;
select count(*) from HR.table3;
select count(*) from HR.table4;
select count(*) from HR.table4;
7) Prepare the Script to move the TABLES
a) Non-partitioned
select 'alter table ' || OWNER || '.' || TABLE_NAME || ' move tablespace GLOBAL_data1;'
from dba_tables
where TABLE_NAME in ('table1', 'table2', 'table3', 'table4')
and owner = 'HR' and PARTITIONED = 'NO';
b) Partitioned
select 'alter table ' || table_owner || '.' || table_name || ' move partition ' || partition_name || ' tablespace GLOBAL_data1;'
from dba_tab_partitions
where table_name in ('table1', 'table2', 'table3', 'table4')
and TABLE_OWNER = 'HR';
Execute the script and generate the spool file.
Check the spool file for errors.
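For example, the generator query from step 7 can be spooled into a script and then executed, with its own log spooled for error checking (file names are illustrative):
SQL> set pages 0 lines 200 feedback off
SQL> spool move_tables.sql
SQL> -- run the generator query from step 7 here
SQL> spool off
SQL> set feedback on
SQL> spool move_tables.log
SQL> @move_tables.sql
SQL> spool off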
8)Prepare the Script to move the Indexes
a) Non-partitioned
select 'alter index ' || owner || '.' || index_name || ' rebuild tablespace GLOBAL_data1;'
from dba_indexes
where table_name in ('table1', 'table2', 'table3', 'table4')
and owner = 'HR';
b) Partitioned
select 'alter index ' || index_owner || '.' || index_name || ' rebuild partition ' || partition_name || ' tablespace GLOBAL_data1;'
from dba_ind_partitions
where index_name in (select INDEX_NAME from dba_indexes
                     where table_name in ('table1', 'table2', 'table3', 'table4')
                     and owner = 'HR' and PARTITIONED = 'YES');
Execute the script and generate the spool file.
Check the spool file for errors.
9) Confirm the status
Confirm the status and count of the objects using steps 5 and 6.
If they match, the activity is successful.
10) Rollback plan
Restore the tables from the export backup.
Sunday, 13 May 2012
ORA-12012 ORA-31623: a job is not attached to this session via the specified handle
Issue:
A few jobs are failing.
Error Message:
Errors in file /orahome/oracle/product/10.2.0/admin/orcl/bdump/orcl1_j001_2694.trc:
ORA-12012: error on auto execute of job 716137
ORA-31623: a job is not attached to this session via the specified handle
Findings:
All of these jobs are scheduled to run at the same time (4 AM daily).
The problem with running them all at once is that they all use the same table, 'DAYJOB'.
Each job tries to create a new 'DAYJOB' table, and hence each job fails.
Solution:
1.) Schedule the jobs at different times, making sure that none of them overlap with each other.
or
2.) Use a different table name for each job (DAYJOB1, DAYJOB2, etc.).
Check with the application team on the impact of using a different table name for each job.
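To see which jobs collide on the schedule, the classic job queue can be listed like this (DBMS_JOB jobs, as used in this 10.2 case):
SQL> SELECT job, schema_user, next_date, interval, what
  2  FROM dba_jobs
  3  ORDER BY next_date;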
Sunday, 6 May 2012
How to backup and restore statistics using dbms_stats
1) Check the last analyzed date of the SCOTT schema tables
ORCL>>select table_name,to_char(last_analyzed,'DD-MON-YYYY HH:MI:SS') FROM DBA_TABLES WHERE OWNER='SCOTT';
TABLE_NAME TO_CHAR(LAST_ANALYZE
------------------------------ --------------------
SALGRADE 09-FEB-2009 10:00:04
BONUS 09-FEB-2009 10:00:04
EMP 09-FEB-2009 10:00:04
DEPT 09-FEB-2009 10:00:04
2)Create stat table in users tablespace
ORCL>>exec dbms_stats.create_stat_table(ownname => 'SCOTT', stattab => 'stats_bkp_scott', tblspace => 'USERS');
PL/SQL procedure successfully completed.
3)Take the statistics backup of scott schema in stat table
ORCL>>exec dbms_stats.export_schema_stats(ownname => 'SCOTT', stattab => 'stats_bkp_scott');
PL/SQL procedure successfully completed.
4) Take an export backup of the stats table in the SCOTT schema
$ exp scott/scott1 file=scott_stat_bkp_09122010.dmp tables=scott.STATS_BKP_SCOTT
Export: Release 10.2.0.4.0 - Production on Thu Dec 9 14:37:42 2010
Copyright (c) 1982, 2007, Oracle. All rights reserved.
Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Export done in US7ASCII character set and AL16UTF16 NCHAR character set
server uses WE8ISO8859P1 character set (possible charset conversion)
About to export specified tables via Conventional Path ...
. . exporting table STATS_BKP_SCOTT 24 rows exported
Export terminated successfully without warnings.
5)Gather the statistics
14:41:39 ORCL>>exec dbms_stats.gather_table_stats(ownname=>'SCOTT', tabname=>'DEPT', cascade=>true, method_opt => 'for all indexed columns',granularity =>'all',estimate_percent=> 30,degree=>12);
PL/SQL procedure successfully completed.
Elapsed: 00:00:00.14
14:41:51 ORCL>>exec dbms_stats.gather_table_stats(ownname=>'SCOTT', tabname=>'EMP', cascade=>true, method_opt => 'for all indexed columns',granularity =>'all',estimate_percent=> 30,degree=>12);
PL/SQL procedure successfully completed.
14:42:58 ORCL>>exec dbms_stats.gather_table_stats(ownname=>'SCOTT', tabname=>'BONUS', cascade=>true, method_opt => 'for all indexed columns',granularity =>'all',estimate_percent=> 30,degree=>12);
PL/SQL procedure successfully completed.
14:43:19 ORCL>>exec dbms_stats.gather_table_stats(ownname=>'SCOTT', tabname=>'SALGRADE', cascade=>true, method_opt => 'for all indexed columns',granularity =>'all',estimate_percent=> 30,degree=>12);
PL/SQL procedure successfully completed.
6)Check the last analyzed date of tables
14:43:41 ORCL>> select table_name,to_char(last_analyzed,'DD-MON-YYYY HH:MI:SS') FROM DBA_TABLES WHERE OWNER='SCOTT';
TABLE_NAME TO_CHAR(LAST_ANALYZE
------------------------------ --------------------
STATS_BKP_SCOTT
SALGRADE 09-DEC-2010 02:43:41
BONUS 09-DEC-2010 02:42:59
EMP 09-DEC-2010 02:42:27
DEPT 09-DEC-2010 02:41:50
7) Import/revert the statistics of one or two tables from the backup
15:07:22 ORCL>>exec dbms_stats.import_table_stats(ownname=>'scott', tabname=>'emp', statown=>'scott', stattab=>'stats_bkp_scott', cascade=>true);
PL/SQL procedure successfully completed.
Elapsed: 00:00:00.30
15:10:28 ORCL>>exec dbms_stats.import_table_stats(ownname=>'scott', tabname=>'dept', statown=>'scott', stattab=>'stats_bkp_scott', cascade=>true);
PL/SQL procedure successfully completed.
Elapsed: 00:00:00.26
8)Check the last analyzed date of the tables
15:120:31 ORCL>>select table_name,to_char(last_analyzed,'DD-MON-YYYY HH:MI:SS') FROM DBA_TABLES WHERE OWNER='SCOTT';
TABLE_NAME TO_CHAR(LAST_ANALYZE
------------------------------ --------------------
STATS_BKP_SCOTT
SALGRADE 09-DEC-2010 02:43:41
BONUS 09-DEC-2010 02:42:59
EMP 09-FEB-2009 10:00:04
DEPT 09-FEB-2009 10:00:04
Elapsed: 00:00:00.54
9)Revert the statistics of whole schema from the backup
15:40:38 ORCL>>EXECUTE DBMS_STATS.IMPORT_SCHEMA_STATS ('SCOTT','stats_bkp_scott');
PL/SQL procedure successfully completed.
Elapsed: 00:00:00.34
10)Check the last analyzed date of the tables
15:45:53 ORCL>>select table_name,to_char(last_analyzed,'DD-MON-YYYY HH:MI:SS') FROM DBA_TABLES WHERE OWNER='SCOTT';
TABLE_NAME TO_CHAR(LAST_ANALYZE
------------------------------ --------------------
STATS_BKP_SCOTT
SALGRADE 09-FEB-2009 10:00:04
BONUS 09-FEB-2009 10:00:04
EMP 09-FEB-2009 10:00:04
DEPT 09-FEB-2009 10:00:04
Elapsed: 00:00:00.27
Sunday, 22 April 2012
Oracle Voting disk concept
Voting Disk is a file that resides in the shared storage area and must be accessible by all nodes in the cluster. All nodes in the cluster register their heartbeat information in the voting disk, to confirm that they are all operational. If a node's heartbeat information is missing from the voting disk, that node will be evicted from the cluster. The CSS (Cluster Synchronization Service) daemon in the clusterware maintains the heartbeat of all nodes to the voting disk. When a node is not able to send its heartbeat to the voting disk, it will reboot itself, thus helping to avoid split-brain syndrome.
For high availability, Oracle recommends an odd number of voting disks, with a minimum of three.
According to Oracle, "An absolute majority of voting disks configured (more than half) must be available and responsive at all times for Oracle Clusterware to operate." This means that to survive the loss of N voting disks, you must configure at least 2N+1 voting disks.
Suppose you have 5 voting disks configured for your 2 Node environment, then you can survive even after loss of 2 voting disks.
Keep in mind that, having multiple voting disks is reasonable if you keep them on different disks/volumes/san arrays so that your cluster can survive even during the loss of one disk/volume/array. So, there is no point in configuring multiple voting disks on a single disk/lun/array.
There is also a special scenario in which all nodes can see all the voting disks but the cluster interconnect between the nodes has failed. To avoid split-brain syndrome here, a node eviction must still happen; the question is which node.
According to Oracle, "The node with the lower node number will survive the eviction (the first node to join the cluster)." So the very first node that joined the cluster survives.
Operations
1.) Obtaining voting disk information –
$ crsctl query css votedisk
2.) Adding Voting Disks
First shut down Oracle Clusterware on all nodes, then use the following commands as the root user.
# crsctl add css votedisk <path of voting disk>
3.) Removing a voting disk:
First shut down Oracle Clusterware on all nodes, then use the following commands as the root user.
# crsctl delete css votedisk <path of voting disk>
Do not use the -force option to add or remove a voting disk while the Oracle Clusterware stack is active; it can corrupt the cluster configuration. You can use it when the cluster is down, to modify the voting disk configuration with either of these commands without interacting with active Oracle Clusterware daemons.
4.) Backing up Voting Disks
Perform backup operation whenever there is change in the configuration like add/delete of new nodes or add/delete of voting disks.
$ dd if=current_voting_disk of=backup_file_name
If your voting disk is stored on a raw device, specify the device name -
$ dd if=/dev/sdd1 of=/tmp/vd1_.dmp
5.) Recovering Voting Disks
A Bad voting disk can be recovered using a backup copy.
$ dd if=backup_file_name of=current_voting_disk
Sunday, 15 April 2012
Bug 9689310 Excessive child cursors / high VERSION_COUNT / OERI:17059 due to bind mismatch
We observed a high version count for the queries below in the ORCL database.
Data Collected:
SQL_ID MODULE VERSION_COUNT
92bxfrxwu51v5 oracl@orcl01.xyz.in (TNS V1-V3) 17
087g1afdgd4fh oracl@orcl01.xyz.in (TNS V1-V3) 12
carh6s6nmqh5w oracl@orcl01.xyz.in (TNS V1-V3) 25
b0kxk28kwfhn9 otrans@picggx02.xyz.in (TNS V1-V3) 46
ftuhvswznfpv8 otrans@picggx02.xyz.in (TNS V1-V3) 17
SqlText ( b0kxk28kwfhn9)
INSERT INTO M_CACHE01_ITEM( MESG_TYP,SITE_ID,JOB_NO,JOB_DT,INV_S
ERNO,ITEM_NO,SCHEME_CD,RITC_CODE,DESC_GD01,DESC_GD02,DESC_GD03,Q
TY_UNITS,QUANTITY,ITEM_RATE,UNIT_OF_RATE,NO_OF_UNITS,PMV,NOTN_NO
,THIRD_PARTY,FILE_NAME,UNIQUE_ID) VALUES (:1,:2,:3,TO_DATE(:4,'Y
YYY-MM-DD HH24:MI:SS'),:5,:6,:7,:8,:9,:10,:11,:12,:13,:14,:15,:1
6,:17,:18,:19,:20,:21)
Cause:
V$SQL_BIND_METADATA showed that the bind lengths are different among cursors for the same bind variables
Impact:
A large amount of the 'latch: library cache' wait event is observed in the database due to the high version count.
Solution:
Apply Patch 9689310
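To spot such cursors, a query along these lines can be used (the version_count threshold of 10 is arbitrary), with V$SQL_SHARED_CURSOR showing the bind-mismatch reason for a given statement:
SQL> SELECT sql_id, version_count, executions, module
  2  FROM v$sqlarea
  3  WHERE version_count > 10
  4  ORDER BY version_count DESC;
SQL> SELECT sql_id, bind_mismatch, COUNT(*)
  2  FROM v$sql_shared_cursor
  3  WHERE sql_id = 'b0kxk28kwfhn9'
  4  GROUP BY sql_id, bind_mismatch;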
Sunday, 1 April 2012
Oracle Log Mining Steps on RAC
* The whole activity must be done in one session.
* Check the number of archive logs you want to mine.
* Check the archive log sequences from both threads that you want to mine.
Suppose,
10.10.10.1 - node1
Mon Dec 13 12:26:16 2010
Thread 1 advanced to log sequence 15289 (LGWR switch)
Mon Dec 13 16:02:50 2010
Thread 1 advanced to log sequence 15298 (LGWR switch)
10.10.10.2 - node2
Mon Dec 13 12:26:16 2010
Thread 2 advanced to log sequence 15062 (LGWR switch)
Mon Dec 13 16:22:04 2010
Thread 2 advanced to log sequence 15072 (LGWR switch)
Steps:
1)set utl_file_dir
alter system set utl_file_dir='/oracle/test' scope=spfile (The path/location where you put the archive logs)
Bounce the database
2) Extracting the LogMiner dictionary to a flat file (a flat file is one of the options)
EXECUTE DBMS_LOGMNR_D.BUILD('dictionary.ora','/oracle/test',DBMS_LOGMNR_D.STORE_IN_FLAT_FILE);
3)Add Logfiles
EXECUTE DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => '/oracle/test/1_15289_66678900.dbf',OPTIONS => DBMS_LOGMNR.new);
EXECUTE DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => '/oracle/test/1_15290_66678900.dbf',OPTIONS => DBMS_LOGMNR.ADDFILE);
EXECUTE DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => '/oracle/test/1_15291_66678900.dbf',OPTIONS => DBMS_LOGMNR.ADDFILE);
EXECUTE DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => '/oracle/test/1_15292_66678900.dbf',OPTIONS => DBMS_LOGMNR.ADDFILE);
4)Filtering Data that is returned (Showing only Committed transactions)
EXECUTE DBMS_LOGMNR.START_LOGMNR(OPTIONS => DBMS_LOGMNR.COMMITTED_DATA_ONLY);
5)Extract relevant information from the view v$logmnr_contents
Example:
select seg_owner,operation,sql_redo,sql_undo from v$logmnr_contents where SEG_owner='SCOTT';
6)To end the Log mining process
exec dbms_logmnr.end_logmnr;
Note: Check the below doc for proper information
Using LogMiner, How to determine the cause of lots of redo generation. [ID 300395.1]
LogMiner Utility Release 8.1.x - 10g [ID 291686.1]
Sunday, 18 March 2012
TKPROF - Oracle
TKPROF
is the basic performance diagnostic tool that can help you monitor and tune applications running against the Oracle Server.
Usage: tkprof tracefile outputfile [explain= ] [table= ] [print= ] [insert= ] [sys= ] [sort= ]
  table=schema.tablename   Use 'schema.tablename' with 'explain=' option.
  explain=user/password    Connect to ORACLE and issue EXPLAIN PLAN.
  print=integer            List only the first 'integer' SQL statements.
  aggregate=yes|no
  insert=filename          List SQL statements and data inside INSERT statements.
  sys=no                   TKPROF does not list SQL statements run as user SYS.
  record=filename          Record non-recursive statements found in the trace file.
  waits=yes|no             Record summary for any wait events found in the trace file.
sort=option Set of zero or more of the following sort options:
prscnt number of times parse was called
prscpu cpu time parsing
prsela elapsed time parsing
prsdsk number of disk reads during parse
prsqry number of buffers for consistent read during parse
prscu number of buffers for current read during parse
prsmis number of misses in library cache during parse
execnt number of execute was called
execpu cpu time spent executing
exeela elapsed time executing
exedsk number of disk reads during execute
exeqry number of buffers for consistent read during execute
execu number of buffers for current read during execute
exerow number of rows processed during execute
exemis number of library cache misses during execute
fchcnt number of times fetch was called
fchcpu cpu time spent fetching
fchela elapsed time fetching
fchdsk number of disk reads during fetch
fchqry number of buffers for consistent read during fetch
fchcu number of buffers for current read during fetch
fchrow number of rows fetched
userid userid of user that parsed the cursor
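Putting the options together, a typical invocation looks like this (trace file name, output name and credentials are hypothetical):
tkprof orcl_ora_12345.trc orcl_ora_12345.txt sys=no waits=yes sort=fchela,exeela explain=scott/tiger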
Trace Output
SQL STATEMENTS PHASES
- Parse
- Execute
- Fetch
          count    cpu   elap   disk   query  current   rows
Parse:        1      1      2      0       0        0      0
Execute:      1      0      0      0       0        2      0
Fetch:        2     69    113    142     430        0     36
The output shows 142 disk reads and 430 memory reads (query + current). Such a high proportion of disk reads relative to memory reads is certainly a potential problem. The execution path shows a full table scan, confirming that we may have a potential problem.
count
The number of times this type of call was made.
cpu
The total CPU time for all of the calls of this type for this statement. If the TIMED_STATISTICS parameter in the init.ora is not set to TRUE, this statistic and the elapsed statistic will be 0.
elapsed
The total elapsed time for this call.
disk
The total number of data blocks retrieved from disk to satisfy this call.
query
The total number of data buffers retrieved from memory in consistent mode for this type of call. SELECT statements usually retrieve buffers in this mode.
current
The total number of data buffers retrieved from memory in current mode for this type of call. UPDATE, INSERT, and DELETE statements usually access buffers in this mode.
rows
The total number of rows processed by this statement. For SELECT statements, the row counts appear in the Fetch row; for INSERTs, UPDATEs, and DELETEs, they appear in the Execute row.
Few important tables for Oracle APPS DBA
Concurrent Manager
FND_CONCURRENT_QUEUES
FND_CONCURRENT_PROGRAMS
FND_CONCURRENT_REQUESTS
FND_CONCURRENT_PROCESSES
FND_CONCURRENT_QUEUE_SIZE
FND
FND_APPL_TOPS
FND_LOGINS
FND_USER
FND_DM_NODES
FND_TNS_ALIASES
FND_NODES
FND_RESPONSIBILITY
FND_DATABASES
FND_UNSUCCESSFUL_LOGINS
FND_LANGUAGES
FND_APPLICATION
FND_PROFILE_OPTION_VALUES
FND_PRODUCT_GROUPS
AD / Patches
AD_APPLIED_PATCHES
AD_PATCH_DRIVERS
AD_BUGS
AD_INSTALL_PROCESSES
AD_SESSIONS
AD_APPL_TOPS
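These tables answer common questions directly. For example, a minimal sketch to check whether a given patch has been applied (the patch number here is just an illustration):
select patch_name, creation_date
from ad_applied_patches
where patch_name = '9689310';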
Sunday, 26 February 2012
Autoconfig basics
Oracle AutoConfig in Oracle Applications
AutoConfig is the tool used to configure Oracle Applications systems. AutoConfig uses a file called the context file to apply configuration changes. The context file is an XML file holding all the parameters of the application system. These parameters belong to configuration files for various services, or to environment files. Each configuration file has one corresponding template file (provided by AutoConfig patches, stored in the $PROD_TOP/admin/template directory). Profile options and other instance-specific information in the database are maintained by many SQL scripts, called from wrapper shell/perl scripts. These scripts also have corresponding template files (also provided by AutoConfig patches, in $PROD_TOP/admin/template). In these template files all the environment-specific values are replaced by placeholders (like %s_webhost%). The environment-specific value for each placeholder is stored in an environment-specific XML file (called the application context file) in $APPL_TOP/admin.
Eg,
The following entry in httpd.conf:
Timeout 300
is replaced by the following in its template $FND_TOP/admin/template/httpd_ux_ias1022.conf:
Timeout %s_ohstimeout%
For the above placeholder the value stored in the xml file is:
<ohstimeout oa_var="s_ohstimeout">300</ohstimeout>
Each placeholder has a corresponding xml node in the context file. The placeholder name is the value for the "oa_var" attribute of the xml node, the placeholder name without the prefix "s_" (generally) is the node name for the xml node, and the value is stored as a child text node. These xml nodes are termed context variables by Oracle documentation, and each node is primarily identified by its oa_var attribute value. So effectively we can say that in the above case we replace "300" with the context variable "ohstimeout" whose oa_var value is "s_ohstimeout". Configuration files can be easily created from the template file by just replacing all the placeholders with the corresponding values from the context file. This process is termed "instantiating the template" in Oracle documentation.
Driver files (stored in $PROD_TOP/admin/driver) store information about what to do with each template (e.g. instantiate it and replace the existing configuration file with it, instantiate it and run it). These files are named as <PROD_TOP>tmpl.drv (e.g. adtmpl.drv, fndtmpl.drv etc.) They contain one line for each template they manage.
When AutoConfig (adconfig.pl or adconfig.sh) runs, it processes the driver file for each product, doing what each driver file line instructs it to do. Execution is grouped into phases, so the order of execution of the lines is not strictly sequential.
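For reference, a typical way to invoke AutoConfig on the application tier (the context file path is environment-specific; on R12 the adautocfg.sh wrapper under $ADMIN_SCRIPTS_HOME may be used instead):
$AD_TOP/bin/adconfig.sh contextfile=$APPL_TOP/admin/<SID>_<hostname>.xml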
Sunday, 19 February 2012
Compile Form,Pll with compile_all in Oracle R12
frmcmp_batch userid=apps/apps module=$AU_TOP/forms/US/FORM_name.fmb output_file=$PRODUCT_TOP/forms/US/FORM_name.fmx
module_type=form compile_all=special
frmcmp_batch userid=apps/apps module=<Name_of_pll_file> output_file=<Name_of_plx_file>
module_type=library compile_all=special
Why compile forms with compile_all=special
compile_all=special was originally introduced as a workaround for a bug that was fixed some time ago.
The main reason it is still recommended for Oracle Applications is due to the way attached libraries are inherited from other attached libraries, and the difficulties in subsequently removing them if added by mistake.
For example, if a new library is attached to CUSTOM.pll, that library also becomes directly attached to every form or library that has CUSTOM.pll attached, once the higher level object has been re-compiled.
With compile_all=yes, the new library would end up attached to both the source and generated objects, and removing it would be extremely difficult: as well as removing it from CUSTOM.pll, you would have to remove it from every library or form that had CUSTOM.pll attached.
With compile_all=special, the source is not updated, and only the generated files (.plx and .fmx) have the new library attached.
In the case where someone has mistakenly attached a library to CUSTOM.pll, (for example when they do not follow Oracle customization standards, and attach an Applications product library), you simply need to delete it from CUSTOM.pll, regenerate all the forms and libraries, and it will be gone.
If you never customize, or if you follow the standards documented in the Oracle Applications Developer's Guide, the use of compile_all=yes should never cause this kind of problem, but Oracle still recommends that you use compile_all=special.
If you open and save a form or library in the builder, it will inherit any new library attachments from its sub-libraries in the same way as if it was generated with compile_all=yes.
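When many custom forms need regenerating (for example after removing a library attachment from CUSTOM.pll), the same command can be looped. A minimal sketch, assuming custom forms carry an XX prefix and a hypothetical $XXCUST_TOP product top:
# compile every XX* custom form with compile_all=special
cd $AU_TOP/forms/US
for FRM in XX*.fmb
do
  NAME=`basename $FRM .fmb`
  frmcmp_batch userid=apps/apps module=$FRM \
    output_file=$XXCUST_TOP/forms/US/$NAME.fmx \
    module_type=form compile_all=special
done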
Wednesday, 1 February 2012
SMON_SCN_TO_TIME_AUX - CLUSTER
How to reorganize the object SMON_SCN_TO_TIME_AUX. The object type is CLUSTER.
Please test the steps on test environment first.
Take Backup
export ORACLE_SID=dev
export ORACLE_HOME=/ora_DEV/app/oracle/product/11.2.0/prodclone/dbhome_1
rman target /
run {
allocate channel ch1 device type disk;
allocate channel ch2 device type disk;
crosscheck archivelog all;
crosscheck backup of database;
delete expired backup of database ;
backup full tag full_db_bkp_dev format '/ora_DEV/PROD_STAGE/RMAN_DBF_%d_%T_%s_%p_%u.bkp' (database);
sql 'alter system archive log current';
backup format '/ora_DEV/PROD_STAGE/RMAN_ARCH_%d_%T_%s_%p_%u.bkp' archivelog all;
backup format '/ora_DEV/PROD_STAGE/RMAN_CTL_%d_%T_%s_%p_%u.bkp' current controlfile;
release channel ch1;
release channel ch2;
}
shutdown immediate
startup restrict
# Enable event 12500. Setting the 12500 event at system level should stop SMON from updating the SMON_SCN_TIME table.
alter system set events '12500 trace name context forever, level 10';
# Truncate the cluster SMON_SCN_TO_TIME_AUX
truncate cluster SMON_SCN_TO_TIME_AUX;
Rebuild the indexes
alter index SMON_SCN_TO_TIME_IDX rebuild;
alter index SMON_SCN_TO_TIME_AUX_IDX rebuild;
alter index SMON_SCN_TIME_TIM_IDX rebuild;
alter index SMON_SCN_TIME_SCN_IDX rebuild;
# Analyze the table to confirm it is clean
analyze table SMON_SCN_TIME validate structure cascade;
# Disable the event 12500 so SMON can resume updating the SMON_SCN_TIME table.
alter system set events '12500 trace name context off';
# Disable restricted session:
alter system disable restricted session;
shutdown immediate
startup
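Before and after the reorg it is worth checking how much data the cluster holds. A minimal sketch against the SMON_SCN_TIME table stored in the cluster:
select count(*), min(time_dp), max(time_dp)
from sys.smon_scn_time;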
Tuesday, 31 January 2012
RMAN - adclone.pl failed with error
Issue:
Cloning failed with error.
On EBS 12.1.2, during restoration of an RMAN backup using adclone, we got the error:
StackTrace -
java.lang.reflect.InvocationTargetException
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(Unknown Source)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(Unknown Source)
at java.lang.reflect.Constructor.newInstance(Unknown Source)
at oracle.apps.ad.clone.util.CloneProcessor.run(CloneProcessor.java:67)
at java.lang.Thread.run(Unknown Source)
Caused by: java.lang.NoClassDefFoundError: oracle/jdbc/OracleDriver
Solution:
1 - Copy $RDBMS_HOME/jdbc/lib/ojdbc6.jar to $RDBMS_HOME/appsutil/clone/jlib/ojdbc6.jar
2 - Run adclone.pl again; it will now find ojdbc6.jar and continue
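In shell terms (RDBMS_HOME is assumed to point at the database Oracle home):
cp $RDBMS_HOME/jdbc/lib/ojdbc6.jar $RDBMS_HOME/appsutil/clone/jlib/ojdbc6.jar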
Monday, 30 January 2012
ORA-609 and TNS-12537 in 11g Alert.log
Issue: ORA-609 and TNS-12537 in 11g Alert.log
Findings:
1) lsnrctl status hanged
2) conn user/pwd@orcl hanged
3) Database is up and running
4) Error in alert log
nt OS err code: 0
Fatal NI connect error 12537, connecting to:
(LOCAL=NO)
VERSION INFORMATION:
TNS for Linux: Version 11.1.0.6.0 - Production
Oracle Bequeath NT Protocol Adapter for Linux: Version 11.1.0.6.0 - Production
TCP/IP NT Protocol Adapter for Linux: Version 11.1.0.6.0 - Production
Time: 25-JAN-2012 12:38:49
Tracing not turned on.
Tns error struct:
ORA-609 : opiodr aborting process unknown ospid (4018_47693892033648)
ns main err code: 12537
TNS-12537: TNS:connection closed
ns secondary err code: 12560
nt main err code: 0
nt secondary err code: 0
nt OS err code: 0
ORA-609 : opiodr aborting process unknown ospid (4016_47051970669680)
Cause:
The ORA-609 error is thrown when a client connection of any kind fails to complete or aborts the connection
process before the server process is completely spawned.
Beginning with 10gR2, a default value for inbound connect timeout has been set at 60 seconds.
Solution:
Increase the value of INBOUND_CONNECT_TIMEOUT in both the listener.ora and the server-side sqlnet.ora as a preventive measure.
If the problem is due to connection timeouts, an increase in the following parameters should eliminate or reduce the occurrence of the ORA-609 errors.
Sqlnet.ora: SQLNET.INBOUND_CONNECT_TIMEOUT=180
Listener.ora: INBOUND_CONNECT_TIMEOUT_listener_name=120
I faced this issue on a Linux server. In my case the issue was resolved after changing one of the kernel parameters.
Change the value of net.ipv4.ip_forward from 0 to 1 in the /etc/sysctl.conf file.
# Controls IP packet forwarding
net.ipv4.ip_forward = 1
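To apply the change without a reboot, reload /etc/sysctl.conf as root:
sysctl -p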
Sunday, 29 January 2012
Digital Certificate Expired Details
Digital Certificate Expired Details:
The digital certificate used to sign Oracle Forms deployment related JAR files has expired on Thu Mar 24 19:59:59 EDT 2011.
So, a dialog box appears whenever a Forms application opens.
Impact on the Application:
The expiration of the JAR signature does not prevent the use of the product or of applications created with it. It simply means that inspection of the certificate details will show the expiration date. It also means that end users may be presented with a dialog at startup indicating that the certificate has expired, but they are offered an option to run the application regardless.
Choosing Cancel or Deny will prevent the Forms applet from running.
Workaround:
Clicking on yes on the dialog box, the application runs fine.
Solution:
Apply the following patches:
1.) Forms bundle patch: 9593176 (Size :5.9MB)
2.) Patch:11710576 (Size :4.7MB)
Validity of the solution:
These patches are available to extend the jar file expiration date to Sat Jul 06 19:59:59 EDT 2013.
Approach:
The patch will be applied on the Pre-prod environment first. Testing will be done in preprod first and if successful the patch will be applied to the Production Environment.
Thursday, 26 January 2012
Listener Services stops automatically every 2-3 minutes
Issue:
Listener Services stops automatically every 2-3 minutes.
Error in listener log:
Started with pid=9907
Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=10.10.10.1)(PORT=1521)))
TIMESTAMP * CONNECT DATA [* PROTOCOL INFO] * EVENT [* SID] * RETURN CODE
26-FEB-2011 11:50:47 * (CONNECT_DATA=(CID=(PROGRAM=)(HOST=test01)(USER=oracle))(COMMAND=status)(ARGUMENTS=64)(SERVICE=LISTENER)(VERSION=169870080)) * status * 0
26-FEB-2011 11:51:39 * service_register * unixpi * 0
26-FEB-2011 11:51:58 * (CONNECT_DATA=(CID=(PROGRAM=)(HOST=test01)(USER=oracle))(COMMAND=status)(ARGUMENTS=64)(SERVICE=LISTENER)(VERSION=169870080)) * status * 0
No longer listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=10.10.10.1)(PORT=1521)))
26-FEB-2011 11:52:01 * (CONNECT_DATA=(CID=(PROGRAM=)(HOST=test01)(USER=oracle))(COMMAND=stop)(ARGUMENTS=64)(SERVICE=LISTENER)(VERSION=169870080)) * stop * 0
Findings:
From the listener log it does appear that the listener is stopped "automatically every 2-3 minutes", but this is the result of a COMMAND=stop
issued by USER=oracle (the OS user here) from HOST=test01. The command comes from Oracle software
whose client libraries are VERSION=169870080 (A200300 in hex, which means 10.2.0.3). The only suggestion we can
offer is to check machine test01 for a script, or (in case it is a cluster node) a rule in the cluster software, that checks
the listener status; the script or the rule is probably drawing a wrong conclusion from that status, and the consequence
is that the command issued is COMMAND=stop. The same sequence is visible every time:
(COMMAND=status)
(COMMAND=status)
(COMMAND=stop)
So something checks for an expected value in the status output, decides the status is not the one expected,
and as a result the listener is restarted. If this is a cluster, the cluster admin will need to check the CRS scripts:
the CRS scripts could produce the same results/symptoms if the checks for the "cluster resources" (the listener is one)
are not interpreted well. Such a script checks whether the listener status is OK according to its own logic
and, if not, restarts the listener; usually that would require manual (human) intervention. Another possible source is a DBA
script running in a loop that performs the same check (LSNRCTL STATUS, twice), parses the result, and when a
certain condition is met stops the listener with the LSNRCTL STOP command. This is not a listener issue; a DBA has to
check and fix the condition being tested, or simply disable the check. Please verify and confirm this.
Exact Issue:
The application (hard-coded) checks the status of the default listener and stops it.
Solution
Change the Name of the Listener.
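To confirm who issues the stop, the listener log can be searched for stop commands. A minimal sketch (the log path depends on version and configuration):
grep 'COMMAND=stop' $ORACLE_HOME/network/log/listener.log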
Sunday, 22 January 2012
ORA-00600: [kpnatdm-1], [2501], [533]
Errors in file /orahome/oracle/product/10.2.0/admin/orcl/udump/orcl2_ora_26752.trc:
ORA-00600: internal error code, arguments: [kpnatdm-1], [2501], [533], [], [], [], [], []
Findings:
The error occurs when the SQL mentioned below is fired by the program oracle@test02 from machine test02 using the operating system user orauser.
SELECT :1||"A1"."ERR_DESC"||',' FROM "DIR"."DI_ERROR_LIST" "A1" WHERE "A1"."ERR_CD"=SUBSTR(:1,1,3) AND "A1"."MESG_ID"='ALCHI01'
This type of argument [kpnatdm-1], [2501], [533] for ORA-00600 is unknown in the bug database.
Recommendation:
It is suggested to upgrade the database to 10.2.0.5.
Tuesday, 17 January 2012
New Jobs in Oracle 11g
1. ORA$AUTOTASK_CLEAN
The job is created by the 11g script catmwin.sql (the catalog script for the maintenance window, which defines the maintenance windows and the stats collection job). This job is an AUTOTASK repository data-ageing job and deletes obsolete AUTOTASK repository data. The delete of this old data is performed by the procedure ora$age_autotask_data, which is executed by the job.
2. HM_CREATE_OFFLINE_DICTIONARY
The job is created by the 11g script catmwin.sql, which mentions that this is a job for the creation of an offline dictionary for the Database Repair Advisor.
3. DRA_REEVALUATE_OPEN_FAILURES
The job is created by the 11g script catmwin.sql, which mentions that this is a job for re-evaluating open failures for the Database Recovery Advisor. In other words, it is used to get assistance in case of loss or corruption of datafiles, redo log files or controlfiles. The job executes the procedure dbms_ir.reevaluateopenfailures.
4. MGMT_CONFIG_JOB
Comes with the OCM(Oracle Configuration Manager) installation.
This is a configuration collection job. The job is created by the script ocmjb10.sql, which runs the procedure 'ORACLE_OCM.MGMT_CONFIG.collect_config'.
5. MGMT_STATS_CONFIG_JOB
This is an OCM Statistics collection job created in ocmjb10.sql by running
'ORACLE_OCM.MGMT_CONFIG.collect_stats'.
6. BSLN_MAINTAIN_STATS_JOB
This job is a compute statistics job. This job runs the BSLN_MAINTAIN_STATS_PROG program on the BSLN_MAINTAIN_STATS_SCHED schedule. The program BSLN_MAINTAIN_STATS_PROG will keep the default baseline's statistics up-to-date.
7. XMLDB_NFS_CLEANUP_JOB
The job is created in xdbu102.sql and runs the procedure dbms_xdbutil_int.cleanup_expired_nfsclients.
8. RSE$CLEAN_RECOVERABLE_SCRIPT
This job is STREAMS related. It is created in execstr.sql and is new in 11.2. The script execstr.sql executes anonymous blocks for Streams. The job is responsible for cleaning recoverable scripts and calls 'sys.dbms_streams_auto_int.clean_recoverable_script;'.
9. SM$CLEAN_AUTO_SPLIT_MERGE
This job is STREAMS related too. It is created in the file execstr.sql and is also new in 11.2. The job is responsible for cleaning auto split/merge views.
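The current status of these jobs can be checked from the scheduler dictionary. A minimal sketch:
select owner, job_name, enabled, state
from dba_scheduler_jobs
where job_name in ('ORA$AUTOTASK_CLEAN','HM_CREATE_OFFLINE_DICTIONARY',
'DRA_REEVALUATE_OPEN_FAILURES','MGMT_CONFIG_JOB','MGMT_STATS_CONFIG_JOB',
'BSLN_MAINTAIN_STATS_JOB','XMLDB_NFS_CLEANUP_JOB',
'RSE$CLEAN_RECOVERABLE_SCRIPT','SM$CLEAN_AUTO_SPLIT_MERGE');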
Monday, 16 January 2012
Oracle PCTFREE and PCTUSED Parameters
PCTFREE and PCTUSED are the block storage parameters
PCTFREE - This parameter specifies how much space should be left in the block for updates. The default value is 10.
For example, if PCTFREE for a block is 30, Oracle keeps adding new rows to the block until the block is 70% full.
It leaves the remaining 30% for future updates. That is, if a row is updated and requires more space, the space is taken
from the 30% remaining in the block. Specify a HIGH value for this parameter if future updates to the rows of
the table are going to need more space. If your table is less prone to updates, this value can be set LOW.
PCTUSED : Oracle stops inserting new rows into a block when the used space reaches the PCTFREE limit and removes the block from the free list.
The data block is added back to the free list when the used space in it falls below the percentage set by the PCTUSED parameter. The default value is 40.
Once the data block is back on the free list, Oracle again starts inserting new rows into it.
Hence, if a large value is set for PCTUSED, data blocks are returned to the free list more frequently, and vice versa.
For example, suppose you have specified PCTUSED as 40% and PCTFREE as 20%.
1. Oracle will keep inserting new rows until the block is 80% used. It reserves the remaining 20% for future updates.
2. To start adding new rows to the block again, Oracle checks the used space in the block against the PCTUSED parameter.
3. When the used space falls below 40%, Oracle starts adding new rows to the block again.
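As a worked example, the settings above translate directly into DDL. A minimal sketch (table name is hypothetical; note that PCTUSED is honored only in tablespaces with manual segment space management, as ASSM ignores it):
create table demo_blocks (
  id  number,
  txt varchar2(200)
)
pctfree 20
pctused 40;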
Saturday, 14 January 2012
ORA-48913 - error message in Oracle 11g alert log
Error Message in Alert Log:
On 11.2.0.1, encountered the following error in the alert.log.
Wed Jan 04 11:23:10 2012
Non critical error ORA-48913 caught while writing to trace file "/orahome/app/oracle/diag/rdbms/vis/vis2/trace/vis2_lmd0_991420.trc"
Error message: ORA-48913: Writing into trace file failed, file size limit [10485760] reached
Writing to the above trace file is disabled for now on...
Cause:
Parameter MAX_DUMP_FILE_SIZE is set too low
Solution:
Increase the setting for the parameter MAX_DUMP_FILE_SIZE or set it to unlimited
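For example (value and scope per your standards; on RAC, sid='*' applies it to all instances):
alter system set max_dump_file_size = 'UNLIMITED' scope=both;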
Friday, 13 January 2012
nt secondary err code in Oracle 11g Alert log
Error message in 11g database alert log:
Wed Jan 04 18:56:39 2012
*********************************************************
Fatal NI connect error 12170.
VERSION INFORMATION:
TNS for Solaris: Version 11.2.0.1.0 - Production
Oracle Bequeath NT Protocol Adapter for Solaris: Version 11.2.0.1.0 - Production
TCP/IP NT Protocol Adapter for Solaris: Version 11.2.0.1.0 - Production
Time: 04-JAN-2012 18:56:39
Tracing not turned on.
Tns error struct:
ns main err code: 12535
TNS-12535: TNS:operation timed out
ns secondary err code: 12560
nt main err code: 505
TNS-00505: Operation timed out
nt secondary err code: 145
nt OS err code: 0
Client address: (ADDRESS=(PROTOCOL=tcp)(HOST=10.159.34.117)(PORT=1092))
Changes:
No changes are necessary; however, you may have recently upgraded the database to 11g Release 1 or higher, or installed a new Oracle 11g database.
Note: Prior to 11gR1 these same 'Fatal NI connect error 12170' are written to the sqlnet.log
Cause:
These time out related messages are mostly informational in nature.
The messages indicate the specified client connection (identified by the 'Client address:' details) has experienced a time out. The 'nt secondary err code' identifies the underlying network transport, such as (TCP/IP) timeout limits after a client has abnormally terminated the database connection.
The 'nt secondary err code' translates to underlying network transport timeouts for the following Operating Systems:
For the Solaris system: nt secondary err code: 145:
ETIMEDOUT 145 /* Connection timed out */
For the Linux operating system: nt secondary err code: 110
ETIMEDOUT 110 Connection timed out
For the HP-UX system: nt secondary err code: 238:
ETIMEDOUT 238 /* Connection timed out */
For AIX: nt secondary err code: 78:
ETIMEDOUT 78 /* Connection timed out */
For Windows based platforms: nt secondary err code: 60 (which translates to Winsock Error: 10060)
Description: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.
The reason the messages are written to the alert log is related to the use of the new 11g Automatic Diagnostic Repository (ADR) feature being enabled by default. See (Doc ID 454927.1).
Suggested Actions:
- Search the corresponding text version of the listener log located on the database server for the corresponding client connection referenced by the Client address details referenced in the alert log message.
For the message incident below you would search the listener log for the 'Client address' string:
(ADDRESS=(PROTOCOL=tcp)(HOST=10.159.34.117)(PORT=1092))
The search of the listener log should find the most recent connection before the time reference displayed in the alert log message, e.g. '04-JAN-2012 18:56:39'.
-Corresponding listener log entry:
04-JAN-2012 21:20:12 * (CONNECT_DATA=(SERVER=DEDICATED)(SERVICE_NAME=AMN11264.us.oracle.com)(CID=(PROGRAM=D:\app\mcassady\product\11.2.0\dbhome_1\bin\sqlplus.exe)(HOST=mcassady-lap)(USER=mca
ssady))) * (ADDRESS=(PROTOCOL=tcp)(HOST=10.159.34.117)(PORT=1092)) * establish * AMN11264.us.oracle.com * 0
Note the time of the corresponding client connection(s) in the listener log. Here you may find a particular client, set of clients, or particular application that is improperly disconnecting, causing the timeout errors to be raised and recorded in the database alert log.
You may choose to revert from the new Automatic Diagnostic Repository (ADR) method to prevent the Oracle Net diagnostic details from being written to the alert log(s) by setting the following Oracle Net configuration parameters:
To revert to Oracle Net Server tracing/logging, set following parameter in the server's sqlnet.ora :
DIAG_ADR_ENABLED = OFF
Also, to back out the ADR diag for the Listener component, set following parameter in the server's listener.ora:
DIAG_ADR_ENABLED_<listenername> = OFF
- Where the <listenername> would be replaced with the actual name of the configured listener(s) in the listener.ora configuration file. For example, if the listener name is 'LISTENER', the parameter would read:
DIAG_ADR_ENABLED_LISTENER = OFF
-Reload or restart the TNS Listener for the parameter change to take effect.
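For example, assuming the listener is named LISTENER:
lsnrctl reload LISTENER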
Thursday, 12 January 2012
Change Static Parameter in RAC Environment
1) Check Parameter is static or dynamic
11:49:35 orcl1 > select name, value, issys_modifiable from v$parameter where name='processes';
NAME VALUE ISSYS_MOD
-------------------- -------------------- ---------
processes 400 FALSE
1 row selected.
The value is FALSE, which means the parameter is static.
2) Check Whether database Instance Started with PFILE or SPFILE?
11:49:44 orcl1 > select count(1) from v$spparameter where isspecified = 'TRUE';
COUNT(1)
----------
81
1 row selected.
The view returns NULL values if a PFILE was used to start up the instance.
or,
11:50:26 orcl1 > select name, value from v$parameter where name = 'spfile';
NAME VALUE
-------------------- --------------------
spfile /ora_data/db/apps_st/data/spfileorcl.ora
1 row selected.
or,
11:50:45 orcl1 > show parameter spfile;
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
spfile string /ora_data/db/apps_st/data/spfileorcl.ora
Check the same on node 2 as well. If the value is the same, the instances are running with a common spfile.
3) Change the Parameter
ALTER SYSTEM SET processes=500 scope=spfile sid='*';
4) Bounce the Instances
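A minimal sketch using srvctl (database name orcl assumed, matching the spfile above):
srvctl stop database -d orcl
srvctl start database -d orcl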
Wednesday, 11 January 2012
Why we need VIP in Oracle 10g RAC
In an Oracle RAC database, if a user process tries to connect through a listener and finds the listener or the node down, Oracle RAC automatically transfers the request to a listener on another node. Up to Oracle 9i we used physical IP addresses in the listener configuration: if the requested connection failed, it was diverted to another node using the physical IP address of a surviving node. During this automatic transfer, however, the connection had to wait for the TCP/IP connection timeout before receiving the error that the node or listener was down. Only once the error message was received would Oracle RAC divert the new connection request to a surviving node. With physical IP addresses this TCP/IP timeout is the biggest gap in failover, and the high availability of Oracle RAC is held back by this time-wasting wait for the error message.
Oracle 10g introduced the concept of the virtual IP (VIP). A VIP avoids the TCP/IP timeout problem because the Oracle Notification Service (ONS) maintains communication between the nodes and listeners. Once ONS finds a listener or node down, it notifies the other nodes and listeners of the situation. When a new connection tries to reach the failed node or listener, the failed node's VIP is automatically relocated to a surviving node and the session is established there. This process does not wait for a TCP/IP timeout event, so new connections get faster session establishment on the surviving nodes.
The VIP exists for fast connection establishment when a failure is detected. You can still use physical IP addresses in the listener in Oracle 10g if failover timing is not a concern, and you can make the default TCP/IP timeout smaller using operating system utilities or commands, but taking advantage of the VIP in an Oracle 10g RAC database is advisable.
With a VIP configuration, the client can get a connection quickly even when the connection request fails over to another node, because the VIP is reassigned to a surviving node immediately instead of waiting for the old-fashioned TNS timeout.
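To see the VIP configured for a node, a minimal sketch (node name hypothetical):
srvctl config nodeapps -n node1 -a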
Tuesday, 10 January 2012
If One Voting Disk is Corrupted - RAC
Voting disks are used in a RAC configuration for maintaining node membership. They are critical pieces of a cluster configuration. Starting with Oracle 10gR2 it is possible to mirror the OCR and the voting disks. Using the default mirroring template there are three voting disks, and a majority of them (two) must be available for normal functioning.
Scenario Setup
In this scenario it is simulated the crash of one voting disk by using the following steps:
1. Identify votings:
crsctl query css votedisk
0. 0 /dev/raw/raw1
1. 0 /dev/raw/raw2
2. 0 /dev/raw/raw3
2. Corrupt one of the voting disks (as root):
dd if=/dev/zero of=/dev/raw/raw3 bs=1M
Recoverability Steps
1. check the “$CRS_HOME/log/[hostname]/alert[hostname].log” file. The following message should be written there which allows us to determine which voting disk became corrupted:
[cssd(9120)]CRS-1604:CSSD voting file is offline: /opt/oracle/product/10.2.0/crs_1/Voting1. Details in /opt/oracle/product/10.2.0/crs_1/log/aut-arz-ractest1/cssd/ocssd.log.
2. According to the above listing the Voting1 is the corrupted disk. Shutdown the CRS stack:
srvctl stop database -d fitstest -o immediate
srvctl stop asm -n aut-vie-ractest1
srvctl stop asm -n aut-arz-ractest1
srvctl stop nodeapps -n aut-vie-ractest1
srvctl stop nodeapps -n aut-arz-ractest1
crs_stat -t
On every node as root:
crsctl stop crs
3. Pick a good voting from the remaining ones and copy it over the corrupted one:
dd if=/dev/raw/raw2 of=/dev/raw/raw3 bs=1M
4. Start CRS (on every node as root):
crsctl start crs
5. Check log file “$CRS_HOME/log/[hostname]/alert[hostname].log”. It should look like shown below:
[cssd(14463)]CRS-1601:CSSD Reconfiguration complete. Active nodes are aut-vie-ractest1 aut-arz-ractest1 .
2011-11-10 15:19:53.954
[crsd(14268)]CRS-1012:The OCR service started on node aut-vie-ractest1.
2011-11-10 11:20:53.987
[evmd(14228)]CRS-1401:EVMD started on node aut-vie-ractest1.
2011-11-10 11:20:55.861 [crsd(14268)]CRS-1201:CRSD started on node aut-vie-ractest1.
6. After a couple of minutes check the status of the whole CRS stack:
[oracle@aut-vie-ractest1 ~]$ crs_stat -t
Name Type Target State Host
------------------------------------------------------------
ora....SM2.asm application ONLINE ONLINE aut-...est1
ora....T1.lsnr application ONLINE ONLINE aut-...est1
ora....st1.gsd application ONLINE ONLINE aut-...est1
ora....st1.ons application ONLINE ONLINE aut-...est1
ora....st1.vip application ONLINE ONLINE aut-...est1
ora....SM1.asm application ONLINE ONLINE aut-...est1
ora....T1.lsnr application ONLINE ONLINE aut-...est1
ora....st1.gsd application ONLINE ONLINE aut-...est1
ora....st1.ons application ONLINE ONLINE aut-...est1
ora....st1.vip application ONLINE ONLINE aut-...est1
ora....test.db application ONLINE ONLINE aut-...est1
ora....t1.inst application ONLINE ONLINE aut-...est1
ora....t2.inst application ONLINE ONLINE aut-...est1
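As a side note, and a general 10g practice rather than part of this scenario, keeping a dd backup of a good voting disk makes this kind of recovery even simpler, since the backup can be restored directly over a damaged device (the backup destination below is hypothetical):
dd if=/dev/raw/raw2 of=/u01/backup/votedisk_raw2.bak bs=1M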
Sunday, 8 January 2012
Benefits of Automatic Storage Management
a) Automatic I/O load balancing, since data is striped across all disks in a disk group, giving better performance.
b) Available for both single-instance and RAC databases.
c) Built into the 10g kernel at no extra cost – available even in Standard Edition.
d) Prevents fragmentation, so there is no need to relocate data to reclaim space.
e) Overcomes file system size limitations.
f) Prevents accidental file deletion.
g) Backups with RMAN.
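As a quick illustration, creating a mirrored disk group on an ASM instance takes a single statement; the disk group name and device paths below are hypothetical:
SQL> CREATE DISKGROUP data NORMAL REDUNDANCY
  2  DISK '/dev/raw/raw5', '/dev/raw/raw6';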
Friday, 6 January 2012
BSLN_MAINTAIN_STATS_JOB is failing
Issue:
BSLN_MAINTAIN_STATS_JOB is failing on every scheduled run.
Error in Alert Log:
Sun Jan 01 13:30:01 2012
Errors in file /orahome/app/oracle/diag/rdbms/orcl/orcl1/trace/orcl1_j000_1781806.trc:
ORA-12012: error on auto execute of job 353101
ORA-06550: line 1, column 780:
PLS-00201: identifier 'DBSNMP.BSLN_INTERNAL' must be declared
ORA-06550: line 1, column 780:
PL/SQL: Statement ignored
Trace file:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning, Real Application Clusters, OLAP, Data Mining
and Real Application Testing options
ORACLE_HOME = /orahome/app/oracle/product/11.2.0/dbhome_1
System name: AIX
Node name: PROD
Release: 1
Version: 6
Machine: 00C8A8E54C00
Instance name: orcl1
Redo thread mounted by this instance: 1
Oracle process number: 56
Unix process pid: 1781806, image: oracle@PROD (J000)
*** 2012-01-01 13:30:01.195
*** SESSION ID:(308.19730) 2012-01-01 13:30:01.195
*** CLIENT ID:() 2012-01-01 13:30:01.195
*** SERVICE NAME:(SYS$USERS) 2012-01-01 13:30:01.195
*** MODULE NAME:(DBMS_SCHEDULER) 2012-01-01 13:30:01.195
*** ACTION NAME:(BSLN_MAINTAIN_STATS_JOB) 2012-01-01 13:30:01.195
ORA-12012: error on auto execute of job 353101
ORA-06550: line 1, column 780:
PLS-00201: identifier 'DBSNMP.BSLN_INTERNAL' must be declared
ORA-06550: line 1, column 780:
PL/SQL: Statement ignored
Findings:
SQL> select object_name from dba_objects where object_id=353101;
OBJECT_NAME
-----------------------------------
BSLN_MAINTAIN_STATS_JOB
09:58:16 orcl1 > select log_date,status
from dba_scheduler_job_run_details
where job_name='BSLN_MAINTAIN_STATS_JOB'
order by log_date desc;
LOG_DATE STATUS
------------------------------------------------------------------------ ------------
01-JAN-12 01.30.01.224309 PM +05:30 FAILED
25-DEC-11 01.30.00.855555 PM +05:30 FAILED
18-DEC-11 01.30.00.761203 PM +05:30 FAILED
11-DEC-11 01.30.00.583605 PM +05:30 FAILED
04-DEC-11 01.30.00.450731 PM +05:30 FAILED
Role of BSLN_MAINTAIN_STATS_JOB:
This is a statistics-computation job: it runs the BSLN_MAINTAIN_STATS_PROG program on the BSLN_MAINTAIN_STATS_SCHED schedule, and that program keeps the default baseline's statistics up to date.
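The job's definition can be confirmed in the scheduler views; a quick sketch, run as a DBA user:
SQL> SELECT program_name, schedule_name, state
  2  FROM dba_scheduler_jobs
  3  WHERE job_name = 'BSLN_MAINTAIN_STATS_JOB';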
Cause:
The table DBSNMP.BSLN_BASELINES contains inconsistent information. After database cloning, the existing records in DBSNMP.BSLN_BASELINES conflict with the new baseline information inserted in the cloned database, so the DBSNMP user needs to be dropped and re-created.
Solution:
SQL> @?/rdbms/admin/catnsnmp.sql
SQL> @?/rdbms/admin/catsnmp.sql
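Once both scripts complete (catnsnmp.sql drops the DBSNMP user, catsnmp.sql re-creates it), the fix can be verified by running the job manually and re-checking its run history; a sketch, assuming the job is owned by SYS as is the default:
SQL> exec dbms_scheduler.run_job('SYS.BSLN_MAINTAIN_STATS_JOB');
SQL> SELECT log_date, status
  2  FROM dba_scheduler_job_run_details
  3  WHERE job_name = 'BSLN_MAINTAIN_STATS_JOB'
  4  ORDER BY log_date DESC;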
Wednesday, 4 January 2012
Why should Apps & Applsys passwords always be the same
APPS and APPLSYS must have the same password because when you sign on to Apps, the connection is initially made to a public schema called APPLSYSPUB.
This schema validates the AOL username and password we enter (operations/welcome) using the guest user account. Once this is verified and we select a responsibility, the responsibility is validated by the APPLSYS schema, and the session then connects to the APPS schema.
Since the sign-on process uses both APPLSYS and APPS, it expects both passwords to be identical.
APPS is a universal schema that has synonyms for all base product tables and sequences. It also holds the code objects for all products (triggers, views, packages, synonyms, etc.).
APPLSYS owns the applications technology layer products such as FND and AD.
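Because the two passwords must stay in sync, they should always be changed together with the FNDCPASS utility, which updates APPLSYS and APPS in one step; the passwords below are placeholders:
FNDCPASS apps/<apps_pwd> 0 Y system/<system_pwd> SYSTEM APPLSYS <new_pwd>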
Monday, 2 January 2012
Check Oracle High Redo Generation
Step 1: Get the segments that experienced the most changes during the period:
SELECT to_char(begin_interval_time,'YYYY_MM_DD HH24:MI') snap_time,
       dhsso.object_name,
       sum(db_block_changes_delta)
FROM   dba_hist_seg_stat dhss,
       dba_hist_seg_stat_obj dhsso,
       dba_hist_snapshot dhs
WHERE  dhs.snap_id = dhss.snap_id
AND    dhs.instance_number = dhss.instance_number
AND    dhss.obj# = dhsso.obj#
AND    dhss.dataobj# = dhsso.dataobj#
AND    begin_interval_time
       BETWEEN to_date('2008_05_02 12','YYYY_MM_DD HH24')
       AND to_date('2008_05_04 12','YYYY_MM_DD HH24')
GROUP BY to_char(begin_interval_time,'YYYY_MM_DD HH24:MI'),
       dhsso.object_name
ORDER BY sum(db_block_changes_delta) DESC;
Step 2: Run the following query to compare the changes during the period to what they were historically for a suspect segment (OT_PENDING in this example):
SELECT to_char(begin_interval_time,'YYYY_MM_DD HH24:MI') snap_time,
       sum(db_block_changes_delta)
FROM   dba_hist_seg_stat dhss,
       dba_hist_seg_stat_obj dhsso,
       dba_hist_snapshot dhs
WHERE  dhs.snap_id = dhss.snap_id
AND    dhs.instance_number = dhss.instance_number
AND    dhss.obj# = dhsso.obj#
AND    dhss.dataobj# = dhsso.dataobj#
AND    dhsso.object_name = 'OT_PENDING'
GROUP BY to_char(begin_interval_time,'YYYY_MM_DD HH24:MI');
Step 3: Gather all SQL statements that were run during the period and referenced the table:
SELECT to_char(begin_interval_time,'YYYY_MM_DD HH24:MI'),
       dbms_lob.substr(sql_text,4000,1),
       dhss.instance_number,
       dhss.sql_id, executions_delta, rows_processed_delta
FROM   dba_hist_sqlstat dhss,
       dba_hist_snapshot dhs,
       dba_hist_sqltext dhst
WHERE  upper(dhst.sql_text) LIKE '%OT_PENDING%'
AND    dhss.snap_id = dhs.snap_id
AND    dhss.instance_number = dhs.instance_number
AND    dhss.sql_id = dhst.sql_id;
Step 4: Take the SQL_ID of the suspect statement and search through DBA_HIST_ACTIVE_SESS_HISTORY to see who ran it:
SELECT instance_number,
       to_char(sample_time,'yyyy_mm_dd hh24:mi:ss'),
       user_id,
       program
FROM   dba_hist_active_sess_history
WHERE  sql_id = 'bxjz5d4p6x02u'
AND    snap_id BETWEEN 43212 AND 43220;
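As a quick live check, and an addition to the steps above, the cumulative redo generated since instance startup is visible in v$sysstat; sampling it twice a few minutes apart gives the current generation rate:
SQL> SELECT name, value FROM v$sysstat WHERE name = 'redo size';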
One of the best documents on diagnosing who is generating all the redo:
http://appcrawler.com/wordpress/2009/04/15/who-is-generating-all-the-redo/