Steps to Apply PSU 5 on Oracle 11.2.0.3 Database
Oracle recommends applying the latest available PSU-1 (i.e. one PSU behind the latest).
1) Download and install p6880880_112000_Linux-x86-64 (the prerequisite OPatch update)
a) Back up the $ORACLE_HOME/OPatch directory
b) Remove the contents of the OPatch directory
c) Copy p6880880_112000_Linux-x86-64.zip to $ORACLE_HOME and unzip it (unzip -o <patch>.zip)
d) Check the patch prerequisites/conflicts
Go to the patch 14727310 location
Run -> opatch prereq CheckConflictAgainstOHWithDetail -ph ./
e) Roll back patch 13004894
opatch rollback -id 13004894
f) Apply the patch 14727310
opatch apply
g) Re-apply patch 13004894
opatch apply
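For reference, the whole flow can be scripted roughly as below. This is only a sketch of the commands listed above (the /stage paths are placeholders for wherever you unzipped the patches); always follow the patch README for your environment.
# a) Back up and replace OPatch with the latest version (p6880880)
cd $ORACLE_HOME
mv OPatch OPatch.bkp
unzip -o /stage/p6880880_112000_Linux-x86-64.zip -d $ORACLE_HOME
$ORACLE_HOME/OPatch/opatch version
# d) Check for conflicts from the PSU patch directory
cd /stage/14727310
$ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -ph ./
# e)-g) Roll back the conflicting one-off, apply the PSU, then re-apply the one-off
$ORACLE_HOME/OPatch/opatch rollback -id 13004894
$ORACLE_HOME/OPatch/opatch apply                     # run from /stage/14727310
cd /stage/13004894 && $ORACLE_HOME/OPatch/opatch apply
# Confirm what is now installed
$ORACLE_HOME/OPatch/opatch lsinventory | grep -i "14727310\|13004894"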
Tuesday, 3 December 2013
Change Override address in Workflow Mailer
Change the override address in the Oracle Workflow mailer from the back end (SQL)
1) Check current Override address
select fscpv.PARAMETER_VALUE from fnd_svc_comp_param_vals fscpv
where fscpv.parameter_id in (select fscpt.parameter_id
from fnd_svc_comp_params_tl fscpt
where fscpt.display_name = 'Test Address');
2) Update with new override address
update fnd_svc_comp_param_vals fscpv
set fscpv.PARAMETER_VALUE = 'vivek.bhutani@xyz.com'
where fscpv.parameter_id in (select fscpt.parameter_id
from fnd_svc_comp_params_tl fscpt
where fscpt.display_name = 'Test Address');
3) Confirm the new override address
select fscpv.PARAMETER_VALUE from fnd_svc_comp_param_vals fscpv
where fscpv.parameter_id in (select fscpt.parameter_id
from fnd_svc_comp_params_tl fscpt
where fscpt.display_name = 'Test Address');
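If the instance has more than one mailer, it helps to see which service component each Test Address belongs to before updating it. A hedged sketch, assuming the standard FND_SVC_COMPONENTS columns (COMPONENT_ID, COMPONENT_NAME); remember to commit after the update and bounce the Workflow mailer for the change to take effect:
select fsc.component_name,
       fscpt.display_name,
       fscpv.parameter_value
  from fnd_svc_comp_param_vals fscpv,
       fnd_svc_comp_params_tl  fscpt,
       fnd_svc_components      fsc
 where fscpv.parameter_id = fscpt.parameter_id
   and fscpv.component_id = fsc.component_id
   and fscpt.display_name = 'Test Address';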
Monday, 2 December 2013
How to disable maintenance mode in oracle apps
Disable the maintenance mode from sqlplus in Oracle ERP
$ cd $AD_TOP/patch/115/sql
$ sqlplus apps/apps
SQL> select fnd_profile.value('APPS_MAINTENANCE_MODE') from dual;
SQL> @adsetmmd.sql DISABLE
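The same script toggles maintenance mode back on when you need it for patching; a quick sketch of the reverse operation (the profile is expected to show MAINT when enabled and NORMAL when disabled):
$ cd $AD_TOP/patch/115/sql
$ sqlplus apps/apps
SQL> @adsetmmd.sql ENABLE
SQL> select fnd_profile.value('APPS_MAINTENANCE_MODE') from dual;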
Sunday, 1 December 2013
Drop Database
How to Drop Oracle 10g/11g Database
SQL> startup mount restrict
ORACLE instance started.
Total System Global Area 6413680640 bytes
Fixed Size 2240344 bytes
Variable Size 452985000 bytes
Database Buffers 5939134464 bytes
Redo Buffers 19320832 bytes
Database mounted.
SQL> select name from v$database;
SQL> drop database;
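Putting the pieces together, a minimal sketch of the whole session. DROP DATABASE removes the datafiles, online redo logs and control files known to the instance, but not archived logs or backups, so double-check the database name first:
SQL> shutdown immediate
SQL> startup mount restrict
SQL> select name, open_mode from v$database;   -- make sure this is the right database
SQL> drop database;
-- archived logs and RMAN backups are not removed; clean them up separately if required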
Saturday, 16 November 2013
ORA-00997: illegal use of LONG datatype
While moving a table to another tablespace I encountered the ORA-00997 error (the table contains a LONG column, which ALTER TABLE ... MOVE cannot relocate). After searching on My Oracle Support (Metalink) I found the solution below:
Error:
alter table VIVEK.IMAGES move tablespace APPS_TS_TX_DATA;
ORA-00997: illegal use of LONG datatype
Solution:
CREATE OR REPLACE DIRECTORY dmpdir AS '/d01/vivek/migration/exp';
GRANT READ, WRITE ON DIRECTORY dmpdir TO vivek;
expdp system/manager directory=dmpdir dumpfile=image_tab.dmp logfile=image_tab.log TABLES=VIVEK.IMAGES
DROP TABLE VIVEK.IMAGES CASCADE CONSTRAINTS;
impdp system/manager directory=dmpdir dumpfile=image_tab.dmp logfile=IMP_image_tab.log REMAP_SCHEMA=VIVEK:VIVEK REMAP_TABLESPACE=SYSTEM:APPS_TS_TX_DATA
Moving Table to Another Tablespace Fails with ORA-00997 [ID 165901.1]
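Before choosing the export/import route it can help to confirm which tables in the schema still have LONG columns, since those are the ones ALTER TABLE ... MOVE cannot handle; a simple dictionary query (the schema name is the example one used above):
select owner, table_name, column_name
  from dba_tab_columns
 where data_type in ('LONG', 'LONG RAW')
   and owner = 'VIVEK'
 order by table_name, column_name;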
Saturday, 26 October 2013
ORA-29871: invalid alter option for a domain index
ORA-29871 error while rebuilding the Domain Index:
SQL> alter index HR.PER_EMPDIR_PEOPLE_N1 rebuild TABLESPACE APPS_TS_TX_DATA;
alter index HR.PER_EMPDIR_PEOPLE_N1 rebuild TABLESPACE APPS_TS_TX_DATA
*
ERROR at line 1:
ORA-29871: invalid alter option for a domain index
Solution:
1. DROP INDEX "HR"."PER_EMPDIR_PEOPLE_N1";
2. Commit;
After migration, recreate the index. A CONTEXT domain index does not take a TABLESPACE clause directly; its storage is supplied through an Oracle Text storage preference in the PARAMETERS clause (EMPDIR_STORAGE below is an illustrative preference name; see the sketch at the end of this post for how to create it):
3. CREATE INDEX "HR"."PER_EMPDIR_PEOPLE_N1" ON "HR"."PER_EMPDIR_PEOPLE" ("PERSON_KEY") INDEXTYPE IS "CTXSYS"."CONTEXT" PARAMETERS ('storage EMPDIR_STORAGE');
4. Commit and then run the query:
select object_name, object_type, status from dba_objects where object_name = 'PER_EMPDIR_PEOPLE_N1';
Check whether the status of the index is valid.
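As referenced above, a hedged sketch of creating the illustrative EMPDIR_STORAGE preference so the recreated CONTEXT index lands in APPS_TS_TX_IDX (the attribute names are the standard BASIC_STORAGE ones; the preference name is an example):
-- create a storage preference pointing at the target tablespace
exec ctx_ddl.create_preference('EMPDIR_STORAGE','BASIC_STORAGE');
exec ctx_ddl.set_attribute('EMPDIR_STORAGE','I_TABLE_CLAUSE','tablespace APPS_TS_TX_IDX');
exec ctx_ddl.set_attribute('EMPDIR_STORAGE','K_TABLE_CLAUSE','tablespace APPS_TS_TX_IDX');
exec ctx_ddl.set_attribute('EMPDIR_STORAGE','R_TABLE_CLAUSE','tablespace APPS_TS_TX_IDX');
exec ctx_ddl.set_attribute('EMPDIR_STORAGE','I_INDEX_CLAUSE','tablespace APPS_TS_TX_IDX');
-- then recreate the index with PARAMETERS ('storage EMPDIR_STORAGE') as shown above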
Saturday, 7 September 2013
How to Compile JSP in 11i
1) Shutdown the Application Services
2) Compile jsp
cd $COMMON_TOP
cp -rP _pages _pages_bkp
cd $FND_TOP/patch/115/bin
perl ojspCompile.pl --compile --flush -p 2
logfile set: /data/inst/apps/TEST_our12-test/logs/appl/rgf/ojsp/ojspc_error.log
starting...(compiling all)
using 10i internal ojsp ver: 10
synchronizing dependency file:
loading deplist...8088
enumerating jsps...8088
updating dependency...0
initializing compilation:
eliminating children...6005 (-2083)
translating and compiling:
translating jsps...6005/6005 [failed: 1] in 1m32s
compiling jsps...6004/6004 in 4m25s
Finished!
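If the run reports failed translations (one here), the quickest check is the error log printed at the start of the run, for example:
$ grep -i -B 2 -A 5 "error" /data/inst/apps/TEST_our12-test/logs/appl/rgf/ojsp/ojspc_error.log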
3) Start the Application Services
File System Space Requirements for R12 Standard Installation
Minimum disk space requirements for an R12 installation:
Applications node file system (includes OracleAS 10.1.2 and OracleAS 10.1.3 Oracle Homes, COMMON_TOP, APPL_TOP, and INST_TOP): 35 GB (50 GB on HP-UX Itanium)
Database node file system (Fresh install) : 55 GB (65 GB on HP-UX Itanium)
Database node file system (Vision Demo database) : 208 GB (210 GB on HP-UX Itanium)
Sunday, 30 June 2013
Concurrent Managers Status shows “System Hold, Fix Manager”
Concurrent Manager status shows "System Hold, Fix Manager" in the Concurrent -> Manager -> Administer form
Solution:
a) Ensure the 'Concurrent: GSM Enabled' profile is set to 'Y'
b) Run $FND_TOP/patch/115/sql/afdcm037.sql
c) Go to $FND_TOP/bin and relink:
adrelink.sh force=y "fnd FNDLIBR"
adrelink.sh force=y "fnd FNDSM"
adrelink.sh force=y "fnd FNDFS"
adrelink.sh force=y "fnd FNDCRM"
d) Run cmclean.sql
e) Start Application Service (adstrtal.sh)
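To check the 'Concurrent: GSM Enabled' profile from SQL before relinking (step a), a quick sketch; CONC_GSM_ENABLED is usually the internal name of that profile option:
SQL> select fnd_profile.value('CONC_GSM_ENABLED') from dual;
-- expected value: Y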
Saturday, 29 June 2013
Oracle SGA - System Global Area
- The SGA is dynamic and sized using SGA_MAX_SIZE
- SGA memory is allocated and tracked in granules by the SGA components
– Contiguous virtual memory allocation
– Size based on SGA_MAX_SIZE
The SGA contains the following primary memory structures:
– Shared pool
– Database buffer cache
– Redo log buffer
There are two optional memory structures that can be configured within the SGA:
– Large pool
– Java pool
Shared Pool
The shared pool environment contains both fixed and variable structures. The fixed structures remain relatively the same size, whereas the variable structures grow and shrink based on user and program requirements. The actual sizing for the fixed and variable structures is based on an initialization parameter and the work of an Oracle internal algorithm.
It consists of two key performance-related memory structures:
– Library cache
– Data dictionary cache
a) Library Cache
The library cache size is based on the sizing defined for the shared pool. Memory is allocated when a statement is parsed or a program unit is called. If the size of the shared pool is too small, statements are continually reloaded into the library cache, which affects performance.
The library cache is managed by a least recently used (LRU) algorithm. As the cache fills, less recently used execution paths and parse trees are removed from the library cache to make room for the new entries. If the SQL or PL/SQL statements are not reused, they eventually are aged out.
The library cache consists of two structures:
• Shared SQL: The Shared SQL stores and shares the execution plan and parse tree for SQL statements run against the database. The second time that an identical SQL statement is run, it is able to take advantage of the parse information available in the shared SQL to expedite its execution. To ensure that SQL statements use a shared SQL area whenever possible, the text, schema, and bind variables must be exactly the same.
• Shared PL/SQL: The shared PL/SQL area stores and shares the most recently executed
PL/SQL statements. Parsed and compiled program units and procedures (functions,
packages, and triggers) are stored in this area.
b) Data Dictionary Cache
The data dictionary cache is also referred to as the dictionary cache or row cache. Caching data dictionary information into memory improves response time. Information about the database (user account data, data file names, segment names, extent locations, table descriptions, and user privileges) is stored in the data dictionary tables. When this information is needed by the database, the data dictionary tables are read, and the data that is returned is stored in the data dictionary cache.
Database Buffer Cache
When a query is processed, the Oracle server process looks in the database buffer cache for any blocks it needs. If the block is not found in the database buffer cache, the server process reads the block from the data file and places a copy in the database buffer cache. Because subsequent requests for the same block may find the block in memory, the requests may not require physical reads. The Oracle server uses a least recently used algorithm to age out buffers that have not been accessed recently to make room for new blocks in the database buffer cache.
Redo Log Buffer Cache
The redo log buffer cache is a circular buffer that contains changes made to data file blocks. This information is stored in redo entries. Redo entries contain the information necessary to re-create the data prior to the change made by INSERT, UPDATE, DELETE, CREATE, ALTER, or DROP operations.
Large Pool
When users connect through the shared server, Oracle needs to allocate additional space in the shared pool for storing information about the connections between the user processes, dispatchers, and servers. The large pool relieves the burden on areas within the shared pool. The shared pool does not have to give up memory for caching SQL parse trees in favor of shared server session information, I/O, and backup and recovery processes. The performance gain comes from reducing the overhead of growing and shrinking the shared SQL cache.
Java Pool
The Java pool is an optional setting but is required if installing and using Java. Its size is set, in bytes, using the JAVA_POOL_SIZE parameter.
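To see how these areas are actually sized on a running instance, V$SGAINFO (10g and later) gives a quick breakdown; for example:
SQL> select name, round(bytes/1024/1024) as mb, resizeable from v$sgainfo order by bytes desc;
SQL> show parameter sga_max_size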
Saturday, 22 June 2013
Oracle Background Processes
There are many Oracle background processes. A DBA should know at least the mandatory background processes, i.e. DBWR, LGWR, SMON, PMON, and CKPT.
Below are the Optional background processes
ARCn, LMON, Snnn, QMNn, LMDn, CJQ0, Pnnn, LCKn, Dnnn
Let's look at the mandatory background processes in detail:
Database Writer (DBWR)
The server process records changes to rollback and data blocks in the buffer cache. Database
Writer (DBWn) writes the dirty buffers from the database buffer cache to the data files. It
ensures that a sufficient number of free buffers—buffers that can be overwritten when server
processes need to read in blocks from the data files—are available in the database buffer
cache. Database performance is improved because server processes make changes only in the
buffer cache.
DBWn defers writing to the data files until one of the following events occurs:
• Incremental or normal checkpoint
• The number of dirty buffers reaches a threshold value
• A process scans a specified number of blocks when scanning for free buffers and cannot find any.
• Timeout occurs.
• A ping request is made in a Real Application Clusters environment.
• A normal or temporary tablespace is placed offline.
• A tablespace is placed in read-only mode.
• A table is dropped or truncated.
• ALTER TABLESPACE tablespace_name BEGIN BACKUP is issued.
Log Writer (LGWR)
LGWR performs sequential writes from the redo log buffer cache to the redo log file under the following situations:
• When a transaction commits
• When the redo log buffer cache is one-third full
• When there is more than a megabyte of change records in the redo log buffer cache
• Before DBWn writes modified blocks in the database buffer cache to the data files
• Every 3 seconds.
Because the redo is needed for recovery, LGWR confirms the commit only after the redo is written to disk.
LGWR can also call on DBWn to write to the data files.
Note: DBWn does not write to the online redo logs.
System Monitor (SMON)
If the Oracle instance fails, any information in the SGA that has not been written to disk is lost. For example, the failure of the operating system causes an instance failure. After the loss
of the instance, the background process SMON automatically performs instance recovery when the database is reopened. Instance recovery consists of the following steps:
1. Rolling forward to recover data that has not been recorded in the data files but that has been recorded in the online redo log. This data has not been written to disk because of
the loss of the SGA during instance failure. During this process, SMON reads the redo log files and applies the changes recorded in the redo log to the data blocks. Because all
committed transactions have been written to the redo logs, this process completely recovers these transactions.
2. Opening the database so that users can log on. Any data that is not locked by unrecovered transactions is immediately available.
3. Rolling back uncommitted transactions. They are rolled back by SMON or by the individual server processes as they access locked data.
SMON also performs some space maintenance functions:
• It combines, or coalesces, adjacent areas of free space in the data files.
• It deallocates temporary segments to return them as free space in data files. Temporary segments are used to store data during SQL statement processing.
Process Monitor (PMON)
The background process PMON cleans up after failed processes by:
• Rolling back the user's current transaction
• Releasing all currently held table or row locks
• Freeing other resources currently reserved by the user
• Restarting dead dispatchers
Checkpoint (CKPT)
An event called a checkpoint occurs when the Oracle background process DBWn writes all the modified database buffers in the SGA, including both committed and uncommitted data, to the data files.
Checkpoints are implemented for the following reasons:
• Checkpoints ensure that data blocks in memory that change frequently are written to data files regularly. Because of the least recently used algorithm of DBWn, a data block that changes frequently might never qualify as the least recently used block and thus might never be written to disk if checkpoints did not occur.
• Because all database changes up to the checkpoint have been recorded in the data files, redo log entries before the checkpoint no longer need to be applied to the data files if instance recovery is required. Therefore, checkpoints are useful because they can expedite instance recovery.
At a checkpoint, the following information is written:
• Checkpoint number into the data file headers
• Checkpoint number, log sequence number, archived log names, and system change numbers into the control file.
CKPT does not write data blocks to disk or redo blocks to the online redo logs.
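You can list the background processes actually running on your instance from the data dictionary; a quick check:
SQL> select name, description from v$bgprocess where paddr <> '00' order by name;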
Saturday, 8 June 2013
startup oracle database instance concepts
startup NOMOUNT: During startup nomount the instance is started. Oracle creates the SGA and starts the background processes, but does not provide access to the database.
startup MOUNT: During startup mount the control file is opened for this instance. The database is mounted for certain DBA activities, but user access to the database is not yet available.
alter database OPEN: During open, all the files described by the control file are opened for this instance, enabling users to access the database.
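The stages can also be walked through one at a time, checking the instance status between each step; a minimal sketch:
SQL> startup nomount
SQL> select status from v$instance;        -- STARTED
SQL> alter database mount;
SQL> select status from v$instance;        -- MOUNTED
SQL> alter database open;
SQL> select status from v$instance;        -- OPEN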
Saturday, 1 June 2013
Manage Oracle tablespaces
Here are the commands to Create Tablespace, Resize datafile, tablespace offline/online/read only, drop tablespace
Create tablespace:
CREATE TABLESPACE TEST DATAFILE '/u01/oradata/test01.dbf' SIZE 500m;
Create Undo tablespace:
CREATE UNDO TABLESPACE undo1 DATAFILE '/u01/oradata/undo01.dbf' SIZE 500m;
Create Temporary tablespace:
CREATE TEMPORARY TABLESPACE temp TEMPFILE '/u01/oradata/temp01.dbf' SIZE 500m;
Set Temp tablespace as default tablespace:
ALTER DATABASE DEFAULT TEMPORARY TABLESPACE temp;
Take a tablespace offline:
ALTER TABLESPACE TEST OFFLINE;
Take a tablespace online:
ALTER TABLESPACE TEST ONLINE;
Put Tablespace in Read-Only mode:
ALTER TABLESPACE TEST READ ONLY;
Drop tablespace:
DROP TABLESPACE TEST INCLUDING CONTENTS AND DATAFILES;
Enabling Automatic Extension of Data Files
ALTER DATABASE DATAFILE '/u01/oradata/test01.dbf' SIZE 500M AUTOEXTEND ON;
Adding Data Files to a Tablespace:
ALTER TABLESPACE test ADD DATAFILE '/u01/oradata/test02.dbf' SIZE 500M;
Resize Datafile:
alter database datafile '/u01/oradata/test01.dbf' RESIZE 1G;
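After any of these operations, the data dictionary can confirm the result; for example:
SQL> select tablespace_name, status, contents from dba_tablespaces;
SQL> select file_name, bytes/1024/1024 as mb, autoextensible
       from dba_data_files
      where tablespace_name = 'TEST';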
Thursday, 30 May 2013
AUDIT SYS/SYSDBA/SYSOPER
The new parameter AUDIT_SYS_OPERATIONS allows the audit of all statements issued by SYS/SYSDBA/SYSOPER in an OS audit trail file.
Start the database with AUDIT_SYS_OPERATIONS set to TRUE.
SQL> show parameter audit
NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
audit_file_dest string /data/oracle/db/tech_st/11.2.0/rdbms/audit
audit_sys_operations boolean TRUE
Now, do some activity as SYSDBA:
SQL> conn sys/sys as sysdba
Connected.
SQL> create table abc (name varchar2(10));
Table created.
SQL> drop table abc;
Table dropped.
Go to /data/oracle/db/tech_st/11.2.0/rdbms/audit
cat ORCL_ora_17493_1.aud
Tue Apr 29 05:41:51 2014 -04:00
LENGTH : '190'
ACTION :[36] 'create table abc (name varchar2(10))'
DATABASE USER:[1] '/'
PRIVILEGE :[6] 'SYSDBA'
CLIENT USER:[8] 'oraclone'
CLIENT TERMINAL:[5] 'pts/6'
STATUS:[1] '0'
DBID:[9] '761851345'
Tue Apr 29 05:42:00 2014 -04:00
LENGTH : '168'
ACTION :[14] 'drop table abc'
DATABASE USER:[1] '/'
PRIVILEGE :[6] 'SYSDBA'
CLIENT USER:[8] 'oraclone'
CLIENT TERMINAL:[5] 'pts/6'
STATUS:[1] '0'
DBID:[9] '761851345'
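AUDIT_SYS_OPERATIONS is a static parameter, so in a typical spfile setup it is changed with SCOPE=SPFILE and the instance is restarted; a sketch of enabling it:
SQL> alter system set audit_sys_operations = TRUE scope = spfile;
SQL> shutdown immediate
SQL> startup
SQL> show parameter audit_sys_operations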
Saturday, 18 May 2013
Manage Redo Log Groups and Members
Below are examples of how to add redo log groups, add redo log members, drop a redo log group, drop a redo log member, and clear a redo log file.
Adding Online Redo Log Groups
ALTER DATABASE ADD LOGFILE GROUP 5 ('/u01/orcl/data/redo05a.log','/u01/orcl/data/redo05b.log') SIZE 500M;
Adding Online Redo Log Members
ALTER DATABASE ADD LOGFILE MEMBER
'/u01/orcl/data/redo01c.log' TO GROUP 1,
'/u01/orcl/data/redo02c.log' TO GROUP 2,
'/u01/orcl/data/redo03c.log' TO GROUP 3;
Dropping Online Redo Log Groups
Restrictions
• An instance requires at least two groups of online redo log files.
• An active or current group cannot be dropped.
• When an online redo log group is dropped, the operating system files are not deleted.
ALTER DATABASE DROP LOGFILE GROUP 5;
Dropping Online Redo Log Members
Restrictions
• If the member you want to drop is the last valid member of the group, you cannot drop
that member.
• If the group is current, you must force a log file switch before you can drop the
member.
• If the database is running in ARCHIVELOG mode and the log file group to which the
member belongs is not archived, then the member cannot be dropped.
• When an online redo log member is dropped, the operating system file is not deleted.
ALTER DATABASE DROP LOGFILE MEMBER '/u01/orcl/data/redo03c.log';
Clearing online redo log files:
Restrictions
You can clear an online redo log file whether it is archived or not. However, when it is not
archived, you must include the keyword UNARCHIVED. This makes backups unusable if the
online redo log file is needed for recovery.
ALTER DATABASE CLEAR LOGFILE '/u01/orcl/data/redo05a.log';
Information about groups and members can be obtained by querying the data dictionary:
• V$LOG
• V$LOGFILE
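For example, the current group/member layout can be checked before and after these changes with:
SQL> select group#, thread#, members, bytes/1024/1024 as mb, status, archived from v$log;
SQL> select group#, member, type, status from v$logfile order by group#;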
Saturday, 11 May 2013
SHUTDOWN Database Concepts
Shutdown Options:
a) Shutdown normal
b) Shutdown transactional
c) Shutdown immediate
d) Shutdown abort
Here are the differences between shutdown normal, transactional, immediate, and abort:
Shutdown Normal
Normal is the default shutdown mode. Normal database shutdown proceeds with the following conditions:
• No new connections can be made.
• The Oracle server waits for all users to disconnect before completing the shutdown.
• Database and redo buffers are written to disk.
• Background processes are terminated, and the SGA is removed from memory.
• Oracle closes and dismounts the database before shutting down the instance.
• The next startup does not require an instance recovery.
Shutdown Transactional
A transactional shutdown prevents clients from losing work. A transactional database shutdown proceeds with the following conditions:
• No client can start a new transaction on this particular instance.
• A client is disconnected when the client ends the transaction that is in progress.
• When all transactions have finished, the shutdown occurs immediately.
• The next startup does not require an instance recovery.
Shutdown Immediate
Immediate database shutdown proceeds with the following conditions:
• Current SQL statements being processed by Oracle are not completed.
• The Oracle server does not wait for users currently connected to the database to disconnect.
• Oracle rolls back active transactions and disconnects all connected users.
• Oracle closes and dismounts the database before shutting down the instance.
• The next startup does not require an instance recovery.
Shutdown Abort
If the normal and immediate shutdown options do not work, you can abort the current
database instance. Aborting an instance proceeds with the following conditions:
• Current SQL statements being processed by the Oracle server are immediately terminated.
• Oracle does not wait for users currently connected to the database to disconnect.
• Database and redo buffers are not written to disk.
• Uncommitted transactions are not rolled back.
• The instance is terminated without closing the files.
• The database is not closed or dismounted.
• The next startup requires instance recovery, which occurs automatically.
Sunday, 31 March 2013
How to Remove OID References
Steps to Remove OID References
cd $FND_TOP/bin
$ perl txkrun.pl -script=SetSSOReg -removereferences=yes
You are removing all SSO-OID related preferences.
Enter Oracle E-Business apps database user password ?
*** Log File = /data/inst/apps/clone_test/logs/appl/rgf/TX
Beginning input parameter validation for Removing Preferences.
Input parameter validation for Removing Preferences completed.
BEGIN REMOVING PREFERENCES:
Oracle Home Instance preferences removed successfully
Updated APPS_SSO profile with value SSWA successfully.
SSO Preferences Removed successfully.
Removing OID References Started.
Removing OID References Completed Successfully.
Custom DIT Preferences Removed Successfully.
REMOVING PREFERENCES COMPLETED.
End of /data/apps/apps_st/appl/fnd/12.0.0/patch/115/bin/txkSetSSOR
DBMS_SQLPA object is Invalid
Issue:
DBMS_SQLPA object is Invalid.
Findings:
DBMS_SQLPA is a package owned by SYS that contains a static SQL statement querying the plan table, which is expected to have the OTHER_XML column.
The package is invalid because SYS.PLAN_TABLE does not have this column.
Solution:
$ sqlplus / as sysdba
SQL> drop table plan_table;
SQL> @?/rdbms/admin/utlxplan
SQL> @?/rdbms/admin/prvtspao.plb
SQL> @?/rdbms/admin/utlrp.sql
Note: It is suggested to do this during off-peak hours.
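Once utlrp.sql finishes, confirm the package is valid again (and that nothing else is left invalid under SYS); for example:
SQL> select owner, object_name, object_type, status from dba_objects where object_name = 'DBMS_SQLPA';
SQL> select count(*) from dba_objects where owner = 'SYS' and status = 'INVALID';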
Sunday, 17 March 2013
RAC Server Rebooted Abnormally
There are many possible reasons for a server reboot. You need to analyze the log files to diagnose the root cause.
Issue
The server test1 (10.10.10.1) was abnormally rebooted on 26-02-2011.
Fact
Node 1 was evicted by node 2 because connections could not be made from node 2 to node 1.
Node 2 cssd.log
[ CSSD]2011-02-26 03:04:01.745 [10] >TRACE: clssgmClientConnectMsg: Connect from con(6000000000048870) proc(6000000000103df0) pid() proto(10:2:1:1)
[ CSSD]2011-02-26 03:04:32.370 [5] >TRACE: clssnm_skgxncheck: CSS daemon failed on node 0
[ CSSD]2011-02-26 03:04:32.371 [5] >TRACE: clssnmDiscHelper: node test1 (0) connection failed ========>>>>>>>>>>>> cannot connect to node test1
[ CSSD]2011-02-26 03:04:32.707 [16] >TRACE: clssnmDoSyncUpdate: Initiating sync 13
[ CSSD]2011-02-26 03:04:32.707 [16] >TRACE: clssnmDoSyncUpdate: diskTimeout set to (597000)ms
[ CSSD]2011-02-26 03:04:32.707 [16] >TRACE: clssnmSetupAckWait: Ack message type (11)
[ CSSD]2011-02-26 03:04:32.707 [16] >TRACE: clssnmSetupAckWait: node(0) is ALIVE
[ CSSD]2011-02-26 03:04:32.708 [16] >TRACE: clssnmSetupAckWait: node(1) is ALIVE
[ CSSD]2011-02-26 03:04:32.708 [16] >TRACE: clssnmSendSync: syncSeqNo(13)
[ CSSD]2011-02-26 03:04:32.708 [16] >TRACE: clssnmWaitForAcks: Ack message type(11), ackCount(2)
[ CSSD]2011-02-26 03:04:32.708 [9] >TRACE: clssnmHandleSync: Acknowledging sync: src[1] srcName[poiddb02] seq[1] sync[13]
[ CSSD]2011-02-26 03:04:32.708 [16] >TRACE: clssnmWaitForAcks: node(0) is expiring, msg type(11)
[ CSSD]2011-02-26 03:04:32.708 [9] >TRACE: clssnmHandleSync: diskTimeout set to (597000)ms
[ CSSD]2011-02-26 03:04:32.709 [16] >TRACE: clssnmWaitForAcks: done, msg type(11)
[ CSSD]2011-02-26 03:04:32.709 [16] >TRACE: clssnmDoSyncUpdate: Terminating node 0, test1, misstime(21531) state(3)
[ CSSD]2011-02-26 03:04:32.709 [16] >TRACE: clssnmSetupAckWait: Ack message type (13)
[ CSSD]2011-02-26 03:04:32.709 [1] >USER: NMEVENT_SUSPEND [00][00][00][03]
[ CSSD]2011-02-26 03:04:32.709 [16] >TRACE: clssnmSetupAckWait: node(1) is ACTIVE
[ CSSD]2011-02-26 03:04:32.709 [16] >TRACE: clssnmSendVote: syncSeqNo(13)
[ CSSD]2011-02-26 03:04:32.710 [16] >TRACE: clssnmWaitForAcks: Ack message type(13), ackCount(1)
[ CSSD]2011-02-26 03:04:32.710 [9] >TRACE: clssnmSendVoteInfo: node(1) syncSeqNo(13)
[ CSSD]2011-02-26 03:04:32.711 [16] >TRACE: clssnmWaitForAcks: done, msg type(13)
[ CSSD]2011-02-26 03:04:32.711 [16] >TRACE: clssnmCheckDskInfo: Checking disk info...
[ CSSD]2011-02-26 03:04:32.712 [16] >TRACE: clssnmEvict: Start =====================>>>>>>>>>>>>>>>>>>>> Node 2 evicts node 1
[ CSSD]2011-02-26 03:04:32.712 [16] >TRACE: clssnmWaitOnEvictions: Start
[ CSSD]2011-02-26 03:04:32.712 [16] >TRACE: clssnmWaitOnEvictions: Node(0) down, LATS(2088370148),timeout(21643)
[ CSSD]2011-02-26 03:04:32.712 [16] >TRACE: clssnmSetupAckWait: Ack message type (15)
[ CSSD]2011-02-26 03:04:32.712 [16] >TRACE: clssnmSetupAckWait: node(1) is ACTIVE
[ CSSD]2011-02-26 03:04:32.712 [16] >TRACE: clssnmSendUpdate: syncSeqNo(13)
[ CSSD]2011-02-26 03:04:32.713 [16] >TRACE: clssnmWaitForAcks: Ack message type(15), ackCount(1)
[ CSSD]2011-02-26 03:04:32.713 [9] >TRACE: clssnmUpdateNodeState: node 0, state (0/0) unique (1293953635/1293953635) prevConuni(1293953635) birth (10/0) (old/new)
[ CSSD]2011-02-26 03:04:32.713 [9] >TRACE: clssnmDeactivateNode: node 0 (test1) left cluster
Findings
When two systems have access to shared storage, the integrity of the data depends on the systems communicating through heartbeats over the private interconnect. When the private links are lost or fail, each system thinks the other has exited the cluster; each then tries to become the master or form a sub-cluster and claim exclusive access to the shared storage.
To avoid such a tricky and undesirable situation, the basic approach is STOMITH (Shoot The Other Machine In The Head) fencing. In STOMITH systems, the errant cluster node is simply reset and forced to reboot.
Solution
Check with the system and network administrators for network errors around the incident time.
Saturday, 2 March 2013
ORA-00604 ORA-01555 ORA-06512
Error While Converting Dictionary Managed Tablespace to Locally Managed Tablespace
SQL> EXECUTE DBMS_SPACE_ADMIN.TABLESPACE_MIGRATE_TO_LOCAL('TABLESPACE_NAME');
BEGIN DBMS_SPACE_ADMIN.TABLESPACE_MIGRATE_TO_LOCAL('TABLESPACE_NAME'); END;
*
ERROR at line 1:
ORA-00604: error occurred at recursive SQL level 1
ORA-01555: snapshot too old: rollback segment number 0 with name "SYSTEM" too
small
ORA-06512: at "SYS.DBMS_SPACE_ADMIN", line 227
ORA-06512: at line 1
Solution:
ORA-604 & ORA-1555 Rollback Segment 0 With Name "System" Too small (Doc ID 862469.1)
The SYSTEM rollback segment has no undo retention policy and is usually not configured to be very large, so it is possible to run out of space in it and hence raise an ORA-1555 error.
set echo on
set feedback on
alter system set "_smu_debug_mode"=4;
drop table helper_for_1555;
create table helper_for_1555 (col1 varchar2(4000));
insert into helper_for_1555 values (rpad('a', 3999));
declare
a number;
begin
for a in 1..18 loop
insert /*+ APPEND */ into helper_for_1555 select * from helper_for_1555;
commit;
end loop;
end;
/
rem alter SYSTEM rbseg's storage params
alter rollback segment system storage (next 256M);
alter rollback segment system storage (optimal 1024M);
select segment_name, blocks, bytes, extents from dba_segments where segment_type='ROLLBACK';
rem bloat the segment
alter system set "_in_memory_undo"=false;
set transaction use rollback segment SYSTEM;
delete from helper_for_1555;
rollback;
select segment_name, blocks, bytes, extents from dba_segments where segment_type='ROLLBACK';
alter system set "_smu_debug_mode"=0;
alter system set "_in_memory_undo"=true;
Now again try to convert the tablespace
SQL> EXECUTE DBMS_SPACE_ADMIN.TABLESPACE_MIGRATE_TO_LOCAL('TABLESPACE_NAME');
Saturday, 23 February 2013
How To Change A Dictionary Managed Tablespace To A Locally Managed Tablespace
High Level Steps for Migration of Dictionary managed tablespaces to locally managed tablespaces:
1) Migrate all other tablespaces to local first
EXECUTE DBMS_SPACE_ADMIN.TABLESPACE_MIGRATE_TO_LOCAL('TABLESPACE_NAME1');
EXECUTE DBMS_SPACE_ADMIN.TABLESPACE_MIGRATE_TO_LOCAL('TABLESPACE_NAME2');
2) Check that SYSTEM is not the default temporary tablespace
3) If TEMP is dictionary managed, drop and recreate the TEMP tablespace
4) Put the database in restricted mode
5) Take the SYSAUX tablespace offline
6) Put all other tablespaces in read-only mode (except SYSTEM, TEMP and UNDO)
7) Migrate SYSTEM to local
EXECUTE DBMS_SPACE_ADMIN.TABLESPACE_MIGRATE_TO_LOCAL('SYSTEM');
8) Disable restricted session
9) Bring SYSAUX back online
10) Revert all tablespaces to read-write
How To Change A Dictionary Managed Tablespace To A Locally Managed Tablespace? [ID 735978.1]
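Before and after the migration, the extent management of each tablespace can be checked with a simple dictionary query:
SQL> select tablespace_name, extent_management, allocation_type, contents
       from dba_tablespaces
      order by tablespace_name;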
Sunday, 17 February 2013
Internal Concurrent Manager status could not be determined
There can be a scenario where, while running
adcmctl.sh status apps/<apps password>
the output is:
$ adcmctl.sh status apps/password
You are running adcmctl.sh version 115.28
Internal Concurrent Manager status could not be determined.
adcmctl.sh: exiting with status 0
Solution:
1. We can run cmclean.sql script.
As suggested by Oracle Note 134007.1 (CMCLEAN.SQL - Non Destructive Script to Clean Concurrent Manager Tables): shut down the Concurrent Managers, run the script (make sure you issue a commit once the script is done), start the Concurrent Managers, and re-check the issue.
2. If the issue still persists after running cmclean.sql, bring down the application tier services and run AutoConfig, which regenerates adcmctl.sh. After AutoConfig completes successfully, bring the application services back up and re-check the issue.
adcmctl.Sh Script Is Unable To Determine The ICM Status [ID 1320217.1]
Sunday, 13 January 2013
adapcctl.sh: exiting with status 150
In an E-Business Suite Release 12 environment, while starting the middle-tier services using adstrtal.sh, the error below occurs:
Executing service control script:
/oracle/apps/VIS/inst/apps/<CONTEXT>/admin/scripts/adapcctl.sh start
Timeout specified in context file: 100 second(s)
Script returned:
****************************************************
You are running adapcctl.sh version 120.7.12010000.2
Starting OPMN managed Oracle HTTP Server (OHS) instance ...
adapcctl.sh: exiting with status 150
adapcctl.sh: check the logfile /oracle/apps/VIS/inst/apps/<CONTEXT>/logs/appl/admin/log/adapcctl.txt for more information ...
.end std out.
.end err out.
****************************************************
The same error occurs for adoacorectl.sh, adoafmctl.sh, and adformsctl.sh.
CAUSE:
There is a state file called .opmndat at /oracle/apps/VIS/inst/apps/<CONTEXT>/ora/10.1.3/opmn/logs/states that OPMN normally creates or updates when the services are started.
For some reason this file was not updated when the services were started, leaving stale state behind.
SOLUTION:
1. Shut down all middle-tier services and ensure no defunct processes remain by running the following from the operating system:
# ps -ef | grep <applmgr>
If one finds any, kill these processes.
2. Navigate to $INST_TOP/ora/10.1.3/opmn/logs/states directory. It contains hidden file .opmndat:
# ls -lrt .opmndat
3. Delete this file .opmndat after making a backup of it:
# rm .opmndat
4. Restart the services.
5. Re-test the issue.
Saturday, 12 January 2013
Steps to perform switchover in physical standby
Steps to switch over the standby database to primary and the primary database to standby in Oracle 10g
1. Shutdown the primary database.
SQL> shutdown immediate
2. Shutdown the standby database.
SQL> shutdown immediate
3. Startup the standby database.
SQL> startup nomount
SQL> alter database mount standby database;
4. Startup the primary database.
SQL> startup
5. Switch the current log (On primary).
SQL> alter system archive log current;
6. Start the managed recovery operation (On standby).
SQL> recover managed standby database disconnect from session;
7. Check the MRP process (On standby).
SQL> select process, status from v$managed_standby;
8. If the MRP process exists, stop it (On standby).
SQL> alter database recover managed standby database cancel;
9. Stop and start the listener (On both).
$ lsnrctl stop
$ lsnrctl start
10. Shutdown the primary database.
SQL> shutdown immediate
11. Shutdown the standby database.
SQL> shutdown immediate
12. Startup the standby database.
SQL> startup nomount
SQL> alter database mount standby database;
13. Startup the primary database.
SQL> startup
14. Check the switchover status (On primary).
SQL> select switchover_status from v$database;
It should be TO STANDBY.
15. Now run these commands (On primary).
SQL> alter database commit to switchover to standby with session shutdown;
SQL> alter database commit to switchover to physical standby;
SQL> shutdown immediate
SQL> startup nomount
SQL> alter database mount standby database;
SQL> alter system set log_archive_dest_state_2=defer;
SQL> recover managed standby database disconnect from session;
Now the primary has been converted into a standby.
16. Check the switchover status (On old standby).
SQL> select switchover_status from v$database;
It should be SWITCHOVER PENDING.
17. Run the command (On old standby).
SQL> alter database commit to switchover to primary;
18. Shutdown and restart the database (On old standby).
SQL> shutdown immediate
SQL> startup
SQL> alter database force logging;
SQL> alter system set log_archive_dest_state_2=enable;
Now the old standby has been converted into the primary database.
NOTE: To check whether data from the new primary is being applied on the new standby, use these steps:
1. Create a test table (On new primary).
SQL> create table test(id number(20));
2. Switch the log to send the redo data to the standby database (On new primary).
SQL> alter system switch logfile;
3. Start managed recovery, if necessary, and apply the archived redo logs by entering the following SQL statement (On new standby):
SQL> alter database recover managed standby database;
4. Cancel managed recovery operations.
SQL> alter database recover managed standby database cancel;
This command will stop the MRP process. Now open the database in read-only mode.
SQL> alter database open read only;
5. After that, if you want the standby database back in managed recovery mode, first shut down and start up the standby database.
SQL> shutdown immediate
SQL> startup nomount
SQL> alter database mount standby database;
6. Start the managed recovery operation on the standby database.
SQL> recover managed standby database disconnect from session;
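A simple way to confirm that redo from the new primary is reaching and being applied on the new standby is to compare archived log sequences on both sides; a sketch:
-- On the new primary
SQL> select max(sequence#) from v$archived_log;
-- On the new standby
SQL> select max(sequence#) from v$archived_log where applied = 'YES';
SQL> select process, status, sequence# from v$managed_standby;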
What Happens During a Hot Backup
What happens when we put an Oracle database in hot backup mode
The BEGIN BACKUP command checkpoints the datafiles and advances the datafile header checkpoints to the begin-backup checkpoint SCN. Until the END BACKUP command is executed, checkpoints cease to update the datafile headers. It also starts the logging of block images: each time a block is read into the cache, an image of the whole block before the first change is logged.
The END BACKUP command creates a redo record containing the begin-backup checkpoint SCN. It stops the logging of block images and causes the datafile checkpoints to be advanced to the database checkpoint.
Datafile updates by DBWR go on as usual while the backup is being copied, so the backup gets a "fuzzy" copy of the datafile. Some blocks may be ahead in time versus other blocks, and some may contain updates by transactions that are later rolled back. The "fuzzy" backup copy is unusable without the "focusing" via the redo log that occurs when the backup is restored and undergoes media recovery. Media recovery applies redo (from all threads) from the checkpoint SCN that was captured at BEGIN BACKUP time through the end-point of the recovery operation.
Once redo application is finished, uncommitted transactions are rolled back to leave a transaction-consistent, "focused" version of the datafile.
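As a concrete illustration, a user-managed hot backup of a single tablespace looks roughly like this (the tablespace and paths are examples); V$BACKUP shows which datafiles are currently in backup mode:
SQL> alter tablespace users begin backup;
SQL> select file#, status from v$backup;            -- ACTIVE while in backup mode
$ cp /u01/oradata/users01.dbf /backup/users01.dbf   -- OS-level copy of the datafile
SQL> alter tablespace users end backup;
SQL> alter system archive log current;              -- ensure the covering redo is archived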