Wednesday 30 November 2011

GATHER SCHEMA STATISTICS JOB FAILED

ISSUE: GATHER SCHEMA STATISTICS JOB FAILED SINCE 09-NOV-2010.

ERROR:
ORA-0000: normal, successful completion
+---------------------------------------------------------------------------+
Start of log messages from FND_FILE
+---------------------------------------------------------------------------+
In GATHER_SCHEMA_STATS , schema_name= ALL percent= 20 degree = 8 internal_flag= NOBACKUP
stats on table AQ$_WF_CONTROL_P is locked
stats on table FND_CP_GSM_IPC_AQTBL is locked
Error #1: ERROR: While GATHER_TABLE_STATS:
object_name=AP.JE_FR_DAS_010***ORA-20001: invalid column name or duplicate columns/column groups/expressions in method_opt***
Error #2: ERROR: While GATHER_TABLE_STATS:
object_name=AP.JE_FR_DAS_010_NEW***ORA-20001: invalid column name or duplicate columns/column groups/expressions in method_opt***
Error #3: ERROR: While GATHER_TABLE_STATS:
object_name=AP.JG_ZZ_SYS_FORMATS_ALL_B***ORA-20001: invalid column name or duplicate columns/column groups/expressions in method_opt***

FINDINGS:
There are duplicate rows in the FND_HISTOGRAM_COLS table for the 'JE_FR_DAS_010', 'JE_FR_DAS_010_NEW',
and 'JG_ZZ_SYS_FORMATS_ALL_B' tables. Because of these duplicates, FND_STATS builds an invalid command
when gathering histogram information, and the gather fails with ORA-20001 errors.
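To see why the duplicate rows break the gather, here is a small Python sketch. It is illustrative only: the exact METHOD_OPT string FND_STATS builds differs, but the mechanism is the same, as one clause entry is generated per FND_HISTOGRAM_COLS row, so a duplicate row yields a duplicate column name, which DBMS_STATS rejects with ORA-20001.

```python
from collections import Counter

def build_method_opt(histogram_cols):
    """Build a METHOD_OPT-style clause, one 'column SIZE n' entry per
    FND_HISTOGRAM_COLS row (simplified illustration, not the real FND_STATS string)."""
    entries = ["{} SIZE {}".format(col, hsize) for col, hsize in histogram_cols]
    return "FOR COLUMNS " + ", ".join(entries)

def duplicate_columns(histogram_cols):
    """Return column names appearing more than once -- the rows that
    make DBMS_STATS raise ORA-20001."""
    counts = Counter(col for col, _ in histogram_cols)
    return sorted(c for c, n in counts.items() if n > 1)

# The duplicated row seen in the findings above:
rows = [("TYPE_ENREG", 254), ("TYPE_ENREG", 254)]
print(build_method_opt(rows))   # the duplicate column appears twice
print(duplicate_columns(rows))
```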

select a.column_name, nvl(a.hsize,254) hsize from FND_HISTOGRAM_COLS a where
table_name = 'JE_FR_DAS_010' order by column_name;

COLUMN_NAME                         HSIZE
------------------------------ ----------
TYPE_ENREG                            254
TYPE_ENREG                            254

2 rows selected.


select a.column_name, nvl(a.hsize,254) hsize from FND_HISTOGRAM_COLS a where
table_name = 'JE_FR_DAS_010_NEW' order by column_name;

COLUMN_NAME                         HSIZE
------------------------------ ----------
TYPE_ENREG                            254
TYPE_ENREG                            254

2 rows selected.

select a.column_name, nvl(a.hsize,254) hsize from FND_HISTOGRAM_COLS a where
table_name = 'JG_ZZ_SYS_FORMATS_ALL_B' order by column_name;

COLUMN_NAME                         HSIZE
------------------------------ ----------
JGZZ_EFT_TYPE                         254
JGZZ_EFT_TYPE                         254

2 rows selected.


SOLUTION:

Find all duplicate and/or obsolete rows in FND_HISTOGRAM_COLS and delete one row of each duplicate pair.
Remember to take a backup of the FND_HISTOGRAM_COLS table before deleting any data.
-- identify duplicate rows

select table_name, column_name, count(*)
from FND_HISTOGRAM_COLS
group by table_name, column_name
having count(*) > 1;
-- Use above results on the following SQL to delete duplicates

delete from FND_HISTOGRAM_COLS where table_name = '&TABLE_NAME' and  column_name = '&COLUMN_NAME' and rownum=1;
-- Use following SQL to delete obsoleted rows

delete from FND_HISTOGRAM_COLS
where (table_name, column_name) in
(
select hc.table_name, hc.column_name
from FND_HISTOGRAM_COLS hc , dba_tab_columns tc
where hc.table_name  ='&TABLE_NAME'
and hc.table_name= tc.table_name (+)
and hc.column_name = tc.column_name (+)
and tc.column_name is null
);
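The keep-one-row-per-pair logic of the DELETE above, sketched in Python for clarity (pure illustration, operating on tuples instead of table rows):

```python
def dedupe(rows):
    """Keep one row per (table_name, column_name), mirroring the
    'delete ... and rownum = 1' approach above."""
    seen = set()
    kept = []
    for table, column, hsize in rows:
        if (table, column) in seen:
            continue   # duplicate: this is the row the DELETE removes
        seen.add((table, column))
        kept.append((table, column, hsize))
    return kept

rows = [
    ("JE_FR_DAS_010", "TYPE_ENREG", 254),
    ("JE_FR_DAS_010", "TYPE_ENREG", 254),
    ("JG_ZZ_SYS_FORMATS_ALL_B", "JGZZ_EFT_TYPE", 254),
    ("JG_ZZ_SYS_FORMATS_ALL_B", "JGZZ_EFT_TYPE", 254),
]
print(dedupe(rows))
```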



Refer: Gather Schema Statistics fails with Ora-20001 errors after 11G database upgrade [ID 781813.1]

Tuesday 29 November 2011

An Orphaned Concurrent Job

Issue: An Orphaned concurrent Job is running since 16-DEC-2010 14:28:21

ebsdbprd1 > @jobs

REQ_ID STA PID        SID  TIME     U_REC USER_DES PGM_NAME

--------- --- -------- ----- ----- --------- -------- ------------------------------------------------

677944 R/R 1208422        #####         0 matgrp   CUS_BANK_RECONCILE_SUMMARY

Findings:

Name: CUS_BANK_RECONCILE_SUMMARY

Request Id: 677944

Application: CUS Application

It was submitted by user: MATGRP

Using responsibility: Cash Management Superuser, Agra

It was submitted on: 16-DEC-10 14:28:21

Arguments (5): 81, 1-OCT-10, 31-OCT-10, 10001,

This request did not finish processing due to an unrecoverable error, and the concurrent manager that was running this request is no longer active.

The Internal Concurrent Manager will mark this request as having completed with an error.

This job may have become unrecoverable during PCP configuration. The log file of the request does not exist.

Logfile: /apps_PROD/inst/apps/ebsdbprd_ifebapp1/logs/appl/conc/log/l677944.req

Output file: /apps_PROD/inst/apps/ebsdbprd_ifebapp1/logs/appl/conc/out/o677944.out

ebsdbprd1 > select REQUEST_ID,REQUEST_DATE,PHASE_CODE,STATUS_CODE,REQUESTED_START_DATE,ACTUAL_START_DATE from fnd_concurrent_requests

where REQUEST_ID=677944;

REQUEST_ID REQUEST_D P S REQUESTED ACTUAL_ST

---------- --------- - - --------- ---------

677944 16-DEC-10 R R 16-DEC-10 16-DEC-10

Solution:

update fnd_concurrent_requests set Phase_code='C',status_code='E' where REQUEST_ID=677944;

commit;
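For reference, the phase/status codes touched by this UPDATE decode roughly as follows. These are the commonly documented FND lookup meanings, listed here from memory as a convenience; verify against the lookups in your own instance.

```python
# Common FND_CONCURRENT_REQUESTS code meanings (verify in your instance).
PHASE = {"P": "Pending", "R": "Running", "C": "Completed", "I": "Inactive"}
STATUS = {"R": "Normal", "E": "Error", "C": "Normal",
          "X": "Terminated", "D": "Cancelled"}

def decode(phase_code, status_code):
    """Translate the single-letter codes stored in the table."""
    return PHASE.get(phase_code, "?"), STATUS.get(status_code, "?")

print(decode("R", "R"))  # the stuck request: Running / Normal
print(decode("C", "E"))  # after the UPDATE: Completed / Error
```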

Monday 28 November 2011

FNDFS and FNDSM

In Oracle 11i/R12, the application (APPS) listener supports two services: FNDFS and FNDSM.


Service "FNDFS" has 1 instance(s).
  Instance "FNDFS", status UNKNOWN, has 1 handler(s) for this service...
Service "FNDSM" has 1 instance(s).
  Instance "FNDSM", status UNKNOWN, has 1 handler(s) for this service...
The command completed successfully

 

FNDFS or the Report Review Agent (RRA) is the default text viewer within Oracle Applications, which allows users to view report output and log files. Report Review Agent is also referred to by the executable FNDFS. The default viewer must be configured correctly before external editors or browsers are used for viewing requests.

FNDSM is the Service Manager. The FNDSM executable is the core component of GSM (Generic Service Management Framework). FNDSM services are started via the APPS listener on all nodes in the application tier.

Sunday 27 November 2011

How to apply patch when adpatch is already running

Sometimes while applying a patch we realize that we forgot to apply a prerequisite patch, or the patch fails mid-way and, after analyzing the issue, we find that another patch is needed to resolve the failure. In either case we can follow these steps:

1. Using the adctrl utility, shutdown the workers.
2. Backup the FND_INSTALL_PROCESSES and AD_DEFERRED_JOBS tables.
3. Backup the .rf9 files located in the $APPL_TOP/admin/<SID>/restart directory.
4. Drop the FND_INSTALL_PROCESSES and AD_DEFERRED_JOBS tables.
5. Apply the new patch. Once this new patch is applied successfully, restart the previously failed patch using the following steps.
6. Restore the backed up .rf9 files.
7. Restore the FND_INSTALL_PROCESSES table.
8. Restore the AD_DEFERRED_JOBS table.
9. Re-create synonyms for APPLSYS.AD_DEFERRED_JOBS and APPLSYS.FND_INSTALL_PROCESSES

conn apps/pwd

- create synonym AD_DEFERRED_JOBS for APPLSYS.AD_DEFERRED_JOBS;
- create synonym FND_INSTALL_PROCESSES for APPLSYS.FND_INSTALL_PROCESSES;
10. Start adpatch, it will resume where it stopped previously.

Saturday 26 November 2011

Reorganize Oracle tables

Here I am sharing the steps to reorganize tables in order to release space.

Tables:

a) EMP

b) DEPT

c) ORDERS

d) DEMO

e) NEWJOIN

Free space required = 10GB

Steps:

1) Take the Backup

  a)Export backup

  b)Full Backup

2) Take the Dump of Tables

select OWNER, SEGMENT_NAME, SEGMENT_TYPE, TABLESPACE_NAME, sum(BYTES/1024/1024) "size MB"
from dba_segments
where SEGMENT_NAME in ('EMP', 'DEPT', 'ORDERS', 'DEMO', 'NEWJOIN')
and owner = 'SCOTT'
group by OWNER, SEGMENT_NAME, SEGMENT_TYPE, TABLESPACE_NAME
order by OWNER, SEGMENT_NAME, SEGMENT_TYPE, TABLESPACE_NAME;

3) Take the Dump of Indexes

select owner, segment_name, segment_type, tablespace_name, BYTES/1024/1024
from dba_segments
where segment_name in (select INDEX_NAME from dba_indexes
                       where table_name in ('EMP', 'DEPT', 'ORDERS', 'DEMO', 'NEWJOIN')
                       and owner = 'SCOTT')
and owner = 'SCOTT'
order by segment_name;

4) Check the available space in the Tablespaces

set pages 200
set lines 200
col n_extent format 999,999,999
ttitle 'Free Space by Tablespace'
col col1 format a28 heading "Tablespace"
col col2 format 999,999,999,999 heading "Bytes(KB)"
col col3 format 999,999,999 heading "Used(KB)"
col col4 format 999,999,999 heading "Free(KB)"
col col5 format 999.9 heading "Free(%)"
break on report
compute sum of col2 col3 col4 FREE_MAX on report
select
-- b.file_id "File #",
b.tablespace_name col1,
b.bytes / 1024 col2,
((b.bytes - sum(nvl(a.bytes,0)))) / 1024 col3,
(sum(nvl(a.bytes,0))) / 1024 col4,
(sum(nvl(a.bytes,0)) / (b.bytes)) * 100 col5,
max(a.bytes)/1024 FREE_MAX, count(*)
from sys.dba_free_space a,
(select tablespace_name, sum(bytes) bytes from sys.dba_data_files
 group by tablespace_name) b
-- where a.file_id(+) = b.file_id
where a.tablespace_name = b.tablespace_name
group by b.tablespace_name, b.bytes
order by 5;
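The arithmetic in the query above, sketched in Python with hypothetical byte counts (used = total - free, percentages relative to total, sizes reported in KB):

```python
def tablespace_report(total_bytes, free_bytes):
    """Mirror the arithmetic of the free-space query above."""
    used = total_bytes - free_bytes
    return {
        "Bytes(KB)": total_bytes / 1024,
        "Used(KB)": used / 1024,
        "Free(KB)": free_bytes / 1024,
        "Free(%)": free_bytes / total_bytes * 100,
    }

# Hypothetical 10 GB tablespace with 4 GB free:
r = tablespace_report(10 * 1024**3, 4 * 1024**3)
print(r["Free(%)"])   # 40.0
```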

SQL> set lines 200

SQL> col file_name for a60

SQL> col tablespace_name for a25

SQL> select file_name,tablespace_name,bytes/1024/1024,autoextensible from dba_data_files where tablespace_name='GLOBAL_RMS';

!bdf | grep /rms

Check the availability of space and manage the space as per the requirement.

 

 

5) Check the status of the objects

SELECT COUNT(*) FROM DBA_OBJECTS;

SELECT COUNT(*) FROM DBA_OBJECTS WHERE STATUS='VALID';

SELECT COUNT(*) FROM DBA_OBJECTS WHERE STATUS='INVALID';

select owner,object_type,status,count(*) from dba_objects where owner='SCOTT' group by owner,object_type,status order by object_type;

SELECT OWNER,OBJECT_NAME,OBJECT_TYPE FROM dba_objects where status='INVALID' AND OWNER='SCOTT';

6) Check the Count of the objects

select count(*) from SCOTT.EMP;

select count(*) from SCOTT.DEPT;

select count(*) from SCOTT.ORDERS;

select count(*) from SCOTT.DEMO;

select count(*) from SCOTT.NEWJOIN;

7) Prepare the Script to move the TABLES

    a) Non-Partition

select 'alter table '||OWNER||'.'||TABLE_NAME||' move tablespace GLOBAL_RMS;'
from dba_tables
where TABLE_NAME in ('EMP', 'DEPT', 'ORDERS', 'DEMO', 'NEWJOIN')
and owner = 'SCOTT' and PARTITIONED = 'NO';

b)Partition

select 'alter table '||table_owner||'.'||table_name||' move partition '||partition_name||' tablespace GLOBAL_RMS;'
from dba_tab_partitions
where table_name in ('EMP', 'DEPT', 'ORDERS', 'DEMO', 'NEWJOIN')
and TABLE_OWNER = 'SCOTT';

Execute the script and generate the spool file.

Check the spool file for errors.
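The DDL generation in step 7a boils down to simple string building; here is a Python sketch of the same idea (table list and tablespace name taken from the example above):

```python
def move_table_ddl(owner, tables, tablespace="GLOBAL_RMS"):
    """Generate the same ALTER TABLE ... MOVE statements that the
    query in step 7a builds with string concatenation."""
    return ["alter table {}.{} move tablespace {};".format(owner, t, tablespace)
            for t in tables]

for stmt in move_table_ddl("SCOTT", ["EMP", "DEPT", "ORDERS", "DEMO", "NEWJOIN"]):
    print(stmt)
```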

8) Prepare the Script to move the Indexes

   a) Non-Partition

select 'alter index '||owner||'.'||index_name||' rebuild tablespace GLOBAL_RMS;'
from dba_indexes
where table_name in ('EMP', 'DEPT', 'ORDERS', 'DEMO', 'NEWJOIN')
and owner = 'SCOTT';

   b)Partition

select 'alter index '||index_owner||'.'||index_name||' rebuild partition '||partition_name||' tablespace GLOBAL_RMS;'
from dba_ind_partitions
where index_name in (select INDEX_NAME from dba_indexes
                     where table_name in ('EMP', 'DEPT', 'ORDERS', 'DEMO', 'NEWJOIN')
                     and owner = 'SCOTT' and PARTITIONED = 'YES');

Execute the script and generate the spool file.

Check the spool file for errors.

9) Confirm the Status

Confirm the status and count of the objects using steps 5 and 6.
If they match the pre-move values, the activity is successful.

10) Rollback Plan

Restore the tables from the export backup.

Friday 25 November 2011

How to backup and restore statistics using dbms_stats

1) Check last analyzed date of scott schema tables

PREPROD>>select table_name,to_char(last_analyzed,'DD-MON-YYYY HH:MI:SS') FROM DBA_TABLES WHERE OWNER='SCOTT';

TABLE_NAME                     TO_CHAR(LAST_ANALYZE
------------------------------ --------------------
SALGRADE                       09-FEB-2009 10:00:04
BONUS                          09-FEB-2009 10:00:04
EMP                            09-FEB-2009 10:00:04
DEPT                           09-FEB-2009 10:00:04

2) Create the stat table in the USERS tablespace

PREPROD>>exec dbms_stats.create_stat_table(ownname => 'SCOTT', stattab => 'stats_bkp_scott', tblspace => 'USERS');

PL/SQL procedure successfully completed.

3) Take the statistics backup of scott schema in stat table

PREPROD>>exec dbms_stats.export_schema_stats(ownname => 'SCOTT', stattab => 'stats_bkp_scott');

PL/SQL procedure successfully completed.

4) Take an export backup of the stats table from the SCOTT schema (the owner of the stats table)

$ exp scott/scott1 file=scott_stat_bkp_09122010.dmp tables=scott.STATS_BKP_SCOTT

Export: Release 10.2.0.4.0 - Production on Thu Dec 9 14:37:42 2010

Copyright (c) 1982, 2007, Oracle.  All rights reserved.

Connected to: Oracle Database 10g Enterprise Edition Release 10.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Export done in US7ASCII character set and AL16UTF16 NCHAR character set
server uses WE8ISO8859P1 character set (possible charset conversion)

About to export specified tables via Conventional Path ...
. . exporting table                STATS_BKP_SCOTT         24 rows exported
Export terminated successfully without warnings.

5) Gather the statistics

14:41:39 PREPROD>>exec dbms_stats.gather_table_stats(ownname=>'SCOTT', tabname=>'DEPT', cascade=>true, method_opt => 'for all indexed columns',granularity =>'all',estimate_percent=> 30,degree=>12);

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.14

14:41:51 PREPROD>>exec dbms_stats.gather_table_stats(ownname=>'SCOTT', tabname=>'EMP', cascade=>true, method_opt => 'for all indexed columns',granularity =>'all',estimate_percent=> 30,degree=>12);

PL/SQL procedure successfully completed.

14:42:58 PREPROD>>exec dbms_stats.gather_table_stats(ownname=>'SCOTT', tabname=>'BONUS', cascade=>true, method_opt => 'for all indexed columns',granularity =>'all',estimate_percent=> 30,degree=>12);

PL/SQL procedure successfully completed.

14:43:19 PREPROD>>exec dbms_stats.gather_table_stats(ownname=>'SCOTT', tabname=>'SALGRADE', cascade=>true, method_opt => 'for all indexed columns',granularity =>'all',estimate_percent=> 30,degree=>12);

PL/SQL procedure successfully completed.

6) Check the last analyzed date of tables

14:43:41 PREPROD>> select table_name,to_char(last_analyzed,'DD-MON-YYYY HH:MI:SS') FROM DBA_TABLES WHERE OWNER='SCOTT';

TABLE_NAME                     TO_CHAR(LAST_ANALYZE
------------------------------ --------------------
STATS_BKP_SCOTT
SALGRADE                       09-DEC-2010 02:43:41
BONUS                          09-DEC-2010 02:42:59
EMP                            09-DEC-2010 02:42:27
DEPT                           09-DEC-2010 02:41:50

7) Import/Revert the statistics of one or two tables from the backup

15:07:22 PREPROD>>exec dbms_stats.import_table_stats(ownname=>'scott', tabname=>'emp', statown=>'scott', stattab=>'stats_bkp_scott', cascade=>true);

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.30

15:10:28 PREPROD>>exec dbms_stats.import_table_stats(ownname=>'scott', tabname=>'dept', statown=>'scott', stattab=>'stats_bkp_scott', cascade=>true);

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.26

8) Check the last analyzed date of the tables

15:120:31 PREPROD>>select table_name,to_char(last_analyzed,'DD-MON-YYYY HH:MI:SS') FROM DBA_TABLES WHERE OWNER='SCOTT';

TABLE_NAME                     TO_CHAR(LAST_ANALYZE
------------------------------ --------------------
STATS_BKP_SCOTT
SALGRADE                       09-DEC-2010 02:43:41
BONUS                          09-DEC-2010 02:42:59
EMP                            09-FEB-2009 10:00:04
DEPT                           09-FEB-2009 10:00:04

Elapsed: 00:00:00.54

9) Revert the statistics of whole schema from the backup

15:40:38 PREPROD>>EXECUTE DBMS_STATS.IMPORT_SCHEMA_STATS ('SCOTT','stats_bkp_scott');

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.34

10) Check the last analyzed date of the tables

15:45:53 PREPROD>>select table_name,to_char(last_analyzed,'DD-MON-YYYY HH:MI:SS') FROM DBA_TABLES WHERE OWNER='SCOTT';

TABLE_NAME                     TO_CHAR(LAST_ANALYZE
------------------------------ --------------------
STATS_BKP_SCOTT
SALGRADE                       09-FEB-2009 10:00:04
BONUS                          09-FEB-2009 10:00:04
EMP                            09-FEB-2009 10:00:04
DEPT                           09-FEB-2009 10:00:04

Elapsed: 00:00:00.27
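A toy Python analogue of the whole workflow (export to a stat table, regather, then restore per table or per schema). This only mimics the bookkeeping, tracking a single value per table, whereas DBMS_STATS stores far more than a date:

```python
import copy

class StatTable:
    """Toy analogue of a DBMS_STATS stat table: export snapshots the
    current stats, import restores them per table or for the schema."""
    def __init__(self):
        self.backup = {}

    def export_schema_stats(self, live_stats):
        self.backup = copy.deepcopy(live_stats)        # step 3: backup

    def import_table_stats(self, live_stats, table):
        live_stats[table] = copy.deepcopy(self.backup[table])  # step 7

    def import_schema_stats(self, live_stats):
        live_stats.update(copy.deepcopy(self.backup))  # step 9

stats = {"EMP": "09-FEB-2009", "DEPT": "09-FEB-2009"}
bkp = StatTable()
bkp.export_schema_stats(stats)                 # backup old stats
stats["EMP"] = stats["DEPT"] = "09-DEC-2010"   # regather (step 5)
bkp.import_table_stats(stats, "EMP")           # revert EMP only (step 7)
print(stats["EMP"], stats["DEPT"])
```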

Thursday 24 November 2011

Enable Supplemental Logging

We can enable supplemental logging using,

ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;

But sometimes enabling supplemental logging runs into issues.


Issue:
Enabling supplemental logging hangs or takes too long to complete.

Cause:
The statement ALTER DATABASE ADD SUPPLEMENTAL LOG DATA waits for a TX lock in shared mode when there is any uncommitted transaction. This is the expected behavior.
You can issue ALTER DATABASE ADD SUPPLEMENTAL LOG DATA while the database is open. However, Oracle Database will invalidate all DML cursors in the cursor cache, which affects performance until the cache is repopulated. Besides, we also need to wait for the completion of all in-flight transactions so that any redo generated after this DDL has the right supplemental logging attributes.

Solution:
1) Wait for the completion of all in-flight transactions,
or
2) In databases with high activity, where there are always active transactions, supplemental logging can be enabled by bouncing the database and running the statement manually:
STARTUP MOUNT
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
ALTER DATABASE OPEN;

Monday 14 November 2011

Scripts to Check table Fragmentation in Oracle

Script 1:
set serveroutput on
declare
TOTAL_BLOCKS number;
TOTAL_BYTES number;
UNUSED_BLOCKS number;
UNUSED_BYTES number;
LAST_USED_EXTENT_FILE_ID number;
LAST_USED_EXTENT_BLOCK_ID number;
LAST_USED_BLOCK number;
begin
dbms_space.unused_space(
'APPLSYS', ---SCHEMA NAME
'FND_CONCURRENT_REQUESTS', ---OBJECT NAME
'TABLE', ---OBJECT TYPE
TOTAL_BLOCKS,
TOTAL_BYTES,
UNUSED_BLOCKS,
UNUSED_BYTES,
LAST_USED_EXTENT_FILE_ID,
LAST_USED_EXTENT_BLOCK_ID,
LAST_USED_BLOCK);
dbms_output.put_line('TOTAL_BLOCKS = '||TOTAL_BLOCKS);
dbms_output.put_line('UNUSED_BLOCKS = '||UNUSED_BLOCKS);
end;
/


Script 2:
set serveroutput on

declare
v_unformatted_blocks number;
v_unformatted_bytes number;
v_fs1_blocks number;
v_fs1_bytes number;
v_fs2_blocks number;
v_fs2_bytes number;
v_fs3_blocks number;
v_fs3_bytes number;
v_fs4_blocks number;
v_fs4_bytes number;
v_full_blocks number;
v_full_bytes number;
begin
dbms_space.space_usage (
'APPLSYS',
'FND_CONCURRENT_REQUESTS',
'TABLE',
v_unformatted_blocks,
v_unformatted_bytes,
v_fs1_blocks,
v_fs1_bytes,
v_fs2_blocks,
v_fs2_bytes,
v_fs3_blocks,
v_fs3_bytes,
v_fs4_blocks,
v_fs4_bytes,
v_full_blocks,
v_full_bytes);
dbms_output.put_line('Unformatted Blocks = '||v_unformatted_blocks);
dbms_output.put_line('Blocks with 00-25% free space = '||v_fs1_blocks);
dbms_output.put_line('Blocks with 26-50% free space = '||v_fs2_blocks);
dbms_output.put_line('Blocks with 51-75% free space = '||v_fs3_blocks);
dbms_output.put_line('Blocks with 76-100% free space = '||v_fs4_blocks);
dbms_output.put_line('Full Blocks = '||v_full_blocks);

end;
/
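One way to turn the numbers these scripts print into a rough fragmentation figure. This formula is my own rule of thumb, not part of DBMS_SPACE: blocks never used (script 1's UNUSED_BLOCKS) plus blocks that are 76-100% free (script 2's v_fs4_blocks), over the total.

```python
def fragmentation_pct(total_blocks, unused_blocks, mostly_free_blocks=0):
    """Rough fragmentation estimate from the dbms_space outputs above:
    unused blocks plus mostly-empty blocks as a share of the total."""
    if total_blocks == 0:
        return 0.0
    return (unused_blocks + mostly_free_blocks) / total_blocks * 100

print(fragmentation_pct(10000, 1500, 500))  # 20.0
```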

Friday 11 November 2011

Script to check tablespace free space

set serverout on
set wrap off
set pages 300
set lines 200
col file_name for a50
col name for a50
col "Percent Used" for a20
col tablespace_name for a60
select t.tablespace,  t.totalspace as " Totalspace(MB)",
round((t.totalspace-nvl(fs.freespace,0)),2) as "Used Space(MB)",
nvl(fs.freespace,0) as "Freespace(MB)",
round(((t.totalspace-nvl(fs.freespace,0))/t.totalspace)*100,2) as "%Used",
round((nvl(fs.freespace,0)/t.totalspace)*100,2) as "% Free"
from
(select round(sum(d.bytes)/(1024*1024)) as totalspace,d.tablespace_name tablespace
from dba_data_files d
group by d.tablespace_name) t,
(select round(sum(f.bytes)/(1024*1024)) as freespace,f.tablespace_name tablespace
from dba_free_space f
group by f.tablespace_name) fs
where t.tablespace=fs.tablespace (+)
order by t.tablespace;

Saturday 5 November 2011

How to create custom concurrent manager

Step by step how to create custom concurrent manager:

1. Navigate to Concurrent / Manager / Define.

2. Manager Field: Custom Manager.

3. Short Name: CUSTOM.

4. Type: Concurrent Manager.

5. Program Library: FNDLIBR.

6. Enter desired cache.

7. Work Shifts: Standard.

8. Enter number of Processes.

9. Provide Specialization Rules (you can include or exclude program, id, user, types or combination).

10. Save.

11. Navigate to Concurrent / Manager / Administer.

12. Activate the Custom Manager.

 

Delete a concurrent manager:

We can disable the manager by unchecking the 'Enabled' checkbox. We can simply terminate the manager and it will not run again unless we reactivate it. Or, if we really want to delete the custom manager, we can query it in the 'Define Manager' form and delete the row, but that is not recommended.

Wednesday 2 November 2011

Display the Physical Memory and Swap Space on UNIX Systems

AIX:
/usr/sbin/lsattr -E -l sys0 -a realmem
/usr/sbin/lsps -s

HP-UX:
grep Physical /var/adm/syslog/syslog.log
/usr/sbin/swapinfo -t

Linux:
cat /proc/meminfo | grep MemTotal
/sbin/swapon -s

Solaris:
/usr/sbin/prtconf | grep "Memory size"
/usr/sbin/swap -s

Tru64:
vmstat -P| grep -i "Total Physical Memory ="
/sbin/swapon -s
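The Linux pair above greps MemTotal out of /proc/meminfo; parsing that output programmatically is a one-liner, sketched here against a sample string:

```python
def mem_total_kb(meminfo_text):
    """Parse the MemTotal line from /proc/meminfo output, as grepped
    by the Linux command above. Returns the value in kB, or None."""
    for line in meminfo_text.splitlines():
        if line.startswith("MemTotal:"):
            return int(line.split()[1])   # second field is the kB value
    return None

sample = "MemTotal:       16326428 kB\nMemFree:         1204512 kB\n"
print(mem_total_kb(sample))  # 16326428
```

On a real Linux box you would feed it `open("/proc/meminfo").read()`.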

Analyze Parameter - Oracle

Degree: When gathering statistics you can parallelize the work using the degree parameter, which makes the procedure run faster. A basic rule is to set degree equal to the number of CPUs (or cores) minus 1.

Estimate_percent: It is not easy to select the best value for the estimate_percent parameter. If you set it too high, it will take a long time to collect the statistics. If you set it too low, you can gather the statistics quickly, but they may well be inaccurate. The value can range from 0 to 100. A rule of thumb: the more uniform the table's data, the smaller the sample size can be; if a table's data is highly skewed, use a higher sample size. Setting the parameter to 100 means the database does no estimation at all. If you think the data is uniformly distributed, even a 1%-2% sample will give you very accurate statistics and save a lot of time. By default the database uses the constant DBMS_STATS.AUTO_SAMPLE_SIZE to determine the best sample size.
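The degree rule of thumb above as a tiny helper (the function name is mine):

```python
import os

def suggested_degree(cpu_count=None):
    """Rule of thumb from above: degree = CPUs (or cores) - 1,
    never dropping below 1."""
    if cpu_count is None:
        cpu_count = os.cpu_count() or 1
    return max(cpu_count - 1, 1)

print(suggested_degree(8))   # 7
print(suggested_degree(1))   # 1
```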