When talking about performance tuning, I often tell developers that they should use Oracle SQL*Trace and TKPROF, because that is an absolutely reliable way to find out how their SQL executed and how long it took, and hence where they should focus their attention to improve performance. However, this requires access to the trace files in the USER_DUMP_DEST directory on the database server, something that database administrators are justifiably reluctant to hand out. When I am on a customer site, I often run into the same difficulty.
Even if developers can log on to a development database server, on Unix servers the trace files are only readable by members of the DBA group unless the hidden initialisation parameter _trace_file_public is set to true.
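For reference, a DBA could open the trace files up with something like this (a sketch only: _trace_file_public is an undocumented parameter, it is static so it requires an instance restart, and it should not be set on a production system without advice from Oracle Support):

ALTER SYSTEM SET "_trace_file_public" = TRUE SCOPE=SPFILE;
SHUTDOWN IMMEDIATE
STARTUP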
Another option is to retrieve trace files from the user_dump_dest directory via an external table. The idea is not mine: I have seen the technique demonstrated by Tanel Poder, there is an article by Jared Still that shows how to retrieve the alert log, and there is also a posting on Ask Tom that uses utl_file_dir. However, I couldn't find a script that does the same for trace files, so I wrote one.
The script assumes that the trace file was created by the current session. First, it creates a database directory that maps to the USER_DUMP_DEST directory, and an external table that corresponds to the trace file. You can then simply query the trace file in SQL*Plus and spool the output to a local file.
Of course, this also shows that external tables can be a significant security risk. The privileges to create directories and to read from them should be carefully controlled.
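For example, rather than handing out the powerful CREATE ANY DIRECTORY privilege (which is what this script needs in order to create the directory for itself), a DBA might create the directory once and grant read access on just that one object (the grantee here is purely illustrative):

GRANT READ ON DIRECTORY user_dump_dest TO scott;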
REM user_dump_dest.sql
REM (c) Go-Faster Consultancy Ltd.
REM 30.11.2006 initial version
REM trace the thing you are interested in.
ALTER SESSION SET tracefile_identifier = 'gfctest';
ALTER SESSION SET sql_trace = true;
SELECT * FROM dual;
ALTER SESSION SET sql_trace = false;
REM determine the path for user_dump_dest and create a database directory
set echo off feedback off verify on timi off
column dir new_value dir format a18
column path new_value path format a60
SELECT name dir, value path
FROM v$parameter
WHERE name = 'user_dump_dest'
/
CREATE OR REPLACE DIRECTORY &dir AS '&path';
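REM on my test database this generates, for example (the path will vary):
REM CREATE OR REPLACE DIRECTORY user_dump_dest AS '/u01/app/oracle/admin/orcl/udump';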
REM determine the name of the trace file from the server process ID, the database name and the tracefile_identifier parameter
column tracefile_name new_value tracefile_name
SELECT LOWER(d.name)||'_ora_'||pr.spid
||DECODE(pa.value,'','','_'||pa.value) tracefile_name
FROM v$parameter pa, v$database d, sys.v_$session s, sys.v_$process pr
,(SELECT sid FROM v$mystat WHERE rownum=1) m
WHERE pa.name = 'tracefile_identifier'
AND s.paddr = pr.addr
AND s.sid = m.sid
/
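REM on a database called ORCL this might produce, for example,
REM orcl_ora_12345_gfctest (the SPID 12345 is illustrative)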
REM create an external table that corresponds to the trace file
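REM (on the first run there is no table to drop, and the DROP fails harmlessly)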
DROP TABLE &tracefile_name;
CREATE TABLE &tracefile_name
(trace_line VARCHAR2(4000))
ORGANIZATION EXTERNAL
(TYPE ORACLE_LOADER DEFAULT DIRECTORY user_dump_dest
ACCESS PARAMETERS (
RECORDS DELIMITED BY NEWLINE NOBADFILE NODISCARDFILE NOLOGFILE
FIELDS MISSING FIELD VALUES ARE NULL
REJECT ROWS WITH ALL NULL FIELDS
(trace_line CHAR(4000)))
LOCATION ('&tracefile_name..trc')
);
REM just query the trace file back to a local spool file in SQL*Plus
set head off pages 0 termout off linesize 4000 trimspool on
spool &tracefile_name..trc
SELECT * FROM &tracefile_name;
spool off
set termout on
DROP TABLE &tracefile_name;
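Having spooled the trace to a local file, you can then profile it with TKPROF in the usual way, for example (the file name will depend upon your database name, server process ID and tracefile_identifier):

tkprof orcl_ora_12345_gfctest.trc orcl_ora_12345_gfctest.prf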