December 21st 2006: Correction to script
When talking about performance tuning, I will often tell developers that they need to use Oracle SQL*Trace and TKPROF because they are an absolutely reliable way to find out how their SQL executed, how long it took, and hence where they should focus their attention to improve performance. However, that requires access to the trace files in the USER_DUMP_DEST directory on the database server, which is something that database administrators are justifiably reluctant to provide. When I am on a customer site, I often run into the same difficulty.
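For context, this is a minimal sketch of the usual workflow (the trace file identifier here is a hypothetical example):

```sql
-- tag the trace file name so it is easier to find in USER_DUMP_DEST
ALTER SESSION SET tracefile_identifier = 'mytest';

-- enable SQL trace for the current session
ALTER SESSION SET sql_trace = TRUE;

-- ... run the statements you want to analyse ...

ALTER SESSION SET sql_trace = FALSE;
```

The resulting trace file then has to be formatted with TKPROF on the database server, for example `tkprof orcl_ora_1234_mytest.trc mytest.prf sort=prsela,exeela,fchela` (hypothetical file names), which is exactly the step that requires access to the server.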
Even if developers can log on to a development database server, trace files on Unix servers are readable only by members of the DBA group unless _trace_file_public is set to true.
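For reference, setting it would look something like this. Note that _trace_file_public is an undocumented parameter (hence the double quotes), it only takes effect after a restart, and it should not be set on production systems without due consideration:

```sql
-- set the undocumented parameter in the spfile; takes effect on restart
ALTER SYSTEM SET "_trace_file_public" = TRUE SCOPE = SPFILE;
```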
Another option is to retrieve trace files from the user_dump_dest directory via an external table. The idea is not mine: I've seen the technique demonstrated by Tanel Poder, there is an article by Jared Still that demonstrates how to retrieve the alert log, and there is also a posting on Ask Tom that uses utl_dir. However, I couldn't find a script that did trace files, so I wrote one.
The script assumes the trace file was created by the current session. The first thing it does is create a database directory that maps to the USER_DUMP_DEST directory, and then an external table that corresponds to the trace file. Then you can just query the trace file in SQL*Plus and spool the output to a local file.
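In outline, the objects the script creates look something like this. This is a sketch, not the script itself: the directory path and trace file name are hypothetical placeholders, and the POSITION(1:4000) field definition is just one way of reading each whole line as a single column:

```sql
-- directory object pointing at USER_DUMP_DEST (hypothetical path)
CREATE OR REPLACE DIRECTORY user_dump_dest AS
  '/u01/app/oracle/admin/ORCL/udump';

-- external table reading the trace file one line per row
CREATE TABLE trace_file
( text VARCHAR2(4000) )
ORGANIZATION EXTERNAL
( TYPE ORACLE_LOADER
  DEFAULT DIRECTORY user_dump_dest
  ACCESS PARAMETERS
  ( RECORDS DELIMITED BY NEWLINE
    NOBADFILE NODISCARDFILE NOLOGFILE
    FIELDS ( text POSITION(1:4000) )
  )
  LOCATION ('orcl_ora_1234.trc')  -- hypothetical trace file name
)
REJECT LIMIT UNLIMITED;
```

The table can be repointed at another trace file with `ALTER TABLE trace_file LOCATION ('<filename>');`, and the contents retrieved in SQL*Plus with `SPOOL`, e.g. `SET PAGESIZE 0; SPOOL local.trc; SELECT text FROM trace_file; SPOOL OFF`.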
Of course, this also shows that external tables are a significant security risk. The privileges to create directories and to read from them should be carefully controlled.