Showing posts with label Performance Metrics.
Wednesday, May 08, 2019
PeopleSoft Administrator Podcast: #183 – Effective Performance Monitoring
I recently recorded a podcast with Dan Iverson and Kyle Benson for the PeopleSoft Administrator Podcast, this time about instrumentation, monitoring the performance of PeopleSoft systems, and Performance Monitor. There is also just a little about cursor sharing.
Friday, March 08, 2019
Effective PeopleSoft Performance Monitoring
This advice note describes how to configure PeopleSoft systems on Oracle so that performance metrics are collected that are useful for performance monitoring and subsequent performance tuning.
Contents
- Oracle RDBMS Instrumentation
- On-Line Component/Page Information
- Application Engine Step Information
- Cobol/nVision Instrumentation
- nVision Layout Instrumentation
- 2nd Database Connection Instrumentation
- PeopleTools Performance Metrics
- Cobol and Application Engine Batch Timings
- PeopleSoft Performance Monitor
Summary of Recommendations
- Set EnableAEMonitoring=1 in all Application Server and process scheduler domains in order to enable PeopleSoft instrumentation on-line, in the integration broker, and in Application Engine programs.
- Implement the PSFTAPI package and trigger described below to set MODULE and ACTION at the start of all processes.
- Implement Fine-Grained Audit policy and handler to instrument nVision processes if required.
- Reduce AWR snapshot frequency from the default of 1 hour to every 15 minutes.
- Enable Application Engine batch timings to at least database and preferably also file by setting AETrace=1152 in all Application Server and Process Scheduler domain configuration files.
- Enable the Cobol statement timings report, which is written to the log file, by setting TraceSQL=128 in all Process Scheduler domain configuration files.
- Performance Monitor is complex to set up and the delivered analytics are limited. Nonetheless, the information that can be obtained from the data collected can be worth the effort of configuration to address on-line configuration and performance issues.
Oracle RDBMS Instrumentation
Oracle provides the dbms_application_info package to set certain attributes on the current database session. These attributes are visible in some of the dynamic performance views, are picked up by Active Session History (ASH), and can also be seen in AWR reports, Enterprise Manager screens, SQL trace files, and other performance utilities such as EDB360. The package was first documented in Oracle 7.3.3:
"Application developers can use the DBMS_APPLICATION_INFO package to record the name of the executing module or transaction in the database for use later when tracking the performance of various modules… System administrators can also use this information to track resource usage by module. When an application registers with the database, its name and actions are recorded in the V$SESSION and V$SQLAREA views. Your applications should set the name of the module and name of the action automatically each time a user enters that module. The module name could be the name of a … code segment in an … application. The action name should usually be the name or description of the current transaction within a module."
- Oracle 7 Tuning, release 7.3.3 ©Oracle 1997, Chapter 23: Registering Applications
- See also One of my Favourite Database Things: DBMS_APPLICATION_INFO
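For example, an application (or a trigger acting on its behalf) registers itself with calls such as the following; the module, action and client_info values here are purely illustrative:
BEGIN
  dbms_application_info.set_module(module_name => 'PSPMCSOSUM'      --e.g. component or program name
                                  ,action_name => 'GETCNT.CNT.P');  --e.g. page or step name
  dbms_application_info.set_client_info(client_info => 'PS');       --e.g. PeopleSoft operator ID
END;
/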
If module and action are consistently set to meaningful values as the application executes it is then possible to determine from where SQL statements originated and how different parts of the application are performing.
ASH is separately licenced as part of the Diagnostics Pack, which is only available on the Enterprise Edition of the database. However, most PeopleSoft customers running on Oracle do so on Enterprise Edition and are licenced for the Diagnostics Pack.
PeopleSoft has set client_info since PeopleTools 7.53 so that the PeopleSoft operator ID can be associated with the database session, mainly to allow auditing to be done with database triggers. However, this attribute is not persisted to the ASH data. Fortunately, client_id is also set to the operator ID, and this is collected by ASH.
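These attributes can be seen on current sessions; for example (the filter on program is an assumption about typical PeopleSoft server process names and may need adjusting):
SELECT sid, serial#, program, client_info, client_identifier, module, action
FROM   v$session
WHERE  program LIKE 'PSAPPSRV%'
OR     program LIKE 'PSAE%';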
PeopleTools has set module and action since PeopleTools 8.50, though the exact values used have changed sometimes with the PeopleTools version.
A new application server/process scheduler domain parameter, EnableAEMonitoring, was introduced in PeopleTools 8.54 to control this behaviour, and it is not enabled by default. If monitoring is not enabled, module defaults to the program name and action remains blank.
Recommendation: Set EnableAEMonitoring=1 in all application server and process scheduler domains in order to enable PeopleSoft instrumentation online, in the integration broker, and in Application Engine programs.
See also:
- PeopleTools 8.50 uses DBMS_APPLICATION_INFO to Identify Database Sessions
- PeopleTools 8.52 Application Engine sets MODULE and ACTION
- Undocumented (until PeopleTools 8.55) Application Engine Parameter: EnableAEMonitoring
On-Line Component/Page Information
In the online application, module and action are set to the component and page name respectively. In a search dialogue, action is set to 'xyzzy'.
In the Integration Broker, module and action are set to the service name and queue name.
Application Engine Step Information
In Application Engine, module is set to a string that includes the name of the Application Engine main program that was called and the sessionid_num for the current process instance number recorded on the process scheduler request table PSPRCSQUE. For example: PSAE.PSPMCSOSUM.1448
Later on, it may be necessary to use regular expressions in SQL to process this string before profiling the ASH data.
Action is set to string concatenated from the Application Engine program, section, step name, and step type. For example: PSPMCSOSUM.GETCNT.CNT.P
The program name may be different from that shown in module if one AE program calls another.
Note: Application Engine still doesn't reset ACTION on DO SELECT steps
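ASH data can then be profiled by these attributes. The following is only a sketch against the AWR history of ASH: each sample in DBA_HIST_ACTIVE_SESS_HISTORY represents roughly 10 seconds of database time, and the regular expression that strips the trailing number from module is an assumption based on the format shown above.
SELECT regexp_replace(h.module,'\.[0-9]+$') module   --strip the trailing ".1448"-style suffix
,      h.action
,      SUM(10) ash_secs                               --approx. 10s of DB time per sample
FROM   dba_hist_active_sess_history h
WHERE  h.module LIKE 'PSAE.%'
AND    h.sample_time >= SYSDATE-1
GROUP BY regexp_replace(h.module,'\.[0-9]+$'), h.action
ORDER BY ash_secs DESC;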
Cobol/nVision Instrumentation
Cobol, nVision, and SQR do not set module and action. Instead, they can be set at the start of every program initiated by the Process Scheduler using a trigger on PSPRCSRQST. The first thing a process does when it is initiated is to set the run status on its scheduler request record to 7, indicating that it is processing. This is visible in the Process Monitor component. A trigger on this transition can set module and action for the session to the program name and process instance number on that row of data.
This technique was used prior to PeopleSoft adding instrumentation to PeopleTools. It can still be applied to all processes, including Application Engine, because any PeopleSoft instrumentation will simply overwrite the value set by the trigger.
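A minimal sketch of such a trigger is shown below; the trigger name is hypothetical, and a production version (such as the PSFTAPI package referred to in the summary of recommendations) would handle more cases:
CREATE OR REPLACE TRIGGER sysadm.psprcsrqst_set_module  --hypothetical name
BEFORE UPDATE OF runstatus ON sysadm.psprcsrqst
FOR EACH ROW
WHEN (new.runstatus = 7 AND old.runstatus != 7)
BEGIN
  /*The process updates its own request row when it starts, so this code
    runs in the process's session and labels it with the program name
    and process instance number.*/
  dbms_application_info.set_module(module_name => :new.prcsname
                                  ,action_name => 'PI='||:new.prcsinstance);
EXCEPTION
  WHEN OTHERS THEN NULL;  --never let instrumentation abort the scheduler
END;
/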
The same technique was also used prior to the introduction of ASH to enable Oracle SQL Trace if a particular run control was specified, and is still occasionally useful.
nVision Layout Instrumentation
One of the challenges of tuning and monitoring nVision is being able to identify each report being run. nVision reports always run under the same process name: NVSRUN for a single report, RPTBOOK for a report book of many reports, or DRILLDWN for an nVision drill-down query. Knowing the process instance is useful because then we can look up the operator and run control ID. However, it would also be useful to know the report ID being run. When each individual nVision report starts, it queries its runtime parameters from the PS_NVS_REPORT PeopleTools table. There is no update, so it is not possible to capture this with a DML trigger. Instead, it is possible to define a fine-grained audit policy on the query and set module within a PL/SQL handler package that is invoked by the fine-grained audit.
Recommendation: Implement Fine-Grained Audit policy and handler to instrument nVision processes if required
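A sketch of the approach is shown below. The handler name, the parsing of the report ID out of the statement text, and the assumption that the statement contains a REPORT_ID literal are all illustrative; the statement that fires the policy is exposed to the handler through SYS_CONTEXT('USERENV','CURRENT_SQL'), which is only available inside fine-grained audit handlers.
CREATE OR REPLACE PROCEDURE sysadm.nvs_fga_handler   --hypothetical handler
  (object_schema VARCHAR2, object_name VARCHAR2, policy_name VARCHAR2) AS
  l_sql VARCHAR2(4000);
BEGIN
  l_sql := sys_context('userenv','current_sql');     --the SQL that triggered the audit
  /*Extract the report ID literal from the statement and expose it via module.
    The exact parsing depends on the SQL that nVision actually issues.*/
  dbms_application_info.set_module(
     module_name => SUBSTR(regexp_substr(l_sql,'REPORT_ID *= *''[^'']+'''),1,48)
    ,action_name => NULL);                            --a production handler would preserve or also set action
EXCEPTION
  WHEN OTHERS THEN NULL;                              --never let instrumentation break the report
END;
/
BEGIN
  dbms_fga.add_policy(object_schema   => 'SYSADM'
                     ,object_name     => 'PS_NVS_REPORT'
                     ,policy_name     => 'PS_NVS_REPORT_MODULE'
                     ,handler_schema  => 'SYSADM'
                     ,handler_module  => 'NVS_FGA_HANDLER'
                     ,enable          => TRUE
                     ,statement_types => 'SELECT');
END;
/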
2nd Database Connection Information
PeopleSoft programs use a second database connection to increment sequence numbers in order to minimise row-level locking on such tables. It is like an AUTONOMOUS_TRANSACTION in PL/SQL. There is no PeopleSoft instrumentation on this session, but it is possible to use an AFTER LOGON trigger to set client_info, module and action.
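A minimal sketch of such a trigger follows; the trigger name and the values chosen are assumptions, and because PeopleSoft instrumentation overwrites these attributes where it is present, only otherwise anonymous sessions (such as this second connection) retain them.
CREATE OR REPLACE TRIGGER sysadm.psft_logon_instrument  --hypothetical name
AFTER LOGON ON sysadm.schema
BEGIN
  --label every new session at connect time
  dbms_application_info.set_client_info(client_info => sys_context('userenv','host'));
  dbms_application_info.set_module(module_name => sys_context('userenv','os_user')
                                  ,action_name => 'LOGON');
END;
/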
Oracle Automatic Workload Repository (AWR) Snapshots
PeopleSoft generates a lot of non-shareable SQL.
- Dynamically generated SQL, often in PeopleCode, concatenates strings of SQL with bind variables, so the bind variables become literals in the final SQL statement. Statements with different literal values are considered to be different statements with different SQL_IDs.
- Similarly, dynamic Cobol statements result in literal values in the SQL statement.
- %BIND() variables in Application Engine will also become literal values in the SQL statement unless the ReUseStatement attribute is set on the AE step; however, this attribute cannot be set on steps where dynamic fragments of code are introduced with %BIND(…,NOQUOTES).
- Application Engine programs that use temporary records can use different non-shared instances of the record in different executions, and this also results in different statements with different SQL_IDs.
- See also Performance Benefits of ReUse Statement Flag in Application Engine
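One way to gauge how much non-shareable SQL a system is generating is to group the library cache by FORCE_MATCHING_SIGNATURE, which is the same for statements that differ only in their literal values (the threshold in the HAVING clause is arbitrary):
SELECT force_matching_signature
,      COUNT(DISTINCT sql_id)        sql_ids       --statements that would share if literals were binds
,      SUM(executions)               execs
,      ROUND(SUM(elapsed_time)/1e6)  elapsed_secs
FROM   v$sql
WHERE  force_matching_signature > 0
GROUP BY force_matching_signature
HAVING COUNT(DISTINCT sql_id) >= 100
ORDER BY sql_ids DESC;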
Recommendation: Reduce AWR snapshot frequency from the default of 1 hour to every 15 minutes. This change results in only a modest increase in overhead in processing and space on AWR, but it is worth the additional information that is captured.
This advice also applies to Statspack that may be used if you are not licenced for the Diagnostics Pack.
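On AWR, the snapshot interval is changed with a delivered procedure; for example:
BEGIN
  dbms_workload_repository.modify_snapshot_settings(interval => 15);  --minutes; retention can be set the same way
END;
/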
PeopleTools Performance Metrics
Batch Timings
PeopleSoft Application Engine and Cobol programs can emit batch timings reports on successful completion.
Application Engine
Application Engine batch timings are controlled by the AETrace flag in the Process Scheduler domain configuration file and, for online AE programs, in the Application Server domain configuration files.
- AETrace=128: the batch timings report is written to the AE trace file
- AETrace=1024: batch timings are written to PS_BAT_TIMINGS% tables in the database
Recommendation: Enable Application Engine batch timings to at least database and preferably also file by setting AETrace=1152 in all Application Server and Process Scheduler domain configuration files.
The trace setting in the Process Scheduler configuration can be overridden by setting process-specific command line parameter overrides in the process definition. This is often done to set other trace settings, and it is common to see these overrides unintentionally left in place longer than necessary. If trace is enabled in this way, the batch timings flags should also be set.
See PeopleBooks -> Development Tools -> Application Engine -> Enabling Application Engine Tracing
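Once the timings are written to the database (AETrace=1024 or 1152), they can be aggregated across many executions. The following is only a sketch: the PS_BAT_TIMINGS_DTL column names are quoted from memory, and TIME_ELAPSED is assumed to be in milliseconds, so verify both against your PeopleTools release.
SELECT d.bat_program_name
,      d.detail_id                                 --program.section.step
,      SUM(d.execute_count)             execs
,      ROUND(SUM(d.time_elapsed)/1000)  elapsed_secs
FROM   ps_bat_timings_dtl d
GROUP BY d.bat_program_name, d.detail_id
ORDER BY elapsed_secs DESC;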
Cobol
PeopleSoft Cobol programs can only write batch timings reports to file, not to the database. This is controlled by a different parameter.
- TraceSQL=128: enable the Cobol statement timings report
This trace setting can also be overridden by setting process-specific command line parameter overrides in the process definition. If trace is enabled in this manner, the batch timings flag should also be set.
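In the Process Scheduler domain configuration file this is just another flag in the trace section; the section name below is taken from the delivered psprcs.cfg, though the exact spelling of the parameter can vary between PeopleTools releases.
[Trace]
;128 = Cobol statement timings report written to the trace file
TraceSQL=128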
PeopleSoft Performance Monitor
Performance Monitor provides information about the performance of the PIA, including response times for the online transactions. Metrics are stored in a separate monitoring PeopleSoft system to minimize the effect of measurement intrusion. It optionally samples the state of each web server, application server and process scheduler, collecting operating system and Tuxedo metrics. It also has a PIA session trace capability.
The sampled data includes the number of busy application server processes and the length of inbound Tuxedo service queues. This data can be used to validate the sizing of the application servers.
Recommendation: Performance Monitor is complex to set up and the delivered analytics are limited. Nonetheless, the information that can be obtained from the data collected can be worth the effort of configuration to address online configuration and performance issues.
Labels: Performance Metrics
Wednesday, April 20, 2016
PS360: A Utility to Extract and Present PeopleSoft Configuration and Performance Data
Introduction
Most of a PeopleSoft application is itself stored in the database in the PeopleTools tables, so there is a lot of information in the database about the configuration and operation of a PeopleSoft system. There are also performance metrics, particularly about batch processes.
PS360 is a new tool on which I am working. It just uses SQL scripts to extract that data to HTML files, and packages them up in a zip file so that they can be sent for further analysis. The style and method are closely modelled on Enkitec's EDB360 by Carlos Sierra, another free tool used for health check and performance analysis of any Oracle database system. PS360 aims to gather PeopleSoft-specific information that is not presented by EDB360. It also runs in Oracle's SQL*Plus tool, and so is only available for use with an Oracle database.
Every section of PS360 is just the output of a SQL query; sometimes pre-processing is done in an anonymous PL/SQL block. It does not install anything into the database, and does not update any table (other than the PLAN_TABLE, which is used for temporary working storage). Each report is in tabular and/or graphical format. All the charts are produced with the Google chart API.
The output falls into three sections.
- Configuration: Simple reports of certain configuration tables.
- Performance Metrics: Process Scheduler and Application Engine timings
- Consistency Checks: both within PeopleTools tables and between the PeopleSoft and Oracle catalogues.
Sample PS360 Index Page generated on a PeopleSoft Demo Database
Instructions
The tool can be run by anyone with access to the PeopleSoft owner database user (usually SYSADM). That user will already have the privilege to read the Oracle catalogue.
Download the tool and unzip it into a directory.
- Navigate to the ps360 (master) directory
- Open SQL*Plus and connect as the owner of the PeopleSoft database (usually SYSADM).
- Execute the script ps360.sql (a minimal example session is shown after this list).
- The output will be written to a zip file in the same directory.
- Unpack that zip file on your own PC.
- Open the file ps360_[database name]_0_index.html with a browser.
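A minimal example session, in which the connect string and database name are placeholders:
sqlplus sysadm@hcm92
SQL> @ps360.sql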
Feedback and Enhancements
I am looking for feedback about the tool, and suggestions for further enhancements.
Please either leave comments here or e-mail me at info@go-faster.co.uk.
Labels: Healthcheck, Performance Metrics, ps360
Tuesday, August 28, 2007
Performance Metrics for Scheduled Queries in PeopleSoft
The ability to log executions of ad-hoc queries run via the Query Viewer in the PIA was introduced in PeopleTools 8.44. However, not all queries are executed directly; they can also be scheduled via the Process Scheduler. Queries whose result sets are larger than the maximum fetch size for the PSQRYSRV server raise an error, and they must be scheduled instead.
Don't be tempted to increase the maximum fetch size, because the entire result set for a query run in the PIA is fetched into the Java pool of the web server, and so it can impact all other users of that web server.
Scheduling queries is a good thing because:
i) You no longer copy the results to the web server Java pool.
ii) You can restrict the number of queries that can execute concurrently with standard Process Scheduler functionality.
However, Query logging does not apply to scheduled queries, and recently I needed to find a way to work out which scheduled queries were executed most frequently and took the longest time. With most batch processes, it is just a matter of querying the process scheduler request table PSPRCSRQST, but scheduled queries are all executed by an Application Engine program called PSQUERY, so from that table alone all queries look the same. The query name is specified on a run control, but operators tend to recycle run controls, so I can't use those either. However, the name of the query output file includes the name of the query, and the list of posted output files is held on the table PS_CDM_FILE_LIST. By joining these two tables, I know how long it took to execute each query, and I can construct the following query for Oracle RDBMS (updated 10.10.2011).
column qryname format a30 heading 'Query Name'
column avg_secs format 9,999.9 heading 'Average|Exec|(s)'
column sum_secs format 999,990 heading 'Total|Exec|(s)'
column num_execs format 999,990 heading 'Number|of|Execs'
SELECT QRYNAME
, AVG(EXEC_SECS) AVG_SECS
, SUM(EXEC_SECS) SUM_SECS
, COUNT(*) NUM_EXECS
FROM (
SELECT
SUBSTR(F.FILENAME,1,INSTR(FILENAME,'-'||LTRIM(TO_CHAR(F.PRCSINSTANCE))||'.')-1
) QRYNAME
,((ENDDTTM+0)-(BEGINDTTM+0))*86400 EXEC_SECS
FROM PSPRCSRQST P, PS_CDM_FILE_LIST F
WHERE P.PRCSNAME = 'PSQUERY'
AND P.RUNSTATUS = 9
AND P.PRCSINSTANCE = F.PRCSINSTANCE
AND NOT F.CDM_FILE_TYPE IN('LOG','AET','TRC')
)
GROUP BY QRYNAME
ORDER BY SUM_SECS DESC
/
Average Total Number
Exec Exec of
Query Name (s) (s) Execs
------------------------------ -------- -------- --------
PYRL_SEGMENTATION 3,846.6 38,466 10
PYRL_MISC_ELEMENT 3,363.3 20,180 6
PYRL_ELEMENT_MISC 1,674.8 20,097 12
PYRL_NO_DED_RECIP 2,366.3 18,930 8
PYRL_LOAN_BY_PERIOD 2,888.0 5,776 2
PYRL_OPAY_AMT 1,237.0 4,948 4
PYRL_NET 4,893.0 4,893 1
Q_ELEMENT_AMOUNTS 1,083.5 2,167 2
PYRL_NI_EXCEP_LEAVERS 412.0 2,060 5
PYRL_BANK_DETAILS 179.5 1,795 10
...
But there are some limitations:
i) Once a query has been purged from the Process Monitor it will no longer be included in the statistics.
ii) This technique cannot distinguish between a public and a private query of the same name. If any operators copy a public query to a private one of the same name, this query will report them all as one.
I have also produced a version of this query for SQL Server:
SELECT
SUBSTRING(F.FILENAME,1,PATINDEX('%-'+LTRIM(STR(F.PRCSINSTANCE))+'.%',FILENAME)-1) QRYNAME
, AVG(DATEDIFF(s,BEGINDTTM,ENDDTTM)) AVG_SECS
, SUM(DATEDIFF(s,BEGINDTTM,ENDDTTM)) SUM_SECS
, COUNT(*) NUM_EXECS
FROM PSPRCSRQST P, PS_CDM_FILE_LIST F
WHERE P.PRCSNAME = 'PSQUERY'
AND P.RUNSTATUS = 9
AND P.PRCSINSTANCE = F.PRCSINSTANCE
AND F.CDM_FILE_TYPE IN('CSV','XLS') --there might be more
GROUP BY SUBSTRING(F.FILENAME,1,PATINDEX('%-'+LTRIM(STR(F.PRCSINSTANCE))+'.%',FILENAME)-1)
ORDER BY SUM_SECS DESC
Labels: Performance Metrics, Scheduled Queries
Thursday, June 15, 2006
Additional Batch Timings from Application Engine
I am doing some Application Engine development at the moment as part of a project to replatform to Oracle. I go through an iterative process of changing SQL Server-specific SQL (in a custom AE program) to run on Oracle, and then running the process until it fails again. However, not only must the processes run correctly on Oracle, but the new code needs to be as efficient as possible in order to meet performance targets. So, I need to know how long each step in the AE program took to execute.
Application Engine batch timings are very useful metrics when tuning processes, but they are only written when the program ends, and they are only written to the timings tables in the database when a program completes successfully. I am repeatedly restarting the same process, gradually moving towards the end, so I will never get a complete timings report.
However, Application Engine programs write a status string to the table PSPRCSRQSTSTRNG, which contains the step about to be executed. The value of this string can be seen in Process Monitor on the Process Detail page.
desc PSPRCSRQSTSTRNG
Name Null? Type
-------------------- -------- ---------------
PRCSINSTANCE NOT NULL NUMBER(38)
RQSTSTRINGTYPE NOT NULL VARCHAR2(2)
PRCSRQSTSTRING NOT NULL VARCHAR2(254)
I created a table to which a trigger logs the steps that are executed. I have also used a sequence number to produce a reliable primary key on that table, because values in the string will repeat when Application Engine is in a loop, or when a process is restarted, in which case it keeps the same process instance number. The sequence number is generated by an Oracle sequence to avoid the locking problems caused by using a table to generate a sequence (as PeopleSoft generally does). I am not worried by gaps in the sequence.
CREATE TABLE dmk_prcsrqststrng_log
(prcsinstance NUMBER NOT NULL
,sequence_nbr NUMBER NOT NULL
,rqststringtype VARCHAR2(2) NOT NULL
,prcsrqststring VARCHAR2(254) NOT NULL
,datestamp DATE NOT NULL
,datetimestamp TIMESTAMP NOT NULL
,CONSTRAINT dmk_prcsrqststrng_log_pk
PRIMARY KEY (prcsinstance, sequence_nbr)
);
CREATE SEQUENCE dmk_prcsrqststrng_seq;
A trigger on PSPRCSRQSTSTRNG writes new values for PRCSRQSTSTRING to the log table. I have used an autonomous transaction in case the update rolls back after an error, and so that I can see the log record before Application Engine commits. I am also storing two timestamps when the audit record is written. The Oracle date type is only accurate to the second, but I have also used the newer timestamp datatype. On Oracle 9.2 (on Windows) I get millisecond accuracy, but on Oracle 10.2 I get microsecond accuracy.
CREATE OR REPLACE TRIGGER dmk_prcsrqststrng_log
AFTER INSERT OR UPDATE OR DELETE
ON sysadm.psprcsrqststrng
FOR EACH ROW
DECLARE
PRAGMA autonomous_transaction;
BEGIN
IF INSERTING OR UPDATING THEN
INSERT INTO dmk_prcsrqststrng_log
(prcsinstance
,sequence_nbr
,rqststringtype
,prcsrqststring
,datestamp
,datetimestamp)
VALUES
(:new.prcsinstance
,dmk_prcsrqststrng_seq.nextval
,:new.rqststringtype
,:new.prcsrqststring
,SYSDATE
,SYSTIMESTAMP);
ELSIF DELETING THEN
INSERT INTO dmk_prcsrqststrng_log
(prcsinstance
,sequence_nbr
,rqststringtype
,prcsrqststring
,datestamp
,datetimestamp)
VALUES
(:old.prcsinstance
,dmk_prcsrqststrng_seq.nextval
,:old.rqststringtype
,''
,SYSDATE
,SYSTIMESTAMP);
END IF;
COMMIT;
EXCEPTION
WHEN OTHERS THEN NULL;
END;
/
show errors
Unfortunately, the status record is not always updated or deleted when the Application Engine program ends. Therefore, another trigger is required to delete the process record when the application updates its status to no longer processing.
CREATE OR REPLACE TRIGGER dmk_prcsrqststrng_del
AFTER UPDATE OF runstatus ON sysadm.psprcsrqst
FOR EACH ROW
WHEN (new.runstatus != 7 AND old.runstatus = 7
AND new.prcstype IN ('Application Engine'))
DECLARE
PRAGMA autonomous_transaction;
BEGIN
DELETE FROM psprcsrqststrng WHERE prcsinstance = :new.prcsinstance;
COMMIT;
EXCEPTION WHEN OTHERS THEN NULL;
END;
/
show errors
This table can be used in various ways.
i) A simple report can be written to show the last 10 steps for each process instance, and their duration. I can run this while the program is executing to see how far it has got, and how long the current step has been executing.
COLUMN ranking FORMAT 990 HEADING 'Rank'
COLUMN sequence_nbr FORMAT 990 HEADING 'Seq'
COLUMN prcsrqststring FORMAT a25
COLUMN ratio FORMAT 90.0 HEADING '%'
COLUMN sum_duration FORMAT 999,990.000000 HEADING 'Total|Duration'
COLUMN duration FORMAT 99,990.000000
COLUMN avg_duration FORMAT 99,990.000000 HEADING 'Average|Duration'
COLUMN max_duration FORMAT 99,990.000000 HEADING 'Maximum|Duration'
COLUMN datetimestamp FORMAT a25
COLUMN executions FORMAT 990 HEADING 'Execs'
BREAK ON REPORT ON PRCSINSTANCE SKIP 1
SELECT y.prcsinstance
, y.sequence_nbr
, y.prcsrqststring
, y.datetimestamp
, (TO_DATE(TO_CHAR(y.nextdatetimestamp,'YYYYMMDDHH24MISS'),'YYYYMMDDHH24MISS')
-TO_DATE(TO_CHAR(y.datetimestamp,'YYYYMMDDHH24MISS'),'YYYYMMDDHH24MISS'))*86400
+TO_NUMBER(TO_CHAR(y.nextdatetimestamp,'xFF'))
-TO_NUMBER(TO_CHAR(y.datetimestamp,'xFF')) duration
FROM (
SELECT x.prcsinstance
, x.sequence_nbr
, x.prcsrqststring
, x.datetimestamp
, CASE WHEN x.prcsrqststring = '' THEN x.nextdatetimestamp
ELSE NVL(x.nextdatetimestamp,SYSTIMESTAMP)
END as nextdatetimestamp
, x.ranking
FROM (
SELECT x.prcsinstance
, x.sequence_nbr
, x.prcsrqststring
, x.datetimestamp
, LEAD(x.datetimestamp,1) OVER (PARTITION BY x.prcsinstance ORDER BY x.sequence_nbr) AS nextdatetimestamp
, RANK() OVER (ORDER BY x.datetimestamp desc) AS ranking
FROM dmk_prcsrqststrng_log x
) x
) y
WHERE y.ranking <= 10
ORDER BY prcsinstance, sequence_nbr;

PRCSINSTANCE  Seq PRCSRQSTSTRING                 DATETIMESTAMP                   DURATION
------------ ---- ------------------------------ ------------------------- --------------
      242797  688 Main                           15-JUN-06 09.56.49.000721      15.227588
              689 XXXIF_PAY_FI.WRITEOUT.Step01   15-JUN-06 09.57.04.228309      15.233425
              690 XXX_GEN_MD5.MAIN.Step01        15-JUN-06 09.57.19.461734      45.355060
              691 XXXIF_PAY_FI.WRITEOUT.Step03   15-JUN-06 09.58.04.816794   2,738.654582

      242814  681 XXXIF_PAYA.Z_DelOld.?          14-JUN-06 18.00.14.747175       8.575205
              682                                14-JUN-06 18.00.23.322380

      242815  683 Main                           14-JUN-06 18.01.15.606713      19.816770
              684 XXXIF_PAYH.Insert.Ins01        14-JUN-06 18.01.35.423483      15.516303
              685 XXXIF_PAYH.Insert.Ins04        14-JUN-06 18.01.50.939786      15.160971
              686 XXXIF_PAYH.Process.Hire        14-JUN-06 18.02.06.100757     163.917761
              687                                14-JUN-06 18.04.50.018518
ii) I can also aggregate the time for all the steps to determine what steps account for the longest execution time.
SELECT *
FROM ( /*rank lines in report, also calculate ration to report*/
SELECT RANK() OVER (ORDER BY sum_duration DESC) AS ranking
, d.*
, 100*ratio_to_report(sum_duration) OVER () AS ratio
FROM ( /*calculate sum and durations*/
SELECT c.prcsrqststring
, SUM(c.duration) sum_duration
, COUNT(*) executions
, AVG(c.duration) avg_duration
, MAX(c.duration) max_duration
FROM (
SELECT b.prcsrqststring
, (TO_DATE(TO_CHAR(b.nextdatetimestamp
,'YYYYMMDDHH24MISS'),'YYYYMMDDHH24MISS')
-TO_DATE(TO_CHAR(b.datetimestamp
,'YYYYMMDDHH24MISS'),'YYYYMMDDHH24MISS'))*86400
+TO_NUMBER(TO_CHAR(b.nextdatetimestamp,'xFF'))
-TO_NUMBER(TO_CHAR(b.datetimestamp,'xFF')) duration
FROM (
SELECT a.prcsinstance
, a.sequence_nbr
, a.prcsrqststring
, a.datetimestamp
, LEAD(a.datetimestamp,1)
OVER (PARTITION BY a.prcsinstance
ORDER BY a.sequence_nbr) AS nextdatetimestamp
, RANK() OVER (ORDER BY a.datetimestamp desc) AS ranking
FROM dmk_prcsrqststrng_log a
) b
WHERE b.nextdatetimestamp IS NOT NULL
AND prcsrqststring != ''
) c
GROUP BY c.prcsrqststring
HAVING SUM(c.duration) > 0
) d
) e
WHERE ranking <= 10 ;
Total Average Maximum
Rank PRCSRQSTSTRING Duration Execs Duration Duration %
---- ------------------------------ --------------- ----- -------------- -------------- -----
1 XXXIF_PAYC.ADDR_LD.060 11,509.840101 15 767.322673 980.808154 29.0
2 XXXIF_PAYC.ADDR_LD.050 7,476.656270 15 498.443751 679.516699 18.8
3 XXXIF_PAYC.ADDR_LD.020 3,669.344222 14 262.096016 669.283625 9.2
4 XXXIF_PAYC.ASGN_LD.076 3,076.882638 13 236.683280 2,729.356869 7.7
5 XXXIF_PAYC.ASGN_LD.010 2,721.361198 15 181.424080 332.187057 6.8
6 XXXIF_PAYC.PURGE.020 2,644.848544 1 2,644.848544 2,644.848544 6.7
7 XXXIF_PAYC.ASGN_LD.020 624.892364 15 41.659491 155.971344 1.6
8 XXXIF_PAYC.ADDR_LD.010 623.354434 14 44.525317 136.919156 1.6
9 XXXIF_PAYC.ASGN_LD.030 583.222200 10 58.322220 425.172493 1.5
10 XXXIF_PAYC.ASGN_LD.060 543.455165 14 38.818226 125.473966 1.4
This method, although useful, is not foolproof. The status string is only updated if AE Restart is enabled, and then not for every step followed by a COMMIT. I haven't yet been able to work out all the criteria for this.
Labels: Performance Metrics