Wednesday, February 11, 2009
Building on the success of the inaugural UKOUG PeopleSoft Conference & Exhibition in 2008, this Conference, presented by UKOUG and GPUG, will provide a forum for the presentation and exchange of ideas and practical experiences within the areas of PeopleSoft Financials, HCM/HR and Technology. The multi-stream agenda will feature keynote presentations, technical and non-technical sessions, roundtables, panel discussions and more.
If you are interested in sharing your experience of using PeopleSoft technology and applications, here is your chance: the call for papers is now open.
Deadlines are tight. Submit your abstracts now at: www.oug.org/peoplesoft
Closing date for submissions: Friday 27th February.
The review panel, made up of PeopleSoft community members, will evaluate all abstracts submitted by the closing date. The authors of accepted abstracts will receive confirmation at the beginning of March.
Friday, January 30, 2009
Managing Changes to the Number of Instances of Temporary Tables used in Application Engine Programs
When you run multiple copies of either the same Application Engine program or different Application Engine programs that happen to use the same temporary work records, you need to consider how many instances of the program or programs are likely to run concurrently, and hence how many instances of each temporary table to build.
But how do you manage these tables when you change the number of instances in the Application Engine properties?
How many tables are built for each Temporary Record?
The number of instances of each Application Engine program is set in the properties for that program (and stored on the PeopleTools table PSAEAPPLDEFN in the column TEMPTBLINSTANCES).
This screenshot shows the Application Engine Program Properties for TL_TIMEADMIN, which is delivered configured for up to 10 concurrent instances.

The number of tables that are built for each Temporary Record is the sum of the instances in all the Application Engine programs in which the record is used, plus the number of Global Instances (set up on the PeopleTools Options page).
Let’s take the record TL_PROF_LIST as an example. It is defined as a Temporary Record in 8 different Application Engine programs (in my HCM8.9 demo system).
SELECT a.recname, a.ae_applid, b.temptblinstances
FROM   psaeappltemptbl a, psaeappldefn b
WHERE  a.ae_applid = b.ae_applid
AND    a.recname = 'TL_PROF_LIST'
ORDER BY a.recname
/

RECNAME         AE_APPLID    TEMPTBLINSTANCES
--------------- ------------ ----------------
TL_PROF_LIST    TL_AGG_SECTN                1
TL_PROF_LIST    TL_OUTTCD                   5
TL_PROF_LIST    TL_PUB_TM1                  5
TL_PROF_LIST    TL_SCHHRSRPT                5
TL_PROF_LIST    TL_SCHRES_AE                9
TL_PROF_LIST    TL_ST_LIB                   5
TL_PROF_LIST    TL_TIMEADMIN               10
TL_PROF_LIST    TL_TRPROFILE               10
So, across all the programs 50 temporary tables are required.
SELECT * FROM pstemptblcntvw WHERE recname = 'TL_PROF_LIST'
/

RECNAME         TEMPTBLINSTANCES
--------------- ----------------
TL_PROF_LIST                  50
The system has three global instances.
SELECT temptblinstances FROM psoptions
/

TEMPTBLINSTANCES
----------------
               3
So, Application Designer will build 54 tables based on this record.
- 3 ‘global’ instances for Application Engines run without a Process Instance number. The table names will be suffixed with numbers 1 to 3.
- 50 ‘private’ instances for the requested instances of the Application Engine programs. The table names will be suffixed with numbers 4 to 53.
- 1 ‘shared’ instance without a suffix (in other words, the usual table name), which is used if no instance number has been allocated to a process instance.
CREATE TABLE PS_TL_PROF_LIST
(PROCESS_INSTANCE DECIMAL(10) NOT NULL
,EMPLID VARCHAR2(11) NOT NULL
,EMPL_RCD SMALLINT NOT NULL
,START_DT DATE
,END_DT DATE)
TABLESPACE TLWORK
STORAGE (INITIAL 40000 NEXT 100000 MAXEXTENTS UNLIMITED PCTINCREASE 0)
PCTFREE 10 PCTUSED 80
/
…
CREATE TABLE PS_TL_PROF_LIST53
(PROCESS_INSTANCE DECIMAL(10) NOT NULL
,EMPLID VARCHAR2(11) NOT NULL
,EMPL_RCD SMALLINT NOT NULL
,START_DT DATE
,END_DT DATE)
TABLESPACE TLWORK
STORAGE (INITIAL 40000 NEXT 100000 MAXEXTENTS UNLIMITED PCTINCREASE 0)
PCTFREE 10 PCTUSED 80
/
Are there any temporary tables that have not been built but that should be built?
If you increase the number of instances of temporary tables on an Application Engine program then you may need to build the extra temporary tables. This query reports the missing tables.
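The exact script can be downloaded from my website; what follows is only a minimal sketch of the idea. It assumes the tables were built in the connected schema with the standard PS_ prefix (so it ignores any SQLTABLENAME overrides), and it caps the instance generator at 100:

SELECT r.recname
,      i.instance
,      r.temptblinstances + o.temptblinstances temptblinstances
FROM   pstemptblcntvw r
,      psoptions o
,      (SELECT LEVEL instance FROM dual CONNECT BY LEVEL <= 100) i
WHERE  i.instance <= r.temptblinstances + o.temptblinstances
AND    NOT EXISTS( /*does the table exist?*/
         SELECT 'x'
         FROM   user_tables t
         WHERE  t.table_name = 'PS_'||r.recname||i.instance)
ORDER BY r.recname, i.instance
/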
I dropped PS_TL_PROF_LIST42, but I also increased the number of instances of TL_TIMEADMIN from 10 to 11. So the query reports that instances 42 and 54 of this table are missing, and all the other temporary records used by TL_TIMEADMIN also report a missing instance.
RECNAME           INSTANCE TEMPTBLINSTANCES
--------------- ---------- ----------------
…
TL_PMTCH_TMP1           19               19
TL_PMTCH_TMP2           19               19
TL_PROF_LIST            42               54
TL_PROF_LIST            54               54
TL_PROF_WRK             38               38
TL_PT_FINAL             29               29
…
The remedy is very simple. Application Designer will build a script for just the missing tables.
CREATE TABLE PS_TL_PROF_LIST42
(PROCESS_INSTANCE DECIMAL(10) NOT NULL
,EMPLID VARCHAR2(11) NOT NULL
,EMPL_RCD SMALLINT NOT NULL
,START_DT DATE
,END_DT DATE)
TABLESPACE TLWORK
STORAGE (INITIAL 40000 NEXT 100000 MAXEXTENTS UNLIMITED PCTINCREASE 0)
PCTFREE 10 PCTUSED 80
/
Are there any temporary tables built that should not be built?
If you have reduced the number of temporary tables, then you may need to drop the excess tables.
This query reports tables that are beyond the number required.
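Again, the downloadable script is the definitive version; this is just a sketch of the approach. The CASE expression guards the TO_NUMBER() conversion so that tables whose suffix is not entirely numeric are simply ignored, and note that the underscores in the LIKE pattern are single-character wildcards, so similarly named tables could in theory produce false matches:

SELECT r.recname
,      TO_NUMBER(SUBSTR(t.table_name, LENGTH(r.recname)+4)) instance
,      r.temptblinstances + o.temptblinstances temptblinstances
,      t.table_name
FROM   pstemptblcntvw r
,      psoptions o
,      user_tables t
WHERE  t.table_name LIKE 'PS_'||r.recname||'%'
AND    TO_NUMBER( /*CASE yields NULL for non-numeric suffixes*/
         CASE WHEN REGEXP_LIKE(SUBSTR(t.table_name, LENGTH(r.recname)+4)
                              ,'^[0-9]+$')
              THEN SUBSTR(t.table_name, LENGTH(r.recname)+4) END)
       > r.temptblinstances + o.temptblinstances
ORDER BY r.recname, instance
/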
I built the missing tables for the previous example, but then I reduced the number of instances on TL_TIMEADMIN back to 10. Now, the query reports that there is an extra table for each record beyond the number defined.
RECNAME       INSTANCE TEMPTBLINSTANCES TABLE_NAME
------------- -------- ---------------- ------------------
…
TL_PMTCH_TMP1       19               18 PS_TL_PMTCH_TMP119
TL_PMTCH_TMP2       19               18 PS_TL_PMTCH_TMP219
TL_PROF_LIST        54               53 PS_TL_PROF_LIST54
TL_PROF_WRK         38               37 PS_TL_PROF_WRK38
TL_PT_FINAL         29               28 PS_TL_PT_FINAL29
…
The problem is that Application Designer will not generate the DDL to drop any of these tables. If you reduce the number of temporary table instances, you will have to drop the excess tables yourself. However, it is easy to adjust the above query to produce the DDL to drop them.
SELECT 'DROP TABLE '||t.table_name||' PURGE;' cmd FROM …
The query then produces these commands.
…
DROP TABLE PS_TL_PMTCH_TMP119 PURGE;
DROP TABLE PS_TL_PMTCH_TMP219 PURGE;
DROP TABLE PS_TL_PROF_LIST54 PURGE;
DROP TABLE PS_TL_PROF_WRK38 PURGE;
DROP TABLE PS_TL_PT_FINAL29 PURGE;
…
The scripts in this posting can be downloaded from my website.
Labels: Application Engine, Temporary Records
Monday, January 26, 2009
Factors Affecting Performance of Concurrent Truncate of Working Storage Tables
A longer version of this posting, with detailed experimental results, is available on my website.
Over the past year, I have seen problems with Local Write Wait in the Oracle database on two different Oracle systems. One occasion was in a PeopleSoft Time and Labour batch process; the other was in a custom PL/SQL process in a non-PeopleSoft system.
In both cases, normal tables in the databases were being used for temporary working storage before that data was then written to another table. The content of the working storage tables was then cleared out by periodically truncating them. In order to increase overall batch throughput, several instances of the program were run in parallel. The resulting concurrent truncate operations contended with each other, and the processes did not scale well.
I have written about this subject previously. These problems have prompted me to do some research and testing, and I am now able to make definite recommendations.
Oracle Note 334822.1 (which I have also quoted before) provides a good technical description of the database’s internal behaviour. Put simply, truncate (and drop) operations serialise: only one process can truncate at any one time. If you have multiple concurrent processes all trying to truncate their own working storage tables, you could experience performance problems, and such processes do not scale well as the number of concurrent processes increases.
Real Problems
In the case of the non-PeopleSoft PL/SQL process, I was able to recreate the working storage tables as Global Temporary Tables (GTTs) that deleted the rows on commit because the process committed only when each transaction was complete. Local write wait totally disappeared in this case. Temporary objects do not need to be recovered, so this mechanism does not apply to them.
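For illustration only, a working storage table rebuilt along these lines (the table and column names here are hypothetical):

CREATE GLOBAL TEMPORARY TABLE my_work_table
(process_instance NUMBER NOT NULL
,emplid VARCHAR2(11) NOT NULL)
ON COMMIT DELETE ROWS
/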
The PeopleSoft scenario involved one of the ‘Time & Labor’ batch processes, TL_TIMEADMIN. However, GTTs cannot easily be introduced into the T&L batches because they are ‘restartable’: the contents of the temporary working storage tables need to be preserved after the process and its session terminate. This precludes the use of GTTs.
The combination of local write wait and enq: RO - fast object reuse accounted for 31% of the total response time, a significant proportion.
- local write wait occurs, as the name suggests, when the session is waiting for its own write operations. The RO enqueue is used to protect the buffer cache chain while it is scanned for dirty blocks in an object for the database writer to then write to the data files.
- enq: RO - fast object reuse occurs when a process waits to acquire the RO enqueue, in other words, while somebody else is truncating or dropping an object.
The duration of the local write wait is affected by two factors:
- The time taken to write the blocks to disk. Processes that are frequently truncating temporary working storage are also doing a lot of DML to populate the working storage and other tables, so the disks under the data files are going to be busy. If the disk becomes a bottleneck, the duration of the local write wait will certainly increase.
- The time taken to scan the buffer cache for dirty blocks to be written to disk and flushed from cache. The larger the buffer cache, the longer it will take to find these blocks.
Tests
I created a simple test to model the behaviour of T&L. I created pairs of simple tables, populated one of each pair, and then repeatedly copied the data back and forth between them, truncating the source after the copy. The test script has evolved into a PL/SQL package procedure, mainly so that the tests could be submitted to and run concurrently by the Oracle job scheduler. There are also procedures to create, populate, and drop the pairs of working storage tables. The scripts can be downloaded from my website.
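The full package can be downloaded from my website; stripped right down, the core of each test is just a loop like this (the table names and row counts are illustrative assumptions):

CREATE TABLE t1 AS SELECT * FROM all_objects WHERE ROWNUM <= 10000
/
CREATE TABLE t2 AS SELECT * FROM t1 WHERE 1=2
/
BEGIN
  FOR i IN 1..100 LOOP
    INSERT INTO t2 SELECT * FROM t1;        /*copy the data forward*/
    EXECUTE IMMEDIATE 'TRUNCATE TABLE t1';  /*DDL, so an implicit commit*/
    INSERT INTO t1 SELECT * FROM t2;        /*and copy it back*/
    EXECUTE IMMEDIATE 'TRUNCATE TABLE t2';
  END LOOP;
END;
/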
I was able to run the same controlled test in a variety of scenarios. I have run the tests on Oracle 10.2.0.3 on various platforms with similar results. A detailed set of results are available in the longer version of this document on my website.
General Recommendations
If you have to store temporary working data in a database table, it is much better to use a Global Temporary Table, although the design of the application may preclude this. In particular, it is not possible to do this with data used by restartable Application Engine processes, because the contents of the GTT would be lost when the process terminates.
The Metalink note references unpublished bug 414780, in which a PeopleSoft customer reported this problem, but “they seemed to fix it by changing some PeopleSoft code to implement delete rather than truncate on small temporary tables”. However, my tests show that this probably degraded performance further. The individual delete statements take longer than the truncate operations, and the overall test times increased. Although the truncate operations serialise on the RO enqueue and wait for local writes, this is still better than deleting the data and waiting for the undo information to be written to the redo log. Furthermore, although the truncate operations did not scale well, the delete operations exhibited negative scalability for the same volumes and concurrency: they became bottlenecked on the redo log.
- Using a recycle pool of the same block size as the rest of the database was not effective; possibly because these pools use the same LRU latches.
- Using a larger non-default block size improved performance of truncate. The performance with 32Kb blocks was better than with 16Kb.
- Using a larger uniform extent size also improved performance, both for the truncate operations and for the test as a whole. Fewer, larger extents were involved, and hence less time was spent on CPU and on row cache lock. The overall throughput of the truncate operations still degraded as the number of processes increased, although the throughput of the test as a whole did scale.
- The presence or absence of indexes did not have a significant effect on the relative test timings, and does not alter my advice.
- The effect of truncating with the REUSE STORAGE option is less clear cut. There are no waits on row cache lock because the extents are not deallocated, but on the other hand more time is spent on local write wait because all the dirty blocks have to be written to disk, hence the RO enqueue is held for longer and more time is spent on enq: RO - fast object reuse. If you are using an AUTOALLOCATE tablespace then you would do better to use the REUSE STORAGE option, but generally you would do slightly better to use a larger uniform extent size and not to use the REUSE STORAGE option.
Over time, PeopleSoft batch processing has moved slightly away from SQR and COBOL. These types of process cannot be restarted, and so tables used for temporary working storage within the process can usually be recreated as Global Temporary Tables. This will produce better performance and scalability than any option that involves retaining the permanent table.
However, we are seeing more processing in PeopleSoft applications done with Application Engine. If restart has been disabled for an Application Engine program, then its temporary records can also be rebuilt as Global Temporary Tables, because their contents do not need to be preserved for another session to pick up.
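For example, one instance of the temporary record built earlier might be rebuilt along these lines (ON COMMIT PRESERVE ROWS, because Application Engine commits during the process and the data must survive those commits; the permanent storage clauses no longer apply):

CREATE GLOBAL TEMPORARY TABLE PS_TL_PROF_LIST42
(PROCESS_INSTANCE DECIMAL(10) NOT NULL
,EMPLID VARCHAR2(11) NOT NULL
,EMPL_RCD SMALLINT NOT NULL
,START_DT DATE
,END_DT DATE)
ON COMMIT PRESERVE ROWS
/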
Otherwise, move the temporary records and their indexes to a tablespace with a 32Kb block size. The change of assigned tablespace can be managed within Application Designer, and released like any other patch or customisation. A 32Kb buffer cache must be created in the database instance. Sizing this is going to be a trade-off between how much memory can be taken from other activities to cache just working storage tables, and how much physical I/O you are going to have to wait for. Oracle’s Automatic Shared Memory Management is of no assistance here (until Oracle 11g); the KEEP, RECYCLE, and non-default block size buffer caches must be sized manually (see the Oracle Reference Manual for SGA_TARGET).
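A minimal sketch of the database-side setup; the cache size, file name, and extent size are illustrative assumptions, not recommendations:

ALTER SYSTEM SET db_32k_cache_size = 256M SCOPE=BOTH
/
CREATE TABLESPACE tlwork32k
DATAFILE '/u01/oradata/hcm89/tlwork32k01.dbf' SIZE 512M
BLOCKSIZE 32K
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M
/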
No change to the application code is required. There is no performance improvement to be obtained by customising the application code, either to add the REUSE STORAGE option to the TRUNCATE TABLE commands or to use DELETE commands instead.
Added 4.7.2009: Tests on Oracle 10.2.0.3 have shown that there is an advantage to putting the working storage tables into a non-ASSM tablespace. ASSM introduces additional blocks to map the storage in a segment, and these are also maintained during a truncate. Not having to maintain these blocks saved 25% of the elapsed run time on some T&L batch processes.
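Non-ASSM means creating the tablespace with SEGMENT SPACE MANAGEMENT MANUAL rather than AUTO on the CREATE TABLESPACE command. You can check how your existing tablespaces are configured like this:

SELECT tablespace_name, segment_space_management
FROM   user_tablespaces
ORDER BY tablespace_name
/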
Oracle Bug 4224840/4260477
Added 2.4.2009: Unfortunately, nothing is quite as simple as it seems. If you have a transaction that locks more than 4095 rows in a 32Kb block, you can encounter block corruption (bug 4224840). The fix/workaround in Oracle 10g (bug 4260477) is that the transaction will fail with the message below before the corruption occurs.
There is an excellent explanation of this problem, and a test script to reproduce it, on Hemant's Oracle DBA Blog.
ORA-08007: Further changes to this block by this transaction not allowed
This error will not be resolved until Oracle 11g; however, it does not occur with smaller block sizes. The workaround is either to commit more frequently or to move the table concerned back to a tablespace with a smaller block size. I have run into this with Time & Labor in one particular scenario.
Tuesday, January 06, 2009
PeopleSoft's PSADMIN Role
Recently, I have had a number of conversations about the privileges that are granted to the PeopleSoft OWNERID account (usually SYSADM) via the PSADMIN role.
A posting on the PeopleSoft DBA Forum referenced Tanel Poder's blog entry, Oracle Security: All your DBAs are SYSDBAs and can have full OS access. Essentially, there is a security bug in Oracle where users with IMP_FULL_DATABASE and BECOME_USER can gain SYSDBA access. This hole has been closed by a patch delivered in the July 2008 Critical Patch Update.
The PSADMIN role has both of these privileges, as well as another 22 that are not required for normal operation of a PeopleSoft system.
There are two morals to this story:
- There are good reasons to keep up to date with Oracle's CPUs.
- PSADMIN has privileges that if given to application accounts could compromise database security.
The privileges that are needed for normal operation are:
- ANALYZE ANY
- ALTER SESSION
- CREATE SESSION
- CREATE TABLE
- CREATE TRIGGER
- CREATE VIEW
- SELECT_CATALOG_ROLE
- ALTER USER
Each privilege is discussed in Chapter 3 of PeopleSoft for the Oracle DBA.
Sunday, December 21, 2008
Poor performance of PSPMSESSIONS_VW view affects Performance Monitor System Monitor Component
The System Performance Monitor component (PSPMSYSHEALTH; navigation: PeopleTools -> Performance Monitor -> System Performance) gives an overview of each system monitored by the PeopleSoft Performance Monitor. However, the poor performance of the view PSPMSESSIONS_VW can severely affect this component, to the extent that, as transaction history builds up, the component will not respond within the timeout.
Below is an extract from a PeopleTools trace. What happened is that the query ran until the Tuxedo service timed out. Tuxedo terminated application server process 31944, and spawned a new process (ID 3598). The user session received an error.
PSAPPSRV.31944 (70) 1-783 11.38.38 0.000072
Cur#1.31944.PMONITOR RC=0 Dur=0.000040 COM
Stmt=select count(*) from pspmsessions_vw where pm_agentid = :1
PSAPPSRV.31944 (70) 1-784 11.38.38 0.000008
Cur#1.31944.PMONITOR RC=0 Dur=0.000001
Bind-1 type=19 length=3 value=689
PSAPPSRV.3598 (112) 1-132 11.42.39 314.495972
Cur#1.3598.PMONITOR RC=0 Dur=0.000095 COM
Stmt=SELECT VERSION FROM PSVERSION WHERE OBJECTTYPENAME = 'SYS'
Even when each query finishes within a reasonable amount of time, the query can be run several times by the component.
This is the definition of PSPMSESSIONS_VW delivered by PeopleSoft.
SELECT T3.PM_CONTEXT_VALUE1
, T3.PM_AGENTID
, T3.PM_AGENT_STRT_DTTM
FROM PSPMTRANSHIST T3
WHERE T3.PM_TRANS_DEFN_ID = 116
AND PM_TRANS_STATUS = '1'
AND PM_TRANS_DURATION <> 0
AND T3.PM_CONTEXT_VALUE1 IN (
SELECT T.PM_CONTEXT_VALUE1
FROM PSPMTRANSHIST T
WHERE T.PM_TRANS_DEFN_SET = 1
AND T.PM_TRANS_DEFN_ID = 109
AND T.PM_MON_STRT_DTTM > %TimeAdd(%CurrentDateTimeIn, -720)
AND T.PM_PARENT_INST_ID <> PM_TOP_INST_ID
AND T.PM_CONTEXT_VALUE1 NOT IN (
SELECT T2.PM_CONTEXT_VALUE1
FROM PSPMTRANSHIST T2
WHERE T2.PM_TRANS_DEFN_SET = 1
AND T2.PM_TRANS_DEFN_ID = 108
AND T2.PM_CONTEXT_VALUE1 = T.PM_CONTEXT_VALUE1
AND T2.PM_MON_STRT_DTTM > %TimeAdd(%CurrentDateTimeIn, -720)))
Firstly, accurate object statistics are required on the transaction history table and its indexes.
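For example, something along these lines (assuming the SYSADM schema, and letting Oracle choose the sample size):

BEGIN
  sys.dbms_stats.gather_table_stats(
    ownname          => 'SYSADM',
    tabname          => 'PSPMTRANSHIST',
    estimate_percent => dbms_stats.auto_sample_size,
    cascade          => TRUE  /*also gather statistics on the indexes*/
  );
END;
/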
An additional index is required on PSPMTRANSHIST:
CREATE INDEX PSPPSPMTRANSHIST ON PSPMTRANSHIST
(PM_TRANS_DEFN_SET
,PM_TRANS_DEFN_ID
,PM_CONTEXT_VALUE1
,PM_MON_STRT_DTTM)
TABLESPACE PSINDEX PCTFREE 0
PARALLEL NOLOGGING COMPRESS 3
/
ALTER INDEX PSPPSPMTRANSHIST NOPARALLEL LOGGING
/
Finally, I have changed the view.
- The IN() operators have been changed to WHERE EXISTS(). The new index supports the efficient execution of these sub-queries.
- The sub-queries are now both correlated back to the main query on T3.
- The ROWNUM criteria have been added to restrict the number of rows the sub-queries can return.
SELECT T3.PM_CONTEXT_VALUE1 /*Session ID*/
, T3.PM_AGENTID
, T3.PM_AGENT_STRT_DTTM
FROM PSPMTRANSHIST T3
WHERE T3.PM_TRANS_DEFN_ID = 116 /*Redirect after login*/
AND T3.PM_TRANS_DEFN_SET = 1 /*added*/
AND T3.PM_TRANS_STATUS = '1'
AND T3.PM_TRANS_DURATION <> 0
AND EXISTS(
SELECT 'x'
FROM PSPMTRANSHIST T
WHERE T.PM_TRANS_DEFN_SET = 1
AND T.PM_TRANS_DEFN_ID = 109 /*User Session Began*/
AND T.PM_MON_STRT_DTTM > %TimeAdd(%CurrentDateTimeIn, -720)
AND T.PM_PARENT_INST_ID <> T.PM_TOP_INST_ID
AND T.PM_CONTEXT_VALUE1 = T3.PM_CONTEXT_VALUE1
AND NOT EXISTS(
SELECT 'x'
FROM PSPMTRANSHIST T2
WHERE T2.PM_TRANS_DEFN_SET = 1
AND T2.PM_TRANS_DEFN_ID = 108 /*User Session Ended*/
AND T2.PM_MON_STRT_DTTM > %TimeAdd(%CurrentDateTimeIn, -720)
AND T2.PM_CONTEXT_VALUE1 = T3.PM_CONTEXT_VALUE1
AND ROWNUM <= 1)
AND ROWNUM <= 1)
The effect of these changes is a significant improvement in the performance of the query. I ran a test query for a single agent on a system with over 3 million rows on PSPMTRANSHIST:
- Original: 24895 consistent gets
- New Index: 22547 consistent gets
- and View Changes: 85 consistent gets
Labels: Performance Monitor, PSPMSESSIONS_VW