Using any RDBMS (not just Oracle) as a temporary working store is not efficient. The database will certainly store the data, but it goes to a lot of time and trouble to make sure that it is stored securely and can be recovered in the event of a disaster. In an Oracle database, every insert into these working storage tables generates redo, and the user process must wait for that redo to be written to disk when the session commits. Excessive redo generation can become a performance issue in its own right; it also affects a Data Guard DR solution and the time taken to perform incremental backup and recovery. Yet it is not necessary to be able to recover the data in the working storage tables, only the final result data.
Effect of Frequently Truncating Tables
I have observed significant amounts of time spent truncating working storage tables during batch programs, especially when concurrent processes frequently truncate them. This is a typical example from a Statspack report:
Top 5 Timed Events                              Avg %Total
~~~~~~~~~~~~~~~~~~                             wait   Call
Event                           Waits Time (s)   (ms)   Time
----------------------------- ------- -------- ------ ------
local write wait                5,756    3,955    687   52.6
enq: RO - fast object reuse     1,500    2,685   1790   35.7
From Metalink, Oracle Note 334822.1 sums up the situation: "Processes that involve temporary tables being truncated and repopulated in multiple, concurrent batch streams may present this situation. The underlying problem is we have to write the object's dirty buffers to disk prior to actually truncating or dropping the object. This ensures instance recoverability and avoids a stuck recovery. It seems at first glance perfectly reasonable to simply truncate a temporary table, then repopulate for another usage. And then to do the temporary populate/truncate operations in concurrent batches to increase throughput. However, in reality the concurrent truncates get bogged down as dbwr gets busy flushing those dirty block buffers from the buffer cache. You will see huge CI enqueue waits. The multiple truncate operations in concurrent streams absolutely kill throughput. This is specially critical with large buffers.
"There was also a discussion in Bug: 4147840 (non-publish) where a PeopleSoft process was causing this behaviour because of the above explanation and they seemed to fix it by changing some PeopleSoft code to implement delete rather than truncate on small temporary tables."
But if we go back to deleting rows from these tables, we effectively copy the deleted data into the redo log: the undo generated for each deleted row, and the redo that protects that undo, contain the row itself. Furthermore, deleting data does not reset the high water mark on the table.
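To make the trade-off concrete, here is a minimal sketch contrasting the two ways of clearing a working storage table; the table name PS_WRK_TMP and the process instance value are hypothetical examples, not delivered PeopleSoft objects.

-- Clearing by DELETE: every deleted row generates undo, plus redo to protect
-- that undo, and the high water mark is left where it was, so subsequent
-- full scans still visit all the emptied blocks.
DELETE FROM PS_WRK_TMP WHERE PROCESS_INSTANCE = 1234;
COMMIT;

-- Clearing by TRUNCATE: only a trivial amount of redo, and the high water
-- mark is reset - but DBWR must first flush the object's dirty buffers from
-- the buffer cache, which is the source of the local write wait and RO
-- enqueue time seen above when many streams truncate concurrently.
TRUNCATE TABLE PS_WRK_TMP;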
GTTs
One option is to recreate the working storage tables as Oracle Global Temporary Tables (GTTs). There are two major advantages to GTTs, but also a number of restrictions to bear in mind.
- They do not generate redo, although they do generate undo. Most of the time batch processes insert data into these tables, and the undo for an insert is just the instruction to delete the row again, so it is much smaller. If the table is cleared by truncation rather than by deleting the data, virtually no undo is generated at all. The problems with local write wait and contention on the RO enqueue do not occur with GTTs.
- For each session that references the same GTT, Oracle creates a separate physical copy of the table in the temporary segment. The Global Payroll engine is written in COBOL, and so it uses ordinary PeopleSoft records for working storage. If multiple 'streams' are used to run different sets of data simultaneously, the processes share the same working storage tables. Each session has to scan through and discard working storage data that belongs to other sessions, and when updating or deleting from these tables Oracle may have to generate read-consistent copies of the data blocks. The same happens in Application Engine when there are no unlocked instances of working storage tables available. This effect can severely restrict the ability to scale the number of concurrent processes. If these tables are recreated as GTTs, these problems disappear, and the number of payroll streams can be scaled without creating any contention between them.
- Because a copy of the GTT is created for each referencing database session, GTTs cannot be used to pass data between database sessions, nor can they be used for any on-line process in PeopleSoft, because there is no guarantee which application server process will handle which service request. So, for example, you cannot use GTTs to hold dynamic nVision selectors if you plan to run the nVision Windows client in 3-tier mode. The service request that populates the dynamic selector may be executed by one PSQRYSRV process, but the service to run the query may be handled by another.
- A GTT persists until either the end of the transaction or the end of the session, depending on how it is created. In PeopleSoft, GTTs should be created ON COMMIT PRESERVE ROWS, because we do not want to lose the data when the process commits (see the sketch after this list).
- Although it is possible to collect optimizer statistics on GTTs, only one set of statistics can be stored on each table, so one session could end up using statistics collected by another session (see demonstration of this behaviour). It is probably better to remove the statistics from the GTTs and rely upon Optimizer Dynamic Sampling (I discussed this in my article in Issue 31 of Oracle Scene magazine).
- Some AE programs can be restarted after a failure. They save their state information in the database as they progress, and can be restarted from the point at which they last committed. However, working storage data will not be preserved if it is held in a GTT, either because the database session will have terminated, or because the restart will be handled by a different PSAESRV server process. Therefore, restart must be disabled before GTTs can be introduced, which involves a customisation to the program.
- Ideally, AE programs that use GTTs should be run by the stand-alone psae executable rather than a PSAESRV server process. PSAESRV server processes were introduced in PeopleTools 8.44 when the Process Scheduler became a fully fledged Tuxedo domain. These Tuxedo server processes each maintain a persistent connection to the database, so they do not have to sign into the database for every AE program request. They are only recommended for systems with a very large number of short AE program requests, such as CRM. However, it is advantageous for the GTTs to be cleared out at the end of each process, otherwise they continue to occupy space in the temporary segment; even delivered AE programs are not always conscientious about this.
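To illustrate the first two points and the ON COMMIT clause, this is a minimal sketch of a GTT created by hand. The table PS_WRK_TMP and its columns are hypothetical, and the statistics handling is just one way of following the advice above, not delivered PeopleSoft code.

-- Each database session that references this table gets its own private copy
-- in its temporary segment; inserts generate undo but no redo.
CREATE GLOBAL TEMPORARY TABLE PS_WRK_TMP
(PROCESS_INSTANCE NUMBER(10)   NOT NULL
,EMPLID           VARCHAR2(11) NOT NULL
) ON COMMIT PRESERVE ROWS;  -- rows survive commits, so the process can commit
                            -- without losing its working data

-- Only one set of optimizer statistics can be held for the table, so delete
-- them (locking them as well is my extra precaution to stop them being
-- re-gathered) and let Optimizer Dynamic Sampling do the work at parse time.
BEGIN
  dbms_stats.delete_table_stats(ownname => user, tabname => 'PS_WRK_TMP');
  dbms_stats.lock_table_stats  (ownname => user, tabname => 'PS_WRK_TMP');
END;
/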
Take, for example, the Time & Labor working storage record TL_IPT3. The build script generated by Application Designer creates the table PS_TL_IPT3 as expected, but the record is used by the Application Engine program TL_TIMEADMIN, which specifies 10 instances, and there are 3 global instances, so it also creates PS_TL_IPT31 through PS_TL_IPT313. So I get 14 copies of the table and its indexes. It will therefore come as no surprise that Temporary Records are one of the major reasons why there are so many tables in a PeopleSoft database. In a vanilla demo HR database, there are over 6400 temporary tables out of a total of 17600 tables!

Now, what if, having met the above criteria, I want to recreate my Temporary Records as Global Temporary Tables? It is tempting to think that there is no longer any need to have multiple instances of the record, but there is some special behaviour for temporary records coded into PeopleTools. If you run out of instances of temporary tables, you use the table without the numeric suffix, and this is treated as a shared temporary table. Thus temporary records should have the field PROCESS_INSTANCE as a part of their unique key, and if they do not you get a warning when you try to save them in Application Designer.
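You can see all of these copies in the data dictionary; a quick check (assuming the tables are owned by SYSADM) is sketched below. The unsuffixed PS_TL_IPT3 is the shared instance that is used when a process cannot lock a dedicated one.

-- List the 14 copies of the working storage table built by Application
-- Designer: PS_TL_IPT3 (the shared instance) and PS_TL_IPT31 to PS_TL_IPT313.
SELECT table_name
FROM   all_tables
WHERE  owner = 'SYSADM'
AND    table_name LIKE 'PS_TL_IPT3%'
ORDER  BY table_name;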
However, there is also a change in the behaviour of the %TruncateTable macro. It is fast and efficient to clear a working storage table with TRUNCATE rather than DELETE: it generates only a trivial amount of redo and resets the high water mark on the table. The PeopleCode macro %TruncateTable evaluates to a TRUNCATE command on database platforms that support it, but when a process falls back on the shared instance of the record (the table without a numeric suffix), the same macro evaluates instead to a DELETE with a PROCESS_INSTANCE criterion, as the two trace extracts below show.
%TruncateTable (%Table(TL_IPT3))
-- 11.44.57 .......(TL_TA_RULES.C1_00200.Step02) (SQL)
TRUNCATE TABLE SYSADM.PS_TL_IPT34
/
-- Row(s) affected: 0
COMMIT
/
-- 11.52.30 .......(TL_TA_RULES.C1_00200.Step02) (SQL)
DELETE FROM PS_TL_IPT3 WHERE PROCESS_INSTANCE = 1085
/
-- Row(s) affected: 0
COMMIT
/
The following query identifies the Temporary Records where restart has been disabled on every Application Engine program that references them, together with the total number of instances (global plus program-specific) that will be built:

SELECT r.recname
,      o.temptblinstances+NVL(c.temptblinstances,0) temptblinstances
FROM   psoptions o
,      psrecdefn r
       LEFT OUTER JOIN pstemptblcntvw c ON c.recname = r.recname
WHERE  r.rectype = 7
AND    NOT EXISTS( --omit tables where restart not disabled on referencing AE
       SELECT 'x'
       FROM   psaeappltemptbl t
       ,      psaeappldefn a
       WHERE  t.recname = r.recname
       AND    a.ae_applid = t.ae_applid
       AND    a.ae_disable_restart = 'N' --restart not disabled
       AND    ROWNUM = 1)
The PeopleSoft DDL models for Oracle also need to be extended with some new parameters before Application Designer can build GTTs.

1: Create Table

CREATE **GLOBTEMP** TABLE [TBNAME] ([TBCOLLIST])
**GTSPCOM1**
TABLESPACE [TBSPCNAME]
STORAGE (INITIAL **INIT** NEXT **NEXT**
MAXEXTENTS **MAXEXT** PCTINCREASE **PCT**)
PCTFREE **PCTFREE** PCTUSED **PCTUSED**
**GTSPCOM2**;
2: Create Index
CREATE [UNIQUE] **BITMAP** INDEX [IDXNAME] ON [TBNAME] ([IDXCOLLIST])
**GTSPCOM1**
TABLESPACE **INDEXSPC**
STORAGE (INITIAL **INIT** NEXT **NEXT**
MAXEXTENTS **MAXEXT** PCTINCREASE **PCT**)
PCTFREE **PCTFREE**
PARALLEL NOLOGGING
**GTSPCOM2**;
The three new parameters (in bold) should default to a blank string in the DDL model.
- GLOBTEMP will be used to specify the keyword GLOBAL TEMPORARY.
- GTSPCOM1 will be used to open a comment in front of the physical table attributes, because it is not legitimate to specify these parameters on a GTT. On the create table model, this variable will also be used to specify the GTT clause ON COMMIT PRESERVE ROWS.
- GTSPCOM2 will be used to close the comment at the end of physical table attributes.
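So, for a record that is to become a GTT, the object-level DDL overrides might be set to something like the following. This is my reading of the generated example later in this article, not a set of delivered values; on ordinary records all three parameters stay blank.

GLOBTEMP : GLOBAL TEMPORARY
GTSPCOM1 : ON COMMIT PRESERVE ROWS /*   (on the create index model, just the comment opener /* )
GTSPCOM2 : */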
These DDL overrides must be set on every temporary record and on all of its indexes. It would be very tedious to add them all manually, so I have produced two PL/SQL scripts that update the PeopleTools tables directly:
- globtemp.sql: implements GTTs for all temporary records where there is no restartable AE program that references the table.
- globtemp-undo.sql: reverts the record to normal tables where there is no restartable AE program (so that changes can be deliberately undone for testing).
This is an example of the DDL that is then generated for one of the converted records:

DROP TABLE PS_GPCHAL074_TMP
/
CREATE GLOBAL TEMPORARY TABLE PS_GPCHAL074_TMP (PROCESS_INSTANCE DECIMAL(10) NOT NULL,
   FROMDATE DATE,
   TO_DATE DATE) ON COMMIT PRESERVE ROWS /* TABLESPACE GPAPP STORAGE (INITIAL 40000
   NEXT 100000 MAXEXTENTS UNLIMITED PCTINCREASE 0) PCTFREE 10 PCTUSED 80 */
/
CREATE UNIQUE INDEX PS_GPCHAL074_TMP ON PS_GPCHAL074_TMP (PROCESS_INSTANCE, FROMDATE,
   TO_DATE) /* TABLESPACE PSINDEX STORAGE (INITIAL 40000 NEXT 100000 MAXEXTENTS
   UNLIMITED PCTINCREASE 0) PCTFREE 10 PARALLEL NOLOGGING */
/
ALTER INDEX PS_GPCHAL074_TMP NOPARALLEL LOGGING
/
Unfortunately, the last ALTER INDEX command in the script raises an Oracle error: ORA-14451: unsupported feature with temporary table. This command is added automatically to the create index commands and cannot be removed, so the error should simply be ignored.
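When the build script is run in SQL*Plus, the failure looks something like this (the exact output will vary):

SQL> ALTER INDEX PS_GPCHAL074_TMP NOPARALLEL LOGGING;
ALTER INDEX PS_GPCHAL074_TMP NOPARALLEL LOGGING
*
ERROR at line 1:
ORA-14451: unsupported feature with temporary table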
4 comments:
In the process of a Tools upgrade we were stuck at the SETINDEX SQR report, so we wanted to rebuild indexes manually. After generating an index recreation script, which has DROP and CREATE commands, we ran it. It fails with: unsupported feature with temporary table. But how come the same tables have indexes, which we were able to drop? I am totally confused.
1. SETINDEX.sqr takes the physical storage options (INITIAL, NEXT, MAXEXTENTS, PCTINCREASE, PCTFREE, PCTUSED, TABLESPACE) from the Oracle catalogue and puts them into the PeopleSoft data dictionary as object-level DDL overrides on the indexes. However, these values are null in the catalogue for GTTs and their indexes, and you then can't update the DDL override to NULL because the PeopleSoft columns are not nullable.
2. You cannot use the standard PeopleSoft DDL model to build either GTTs or their indexes because, as you discovered, you are not allowed to specify physical storage options. My workaround for this is to add some extra variables to the create index DDL model. Then these parameters can be set to the start and end of multi-line comments in the DDL override, thus commenting out the unwanted parameters. It's messy, but it works.
CREATE [UNIQUE] **BITMAP** INDEX [IDXNAME] ON [TBNAME] ([IDXCOLLIST])
**GTSPCOM1**
TABLESPACE **INDEXSPC**
STORAGE (
INITIAL **INIT** NEXT **NEXT**
MAXEXTENTS **MAXEXT** PCTINCREASE **PCT**)
PCTFREE **PCTFREE**
**GTSPCOM2**;
This is discussed on pages 136-7 of PeopleSoft for the Oracle DBA.
I have successfully changed the DDL model, created the project and run the PL/SQL to make the changes and build the project. However, now when I build the tables in Application Designer, it doesn't build the T1, T2, T3 instances and so on. Do you have any suggestions? I would like to get this implemented because we exhibit the same symptoms of crippling redo generation and CI enqueue waits due to high levels of concurrent truncates and DML on temp tables.
Additional instances of a table are only built for temporary records. I need more information. Please contact me directly or post this query to the PeopleSoft DBA Forum.