Wednesday, April 20, 2016

PS360: A Utility to Extract and Present PeopleSoft Configuration and Performance Data

Introduction

Most of a PeopleSoft application is itself stored in the database, in the PeopleTools tables.  The database therefore holds a lot of information about the configuration and operation of a PeopleSoft system.  It also holds performance metrics, particularly for batch processes.
PS360 is a new tool on which I am working.  It uses only SQL scripts to extract that data to HTML files, and packages them up in a zip file so that they can be sent for further analysis.  The style and method are closely modelled on Enkitec's EDB360, by Carlos Sierra, another free tool used for health checks and performance analysis of any Oracle database system.  PS360 aims to gather PeopleSoft-specific information that is not presented by EDB360.  Like EDB360, it runs in Oracle's SQL*Plus tool, and so is only available for use with an Oracle database.
Every section of PS360 is simply the output of a SQL query; sometimes pre-processing is done in an anonymous PL/SQL block.  It does not install anything into the database, and it does not update any table (other than PLAN_TABLE, which is used for temporary working storage).  Each report is in tabular and/or graphical format.  All the charts are produced with the Google Chart API.
The output falls into three sections.  
  • Configuration: Simple reports of certain configuration tables.
  • Performance Metrics: Process Scheduler and Application Engine timings.
  • Consistency Checks: both within the PeopleTools tables and between the PeopleSoft and Oracle catalogues.
Sample PS360 index page generated on a PeopleSoft demo database

Instructions

The tool can be run by anyone with access to the PeopleSoft Owner database user (usually SYSADM).  That user will already have privilege to read the Oracle catalogue.
Download the tool and unzip it into a directory.  Navigate to the ps360 (master) directory, open SQL*Plus and connect as SYSADM.  Execute the script ps360.sql.  The output will be written to a zip file in the same directory.  Unpack that zip file on your own PC and open the file ps360_[database name]_0_index.html with a browser.
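For example, a minimal sketch of those steps from an operating system prompt (the directory name and database alias are illustrative assumptions; substitute your own values):

$ cd ps360-master
$ sqlplus sysadm@hcm92dmo
SQL> @ps360.sql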

Feedback and Enhancements

I am looking for feedback about the tool, and suggestions for further enhancements.
Please either leave comments here or e-mail me at info@go-faster.co.uk.

Tuesday, April 19, 2016

nVision Performance Tuning: Coalescing Tree Leaves

I have blogged recently about the effectiveness of the Tree Performance options in improving the performance of nVision reports.
However, a consequence of the Tree Performance Access Method 'Suppress join; use literal values' is that the SQL generated by nVision will contain a criterion for every leaf on each of the selected nodes of the tree.
nVision Tree Performance Options


There will be an equality condition for each single-value leaf.  I normally set Selector Options to 'Ranges of values (BETWEEN)', so I get a BETWEEN condition for each ranged leaf.  Behind the scenes, Oracle rewrites BETWEEN as a pair of inequalities, so there is no difference in execution, but the SQL generated by nVision is slightly shorter.
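For illustration, a ranged leaf criterion such as the following (values taken from the statement below) is evaluated as a pair of inequalities:

A.DEPTID BETWEEN 'A151' AND 'A152'
/* is evaluated by the optimizer as */
A.DEPTID>='A151' AND A.DEPTID<='A152'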
The following is typical of nVision SQL with these performance options set.
SELECT A.ACCOUNT,SUM(A.POSTED_TOTAL_AMT) 
FROM
 PS_LEDGER A WHERE A.LEDGER='ACTUALS' AND A.FISCAL_YEAR=2015 AND 
  A.ACCOUNTING_PERIOD BETWEEN 1 AND 12 AND A.CURRENCY_CD='GBP' AND 
  A.STATISTICS_CODE=' ' AND (A.BUSINESS_UNIT=
…
) AND (
  A.DEPTID='C500' OR A.DEPTID='C512' OR A.DEPTID='C117' OR A.DEPTID='C157' OR 
  A.DEPTID='C340' OR A.DEPTID='C457' OR A.DEPTID='C510' OR A.DEPTID='A758' OR 
  A.DEPTID='8220' OR A.DEPTID='A704' OR A.DEPTID='A121' OR A.DEPTID='A110' OR 
  A.DEPTID BETWEEN 'A153' AND 'A154' OR A.DEPTID BETWEEN 'A151' AND 'A152' OR 
  A.DEPTID='A724' OR A.DEPTID BETWEEN 'A131' AND 'A133' OR A.DEPTID='A733' OR 
  A.DEPTID='A217' OR A.DEPTID='A437' OR A.DEPTID='A130' OR A.DEPTID='A134' OR 
  A.DEPTID='A703' OR A.DEPTID='A714' OR A.DEPTID='A218' OR A.DEPTID='A226' OR 
  A.DEPTID BETWEEN 'A135' AND 'A138'
…
A consequence of all these criteria is that Oracle can take a long time to parse the SQL statement.  It may only be a few seconds for a single statement, but an nVision report book can consist of thousands of SQL statements.
To produce the following performance profile, I enabled Oracle SQL trace for a report book and profiled the trace.  SQL parse accounted for nearly 8% of the total runtime of this report book, so it can be significant, and it can vary widely.
Event Name                          % Time      Seconds      Calls  ------ Time per Call ------
                                                                        Avg      Min        Max
----------------------------------  ------  -----------  ---------  -------  -------  ---------
FETCH calls [CPU]                    48.2%   3,699.8440s     16,068  0.2303s  0.0000s  178.0640s
db file sequential read              22.5%   1,728.2101s  4,413,352  0.0004s  0.0002s    0.1294s
SQL*Net message from client [idle]    8.0%     617.7042s        926  0.6671s  0.0002s   61.3147s
PARSE calls [CPU]                     7.9%     605.9340s      5,383  0.1126s  0.0000s   11.0500s
…
----------------------------------  ------  -----------  ---------  -------  -------  ---------
Total                               100.0%   7,681.4428s
Reducing the number of criteria in the SQL will reduce the parse time, but the number of criteria is determined by the way the tree leaves are defined.
The leafcoal.sql script seeks to address this by repeatedly merging two consecutive leaves on the same tree node into a single ranged leaf wherever possible. It performs two checks before merging adjacent leaves on the same tree node (see the sketch after this list):
  • There is no intermediate value on the detail field defined in the tree structure record. So, if the detail field is DEPT_TBL.DEPTID, the script checks that the merged leaf would not pick up any values of DEPTID on PS_DEPT_TBL that are not already selected by the existing leaves.
  • There is no leaf on another node of the tree that would intersect with the merged leaf.
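The following is only an illustrative sketch of the first check (the DEPTID values are hypothetical; the actual script generates the corresponding SQL dynamically from the tree structure definition).  Two adjacent leaves 'A134' and 'A135'-'A138' can only be coalesced into a single range 'A134'-'A138' if no department exists strictly between the end of the first leaf and the start of the second, i.e. if this query returns no rows:

SELECT d.deptid
FROM   ps_dept_tbl d
WHERE  d.deptid > 'A134'   /*end of the first leaf*/
AND    d.deptid < 'A135';  /*start of the second leaf*/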

Instructions

leafcoal.sql is written as an anonymous PL/SQL block, so there is nothing to install.  It should be run in SQL*Plus, connected as the PeopleSoft owner ID (usually SYSADM).  The user may need to make some adjustments to the script.  As delivered, it runs in a test mode that does not update the database but reports on what it would do.  Change k_testmode to FALSE to make it update the database.
k_testmode     CONSTANT BOOLEAN := TRUE;  /*set this false to perform update*/
The level of output emitted depends on the variable l_debug_level:
l_debug_level  INTEGER := 4;
  • 1. end of processing message 
  • 2. start of processing for tree 
  • 3. number of leaves in tree and number of leaves coalesced 
  • 4. details of leaves being compressed 
  • 5. start and end of each procedure 
  • 6. parameters passed to functions 
  • 7. number of rows updated/deleted during coalesce 
  • 8. dynamic SQL statement 
The script reports on the work it has done.  It does not commit its updates; that is left for the user, who can either commit or roll back.

.(3)Processing SHARE, ,XXX_ACCOUNT,151201                                       
.(4)634 nodes, 2636 leaves                                                      
.(4)1358 leaves coalesced (52%)                                                 
…
(1)Commit changes or rollback                                       
The query at the end of the script determines which trees will be processed, and may need to be changed as required.  For example, you might choose to coalesce the leaves on:

  • specific trees,
  • only the most recent effective-dated trees,
  • only trees that use the literal values performance option.

  FOR i IN (
    SELECT DISTINCT d.setid, d.setcntrlvalue, d.tree_name, d.effdt
    FROM   pstreedefn d    
    ,      pstreestrct s
    ,      psrecfielddb f
    WHERE  d.tree_strct_id = s.tree_strct_id
    AND    s.node_fieldname = 'TREE_NODE'
--  AND    d.TREE_ACC_METHOD = 'L' --literal values
    AND    s.dtl_recname = f.recname 
    AND    s.dtl_fieldname = f.fieldname
--  AND    tree_name = 'XXX_ACCOUNT'
  ) LOOP

Conclusion 

The number of leaves coalesced depends entirely on how the trees have been built.  At one customer it produced a reduction of over 50%; at another it was only 10%.  The reduction in the number of leaves produces a corresponding reduction in SQL parse time during nVision reports.

Friday, March 25, 2016

Interview with PeopleSoft Administrator Podcast

I recently recorded an interview with Dan Iverson and Kyle Benson for the PeopleSoft Administrator Podcast. It has been spread over three episodes. There is lots of other good stuff on the website and other episodes that are well worth listening to.

  • (25 March 2016) #21 - Temporary Tables
  • (8 April 2016) #23 - The Application Server
  • (15 April 2016) #24 - Application Server Tuning
You can listen to the podcast on psadmin.io, or subscribe with your favourite podcast player, or in iTunes.

Sunday, March 06, 2016

nVision Performance Tuning: General Principles

Over the years I have dealt with performance problems in nVision reporting on General Ledger in various Financials systems, in various kinds of businesses.  Different businesses use nVision very differently and face different challenges, but I have developed an approach that works well at most customers.  I have collected that advice into a document that I have recently published on my website (http://www.go-faster.co.uk/docs.htm#nVision Tuning.Generic.pdf).

The key points are:
  • Indexing
    • Effective indexing of LEDGER and LEDGER_BUDG tables to match the analysis criteria of the reports.
    • Enhanced indexing of the PSTREESELECT tables, so that the indexes fully satisfy the queries without the need to visit the tables.
  • Collection of statistics and extended statistics on the PSTREESELECT tables (see the sketch after this list).
  • Using the nVision performance options 
    • Use static selectors instead of dynamic selectors.  It is difficult to maintain up-to-date optimizer statistics on the selector tables when dynamic selectors are used.
    • Simplify SQL statements by replacing joins with literal criteria.
    • Updated 11.4.2016: reduce SQL parse time by coalescing leaves on trees.
  • I also suggest use of Oracle Fine Grained Auditing to 
    • enhance instrumentation,
    • detect the use of dynamic selectors.
  • Appropriate partitioning of the LEDGER and LEDGER_BUDG tables.
  • Archiving.
    • If the partitioning option is not available, then I strongly recommend purging as much historical data as possible from the LEDGER and LEDGER_BUDG tables.
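By way of illustration only, extended statistics on one of the tree selector tables might be created as follows.  The choice of table and column group is an assumption for the purpose of the example (PSTREESELECT10 holds selectors for 10-character fields such as ACCOUNT); validate it against your own system before use.

-- Create a column group on the range columns of the tree selector table, then
-- gather statistics so that the new extension is populated.
SELECT dbms_stats.create_extended_stats('SYSADM','PSTREESELECT10','(RANGE_FROM_10,RANGE_TO_10)')
FROM   dual;
EXECUTE dbms_stats.gather_table_stats(ownname=>'SYSADM',tabname=>'PSTREESELECT10')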
Caveat: This is a general approach, and the document makes general statements.  Every customer is different, because their data is different and often their method of analysis differs.  There is always something that requires adjustment or an exception to the general approach.  Your mileage will vary!

    Thursday, February 11, 2016

    Implementing Index Compression (and other Physical Storage Options) via Application Designer

    There are some performance improvements that require physical storage options to be set on tables or indexes.
    One particular technique that I will take as an example for this article is index compression.  A good example in PeopleSoft is the tree node table, PSTREENODE.  It drives many security and hierarchical queries.  It is not updated very frequently, only when new trees are brought on.  Many of its indexes are good candidates for compression.
    Index key compression works by storing repeated column values only once per index leaf block.  Each distinct combination of values in the columns up to the prefix length is stored in a symbol table within the block.  The choice of prefix length can significantly affect the compression achieved.  Oracle can calculate the optimal prefix length using
    ANALYZE INDEX … VALIDATE STRUCTURE
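    For example, after validating the structure of an index, the recommended prefix length and the estimated space saving can be read from the INDEX_STATS view, which holds the result of the most recent ANALYZE in the current session.  Note that ANALYZE … VALIDATE STRUCTURE locks the index while it runs, so do this at a quiet time.

    ANALYZE INDEX ps_pstreenode VALIDATE STRUCTURE;

    SELECT name              /*index name*/
    ,      opt_cmpr_count    /*optimal compression prefix length*/
    ,      opt_cmpr_pctsave  /*estimated % space saving at that prefix length*/
    FROM   index_stats;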
    I have written a script, calc_opt_comp.sql, to make this slightly easier.  The following is the output on my demo database, but I get similar results on production systems.
                                              Optimal
                                          Compression                       Weighted
                                               Prefix               Current  Average
    Table Name         Index Name              Length FREQ PARTS     Blocks Saving %
    ------------------ ------------------ ----------- ---- ----- ---------- --------
    PSTREENODE         PSAPSTREENODE                4    1     0        280     39.0
                       PSBPSTREENODE                3    1     0        264     30.0
                       PSCPSTREENODE                1    1     0        120      7.0
                       PSDPSTREENODE                4    1     0        256     61.0
                       PSFPSTREENODE                2    1     0        256     67.0
                       PSGPSTREENODE                3    1     0        400     49.0
                       PS_PSTREENODE                4    1     0        256     44.0 
    However, I want to make sure that, should the table need to be rebuilt in the future, PeopleTools will generate the DDL with the appropriate settings.  The same principle applies to any other physical storage option.  I would always recommend that the compression prefix lengths be incorporated into the PeopleTools DDL overrides in Application Designer (figure 1).  While you could extend the DDL model and add another override parameter for compression, I simply append it to the PCTFREE setting.
    Figure 1. Index DDL Overrides
    However, there is a catch.  PeopleTools has never examined DDL overrides when determining whether there is a difference between the PeopleSoft and database data dictionaries, even though that comparison must be platform specific.  DDL overrides and DDL models are just strings held in the PeopleTools tables, and they can be extended (or even removed) by customers.  I assume that is the reason: it was not felt possible to check them reliably.
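    To illustrate the point that the overrides are just strings, they can be queried from the PeopleTools tables.  This is only a sketch; I am assuming here that the index DDL override parameters are held in PSIDXDDLPARM (verify the table and column names on your PeopleTools release):

    SELECT recname, indexid, platformid, parmname, parmvalue
    FROM   psidxddlparm       /*index DDL override parameters*/
    WHERE  recname = 'PSTREENODE';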
    So, if the build settings (figure 2) are 'recreate index only if modified', which is the default, Application Designer will not generate a DDL script, nor execute any DDL.
    Figure 2. Build Settings
    The workaround has always been to set the index creation option in the build settings to 'recreate index if it already exists'.  However, we then discover the override doesn't appear in the DDL.  As Application Designer has not detected a difference between PeopleTools and the database, it has instead used the Oracle DBMS_METADATA package to generate the storage clause from from the index that exists in the database.  Hence the DDL contains additional keywords not in the PeopleSoft DDL model.
    CREATE UNIQUE  INDEX PS_PSTREENODE ON PSTREENODE (SETID,
       SETCNTRLVALUE,
       TREE_NAME,
       EFFDT,
       TREE_NODE_NUM,
       TREE_NODE,
       TREE_BRANCH) 
      PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS 
      STORAGE(INITIAL 40960 NEXT 106496 MINEXTENTS 1 MAXEXTENTS 2147483645
      PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
      BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
      TABLESPACE "PSINDEX"
    /
    ALTER INDEX PS_PSTREENODE NOPARALLEL LOGGING
    /
    
    I have only checked this behaviour on PeopleTools 8.54, but use of DBMS_METADATA was introduced in PeopleTools 8.51, so this problem has probably existed since then.
    SELECT dbms_metadata.get_ddl('INDEX','PS_PSTREENODE')
    FROM dual
    
    DBMS_METADATA.GET_DDL('INDEX','PS_PSTREENODE')
    --------------------------------------------------------------------------------
      CREATE UNIQUE INDEX "SYSADM"."PS_PSTREENODE" ON "SYSADM"."PSTREENODE" ("SETID"
    , "SETCNTRLVALUE", "TREE_NAME", "EFFDT", "TREE_NODE_NUM", "TREE_NODE", "TREE_BRANCH")
      PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
      STORAGE(INITIAL 40960 NEXT 106496 MINEXTENTS 1 MAXEXTENTS 2147483645
      PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
      BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
      TABLESPACE "PSINDEX"
    
    However, if I drop the index and then regenerate the DDL script in Application Designer,
    DROP INDEX ps_pstreenode
    /
    PeopleTools generates the CREATE INDEX statement with the compression specified in the PeopleTools tables.
    …
    CREATE UNIQUE  INDEX PS_PSTREENODE ON PSTREENODE (SETID,
       SETCNTRLVALUE,
       TREE_NAME,
       EFFDT,
       TREE_NODE_NUM,
       TREE_NODE,
       TREE_BRANCH) TABLESPACE PSINDEX STORAGE (INITIAL 40000 NEXT 100000
     MAXEXTENTS UNLIMITED PCTINCREASE 0) PCTFREE 0 COMPRESS 4 PARALLEL
     NOLOGGING
    /
    ALTER INDEX PS_PSTREENODE NOPARALLEL LOGGING
    /
    …
    
    Rather than go through the business of dropping each index just so that Application Designer will generate the correct script to recreate it, I would suggest implementing the change manually by rebuilding the indexes.
    ALTER INDEX PSAPSTREENODE REBUILD TABLESPACE PSINDEX COMPRESS 4;
    ALTER INDEX PSBPSTREENODE REBUILD TABLESPACE PSINDEX COMPRESS 3;
    ALTER INDEX PSCPSTREENODE REBUILD TABLESPACE PSINDEX COMPRESS 1;
    ALTER INDEX PSDPSTREENODE REBUILD TABLESPACE PSINDEX COMPRESS 4;
    ALTER INDEX PSFPSTREENODE REBUILD TABLESPACE PSINDEX COMPRESS 2;
    ALTER INDEX PSGPSTREENODE REBUILD TABLESPACE PSINDEX COMPRESS 3;
    ALTER INDEX PS_PSTREENODE REBUILD TABLESPACE PSINDEX COMPRESS 4;
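    As a quick check after the rebuilds (this is not part of the process described above), the resulting compression settings can be confirmed from the data dictionary:

    SELECT index_name, compression, prefix_length
    FROM   user_indexes
    WHERE  table_name = 'PSTREENODE'
    ORDER BY index_name;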
    

    Conclusion

    This behaviour makes implementing physical attributes through Application Designer more complicated.  I would still recommend recording the settings in Application Designer, if only because it provides documentation, but it may then be easier to implement the changes themselves manually.