Wednesday, December 20, 2017

nVision Performance Tuning 12: Hinting nVision with SQL Profiles

This blog post is part of a series that discusses how to get optimal performance from PeopleSoft nVision reporting as used in General Ledger.  It is a PeopleSoft specific version of a posting on my Oracle blog.

As I explained earlier in this series, it is not possible to add hints to nVision directly, and the dynamic nature of the nVision SQL means that it is not possible to use SQL Patches.  nVision SQL statements contain literal values and never use bind variables.  When dynamic selectors are used, the SELECTOR_NUM will be different for every execution, so a SQL_ID found in one report will not be seen again in another report.  Even static selector numbers will change after the tree is updated or when a new tree is created.
It is possible to use SQL Profiles to introduce hints because they can optionally match on the force matching signature of a SQL statement.  SQL statements that differ only in the literal values they contain will have different SQL_IDs but the same force matching signature.  Although you will still have a lot of force matching signatures, there will be far fewer of them than SQL_IDs.  Picking out the signatures that account for the most elapsed execution time and creating profiles for them is manageable.
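For example, a query like this against the AWR lists the force matching signatures that account for the most elapsed time (a sketch: it assumes the statements have been captured by AWR snapshots, and the module filter is application specific):
SELECT   s.force_matching_signature
,        COUNT(DISTINCT s.sql_id) sql_ids
,        SUM(s.elapsed_time_delta)/1e6 elapsed_secs
FROM     dba_hist_sqlstat s
WHERE    s.module = 'RPTBOOK'              /*Application Specific*/
AND      s.force_matching_signature > 0
GROUP BY s.force_matching_signature
ORDER BY elapsed_secs DESC
FETCH FIRST 20 ROWS ONLY
/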
Note: SQL Profiles require the Tuning Pack to be licensed.
As far as is possible, good nVision performance should be achieved by setting appropriate tree performance options at tree level.  These are global settings.  You may find that a particular setting on a particular tree is not optimal for all reports.  You may then choose to override the tree-level setting in specific layouts.  You may also find that you still need hints to control execution plans.
In particular, parallel query can be an effective tactic in nVision performance tuning.  However, you should not simply put a degree of parallelism on PS_LEDGER or PS_LEDGER_BUDG because that will invoke parallelism in many other processes.  I have found that even putting a degree of parallelism on a summary ledger table can easily result in too many concurrent parallel queries.  On OLTP systems, such as PeopleSoft, I recommend that parallelism should be used sparingly and in a highly controlled and targeted fashion.

Example

Let's take the following nVision query as an example.
SELECT L2.TREE_NODE_NUM,L3.TREE_NODE_NUM,SUM(A.POSTED_TOTAL_AMT) 
FROM PS_XX_SUM_CONSOL_VW A, PSTREESELECT05 L2, PSTREESELECT10 L3 
WHERE A.LEDGER='S_USMGT' 
AND A.FISCAL_YEAR=2017 
AND A.ACCOUNTING_PERIOD BETWEEN 0 AND 12 
AND (A.DEPTID BETWEEN 'A0000' AND 'A8999' OR A.DEPTID BETWEEN 'B0000' AND 'B9149' 
OR A.DEPTID='B9156' OR A.DEPTID='B9158' OR A.DEPTID BETWEEN 'B9165' AND 'B9999' 
OR A.DEPTID BETWEEN 'C0000' AND 'C9999' OR A.DEPTID BETWEEN 'D0000' AND 'D9999' 
OR A.DEPTID BETWEEN 'G0000' AND 'G9999' OR A.DEPTID BETWEEN 'H0000' AND 'H9999' 
OR A.DEPTID='B9150' OR A.DEPTID=' ') 
AND L2.SELECTOR_NUM=10228 
AND A.BUSINESS_UNIT=L2.RANGE_FROM_05 
AND L3.SELECTOR_NUM=10231 
AND A.ACCOUNT=L3.RANGE_FROM_10 
AND A.CHARTFIELD1='0012345' 
AND A.CURRENCY_CD='GBP' 
GROUP BY L2.TREE_NODE_NUM,L3.TREE_NODE_NUM
/
We can tell from the equality join conditions that the two selectors still joined to the ledger view are dynamic selectors.
A third selector on DEPTID has been suppressed with the 'use literal values' performance option.  The number of DEPTID predicates in the statement will depend on the tree and the node selected for the report.  Note that if these change, the statement will no longer force match the same profile; SQL profiles might suddenly cease to work due to a tree or selection criteria change.
This is the plan I get initially, without a profile.  It doesn't perform well.
Plan hash value: 808840077
-----------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                                     | Name              | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
-----------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                              |                   |       |       | 10408 (100)|          |       |       |
|   1 |  HASH GROUP BY                                |                   |   517 | 50666 | 10408   (1)| 00:00:01 |       |       |
|   2 |   HASH JOIN                                   |                   |   517 | 50666 | 10407   (1)| 00:00:01 |       |       |
|   3 |    PARTITION RANGE SINGLE                     |                   |   731 | 13158 |     3   (0)| 00:00:01 | 10228 | 10228 |
|   4 |     INDEX FAST FULL SCAN                      | PSAPSTREESELECT05 |   731 | 13158 |     3   (0)| 00:00:01 | 10228 | 10228 |
|   5 |    HASH JOIN                                  |                   |   518 | 41440 | 10404   (1)| 00:00:01 |       |       |
|   6 |     PARTITION RANGE SINGLE                    |                   |   249 |  5727 |     2   (0)| 00:00:01 | 10231 | 10231 |
|   7 |      INDEX FAST FULL SCAN                     | PSAPSTREESELECT10 |   249 |  5727 |     2   (0)| 00:00:01 | 10231 | 10231 |
|   8 |     PARTITION RANGE ITERATOR                  |                   |  7785 |   433K| 10402   (1)| 00:00:01 |    28 |    40 |
|   9 |      TABLE ACCESS BY LOCAL INDEX ROWID BATCHED| PS_X_LEDGER_ACCTS |  7785 |   433K| 10402   (1)| 00:00:01 |    28 |    40 |
|  10 |       SORT CLUSTER BY ROWID BATCHED           |                   |  5373 |       |  5177   (1)| 00:00:01 |       |       |
|  11 |        INDEX SKIP SCAN                        | PS_X_LEDGER_ACCTS |  5373 |       |  5177   (1)| 00:00:01 |    28 |    40 |
-----------------------------------------------------------------------------------------------------------------------------------
These are the hints I want to introduce (on Oracle 12c).
SELECT /*+OPT_PARAM('parallel_degree_policy','AUTO') OPT_PARAM('parallel_min_time_threshold',2) 
OPT_PARAM('parallel_degree_limit',4) REWRITE PX_JOIN_FILTER(PS_XX_SUM_GCNSL_MV)*/…
  • Use automatic parallel degree, statement queuing and in-memory parallel execution.
  • Invoke parallelism if the statement is estimated to run for at least 2 seconds.
  • However, I will also limit the automatic parallelism to a degree of 4.
  • Force materialized view rewrite.
  • Use a Bloom filter when joining to the materialized view.
I have created a data-driven framework to create the profiles, starting with a working storage table that holds details of each force matching signature for which I want to create a profile.
CREATE TABLE dmk_fms_profiles
(force_matching_signature    NUMBER NOT NULL
,sql_id                      VARCHAR2(13) 
,plan_hash_value             NUMBER
,module                      VARCHAR2(64)
,action                      VARCHAR2(64)
,report_id                   VARCHAR2(32) /*Application Specific*/
,tree_list                   CLOB         /*Application Specific*/
,sql_profile_name            VARCHAR2(30)
,parallel_min_time_threshold NUMBER
,parallel_degree_limit       NUMBER
,other_hints                 CLOB
,delete_profile              VARCHAR2(1)
,sql_text                    CLOB
,CONSTRAINT dmk_fms_profiles_pk PRIMARY KEY (force_matching_signature)
,CONSTRAINT dmk_fms_profiles_u1 UNIQUE (sql_id)
,CONSTRAINT dmk_fms_profiles_u2 UNIQUE (sql_profile_name)
)
/
Using conditional parallelism with PARALLEL_MIN_TIME_THRESHOLD, but limited with PARALLEL_DEGREE_LIMIT, is an effective tactic with nVision, so I have specified columns in the metadata table for those two hints; any other hints are injected via a free-format string.  I identified the problematic SQL by analysis of ASH data, from which I also obtained the FORCE_MATCHING_SIGNATURE.  The metadata is keyed by FORCE_MATCHING_SIGNATURE, and I have specified a meaningful name for each SQL profile.
INSERT INTO dmk_fms_profiles (force_matching_signature, parallel_min_time_threshold, parallel_degree_limit, other_hints, sql_profile_name) 
VALUES (16625752171077499412, 1, 4, 'REWRITE PX_JOIN_FILTER(PS_XX_SUM_GCNSL_MV)', 'NVS_GBGL123I_BU_CONSOL_ACCOUNT');
COMMIT;
Profiles are created using the text of a SQL rather than the SQL_ID or FORCE_MATCHING_SIGNATURE directly. Therefore the SQL_TEXT must be extracted from the AWR, so this method also requires that the SQL statement has been captured by an AWR snapshot.
UPDATE dmk_fms_profiles a
SET    (module, action, sql_id, plan_hash_value, sql_text)
=      (SELECT s.module, s.action, s.sql_id, s.plan_hash_value, t.sql_text
        FROM   dba_hist_sqlstat s
        ,      dba_hist_sqltext t
        WHERE  t.dbid = s.dbid
        AND    t.sql_id = s.sql_id
        AND    s.force_matching_signature = a.force_matching_signature
        AND    s.snap_id = (
                SELECT MAX(s1.snap_id)
                FROM   dba_hist_sqlstat s1
                WHERE  s1.force_matching_signature = a.force_matching_signature
                AND    s1.module = 'RPTBOOK'      /*Application Specific*/
                AND    s1.action LIKE 'PI=%:%:%'  /*Application Specific*/)
        AND    s.module = 'RPTBOOK'               /*Application Specific*/
        AND    s.action LIKE 'PI=%:%:%'           /*Application Specific*/
        AND    ROWNUM = 1)
WHERE sql_id IS NULL
/

MERGE INTO dmk_fms_profiles u
USING (
SELECT a.sql_id, a.force_matching_signature, p.name
FROM   dmk_fms_profiles a
,      dba_sql_profiles p
WHERE  p.signature = a.force_matching_signature
) s
ON (s.force_matching_signature = u.force_matching_signature)
WHEN MATCHED THEN UPDATE
SET u.sql_profile_name = s.name
/
Columns REPORT_ID and TREE_LIST contain application specific information extracted from the application instrumentation and tree selector logging.
/*Application Specific - extract report ID from ACTION*/
UPDATE dmk_fms_profiles a
SET    report_id = substr(regexp_substr(a.action,':([A-Za-z0-9_-])+',1,1),2)
WHERE  report_id IS NULL
AND    action IS NOT NULL
/
/*Application Specific - extract financial analysis tree from application logging*/
UPDATE dmk_fms_profiles a
SET    tree_list = 
       (SELECT LISTAGG(tree_name,', ') WITHIN GROUP (ORDER BY tree_name)
       FROM (select l.tree_name, MAX(l.length) length
             FROM   dba_hist_sql_plan p
             ,      ps_nvs_treeslctlog l
             WHERE  p.plan_hash_value = a.plan_hash_value
             AND    p.sql_id = a.sql_id
             AND    p.object_name like 'PS%TREESELECT__'
             AND    p.partition_start = p.partition_stop
             AND    p.partition_start = l.selector_num
             AND    l.tree_name != ' '
             GROUP BY l.tree_name)
      )
WHERE tree_list IS NULL
/
Now I can produce a simple report of the metadata in order to see what profiles should be created.
column sql_text word_wrapped on format a110
column module format a8
column report_id heading 'nVision|Report ID'
column tree_list word_wrapped on format a20
column plan_hash_value             heading 'SQL Plan|Hash Value'         format 9999999999
column parallel_min_time_threshold heading 'Parallel|Min Time|Threshold' format 999
column parallel_degree_limit       heading 'Parallel|Degree|Limit'       format 999
set long 500
SELECT * FROM dmk_fms_profiles
/

                                          SQL Plan                                                                           
FORCE_MATCHING_SIGNATURE SQL_ID         Hash Value MODULE   ACTION                                                           
------------------------ ------------- ----------- -------- ---------------------------------------------------------------- 
                                                                                      Parallel Parallel
nVision                                                                               Min Time   Degree
Report ID                        TREE_LIST            SQL_PROFILE_NAME               Threshold    Limit D
-------------------------------- -------------------- ------------------------------ --------- -------- -
OTHER_HINTS
--------------------------------------------------------------------------------
SQL_TEXT
--------------------------------------------------------------------------------------------------------------
    12803175998948432502 5pzxhha3392cs   988048519 RPTBOOK  PI=3186222:USGL233I:10008                                        
USGL233I                         BU_GAAP_CONSOL,      NVS_GBGL123I_BU_CONSOL_ACCOUNT                   1        4
                                 GAAP_ACCOUNT
REWRITE PX_JOIN_FILTER(PS_XX_SUM_GCNSL_MV)
SELECT L2.TREE_NODE_NUM,A.ACCOUNT,SUM(A.POSTED_TOTAL_AMT) FROM PS_LEDGER A, PSTREESELECT05 L2, PSTREESELECT10 L3
WHERE A.LEDGER='S_GBMGT' AND A.FISCAL_YEAR=2017 AND A.ACCOUNTING_PERIOD BETWEEN 0 AND 12 AND (A.DEPTID BETWEEN
'A0000' AND 'A8999' OR A.DEPTID BETWEEN 'B0000' AND 'B9149' OR A.DEPTID='B9156' OR A.DEPTID='B9158' OR A.DEPTID
BETWEEN 'B9165' AND 'B9999' OR A.DEPTID BETWEEN 'C0000' AND 'C9999' OR A.DEPTID BETWEEN 'D0000' AND 'D9999' OR
A.DEPTID BETWEEN 'G0000' AND 'G9999' OR A.DE
Next, this PL/SQL block will create or recreate SQL profiles from the metadata. The various hints can be concatenated into a single string and passed as a parameter to SQLPROF_ATTR. The SQL text is passed as a parameter when the profile is created.
set serveroutput on
DECLARE
  l_signature NUMBER;
  h       SYS.SQLPROF_ATTR;
  e_no_sql_profile EXCEPTION;
  PRAGMA EXCEPTION_INIT(e_no_sql_profile, -13833);
  l_description CLOB;
BEGIN

FOR i IN (
  SELECT f.*, s.name
  FROM   dmk_fms_profiles f
    LEFT OUTER JOIN dba_sql_profiles s
    ON f.force_matching_signature = s.signature
) LOOP

  BEGIN 
    IF i.name IS NOT NULL AND i.delete_profile = 'Y' THEN
      dbms_sqltune.drop_sql_profile(name => i.name);
    END IF;
    EXCEPTION WHEN e_no_sql_profile THEN NULL;
  END;

  IF i.delete_profile = 'Y' THEN 
    NULL;
  ELSIF i.sql_text IS NOT NULL THEN
    h := SYS.SQLPROF_ATTR(
q'[BEGIN_OUTLINE_DATA]',
CASE WHEN i.parallel_min_time_threshold>=0 THEN 'OPT_PARAM(''parallel_degree_policy'',''AUTO'') ' END||
CASE WHEN i.parallel_degree_limit      >=0 THEN 'OPT_PARAM(''parallel_degree_limit'','      ||i.parallel_degree_limit      ||') ' END||
CASE WHEN i.parallel_min_time_threshold>=0 THEN 'OPT_PARAM(''parallel_min_time_threshold'','||i.parallel_min_time_threshold||') ' END||
i.other_hints,
q'[END_OUTLINE_DATA]');

    l_signature := DBMS_SQLTUNE.SQLTEXT_TO_SIGNATURE(i.sql_text);
    l_description := 'coe nVision '||i.report_id||' '||i.tree_list||' '||i.force_matching_signature||'='||l_signature;
    dbms_output.put_line(i.sql_profile_name||' '||l_description);

    DBMS_SQLTUNE.IMPORT_SQL_PROFILE (
sql_text    => i.sql_text,
profile     => h,
name        => i.sql_profile_name,
description => l_description,
category    => 'DEFAULT',
validate    => TRUE,
replace     => TRUE,
force_match => TRUE /* TRUE:FORCE (match even when different literals in SQL). FALSE:EXACT (similar to CURSOR_SHARING) */ );

  END IF;
END LOOP;
END;
/
I can verify that the profile has been created, and the hints that it contains, thus:
SELECT x.profile_name,
       xmltype(x.comp_data) as xmlval
FROM   dmk_fms_profiles p
,      dba_sql_profiles s
,      dbmshsxp_sql_profile_attr  x
WHERE  s.name = p.sql_profile_name
AND    x.profile_name = s.name
AND    s.status = 'ENABLED'
ORDER BY 1
/

PROFILE_NAME                                                                                                                                                                                            
------------------------------
XMLVAL                                                                                                                                                                                                  
------------------------------------------------------------------------------------------------
NVS_GBGL123I_BU_CONSOL_ACCOUNT
<![CDATA[BEGIN_OUTLINE_DATA]]>                                                                                                                                                           
  <![CDATA[OPT_PARAM('parallel_degree_policy','AUTO') OPT_PARAM('parallel_degree_limit',4) OPT_PARAM('parallel_min_time_threshold',1) REWRITE PX_JOIN_FILTER(PS_XX_SUM_GCNSL_MV)]]>                                           
  <![CDATA[END_OUTLINE_DATA]]>
And now when the application runs, I get the plan that I wanted.
  • The query runs in parallel.
  • The SQL is rewritten to use the materialized view.
  • There are no indexes on the materialized view, so it is fully scanned.
  • A Bloom filter is generated from PSTREESELECT10 and applied to the materialized view.
---------------------------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                                         | Name               | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |    TQ  |IN-OUT| PQ Distrib |
---------------------------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                                  |                    |       |       |  2219 (100)|          |       |       |        |      |            |
|   1 |  PX COORDINATOR                                   |                    |       |       |         |             |       |       |        |      |            |
|   2 |   PX SEND QC (RANDOM)                             | :TQ10004           |   111 |  9879 |  2219   (6)| 00:00:01 |       |       |  Q1,04 | P->S | QC (RAND)  |
|   3 |    HASH GROUP BY                                  |                    |   111 |  9879 |  2219   (6)| 00:00:01 |       |       |  Q1,04 | PCWP |            |
|   4 |     PX RECEIVE                                    |                    |   111 |  9879 |  2219   (6)| 00:00:01 |       |       |  Q1,04 | PCWP |            |
|   5 |      PX SEND HASH                                 | :TQ10003           |   111 |  9879 |  2219   (6)| 00:00:01 |       |       |  Q1,03 | P->P | HASH       |
|   6 |       HASH GROUP BY                               |                    |   111 |  9879 |  2219   (6)| 00:00:01 |       |       |  Q1,03 | PCWP |            |
|   7 |        HASH JOIN                                  |                    |   536 | 47704 |  2218   (6)| 00:00:01 |       |       |  Q1,03 | PCWP |            |
|   8 |         PX RECEIVE                                |                    |   536 | 38056 |  2215   (6)| 00:00:01 |       |       |  Q1,03 | PCWP |            |
|   9 |          PX SEND HYBRID HASH                      | :TQ10002           |   536 | 38056 |  2215   (6)| 00:00:01 |       |       |  Q1,02 | P->P | HYBRID HASH|
|  10 |           STATISTICS COLLECTOR                    |                    |       |       |         |             |       |       |  Q1,02 | PCWC |            |
|  11 |            HASH JOIN                              |                    |   536 | 38056 |  2215   (6)| 00:00:01 |       |       |  Q1,02 | PCWP |            |
|  12 |             BUFFER SORT                           |                    |       |       |         |             |       |       |  Q1,02 | PCWC |            |
|  13 |              JOIN FILTER CREATE                   | :BF0000            |   236 |  3776 |     2   (0)| 00:00:01 |       |       |  Q1,02 | PCWP |            |
|  14 |               PX RECEIVE                          |                    |   236 |  3776 |     2   (0)| 00:00:01 |       |       |  Q1,02 | PCWP |            |
|  15 |                PX SEND BROADCAST                  | :TQ10000           |   236 |  3776 |     2   (0)| 00:00:01 |       |       |        | S->P | BROADCAST  |
|  16 |                 PARTITION RANGE SINGLE            |                    |   236 |  3776 |     2   (0)| 00:00:01 | 36774 | 36774 |        |      |            |
|  17 |                  INDEX FAST FULL SCAN             | PSAPSTREESELECT10  |   236 |  3776 |     2   (0)| 00:00:01 | 36774 | 36774 |        |      |            |
|  18 |             JOIN FILTER USE                       | :BF0000            |  8859 |   475K|  2213   (6)| 00:00:01 |       |       |  Q1,02 | PCWP |            |
|  19 |              PX BLOCK ITERATOR                    |                    |  8859 |   475K|  2213   (6)| 00:00:01 |    29 |    41 |  Q1,02 | PCWC |            |
|  20 |               MAT_VIEW REWRITE ACCESS STORAGE FULL| PS_XX_SUM_GCNSL_MV |  8859 |   475K|  2213   (6)| 00:00:01 |    29 |    41 |  Q1,02 | PCWP |            |
|  21 |         BUFFER SORT                               |                    |       |       |         |             |       |       |  Q1,03 | PCWC |            |
|  22 |          PX RECEIVE                               |                    |   731 | 13158 |     3   (0)| 00:00:01 |       |       |  Q1,03 | PCWP |            |
|  23 |           PX SEND HYBRID HASH                     | :TQ10001           |   731 | 13158 |     3   (0)| 00:00:01 |       |       |        | S->P | HYBRID HASH|
|  24 |            PARTITION RANGE SINGLE                 |                    |   731 | 13158 |     3   (0)| 00:00:01 | 36773 | 36773 |        |      |            |
|  25 |             INDEX FAST FULL SCAN                  | PSAPSTREESELECT05  |   731 | 13158 |     3   (0)| 00:00:01 | 36773 | 36773 |        |      |            |
---------------------------------------------------------------------------------------------------------------------------------------------------------------------

Conclusion

SQL Profiles can be used in much the same way as SQL Patches to introduce hints into application SQL without changing the code, the difference being that SQL Profiles can force match SQL.  However, SQL Profiles do require the Tuning Pack to be licensed, whereas SQL Patches and Baselines do not.
Applying force matching SQL profiles to nVision is an effective, though reactive, tactic.  Tree changes can result in changes to the number of literal criteria in nVision SQL statements, which may therefore cease to match existing profiles.  nVision will always require ongoing monitoring and the introduction of new profiles.
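As part of that monitoring, you can check which profiles are still being picked up by recently executed statements; for example, with this query of the library cache (a sketch, assuming the profile names follow the NVS% convention used above):
SELECT sql_profile
,      COUNT(DISTINCT sql_id) sql_ids
,      SUM(elapsed_time)/1e6 elapsed_secs
,      MAX(last_active_time) last_active
FROM   v$sql
WHERE  sql_profile LIKE 'NVS%' /*profile naming convention - application specific*/
GROUP BY sql_profile
ORDER BY 1
/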

Thursday, November 30, 2017

nVision Performance Tuning: 11. Excel -v- OpenXML

This blog post is part of a series that discusses how to get optimal performance from PeopleSoft nVision reporting as used in General Ledger.

The general objective of the performance tuning changes described in this series of blog posts has been to improve the performance of individual nVision reports, but also to allow many reports to execute concurrently.
However, if you use Excel 2010, Excel 2013 or above, then you may notice run times are significantly longer than with Excel 2007.  Also, from PeopleTools 8.54, Excel 2007 is no longer certified.
The problem is discussed in Oracle support note E-NV: nVision Performance using Microsoft Office 2010 and above (Doc ID 1537491.1).  Essentially, Excel 2010 upwards only runs single-threaded.  Only one Excel nVision process that is not waiting for a database call to return can run concurrently on any one Windows server at any one time.  If you want to be able to run 10 concurrent nVision reports, you would need to run one on each of 10 Process Schedulers on 10 different Windows servers.
From PeopleTools 8.54, OpenXML is the default and preferred engine for executing nVision reports on the process scheduler.  This uses a different PeopleSoft executable (PSNVSSRV).  It does not suffer from the single-threading problem, so multiple processes can run concurrently.  It can also be run on non-Windows environments.
However, there are some limitations with OpenXML:
  • Excel macros are ignored during report generation, although macros can be put into a layout that will execute when the generated report is subsequently opened in Excel.
  • There are problems with nPlosion.  
  • Any print area set in the layout is lost.
  • When rerunning nVision to a file, any pre-existing file is not overwritten.
Therefore, it may be necessary to continue to run some nVision reports on Excel.  This would require:
  • Separate process schedulers configured to run Excel rather than OpenXML on each available Windows server.  Excel is selected by setting the variable Excel Automation = 1 in the nVision section of the process scheduler configuration file (psprcs.cfg), as sketched after this list.  
  • A new Excel nVision process type should be configured to run specific layouts or reportbooks on Excel.  
  • That new process type should only run on these additional process schedulers.  It should have a maximum concurrency of 1, or at most 2, on each Process Scheduler.  These schedulers should be configured to run this new process type (and a single Application Engine so that the purge process can run).
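For illustration, the relevant part of psprcs.cfg on the Excel-only schedulers might look something like this.  This is an indicative sketch only, using the setting named above; verify the section and variable names against your own configuration file.
[nVision]
;=========================================================================
; Settings for nVision
;=========================================================================
; 1 = run layouts via Excel automation, 0 = use the OpenXML engine (default)
Excel Automation=1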

Tuesday, November 28, 2017

nVision Performance Tuning: 10. Maintaining Statistics on Non-Partitioned Tree Selector Tables

This blog post is part of a series that discusses how to get optimal performance from PeopleSoft nVision reporting as used in General Ledger.

If you do not partition your selector tables, and if you predominantly use static selectors then you can maintain the statistics on the selector table as the new static selectors are created or updated.  PeopleTools table PSTREESELCTL is used to track static selectors.  It is keyed by selector number.  A row is inserted when a new selector is created, and the version number is updated when an existing selector is re-extracted.  Therefore, a trigger on this table can be used to submit a job to refresh the statistics on the tree selector.
CREATE OR REPLACE TRIGGER sysadm.pstreeselector_stats
BEFORE INSERT OR UPDATE ON sysadm.pstreeselctl
FOR EACH ROW
DECLARE
  l_jobno      NUMBER;
  l_cmd        VARCHAR2(1000);
  l_table_name VARCHAR2(18);
  l_suffix     VARCHAR2(2);
BEGIN
  l_table_name := 'PSTREESELECT'||LTRIM(TO_CHAR(:new.length,'00'));
  l_suffix     := SUBSTR(l_table_name,-2);
  l_cmd := 'dbms_stats.gather_table_stats(ownname=>user,tabname=>'''||l_table_name||''',force=>TRUE);'
       ||'dbms_stats.set_column_stats(ownname=>user,tabname=>'''||l_table_name||''',colname=>''RANGE_FROM_'||l_suffix||''',density=>1,force=>TRUE);'
       ||'dbms_stats.set_column_stats(ownname=>user,tabname=>'''||l_table_name||''',colname=>''RANGE_TO_'||l_suffix||''',density=>1,force=>TRUE);';
  dbms_output.put_line(l_cmd);
  dbms_job.submit(l_jobno,l_cmd);
EXCEPTION WHEN OTHERS THEN NULL;
END;
/
When all the selectors exist in a single segment, the optimizer will not correctly calculate the cardinality of the expressions in the nVision queries on the tree selector tables.  Therefore, I have found it necessary to manually set the density on the range columns to 1 to get effective execution plans at least some of the time.
N.B. This trigger should not be used if the selector tables are partitioned.


Friday, November 24, 2017

nVision Performance Tuning: 9. Using Compression without the Advanced Compression Licence

This blog post is part of a series that discusses how to get optimal performance from PeopleSoft nVision reporting as used in General Ledger.

Table compression can significantly decrease the size of tables and reduce the volume of I/O required to retrieve data.  Compression of the ledger, ledger budget, and summary ledger tables can significantly improve the performance of scans in nVision.
The Advanced Compression licence enables compression features to be used where segments are still updated by the application.  Without this licence, only simple table compression can be used, although Hybrid Columnar Compression (HCC) is also available, but only on an engineered system.  Neither form of compression can be used in on-line transaction processing mode.  A table that is marked for compression, but that is populated in conventional path mode, will not be compressed.  Any compressed blocks will no longer be compressed after being updated.
However, in most systems, ledger rows cease to be updated within a month or two of the period closing.  The ledger tables will usually be partitioned by fiscal year and accounting period, so a month after the period has closed the corresponding partition will cease to be updated.  Therefore, it could then be compressed.
I usually compress historical partitions when I introduce partitioning.  The following example comes from an engineered system, so Hybrid Columnar Compression has been used.
  • There is one range partition each for the whole of fiscal years 2014 and 2015.  This data is historical and rarely queried, and then not by a single period.  A more aggressive compression, QUERY HIGH, has been used.  
  • Monthly partitioning is used for the previous fiscal year, 2016.  These partitions are compressed, but using QUERY LOW which should deliver better performance with lower CPU overhead than QUERY HIGH (see Choosing the Right Compression).
  • Partition LEDGER_2017_05 is not compressed because it is current (at the time of building this script) and could still be updated by the application.
CREATE TABLE sysadm.ps_ledger
(…)
TABLESPACE GLLARGE
PCTFREE 10 PCTUSED 80
PARTITION BY RANGE(FISCAL_YEAR,ACCOUNTING_PERIOD)
SUBPARTITION BY LIST (LEDGER)
(PARTITION ledger_2014 VALUES LESS THAN (2015,0) PCTFREE 0 COMPRESS FOR QUERY HIGH
(SUBPARTITION ledger_2014_actuals VALUES ('ACTUALS')
…
,SUBPARTITION ledger_2014_z_others VALUES (DEFAULT)
)
,PARTITION ledger_2015 VALUES LESS THAN (2016,0) PCTFREE 0 COMPRESS FOR QUERY HIGH
(SUBPARTITION ledger_2015_actuals VALUES ('ACTUALS')
…
,SUBPARTITION ledger_2015_z_others VALUES (DEFAULT)
)
,PARTITION ledger_2016_bf VALUES LESS THAN (2016,1) PCTFREE 0 COMPRESS FOR QUERY LOW
(SUBPARTITION ledger_2016_bf_actuals VALUES ('ACTUALS')
…
,SUBPARTITION ledger_2016_bf_z_others VALUES (DEFAULT)
)
…
,PARTITION ledger_2017_05 VALUES LESS THAN (2017,6)
(SUBPARTITION ledger_2017_05_actuals VALUES ('ACTUALS')
…
,SUBPARTITION ledger_2017_05_z_others VALUES (DEFAULT)
)
…
)
ENABLE ROW MOVEMENT
PARALLEL
NOLOGGING
/
As periods close there are more partitions that can be compressed.  However, it may not be practical to take an outage to rebuild ledger partitions each month, and it certainly isn't practical to rebuild the entire ledger table every month.  Instead, from 12.1, partitions and sub-partitions can be rebuilt in place in an online operation.  The compression attribute can only be specified on the partition, and then the sub-partitions can be rebuilt.  The data in the partition can still be read while it is being moved.
ALTER TABLE ps_ledger MODIFY PARTITION ledger_2017_05 PCTFREE 0 COMPRESS FOR QUERY LOW;
ALTER TABLE ps_ledger MOVE SUBPARTITION ledger_2017_05_actuals  ONLINE PARALLEL 32;
…
ALTER TABLE ps_ledger MOVE SUBPARTITION ledger_2017_05_z_others ONLINE PARALLEL 32;
NB: Moving a partition online can fail in 12.1.0.1 or later of the database due to bug 2070300.  This is documented in MOS Doc ID 2040742.1.
ERROR at line 1:
ORA-00604: error occurred at recursive SQL level 1
ORA-00932: inconsistent datatypes: expected NUMBER got BINARY
The bug is fixed in 12.2 and can be addressed in 12.1 by patch 2070300.
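Once the moves are complete, the compression attributes (and, after statistics have been gathered, the row counts) can be checked in the data dictionary; a sketch, with assumed owner and table names:
SELECT partition_name, subpartition_name, num_rows, compression, compress_for
FROM   dba_tab_subpartitions
WHERE  table_owner = 'SYSADM'
AND    table_name  = 'PS_LEDGER'
ORDER BY partition_name, subpartition_position
/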

Monday, November 20, 2017

nVision Performance Tuning: 8. Interval Partitioning and Statistics Maintenance of Tree Selector Tables

This blog post is part of a series that discusses how to get optimal performance from PeopleSoft nVision reporting as used in General Ledger.

The decision to use interval partitioning on the tree selector tables came from the need to have accurate statistics for the optimizer when parsing nVision queries.  It is not possible to introduce hints into nVision SQL. The dynamic nature of the code means it is not viable to consider any of the forms of database plan stability across the whole application, (although it may be possible to use SQL Profiles in limited cases). Therefore, as far as possible the optimizer has to choose the best plan on its own. Without accurate optimizer statistics, I have found that the optimizer will usually not choose to use a Bloom filter.
If the selector tables are not partitioned, then each table will usually contain rows for many different selectors.  Even with perfectly up-to-date statistics, including a histogram on SELECTOR_NUM, and extended statistics on SELECTOR_NUM and RANGE_FROM_nn, I have found that Oracle miscosts the join between RANGE_FROM_nn and the corresponding attribute on the ledger table.
I propose that the tree selector tables should be interval partitioned such that each selector goes into its own partition.
CREATE TABLE PSYPSTREESELECT10 
(SELECTOR_NUM INTEGER NOT NULL,
 TREE_NODE_NUM INTEGER NOT NULL,
 RANGE_FROM_10 VARCHAR2(10) NOT NULL,
 RANGE_TO_10   VARCHAR2(10) NOT NULL) 
PARTITION BY RANGE (selector_num) INTERVAL (1)
(PARTITION VALUES LESS THAN(2))
TABLESPACE "PTWORK"
STORAGE(INITIAL 128K NEXT 128K)
/
INSERT INTO PSYPSTREESELECT10 
(       SELECTOR_NUM, TREE_NODE_NUM, RANGE_FROM_10, RANGE_TO_10)
SELECT  SELECTOR_NUM, TREE_NODE_NUM, RANGE_FROM_10, RANGE_TO_10
FROM PSTREESELECT10
/
DROP TABLE PSTREESELECT10 
/
ALTER TABLE PSYPSTREESELECT10 RENAME TO PSTREESELECT10
/
  • nVision queries will reference a single selector with a literal value, and therefore Oracle will eliminate all but that single partition at parse time and will use the statistics on that partition to determine how to join it to other tables.
  • Statistics only have to be maintained at partition level, and not at table level. 
  • Now that there is only a single selector number in any one partition, there is no need for extended statistics. 
  • The need to use dynamic selectors, in order to get equality joins between selectors and ledger tables and so make use of the Bloom filter, means that statistics on the selector table will inevitably be out of date.  The PL/SQL triggers and package that log the selector usage are also used to maintain statistics on the partition. 
  • Partitions do not have to be created in advance. They will be created automatically by Oracle as they are required by the application. 

Compound Triggers on Tree Selector Tables 

There are a pair of compound DML triggers on each tree selector table, one for insert and one for delete.
  • The after row section captures the current selector number.  The one for insert also counts the number of rows and tracks the minimum and maximum values of the RANGE_FROM_nn and RANGE_TO_nn columns. 
  • The after statement section updates the selector log. The insert trigger directly updates the statistics on the partition, including the minimum and maximum values of the range columns.
    • It is not possible to collect statistics in a trigger in the conventional manner because dbms_stats includes an implicit commit. If dbms_stats was called within an autonomous transaction it could not see the uncommitted insert into the tree selector that fired the trigger. Hence the trigger calls the XX_NVISION_SELECTORS package that uses dbms_stats.set_table_stats and dbms_stats.set_column_stats to set values directly. 
    • The trigger then submits a job to database job scheduler that will collect statistics on the partition in the conventional way using dbms_job. The job number is recorded on the selector log. The job will be picked up by the scheduler when the insert commits. However, there can be a short lag between scheduling the job, and it running. The first query in the nVision report can be parsed before the statistics are available. 
    • The procedure that directly sets the statistics has to make some sensible assumptions about the data. These mostly lead the optimizer to produce good execution plans. However, testing has shown that performance can be better with conventionally collected statistics. Therefore, the trigger both directly sets statistics and submits the database job to collect the statistics.
    • It is important that table level statistics are not maintained by either technique as this would lead to locking between sessions.  Locking during partition statistics maintenance will not occur because no two sessions populate the same selector number, and each selector is in a different partition.  A table statistics preference for granularity is set to PARTITION on each partitioned tree selector table, as sketched below. 
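The following sketch illustrates both parts of that approach.  The owner, table, partition name and statistic values are illustrative only; the real logic lives in the XX_NVISION_SELECTORS package and derives the values from the rows just inserted.
REM set the statistics preference so that only partition-level statistics are maintained
exec dbms_stats.set_table_prefs(ownname=>'SYSADM',tabname=>'PSTREESELECT10',pname=>'GRANULARITY',pvalue=>'PARTITION');

REM directly set partition statistics, as the package does from within the trigger
BEGIN
  dbms_stats.set_table_stats(ownname=>'SYSADM',tabname=>'PSTREESELECT10'
                            ,partname=>'SYS_P12345',numrows=>1000,numblks=>10,force=>TRUE);
  dbms_stats.set_column_stats(ownname=>'SYSADM',tabname=>'PSTREESELECT10'
                             ,colname=>'RANGE_FROM_10',partname=>'SYS_P12345'
                             ,density=>1,force=>TRUE);
END;
/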
The combination of dynamic selectors, single value joins, interval partitioning of selector tables, and logging triggers on the selector tables driving timely statistics maintenance on the partitions delivers execution plans that perform well and that make effective use of engineered system features.
However, there are two problems that then have to be worked around. 

Library Cache Contention 

Some data warehouse systems can need new partitions in tables daily or even hourly. If partitions were not created in a timely fashion, the application would either break because the partition was missing, or performance would degrade as data accumulated in a single partition. Oracle intended interval partitions to free the DBA from the need to actively manage such partitioning on a day-to-day basis by creating them automatically as the data was inserted. 
However, on a busy nVision system, this solution could create thousands of new selectors in a single day, and therefore thousands of new partitions. This is certainly not how Oracle intended interval partitioning to be used.  I freely admit that I am abusing the feature.
If you have multiple concurrent nVision reports running, using dynamic selectors, you will have multiple database sessions concurrently inserting rows into the tree selector tables each with a different selector number, and therefore each creating new partitions mostly into the same tables.
The recursive code that creates the new partitions, and maintains the data dictionary, acquires a lock on the object handle in the library cache to prevent other sessions from changing it at the same time.  As the number of concurrent nVision reports increases, you will start to see nVision sessions waiting on the library cache lock event during the insert into the tree selector table while the new partition is being created.  Perversely, as the performance of the nVision queries improves (as you refine tree selector settings) you may find this contention increases. 
The workaround to this is to create multiple database schemas, each with copies of the partitioned tree selector tables (similarly interval partitioned) and the PSTREESELCTL table (to manage static selectors in those schemas). Synonyms will be required for all other tables referenced by nVision queries. 
Then a trigger on the process scheduler request table PSPRCSRQST will arbitrarily set the current schema of the nVision batch processes to one of those schemas.  The nVision reports still connect and run with the privileges of the Owner ID (usually SYSADM), but the tables are referenced from the current schema. 
I have used a hash function to distribute nVision processes between schemas.  I suggest the number of schemas should be a power of 2 (i.e. 2, 4, 8, etc.).
CREATE OR REPLACE TRIGGER sysadm.nvision_current_schema
BEFORE UPDATE OF runstatus ON sysadm.psprcsrqst
FOR EACH ROW
WHEN (new.runstatus IN('7') AND new.prcsname = 'RPTBOOK' AND new.prcstype like 'nVision%')
BEGIN
  EXECUTE IMMEDIATE 'ALTER SESSION SET current_schema = NVEXEC'||LTRIM(TO_CHAR(dbms_utility.get_hash_value(:new.prcsinstance,1,8),'00'));
EXCEPTION WHEN OTHERS THEN NULL; --exception deliberately coded to suppress all exceptions
END;
/
Different nVision reports thus use different tree selector tables in different schemas rather than trying to create partitions in the same tree selector table, thereby avoiding the library cache locking.

Limitation on the Maximum Number of Partitions 

In Oracle, it is not possible to have more than 1048575 partitions in a table.  That applies to all forms of partitioning.
The tree selector tables are interval partitioned on selector number with an interval of 1, starting with 1.  So the highest value of SELECTOR_NUM that they can store is 1048575.
INSERT INTO pstreeselect05 VALUES(1048576,0,' ',' ')
            *
ERROR at line 1:
ORA-14300: partitioning key maps to a partition outside maximum permitted number of partitions
New selector numbers are allocated sequentially from PSTREESELNUM. Left unattended, the selector numbers used by nVision will increase until they eventually hit this limit, and then nVision and ad-hoc queries that use the tree-exists operator will start to fail with this error.
Therefore, a procedure RESET_SELECTOR_NUM has been added to the PL/SQL package to reset the selector number allocation table PSTREESELNUM back to 0, delete any tree selector entries for which there is no logging entry, and then run the regular selector PURGE procedure in the same package that will drop unwanted interval partitions.

Recommendation: XX_NVISION_SELECTORS.RESET_SELECTOR_NUM should be scheduled to run sufficiently frequently to prevent the selector number reaching the maximum, for example via a database scheduler job such as the sketch below.  
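A minimal sketch of such a job follows; the job name and weekly schedule are assumptions and should be adjusted to how quickly selector numbers are consumed on your system.
BEGIN
  dbms_scheduler.create_job(
    job_name        => 'SYSADM.RESET_NVISION_SELECTOR_NUM', /*assumed job name*/
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'BEGIN sysadm.xx_nvision_selectors.reset_selector_num; END;',
    start_date      => SYSTIMESTAMP,
    repeat_interval => 'FREQ=WEEKLY;BYDAY=SUN;BYHOUR=6', /*assumed frequency*/
    enabled         => TRUE,
    comments        => 'Periodically reset nVision tree selector numbers');
END;
/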

Monday, November 13, 2017

nVision Performance Tuning: 7. Analysis of Tree Usage with the Selector Log

This blog post is part of a series that discusses how to get optimal performance from PeopleSoft nVision reporting as used in General Ledger.

Over time, the selector log will build up a picture of how each tree is used in a system. Here are two examples of how it can be used.

You may be looking at a piece of SQL generated by nVision that contains a literal value for the selector number, and want to know about that particular selector.
REM treeanal.sql
WITH t as (
SELECT DISTINCT d.tree_name, s.dtl_fieldname, d.tree_acc_method, d.tree_acc_Selector, d.tree_acc_sel_opt
FROM pstreedefn d, pstreestrct s
WHERE d.tree_Strct_id = s.tree_strct_id
), l as (
SELECT  *
FROM ps_nvs_treeslctlog l
WHERE l.selector_Num = &selector_num
)
SELECT l.*, t.dtl_fieldname, t.tree_acc_method, t.tree_acc_Selector, t.tree_acc_sel_opt
FROM  t, l
WHERE t.tree_name = l.tree_name
/
Now you can see various pieces of information about the selector and the report in which it was used.
  • Process instance of the report, although it is also in the ACTION string
  • Length of the selector.  Here it was 10 characters so it was in PSTREESELECT10.
  • The number of rows inserted into the selector.  This is useful if you want to recreate the conditions at runtime by populating the selector table manually.
  • Time at which the selector was populated.
  • Session module, usually the process name.  
  • Session action string, which contains the process instance, report ID and business unit.
  • The client info string, containing the operator ID, database name, the name of the host running nVision and the name of the executable.
    • PSNVS is nVision running on Microsoft Excel
    • PSNVSSRV is nVision running in OpenXML
    • psae is a PS/Query using the IN TREE operator that has been scheduled to run on the Process Scheduler.
    • PSQED is the windows query client running in 2-tier mode and using an IN TREE operator.
    • PSSAMSRV indicates either nVision or PS/Query running either through the PIA or on the windows client in 3-tier mode.
  • Status of the selector.  X indicates that the selector has been deleted and the partition has been dropped.
  • Name of the tree.
  • Name of the schema in which the selector table is located.
  • Partition name in the tree selector table in which the data was held.  This will be blank if the partition has been dropped.
  • The current values of the three tree selector flags on the tree definition are also reported.
SELECTOR_NUM PROCESS_INSTANCE Len   NUM_ROWS TIMESTAMP                    MODULE       APPINFO_ACTION
------------ ---------------- --- ---------- ---------------------------- ------------ ----------------------------------------------------------------
CLIENT_INFO                                                      S TREE_NAME          OWNERID  PARTITION_NAME           JOB_NO DTL_FIELDNAME      T T T
---------------------------------------------------------------- - ------------------ -------- -------------------- ---------- ------------------ - - -
       10233          1780069  10        362 10-NOV-17 02.40.50.755038 AM RPTBOOK      PI=1780069:UKGL123I:UK001
GBNVISION,PSFINPRD,UKLONWIN001,,PSNVSSRV.EXE,                    X UKGL_ACCOUNT       SYSADM                             33052 ACCOUNT            J D S

The following query aggregates log entries to report the number of times each tree was used over the last 7 days, and provides various statistics about the numbers of rows extracted from trees into the selector tables, as well as the current tree performance options.
REM tree_usage.sql
WITH t AS (
SELECT DISTINCT d.tree_name, s.dtl_fieldname, d.tree_acc_method
,      d.tree_acc_Selector, d.tree_acc_sel_opt
FROM   pstreedefn d, pstreestrct s
WHERE  d.tree_Strct_id = s.tree_strct_id
), l AS (
SELECT tree_name, length
, COUNT(*) num_uses
, MIN(num_rows) min_rows
, AVG(num_rows) avg_rows
, MEDIAN(num_Rows) med_rows
, MAX(num_rowS) max_rows
, STDDEV(num_Rows) stddev_rows
, SUM(num_rows) sum_rows
, COUNT(distinct process_instance) processes
FROM ps_nvs_treeslctlog l
WHERE num_rows>0
AND timestamp >= sysdate-7
GROUP BY tree_name, length
)
SELECT l.*, t.dtl_fieldname, t.tree_acc_method, t.tree_acc_Selector, t.tree_acc_sel_opt
FROM   t, l
WHERE  t.tree_name = l.tree_name
ORDER BY sum_rows
/
The default recommendation is that all trees should use:
  • Literal values, where possible, when working with fewer than about 2000 rows in the selector.  Above about 2000 rows it may be better to join the selector table, because of the parse and execution overhead of each literal criterion.
  • Dynamic selectors 
  • Single Value joins 
This report can help to identify trees where extreme volumes mean that different options should be considered.
                          Num    Min Average Median    Max Std Dev       Sum   Num
TREE_NAME          Len   Uses   Rows    Rows   Rows   Rows    Rows      Rows Procs DTL_FIELDNAME      T T T
------------------ --- ------ ------ ------- ------ ------ ------- --------- ----- ------------------ - - -
…
CORP_ACCT           10      5   1147    2839   2616   6668    2263     14194     1 ACCOUNT            J D S
FUNCTION            10    480      9      32     35     35       8     15474    43 CHARTFIELD2        L D S
INT_SUP              6      7    225    2463   2838   2838     987     17243     1 PRODUCT            L D S
STAT_PRODUCT         6      8   2889    2889   2889   2889       0     23112     1 PRODUCT            J D S
AFFILIATE            5     43    215     576    509    938     223     24789    15 AFFILIATE          L D S
INT_GAAP_CON         5     62     82     486    522    730     225     30153    10 BUSINESS_UNIT      L D S
BU_GAAP_CON          5     96     44     619    614    731     115     59461    48 BUSINESS_UNIT      L D S
STAT_ACCOUNT        10     45     23    4204   6516   6516    2905    189182     6 ACCOUNT            J D S
INT_REP1            10    135    149    1563   1664   1664     379    211005     1 CHARTFIELD1        L D S
COMBO_CODE          10    172     17    1592   1532   2430     809    273846    18 CHARTFIELD1        L D S
UKGL_ACCOUNT        10   2586      2    1713   1147   7797    1793   4430262   110 ACCOUNT            J D S
  • The account trees have been set to join the tree to the ledger table rather than use literal values because sometimes in excess of 6000 rows are extracted.  A query with 6000 literal terms would be extremely large, take time to generate in nVision, and time on the database to parse and execute each criterion. 
  • STAT_PRODUCT has been set to join partly because it is large (it always extracts 2889 rows), but also because the whole tree is extracted every time, so it does not cut down the result set.

Monday, November 06, 2017

nVision Performance Tuning: 6. Logging Selector Usage

This blog post is part of a series that discusses how to get optimal performance from PeopleSoft nVision reporting as used in General Ledger.

Static selectors are tracked by entries in the PSTREESELCTL table, which is maintained after the tree is extracted to the selector table.  The version number on PSTREESELCTL is compared with the corresponding version number on PSTREEDEFN to determine whether the extract of the tree to the selector table is still valid, or whether it needs to be re-extracted because the tree has been updated.  Selectors that do not have an entry in PSTREESELCTL are therefore dynamic.
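So, for any one selector table, the dynamic selectors currently present can be listed by looking for selector numbers that have no control entry; a sketch for the 10-character selector table:
SELECT   s.selector_num, COUNT(*) num_rows
FROM     pstreeselect10 s
WHERE    NOT EXISTS(
          SELECT 'x'
          FROM   pstreeselctl c
          WHERE  c.selector_num = s.selector_num)
GROUP BY s.selector_num
ORDER BY 1
/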
Static selectors are left in the PSTREESELECTnn table after the report completes so that they can be reused.  However, many customers periodically create new effective dated versions of many trees, and so static selectors on old effective dated versions of the tree will accumulate as there is nothing to purge them.
Dynamic selectors are extracted every time the report runs.  They should normally be deleted by nVision before the report finishes.  However, if a report crashes, dynamic selectors can be left behind.  That creates a number of problems:
  • The size of the selector tables will tend to increase over time as old static and dynamic selectors are left behind.
  • That in turn affects the statistics on these tables.  The range of values for SELECTOR_NUM will become wider faster than the number of rows in the table grows.  The minimum value will either be the oldest static selector number, or the oldest dynamic selector left after a crash.  The maximum value will be the last selector number inserted when statistics were collected.
Therefore, it is useful to be able to track creation and deletion of dynamic selectors by the various nVision reports and queries.  I have therefore created a logging table PS_NVS_TREESLCTLOG (see nvision_dynamic_selectors.sql), a PL/SQL package XX_NVISION_SELECTORS and compound DML triggers on every tree selector table.
The column names in the log table have been chosen for compatibility with the PeopleSoft data dictionary, so that a record can be defined in Application Designer.
  • SELECTOR_NUM (NUMBER): Unique identifier for tree selector records.
  • PROCESS_INSTANCE (NUMBER): PeopleSoft process instance number for nVision/Query.
  • LENGTH (NUMBER): Length of the tree selector.
  • NUM_ROWS (NUMBER): Number of rows inserted into the tree selector.  Counted by the AFTER ROW part of the triggers.
  • TIMESTAMP (TIMESTAMP): Time when the rows were inserted.
  • MODULE (VARCHAR2(64)): Module attribute of the session inserting selector rows. 
  • APPINFO_ACTION (VARCHAR2(64)): Action attribute of the session inserting selector rows.
  • CLIENTINFO (VARCHAR2(64)): CLIENT_INFO attribute of the session inserting selector rows.  This will include:
    • PeopleSoft Operator ID.
    • Name of the application server or process scheduler domain.
    • Name of the machine where the client process is executing.
    • Name of the client executable process.
  • STATUS_FLAG (VARCHAR2(1)):
    • I=Selectors Inserted
    • S=Static Selectors Inserted
    • D=Selectors Deleted
    • X=Selectors Deleted and Partition Dropped
  • TREE_NAME (VARCHAR2(18)): Name of the tree from which the selector was extracted.  Obtained by querying the statement from V$SQL.
  • OWNER_ID (VARCHAR2(8)): Schema under which the nVision report is run.
  • PARTITION_NAME (VARCHAR2(128)): Name of the partition where the selectors are stored.
  • JOB_NO (NUMBER): Database job number to collect statistics on the partition.
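For reference, a minimal sketch of the logging table DDL, derived from the column list above (the definitive version, including any keys and indexes, is in nvision_dynamic_selectors.sql on GitHub):
CREATE TABLE sysadm.ps_nvs_treeslctlog
(selector_num     NUMBER NOT NULL /*assumed unique - one row per selector*/
,process_instance NUMBER
,length           NUMBER
,num_rows         NUMBER
,timestamp        TIMESTAMP
,module           VARCHAR2(64)
,appinfo_action   VARCHAR2(64)
,clientinfo       VARCHAR2(64)
,status_flag      VARCHAR2(1)
,tree_name        VARCHAR2(18)
,owner_id         VARCHAR2(8)
,partition_name   VARCHAR2(128)
,job_no           NUMBER
)
/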
All of the logic is kept in the PL/SQL package because it is common to the triggers on all the tree selector tables.  Insert triggers track population of selectors and delete triggers track the successful removal of dynamic selectors.  After row triggers track the selector number and count the number of rows inserted.  After statement triggers call the logging procedures.
CREATE OR REPLACE TRIGGER sysadm.pstreeselect10_insert
FOR INSERT ON sysadm.PSTREESELECT10 compound trigger
  l_err_msg VARCHAR2(100 CHAR);
AFTER EACH ROW IS
BEGIN
  sysadm.xx_nvision_selectors.rowins(:new.selector_num,:new.range_from_10,:new.range_to_10);
EXCEPTION WHEN OTHERS THEN NULL;
END after each row;

AFTER STATEMENT IS
BEGIN
  sysadm.xx_nvision_selectors.logins(10,'SYSADM');
EXCEPTION WHEN OTHERS THEN
  l_err_msg := SUBSTR(SQLERRM,1,100);
  dbms_output.put_line('Error:'||l_err_msg);
END after statement;
END;
/

CREATE OR REPLACE TRIGGER sysadm.pstreeselect10_delete
FOR DELETE ON sysadm.PSTREESELECT10 compound trigger
  l_err_msg VARCHAR2(100 CHAR);
AFTER EACH ROW IS
BEGIN
  sysadm.xx_nvision_selectors.rowdel(:old.selector_num);
EXCEPTION WHEN OTHERS THEN NULL;
END after each row;

AFTER STATEMENT IS
BEGIN
  sysadm.xx_nvision_selectors.logdel(10);
EXCEPTION WHEN OTHERS
THEN
  l_err_msg := SUBSTR(SQLERRM,1,100);
  dbms_output.put_line('Error:'||l_err_msg);
END after statement;
END;
/
Once the decision to bear the overhead of triggers on the selector tables is made, there are various pieces of additional information that can be captured and stored in the logging table for very little additional overhead.  It is easy to record the current session attributes such as module, action, and client_info.  The process instance number is captured on startup and can then be read from the psftapi package.  The tree is identified by scanning V$SQL for the SQL statement that fired the triggers.
It is also possible to maintain statistics on the selector tables.

Purging Selectors

The selector log can be used to drive purging of selectors not cleared by nVisions that failed to complete.  The package includes a purge procedure to clear selectors that are not already marked as having been deleted either when the logged process instance is no longer running, or if there is no known process instance then if it is more than 2 days since the selector was inserted.  If the selector table is also interval partitioned, then the partition will be dropped.
A trigger on PSTREESELCTL (see pstreeselctl.sql) tracks and logs static selector maintenance.
The purge process is invoked by a trigger on PSPRCSRQST that fires when the nVision process status is changed away from processing.  The selector purge process should also be scheduled to run daily.
The selector log itself is not purged as it contains useful information about tree usage.

Recommendations

  • If the tree selector tables are not partitioned, create a histogram on SELECTOR_NUM.  However, up to Oracle 11g, this will not be effective once the number of distinct values exceeds the maximum number of buckets, 254.  Dynamic selectors should be purged before this happens.  From Oracle 12c the maximum number of buckets is 2048, and hybrid histograms can be used.
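A sketch of gathering such a histogram on one of the selector tables (owner and table name assumed; from 12c the bucket count could be raised towards 2048):
exec dbms_stats.gather_table_stats(ownname=>'SYSADM',tabname=>'PSTREESELECT10',method_opt=>'FOR COLUMNS SELECTOR_NUM SIZE 254');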

Thursday, November 02, 2017

nVision Performance Tuning: 5. Additional Instrumentation of nVision

This blog post is part of a series that discusses how to get optimal performance from PeopleSoft nVision reporting as used in General Ledger.

One of the challenges of tuning and monitoring nVision is to be able to identify each report being run.  Calls to the Oracle instrumentation package dbms_application_info were added to the component processor in PeopleTools 8.50, and to Application Engine in PeopleTools 8.52.  However, COBOL, nVision, and SQR were never instrumented.
Therefore, there is still a place for the psftapi package and trigger.  When a PeopleSoft batch process starts, it sets the status on its request record on the Process Scheduler request table, PSPRCSRQST, to 7, indicating that it is processing.  A trigger on that table fires on that update and calls the psftapi package.  The package sets module and action to the process name and process instance, and also stores the process instance number in a package global variable that can be read with another procedure in the package.  Every scheduled process will therefore have module and action set to something meaningful.  Any PeopleSoft instrumentation will simply overwrite these values.  A session's module and action are picked up by Oracle monitoring tools; in particular, they are also stored in the Active Session History (ASH).
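In outline, the trigger and package do something like this when the request status changes to 7.  This is a sketch only: the procedure that stores the process instance is an assumed name, and the full implementation is in psftapi.sql on GitHub.
-- sketch of the body of the trigger on PSPRCSRQST
BEGIN
  psftapi.set_prcsinstance(p_prcsinstance => :new.prcsinstance); /*assumed setter for the package global*/
  dbms_application_info.set_module(module_name => :new.prcsname
                                  ,action_name => 'PI='||:new.prcsinstance);
END;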
However, nVision reports always run under the same process names: NVSRUN for a single report, RPTBOOK for a report book of many reports, or DRILLDWN for an nVision drill-down query.  Knowing the process instance is useful because then we can look up the operator and run control ID.
We also need to know the report ID being run.  When each individual nVision report starts, it queries its runtime parameters from the PS_NVS_REPORT PeopleTools table.  There is no update, so it is not possible to capture this with a DML trigger.  Instead, you can create a fine-grained audit1 policy on the query with a PL/SQL handler.  The handler package is then invoked by the audit policy.
BEGIN
  DBMS_FGA.ADD_POLICY(
   object_schema      => 'SYSADM',
   object_name        => 'PS_NVS_REPORT',
   policy_name        => 'PS_NVS_REPORT_SEL',
   handler_module     => 'GFC_FGA_NVISION_HANDLER',
   enable             =>  TRUE,
   statement_types    => 'SELECT',
   audit_trail        =>  DBMS_FGA.DB + DBMS_FGA.EXTENDED);
END;
/
The handler package runs in the session that triggered the audit.  It can access the audit record and extract the string of colon-separated bind variables thus obtaining the report ID and business unit.  It updates the session action attribute in the same way as psftapi.sql.
CREATE OR REPLACE PROCEDURE sysadm.gfc_fga_nvision_handler
(object_schema VARCHAR2
,object_name   VARCHAR2
,policy_name   VARCHAR2)
AS
  l_sqlbind VARCHAR2(4000);
  l_parm1   VARCHAR2(30);
  l_parm2   VARCHAR2(30);
  l_parm3   VARCHAR2(30);
  l_parm4   VARCHAR2(30);
BEGIN
  BEGIN
    SELECT x.lsqlbind
    ,      SUBSTR(x.lsqlbind,x.start1,LEAST(30,NVL(x.end1,x.lensqlbind+1)-x.start1)) parm1
    ,      SUBSTR(x.lsqlbind,x.start2,LEAST(30,NVL(x.end2,x.lensqlbind+1)-x.start2)) parm2
    ,      SUBSTR(x.lsqlbind,x.start3,LEAST(30,NVL(x.end3,x.lensqlbind+1)-x.start3)) parm3
    ,      SUBSTR(x.lsqlbind,x.start4,LEAST(30,NVL(x.end4,x.lensqlbind+1)-x.start4)) parm4
    INTO   l_sqlbind, l_parm1, l_parm2, l_parm3, l_parm4
    FROM   (
      SELECT l.*
      ,      NULLIF(REGEXP_INSTR(lsqlbind,' #[0-9]+\([0-9]+\)\:',1,1,1,'i'),0) start1
      ,      NULLIF(REGEXP_INSTR(lsqlbind,' #[0-9]+\([0-9]+\)\:',1,2,0,'i'),0) end1
      ,      NULLIF(REGEXP_INSTR(lsqlbind,' #[0-9]+\([0-9]+\)\:',1,2,1,'i'),0) start2
      ,      NULLIF(REGEXP_INSTR(lsqlbind,' #[0-9]+\([0-9]+\)\:',1,3,0,'i'),0) end2
      ,      NULLIF(REGEXP_INSTR(lsqlbind,' #[0-9]+\([0-9]+\)\:',1,3,1,'i'),0) start3
      ,      NULLIF(REGEXP_INSTR(lsqlbind,' #[0-9]+\([0-9]+\)\:',1,4,0,'i'),0) end3
      ,      NULLIF(REGEXP_INSTR(lsqlbind,' #[0-9]+\([0-9]+\)\:',1,4,1,'i'),0) start4
      ,      NULLIF(REGEXP_INSTR(lsqlbind,' #[0-9]+\([0-9]+\)\:',1,5,1,'i'),0) end4
      ,      LENGTH(lsqlbind) lensqlbind
      FROM   sys.fga_log$ l
    ) x
    WHERE  x.sessionid = USERENV('SESSIONID')
    AND    x.entryid   = USERENV('ENTRYID')
    AND    x.obj$name  = 'PS_NVS_REPORT';
  EXCEPTION
    WHEN no_data_found THEN
      RAISE_APPLICATION_ERROR(-20000,'GFC_FGA_NVISION_HANDLER: No Audit Row');
  END;

  IF l_parm4 IS NULL THEN
    l_parm4 := l_parm3;
    l_parm3 := l_parm2;
    l_parm2 := l_parm1;
  END IF;

  IF l_parm4 IS NULL THEN
    l_parm4 := l_parm3;
    l_parm3 := l_parm2;
  END IF;

  IF l_parm4 IS NULL THEN
    l_parm4 := l_parm3;
  END IF;

  dbms_output.put_line(l_sqlbind);
  dbms_output.put_line(l_parm1);
  dbms_output.put_line(l_parm2);
  dbms_output.put_line(l_parm3);
  dbms_output.put_line(l_parm4);

  dbms_application_info.set_action(SUBSTR('PI='||psftapi.get_prcsinstance()||':'||l_parm4||':'||l_parm3,1,64));
--EXECUTE IMMEDIATE 'ALTER SESSION SET TRACEFILE_IDENTIFIER=''PI='||psftapi.get_prcsinstance()||':'||l_parm4||':'||l_parm3||'''';
END;
/
The action attribute is then picked up by the ASH data.  It is easy to extract the report ID and business unit from the action string with regular expressions, as in this example query.
set lines 160 trimspool on
column module format a12
column action format a32
column client_id format a12 
column prcsinstance format a9 heading 'Process|Instance'
column business_unit format a8 heading 'Business|Unit'
column report_id format a10
select DISTINCT module, action, client_id
,      REGEXP_SUBSTR(h.action,'[[:digit:]]+') prcsinstance
,      substr(regexp_substr(h.action,':([[:alnum:]])+',1,2),2) business_unit
,      substr(regexp_substr(h.action,':([A-Za-z0-9_-])+',1,1),2) report_id
from   v$active_session_History h
where  program like 'PSNVS%'
/
Here you can see how process instance, report ID and business unit are held in action and how they can be extracted.  Now, it is possible to profile ASH data for nVision processes, find the long-running SQL and determine which layout it came from.
                                                           Process   Business
MODULE       ACTION                           CLIENT_ID    Instance  Unit     REPORT_ID
------------ -------------------------------- ------------ --------- -------- ----------
RPTBOOK      PI=1780508:GBGL224S:UK001        GBNVISION    1780508   UK001    GBGL113S
RPTBOOK      PI=1780509:GBGL010E:UK002        GBNVISION    1780509   UK002    GBGL010E
RPTBOOK      PI=1780502:GBGL91PF:UK001        GBNVISION    1780502   UK001    GBGL91PF
RPTBOOK      PI=1780502:GBGL91FR:UK001        GBNVISION    1780502   UK001    GBGL91FR
RPTBOOK      PI=1780502:GBGL91GB:UK001        GBNVISION    1780502   UK001    GBGL91GB
RPTBOOK      PI=1780502:GBGL91DM:UK002        GBNVISION    1780502   UK002    GBGL91DM
RPTBOOK      PI=1780506:GBEXP2AM:UK001        GBNVISION    1780506   UK001    GBEXP2AM
RPTBOOK      PI=1780509:Processing            GBNVISION    1780509            Processing
RPTBOOK      PI=1780500:GBGL113S:UK003        GBNVISION    1780500   UK003    GBGL113S
RPTBOOK      PI=1780509:GBGL010E:UK000        GBNVISION    1780508   UK000    GBGL010E 
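Building on that, the ASH data can be aggregated by report ID and SQL_ID to show where the database time is going; a sketch using the in-memory ASH buffer, where each sample represents roughly one second of active database time:
SELECT   substr(regexp_substr(h.action,':([A-Za-z0-9_-])+',1,1),2) report_id
,        h.sql_id
,        h.sql_plan_hash_value
,        COUNT(*) ash_secs
FROM     v$active_session_history h
WHERE    h.program LIKE 'PSNVS%'
GROUP BY substr(regexp_substr(h.action,':([A-Za-z0-9_-])+',1,1),2)
,        h.sql_id, h.sql_plan_hash_value
ORDER BY ash_secs DESC
FETCH FIRST 20 ROWS ONLY
/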
This code in this blog is available on github.

Other recommendations

  • Create an index on SYS.FGA_LOG$ to support the query in the FGA handler package.
CREATE INDEX sys.fga_log$_obj$name
ON sys.fga_log$ (obj$name, sessionid, entryid)
TABLESPACE sysaux PCTFREE 1 COMPRESS 1 
/
  • Put a regular purge of the FGA_LOG$ table in place, to purge rows after, say, 31 days.  Otherwise, it will grow indefinitely; one row will be added for every nVision report run.  For example:
DELETE FROM fga_log$ 
WHERE obj$name = 'PS_NVS_REPORT' 
AND ntimestamp# < sysdate - 31
/
  • Move SYS.AUD$ and SYS.FGA_LOG$ from the SYSTEM tablespace to another ASSM tablespace using the instructions in Oracle support note 1328239.1.
1. My thanks to Tyler McPheeters for the original idea of using FGA