It has been almost two months since my last post. In this post I share a real performance issue that I solved recently. Without going into too much detail, the OLD QUERY below is the one that was performing poorly.
====================================================================
OLD QUERY
====================================================================
SELECT SUM (NVL (DUD.USAGE_COUNT, 0)) AS c1,
SUM (NVL (RKAS.ALLOCATION, 1)) AS c2,
RKAS.LEVEL1_CLASS_DESCR AS c3,
CONCAT (CONCAT (E.LAST_NAME, ','), E.FST_NAME) AS c4,
SF.ET_STATUS_VALUE AS c5
FROM CONS_AUTHORS CA,
EMPLOYEE E,
LOV L,
DAY_D D,
((USER_CONTACT_PERSON UCP
INNER JOIN SERVICE_FLAT SF
ON UCP.USER_KEYID = SF.USER_KEYID
AND SF.USER_KEYID > 0)
INNER JOIN DOCUMENT_USAGE_DSNP DUD
ON SF.USER_KEYID > 0
AND SF.ET_KEYID = DUD.ET_KEYID
AND SF.USER_KEYID = DUD.USER_KEYID)
LEFT OUTER JOIN RESOURCE_KI_ALLOC_SNP RKAS
ON RKAS.WEB_VISIBLE_FLG = 'Y'
AND RKAS.RESOURCE_KEYID = DUD.RESOURCE_KEYID
AND RKAS.TYPE_CD = 'WEB'
WHERE ( SF.ET_ORG_EBS_SPK = '39854'
AND D.RID = DUD.DAILY_TIME_WID
AND DUD.RESOURCE_KEYID = CA.CONS_KEYID
AND CA.EMP_KEYID = E.ROW_KEYID
AND CA.EMP_ROLE_LOV_WID = L.RID
AND L.VALUE = 'PRIATHR'
AND L.R_TYPE = 'ATHRCD'
AND '201604' < D.CURR_FISC_YEARMO
AND RKAS.LEVEL1_CLASS_DESCR IS NOT NULL)
GROUP BY RKAS.LEVEL1_CLASS_DESCR,
CONCAT (CONCAT (E.LAST_NAME, ','), E.FST_NAME),
SF.ET_STATUS_VALUE
ORDER BY c2 DESC,
c3,
c5,
c4
Elapsed: 00:11:24.74
Execution Plan
----------------------------------------------------------
Plan hash value: 2221711877
--------------------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop |
--------------------------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 690 (1)| 00:00:01 | | |
| 1 | SORT AGGREGATE | | 1 | | | | | |
| 2 | VIEW | | 1 | | 690 (1)| 00:00:01 | | |
| 3 | HASH GROUP BY | | 1 | 190 | 690 (1)| 00:00:01 | | |
| 4 | NESTED LOOPS | | 1 | 190 | 689 (0)| 00:00:01 | | |
| 5 | NESTED LOOPS | | 1 | 185 | 687 (0)| 00:00:01 | | |
| 6 | NESTED LOOPS | | 1 | 127 | 685 (0)| 00:00:01 | | |
| 7 | NESTED LOOPS | | 1 | 117 | 683 (0)| 00:00:01 | | |
| 8 | NESTED LOOPS | | 27 | 2430 | 624 (0)| 00:00:01 | | |
| 9 | NESTED LOOPS | | 1 | 64 | 72 (0)| 00:00:01 | | |
|* 10 | HASH JOIN | | 1 | 39 | 70 (0)| 00:00:01 | | |
| 11 | TABLE ACCESS BY INDEX ROWID | LOV | 1 | 24 | 2 (0)| 00:00:01 | | |
|* 12 | INDEX RANGE SCAN | LOV_M6 | 1 | | 1 (0)| 00:00:01 | | |
| 13 | TABLE ACCESS FULL | CONS_AUTHORS | 277K| 4072K| 68 (0)| 00:00:01 | | |
| 14 | TABLE ACCESS BY INDEX ROWID | EMPLOYEE | 1 | 25 | 2 (0)| 00:00:01 | | |
|* 15 | INDEX RANGE SCAN | EMPLOYEE_M6 | 1 | | 1 (0)| 00:00:01 | | |
|* 16 | TABLE ACCESS BY GLOBAL INDEX ROWID| DOCUMENT_USAGE_DSNP | 516 | 13416 | 552 (0)| 00:00:01 | ROWID | ROWID |
|* 17 | INDEX RANGE SCAN | DOCUMENT_USAGE_DSNP_F1 | 610 | | 2 (0)| 00:00:01 | | |
| 18 | TABLE ACCESS BY INDEX ROWID | SERVICE_FLAT | 1 | 27 | 3 (0)| 00:00:01 | | |
|* 19 | INDEX RANGE SCAN | SERVICE_FLAT_M2 | 1 | | 2 (0)| 00:00:01 | | |
|* 20 | TABLE ACCESS BY INDEX ROWID | DAY_D | 1 | 10 | 2 (0)| 00:00:01 | | |
|* 21 | INDEX RANGE SCAN | INDX_RID | 1 | | 1 (0)| 00:00:01 | | |
|* 22 | TABLE ACCESS BY INDEX ROWID | RESOURCE_KI_ALLOC_SNP | 1 | 58 | 2 (0)| 00:00:01 | | |
|* 23 | INDEX RANGE SCAN | RESOURCE_KI_ALLOC_SNP_F1 | 1 | | 1 (0)| 00:00:01 | | |
|* 24 | INDEX RANGE SCAN | WC_CUSTOMER_FLAT_M1 | 2 | 10 | 2 (0)| 00:00:01 | | |
--------------------------------------------------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
10 - access("CA"."EMP_ROLE_LOV_WID"="L"."RID")
12 - access("L"."VALUE"='PRIATHR' AND "L"."R_TYPE"='ATHRCD')
15 - access("CA"."EMP_KEYID"="E"."ROW_KEYID")
16 - filter("DUD"."USER_KEYID">0)
17 - access("DUD"."RESOURCE_KEYID"="CA"."CONS_KEYID")
19 - access("SF"."ET_ORG_EBS_SPK"=39854 AND "SF"."USER_KEYID"="DUD"."USER_KEYID" AND
"SF"."ET_KEYID"="DUD"."ET_KEYID")
filter("SF"."USER_KEYID">0)
20 - filter("D"."CURR_FISC_YEARMO">201604)
21 - access("D"."RID"="DUD"."DAILY_TIME_WID")
22 - filter("RKAS"."WEB_VISIBLE_FLG"='Y' AND "RKAS"."LEVEL1_CLASS_DESCR" IS NOT NULL)
23 - access("RKAS"."RESOURCE_KEYID"="DUD"."RESOURCE_KEYID" AND "RKAS"."TYPE_CD"='WEB')
24 - access("UCP"."USER_KEYID"="SF"."USER_KEYID")
filter("UCP"."USER_KEYID">0)
Statistics
----------------------------------------------------------
8 recursive calls
0 db block gets
238347130 consistent gets
0 physical reads
0 redo size
347 bytes sent via SQL*Net to client
476 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed
Here DAY_D is a dimension table for dates. In the query above we filter DOCUMENT_USAGE_DSNP by joining it to DAY_D and placing the predicate on DAY_D. The same filtering can be achieved by putting the condition directly on DOCUMENT_USAGE_DSNP.DAILY_DT. When I looked at the structure of DOCUMENT_USAGE_DSNP, I found that the table is also partitioned monthly on DAILY_DT. So I planned to move the condition to DOCUMENT_USAGE_DSNP.DAILY_DT and gain performance through partition pruning.
A step-by-step drill-down also revealed that after the following predicates are applied to the LOV table, Oracle goes for a NESTED LOOPS join between CONS_AUTHORS and DOCUMENT_USAGE_DSNP:
a. AND L.VALUE = 'PRIATHR'
b. AND L.R_TYPE = 'ATHRCD'
So I made the following two changes to the query:
1) Added a predicate: AND DUD.DAILY_DT >= TO_DATE('201605','YYYYMM')
2) Added a join-method hint: /*+ NO_USE_NL(CA DUD) */
When I tested the new version of the query, I got the following stats.
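One note on the new predicate, in case the format mask looks odd: with a month-precision mask, the date resolves to midnight on the first day of that month, which is why the optimized plan shows the filter as DAILY_DT >= '2016-05-01 00:00:00'. A quick Python sketch of the same month-precision parsing (an illustration only, not Oracle code):

```python
from datetime import datetime

# Parsing a year-month string gives the first instant of that month,
# mirroring Oracle's TO_DATE('201605', 'YYYYMM').
cutoff = datetime.strptime("201605", "%Y%m")
print(cutoff)  # 2016-05-01 00:00:00
```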
====================================================================
OPTIMIZED QUERY
====================================================================
SELECT /*+ NO_USE_NL(CA DUD) */
SUM (NVL (DUD.USAGE_COUNT, 0)) AS c1,
SUM (NVL (RKAS.ALLOCATION, 1)) AS c2,
RKAS.LEVEL1_CLASS_DESCR AS c3,
CONCAT (CONCAT (E.LAST_NAME, ','), E.FST_NAME) AS c4,
SF.ET_STATUS_VALUE AS c5
FROM CONS_AUTHORS CA,
EMPLOYEE E,
LOV L,
DAY_D D,
((USER_CONTACT_PERSON UCP
INNER JOIN SERVICE_FLAT SF
ON UCP.USER_KEYID = SF.USER_KEYID
AND SF.USER_KEYID > 0)
INNER JOIN DOCUMENT_USAGE_DSNP DUD
ON SF.USER_KEYID > 0
AND SF.ET_KEYID = DUD.ET_KEYID
AND SF.USER_KEYID = DUD.USER_KEYID)
LEFT OUTER JOIN RESOURCE_KI_ALLOC_SNP RKAS
ON RKAS.WEB_VISIBLE_FLG = 'Y'
AND RKAS.RESOURCE_KEYID = DUD.RESOURCE_KEYID
AND RKAS.TYPE_CD = 'WEB'
WHERE ( SF.ET_ORG_EBS_SPK = '39854'
AND D.RID = DUD.DAILY_TIME_WID
AND DUD.RESOURCE_KEYID = CA.CONS_KEYID
AND CA.EMP_KEYID = E.ROW_KEYID
AND CA.EMP_ROLE_LOV_WID = L.RID
AND L.VALUE = 'PRIATHR'
AND L.R_TYPE = 'ATHRCD'
AND '201604' < D.CURR_FISC_YEARMO
AND DUD.DAILY_DT >= TO_DATE('201605','YYYYMM')
AND RKAS.LEVEL1_CLASS_DESCR IS NOT NULL)
GROUP BY RKAS.LEVEL1_CLASS_DESCR,
CONCAT (CONCAT (E.LAST_NAME, ','), E.FST_NAME),
SF.ET_STATUS_VALUE
ORDER BY c2 DESC,
c3,
c5,
c4
Elapsed: 00:00:16.29
Execution Plan
----------------------------------------------------------
Plan hash value: 714223888
---------------------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time | Pstart| Pstop |
---------------------------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 5850 (1)| 00:00:02 | | |
| 1 | SORT AGGREGATE | | 1 | | | | | |
| 2 | VIEW | | 67 | | 5850 (1)| 00:00:02 | | |
| 3 | TEMP TABLE TRANSFORMATION | | | | | | | |
| 4 | LOAD AS SELECT | SYS_TEMP_0FD9D70C7_46C13EA8 | | | | | | |
|* 5 | TABLE ACCESS FULL | RESOURCE_KI_ALLOC_SNP | 26083 | 1477K| 266 (0)| 00:00:01 | | |
| 6 | HASH GROUP BY | | 67 | 12596 | 5584 (1)| 00:00:02 | | |
|* 7 | HASH JOIN | | 67 | 12596 | 5574 (1)| 00:00:02 | | |
|* 8 | HASH JOIN | | 45 | 8235 | 4869 (1)| 00:00:02 | | |
|* 9 | HASH JOIN | | 45 | 7110 | 4494 (1)| 00:00:02 | | |
|* 10 | HASH JOIN | | 45 | 6660 | 4470 (1)| 00:00:02 | | |
|* 11 | HASH JOIN | | 210 | 21000 | 4463 (1)| 00:00:02 | | |
|* 12 | HASH JOIN | | 2460 | 175K| 3634 (1)| 00:00:02 | | |
|* 13 | HASH JOIN | | 95 | 3705 | 840 (1)| 00:00:01 | | |
| 14 | TABLE ACCESS BY INDEX ROWID | LOV | 1 | 24 | 2 (0)| 00:00:01 | | |
|* 15 | INDEX RANGE SCAN | LOV_M6 | 1 | | 1 (0)| 00:00:01 | | |
| 16 | TABLE ACCESS BY INDEX ROWID | CONS_AUTHORS | 50587 | 741K| 838 (1)| 00:00:01 | | |
| 17 | BITMAP CONVERSION TO ROWIDS | | | | | | | |
| 18 | BITMAP AND | | | | | | | |
| 19 | BITMAP MERGE | | | | | | | |
| 20 | BITMAP KEY ITERATION | | | | | | | |
| 21 | TABLE ACCESS BY INDEX ROWID | LOV | 1 | 24 | 2 (0)| 00:00:01 | | |
|* 22 | INDEX RANGE SCAN | LOV_M6 | 1 | | 1 (0)| 00:00:01 | | |
| 23 | BITMAP CONVERSION FROM ROWIDS| | | | | | | |
|* 24 | INDEX RANGE SCAN | LOV_M7 | | | 37 (0)| 00:00:01 | | |
| 25 | BITMAP MERGE | | | | | | | |
| 26 | BITMAP KEY ITERATION | | | | | | | |
| 27 | TABLE ACCESS FULL | SYS_TEMP_0FD9D70C7_46C13EA8 | 26083 | 178K| 7 (0)| 00:00:01 | | |
| 28 | BITMAP CONVERSION FROM ROWIDS| | | | | | | |
|* 29 | INDEX RANGE SCAN | CONS_AUTHORS_F1 | | | 1 (0)| 00:00:01 | | |
| 30 | PARTITION RANGE ITERATOR | | 3697K| 119M| 2794 (0)| 00:00:01 | 102 |1048575|
| 31 | PARTITION LIST ALL | | 3697K| 119M| 2794 (0)| 00:00:01 | 1 | LAST |
|* 32 | TABLE ACCESS FULL | DOCUMENT_USAGE_DSNP | 3697K| 119M| 2794 (0)| 00:00:01 | 203 |1048575|
|* 33 | TABLE ACCESS FULL | SERVICE_FLAT | 311K| 8210K| 829 (0)| 00:00:01 | | |
| 34 | TABLE ACCESS FULL | SYS_TEMP_0FD9D70C7_46C13EA8 | 26083 | 1222K| 7 (0)| 00:00:01 | | |
|* 35 | VIEW | index$_join$_005 | 12762 | 124K| 24 (0)| 00:00:01 | | |
|* 36 | HASH JOIN | | | | | | | |
|* 37 | INDEX RANGE SCAN | INDX_CURR_FISC_YRMO | 12762 | 124K| 9 (0)| 00:00:01 | | |
| 38 | INDEX FAST FULL SCAN | INDX_RID | 12762 | 124K| 19 (0)| 00:00:01 | | |
| 39 | TABLE ACCESS FULL | EMPLOYEE | 334K| 8163K| 375 (0)| 00:00:01 | | |
|* 40 | INDEX FAST FULL SCAN | CUSTOMER_FLAT_M1 | 4453K| 21M| 705 (0)| 00:00:01 | | |
---------------------------------------------------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
5 - filter("RKAS"."TYPE_CD"='WEB' AND "RKAS"."WEB_VISIBLE_FLG"='Y' AND "RKAS"."LEVEL1_CLASS_DESCR" IS NOT NULL)
7 - access("UCP"."USER_KEYID"="SF"."USER_KEYID" AND "UCP"."USER_KEYID"="DUD"."USER_KEYID")
8 - access("CA"."EMP_KEYID"="E"."ROW_KEYID")
9 - access("D"."RID"="DUD"."DAILY_TIME_WID")
10 - access("C0"="DUD"."RESOURCE_KEYID" AND "C0"="CA"."CONS_KEYID")
11 - access("SF"."ET_KEYID"="DUD"."ET_KEYID" AND "SF"."USER_KEYID"="DUD"."USER_KEYID")
12 - access("DUD"."RESOURCE_KEYID"="CA"."CONS_KEYID")
13 - access("CA"."EMP_ROLE_LOV_WID"="L"."RID")
15 - access("L"."VALUE"='PRIATHR' AND "L"."R_TYPE"='ATHRCD')
22 - access("L"."VALUE"='PRIATHR' AND "L"."R_TYPE"='ATHRCD')
24 - access("CA"."EMP_ROLE_LOV_WID"="L"."RID")
29 - access("CA"."CONS_KEYID"="C0")
32 - filter("DUD"."DAILY_DT">=TO_DATE(' 2016-05-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss') AND "DUD"."USER_KEYID">0)
33 - filter("SF"."ET_ORG_EBS_SPK"=39854 AND "SF"."USER_KEYID">0)
35 - filter("D"."CURR_FISC_YEARMO">201604)
36 - access(ROWID=ROWID)
37 - access("D"."CURR_FISC_YEARMO">201604)
40 - filter("UCP"."USER_KEYID">0)
Note
-----
- star transformation used for this statement
Statistics
----------------------------------------------------------
11 recursive calls
58 db block gets
58846 consistent gets
49 physical reads
576 redo size
347 bytes sent via SQL*Net to client
476 bytes received via SQL*Net from client
2 SQL*Net roundtrips to/from client
0 sorts (memory)
0 sorts (disk)
1 rows processed
Statistics:
- Execution time dropped from 11 minutes 24 seconds to only 16 seconds.
- Consistent gets dropped from 238,347,130 to 58,846.
- The cost of the query increased from 690 to 5,850.
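For perspective, the two headline figures work out to roughly a 42x speed-up and over 4,000x fewer consistent gets. A quick sanity check on the numbers reported above:

```python
# Figures taken from the two autotrace runs above.
old_gets, new_gets = 238_347_130, 58_846
old_secs = 11 * 60 + 24.74   # elapsed 00:11:24.74
new_secs = 16.29             # elapsed 00:00:16.29

print(f"consistent gets: ~{old_gets / new_gets:,.0f}x fewer")  # ~4,050x
print(f"elapsed time:    ~{old_secs / new_secs:.0f}x faster")  # ~42x
```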
Key Takeaways:
1. Reducing cost is not the ultimate goal. Focus on other aspects too, such as consistent gets and physical reads.
2. Where applicable, apply predicates on the partition key so that the optimizer can eliminate unneeded partitions. This is called partition pruning.
3. If you know you are joining two large data sets, consider (or at least try) a HASH JOIN over a NESTED LOOPS join.
4. Knowing your database structure and your data is very important.
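For takeaway 3, the difference between the two join methods is easy to sketch outside the database. The Python snippet below is only an illustration of the algorithms (not how Oracle implements them): the nested loop probes the inner input once per outer row, while the hash join scans each input once.

```python
def nested_loop_join(outer, inner, okey, ikey):
    # One full pass over `inner` for every row of `outer`:
    # O(len(outer) * len(inner)) comparisons.
    return [(o, i) for o in outer for i in inner if o[okey] == i[ikey]]

def hash_join(outer, inner, okey, ikey):
    # Build a hash table over one input, then probe it once per row of
    # the other: roughly O(len(outer) + len(inner)) work.
    table = {}
    for i in inner:
        table.setdefault(i[ikey], []).append(i)
    return [(o, i) for o in outer for i in table.get(o[okey], [])]

# Tiny stand-ins for CONS_AUTHORS (CA) and DOCUMENT_USAGE_DSNP (DUD).
ca = [{"cons_keyid": 1}, {"cons_keyid": 2}]
dud = [{"resource_keyid": 1, "usage_count": 5},
       {"resource_keyid": 1, "usage_count": 7}]

# Both methods return the same rows; only the cost profile differs.
assert nested_loop_join(ca, dud, "cons_keyid", "resource_keyid") == \
       hash_join(ca, dud, "cons_keyid", "resource_keyid")
```

With two large inputs, the quadratic comparison count of the nested loop is exactly what drove the 238 million consistent gets in the old plan.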
I hope you enjoyed reading this article. You may have learnt something new, or you may have some suggestions. Please post your feedback and suggestions in the comment box.
Read More on Tuning SQL
- Why Primary Key Foreign Key Relationship and Join Elimination
- VIEW PUSHED PREDICATE - A Powerful Optimizer Transformation feature
- Why my distinct query is not using index?
- Avoiding unnecessary function calls to optimize SQL statements
- Why do I prefer COALESCE over NVL
- Query optimization tips for Oracle
- Tune Complete Refresh of Materialized View by atomic_refresh
- How to Fast Refresh Materialized View with Joins
====================================================================
COMMENTS
====================================================================
Comment: To be honest, I only understood the partition pruning part; what made you think partition pruning could solve the issue is still a mystery to me, because I badly failed to understand the query. Thank you, sir!
Reply: When you create partitions, a table is divided into multiple segments. If we provide the filter on the partition key, only the related partition segments are read. This table was partitioned monthly, with data going back to roughly 2010, and our condition was >= 201605, so we only needed the partitions from 201605 to 201701 (only 9 partitions out of 100+). The performance gain comes from skipping the other 90+ partition segments and reading only those 9.
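The partition arithmetic in that reply can be sketched as follows (the table's actual partition history is not shown in the post, so the 2010 start below is just the reply's rough figure):

```python
def month_range(start, end):
    """Enumerate YYYYMM partition keys from start through end, inclusive."""
    y, m = divmod(start, 100)
    keys = []
    while y * 100 + m <= end:
        keys.append(y * 100 + m)
        m += 1
        if m > 12:
            y, m = y + 1, 1
    return keys

all_partitions = month_range(201001, 201701)          # hypothetical monthly partitions
scanned = [p for p in all_partitions if p >= 201605]  # pruned by the DAILY_DT predicate

print(len(all_partitions), "partitions,", len(scanned), "scanned")  # 85 partitions, 9 scanned
```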
Comment: That much about partition pruning I understood. What I did not understand was this statement from your post: when you looked at the structure of the tables you found that DOCUMENT_USAGE_DSNP was partitioned on the DAILY_DT column, yet we are filtering the same table through the DAY_D table joined back to DOCUMENT_USAGE_DSNP.
Reply: DAY_D is a dimension table for dates. In the original query we filtered DOCUMENT_USAGE_DSNP by joining it with DAY_D, with the filter placed on DAY_D. That has almost the same effect as putting the condition directly on DOCUMENT_USAGE_DSNP.DAILY_DT, and the table happened to be partitioned on DAILY_DT too. So I moved the condition to DOCUMENT_USAGE_DSNP.DAILY_DT.
Comment: Nice article, I just saw it. Thanks Nimish, it's very nice of you.
Comment: What is "((USER_CONTACT_PERSON UCP" in your query? This is the first time I have seen brackets used around a table name. Could you please clarify my doubt? Thanks in advance.
Reply: It is a table.
Comment: Why did we use the brackets? I know it is a silly question, but I need clarification.
Reply: This query was generated by OBIEE. The brackets just group the joins so they are performed in a particular order.
Comment: Writing this much code in one go must be really tough.
Comment: Good article.
Comment: Very well written, concise article!
Comment: I would like to request a blog post analyzing execution plans: how nested loops, hash joins, and merge joins work, and how we can improve performance based on them.
Comment: Nimish, can you clarify how you convinced OBIEE to add the beneficial predicates? -- Jim
Reply: I passed this info to the OBIEE team, and they made the changes in the model. How exactly, I am not sure.
Comment: Hi Nimish, greetings. I don't understand the partitioning concept in Oracle. Can you please give a simple example covering all the partition types?