Feed aggregator

ORA-22835 Buffer too small for CLOB to CHAR or BLOB to RAW conversion when reading AUD$

Tom Kyte - 9 hours 30 min ago
Hello Tom, I am trying to read sqltext from AUD$. However, I am skipping the 32K and SQL is giving me an ORA error: ORA-22835 Buffer too small for CLOB to CHAR or BLOB to RAW conversion (actual: 4664, maximum: 4000). SQL is: SELECT TO_CHAR(NTIME...
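
A common workaround (a sketch, not from the question; AUD$ access and the length filter are assumptions) is to read the CLOB in 4,000-byte chunks with DBMS_LOB.SUBSTR instead of TO_CHAR:

SELECT dbms_lob.substr(sqltext, 4000, 1)    AS first_chunk,
       dbms_lob.substr(sqltext, 4000, 4001) AS next_chunk
FROM   sys.aud$
WHERE  dbms_lob.getlength(sqltext) > 4000;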
Categories: DBA Blogs

I want to use case in a SQL statement but getting error: too many values.

Tom Kyte - 9 hours 30 min ago
My Query is this: Select cm.CASE_NUM , lrc.AGENCY_NAME , lg.GENDER , (Select PAT_STAT_PREG, CASE PAT_STAT_PREG when 0 then 'No' when 1 then 'Yes' when 2 then 'Unknown' else 'Not Applicable' END from case_pat_info ) from safetyreport sr...
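
For context, "too many values" is raised because the scalar subquery selects two expressions; a scalar subquery may return exactly one column. A hedged sketch of the fix (the correlation predicate is an assumption, since the question is truncated):

Select cm.CASE_NUM,
       lrc.AGENCY_NAME,
       lg.GENDER,
       (Select CASE cpi.PAT_STAT_PREG
                    when 0 then 'No'
                    when 1 then 'Yes'
                    when 2 then 'Unknown'
                    else 'Not Applicable'
               END
        from   case_pat_info cpi
        where  cpi.case_num = cm.case_num)  -- hypothetical correlation
from safetyreport sr...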
Categories: DBA Blogs

Objects with Stat locked

Tom Kyte - 9 hours 30 min ago
Hi, We have noticed that for a few tables and indexes in our DB the stats are locked, and it's causing some queries to use bad plans. We are trying to unlock them and gather stats to try and help the queries run faster. We can find the tables and indexes fo...
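
For context, locked statistics can be listed and released with DBMS_STATS; a minimal sketch (the owner and table names are placeholders):

SELECT owner, table_name, stattype_locked
FROM   dba_tab_statistics
WHERE  stattype_locked IS NOT NULL;

EXEC DBMS_STATS.UNLOCK_TABLE_STATS('APP_OWNER', 'SOME_TABLE');
EXEC DBMS_STATS.GATHER_TABLE_STATS('APP_OWNER', 'SOME_TABLE');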
Categories: DBA Blogs

service issue with pluggable database

Tom Kyte - 9 hours 30 min ago
I used Windows Server 2012 (64-bit) and installed Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production. I created a pluggable database as per the following link: https://www.oracle.com/technetwork/articles/da...
Categories: DBA Blogs

ACL created but accessing gives ORA-29273 ORA-12541

Tom Kyte - 9 hours 30 min ago
I have created an ACL and assigned it to a host. When accessing it I get the above errors. I did the following steps: SQL> exec dbms_network_acl_admin.create_acl(acl=>'testlitle.xml', description=> 'all hctra.net connections',principal=>'TAG_OWNER't=...
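
For reference, ORA-12541 ("TNS: no listener") usually means the ACL itself is fine but nothing is listening at the target host/port; the ACL also needs an ASSIGN_ACL call. A sketch of the usual 11g sequence (the stray quote in the question's principal argument corrected; the host pattern is an assumption):

BEGIN
  dbms_network_acl_admin.create_acl(
    acl         => 'testlitle.xml',
    description => 'all hctra.net connections',
    principal   => 'TAG_OWNER',
    is_grant    => TRUE,
    privilege   => 'connect');
  dbms_network_acl_admin.assign_acl(
    acl  => 'testlitle.xml',
    host => '*.hctra.net');
  COMMIT;
END;
/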
Categories: DBA Blogs

LOAD JAVA Errors

Tom Kyte - 9 hours 30 min ago
Hi, We have a default Oracle 8i (8.1.6) server installation. Following is the Java program: import java.sql.*; import java.io.*; public class DropAny{ public static void object(String object_type,String object_name) ...
Categories: DBA Blogs

Max value to be used in a parallel hint

Tom Kyte - 9 hours 30 min ago
Hi, I am working on Oracle Database 11gR2 Enterprise Edition, Single Instance, and I want to use a parallel hint in one of my queries. Below are the values taken from v$parameter: parallel_min_time_threshold AUTO parallel_max_servers...
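
As a quick illustration (the table name is hypothetical), a statement-level hint requests a degree of parallelism, while parallel_max_servers caps the number of PX servers the instance will actually grant:

SELECT /*+ PARALLEL(t, 8) */ COUNT(*)
FROM   big_table t;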
Categories: DBA Blogs

Partitioning -- 13a : Relocating a Partition

Hemant K Chitale - Sun, 2018-12-16 05:19
When you want to (or need to) move a Partition to a different Tablespace (e.g. as part of a Lifecycle Management policy), you may need downtime to relocate the Partition.  However, 12c Release 1 allows Online Relocation of a Partition.

Let's say I have a SALES_DATA table and I need to move the Year 2016 data to a tablespace with datafiles on "cheaper" (less performant) storage :

SQL> select partition_name, tablespace_name
2 from user_tab_partitions
3 where table_name = 'SALES_DATA'
4 order by partition_position
5 /

PARTITION_NAME TABLESPACE_NAME
------------------------------ ------------------------------
P_2016 TBS_YEAR_2016
P_2017 TBS_YEAR_2017
P_2018 TBS_YEAR_2018
P_2019 TBS_YEAR_2019
P_2020 TBS_YEAR_2020
P_MAXVALUE USERS

6 rows selected.

SQL>
SQL> select index_name, tablespace_name, status
2 from user_indexes
3 where table_name = 'SALES_DATA'
4 /

INDEX_NAME TABLESPACE_NAME STATUS
------------------------------ ------------------------------ --------
SYS_C0017514 HEMANT VALID
SALES_DATA_LCL_NDX_1 N/A
SALES_DATA_LCL_NDX_2 N/A

SQL>
SQL> l
1 select index_name, partition_name, tablespace_name, status
2 from user_ind_partitions
3 where index_name in
4 (select index_name
5 from user_indexes
6 where table_name = 'SALES_DATA')
7* order by index_name, partition_position
SQL> /

INDEX_NAME PARTITION_NA TABLESPACE_NAM STATUS
------------------------------ ------------ -------------- --------
SALES_DATA_LCL_NDX_1 P_2016 TBS_YEAR_2016 USABLE
SALES_DATA_LCL_NDX_1 P_2017 TBS_YEAR_2017 USABLE
SALES_DATA_LCL_NDX_1 P_2018 TBS_YEAR_2018 USABLE
SALES_DATA_LCL_NDX_1 P_2019 TBS_YEAR_2019 USABLE
SALES_DATA_LCL_NDX_1 P_2020 TBS_YEAR_2020 USABLE
SALES_DATA_LCL_NDX_1 P_MAXVALUE USERS USABLE
SALES_DATA_LCL_NDX_2 P_2016 TBS_YEAR_2016 USABLE
SALES_DATA_LCL_NDX_2 P_2017 TBS_YEAR_2017 USABLE
SALES_DATA_LCL_NDX_2 P_2018 TBS_YEAR_2018 USABLE
SALES_DATA_LCL_NDX_2 P_2019 TBS_YEAR_2019 USABLE
SALES_DATA_LCL_NDX_2 P_2020 TBS_YEAR_2020 USABLE
SALES_DATA_LCL_NDX_2 P_MAXVALUE USERS USABLE

12 rows selected.

SQL>


I then want to move the Year 2016 data to the Tablespace ARCHIVE_SALES_DATA :

SQL> alter table SALES_DATA
2 move partition P_2016
3 tablespace ARCHIVE_SALES_DATA
4 /

Table altered.

SQL>
SQL> select partition_name, tablespace_name
2 from user_tab_partitions
3 where table_name = 'SALES_DATA'
4 order by partition_position
5 /

PARTITION_NA TABLESPACE_NAME
------------ ------------------
P_2016 ARCHIVE_SALES_DATA
P_2017 TBS_YEAR_2017
P_2018 TBS_YEAR_2018
P_2019 TBS_YEAR_2019
P_2020 TBS_YEAR_2020
P_MAXVALUE USERS

6 rows selected.

SQL>
SQL> select index_name, tablespace_name, partitioned, status
2 from user_indexes
3 where table_name = 'SALES_DATA'
4 order by index_name
5 /

INDEX_NAME TABLESPACE_NAME PAR STATUS
------------------------------ ------------------ --- --------
SALES_DATA_LCL_NDX_1 YES N/A
SALES_DATA_LCL_NDX_2 YES N/A
SYS_C0017514 HEMANT NO UNUSABLE

SQL> alter index SYS_C0017514 rebuild ;

Index altered.

SQL> select index_name, tablespace_name, partitioned, status
2 from user_indexes
3 where table_name = 'SALES_DATA'
4 order by index_name
5 /

INDEX_NAME TABLESPACE_NAME PAR STATUS
------------------------------ ------------------ --- --------
SALES_DATA_LCL_NDX_1 YES N/A
SALES_DATA_LCL_NDX_2 YES N/A
SYS_C0017514 HEMANT NO VALID

SQL>
SQL> l
1 select index_name, partition_name, tablespace_name, status
2 from user_ind_partitions
3 where index_name in
4 (select index_name
5 from user_indexes
6 where table_name = 'SALES_DATA')
7* order by index_name, partition_position
SQL> /

INDEX_NAME PARTITION_NA TABLESPACE_NAME STATUS
------------------------------ ------------ ------------------ --------
SALES_DATA_LCL_NDX_1 P_2016 TBS_YEAR_2016 UNUSABLE
SALES_DATA_LCL_NDX_1 P_2017 TBS_YEAR_2017 USABLE
SALES_DATA_LCL_NDX_1 P_2018 TBS_YEAR_2018 USABLE
SALES_DATA_LCL_NDX_1 P_2019 TBS_YEAR_2019 USABLE
SALES_DATA_LCL_NDX_1 P_2020 TBS_YEAR_2020 USABLE
SALES_DATA_LCL_NDX_1 P_MAXVALUE USERS USABLE
SALES_DATA_LCL_NDX_2 P_2016 TBS_YEAR_2016 UNUSABLE
SALES_DATA_LCL_NDX_2 P_2017 TBS_YEAR_2017 USABLE
SALES_DATA_LCL_NDX_2 P_2018 TBS_YEAR_2018 USABLE
SALES_DATA_LCL_NDX_2 P_2019 TBS_YEAR_2019 USABLE
SALES_DATA_LCL_NDX_2 P_2020 TBS_YEAR_2020 USABLE
SALES_DATA_LCL_NDX_2 P_MAXVALUE USERS USABLE

12 rows selected.

SQL>
SQL> alter index SALES_DATA_LCL_NDX_1
2 rebuild partition P_2016
3 tablespace ARCHIVE_SALES_DATA
4 /

Index altered.

SQL>
SQL> alter index SALES_DATA_LCL_NDX_2
2 rebuild partition P_2016
3 tablespace ARCHIVE_SALES_DATA
4 /

Index altered.

SQL>
SQL> l
1 select index_name, partition_name, tablespace_name, status
2 from user_ind_partitions
3 where index_name in
4 (select index_name
5 from user_indexes
6 where table_name = 'SALES_DATA')
7* order by index_name, partition_position
SQL> /

INDEX_NAME PARTITION_NA TABLESPACE_NAME STATUS
------------------------------ ------------ ------------------ --------
SALES_DATA_LCL_NDX_1 P_2016 ARCHIVE_SALES_DATA USABLE
SALES_DATA_LCL_NDX_1 P_2017 TBS_YEAR_2017 USABLE
SALES_DATA_LCL_NDX_1 P_2018 TBS_YEAR_2018 USABLE
SALES_DATA_LCL_NDX_1 P_2019 TBS_YEAR_2019 USABLE
SALES_DATA_LCL_NDX_1 P_2020 TBS_YEAR_2020 USABLE
SALES_DATA_LCL_NDX_1 P_MAXVALUE USERS USABLE
SALES_DATA_LCL_NDX_2 P_2016 ARCHIVE_SALES_DATA USABLE
SALES_DATA_LCL_NDX_2 P_2017 TBS_YEAR_2017 USABLE
SALES_DATA_LCL_NDX_2 P_2018 TBS_YEAR_2018 USABLE
SALES_DATA_LCL_NDX_2 P_2019 TBS_YEAR_2019 USABLE
SALES_DATA_LCL_NDX_2 P_2020 TBS_YEAR_2020 USABLE
SALES_DATA_LCL_NDX_2 P_MAXVALUE USERS USABLE

12 rows selected.

SQL>


When I relocated the P_2016 Partition (to the ARCHIVE_SALES_DATA Tablespace), the ROWIDs for the rows in that Partition changed.  So the Non-Partitioned Index SYS_C0017514 and the corresponding Local Partitions of the two Partitioned Indexes became "UNUSABLE".  These had to be rebuilt.  Alternatively, I could have added the UPDATE INDEXES clause to the ALTER TABLE ... MOVE PARTITION ... statement to keep the Indexes Usable, but this would not have relocated the Local Partitions of those two Indexes to the new Tablespace.
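
A sketch of that alternative, using the same objects as above:

alter table SALES_DATA
  move partition P_2016
  tablespace ARCHIVE_SALES_DATA
  update indexes
/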

Note that for Table Partitions the MOVE clause relocates the Partition, but for Index Partitions the REBUILD clause is used to relocate (as well as make Usable) the Partition.

I would encourage you to review the documentation and examples of the MOVE ONLINE facility in 12c, which relocates a Table Partition without downtime.
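
A minimal sketch of that online variant (12c Release 1 syntax; the indexes are maintained as part of the move):

alter table SALES_DATA
  move partition P_2016 online
  tablespace ARCHIVE_SALES_DATA
  update indexes
/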


Categories: DBA Blogs

The job where I learned the most valuable lessons was...

Joel Kallman - Sat, 2018-12-15 14:07
...McDonald's!  Prior to Oracle, I worked for a number of different companies: in IT at a bank and in product development at a couple of other large companies.  But the most valuable lessons I ever learned, and what has helped me the most in my entire career, came from the education I received in my late teens at McDonald's.

Oftentimes, when people reflect on who shaped them the most in life, it's usually a coach or a high school teacher or a university professor.  In my case, it was Louis Stallman, Jr. - the general manager of a McDonald's franchise in Wadsworth, Ohio, where I worked in high school and during the summer breaks between my years at university.

I'm second-generation American, the product of Polish and German immigrants.  I'm the youngest of 7 children (my Mom was also the 7th child in her family).  My parents grew up during the Depression and passed their life lessons on to us - waste nothing, speak when you're spoken to, and always work & do your best.  My Dad was in the US Army Air Corps and flew on B-29s during World War II.  He was the disciplinarian.  At the time, my friends thought that I grew up in an overly strict household, but I can say today with certainty that this disciplined household provided complete structure.  There was a lucid difference between doing something right and doing something wrong.

When I started at McDonald's in high school, I wasn't sure what to expect.  It probably was a bit more rigid than what most kids were used to, but having grown up in a disciplined family, it was pretty easy to adapt to.  I certainly wasn't perfect - I was the typical crass teenager who thought he knew everything.  But it was a very structured environment and there was always a right and a wrong way to do something.

The general manager of this McDonald's was Louis Stallman, Jr.  While he was a kind man, he was also a bit of a disciplinarian.  There was no slouching about.  This is where I learned the infamous phrase "if you have time to lean, you have time to clean".  I made a whopping $3.35 an hour, and we were put to work and earned every cent of it.  I worked a number of years there: in the summers, over holiday breaks, in some cases even on the holidays themselves.  I had the good fortune to interact with a lot of people, some nice, some not so nice.  Lou Stallman coached everyone who interacted with a customer, and when you failed or omitted something, he was always there to remind you.  What I learned from Lou Stallman, and what I've carried forward into my professional life:

  1. Greet the customer with a smile, always.
  2. When talking to a customer, look at them directly in the eye.
  3. Talk clearly and repeat back to the customer what they told you.
  4. Treat the customer (and really everyone) with respect and dignity.
  5. Genuinely thank the customer and wish them farewell.

That's it.  Seems obvious, right?  It is, but I see countless people forego these very basic courtesies when interacting with any customer.

I'll greet anyone at any time.  It's the kind thing to do.  Where I live in the USA (Ohio), it's generally considered rude not to greet someone: you put on a smile and say "good morning" while waiting for an elevator, or "hello" while passing someone on the street.  I always enjoy going to Oracle HQ in California and greeting everyone I pass on the sidewalk - they look at me like I'm from Mars.

In a disconnected, smartphone-crazy world, I find people are a bit afraid to engage someone and look them in the eye while talking to them or listening to them.  There couldn't be a better way to say "you have my full attention" than looking at them directly, without distractions.

When listening to a customer, pay careful attention to what they're asking, and repeat back to them what you think they said.  Maybe they're complaining and need to vent some steam.  Maybe what they really want isn't accurately expressed in the words they're using, so repeat back to them what you think they want.  That's the perfect time to ensure you have a mutual understanding.  And people like to know that someone has truly listened to them.

The customer (and really everyone) should always, always, always be treated with respect and dignity.  At all times.  The customer may not always be right, but they are deserving of respect and dignity.

Always thank the customer.  Always.  You may have killed yourself for your customer and you might think that they owe you something.  But still thank them.  They're the reason why you're able to feed your family.  Competition is everywhere, and an easy way to stand out is through a personal connection.  Something is wrong if you can't extend a simple courtesy to your customer, thanking them for their business.

There you have it - the simple lessons I learned from McDonald's general manager Louis Stallman, Jr.  They're not technical and they're not difficult to grasp.  But they have broad applicability to almost any job in any field anywhere in the world.  They have served me well and I encourage you to try them yourself.

Off Canvas Menu in Oracle VBCS/JET Cloud

Andrejus Baranovski - Sat, 2018-12-15 13:59
These days I'm actively working with the VBCS/JET Cloud product from Oracle. The more I work with VBCS, the more I like it. VBCS follows declarative development concepts similar to Oracle ADF, which makes it easy to get up to speed with VBCS development. VBCS, with its declarative JavaScript development approach, brings a unique solution for enterprise JavaScript system implementation.

I will share a sample with an off-canvas menu implementation for a VBCS app. The sample is based on the step-by-step guide shared by Shay Shmeltzer. I won't describe the steps to build the off-canvas menu in VBCS from scratch; you should watch Shay's video for the instructions.

Off canvas menu rendered in VBCS app:


To see how to build multiple flows in a VBCS app, check my previous post - Flow Navigation Menu Control in Oracle VBCS. I have defined three flows in my sample, which means there will be three menu items:


To render the menu in the off-canvas block, I'm using the JET navigation list component:


Sample app code which can be imported into your VBCS instance is available on GitHub.

[FREE Masterclass] How To Become Oracle Certified Cloud Architect [1Z0-932] in 8 Weeks

Online Apps DBA - Sat, 2018-12-15 04:16

Oracle Gen2 Cloud (OCI) is a must-know for everyone [Apps DBA, DBA Architects, SOA Admins, Architects, FMW Admins & Security Professionals] because all Oracle software runs on top of Oracle Cloud Infrastructure (OCI), i.e. Oracle’s Gen2 Cloud! Learning and being certified on Cloud will take your career to the next level in 2019. Visit HERE to join […]

The post [FREE Masterclass] How To Become Oracle Certified Cloud Architect [1Z0-932] in 8 Weeks appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Extreme Nulls

Jonathan Lewis - Fri, 2018-12-14 13:01

This note is a variant of a note that I wrote a few months ago about the impact of nulls on column groups. The effect showed up recently on a client site with a little camouflage that confused the issue for a little while, so I thought it would be worth a repeat.  We’ll start with a script to generate some test data:

rem
rem     Script:         pt_hash_cbo_anomaly.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Dec 2018
rem     Purpose:        
rem
rem     Last tested 
rem             12.1.0.2
rem

create table t1 (
        hash_col,
        rare_col,
        n1,
        padding
)
nologging
partition by hash (hash_col)
partitions 32
as
with generator as (
        select 
                rownum id
        from dual 
        connect by 
                level <= 1e4 -- > comment to avoid WordPress format issue
)
select
        mod(rownum,128),
        case when mod(rownum,1021) = 0 
                then rownum + trunc(dbms_random.value(-256, 256))
        end case,
        rownum,
        lpad('x',100,'x')               padding
from
        generator       v1,
        generator       v2
where
        rownum <= 1048576 -- > comment to avoid WordPress format issue
;

create index t1_i1 on t1(hash_col, rare_col) nologging
local compress 1
;

begin
        dbms_stats.gather_table_stats(
                ownname     => null,
                tabname     => 'T1',
                granularity => 'ALL',
                method_opt  => 'for all columns size 1'
        );
end;
/

I’ve got a hash-partitioned table with 32 partitions; the partitioning key is called hash_col, and there is another column called rare_col that is almost always null – roughly 1 row in every 1,000 holds a value. I’ve added a local index on (hash_col, rare_col), compressing the leading column since hash_col is very repetitive, and gathered stats on the partitions and table. Here’s a view of the data for a single value of hash_col, and a summary report of the whole data set:

select  
        hash_col, rare_col, count(*)
from
        t1
where
        hash_col = 63
group by
        hash_col, rare_col
order by
        hash_col, rare_col
;

  HASH_COL   RARE_COL   COUNT(*)
---------- ---------- ----------
        63     109217          1
        63     240051          1
        63     370542          1
        63     501488          1
        63     631861          1
        63     762876          1
        63     893249          1
        63    1023869          1
        63                  8184

9 rows selected.

select
        count(*), ct
from    (
        select
                hash_col, rare_col, count(*) ct
        from
                t1
        group by
                hash_col, rare_col
        order by
                hash_col, rare_col
        )
group by ct
order by count(*)
;

  COUNT(*)         CT
---------- ----------
         3       8183
       125       8184
      1027          1

Given the way I’ve generated the data, any one value for hash_col will have 8,184 (or 8,183) rows where rare_col is null; but across the whole table there are 1,027 rows which have a value for both hash_col and rare_col, with just one row for each combination.

Now we get to the problem. Whenever rare_col is non-null the combination of hash_col and rare_col is unique (though this wasn’t quite the case at the client site), so when we query for a given hash_col and rare_col we would hope that the optimizer would be able to estimate a cardinality of one row; but this is what we see:


variable n1 number
variable n2 number

explain plan for
select /*+ index(t1) */
        n1
from
        t1
where
        hash_col = :n1
and     rare_col = :n2
;

select * from table(dbms_xplan.display);

========================================

--------------------------------------------------------------------------------------------------------------------
| Id  | Operation                                  | Name  | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
--------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                           |       |   908 | 10896 |    76   (0)| 00:00:01 |       |       |
|   1 |  PARTITION HASH SINGLE                     |       |   908 | 10896 |    76   (0)| 00:00:01 |   KEY |   KEY |
|   2 |   TABLE ACCESS BY LOCAL INDEX ROWID BATCHED| T1    |   908 | 10896 |    76   (0)| 00:00:01 |   KEY |   KEY |
|*  3 |    INDEX RANGE SCAN                        | T1_I1 |   908 |       |     2   (0)| 00:00:01 |   KEY |   KEY |
--------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - access("HASH_COL"=TO_NUMBER(:N1) AND "RARE_COL"=TO_NUMBER(:N2))

The optimizer has predicted a massive 908 rows. A quick check of the object stats shows us that this is “number of rows in table” / “number of distinct keys in index” (1,048,576 / 1,155, rounded up).
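
A quick sanity check of that arithmetic against the dictionary (standard views, matching the stats gathered above):

select  t.num_rows, i.distinct_keys,
        ceil(t.num_rows / i.distinct_keys)      est_rows
from    user_tables t, user_indexes i
where   t.table_name = 'T1'
and     i.index_name = 'T1_I1'
;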

Any row with rare_col set to null cannot match the predicate “rare_col = :n2”, but because the optimizer is looking at the statistics of complete index entries (and there are 1,048,576 of them, with 1,155 distinct combinations, and none that are completely null) it has lost sight of the frequency of nulls for rare_col on its own. (The same problem appears with column groups – which is what I commented on in my previous post on this topic).

I’ve often said in the past that you shouldn’t create histograms on data unless your code is going to use them. In this case I need to stop the optimizer from looking at the index.distinct_keys and one way to do that is to create a histogram on one of the columns that defines the index; and I’ve chosen to do this with a fairly arbitrary size of 10 buckets:


execute dbms_stats.gather_table_stats(user,'t1',method_opt=>'for columns rare_col size 10')

explain plan for
select /*+ index(t1) */
        n1
from
        t1
where
        hash_col = :n1
and     rare_col = :n2
;

select * from table(dbms_xplan.display);

========================================

--------------------------------------------------------------------------------------------------------------------
| Id  | Operation                                  | Name  | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
--------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                           |       |     1 |    12 |     2   (0)| 00:00:01 |       |       |
|   1 |  PARTITION HASH SINGLE                     |       |     1 |    12 |     2   (0)| 00:00:01 |   KEY |   KEY |
|   2 |   TABLE ACCESS BY LOCAL INDEX ROWID BATCHED| T1    |     1 |    12 |     2   (0)| 00:00:01 |   KEY |   KEY |
|*  3 |    INDEX RANGE SCAN                        | T1_I1 |     1 |       |     1   (0)| 00:00:01 |   KEY |   KEY |
--------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - access("HASH_COL"=TO_NUMBER(:N1) AND "RARE_COL"=TO_NUMBER(:N2))

Bonus observation

This problem came to my attention (and I’ve used a partitioned table in my demonstration) because I had noticed an obvious optimizer error in the client’s execution plan for exactly this simple a query. I can demonstrate the effect the client saw by running the test again without creating the histogram but declaring hash_col to be not null. Immediately after creating the index I’m going to add the line:


alter table t1 modify hash_col not null;

(The client’s system didn’t declare the column not null, but their equivalent of hash_col was part of the primary key of the table which meant it was implicitly declared not null). Here’s what my execution plan looked like with this constraint in place:


--------------------------------------------------------------------------------------------------------------------
| Id  | Operation                                  | Name  | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
--------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                           |       |   908 | 10896 |    76   (0)| 00:00:01 |       |       |
|   1 |  PARTITION HASH SINGLE                     |       |   908 | 10896 |    76   (0)| 00:00:01 |   KEY |   KEY |
|   2 |   TABLE ACCESS BY LOCAL INDEX ROWID BATCHED| T1    |   908 | 10896 |    76   (0)| 00:00:01 |   KEY |   KEY |
|*  3 |    INDEX RANGE SCAN                        | T1_I1 |    28 |       |     2   (0)| 00:00:01 |   KEY |   KEY |
--------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - access("HASH_COL"=TO_NUMBER(:N1) AND "RARE_COL"=TO_NUMBER(:N2))

Spot the difference.

The estimate of index rowids is far smaller than the estimate of the rows that will be fetched using those rowids. This is clearly an error.

If you’re wondering how Oracle got this number divide 908 by 32 (the number of partitions in the table) – the answer is 28.375.

Fortunately it’s (probably) an error that doesn’t matter despite looking worryingly wrong. Critically the division hasn’t changed the estimate of the number of table rows (we’ll ignore the fact that that estimate is wrong thanks to a different error), and the cost of the index range scan and table access have not changed. The error is purely cosmetic in effect.

Interestingly if you modify the query to be index-only (i.e. you restrict the select list to columns in the index) this extra division disappears.

Summary

1) If you have a B-tree index where one (or more) of the columns is null for a large fraction of the entries then the optimizer may over-estimate the cardinality of a predicate of the form: “(list of all index columns) = (list of values)” as it will be using the index.distinct_keys in its calculations and ignore the effects of nulls in the individual columns. If you need to work around this issue then creating a histogram on one of the index columns will be sufficient to switch Oracle back to the strategy of multiplying the individual column selectivities.

2) There are cases of plans for accessing partitioned tables where Oracle starts by using table-level statistics to get a suitable set of estimates but then displays a plan with the estimate of rows for an index range scan scaled down by the number of partitions in the table. This results in a visible inconsistency between the index estimate and the table estimate, but it doesn’t affect the cardinality estimate for the table access or either of the associated costs – so it probably doesn’t have a destabilising effect on the plan.

Lightning Web Components - the dawn of (another) new era

Rob Baillie - Fri, 2018-12-14 08:04

Salesforce have a new technology. Lightning Components look like they’re on the way out, and are being replaced with a new technology ‘Lightning Web Components’.

The reasons behind that, and the main principles behind its design are covered very nicely in this article on developer.salesforce.com.

From that we can then get to a series of examples here.

(Note: some of the code snippets used below, to illustrate points, are taken from the recipes linked above)

Now I’m a big supporter of evolution, and I love to see new tools being given to developers on the Salesforce platform, so, with a couple of hours to play with it - what’s the immediate impression?

This is an article on early impressions, based on reviewing and playing with the examples - I fully expect there to be misunderstandings, bad terminology, and mistakes in here - If you're OK with that, I'm OK with that. I admit, I got excited and wanted to post something as quickly as possible before my cynical side took over. So here it is - mistakes and all.

WOW. Salesforce UI development has grown up.

Salesforce aren’t lying when they’ve said that they’re trying to bring the development toolset up to the modern standards.

We get imports, what look like annotations and decorators, and there’s even mention of Promises. Maybe there’s some legs in this…

It’s easy to dismiss this as ‘Oh no, yet another change’, but the thing is - the rest of industry develops and improves its toolset - why shouldn’t Salesforce?

The only way to keep the product on point IS to develop the frameworks, replace the technology, upgrade, move on. If you don’t do that then the whole Salesforce Ecosystem starts to stagnate.

Or to put it another way - in every other part of the developer community, learning from what was built yesterday and evolving is seen as a necessity. It’s good to see Salesforce trying to keep up.

So what are the big things that I’ve spotted immediately?

import is supported, and that makes things clearer

Import is a massive addition to Javascript that natively allows us to define the relationships between javascript files within javascript, rather than at the HTML level.

Essentially, this replaces the use of most ‘script’ tags in traditional Javascript development.

For Lightning Web Components, we use this to bring in capabilities from the framework, as well as static resources.

E.g. Importing modules from the Lightning Web Components framework:


import { LightningElement, track } from 'lwc';

Importing from Static Resources:


import { loadScript } from 'lightning/platformResourceLoader';
import chartjs from '@salesforce/resourceUrl/chart';

What this has allowed Salesforce to do is to split up the framework into smaller components. If you don’t need to access Apex from your web component, then you don’t need to import the part of the framework that enables that capability.

This *should* make individual components much more lightweight and targeted - only including the capabilities that are required, when they are required.

Getting data on screen is simpler

Any javascript property is visible to the HTML template.

E.g.


export default class WebAppComponentByMe extends LightningElement {
    contacts;

We can then render this property in the HTML with {contacts} (none of those attributes to define and none of those pesky v dot things to forget).

Much neater, much more concise.
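
For illustration, a hedged template sketch (markup assumed, in the style of the published recipes) that iterates the property directly:

<template for:each={contacts} for:item="contact">
    <p key={contact.Id}>{contact.Name}</p>
</template>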

We track properties

Looking at the examples, my assumption was that if we want to perform actions when a property is changed, we mark the property trackable using the @track decorator.

For example:


export default class WebAppComponentByMe extends LightningElement {
    @track contacts;

I was thinking that, at this point, anything that references this property (on page, or in Javascript) will be notified whenever that property changes.

However, at this point I can't really tell what the difference is between tracked and non-tracked properties - a mystery for another day

Wiring up to Apex is much simpler

One of the big criticisms of Lightning Components that I always had was the amount of code you need to write in order to call an Apex method. OK, so you have force:recordData for a lot of situations, but there are many times when only an Apex method will do.

In Web Components, this is much simpler.

In order to connect to Apex, we import the ‘wire’ module, and then import functions into our javascript


import { LightningElement, wire } from 'lwc';
import getContactList from '@salesforce/apex/ContactController.getContactList';

The first line imports the wire capabilities from the framework, the second then imports the Apex method as a javascript method, therefore making it available to the component.

We can then connect a javascript property up to the method using the wire decorator:


@wire(getContactList) contacts;

Or wire up a javascript method:


@wire(getContactList)
wiredContacts({ error, data }) {
    if (data) {
        this.contacts = data;
    } else if (error) {
        this.error = error;
    }
}

When the component is initialised, the getContactList method will be executed.

If the method has parameters, that’s also very simple (E.g. wiring to a property):


@wire(getContactList, { searchKey: '$searchKey' })
contacts;

Changing the value of a property causes Apex to re-execute

Having wired up a property as a parameter to an Apex bound Javascript function, any changes to that property will cause the function to be re-executed

For example, if we:


searchKey = '';

@wire(findContacts, { searchKey: '$searchKey' })
contacts;

Whenever the searchKey property changes, the Apex method imported as ‘findContacts’ will be executed and the contacts property is updated.

Thankfully, we can control when that property changes, as it looks like changing the value in the UI does not automatically change the property on the Javascript object. In order to do that, we need to change the property directly.

E.g. Let’s say we extend the previous example and there’s an input that is bound to the property, and there’s an onchange event defined:
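
A plausible sketch of that input (the original markup did not survive; the element and attributes are assumptions):

<lightning-input type="search" label="Search" onchange={handleKeyChange}></lightning-input>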



And the handler does the following:


handleKeyChange(event) {
    this.searchKey = event.target.value;
}

This will cause the findContacts method to fire whenever the value in the input is changed.

Note that it is the assignment to this.searchKey that causes the event to fire - it looks like the binding from the HTML is 1-way. I admit that I need to investigate this further.

Events do not require configuration to be implemented

Events work in a completely different way - but then that’s not a problem - Application and Component events were different enough to cause headaches previously. The model is actually much simpler.

The example in the above referenced repository to look at is ‘PubSub’.

It’s much too involved to go into detail here, but the net result (sketched after this list) is that you need to:

  • Implement a Component that acts as the messenger (implementing registerListener, unregisterListener and fireEvent)
  • Any component that wants to fire an event, or listen for an event will import that component to do so, firing events or registering listeners.
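
A hedged sketch of the pattern (the module path and pageRef wiring are assumptions, following the recipe's registerListener/unregisterListener/fireEvent):

import { registerListener, unregisterListener, fireEvent } from 'c/pubsub';

// Publisher: fire a named event with a payload
// (pageRef assumed to be wired from lightning/navigation's CurrentPageReference)
fireEvent(this.pageRef, 'contactSelected', contact.Id);

// Subscriber: listen while the component is active
connectedCallback() {
    registerListener('contactSelected', this.handleContactSelected, this);
}
disconnectedCallback() {
    unregisterListener('contactSelected', this.handleContactSelected, this);
}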

This would seem to imply that (at least a certain amount of) state within components is shared - looking like those defined with 'const'

Whatever the precise nature of the implementation, a pure Javascript solution is surely one that anyone involved in OO development will welcome.

I suspect that, in a later release, this will become a standard component.

Summary

Some people will be thinking "Man, glad I didn’t migrate from Classic / Visualforce to Lightning Experience / Components - maybe I’ll just wait a little longer for it all to settle down”.

You’re wrong - it won’t settle, it’ll continue to evolve and the technologies will be continually replaced by new ones. Eventually, the jump from what you have to where you need to get to will be so huge that you’ll find it incredibly hard. There’s a reason why Salesforce pushes out 3 releases a year, whether you want it or not, these technology jumps are just the same. The more you put it off, the more painful it’ll be.

The change from Lightning Components to Lightning Web Components is vast - a lot more than a single 3 letter word would have you suspect. The only real similarities between the two frameworks that I’ve seen up to now are:

  • Curlies are used to bind things
  • The Base Lightning Components are the same
  • You need to know Javascript

Other than that, they’re a world apart.

Also, I couldn’t find any real documentation - only examples - although those examples are a pretty comprehensive starting point.

Now, obviously it's early days - we're in pre-release right now, but what I've seen gives me great hope for the framework, it's a significant step forward and I can't wait to see what happens next. I wonder if a Unit Testing framework might follow (I can but hope)

You could wait, but hey, really, what are you waiting for? Come on, jump in. The change is exciting...

Oracle Database 18c XE available for Everyone

Oracle Database Express Edition (XE) Release 18.4.0.0.0 (18c) was released on October 19, 2018 and is now available for DBAs, developers, data scientists or anyone curious about the world's #1...

We share our skills to maximize your revenue!
Categories: DBA Blogs

[BLOG] Templates in Oracle SOA Suite 12c

Online Apps DBA - Fri, 2018-12-14 06:35

Are you learning SOA Dev and want to enhance your skills on the same? If yes, then visit: https://k21academy.com/soadev23 and learn about: ✔What are Templates in SOA 12C ✔The 3 levels of templates ✔Steps to create different levels of templates & much more… […]

The post [BLOG] Templates in Oracle SOA Suite 12c appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

[Solved] Oracle GoldenGate Extract/Replicat Abended- OGG-02091: Operation not supported

Online Apps DBA - Fri, 2018-12-14 03:02

Are you learning Oracle GoldenGate and are facing issues such as ‘Extract or Replicat processes getting abended’? If you are facing the above issue and want to know all about it, then visit https://k21academy.com/goldengate32 where we have covered: ✔’Extract or Replicat processes getting abended’ issue in detail ✔The Reason why it occurs ✔The Solution to […]

The post [Solved] Oracle GoldenGate Extract/Replicat Abended- OGG-02091: Operation not supported appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Packages are invalid -- ORA-04061

Tom Kyte - Fri, 2018-12-14 02:46
Hi, In our DB, a few packages became invalid. And when we verified them we saw that there are no errors related to them. I was expecting sessions to run this package without any error; however, when it was executed for the first time, we got ORA-04061 erro...
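
For background, ORA-04061 ("existing state of packages has been invalidated") is raised once per session after a stateful package is recompiled; the session can clear its package state and retry (package and procedure names hypothetical):

EXEC DBMS_SESSION.RESET_PACKAGE;
EXEC my_pkg.my_proc;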
Categories: DBA Blogs

Full-Text Index (Table, Index and LOB files) Size Creep

Tom Kyte - Fri, 2018-12-14 02:46
Q: How do we "manage" the size of a table's associated SYS_LOB files? Background: I have a table [SRCH_CACHE] that was set up as a lookup table, because the two base tables could not be joined/filtered with sufficient speed. The table is pretty ...
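
One common approach (a sketch with an assumed LOB column name; SHRINK SPACE applies to BasicFile LOBs, while SecureFile segments are reclaimed by moving the LOB):

ALTER TABLE srch_cache MODIFY LOB (cache_clob) (SHRINK SPACE);

-- SecureFile alternative: rebuild the LOB segment
ALTER TABLE srch_cache MOVE LOB (cache_clob) STORE AS (TABLESPACE users);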
Categories: DBA Blogs

How to forecast archive generation for a 350 GB table with no large objects.

Tom Kyte - Fri, 2018-12-14 02:46
Hi Gurus, Thanks for helping us whenever we need you. I have a task to import a 350 GB table. I will create the indexes later. It's a 12.2 CDB with one single PDB, and it has a standby DB with force_logging enabled. Is there a method so that I can f...
Categories: DBA Blogs
