Feed aggregator

Court Upholds Permanent Injunction Against Rimini Street

Chris Warticki - Fri, 2018-08-17 16:18

On August 15, a Federal Court in Nevada once again granted Oracle's motion for a permanent injunction against Rimini Street and required payment to Oracle of $28 million in attorney’s fees due to Rimini’s egregious litigation misconduct. 

In upholding the injunction, the Court made clear that Rimini’s business “was built entirely on its infringement of Oracle’s copyrighted software.” In addition, the court noted that Rimini’s improper conduct not only enabled the company to grow quickly, but also unfairly “eroded the bonds and trust that Oracle has with its customers.”

In a statement, Dorian Daley, Oracle's Executive Vice President and General Counsel, reinforced the Court’s findings. “As the Court's Order today makes clear, Rimini Street's business has been built entirely on unlawful conduct, and Rimini's executives have repeatedly lied to cover up their company's illegal acts.”

To learn more about the Federal Court ruling:

1. Read full press release

2. "Oracle Punches Rimini Street Hard in Court" - Enterprise Times

The size of Oracle Home: from 9GB to 600MB – What about PostgreSQL?

Yann Neuhaus - Fri, 2018-08-17 13:20

A recent blog post from Franck and a tweet around that topic are the inspiration for this blog post, thanks Jan for requesting :). In short, it is about how small you can get the binaries. Is that important? At least when it comes to Docker images it might be, as you usually try to make the image as small as possible. Of course, comparing PostgreSQL and Oracle in that field is unfair, as Oracle ships a lot by default that PostgreSQL simply does not (e.g. APEX, SQL Developer, …), so please treat this more as a fun post.

The way we usually compile PostgreSQL is this (not in /var/tmp in real life):

postgres@pgbox:/home/postgres/ [pg103] cd /var/tmp/
postgres@pgbox:/var/tmp/ [pg103] wget https://ftp.postgresql.org/pub/source/v10.5/postgresql-10.5.tar.bz2
postgres@pgbox:/var/tmp/ [pg103] tar -axf postgresql-10.5.tar.bz2
postgres@pgbox:/var/tmp/ [pg103] cd postgresql-10.5/
postgres@pgbox:/var/tmp/postgresql-10.5/ [pg103] PGHOME=/var/tmp/pg105
postgres@pgbox:/var/tmp/postgresql-10.5/ [pg103] SEGSIZE=2
postgres@pgbox:/var/tmp/postgresql-10.5/ [pg103] BLOCKSIZE=8
postgres@pgbox:/var/tmp/postgresql-10.5/ [pg103] WALSEGSIZE=16
postgres@pgbox:/var/tmp/postgresql-10.5/ [pg103] ./configure --prefix=${PGHOME} \
                                                             --exec-prefix=${PGHOME} \
                                                             --bindir=${PGHOME}/bin \
                                                             --libdir=${PGHOME}/lib \
                                                             --sysconfdir=${PGHOME}/etc \
                                                             --includedir=${PGHOME}/include \
                                                             --datarootdir=${PGHOME}/share \
                                                             --datadir=${PGHOME}/share \
                                                             --with-pgport=5432 \
                                                             --with-perl \
                                                             --with-python \
                                                             --with-openssl \
                                                             --with-pam \
                                                             --with-ldap \
                                                             --with-libxml \
                                                             --with-libxslt \
                                                             --with-segsize=${SEGSIZE} \
                                                             --with-blocksize=${BLOCKSIZE} \
                                                             --with-wal-segsize=${WALSEGSIZE}  \
                                                             --with-systemd
postgres@pgbox:/var/tmp/postgresql-10.5/ [pg103] make -j 4 all
postgres@pgbox:/var/tmp/postgresql-10.5/ [pg103] make install
postgres@pgbox:/var/tmp/postgresql-10.5/ [pg103] cd contrib
postgres@pgbox:/var/tmp/postgresql-10.5/ [pg103] make -j 4 install

When we do this against the PostgreSQL 10.5 source code the result is this (without the documentation, of course, but containing all the extensions):

postgres@pgbox:/var/tmp/postgresql-10.5/contrib/ [pg103] du -sh /var/tmp/pg105/
28M	/var/tmp/pg105/

Can we get that even smaller? Let’s try to skip the extensions:

postgres@pgbox:/var/tmp/postgresql-10.5/contrib/ [pg103] cd ..
postgres@pgbox:/var/tmp/postgresql-10.5/ [pg103] make clean
postgres@pgbox:/var/tmp/postgresql-10.5/ [pg103] ./configure --prefix=${PGHOME} \
                                                             --exec-prefix=${PGHOME} \
                                                             --bindir=${PGHOME}/bin \
                                                             --libdir=${PGHOME}/lib \
                                                             --sysconfdir=${PGHOME}/etc \
                                                             --includedir=${PGHOME}/include \
                                                             --datarootdir=${PGHOME}/share \
                                                             --datadir=${PGHOME}/share \
                                                             --with-pgport=5432 \
                                                             --with-perl \
                                                             --with-python \
                                                             --with-openssl \
                                                             --with-pam \
                                                             --with-ldap \
                                                             --with-libxml \
                                                             --with-libxslt \
                                                             --with-segsize=${SEGSIZE} \
                                                             --with-blocksize=${BLOCKSIZE} \
                                                             --with-wal-segsize=${WALSEGSIZE}  \
                                                             --with-systemd
postgres@pgbox:/var/tmp/postgresql-10.5/ [pg103] make -j 4 all
postgres@pgbox:/var/tmp/postgresql-10.5/ [pg103] rm -rf /var/tmp/pg105/
postgres@pgbox:/var/tmp/postgresql-10.5/ [pg103] make install

What do we have now?

postgres@pgbox:/var/tmp/postgresql-10.5/ [pg103] du -sh /var/tmp/pg105/
25M	/var/tmp/pg105/

We saved 3MB. Can we do more? Let’s try to skip all the “--with” flags that enable Perl and so on for the configure command:

postgres@pgbox:/var/tmp/postgresql-10.5/ [pg103] make clean
postgres@pgbox:/var/tmp/postgresql-10.5/ [pg103] ./configure --prefix=${PGHOME} \
                                                             --exec-prefix=${PGHOME} \
                                                             --bindir=${PGHOME}/bin \
                                                             --libdir=${PGHOME}/lib \
                                                             --sysconfdir=${PGHOME}/etc \
                                                             --includedir=${PGHOME}/include \
                                                             --datarootdir=${PGHOME}/share \
                                                             --datadir=${PGHOME}/share \
                                                             --with-pgport=5432 \
                                                             --with-segsize=${SEGSIZE} \
                                                             --with-blocksize=${BLOCKSIZE} \
                                                             --with-wal-segsize=${WALSEGSIZE}  \
                                                             --with-systemd
postgres@pgbox:/var/tmp/postgresql-10.5/ [pg103] make -j 4 all
postgres@pgbox:/var/tmp/postgresql-10.5/ [pg103] rm -rf /var/tmp/pg105/
postgres@pgbox:/var/tmp/postgresql-10.5/ [pg103] make install

Do we see a change?

postgres@pgbox:/var/tmp/postgresql-10.5/ [pg103] du -sh /var/tmp/pg105/
25M	/var/tmp/pg105/

No, that does not change anything. Franck stripped the Oracle binaries and libraries, so let’s try to do the same (although I am not sure right now if that is supported):

postgres@pgbox:/var/tmp/postgresql-10.5/ [pg103] du -sh /var/tmp/pg105/
25M	/var/tmp/pg105/
postgres@pgbox:/var/tmp/postgresql-10.5/ [pg103] strip /var/tmp/pg105/bin/*
postgres@pgbox:/var/tmp/postgresql-10.5/ [pg103] strip /var/tmp/pg105/lib/*
strip: Warning: '/var/tmp/pg105/lib/pkgconfig' is not an ordinary file
strip: Warning: '/var/tmp/pg105/lib/postgresql' is not an ordinary file
postgres@pgbox:/var/tmp/postgresql-10.5/ [pg103] du -sh /var/tmp/pg105/
24M	/var/tmp/pg105/

So, another 1MB less. Can we still initialize and start PostgreSQL?

postgres@pgbox:/var/tmp/postgresql-10.5/ [pg103] /var/tmp/pg105/bin/initdb -D /var/tmp/testpg
The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.

The database cluster will be initialized with locales
  COLLATE:  en_US.utf8
  CTYPE:    en_US.utf8
  MESSAGES: en_US.utf8
  MONETARY: de_CH.UTF-8
  NUMERIC:  de_CH.UTF-8
  TIME:     en_US.UTF-8
The default database encoding has accordingly been set to "UTF8".
The default text search configuration will be set to "english".

Data page checksums are disabled.

creating directory /var/tmp/testpg ... ok
creating subdirectories ... ok
selecting default max_connections ... 100
selecting default shared_buffers ... 128MB
selecting dynamic shared memory implementation ... posix
creating configuration files ... ok
running bootstrap script ... ok
performing post-bootstrap initialization ... ok
syncing data to disk ... ok

WARNING: enabling "trust" authentication for local connections
You can change this by editing pg_hba.conf or using the option -A, or
--auth-local and --auth-host, the next time you run initdb.

Success. You can now start the database server using:

    /var/tmp/pg105/bin/pg_ctl -D /var/tmp/testpg -l logfile start

postgres@pgbox:/var/tmp/postgresql-10.5/ [pg103] /var/tmp/pg105/bin/pg_ctl -D /var/tmp/testpg/ start
waiting for server to start....2018-08-17 18:57:50.329 CEST [8528] LOG:  listening on IPv6 address "::1", port 5432
2018-08-17 18:57:50.329 CEST [8528] LOG:  listening on IPv4 address "127.0.0.1", port 5432
2018-08-17 18:57:50.334 CEST [8528] LOG:  listening on Unix socket "/tmp/.s.PGSQL.5432"
2018-08-17 18:57:50.354 CEST [8529] LOG:  database system was shut down at 2018-08-17 18:57:31 CEST
2018-08-17 18:57:50.358 CEST [8528] LOG:  database system is ready to accept connections
 done
server started

Looks good and we are able to connect:

postgres@pgbox:/var/tmp/postgresql-10.5/ [pg103] /var/tmp/pg105/bin/psql -c "select version()" postgres
                                                 version                                                 
---------------------------------------------------------------------------------------------------------
 PostgreSQL 10.5 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-28), 64-bit
(1 row)

Time: 1.428 ms

What else can we do? When you do not need the utilities on the server you could just remove them (as said, this is a fun post, don’t do this):

postgres@pgbox:/var/tmp/postgresql-10.5/ [pg103] cd /var/tmp/pg105/bin
postgres@pgbox:/var/tmp/pg105/bin/ [pg103] rm clusterdb createdb createuser dropdb dropuser pg_archivecleanup pg_basebackup pg_dump pg_dumpall pg_isready pg_receivewal pg_recvlogical pg_resetwal pg_restore pg_rewind pg_test_fsync pg_test_timing pg_upgrade pg_waldump reindexdb vacuumdb

We could probably even remove pgbench and psql, but I will need those two to show that the server is still working. What do we have now?

postgres@pgbox:/var/tmp/pg105/bin/ [pg103] du -sh /var/tmp/pg105/
21M	/var/tmp/pg105/

Another 3MB less. Can we still restart and connect?

postgres@pgbox:/var/tmp/pg105/bin/ [pg103] /var/tmp/pg105/bin/pg_ctl -D /var/tmp/testpg/ stop
waiting for server to shut down....2018-08-17 19:08:49.588 CEST [9144] LOG:  received fast shutdown request
2018-08-17 19:08:49.593 CEST [9144] LOG:  aborting any active transactions
2018-08-17 19:08:49.597 CEST [9144] LOG:  worker process: logical replication launcher (PID 9151) exited with exit code 1
2018-08-17 19:08:49.598 CEST [9146] LOG:  shutting down
2018-08-17 19:08:49.625 CEST [9144] LOG:  database system is shut down
 done
server stopped
postgres@pgbox:/var/tmp/pg105/bin/ [pg103] /var/tmp/pg105/bin/pg_ctl -D /var/tmp/testpg/ start
waiting for server to start....2018-08-17 19:08:51.949 CEST [9368] LOG:  listening on IPv6 address "::1", port 9999
2018-08-17 19:08:51.949 CEST [9368] LOG:  listening on IPv4 address "127.0.0.1", port 9999
2018-08-17 19:08:51.953 CEST [9368] LOG:  listening on Unix socket "/tmp/.s.PGSQL.9999"
2018-08-17 19:08:51.966 CEST [9369] LOG:  database system was shut down at 2018-08-17 19:08:49 CEST
2018-08-17 19:08:51.969 CEST [9368] LOG:  database system is ready to accept connections
 done
server started
postgres@pgbox:/var/tmp/pg105/bin/ [pg103] /var/tmp/pg105/bin/psql -c "select version()" postgres
                                                 version                                                 
---------------------------------------------------------------------------------------------------------
 PostgreSQL 10.5 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-28), 64-bit
(1 row)

Time: 2.043 ms

Looks good. Now let’s do the final step and remove the rest of what is not required for the server. But before that we do an initdb, as we cannot do that afterwards:

postgres@pgbox:/var/tmp/pg105/bin/ [pg103] /var/tmp/pg105/bin/pg_ctl -D /var/tmp/testpg/ stop
waiting for server to shut down....2018-08-17 19:10:31.693 CEST [9368] LOG:  received fast shutdown request
2018-08-17 19:10:31.696 CEST [9368] LOG:  aborting any active transactions
2018-08-17 19:10:31.696 CEST [9368] LOG:  worker process: logical replication launcher (PID 9375) exited with exit code 1
2018-08-17 19:10:31.697 CEST [9370] LOG:  shutting down
2018-08-17 19:10:31.712 CEST [9368] LOG:  database system is shut down
 done
server stopped
postgres@pgbox:/var/tmp/pg105/bin/ [pg103] rm -rf /var/tmp/testpg/
postgres@pgbox:/var/tmp/pg105/bin/ [pg103] /var/tmp/pg105/bin/initdb -D /var/tmp/testpg
The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.

The database cluster will be initialized with locales
  COLLATE:  en_US.utf8
  CTYPE:    en_US.utf8
  MESSAGES: en_US.utf8
  MONETARY: de_CH.UTF-8
  NUMERIC:  de_CH.UTF-8
  TIME:     en_US.UTF-8
The default database encoding has accordingly been set to "UTF8".
The default text search configuration will be set to "english".

Data page checksums are disabled.

creating directory /var/tmp/testpg ... ok
creating subdirectories ... ok
selecting default max_connections ... 100
selecting default shared_buffers ... 128MB
selecting dynamic shared memory implementation ... posix
creating configuration files ... ok
running bootstrap script ... ok
performing post-bootstrap initialization ... ok
syncing data to disk ... ok

WARNING: enabling "trust" authentication for local connections
You can change this by editing pg_hba.conf or using the option -A, or
--auth-local and --auth-host, the next time you run initdb.

Success. You can now start the database server using:

    /var/tmp/pg105/bin/pg_ctl -D /var/tmp/testpg -l logfile start

So, remove the rest:

postgres@pgbox:/var/tmp/pg105/bin/ [pg103] rm pg_config pg_controldata psql pgbench initdb ecpg pgbench pg_ctl
postgres@pgbox:/var/tmp/pg105/bin/ [pg103] du -sh /var/tmp/pg105/
20M	/var/tmp/pg105/

We are down to 20MB but we can still start the instance:

postgres@pgbox:/var/tmp/pg105/bin/ [pg103] /var/tmp/pg105/bin/postgres -D /var/tmp/testpg/ &
[1] 9486
postgres@pgbox:/var/tmp/pg105/bin/ [pg103] 2018-08-17 19:13:54.917 CEST [9486] LOG:  listening on IPv6 address "::1", port 9999
2018-08-17 19:13:54.917 CEST [9486] LOG:  listening on IPv4 address "127.0.0.1", port 9999
2018-08-17 19:13:54.924 CEST [9486] LOG:  listening on Unix socket "/tmp/.s.PGSQL.9999"
2018-08-17 19:13:54.955 CEST [9487] LOG:  database system was shut down at 2018-08-17 19:10:56 CEST
2018-08-17 19:13:54.960 CEST [9486] LOG:  database system is ready to accept connections

postgres@pgbox:/var/tmp/pg105/bin/ [pg103] ps -ef | grep postgres
root      1061   941  0 18:26 ?        00:00:00 sshd: postgres [priv]
postgres  1064  1061  0 18:26 ?        00:00:02 sshd: postgres@pts/0
postgres  1065  1064  0 18:26 pts/0    00:00:01 -bash
postgres  9486  1065  0 19:13 pts/0    00:00:00 /var/tmp/pg105/bin/postgres -D /var/tmp/testpg/
postgres  9488  9486  0 19:13 ?        00:00:00 postgres: checkpointer process   
postgres  9489  9486  0 19:13 ?        00:00:00 postgres: writer process   
postgres  9490  9486  0 19:13 ?        00:00:00 postgres: wal writer process   
postgres  9491  9486  0 19:13 ?        00:00:00 postgres: autovacuum launcher process   
postgres  9492  9486  0 19:13 ?        00:00:00 postgres: stats collector process   
postgres  9493  9486  0 19:13 ?        00:00:00 postgres: bgworker: logical replication launcher  
postgres  9496  1065  0 19:14 pts/0    00:00:00 ps -ef
postgres  9497  1065  0 19:14 pts/0    00:00:00 grep --color=auto postgres

Using another psql on that box we can confirm that we can connect:

postgres@pgbox:/var/tmp/pg105/bin/ [pg103] /u01/app/postgres/product/10/db_4/bin/psql -c "select version()" postgres
                                                 version                                                 
---------------------------------------------------------------------------------------------------------
 PostgreSQL 10.5 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-28), 64-bit
(1 row)

Still too much? What else can we do? What is consuming the space?

postgres@pgbox:/var/tmp/pg105/bin/ [pg103] du -sh /var/tmp/pg105/*
6.6M	/var/tmp/pg105/bin
5.9M	/var/tmp/pg105/include
4.1M	/var/tmp/pg105/lib
2.9M	/var/tmp/pg105/share

We cannot do more in the “bin” directory, there is nothing left to delete:

postgres@pgbox:/var/tmp/pg105/bin/ [pg103] ls -l /var/tmp/pg105/bin
total 6660
-rwxr-xr-x. 1 postgres postgres 6817480 Aug 17 18:56 postgres
lrwxrwxrwx. 1 postgres postgres       8 Aug 17 18:54 postmaster -> postgres

Everything else will probably only save us a few bytes, such as the sample files:

postgres@pgbox:/var/tmp/pg105/ [pg103] find . -name *sample*
./share/postgresql/tsearch_data/synonym_sample.syn
./share/postgresql/tsearch_data/thesaurus_sample.ths
./share/postgresql/tsearch_data/hunspell_sample.affix
./share/postgresql/tsearch_data/ispell_sample.affix
./share/postgresql/tsearch_data/ispell_sample.dict
./share/postgresql/tsearch_data/hunspell_sample_long.affix
./share/postgresql/tsearch_data/hunspell_sample_long.dict
./share/postgresql/tsearch_data/hunspell_sample_num.affix
./share/postgresql/tsearch_data/hunspell_sample_num.dict
./share/postgresql/pg_hba.conf.sample
./share/postgresql/pg_ident.conf.sample
./share/postgresql/postgresql.conf.sample
./share/postgresql/recovery.conf.sample
./share/postgresql/pg_service.conf.sample
./share/postgresql/psqlrc.sample

So how much space do we consume for the PostgreSQL installation and the files which make up the instance?

postgres@pgbox:/var/tmp/pg105/bin/ [pg103] du -sh /var/tmp/pg105/
20M	/var/tmp/pg105/
postgres@pgbox:/var/tmp/pg105/bin/ [pg103] du -sh /var/tmp/testpg/
41M	/var/tmp/testpg/

… 61MB in total. When we add the 16MB WAL segment Jan mentioned in his tweet we come to 77MB. Not much.

The final question is whether PostgreSQL is still working. Let’s use pgbench from another installation on the same server against it:

postgres@pgbox:/var/tmp/pg105/ [pg103] /u01/app/postgres/product/10/db_3/bin/pgbench -i -s 10 postgres
NOTICE:  table "pgbench_history" does not exist, skipping
NOTICE:  table "pgbench_tellers" does not exist, skipping
NOTICE:  table "pgbench_accounts" does not exist, skipping
NOTICE:  table "pgbench_branches" does not exist, skipping
creating tables...
100000 of 1000000 tuples (10%) done (elapsed 0.08 s, remaining 0.75 s)
200000 of 1000000 tuples (20%) done (elapsed 0.24 s, remaining 0.95 s)
300000 of 1000000 tuples (30%) done (elapsed 0.42 s, remaining 0.98 s)
400000 of 1000000 tuples (40%) done (elapsed 0.49 s, remaining 0.74 s)
500000 of 1000000 tuples (50%) done (elapsed 0.70 s, remaining 0.70 s)
600000 of 1000000 tuples (60%) done (elapsed 0.88 s, remaining 0.58 s)
700000 of 1000000 tuples (70%) done (elapsed 0.95 s, remaining 0.41 s)
800000 of 1000000 tuples (80%) done (elapsed 1.14 s, remaining 0.29 s)
900000 of 1000000 tuples (90%) done (elapsed 1.32 s, remaining 0.15 s)
1000000 of 1000000 tuples (100%) done (elapsed 1.41 s, remaining 0.00 s)
vacuum...
set primary keys...
done.
postgres@pgbox:/var/tmp/pg105/ [pg103] /u01/app/postgres/product/10/db_3/bin/pgbench -s 10 postgres
scale option ignored, using count from pgbench_branches table (10)
starting vacuum...end.
transaction type: 
scaling factor: 10
query mode: simple
number of clients: 1
number of threads: 1
number of transactions per client: 10
number of transactions actually processed: 10/10
latency average = 4.436 ms
tps = 225.435296 (including connections establishing)
tps = 285.860401 (excluding connections establishing)

Looks good. So you can get down to 20MB for the PostgreSQL installation and another 41MB for the files you need to start the instance. You could even drop the postgres database to save another 7MB. But remember: please don’t do any of this, you are already fine with a complete installation of around 30MB :)

 

The article The size of Oracle Home: from 9GB to 600MB – What about PostgreSQL? appeared first on Blog dbi services.

Oracle Database Configurations using Docker and Vagrant

Hemant K Chitale - Fri, 2018-08-17 02:16
Oracle now makes available configurations for the Database (and other products) on both Docker and Vagrant via GitHub.

Good time to familiarize oneself with GitHub, Docker and/or Vagrant.


For the Docker configuration see : https://github.com/oracle/docker-images/tree/master/OracleDatabase


For the Vagrant configuration see : https://github.com/oracle/vagrant-boxes/tree/master/OracleDatabase


(Note : The Examples have been available on GitHub for quite some time at https://github.com/oracle/oracle-db-examples )



Categories: DBA Blogs

Subtract hours and show milliseconds for a TIMESTAMP(6) column

Tom Kyte - Fri, 2018-08-17 02:06
I want to SELECT a TIMESTAMP(6) with milliseconds, but at the same time I need to subtract 3 hours (0.125 of a day) from that TIMESTAMP to convert it to my timezone. So I tried: SELECT To_Char(UTCSCANTIME-0.125,'YYYY-MM-DD HH24:MI:SS') AS LO...
Categories: DBA Blogs

RMAN Commands

Michael Dinh - Fri, 2018-08-17 00:21

Note to self to configure and clear settings, and all things RMAN.

Will add more as time goes by.

catalog backuppiece '/u01/app/oracle/backup/HAWK_3241949199_20180816_bctand12_1_1.bkp';
++++ Remove summary to get full details.
list backup of archivelog all summary completed after 'sysdate-1';
list backup of archivelog from sequence 243 summary;
list backup summary completed after 'sysdate-1';

list backup of controlfile summary tag=STBY;
list backup of controlfile summary;
list backup of spfile summary;

list backup by file;

backup incremental level 0 check logical database filesperset 1 tag=LEVEL0 plus archivelog filesperset 8 tag=LEVEL0;
backup current controlfile for standby tag=STBY;

+++ Remove preview/validate/summary for actual restore.
restore controlfile validate from tag=STBY;
restore controlfile validate preview summary from tag=STBY; (Error if insufficient backup)
restore spfile validate preview summary;

restore database validate;
restore database validate preview summary;
restore database until time "TRUNC(sysdate)+17/24" validate preview summary;

configure controlfile autobackup on;
configure controlfile autobackup clear;

configure controlfile autobackup format for device type disk to '/media/swrepo/backup/%d_%i_%f.ctl';
configure controlfile autobackup format for device type disk clear;

configure device type disk backup type to compressed backupset parallelism 2;
configure device type disk clear;

configure channel device type disk format '/media/swrepo/backup/%d_%i_%t_%u.bkp' maxpiecesize 1 g maxopenfiles 1;
configure channel device type disk clear;

configure archivelog deletion policy to backed up 1 times to disk applied on all standby;
configure archivelog deletion policy clear;

configure db_unique_name 'hawka' connect identifier 'hawka';
configure db_unique_name 'hawka' clear;

“Let’s Talk Database” coming to Wellington, Auckland, Brisbane and Perth in September.

Richard Foote - Thu, 2018-08-16 19:08
Due to popular demand, I’ve been asked by Oracle to again run some more “Let’s Talk Database” events in September. Dates and venues are as follows: Monday, 2 September – Wellington (Wellington Oracle Office): Registration Link. Tuesday, 3 September – Auckland (Auckland Oracle Office): Registration Link. Wednesday, 12 September – Brisbane (Brisbane Oracle Office): Registration Link. Thursday, 13 September – Perth (Perth Oracle Office): Registration Link. […]
Categories: DBA Blogs

ADF BC REST Query and SQL Nesting Control Solution

Andrejus Baranovski - Thu, 2018-08-16 15:04
I will talk about an expert mode View Object (one with hand-written SQL); this View Object is created based on a SQL join. So, that’s my use case for today’s example. I will describe an issue related to the generated SQL statement and give a hint on how to solve it. This is particularly useful if you want to expose a complex VO (SQL with joins and calculated totals) over an ADF BC REST service and then run queries against this REST resource.

Code is available on my GitHub repository.

Here is the SQL join and the expert mode VO (the one where you can modify the SQL by hand):


This VO is exposed through ADF BC REST; I will not go through those details, you can find more info about it online. Once the application is running, the REST resource is accessible through GET. The ADF BC REST syntax allows passing a query string along with the REST request; here I'm filtering based on StreetAddress='ABC':


On the backend this works OK by default and generates a nested query (this is the expected behaviour for expert mode VOs: all additional criteria clauses are added through SQL wrapping). While such a query executes just fine, it is not what we want in some use cases. If we calculate totals or average aggregated values in SQL, we don't want the statement to be wrapped:


To prevent SQL wrapping we can call an ADF BC API method in the VO constructor:


While this probably works with regular ADF BC, it doesn't work with criteria coming from ADF BC REST. After query nesting is disabled, the SQL query is generated with two WHERE clauses:


The possible solution I propose: override the executeQueryForCollection method, do some parsing to change the second WHERE into an AND, apply the changed query string and then call super:


This trick helps and the query is generated as we would expect; the criteria added by the ADF BC REST query call are appended at the end of the WHERE clause.

The size of Oracle Home: from 9GB to 600MB

Yann Neuhaus - Thu, 2018-08-16 14:43

This is research only and totally unsupported. When building docker images to run Oracle Database in a container, we try to get the smallest image possible. One way is to remove some subdirectories that we know will not be used. For example, the patch history is not used anymore once we have the required version. The dbca templates can be removed as soon as we have created the database… In this post I take the opposite approach: run some workload on a normal Oracle Home, and keep only the files that were used.
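
For illustration, that subtractive approach could look like the sketch below. The paths are only examples of the directories mentioned above (the patch history and the dbca templates); always verify what your image really needs before removing anything:

# remove the patch history once the required version is installed
rm -rf $ORACLE_HOME/.patch_storage
# remove the dbca templates once the database has been created
rm -rf $ORACLE_HOME/assistants/dbca/templates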

I have Oracle Database 18c installed in /u00/app/oracle/product/18EE and it takes 9GB on my host:

[oracle@vmreforatun01 ~]$ du --human-readable --max-depth=1 $ORACLE_HOME | sort -h | tail -10
 
352M /u00/app/oracle/product/18EE/jdk
383M /u00/app/oracle/product/18EE/javavm
423M /u00/app/oracle/product/18EE/inventory
437M /u00/app/oracle/product/18EE/assistants
605M /u00/app/oracle/product/18EE/md
630M /u00/app/oracle/product/18EE/bin
673M /u00/app/oracle/product/18EE/apex
1.4G /u00/app/oracle/product/18EE/.patch_storage
2.3G /u00/app/oracle/product/18EE/lib
9.4G /u00/app/oracle/product/18EE

Gigabytes of libraries (most of them used only to link the executables), hundreds of megabytes of binaries, templates for new databases, applied patches, old object files, options, tools, command line and graphical interfaces,… Do we need all that?

For a full installation in production, yes, for sure. The more we have, the better it is. When you have to connect at 2 a.m. because you are on-call and a critical alert wakes you up, you will appreciate having all the tools on the server. Especially if you connect through a few security obstacles such as remote VPN, desktop, Wallix, and tunnels, to finally get a high-latency tty with no copy-paste possibilities. With a full Oracle Home, you can face any issue. You have efficient command line interfaces (sqlplus and hopefully sqlcl) or graphical ones (SQL Developer, asmca,…). For severe problems, you can even re-link, apply or roll back patches, or quickly create a new database to import something into,…

But what if you just want to provide a small container where a database is running, with no additional administration support? Where you will never re-install the software, apply patches, re-create the database, or troubleshoot weird issues. Just have users connect through the listener port and never log in to the container. Then most of these 9.4 GB are useless.

But how to know which files are useful or not?

If you can rely on Linux ‘access time’ then you may look at the files accessed during the last days – after any installation or database creation is done:

[oracle@vmreforatun01 ~]$ find $ORACLE_HOME -atime -1 -exec stat -L -c "%x %y %z %F %n" {} \; | sort

But this is not reliable. Access time depends on the file type, filesystem, mount options,… and is usually bypassed as much as possible because writing something just to log that you read something is not a very good idea.
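
If you do want to try the access time approach, it is worth checking first whether the filesystem maintains atime at all. A quick check, with an example path (noatime or relatime in the mount options means atime is not reliably updated on every read):

findmnt -T /u00/app/oracle/product/18EE -o TARGET,OPTIONS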

Here, I’ll trace all system calls related to file names (strace -e trace=file). I’ll trace them from the start of the database, so I run strace on dbstart with the -f argument to trace across forks. Then, I’ll trace the listener, the instance processes and any user process created through the listener.

I pipe the output to an awk script which extracts the file names (they are enclosed in double quotes in the strace output). Basically, the awk just sets the field separator to the double quote with -F'"' and prints the $2 token for each line. There are many single and double quotes here because of shell interpretation.
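
To see what that extraction does, here is a made-up strace line piped through the same awk (the file name is the second double-quote-delimited field):

echo 'open("/u00/app/oracle/product/18EE/bin/oracle", O_RDONLY) = 3' | awk -F'"' '{print $2}'
/u00/app/oracle/product/18EE/bin/oracle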

[oracle@vmreforatun01 ~]$ dbshut $ORACLE_HOME ; strace -fe trace=file -o "|awk -F'"'"'"' '"'{print $2}'"'" sh -xc "dbstart $ORACLE_HOME >&2" | grep "^$ORACLE_HOME" | sort -u > /tmp/files.txt &

Then I run some activity. I did this on our Oracle Tuning training workshop lab, when reviewing all exercises after upgrading the lab VM to 18c. This runs some usual SQL for application (we use Swingbench) and monitoring. The idea is to run through all features that you want to be available on the container you will build.

When I’m done, I dbshut (remember this is for a lab only – strace is not for production) and then the strace output gets deduplicated (sort -u) and written to files.txt in /tmp.

This file contains all files referenced by system calls. Surprisingly, there is one that is not captured here, the LDAP messages file, but if I do not include it then remote connections will fail with:

ORA-07445: exception encountered: core dump [gslumcCalloc()+41] [SIGSEGV] [ADDR:0x21520] [PC:0x60F92D9] [Address not mapped to object] []

I found it with a very empirical approach and will try to understand why later. For the moment, I just add it to the list:

[oracle@vmreforatun01 ~]$ ls $ORACLE_HOME/ldap/mesg/ldapus.msb >> /tmp/files.txt

I also add the adrci and dbshut scripts as they are small and may be useful:

[oracle@vmreforatun01 ~]$ ls $ORACLE_HOME/bin/adrci $ORACLE_HOME/bin/dbshut >> /tmp/files.txt

From this list, I keep only the entries which are not directories, and tar all regular files and symbolic links into /tmp/smalloh.tar:

[oracle@vmreforatun01 ~]$ stat -c "%F %n" $(cat /tmp/files.txt) | awk '!/^directory/{print $3}' | tar -cvf /tmp/smalloh.tar --dereference --files-from=-

This is a 600M tar:

[oracle@vmreforatun01 ~]$ du -h /tmp/smalloh.tar
 
598M /tmp/smalloh.tar

Then I can remove my Oracle Home

[oracle@vmreforatun01 ~]$ cd $ORACLE_HOME/..
[oracle@vmreforatun01 product]$ rm -rf 18EE
[oracle@vmreforatun01 product]$ mkdir 18EE

and extract the files from my tar:

[oracle@vmreforatun01 /]$ tar -xf /tmp/smalloh.tar

I forgot that there are some setuid executables so I must be root to set them:

[oracle@vmreforatun01 /]$ ls -l $ORACLE_HOME/bin/oracle
-rwxr-x--x. 1 oracle oinstall 437157251 Aug 11 18:40 /u00/app/oracle/product/18EE/bin/oracle
[oracle@vmreforatun01 /]$ su
Password:
[root@vmreforatun01 /]# tar -xf /tmp/smalloh.tar
[root@vmreforatun01 /]# exit
[oracle@vmreforatun01 /]$ ls -l $ORACLE_HOME/bin/oracle
-rwsr-s--x. 1 oracle oinstall 437157251 Aug 11 18:40 /u00/app/oracle/product/18EE/bin/oracle

That’s a 600MB Oracle Home then. You can reduce it further by stripping the binaries:

[oracle@vmreforatun01 18EE]$ du -hs $ORACLE_HOME
599M /u00/app/oracle/product/18EE
[oracle@vmreforatun01 18EE]$ strip $ORACLE_HOME/bin/* $ORACLE_HOME/lib/*
[oracle@vmreforatun01 18EE]$ du -hs $ORACLE_HOME
570M /u00/app/oracle/product/18EE

but for only 30MB I really prefer to keep all the symbols. As I’m doing something completely unsupported, I may have to do some troubleshooting.

Now I’m ready to start the database and the listener:

[oracle@vmreforatun01 18EE]$ dbstart $ORACLE_HOME
Processing Database instance "DB1": log file /u00/app/oracle/product/18EE/rdbms/log/startup.log

and I run some Swingbench workload to check that everything is fine:

[oracle@vmreforatun01 18EE]$ /home/oracle/swingbench/bin/charbench -cs //localhost:1521/APP -u soe -p soe -uc 10 -min 5 -max 20 -a -v
Author : Dominic Giles
Version : 2.5.0.932
 
Results will be written to results.xml.
 
Time Users TPM TPS
 
6:35:15 PM 0 0 0
...
6:35:44 PM 10 12 9
6:35:45 PM 10 16 4
6:35:46 PM 10 21 5
6:35:47 PM 10 31 10

The only errors in the alert.log are about checking the installed patches:

QPI: OPATCH_INST_DIR not present:/u00/app/oracle/product/18EE/OPatch
Unable to obtain current patch information due to error: 20013, ORA-20013: DBMS_QOPATCH ran mostly in non install area
ORA-06512: at "SYS.DBMS_QOPATCH", line 767
ORA-06512: at "SYS.DBMS_QOPATCH", line 547
ORA-06512: at "SYS.DBMS_QOPATCH", line 2124

Most of those 600MB are in the server executable (bin/oracle) and client shared library (lib/libclntsh.so):

[oracle@vmreforatun01 ~]$ size -td /u00/app/oracle/product/18EE/bin/* /u00/app/oracle/product/18EE/lib/* | sort -n
 
text data bss dec hex filename
2423 780 48 3251 cb3 /u00/app/oracle/product/18EE/lib/libofs.so
4684 644 48 5376 1500 /u00/app/oracle/product/18EE/lib/libskgxn2.so
5301 732 48 6081 17c1 /u00/app/oracle/product/18EE/lib/libodm18.so
10806 2304 1144 14254 37ae /u00/app/oracle/product/18EE/bin/sqlplus
13993 2800 1136 17929 4609 /u00/app/oracle/product/18EE/bin/adrci
46456 3008 160 49624 c1d8 /u00/app/oracle/product/18EE/lib/libnque18.so
74314 4824 1248 80386 13a02 /u00/app/oracle/product/18EE/bin/oradism
86396 23968 1144 111508 1b394 /u00/app/oracle/product/18EE/bin/lsnrctl
115523 2196 48 117767 1cc07 /u00/app/oracle/product/18EE/lib/libocrutl18.so
144591 3032 160 147783 24147 /u00/app/oracle/product/18EE/lib/libdbcfg18.so
216972 2564 48 219584 359c0 /u00/app/oracle/product/18EE/lib/libclsra18.so
270692 13008 160 283860 454d4 /u00/app/oracle/product/18EE/lib/libskjcx18.so
321701 5024 352 327077 4fda5 /u00/app/oracle/product/18EE/lib/libons.so
373988 7096 9536 390620 5f5dc /u00/app/oracle/product/18EE/lib/libmql1.so
717398 23224 110088 850710 cfb16 /u00/app/oracle/product/18EE/bin/orabaseconfig
717398 23224 110088 850710 cfb16 /u00/app/oracle/product/18EE/bin/orabasehome
878351 36800 1144 916295 dfb47 /u00/app/oracle/product/18EE/bin/tnslsnr
928382 108920 512 1037814 fd5f6 /u00/app/oracle/product/18EE/lib/libcell18.so
940122 56176 2376 998674 f3d12 /u00/app/oracle/product/18EE/lib/libsqlplus.so
1118019 16156 48 1134223 114e8f /u00/app/oracle/product/18EE/lib/libocr18.so
1128954 5936 160 1135050 1151ca /u00/app/oracle/product/18EE/lib/libskgxp18.so
1376814 18548 48 1395410 154ad2 /u00/app/oracle/product/18EE/lib/libocrb18.so
1685576 130464 160 1816200 1bb688 /u00/app/oracle/product/18EE/lib/libasmclntsh18.so
2517125 16496 15584 2549205 26e5d5 /u00/app/oracle/product/18EE/lib/libipc1.so
3916867 86504 111912 4115283 3ecb53 /u00/app/oracle/product/18EE/lib/libclntshcore.so.18.1
4160241 26320 69264 4255825 40f051 /u00/app/oracle/product/18EE/lib/libmkl_rt.so
5120001 459984 7784 5587769 554339 /u00/app/oracle/product/18EE/lib/libnnz18.so
10822468 302312 21752 11146532 aa1524 /u00/app/oracle/product/18EE/lib/libhasgen18.so
11747579 135320 160 11883059 b55233 /u00/app/oracle/product/18EE/lib/libshpkavx218.so
61758209 2520896 134808 64413913 3d6e0d9 /u00/app/oracle/product/18EE/lib/libclntsh.so.18.1
376147897 3067672 602776 379818345 16a39169 /u00/app/oracle/product/18EE/bin/oracle
487369241 7106932 1203944 495680117 1d8b7a75 (TOTALS)

Of course, this is probably not sufficient, especially if you want to run APEX, OJVM, or Oracle Text. The method is there: run a workload that covers everything you need, and build the Oracle Home from the files used there. I used strace here, but auditd can also be a good idea. Ideally, this job will be done one day by Oracle itself in a supported way, so that we can build a core container for Oracle Database and add features as Dockerfile layers. This was done to release Oracle XE 11g, which is only 300MB. However, Oracle XE 18c, announced for October, will probably be larger as it includes nearly all options.
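
As an illustration of the auditd alternative, a rough sketch could look like this (the watch path and the rule key are examples, and auditd must be running):

# watch read accesses below the Oracle Home while the workload runs
auditctl -w /u00/app/oracle/product/18EE -p r -k smalloh
# ... run the workload, then report the files that were accessed ...
ausearch -k smalloh --raw | aureport -f -i
# remove the watch when done
auditctl -W /u00/app/oracle/product/18EE -p r -k smalloh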

 

The article The size of Oracle Home: from 9GB to 600MB appeared first on Blog dbi services.

Unable to gather table stats in parallel

Tom Kyte - Thu, 2018-08-16 07:46
Hi, We're running gather_table_stats with the following arguments: dbms_stats.gather_table_stats(ownname=>'&owner', tabname=>'&table',estimate_percent=>0.1,block_sample=>true, method_opt=>'FOR ALL INDEXED COLUMNS SIZE AUTO',cascade=>fals...
Categories: DBA Blogs

Merge 2 PDF/png/jpg Files?

Tom Kyte - Thu, 2018-08-16 07:46
Good Evening Tom. Is it possible to merge 2 binary files using Oracle PL/SQL? I have seen examples using Java but not PL/SQL. Thanks for your time.
Categories: DBA Blogs

Replace deprecated apex_util.string_to_table (APEX 5.1/18.1)

Dimitri Gielis - Wed, 2018-08-15 15:44
Sometimes the Oracle APEX documentation announces that some packages will become deprecated in a release. It's not that those packages are suddenly gone, but you should not use them anymore. Your code will still run fine, but in the future APEX might take them out completely, so it's best to replace them with the new package.

One of those APIs announced as deprecated in Oracle APEX 5.1, and one which I used a lot, was apex_util.string_to_table.

For example, in APEX Office Print (AOP) we read the session state of page items and we have some code like this:
declare
  l_string         varchar2(4000) := 'P1_X:P1_Y';
  l_page_items_arr apex_application_global.vc_arr2;
begin
  l_page_items_arr := apex_util.string_to_table(p_string => l_string, p_separator => ':');
  for i in 1..l_page_items_arr.count
  loop
    sys.htp.p(l_page_items_arr(i)||':'||apex_util.get_session_state(l_page_items_arr(i)));
  end loop;
end;

As the function is deprecated and APEX is already on release 18.1, it's good to start replacing those calls. The new function you can use is apex_string.split.

The above code then becomes:
declare
  l_string         varchar2(4000) := 'P1_X:P1_Y';
  l_page_items_arr apex_t_varchar2;
begin
  l_page_items_arr := apex_string.split(p_str => l_string, p_sep => ':');
  for i in 1..l_page_items_arr.count
  loop
    sys.htp.p(l_page_items_arr(i)||':'||apex_util.get_session_state(l_page_items_arr(i)));
  end loop;
end;

Depending on your application, you might need to be careful. For example with AOP, we support customers on Oracle APEX 5.0, 5.1 and 18.1. We can't really force customers to move to higher APEX versions, so the way we solve it is by using conditional compilation in our code. If we see you are on APEX 5.1 or above, we use apex_string.split; if not, and you are still on an earlier version, we use apex_util.string_to_table.

Here's an example of what the code with conditional compilation looks like:
$if wwv_flow_api.c_current >= 20160824
$then
  l_page_items_arr := apex_string.split(p_str => l_string, p_sep => ':');
$else
  l_page_items_arr := apex_util.string_to_table(p_string => l_string, p_separator => ':');
$end

Note that you also need conditional compilation on the variable declaration if the types are different, or you can choose to conditionally compile the entire function.

To conclude, I recommend that with every new release of Oracle APEX you look for deprecated components, search for them in your code, and make a note to change them when needed.
Categories: Development

Patching E-Business Suite without Downtime

Gerger Consulting - Wed, 2018-08-15 11:58

Downtime during upgrades can cause a lot of headaches to E-Business Suite customers. No more!

Attend the free webinar by Oracle ACE Associate and Chicago OUG President Alfredo Abate and learn how you can patch your EBS installations without any downtime.

Register at this link.


Categories: Development

August 2018 Update to E-Business Suite Technology Codelevel Checker (ETCC)

Steven Chan - Wed, 2018-08-15 11:56

The E-Business Suite Technology Codelevel Checker (ETCC) tool helps you identify application or database tier overlay patches that need to be applied to your Oracle E-Business Suite Release 12.2 system. ETCC maps missing overlay patches to the default corresponding Database Patch Set Update (PSU) patches, and displays them in a patch recommendation summary.

What’s New

ETCC has been updated to include bug fixes and patching combinations for the following recommended updates:

  • Oracle Database Proactive BP 12.1.0.2.180717
  • Oracle Database PSU 12.1.0.2.180717
  • Oracle JavaVM Component Database PSU 12.1.0.2.180717
  • Oracle Database Patch for Exadata BP 12.1.0.2.180717
  • Microsoft Windows Database BP 12.1.0.2.180717
  • Oracle JavaVM Component 12.1.0.2.180717 on Windows

Obtaining ETCC

We recommend always using the latest version of ETCC, as new bugfixes will not be checked by older versions of the utility. The latest version of the ETCC tool can be downloaded via Patch 17537119 from My Oracle Support.

Categories: APPS Blogs

Introducing GraphPipe

OTN TechBlog - Wed, 2018-08-15 11:00
Dead Simple Machine Learning Model Serving

There has been rapid progress in machine learning over the past few years. Today, you can grab one of a handful of frameworks, follow some online tutorials, and have a working machine learning model in a matter of hours. Unfortunately, when you are ready to deploy that model into production you still face several unique challenges.

First, there is no standard for model serving APIs, so you are likely stuck with whatever your framework gives you. This might be protocol buffers or custom JSON. Your business application will generally need a bespoke client just to talk to your deployed model. And it's even worse if you are using multiple frameworks. If you want to create ensembles of models from multiple frameworks, you'll have to write custom code to combine them.

Second, building your model server can be incredibly complicated. Deployment gets much less attention than training, so out-of-the-box solutions are few and far between. Try building a GPU version of TensorFlow-serving, for example. You better be prepared to bang your head against it for a few days.

Finally, many of the existing solutions don't focus on performance, so for certain use cases they fall short. Serving a bunch of tensor data from a complex model via a Python-JSON API is not going to cut it for performance-critical applications.

We created GraphPipe to solve these three challenges. It provides a standard, high-performance protocol for transmitting tensor data over the network, along with simple implementations of clients and servers that make deploying and querying machine learning models from any framework a breeze. GraphPipe's efficient servers can serve models built in TensorFlow, PyTorch, mxnet, CNTK, or caffe2. We are pleased to announce that GraphPipe is available on Oracle's GitHub. Documentation, examples, and other relevant content can be found at https://oracle.github.io/graphpipe.

The Business Case

In the enterprise, machine-learning models are often trained individually and deployed using bespoke techniques. This impacts an organization's ability to derive value from its machine learning efforts. If marketing wants to use a model produced by the finance group, they will have to write custom clients to interact with the model. If the model becomes popular and sales wants to use it as well, the custom deployment may crack under the load.

It only gets worse when the models start appearing in customer-facing mobile and IoT applications. Many devices are not powerful enough to run models locally and must make a request to a remote service. This service must be efficient and stable while running models from varied machine learning frameworks.

A standard allows researchers to build the best possible models, using whatever tools they desire, and be sure that users can access their models' predictions without bespoke code. Models can be deployed across multiple servers and easily aggregated into larger ensembles using a common protocol. GraphPipe provides the tools that the business needs to derive value from its machine learning investments.

Implementation Details

GraphPipe is an efficient network protocol designed to simplify and standardize transmission of machine learning data between remote processes. Presently, no dominant standard exists for how tensor-like data should be transmitted between components in a deep learning architecture. As such it is common for developers to use protocols like JSON, which is extremely inefficient, or TensorFlow-serving's protocol buffers, which carries with it the baggage of TensorFlow, a large and complex piece of software. GraphPipe is designed to bring the efficiency of a binary, memory-mapped format while remaining simple and light on dependencies.

GraphPipe includes:

  • A set of flatbuffer definitions
  • Guidelines for serving models consistently according to the flatbuffer definitions
  • Examples for serving models from TensorFlow, ONNX, and caffe2
  • Client libraries for querying models served via GraphPipe

In essence, a GraphPipe request behaves like a TensorFlow-serving predict request, but using flatbuffers as the message format. Flatbuffers are similar to Google protocol buffers, with the added benefit of avoiding a memory copy during the deserialization step. The flatbuffer definitions provide a request message that includes input tensors, input names and output names. A GraphPipe remote model accepts the request message and returns one tensor per requested output name. The remote model must also provide metadata about the types and shapes of the inputs and outputs that it supports.

Performance

First, we compare serialization and deserialization speed of float tensor data in python using a custom ujson API, protocol buffers using a TensorFlow-serving predict request, and a GraphPipe remote request. The request consists of about 19 million floating-point values (consisting of 128 224x224x3 images) and the response is approximately 3.2 million floating point values (consisting of 128 7x7x512 convolutional outputs). The units on the left are in seconds.

GraphPipe is especially performant on the deserialization side, because flatbuffers provide access to the underlying data without a memory copy.

Second, we compare end-to-end throughput using a Python-JSON TensorFlow model server, TensorFlow-serving, and the GraphPipe-go TensorFlow model server. In each case the backend model is the same. Large requests are made to the server using 1 thread and then again with 5 threads. The units on the left are rows calculated by the model per second.

Note that this test uses the recommended parameters for building Tensorflow-serving. Although the recommended build parameters for TensorFlow-serving do not perform well, we were ultimately able to discover compilation parameters that allow it to perform on par with our GraphPipe implementation. In other words, an optimized TensorFlow-serving performs similarly to GraphPipe, although building TensorFlow-serving to perform optimally is not documented nor easy.

Where Do I Get it?

You can find plenty of documentation and examples at https://oracle.github.io/graphpipe. The GraphPipe flatbuffer spec can be found on Oracle's GitHub along with servers that implement the spec for Python and Go. We also provide clients for Python, Go, and Java (coming soon), as well as a plugin for TensorFlow that allows the inclusion of a remote model inside a local TensorFlow graph.

Oracle Ksplice patch for CVE-2018-3620 and CVE-2018-3646 for Oracle Linux UEK r4

Wim Coekaerts - Wed, 2018-08-15 10:52

There was an Intel disclosure yesterday of a set of vulnerabilities around L1TF. You can read a summary here.

We released, as you can see from the blog, a number of kernel updates for Oracle Linux and a Ksplice patch for the same.  I wanted to take the opportunity again to show off how awesome Oracle Ksplice is.

The kernel patch we have for L1TF was about 106 different patches combined: 54 files changed, 2079 insertions(+), 501 deletions(-), and about 1.2MB binary size for the Ksplice kernel module. All this went into a single Ksplice patch!

It applied in a few microseconds. On one server I have in Oracle Cloud, I always run # uptrack-upgrade manually; on another server I have autoinstall=yes.

# uptrack-upgrade
The following steps will be taken:
Install [1vao34m9] CVE-2018-3620, CVE-2018-3646: Information leak in Intel CPUs under terminal fault.
Go ahead [y/N]? y
Installing [1vao34m9] CVE-2018-3620, CVE-2018-3646: Information leak in Intel CPUs under terminal fault.
Your kernel is fully up to date.
Effective kernel version is 4.1.12-124.18.1.el7uek

My other machine was up to date automatically and I didn't even know it. I ran # uptrack-show and the patch was already applied. No reboot, no impact on the stuff I run here. Just autonomously done. Patched. Current.

Folks sometimes ask me about live patching capabilities from some other vendors. Well, we have the above for every errata kernel released since the Spectre/Meltdown CVEs (as this is a layer on top of that code), delivered at the same time as the kernel RPMs, as an integrated service. 'nuf said.

Oh and everyone in Oracle Cloud, remember, the Oracle Ksplice tools (uptrack) are installed in every OL image by default and you can run this without any additional configuration (or additional charges).

Easily manage dual backup destination with RMAN

Yann Neuhaus - Wed, 2018-08-15 10:40

Backup on disk with RMAN is great. It’s fast: you can set as many channels as your platform can handle for faster backups, and you can restore as fast as you can read and write files on disk with these multiple channels. That is, as long as you’re using Enterprise Edition, because Standard Edition is limited to a single channel.

Disk space is very often limited and you’ll probably have to find another solution if you want to keep backups longer. You can think about tapes, or you can connect RMAN to a global backup tool, but that requires additional libraries that are not free, and it definitely adds complexity.

The other solution is to have a dual disk destination for the backups. The first one will be the main destination for your daily backups, the other one will be dedicated to long-term backups, maybe on slower disks but with more free space available. This second destination can eventually be backed up with another tool without using any library.

For the demonstration, assume you have 2 filesystems: /backup is dedicated to the latest daily backups and /lt_backup is for long-term backups.

du -hs backup; ls -lrt backup/* | tail -n 8 ; echo ;du -hs lt_backup; ls -lrt lt_backup/* | tail -n 8

4.0K    backup
ls: cannot access backup/*: No such file or directory

4.0K    lt_backup
ls: cannot access lt_backup/*: No such file or directory

 

First of all, take a backup on the first destination:

RMAN> backup as compressed backupset database format '/oracle/backup/%U';

 

This is a small database and the backup is done with the default single channel, so there are only two backupsets: one for the datafiles and the other for the controlfile and the spfile:

du -hs backup; ls -lrt backup/* | tail -n 8 ; echo ;du -hs lt_backup; ls -lrt lt_backup/* | tail -n 8

162M    backup
-rw-r-----. 1 oracle oinstall 168067072 Aug 15 01:27 backup/2btaj0mt_1_1
-rw-r-----. 1 oracle oinstall   1130496 Aug 15 01:27 backup/2ctaj0nm_1_1

4.0K    lt_backup
ls: cannot access lt_backup/*: No such file or directory

 

It’s quite easy to move the backup to the long term destination with RMAN:

RMAN> backup backupset all format '/oracle/lt_backup/%U' delete input;

 

BACKUP BACKUPSET with DELETE INPUT is basically the same as a system mv or move, but it does not require recataloging the backup files, as RMAN does this automatically.

Now our backup is located in the second destination:

du -hs backup; ls -lrt backup/* | tail -n 8 ; echo ;du -hs lt_backup; ls -lrt lt_backup/* | tail -n 8

4.0K    backup
ls: cannot access backup/*: No such file or directory

162M    lt_backup
-rw-r-----. 1 oracle oinstall 168067072 Aug 15 01:28 lt_backup/2btaj0mt_1_2
-rw-r-----. 1 oracle oinstall   1130496 Aug 15 01:28 lt_backup/2ctaj0nm_1_2

 

You can see here that the backup filename has changed: the last number increased. Oracle knows that this is the second copy of these backupsets (even though the first ones don’t exist anymore).

Like a mv command, you can put your backup back in the previous destination:

RMAN> backup backupset all format '/oracle/backup/%U' delete input;

162M    backup
-rw-r-----. 1 oracle oinstall 168067072 Aug 15 01:29 backup/2btaj0mt_1_3
-rw-r-----. 1 oracle oinstall   1130496 Aug 15 01:29 backup/2ctaj0nm_1_3

4.0K    lt_backup
ls: cannot access lt_backup/*: No such file or directory

 

All the backupsets are now back in the first destination only, and you can see another increase in the filename number. And the RMAN catalog is up to date.

Now let’s make the first folder the default destination for the backups, and go for compressed backupset as a default behavior:

RMAN> CONFIGURE DEVICE TYPE DISK PARALLELISM 1 BACKUP TYPE TO COMPRESSED BACKUPSET ;
RMAN> CONFIGURE CHANNEL DEVICE TYPE DISK FORMAT '/oracle/backup/%U';

 

Now you only need a 2-word command to back up the database:

RMAN> backup database;

 

The new backup is in the first destination as expected:

du -hs backup; ls -lrt backup/* | tail -n 8 ; echo ;du -hs lt_backup; ls -lrt lt_backup/* | tail -n 8

323M    backup
-rw-r-----. 1 oracle oinstall 168067072 Aug 15 01:29 backup/2btaj0mt_1_3
-rw-r-----. 1 oracle oinstall   1130496 Aug 15 01:29 backup/2ctaj0nm_1_3
-rw-r-----. 1 oracle oinstall 168050688 Aug 15 01:35 backup/2dtaj15o_1_1
-rw-r-----. 1 oracle oinstall   1130496 Aug 15 01:35 backup/2etaj16h_1_1

4.0K    lt_backup
ls: cannot access lt_backup/*: No such file or directory

 

Suppose you want to move the oldest backups, those done before 1:30 AM:

RMAN> backup backupset completed before 'TRUNC(SYSDATE)+1.5/24' format '/oracle/lt_backup/%U' delete input;

 

Everything is working as expected: the latest backup is still in the first destination, and the oldest one is in the lt_backup filesystem, with another increase of the number ending the filename:

du -hs backup; ls -lrt backup/* | tail -n 8 ; echo ;du -hs lt_backup; ls -lrt lt_backup/* | tail -n 8

162M    backup
-rw-r-----. 1 oracle oinstall 168050688 Aug 15 01:35 backup/2dtaj15o_1_1
-rw-r-----. 1 oracle oinstall   1130496 Aug 15 01:35 backup/2etaj16h_1_1

162M    lt_backup
-rw-r-----. 1 oracle oinstall 168067072 Aug 15 01:38 lt_backup/2btaj0mt_1_4
-rw-r-----. 1 oracle oinstall   1130496 Aug 15 01:38 lt_backup/2ctaj0nm_1_4

 

Now that the tests are OK, let’s simulate a real-world example. First, tidy up all the backups:

RMAN> delete noprompt backupset;

 

Let’s take a new backup.

RMAN> backup database;

 

The backup is in the default destination:

du -hs backup; ls -lrt backup/* | tail -n 8 ; echo ;du -hs lt_backup; ls -lrt lt_backup/* | tail -n 8

162M    backup
-rw-r-----. 1 oracle oinstall 168050688 Aug 15 01:43 backup/2ftaj1lv_1_1
-rw-r-----. 1 oracle oinstall   1130496 Aug 15 01:43 backup/2gtaj1mo_1_1

4.0K    lt_backup
ls: cannot access lt_backup/*: No such file or directory

 

Let’s take another backup later:

RMAN> backup database;

du -hs backup; ls -lrt backup/* | tail -n 8 ; echo ;du -hs lt_backup; ls -lrt lt_backup/* | tail -n 8

323M    backup
-rw-r-----. 1 oracle oinstall 168050688 Aug 15 01:43 backup/2ftaj1lv_1_1
-rw-r-----. 1 oracle oinstall   1130496 Aug 15 01:43 backup/2gtaj1mo_1_1
-rw-r-----. 1 oracle oinstall 168181760 Aug 15 02:00 backup/2htaj2m4_1_1
-rw-r-----. 1 oracle oinstall   1130496 Aug 15 02:01 backup/2itaj2mt_1_1

4.0K    lt_backup
ls: cannot access lt_backup/*: No such file or directory

 

Now let’s move the oldest backup to the other folder:

RMAN> backup backupset completed before 'TRUNC(SYSDATE)+2/24' format '/oracle/lt_backup/%U' delete input;

du -hs backup; ls -lrt backup/* | tail -n 8 ; echo ;du -hs lt_backup; ls -lrt lt_backup/* | tail -n 8

162M    backup
-rw-r-----. 1 oracle oinstall 168181760 Aug 15 02:00 backup/2htaj2m4_1_1
-rw-r-----. 1 oracle oinstall   1130496 Aug 15 02:01 backup/2itaj2mt_1_1

162M    lt_backup
-rw-r-----. 1 oracle oinstall 168050688 Aug 15 02:02 lt_backup/2ftaj1lv_1_2
-rw-r-----. 1 oracle oinstall   1130496 Aug 15 02:02 lt_backup/2gtaj1mo_1_2

 

Storing only the oldest backups in the long-term destination is not so clever: imagine you lose your first backup destination. It would be great to have the latest backup in both destinations too. You can do that with BACKUP BACKUPSET COMPLETED AFTER and no DELETE INPUT, which is basically the same as a cp or copy command:

RMAN> backup backupset completed after 'TRUNC(SYSDATE)+2/24' format '/oracle/lt_backup/%U';

du -hs backup; ls -lrt backup/* | tail -n 8 ; echo ;du -hs lt_backup; ls -lrt lt_backup/* | tail -n 8

162M    backup
-rw-r-----. 1 oracle oinstall 168181760 Aug 15 02:00 backup/2htaj2m4_1_1
-rw-r-----. 1 oracle oinstall   1130496 Aug 15 02:01 backup/2itaj2mt_1_1

323M    lt_backup
-rw-r-----. 1 oracle oinstall 168050688 Aug 15 02:02 lt_backup/2ftaj1lv_1_2
-rw-r-----. 1 oracle oinstall   1130496 Aug 15 02:02 lt_backup/2gtaj1mo_1_2
-rw-r-----. 1 oracle oinstall 168181760 Aug 15 02:03 lt_backup/2htaj2m4_1_2
-rw-r-----. 1 oracle oinstall   1130496 Aug 15 02:03 lt_backup/2itaj2mt_1_2

 

That’s it: you now have a first destination holding only the newest backups, and a second one holding all the backups. You just have to schedule these two BACKUP BACKUPSET commands after the daily backup of your database, as in the following sketch.
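A minimal sketch of such a daily sequence, assuming the same two destinations as above and a time threshold that matches your backup window (2 AM here, as in the tests), could be:

RMAN> backup database;
RMAN> backup backupset completed before 'TRUNC(SYSDATE)+2/24' format '/oracle/lt_backup/%U' delete input;
RMAN> backup backupset completed after 'TRUNC(SYSDATE)+2/24' format '/oracle/lt_backup/%U';

The first command takes the new backup into the configured default destination, the second moves the older backupsets to the long-term destination, and the third copies the fresh ones there as well.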

Note that backups will stay in both destinations until they reach the retention limit you have defined for your database. DELETE OBSOLETE will purge the backupsets wherever they are and delete all the known copies.
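For example, with a recovery window-based retention policy (a sketch only; choose the window you actually need), the cleanup could simply be:

RMAN> CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 14 DAYS;
RMAN> DELETE NOPROMPT OBSOLETE;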

 

The article Easily manage dual backup destination with RMAN appeared first on Blog dbi services.

Oracle 18c DataGuard : Rman RECOVER STANDBY DATABASE

Yann Neuhaus - Wed, 2018-08-15 10:08

With Oracle Database 18c, we can now refresh a standby database over the network using one RMAN command, RECOVER STANDBY DATABASE.

The RECOVER STANDBY DATABASE command restarts the standby instance, refreshes the control file from the primary database, and automatically renames data files, temp files, and online logs. It restores new data files that were added to the primary database and recovers the standby database up to the current time.
When you use the RECOVER STANDBY DATABASE command to refresh a standby database, you specify either a FROM SERVICE clause or a NOREDO clause. The FROM SERVICE clause specifies the name of a primary service. The NOREDO clause specifies that backups should be used for the refresh, which allows a standby to be rolled forward to a specific time or SCN.
The MRP must be manually stopped on the standby before any attempt is made to sync with the primary database.
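As a minimal sketch of the whole flow (using the broker to stop and restart redo apply, and the primary service name used later in this post), it boils down to:

DGMGRL> edit database 'CONT18C_SITE1' set state='APPLY-OFF';

RMAN> RECOVER STANDBY DATABASE FROM SERVICE CONT18c_SITE;

DGMGRL> edit database 'CONT18C_SITE1' set state='APPLY-ON';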

In this blog I am doing some tests of standby refresh using the Recover Standby Database command.

Starting from a healthy Data Guard configuration, let’s set the property StandbyFileManagement to MANUAL:

DGMGRL> show configuration;

Configuration - CONT18C_DR

  Protection Mode: MaxPerformance
  Members:
  CONT18C_SITE  - Primary database
    CONT18C_SITE1 - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS   (status updated 18 seconds ago)

DGMGRL>

DGMGRL> edit database 'CONT18C_SITE' set property StandbyFileManagement=MANUAL;
Property "standbyfilemanagement" updated
DGMGRL> edit database 'CONT18C_SITE1' set property StandbyFileManagement=MANUAL;
Property "standbyfilemanagement" updated
DGMGRL> show  database 'CONT18C_SITE' StandbyFileManagement;
  StandbyFileManagement = 'manual'
DGMGRL> show  database 'CONT18C_SITE1' StandbyFileManagement;
  StandbyFileManagement = 'manual'
DGMGRL>

Then I add a new tablespace and a new table on the primary:

SQL> create tablespace TBS_2 datafile '/u01/app/oracle/oradata/CONT18C/PDB1/tbs_201.dbf' size 5M ;

Tablespace created.

SQL> create table test (id number) tablespace TBS_2;

Table created.

SQL> insert into test values (1);

1 row created.

SQL> insert into test values (2);

1 row created.

SQL> commit;

Commit complete.

SQL>

As expected, the changes are not replicated, as shown in the standby alert log and in the broker configuration:

(3):File #14 added to control file as 'UNNAMED00014' because
(3):the parameter STANDBY_FILE_MANAGEMENT is set to MANUAL
(3):The file should be manually created to continue.
MRP0 (PID:6307): MRP0: Background Media Recovery terminated with error 1274
2018-08-15T13:31:08.343276+02:00
Errors in file /u01/app/oracle/diag/rdbms/cont18c_site1/CONT18C/trace/CONT18C_mrp0_6307.trc:
ORA-01274: cannot add data file that was originally created as '/u01/app/oracle/oradata/CONT18C/PDB1/tbs_201.dbf'
MRP0 (PID:6307): Managed Standby Recovery not using Real Time Apply
Recovery interrupted!

Using the broker

DGMGRL> show database 'CONT18C_SITE1';

Database - CONT18C_SITE1

  Role:               PHYSICAL STANDBY
  Intended State:     APPLY-ON
  Transport Lag:      0 seconds (computed 1 second ago)
  Apply Lag:          4 minutes 33 seconds (computed 1 second ago)
  Average Apply Rate: 3.00 KByte/s
  Real Time Query:    OFF
  Instance(s):
    CONT18C

  Database Error(s):
    ORA-16766: Redo Apply is stopped

  Database Warning(s):
    ORA-16853: apply lag has exceeded specified threshold

Database Status:
ERROR

DGMGRL>

Now let’s try to sync the standby database using the RECOVER command. First let’s stop the recovery process.

DGMGRL> edit database 'CONT18C_SITE1' set state ='APPLY-OFF';
Succeeded.
DGMGRL> show database 'CONT18C_SITE1';

Database - CONT18C_SITE1

  Role:               PHYSICAL STANDBY
  Intended State:     APPLY-OFF
  Transport Lag:      0 seconds (computed 0 seconds ago)
  Apply Lag:          26 minutes 28 seconds (computed 0 seconds ago)
  Average Apply Rate: (unknown)
  Real Time Query:    OFF
  Instance(s):
    CONT18C

Database Status:
SUCCESS

DGMGRL>

Then let’s connect with RMAN to the standby as the target and run the command.
If we try to run the command while connected to the primary as the target, we get the following error:

RMAN> RECOVER STANDBY DATABASE FROM SERVICE CONT18c_SITE;

Starting recover at 15-AUG-18
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of recover command at 08/15/2018 14:00:15
RMAN-05146: must be connected to standby database to issue RECOVER STANDBY DATABASE

RMAN>

So let’s run it from the standby as the target. Note that the outputs are truncated:

[oracle@primaserver admin]$ rman target sys/root@cont18c_site1

Recovery Manager: Release 18.0.0.0.0 - Production on Wed Aug 15 14:03:55 2018
Version 18.3.0.0.0

Copyright (c) 1982, 2018, Oracle and/or its affiliates.  All rights reserved.

connected to target database: CONT18C (DBID=4292751651)

RMAN>  RECOVER STANDBY DATABASE FROM SERVICE CONT18c_SITE;

Starting recover at 15-AUG-18
using target database control file instead of recovery catalog
Executing: alter database flashback off
Oracle instance started

Total System Global Area     956299440 bytes

Fixed Size                     8902832 bytes
Variable Size                348127232 bytes
Database Buffers             595591168 bytes
Redo Buffers                   3678208 bytes

contents of Memory Script:
{
   restore standby controlfile from service  'CONT18c_SITE';
   alter database mount standby database;
}
executing Memory Script

Starting restore at 15-AUG-18
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=39 device type=DISK

channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: using network backup set from service CONT18c_SITE
channel ORA_DISK_1: restoring control file
channel ORA_DISK_1: restore complete, elapsed time: 00:00:04
output file name=/u01/app/oracle/oradata/CONT18C/control01.ctl
output file name=/u01/app/oracle/oradata/CONT18C/control02.ctl
Finished restore at 15-AUG-18

released channel: ORA_DISK_1
Statement processed

contents of Memory Script:
{
set newname for datafile  14 to
 "/u01/app/oracle/oradata/CONT18C/PDB1/tbs_201.dbf";
   restore from service  'CONT18c_SITE' datafile
    14;
   catalog datafilecopy  "/u01/app/oracle/oradata/CONT18C/PDB1/tbs_201.dbf";
   switch datafile all;
}
executing Memory Script

executing command: SET NEWNAME

Starting restore at 15-AUG-18
Starting implicit crosscheck backup at 15-AUG-18
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=51 device type=DISK
Crosschecked 5 objects
Finished implicit crosscheck backup at 15-AUG-18

Starting implicit crosscheck copy at 15-AUG-18
using channel ORA_DISK_1
Finished implicit crosscheck copy at 15-AUG-18

searching for all files in the recovery area
cataloging files...
cataloging done

List of Cataloged Files
=======================
File Name: /u01/app/oracle/fast_recovery_area/CONT18C/CONT18C_SITE1/archivelog/2018_08_15/o1_mf_1_47_fq7q5ls5_.arc
File Name: /u01/app/oracle/fast_recovery_area/CONT18C/CONT18C_SITE1/archivelog/2018_08_15/o1_mf_1_48_fq7qn5s3_.arc
File Name: /u01/app/oracle/fast_recovery_area/CONT18C/CONT18C_SITE1/archivelog/2018_08_15/o1_mf_1_49_fq7r0715_.arc
File Name: 
…
…

contents of Memory Script:
{
  recover database from service  'CONT18c_SITE';
}
executing Memory Script

Starting recover at 15-AUG-18
using channel ORA_DISK_1
skipping datafile 5; already restored to SCN 1550044
skipping datafile 6; already restored to SCN 1550044
skipping datafile 8; already restored to SCN 1550044
skipping datafile 14; already restored to SCN 2112213
channel ORA_DISK_1: starting incremental datafile backup set restore
channel ORA_DISK_1: using network backup set from service CONT18c_SITE
destination for restore of datafile 00001: /u01/app/oracle/oradata/CONT18C/system01.dbf
channel ORA_DISK_1: restore complete, elapsed time: 00:00:35
channel ORA_DISK_1: starting incremental datafile backup set restore
channel ORA_DISK_1: using network backup set from service CONT18c_SITE
destination for restore of datafile 00003: 
…
…
destination for restore of datafile 00012: /u01/app/oracle/oradata/CONT18C/PDB1/users01.dbf
channel ORA_DISK_1: restore complete, elapsed time: 00:00:01
channel ORA_DISK_1: starting incremental datafile backup set restore
channel ORA_DISK_1: using network backup set from service CONT18c_SITE
destination for restore of datafile 00013: /u01/app/oracle/oradata/CONT18C/PDB1/tbs_nolog01.dbf
channel ORA_DISK_1: restore complete, elapsed time: 00:00:01

starting media recovery

media recovery complete, elapsed time: 00:00:00
Finished recover at 15-AUG-18
flashback needs to be reenabled on standby open
Finished recover at 15-AUG-18

RMAN>

And we can verify that the configuration is now in sync:

DGMGRL> edit database 'CONT18C_SITE1' set state ='APPLY-ON';
Succeeded.
DGMGRL> show configuration;

Configuration - CONT18C_DR

  Protection Mode: MaxPerformance
  Members:
  CONT18C_SITE  - Primary database
    CONT18C_SITE1 - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS   (status updated 37 seconds ago)

DGMGRL>

After opening the standby in read-only mode, we can verify that everything is now fine:

SQL> alter session set container=pdb1;

Session altered.

SQL> select name from v$tablespace;

NAME
------------------------------
SYSTEM
SYSAUX
UNDOTBS1
TEMP
USERS
TBS_NOLOG
TBS_2

7 rows selected.

SQL> select name from v$datafile;

NAME
--------------------------------------------------------------------------------
/u01/app/oracle/oradata/CONT18C/PDB1/system01.dbf
/u01/app/oracle/oradata/CONT18C/PDB1/sysaux01.dbf
/u01/app/oracle/oradata/CONT18C/PDB1/undotbs01.dbf
/u01/app/oracle/oradata/CONT18C/PDB1/users01.dbf
/u01/app/oracle/oradata/CONT18C/PDB1/tbs_nolog01.dbf
/u01/app/oracle/oradata/CONT18C/PDB1/tbs_201.dbf

6 rows selected.

SQL> select * from test;

        ID
----------
         1
         2

SQL>
 

The article Oracle 18c DataGuard : Rman RECOVER STANDBY DATABASE appeared first on Blog dbi services.

So you want to use JSON in Oracle Database with Node.js?

Christopher Jones - Wed, 2018-08-15 05:52

The JavaScript JSON.parse() and JSON.stringify() methods make it easy to work with JavaScript objects in Node.js and store them in Oracle Database using the node-oracledb module.

I'll start with some examples showing a simple, naive implementation which you can use with all versions of Oracle Database. Then I'll go on to show some of the great JSON functionality introduced in Oracle Database 12.1.0.2.

The examples below use the async/await syntax available in Node 7.6, but they can be rewritten to use promises or callbacks, if you have an older version of Node.js.

Storing JSON as character data in Oracle Database 11.2

At its simplest, you can store JSON as character strings, such as in the column C of MYTAB:

CREATE TABLE mytab (k NUMBER, c CLOB);

Using a CLOB means we don't need to worry about the length restrictions of a VARCHAR2.

A JavaScript object like myContent can easily be inserted into Oracle Database with the node-oracledb module by stringifying it:

const oracledb = require('oracledb');

let connection, myContent, json, result;

async function run() {
  try {
    connection = await oracledb.getConnection(
      {user: "hr", password: "welcome", connectString: "localhost/orclpdb"});

    myContent = {name: "Sally", address: {city: "Melbourne"}};
    json = JSON.stringify(myContent);

    result = await connection.execute(
      'insert into mytab (k, c) values (:kbv, :cbv)',
      { kbv: 1, cbv: json } );

    console.log('Rows inserted: ' + result.rowsAffected);
  } catch (err) {
    console.error(err);
  } finally {
    if (connection) {
      try {
        await connection.close();
      } catch (err) {
        console.error(err);
      }
    }
  }
}

run();

If you are just inserting one record you may want to autocommit, but make sure you don't unnecessarily commit, or break transactional consistency by committing a partial set of data:

myContent = {name: "Sally", address: {city: "Melbourne"}};
json = JSON.stringify(myContent);

result = await connection.execute(
  'insert into mytab (k, c) values (:kbv, :cbv)',
  { kbv: 1, cbv: json },
  { autoCommit: true} );

console.log('Rows inserted: ' + result.rowsAffected);

The output is:

Rows inserted: 1

To retrieve the JSON content you have to use a SQL query. This is fine when you only need to lookup records by their keys:

result = await connection.execute(
  'select c from mytab where k = :kbv',
  { kbv: 1 },   // the key to find
  { fetchInfo: {"C": {type: oracledb.STRING } }});

if (result.rows.length) {
  js = JSON.parse(result.rows[0]);
  console.log('Name is: ' + js.name);
  console.log('City is: ' + js.address.city);
} else {
  console.log('No rows fetched');
}

The fetchInfo clause is used to return the CLOB as a string. This is simpler and generally faster than the default, streamed access method for LOBs. (Streaming is great for huge data streams such as videos.)

The JSON.parse() call converts the JSON string into a JavaScript object so fields can be accessed like 'js.address.city'.

Output is:

Name is: Sally
City is: Melbourne

Code gets trickier if you need to match JSON keys in the query. You need to write your own matching functionality using LOB methods like dbms_lob.instr():

result = await connection.execute(
  'select c from mytab where dbms_lob.instr(c, \'"name":"\' || :cbv ||\'"\') > 0',
  { cbv: 'Sally' },
  { fetchInfo: {"C": {type: oracledb.STRING } }});

if (result.rows.length) {
  js = JSON.parse(result.rows[0]);
  console.log('Name is: ' + js.name);
  console.log('City is: ' + js.address.city);
} else {
  console.log('No rows fetched');
}

You can see this could be slow to execute, error prone to do correctly, and very hard to work with when the JSON is highly nested. But there is a solution . . .

Oracle Database 12c JSON

With Oracle 12.1.0.2 onward you can take advantage of Oracle's JSON functionality. Data is stored as VARCHAR2 or LOB so the node-oracledb code is similar to the naive storage solution above. However, in the database, extensive JSON functionality provides tools for data validation, indexing and matching, for working with GeoJSON, and even for working with relational data. Check the JSON Developer's Guide for more information. You may also be interested in some of the JSON team's blog posts.

To start with, when you create a table, you can specify that a column should be validated so it can contain only JSON:

c CLOB CHECK (c IS JSON)) LOB (c) STORE AS (CACHE)

In this example I also take advantage of Oracle 12c's 'autoincrement' feature called 'identity columns'. This automatically creates a monotonically increasing sequence number for the key. The complete CREATE TABLE statement used for following examples is:

CREATE TABLE myjsontab (k NUMBER GENERATED BY DEFAULT ON NULL AS IDENTITY(START WITH 1), c CLOB CHECK (c IS JSON)) LOB (c) STORE AS (CACHE);

Strictly speaking, since I know my application will insert valid JSON, I could have improved database performance by creating the table without the CHECK (c IS JSON) clause. However, if you don't know where your data is coming from, letting the database do validation is wise.

Inserting a JavaScript object uses the same stringification as in the previous section. Since we don't need to supply a key now, we can use a DML RETURNING clause to get the new key's autoincremented value:

myContent = {name: "Sally", address: {city: "Melbourne"}};
json = JSON.stringify(myContent);

result = await connection.execute(
  'insert into myjsontab (c) values (:cbv) returning k into :kbv',
  { cbv: json,
    kbv: { type: oracledb.NUMBER, dir: oracledb.BIND_OUT } },
  { autoCommit: true} );

console.log('Data key is: ' + result.outBinds.kbv);

This inserts the data and returns the key of the new record. The output is:

Data key is: 1

To extract data by the key, a standard SQL query can be used, identical to the naive CLOB implementation previously shown.

Oracle Database's JSON functionality really comes into play when you need to match attributes of the JSON string. You may even decide not to have a key column. Using Oracle 12.2's 'dotted' query notation you can do things like:

result = await connection.execute(
  'select c from myjsontab t where t.c.name = :cbv',
  { cbv: 'Sally' },
  { fetchInfo: {"C": {type: oracledb.STRING } }});

if (result.rows.length) {
  js = JSON.parse(result.rows[0]);
  console.log('Name is: ' + js.name);
  console.log('City is: ' + js.address.city);
} else {
  console.log('No rows fetched');
}

Output is:

Name is: Sally
City is: Melbourne

(If you use Oracle Database 12.1.0.2, then the dotted notation used in the example needs to be replaced with a path expression, see the JSON manual for the syntax).
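For instance, a sketch of an equivalent 12.1.0.2 query using a path expression with JSON_VALUE (assuming the same MYJSONTAB table) could be:

select c from myjsontab t where json_value(t.c, '$.name') = :cbv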

Other JSON functionality is usable, for example to find any records that have an 'address.city' field:

select c FROM myjsontab where json_exists(c, '$.address.city')

If you have relational tables, Oracle Database 12.2 has a JSON_OBJECT function that is a great way to convert relational table data to JSON:

result = await connection.execute(
  `select json_object('deptId' is d.department_id, 'name' is d.department_name) department
   from departments d
   where department_id < :did`,
  { did: 50 },
  { fetchInfo: {"C": {type: oracledb.STRING } }});

if (result.rows.length) {
  for (var i = 0; i < result.rows.length; i++) {
    console.log("Department: " + result.rows[i][0]);
    js = JSON.parse(result.rows[i][0]);
    console.log('Department Name is: ' + js.name);
  }
} else {
  console.log('No rows fetched');
}

Output is:

Department: {"deptId":10,"name":"Administration"}
Department Name is: Administration
Department: {"deptId":20,"name":"Marketing"}
Department Name is: Marketing
Department: {"deptId":30,"name":"Purchasing"}
Department Name is: Purchasing
Department: {"deptId":40,"name":"Human Resources"}
Department Name is: Human Resources

If you are working with JSON tables that use BLOB storage instead of CLOB, for example:

CREATE TABLE myjsonblobtab (k NUMBER GENERATED BY DEFAULT ON NULL AS IDENTITY(START WITH 1), c BLOB CHECK (c IS JSON)) LOB (c) STORE AS (CACHE);

Then you need to bind a Buffer for insert:

myContent = {name: "Sally", address: {city: "Melbourne"}};
json = JSON.stringify(myContent);
b = Buffer.from(json, 'utf8');

result = await connection.execute(
  'insert into myjsonblobtab (k, c) values (:kbv, :cbv)',
  { kbv: 1, cbv: b },
  { autoCommit: true} );

console.log('Rows inserted: ' + result.rowsAffected);

Querying needs to return a Buffer too:

result = await connection.execute(
  'select c from myjsonblobtab t where t.c.name = :cbv',
  { cbv: 'Sally' },
  { fetchInfo: {"C": {type: oracledb.BUFFER } }});

if (result.rows.length) {
  js = JSON.parse(result.rows[0].toString('utf8'));
  console.log('Name is: ' + js.name);
  console.log('City is: ' + js.address.city);
} else {
  console.log('No rows fetched');
}

A final JSON tip

One final tip is to avoid JSON.parse() if you don't need it. An example is where you need to pass a JSON string to a web service or browser. You may be able to pass the JSON string returned from a query directly. In some cases the JSON string may need its own key, in which case simple string concatenation may be effective. In this example, the Oracle Locator method returns a GeoJSON string:

result = await connection.execute(
  `select sdo_util.to_geojson(
     sdo_geometry(2001, 8307, sdo_point_type(-90, 20, null), null, null)) as c
   from dual`,
  { },   // no binds
  { fetchInfo: {"C": {type: oracledb.STRING } }});

json = '{"geometry":' + result.rows[0][0] + '}';
console.log(json);

The concatenation above avoids the overhead of a parse and re-stringification:

js = JSON.parse(result.rows[0][0]);
jstmp = {geometry: js};
json = JSON.stringify(jstmp);

Summary

The JavaScript JSON.parse() and JSON.stringify() methods make it easy to work with JSON in Node.js and Oracle Database. Combined with node-oracledb's ability to work with LOBs as Node.js Strings, database access is very efficient. Oracle Database 12.1.0.2's JSON features make JSON operations in the database simple. Advances in Oracle Database 12.2 and 18c further improve the functionality and usability.

Resources

Node-oracledb installation instructions are here.

Node-oracledb documentation is here.

Issues and questions about node-oracledb can be posted on GitHub.

The Oracle JSON Developer's Guide is here.

Podcast: Developer Evolution: What's rockin’ roles in IT?

OTN TechBlog - Tue, 2018-08-14 23:00

The good news is that the US Bureau of Labor Statistics predicts 24% growth in software developer jobs through 2026. That’s well above average. The outlook for Database administrators certainly isn’t bleak, but with projected job growth of 11% through 2026, that’s less than half the growth projected for developers. Job growth for System administrators, at 6% through 2026, is considered average by the BLS. So while the news is positive all around, developers certainly have an advantage. Each of these roles has separate and distinct responsibilities. But why is the outlook so much better for developers, and what does this say about what’s happening in the IT ecosystem?

"More than ever," says Oracle Developer Champion Rolando Carrasco, "institutions, organizations, and governments are keen to generate a new crop of developers that can help them to create something new." In today's business climate, competition is tough, and a high premium is placed on innovation. "But developers have a lot of tools, a lot of abilities within reach, and the opportunity to make something that can make a competitive difference."

But the role of the developer is morphing into something new, according to Oracle ACE Director Martin Giffy D'Souza. "In the next couple of years we're also going to see that the typical developer is not going to be the traditional developer that went to school, or the script kiddies that just got into the business. We're going to see what is called the citizen developer. We're going to see a lot more people transition to that simply because it adds value to their job. Those people are starting to hit the limits of writing VBA macros in Excel and they want to write custom apps. I think that's what we're going to see more and more of, because we already know there's a developer job shortage."

But why is the job growth for developers outpacing that for DBAs and SysAdmins? "If you take it at a very high level, devs produce things," Martin says. "They produce value. They produce products. DBAs and IT people are maintainers. They're both important, but the more products and solutions we can create," the more value to the business.

Oracle ACE Director Mark Rittman has spent the last couple of years working as a product manager in a start-up, building a tech platform. "I never saw a DBA there," he admits. "It was at the point that if I were to try to explain what a DBA was to people there, all of whom are uniformly half my age, they wouldn't know what I was talking about. That's because the platforms people use these days, within the Oracle ecosystem or Google or Amazon or whatever, it's all very much cloud, and it's all very much NoOps, and it's very much the things that we used to spend ages worrying about."

This frees developers to do what they do best. "There are far fewer people doing DBA work and SysAdmin work," Mark says. "That’s all now in the cloud. And that also means that developers can also develop now. I remember, as a BI developer working on projects, it was surprising how much of my time was spent just getting the system working in the first place, installing things, configuring things, and so on. Probably 75% of every project was just getting the thing to actually work."

Where some roles may vanish altogether, others will transform. DBAs have become data engineers or infrastructure engineers, according to Mark. "So there are engineers around and there are developers around," he observes, "but I think administrator is a role that, unless you work for one of the big cloud companies in one of those big data centers, is largely kind of managed away now."

Phil Wilkins, an Oracle ACE, has witnessed the changes. DBAs in particular, as well as network people focused on infrastructure, have been dramatically affected by cloud computing, and the ground is still shaking. "With the rise and growth in cloud adoption these days, you're going to see the low-level, hard-core technical skills that the DBAs used to bring being concentrated into the cloud providers, where you're taking a database as a service. They're optimizing the underlying infrastructure, making sure the database is running. But I'm just chucking data at it, so I don't care about whether the storage is running efficiently or not. The other thing is that although developers now get more freedom, and we've got NoSQL and things like that, we're getting more and more computing power, and it's accelerating at such a rate now that, where 10 years ago we used to have to really worry about the tuning and making sure the database was performant, we can now do a lot of that computing on an iPhone. So why are we worrying when we've got huge amounts of cloud and CPU to the bucketload?"

These comments represent just a fraction of the conversation captured in this latest Oracle Developer Community Podcast, in which the panelists dive deep into the forces that are shaping and re-shaping roles, and discuss their own concerns about the trends and technologies that are driving that evolution. Listen!

The Panelists

Rolando Carrasco
Oracle Developer Champion
Oracle ACE
Co-owner, Principal SOA Architect, S&P Solutions
Twitter LinkedIn

Martin Giffy D'Souza
Oracle ACE Director
Director of Innovation, Insum Solutions
Twitter LinkedIn 

Mark Rittman
Oracle ACE Director
Chief Executive Officer, MJR Analytics
Twitter LinkedIn 

Phil Wilkins
Oracle ACE
Senior Consultant, Capgemini
Twitter LinkedIn

Related Oracle Code One Sessions

The Future of Serverless is Now: CI/CD for the Oracle Fn Project, by Rolando Carrasco and Leonardo Gonzalez Cruz [DEV5325]

Other Related Content

Podcast: Are Microservices and APIs Becoming SOA 2.0?

Vibrant and Growing: The Current State of API Management

Video: 2 Minute Integration with Oracle Integration Cloud Service

It's Always Time to Change

Coming Soon

The next program, coming on Sept 5, will feature a discussion of "DevOps to NoOps," featuring panelists Baruch Sadogursky, Davide Fiorentino, Bert Jan Schrijver, and others TBA. Stay tuned!

Subscribe

Never miss an episode! The Oracle Developer Community Podcast is available via:

ODC Latin America Tour : Birmingham to Quito

Tim Hall - Tue, 2018-08-14 17:15

The day started at 03:00, which is a silly time to start the day. I got a taxi to the airport and after a short queue for bag drop, I was waiting for the gate to open.

The first flight from Birmingham to Amsterdam went well and we arrived ahead of schedule. After about 90 minutes I was on the plane for the journey from Amsterdam to Quito. I could see there were some free business class seats, so I asked for the price of the upgrade. At 540 euros I decided to upgrade so I could work and sleep during the 11-12 hour flight.

The first issue was my laptop charger wouldn’t work with the power from the plane. My phone charger was fine, but not the laptop charger. I was worried maybe the charger was broken, so I switched off the computer to save the battery in case I needed to look for a new charger in Quito. I decided to watch movies and sleep instead.

I watched Black Panther and Avengers : Infinity War, both of which were good. I think both were over-hyped, but still enjoyable. My main criticism of the latter would be, more Hulk please!

Towards the end of the last film I started to get a headache, so I took some paracetamol, put my seat flat and slept for a short while. I woke feeling rather nauseous and that’s where the problems started. For the last few hours of the flight I was being sick every few minutes. By the time the flight was over I was in a bit of a bad way. So much so they brought medical staff on to the plane, who were giving me injections to stop the headache and nausea. I got wheeled off in a wheelchair to the medical centre, where I pretty much slept for the next 3 hours.

When I came out Diego and Paola were waiting for me. I did tell the medical centre people there was someone waiting for me, but nobody told the information desk, so the taxi driver left. Luckily my saviours were at hand! They drove me to the hotel and I slept a lot!

Big thanks to the KLM staff and the Quito medical staff for helping me. Also, many thanks to Diego and Paola for getting me from the airport to the hotel. That would have been a tough journey in a taxi.

So it seems like my mission to be sick in every country in the world is still on track.

The Ecuador event starts tomorrow (probably today by the time I publish this). Fingers crossed everything will go well.

Cheers

Tim…

ODC Latin America Tour : Birmingham to Quito was first posted on August 14, 2018 at 11:15 pm.
