Feed aggregator

RMAN - Full vs. Incremental - performance problem

Tom Kyte - Thu, 2019-04-25 22:46
Hi. I am testing RMAN. I have run an incremental 0 and an incremental 1 cumulative test without any database activity in between these tests. I am using OmniBack media manager with RMAN. I have three databases. Here are the timing results ...
Categories: DBA Blogs

Two New “Dive Into Containers and Cloud Native” Podcasts

OTN TechBlog - Thu, 2019-04-25 16:48

Oracle Cloud Native Services cover container orchestration and management, build pipelines, infrastructure as code, streaming and real-time telemetry. Join Kellsey Ruppel and me for two new podcasts about these services.

In the first podcast, you can learn more about three services for containers: Container Engine for Kubernetes, Cloud Infrastructure Registry, and Container Pipelines.

In the second podcast, you can learn more about Resource Manager for infrastructure as code, Streaming for event-based architectures, Monitoring for real-time telemetry, and Notifications for real-time alerts based on infrastructure changes.

You can find these and other podcasts at the Oracle Cloud Café. Please take a few minutes to listen in and share any feedback you may have.

Using Drop Zones to Isolate Customizations

PeopleSoft Technology Blog - Thu, 2019-04-25 13:33

The ability to customize PeopleSoft applications has always been a powerful and popular aspect of PeopleSoft products.  This power and flexibility came at a cost, however. Although customizations are valuable in that they enable customers to meet unique and important requirements that are not part of the standard delivered products, they are difficult and costly to maintain.

PeopleSoft has been working hard to enable customers to continue developing those valuable customizations, but to implement them in a way that isolates them from our delivered products.  This minimizes life cycle impact and allows customers to take new images without having to re-implement customizations each time.  Providing the ability to isolate customizations is a high priority investment for us.  We've developed several features that facilitate the ability to isolate customizations.  The latest is Drop Zones.  Drop Zones became available with PeopleTools 8.57, and customers must be on 8.57 to implement them. 

First let's look at the benefits as well as things you must consider when using Drop Zones:

Benefits
  • Customers can add custom fields and other page elements without life cycle impact
  • You have the full power of PeopleTools within Drop Zones.  You can apply PeopleCode to custom elements
  • Reduces LCM time when taking a new image.  No need to re-implement customizations!
  • Works on Fluid pages
Considerations
  • It's still developer work, some of which is done in App Designer
  • Doesn't reduce implementation time for customization (the benefit is during LCM)
  • Some pages won't work with Drop Zones
  • No support for Classic pages (as of 8.57)
What Does PeopleSoft Deliver?

You can only use drop zones on pages delivered by PeopleSoft applications.  (Don't add your own--that would be a customization.)  Drop Zones will be delivered with the following application images:

  • FSCM 31
  • HCM 30
  • ELM 19
  • CRM 17

Our application teams are delivering drop zones in pages where customizations are most common.  This was determined in consultation with customers.  Typical pages have two drop zones: one at the top, the other at the bottom.  However, there may be cases with more or fewer drop zones.

How Do I Implement Drop Zones?

Move to PeopleTools 8.57 or later. Take Application images that have drop zones.

  1. Review and catalog your page customizations and determine whether they can be moved to Drop Zones.  Compare your list to delivered pages with Drop Zones.  (Lists for all applications are available on peoplesoftinfo.com>Key Concepts>Configuration.)

  2. Create subpages with customizations you want to implement (custom fields, labels, other widgets...)  In the example here, we've created a simple subpage with static text and a read-only Employee ID field.
  3. Insert your custom subpage into the Drop Zone.
  4. Configure your subpage for the component containing the page.  There may be more than one drop zone available, so make sure you choose the one you want.  Subpages are identified by labels on their group boxes.


    Your custom subpage will be dynamically inserted at runtime.  Any fields and other changes on your subpages are loaded into the component buffer along with delivered content.  Your subpages are displayed as if part of the main page definition.  End users will see no difference between custom content and delivered content.

Now this customization is isolated, and will not be affected when you take the next application image.  Your customization will be carried forward, and you don't have to re-implement it every time you take a new image.  These changes will not appear in a Compare Report.

What Can I Do to Prepare for Using Drop Zones?

Even if you are not yet on an application image that contains Drop Zones, you can prepare ahead of time, making implementation faster. 

  1. Review and catalog your page customizations and compare them against the pages with delivered drop zones in the images you will eventually uptake.  
  2. Consider which page customizations you want to implement with Drop Zones.  Prioritize them.
  3. Start building subpages containing those customizations.

When you move to the application images that contain drop zones, you can simply insert the subpages you've created as described above.

See this video for a nice example of isolating a customization with Drop Zones.

B2B Brands Turn to Oracle Customer Experience Cloud to Meet the Evolving Demands of Buyers

Oracle Press Releases - Thu, 2019-04-25 07:00
Blog
B2B Brands Turn to Oracle Customer Experience Cloud to Meet the Evolving Demands of Buyers

By Jeri Kelley, director of product strategy, Oracle Commerce Cloud—Apr 25, 2019

We are living in the experience economy, where a customer’s experience with a brand is inseparable from the value of the goods and services a business provides. You may be thinking that I am speaking about B2C buyers, but the B2B landscape is no different. Buying a complex machine or service should be as convenient as receiving groceries ordered online.

As the expectations of B2B buyers continue to shift, what should businesses do to meet the evolving needs of today’s buyer?

Simply put—businesses need to adapt to the way people want to buy. It’s easier said than done, but it is achievable by connecting data, intelligence and experiences. And that’s exactly what we will be talking about at B2B Online this week in Chicago. Please come visit the Oracle Customer Experience (CX) Cloud team at booth #202 to say hello and learn more about Oracle Commerce Cloud, Oracle CPQ Cloud and Oracle Subscription Management Cloud.

Whether it’s launching a new site, expanding to a new channel, or exploring a new business model, businesses are choosing Oracle CX Cloud to meet the complex and ever-changing needs of B2B organizations. Brands like Motorola Solutions and Construction Specialties are at the forefront of this shift in purchasing behavior and have partnered with Oracle to streamline ecommerce buying experiences, drive unique digital touchpoints and increase business agility. 

Motorola Solutions is a global leader in mission-critical communications that make cities safer and helps communities and businesses thrive. As Motorola Solutions looked to replace a legacy commerce platform with a scalable solution that allows for growth, it turned to Oracle Commerce Cloud and Oracle CPQ Cloud to enable self-service buying to both its B2C and B2B buyers.

“With Oracle Commerce Cloud, business teams can make the updates they need at a rapid pace and our development teams get to leverage the modern architecture of the unified platform for both B2C and B2B sites,” explains John Pirog, senior director of digital development, Motorola Solutions. “Also, it was key that our commerce platform integrated with applications like Oracle CPQ Cloud to allow our customers to quote and purchase configurable products online.”

Construction Specialties, a global manufacturer of specialty building products, looked to move away from highly customized, antiquated technologies that were hindering its ecommerce growth. One of the main goals was to make it easier for its buyers to conduct business with Construction Specialties.

“Now that we are in the cloud with Oracle, including Oracle ERP Cloud and Oracle Commerce Cloud, it is a lot easier to get what we need faster whether it’s an upgrade or an extension,” says Michael Weissberg, digital marketing manager, Construction Specialties. “One example surrounds mobile capabilities. We have seen a shift in how our customers want to buy and our old site was not mobile friendly. Since launching our responsive site on Oracle Commerce Cloud, we now see greater conversion rates on mobile devices that result in higher sales. In fact, we received our highest online sale immediately after our launch.”

Meeting customer expectations is never easy, but having the flexibility to change, adopt and experiment quickly gives businesses the ability to deliver the experiences B2B buyers want at scale. We look forward to seeing you at B2B Online and be sure to attend our session on how B2B companies can succeed in the Experience Economy. If you’re not attending, please visit Oracle CX for more information.

 

Presentation Persuasion: Calls for Proposals for Upcoming Events

OTN TechBlog - Thu, 2019-04-25 05:00

Sure, you've got solid technical chops, and you share your knowledge through your blog, articles, and videos. But if you want to walk it like you talk it, you have to get yourself in front of a live audience and keep them awake for about an hour. If you do it right, who knows? You might just spend the time after your session signing autographs and posing for selfies with your new fans. The first step in accomplishing all that is to respond to calls for proposals for conferences, meet-ups, and other live events like these:

  • AUSOUG Webinar Series 2019
    Ongoing series of webinars hosted by the Australian Oracle User Group. No CFP deadline posted.
     
  • NCOAUG Training Day 2019
    CFP Deadline: May 17, 2019
    North Central Oracle Applications User Group
    Location: Oakbrook Terrace, Ill
    Event: August 1, 2019
     
  • MakeIT Conference
    CFP Deadline: May 17, 2019

    Organized by the Slovenian Oracle User Group (SIOUG).
    Event: October 14-15, 2019
     
  • HrOUG 2019
    CFP Deadline: May 27, 2019
    Organized by the Croatian Oracle Users Group
    Event: October 15-18, 2019
     
  • DOAG 2019 Conference and Exhibition
    CFP Deadline: June 3, 2019
    Organized by the Deutsche Oracle Anwendergruppe (German Oracle Users Group)
    Location: Nürnberg, Germany
    Event: November 19-20, 2019

Good luck!

Oracle Solaris Continuous Delivery

Chris Warticki - Wed, 2019-04-24 21:45
Protect Your Mission-Critical Environments and Resources

Oracle is committed to protecting your investment in Oracle Solaris and is mindful of your mission-critical environments and resources. Since January 2017, Oracle Solaris 11 has followed a Continuous Delivery model[1], delivering new functionality as updates to the existing release. This means you are not required to upgrade to gain access to new features and capabilities. Oracle Support provides support for Oracle Solaris 11 through at least 2031 under Oracle Premier Support and Extended Support through 2034. These dates will be evaluated annually for updates.

With no additional cost, Oracle Solaris 11 customers under Oracle Premier Support can benefit from:

  • New features, bug fixes, and security vulnerability updates through both dot releases and monthly Support Repository Updates (SRUs).
  • Binary Application Guarantee—guaranteed compatibility of your application from one Oracle Solaris release to the next, on premises or in the cloud.
  • 24/7 access via My Oracle Support to:
    • Oracle systems specialists
    • Online knowledge articles
    • Oracle Solaris proactive tools
    • Oracle Solaris communities 

 

In addition, the agile deployment of Oracle Solaris 11 allows customers to take advantage of more frequent, simpler, and less disruptive delivery of new features and capabilities. This helps with faster adoption and eliminates the need for slow and expensive re-qualifications. 

Learn more with these resources:

  • Policies
  • Oracle Solaris 11
  • Oracle Solaris 10
  • General

[1] Support dates are evaluated for update annually, and will be provided through at least the dates listed in the Oracle and Sun System Software: Lifetime Support Policy. See the Oracle Solaris Releases for details.

Solving DBFS UnMounting Issue

Michael Dinh - Wed, 2019-04-24 18:34

Often, I am quite baffled by Oracle’s implementations and documentation.

RAC GoldenGate DBFS implementation has been a nightmare, and here is one example: DBFS Nightmare

I am about to show you another.

In general, I find any implementation using ACTION_SCRIPT is good in theory, bad in practice, but I digress.

Getting ready to shut down CRS for system patching, only to find that CRS failed to shut down.

# crsctl stop crs
CRS-2675: Stop of 'dbfs_mount' on 'host02' failed
CRS-2675: Stop of 'dbfs_mount' on 'host02' failed
CRS-2673: Attempting to stop 'dbfs_mount' on 'host02'
CRS-2675: Stop of 'dbfs_mount' on 'host02' failed
CRS-2673: Attempting to stop 'dbfs_mount' on 'host02'
CRS-2675: Stop of 'dbfs_mount' on 'host02' failed
CRS-2799: Failed to shut down resource 'dbfs_mount' on 'host02'
CRS-2799: Failed to shut down resource 'ora.GG_PROD.dg' on 'host02'
CRS-2799: Failed to shut down resource 'ora.asm' on 'host02'
CRS-2799: Failed to shut down resource 'ora.dbfs.db' on 'host02'
CRS-2799: Failed to shut down resource 'ora.host02.ASM2.asm' on 'host02'
CRS-2794: Shutdown of Cluster Ready Services-managed resources on 'host02' has failed
CRS-2675: Stop of 'ora.crsd' on 'host02' failed
CRS-2799: Failed to shut down resource 'ora.crsd' on 'host02'
CRS-2795: Shutdown of Oracle High Availability Services-managed resources on 'host02' has failed
CRS-4687: Shutdown command has completed with errors.
CRS-4000: Command Stop failed, or completed with errors.

Checking /var/log/messages for errors provides no clue for a resolution.

# grep -i dbfs /var/log/messages
Apr 17 19:42:26 host02 DBFS_/ggdata: unmounting DBFS from /ggdata
Apr 17 19:42:26 host02 DBFS_/ggdata: umounting the filesystem using '/bin/fusermount -u /ggdata'
Apr 17 19:42:26 host02 DBFS_/ggdata: Stop - stopped, but still mounted, error

Apr 17 20:45:59 host02 DBFS_/ggdata: mount-dbfs.sh mounting DBFS at /ggdata from database DBFS
Apr 17 20:45:59 host02 DBFS_/ggdata: /ggdata already mounted, use mount-dbfs.sh stop before attempting to start

Apr 17 21:01:29 host02 DBFS_/ggdata: unmounting DBFS from /ggdata
Apr 17 21:01:29 host02 DBFS_/ggdata: umounting the filesystem using '/bin/fusermount -u /ggdata'
Apr 17 21:01:29 host02 DBFS_/ggdata: Stop - stopped, but still mounted, error
Apr 17 21:01:36 host02 dbfs_client[71957]: OCI_ERROR 3114 - ORA-03114: not connected to ORACLE
Apr 17 21:01:41 host02 dbfs_client[71957]: /FS1/dirdat/ih000247982 Block error RC:-5

Apr 17 21:03:06 host02 DBFS_/ggdata: unmounting DBFS from /ggdata
Apr 17 21:03:06 host02 DBFS_/ggdata: umounting the filesystem using '/bin/fusermount -u /ggdata'
Apr 17 21:03:06 host02 DBFS_/ggdata: Stop - stopped, now not mounted
Apr 17 21:09:19 host02 DBFS_/ggdata: filesystem /ggdata not currently mounted, no need to stop

Apr 17 22:06:16 host02 DBFS_/ggdata: mount-dbfs.sh mounting DBFS at /ggdata from database DBFS
Apr 17 22:06:17 host02 DBFS_/ggdata: ORACLE_SID is DBFS2
Apr 17 22:06:17 host02 DBFS_/ggdata: doing mount /ggdata using SID DBFS2 with wallet now
Apr 17 22:06:18 host02 DBFS_/ggdata: Start -- ONLINE

The messages in the script agent log show the error below (see MOS documentation).
Does anyone know where the script agent log is located?

2019-04-17 20:56:02.793903 :    AGFW:3274315520: {1:53477:37077} Agent received the message: AGENT_HB[Engine] ID 12293:16017523
2019-04-17 20:56:19.124667 :CLSDYNAM:3276416768: [dbfs_mount]{1:53477:37077} [check] Executing action script: /u02/app/12.1.0/grid/crs/script/mount-dbfs.sh[check]
2019-04-17 20:56:19.176927 :CLSDYNAM:3276416768: [dbfs_mount]{1:53477:37077} [check] Checking status now
2019-04-17 20:56:19.176973 :CLSDYNAM:3276416768: [dbfs_mount]{1:53477:37077} [check] Check -- ONLINE
2019-04-17 20:56:32.794287 :    AGFW:3274315520: {1:53477:37077} Agent received the message: AGENT_HB[Engine] ID 12293:16017529
2019-04-17 20:56:43.312534 :    AGFW:3274315520: {2:37893:29307} Agent received the message: RESOURCE_STOP[dbfs_mount host02 1] ID 4099:16017535
2019-04-17 20:56:43.312574 :    AGFW:3274315520: {2:37893:29307} Preparing STOP command for: dbfs_mount host02 1
2019-04-17 20:56:43.312584 :    AGFW:3274315520: {2:37893:29307} dbfs_mount host02 1 state changed from: ONLINE to: STOPPING
2019-04-17 20:56:43.313088 :CLSDYNAM:3276416768: [dbfs_mount]{2:37893:29307} [stop] Executing action script: /u02/app/12.1.0/grid/crs/script/mount-dbfs.sh[stop]
2019-04-17 20:56:43.365201 :CLSDYNAM:3276416768: [dbfs_mount]{2:37893:29307} [stop] unmounting DBFS from /ggdata
2019-04-17 20:56:43.415516 :CLSDYNAM:3276416768: [dbfs_mount]{2:37893:29307} [stop] umounting the filesystem using '/bin/fusermount -u /ggdata'
2019-04-17 20:56:43.415541 :CLSDYNAM:3276416768: [dbfs_mount]{2:37893:29307} [stop] /bin/fusermount: failed to unmount /ggdata: Device or resource busy
2019-04-17 20:56:43.415552 :CLSDYNAM:3276416768: [dbfs_mount]{2:37893:29307} [stop] Stop - stopped, but still mounted, error
2019-04-17 20:56:43.415611 :    AGFW:3276416768: {2:37893:29307} Command: stop for resource: dbfs_mount host02 1 completed with status: FAIL
2019-04-17 20:56:43.415929 :CLSFRAME:3449863744:  TM [MultiThread] is changing desired thread # to 3. Current # is 2
2019-04-17 20:56:43.415970 :CLSDYNAM:3276416768: [dbfs_mount]{2:37893:29307} [check] Executing action script: /u02/app/12.1.0/grid/crs/script/mount-dbfs.sh[check]
2019-04-17 20:56:43.416033 :    AGFW:3274315520: {2:37893:29307} Agent sending reply for: RESOURCE_STOP[dbfs_mount host02 1] ID 4099:16017535
2019-04-17 20:56:43.467939 :CLSDYNAM:3276416768: [dbfs_mount]{2:37893:29307} [check] Checking status now
2019-04-17 20:56:43.467964 :CLSDYNAM:3276416768: [dbfs_mount]{2:37893:29307} [check] Check -- ONLINE

If you have not found where the script agent log is, the ACTION_SCRIPT location can be found using crsctl as shown:

oracle@host02 ~ $ $GRID_HOME/bin/crsctl stat res -w "TYPE = local_resource" -p | grep mount-dbfs.sh
ACTION_SCRIPT=/u02/app/12.1.0/grid/crs/script/mount-dbfs.sh
ACTION_SCRIPT=/u02/app/12.1.0/grid/crs/script/mount-dbfs.sh

Here’s a test case to resolve “failed to unmount /ggdata: Device or resource busy”.

My first thought was to use fuser and kill the process.

# fuser -vmM /ggdata/
                     USER        PID ACCESS COMMAND
/ggdata:             root     kernel mount /ggdata
                     mdinh     85368 ..c.. bash
                     mdinh     86702 ..c.. vim
# 

On second thought, that might not be a good idea; a better idea is to let the script handle this if it can.
Let's see what options are available for mount-dbfs.sh:

oracle@host02 ~ $ /u02/app/12.1.0/grid/crs/script/mount-dbfs.sh -h
Usage: mount-dbfs.sh { start | stop | check | status | restart | clean | abort | version }

oracle@host02 ~ $ /u02/app/12.1.0/grid/crs/script/mount-dbfs.sh version
20160215
oracle@host02 ~ 

Stopping DBFS failed, as expected.

oracle@host02 ~ $ /u02/app/12.1.0/grid/crs/script/mount-dbfs.sh status
Checking status now
Check -- ONLINE

oracle@host02 ~ $ /u02/app/12.1.0/grid/crs/script/mount-dbfs.sh stop
unmounting DBFS from /ggdata
umounting the filesystem using '/bin/fusermount -u /ggdata'
/bin/fusermount: failed to unmount /ggdata: Device or resource busy
Stop - stopped, but still mounted, error
oracle@host02 ~ $

Stop DBFS using the clean option. Notice that the PID killed is 40047, which is not the same as the vim process above (mdinh 86702 ..c.. vim).
Note: not all output is displayed, for brevity.

oracle@host02 ~ $ /bin/bash -x /u02/app/12.1.0/grid/crs/script/mount-dbfs.sh clean
+ msg='cleaning up DBFS nicely using (fusermount -u|umount)'
+ '[' info = info ']'
+ /bin/echo cleaning up DBFS nicely using '(fusermount' '-u|umount)'
cleaning up DBFS nicely using (fusermount -u|umount)
+ /bin/logger -t DBFS_/ggdata -p user.info 'cleaning up DBFS nicely using (fusermount -u|umount)'
+ '[' 1 -eq 1 ']'
+ /bin/fusermount -u /ggdata
/bin/fusermount: failed to unmount /ggdata: Device or resource busy
+ /bin/sleep 1
+ FORCE_CLEANUP=0
+ '[' 0 -gt 1 ']'
+ /u02/app/12.1.0/grid/crs/script/mount-dbfs.sh status
+ '[' 0 -eq 0 ']'
+ FORCE_CLEANUP=1
+ '[' 1 -eq 1 ']'
+ logit error 'tried (fusermount -u|umount), still mounted, now cleaning with (fusermount -u -z|umount -f) and kill'
+ type=error
+ msg='tried (fusermount -u|umount), still mounted, now cleaning with (fusermount -u -z|umount -f) and kill'
+ '[' error = info ']'
+ '[' error = error ']'
+ /bin/echo tried '(fusermount' '-u|umount),' still mounted, now cleaning with '(fusermount' -u '-z|umount' '-f)' and kill
tried (fusermount -u|umount), still mounted, now cleaning with (fusermount -u -z|umount -f) and kill
+ /bin/logger -t DBFS_/ggdata -p user.error 'tried (fusermount -u|umount), still mounted, now cleaning with (fusermount -u -z|umount -f) and kill'
+ '[' 1 -eq 1 ']'
================================================================================
+ /bin/fusermount -u -z /ggdata
================================================================================
+ '[' 1 -eq 1 ']'
++ /bin/ps -ef
++ /bin/grep -w /ggdata
++ /bin/grep dbfs_client
++ /bin/grep -v grep
++ /bin/awk '{print $2}'
+ PIDS=40047
+ '[' -n 40047 ']'
================================================================================
+ /bin/kill -9 40047
================================================================================
++ /bin/ps -ef
++ /bin/grep -w /ggdata
++ /bin/grep mount.dbfs
++ /bin/grep -v grep
++ /bin/awk '{print $2}'
+ PIDS=
+ '[' -n '' ']'
+ exit 1

oracle@host02 ~ $ /u02/app/12.1.0/grid/crs/script/mount-dbfs.sh status
Checking status now
Check -- OFFLINE

oracle@host02 ~ $ /u02/app/12.1.0/grid/crs/script/mount-dbfs.sh status
Checking status now
Check -- OFFLINE

oracle@host02 ~ $ df -h|grep /ggdata
/dev/asm/acfs_vol-177                                  299G  2.8G  297G   1% /ggdata1
dbfs-@DBFS:/                                            60G  1.4G   59G   3% /ggdata

oracle@host02 ~ $ /u02/app/12.1.0/grid/crs/script/mount-dbfs.sh status
Checking status now
Check -- ONLINE
oracle@host02 ~ $ 

What PID was killed when running mount-dbfs.sh clean?
It belongs to dbfs_client.

mdinh@host02 ~ $ ps -ef|grep dbfs
oracle   34865     1  0 Mar29 ?        00:02:43 oracle+ASM1_asmb_dbfs1 (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))
oracle   40047     1  0 Apr22 ?        00:00:10 /u01/app/oracle/product/12.1.0/db_1/bin/dbfs_client /@DBFS -o allow_other,direct_io,wallet /ggdata
oracle   40081     1  0 Apr22 ?        00:00:27 oracle+ASM1_user40069_dbfs1 (DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))
mdinh    88748 87565  0 13:30 pts/1    00:00:00 grep --color=auto dbfs
mdinh@host02 ~ $ 

It would have been so much better for mount-dbfs.sh to provide this info as part of the kill, versus having the user debug the script and trace the process.

If you have read this far, then it's only fair to provide the location of the script agent log.

$ grep mount-dbfs.sh $ORACLE_BASE/diag/crs/$(hostname -s)/crs/trace/crsd_scriptagent_oracle.trc | grep "2019-04-17 20:5"

Oracle Powers Full BIM Model Coordination for Design and Construction Teams

Oracle Press Releases - Wed, 2019-04-24 17:07
Press Release
Oracle Powers Full BIM Model Coordination for Design and Construction Teams Aconex cloud solution connects all project participants in model management process to eliminate complexity and speed project success

Future of Projects, Washington D.C.—Apr 24, 2019

Building information modeling (BIM) is an increasingly important component of construction project delivery, but is currently limited by a lack of collaboration, reliance on multiple applications, and missing integrations. The Oracle Aconex Model Coordination Cloud Service eliminates these challenges by enabling construction design and project professionals to collaboratively manage BIM models across the entire project team in a true common data environment (CDE). As such, organizations can reduce the risk of errors and accelerate project success by ensuring each team member has access to accurate, up-to-date models. 

The BIM methodology uses 3D, 4D and 5D modeling, in coordination with a number of tools and technologies, to provide digital representations of the physical and functional characteristics of places.

“Issues with model management means projects go over budget, run over schedule, and end up with a higher total cost of ownership for the client. As part of the early access program for Oracle Aconex Model Coordination, it was great to experience how Oracle has solved these challenges,” said Davide Gatti, digital manager, Multiplex.

Single Source of Truth for Project Data

With Oracle Aconex Model Coordination, organizations can eliminate the need for various point solutions in favor of project-wide BIM participation that drives productivity with faster processes and cycle times, enables a single source of truth for project information, and delivers a fully connected data set at handover for asset operation.

The Model Coordination solution enhances Oracle Aconex’s existing CDE capabilities, which are built around Open BIM standards (e.g., IFC 4 and BCF 2.1) and leverage a cloud-based, full model server to enable efficient, secure, and comprehensive model management at all stages of the project lifecycle.

The Oracle Aconex CDE, which is based on ISO 19650 and DIN SPEC 91391 definitions, provides industry-leading neutrality, security, and data interoperability. By enabling model management in this environment, Oracle Aconex unlocks new levels of visibility, coordination, and productivity across people and processes, including enabling comprehensive model-based issue and clash management.    

Key features of the new solution include: 

  • Seamless clash and design issue management and resolution
  • Dashboard overview and reporting
  • Creation of viewpoints – e.g. personal “bookmarks” within models and the linking of documents to objects
  • Integrated measurements
  • Process support and a full audit trail with the supply chain

“With Oracle Aconex Model Coordination, we’re making the whole model management process as seamless and easy as possible. By integrating authoring and validation applications to the cloud, users don’t need to upload and download their issues and clashes anymore,” said Frank Weiss, director of new products, BIM and innovation at Oracle Construction and Engineering.

“There’s so much noise and confusion around BIM and CDEs, much of it driven by misinformation in the market about what each term means. We believe everybody on a BIM project should work with the best available tool for their discipline. Therefore, open formats are critical for interoperability, and the use of a true CDE is key to efficient and effective model management.”

For more information on the Model Coordination solution, please visit https://www.oracle.com/industries/construction-engineering/aconex-products.html.

Contact Info
Judi Palmer
Oracle
+1.650.784.7901
judi.palmer@oracle.com
Brent Curry
H+K Strategies
+1.312.255.3086
brent.curry@hkstrategies.com
About Oracle Construction and Engineering

Asset owners and project leaders rely on Oracle Construction and Engineering solutions for the visibility and control, connected supply chain, and data security needed to drive performance and mitigate risk across their processes, projects, and organization. Our scalable cloud solutions enable digital transformation for teams that plan, build, and operate critical assets, improving efficiency, collaboration, and change control across the project lifecycle. www.oracle.com/construction-and-engineering.

About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle, please visit us at oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Judi Palmer

  • +1.650.784.7901

Brent Curry

  • +1.312.255.3086

Subtract time from a constant time

Tom Kyte - Wed, 2019-04-24 10:06
Hello there, I want to create a trigger that will insert a Time difference value into a table Example: I have attendance table Sign_in date; Sign_out date; Late_in number; Early_out number; Now I want to create a trigger that will insert la...
Categories: DBA Blogs

Clob column in RDBMS(oracle) table with key value pairs

Tom Kyte - Wed, 2019-04-24 10:06
In our product in recent changes, oracle tables are added with clob column having key value pairs in xml/json format with new columns. <b>example of employee:(Please ignore usage of parenthesis) </b> 100,Adam,{{"key": "dept", "value": "Marketi...
Categories: DBA Blogs

current value of sequence

Tom Kyte - Wed, 2019-04-24 10:06
Hi. Simple question :-) Is it possible to check current value of sequence? I thought it is stored in SEQ$ but that is not true (at least in 11g). So is it now possible at all? Regards
Categories: DBA Blogs

Build it Yourself — Chatbot API with Keras/TensorFlow Model

Andrejus Baranovski - Wed, 2019-04-24 08:59
It is not as complex as you may think to build your own chatbot (or assistant; this word is a new trendy term for chatbot). Various chatbot platforms use classification models to recognize user intent. While you obviously get a strong heads-up when building a chatbot on top of an existing platform, it never hurts to study the background concepts and try to build one yourself, using a similar model. The main challenges of a chatbot implementation are:
  1. Classify user input to recognize intent (this can be solved with Machine Learning, I’m using Keras with TensorFlow backend)
  2. Keep context. This part is programming and there is not much ML involved here. I’m using Node.js backend logic to track conversation context (while in context, we typically don’t require a classification for user intents — user input is treated as answers to chatbot questions)
Complete source code for this article with readme instructions is available on my GitHub repo (open source).

These are the Python libraries used in the implementation. The Keras deep learning library is used to build the classification model; Keras runs training on top of the TensorFlow backend. The Lancaster stemming library is used to collapse distinct word forms:
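A minimal sketch of the imports such an implementation typically needs (the exact requirements list is not reproduced in this excerpt; package names below assume Keras with the TensorFlow backend and NLTK's Lancaster stemmer are installed):

import json
import pickle
import random

import numpy as np
import nltk
from nltk.stem.lancaster import LancasterStemmer

from keras.models import Sequential
from keras.layers import Dense, Dropout
from keras.optimizers import SGD

# nltk.download('punkt')  # required once so nltk.word_tokenize works
stemmer = LancasterStemmer()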


Chatbot intents and the patterns to learn are defined in a plain JSON file. There is no need for a huge vocabulary; our goal is to build a chatbot for a specific domain. A classification model can be created for a small vocabulary too, and it will be able to recognize the set of patterns provided for training:
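A hypothetical fragment of such an intents definition, shown here inline as a Python dict for brevity (the tags, patterns and responses are made up for illustration; the original keeps this in a separate JSON file):

intents = {
    "intents": [
        {"tag": "greeting",
         "patterns": ["Hi", "Hello", "Is anyone there?", "Good day"],
         "responses": ["Hello, thanks for visiting"]},
        {"tag": "goodbye",
         "patterns": ["Bye", "See you later", "Goodbye"],
         "responses": ["See you later, thanks for visiting"]},
        {"tag": "hours",
         "patterns": ["What hours are you open?", "When are you open?"],
         "responses": ["We're open every day 9am-9pm"]}
    ]
}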


Before we can start training the classification model, we need to build the vocabulary. Patterns are processed to build it, and each word is stemmed to produce a generic root; this helps cover more combinations of user input:
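A sketch of the vocabulary-building step under the same assumptions: tokenize each pattern, stem every word, and collect the distinct intent classes.

words = []
classes = []
documents = []
ignore_words = ['?']

for intent in intents['intents']:
    for pattern in intent['patterns']:
        # tokenize each word in the pattern
        w = nltk.word_tokenize(pattern)
        words.extend(w)
        # each (tokenized pattern, intent tag) pair becomes a training document
        documents.append((w, intent['tag']))
        if intent['tag'] not in classes:
            classes.append(intent['tag'])

# stem each word down to its generic root and remove duplicates
words = sorted(set(stemmer.stem(w.lower()) for w in words if w not in ignore_words))
classes = sorted(classes)

print(len(documents), "documents")
print(len(classes), "classes", classes)
print(len(words), "unique stemmed words", words)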


For the full intents file used in the post, the output of vocabulary creation shows 9 intents (classes) and 82 vocabulary words.


Training cannot run directly on the vocabulary of words; words are meaningless to the machine. We need to translate the words into bags of words, arrays containing 0/1. The array length equals the vocabulary size, and 1 is set when a word from the current pattern is located in the given position:
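Continuing the sketch, the documents are converted into training arrays: a 0/1 bag of words for X and a one-hot intents array for Y.

training = []
output_empty = [0] * len(classes)

for doc in documents:
    # bag of words: 1 where a stemmed vocabulary word occurs in the pattern
    pattern_words = [stemmer.stem(w.lower()) for w in doc[0]]
    bag = [1 if w in pattern_words else 0 for w in words]
    # output row: a single 1 at the position of the pattern's intent
    output_row = list(output_empty)
    output_row[classes.index(doc[1])] = 1
    training.append((bag, output_row))

random.shuffle(training)
train_x = [row[0] for row in training]
train_y = [row[1] for row in training]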


Training data — X (each pattern converted into an array [0,1,0,1…, 0]) and Y (each intent converted into an array [1, 0, 0, 0,…,0]; there is a single 1 in each intents array). The model is built with Keras, based on three layers. According to my experiments, three layers provide good results (but it all depends on the training data). The classification output is a multiclass array, which helps to identify the encoded intent. Softmax activation is used to produce the multiclass classification output (the result returns an array like [1,0,0,…,0], which identifies the encoded intent):
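A sketch of such a three-layer Keras model with a softmax output; the layer sizes here are illustrative assumptions, not values taken from the original post.

model = Sequential()
model.add(Dense(128, input_shape=(len(train_x[0]),), activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(64, activation='relu'))
model.add(Dropout(0.5))
# one output neuron per intent; softmax produces the multiclass probabilities
model.add(Dense(len(train_y[0]), activation='softmax'))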


Compile the Keras model with the SGD optimizer:
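For example (the optimizer settings below are common defaults rather than values from the post; newer Keras versions spell lr as learning_rate and drop the decay argument):

sgd = SGD(lr=0.01, decay=1e-6, momentum=0.9, nesterov=True)
model.compile(loss='categorical_crossentropy', optimizer=sgd, metrics=['accuracy'])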


Fit the model — execute training and construct the classification model. I’m executing training for 200 iterations, with batch size = 5:
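In Keras terms, that step might look like this:

model.fit(np.array(train_x), np.array(train_y), epochs=200, batch_size=5, verbose=1)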


The model is built. Now we can define two helper functions. The bow function translates a user sentence into a bag of words, an array of 0/1:
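A sketch of the two helpers, assuming the vocabulary list words built above; clean_up_sentence tokenizes and stems the input, and bow maps it onto the vocabulary.

def clean_up_sentence(sentence):
    # tokenize and stem the user input the same way the vocabulary was built
    sentence_words = nltk.word_tokenize(sentence)
    return [stemmer.stem(w.lower()) for w in sentence_words]

def bow(sentence, words, show_details=False):
    # translate a sentence into a 0/1 array aligned with the vocabulary
    sentence_words = clean_up_sentence(sentence)
    bag = [0] * len(words)
    for s in sentence_words:
        for i, w in enumerate(words):
            if w == s:
                bag[i] = 1
                if show_details:
                    print("found in bag: %s" % w)
    return np.array(bag)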


Check this example — translating the sentence into a bag of words:
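For instance, using the hypothetical intents defined earlier:

p = bow("Hi, is anyone there?", words, show_details=True)
print(p)
# prints something like [0 0 1 ... 1 0] -- a 1 for each vocabulary word found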


When the function finds a word from the sentence in the chatbot vocabulary, it sets 1 in the corresponding position of the array. This array is then sent to the model for classification, to identify which intent it belongs to.


It is good practice to save the trained model and its supporting data so they can be reused when publishing through the Flask REST API:
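A sketch of that step; here the vocabulary and training arrays go into a pickle file and the Keras model is saved separately (the file names are arbitrary assumptions):

pickle.dump({"words": words, "classes": classes,
             "train_x": train_x, "train_y": train_y},
            open("training_data.pkl", "wb"))
model.save("chatbot_model.h5")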


Before publishing the model through the Flask REST API, it is always good to run an extra test. Use the model.predict function to classify user input and, based on the calculated probability, return the intent (multiple intents can be returned):
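A sketch of such a classification function; the 0.25 probability threshold is an assumption and should be tuned for your own data.

ERROR_THRESHOLD = 0.25

def classify(sentence):
    # run the bag-of-words array through the model and keep every intent
    # whose predicted probability exceeds the threshold
    results = model.predict(np.array([bow(sentence, words)]))[0]
    results = [(i, r) for i, r in enumerate(results) if r > ERROR_THRESHOLD]
    results.sort(key=lambda x: x[1], reverse=True)
    return [(classes[i], float(r)) for i, r in results]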


Example of classifying a sentence:
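Again using the hypothetical intents from above:

print(classify("Is anyone there?"))
# e.g. [('greeting', 0.97)] -- the probability value is illustrative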


The intent is calculated correctly.


To publish the same function through a REST endpoint, we can wrap it in a Flask API:
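A minimal sketch of such a wrapper (the route name, payload shape, and port are assumptions):

from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/classify", methods=["POST"])
def classify_endpoint():
    # expects a JSON payload such as {"sentence": "Is anyone there?"}
    sentence = request.get_json(force=True).get("sentence", "")
    return jsonify(classify(sentence))

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)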


I have explained how to implement the classification part. In the GitHub repo referenced at the beginning of the post, you will find a complete example of how to maintain the context. Context is maintained by logic written in JavaScript and running on the Node.js backend. The context flow must be defined in the list of intents: as soon as the intent is classified and the backend logic finds the start of a context, we enter a loop and ask related questions. How advanced the context handling is depends entirely on the backend implementation (this is beyond the Machine Learning scope at this stage).

Chatbot UI:

Oracle Linux certified under Common Criteria and FIPS 140-2

Oracle Security Team - Wed, 2019-04-24 08:40

Oracle Linux 7 has just received both a Common Criteria (CC) Certification, which was performed against the National Information Assurance Partnership (NIAP) General Purpose Operating System Protection Profile (OSPP) v4.1, and a FIPS 140-2 validation of its cryptographic modules.  Oracle Linux is currently one of only two operating systems – and the only Linux distribution – on the NIAP Product Compliant List. 

U.S. Federal procurement policy requires IT products sold to the Department of Defense (DoD) to be on this list; therefore, Federal cloud customers who select Oracle Cloud Infrastructure can now opt for a NIAP CC-certified operating system that also includes FIPS 140-2 validated cryptographic modules, by making Oracle Linux 7 the platform for their cloud services solution.

Common Criteria Certification for Oracle Linux 7

The National Information Assurance Partnership (NIAP) is “responsible for U.S. implementation of the Common Criteria, including management of the NIAP Common Criteria Evaluation and Validation Scheme (CCEVS) validation body.”(See About NIAP at https://www.niap-ccevs.org/ )

The Operating Systems Protection Profile (OSPP) series are the only NIAP-approved Protection Profiles for operating systems. “A Protection Profile is an implementation-independent set of security requirements and test activities for a particular technology that enables achievable, repeatable, and testable (CC) evaluations.”  They are intended to “accurately describe the security functionality of the systems being certified in terms of [CC] and to define functional and assurance requirements for such products.”  In other words, the OSPP enables organizations to make an accurate comparison of operating systems security functions. (For both quotations, see NIAP Frequently Asked Questions (FAQ) at https://www.niap-ccevs.org/Ref/FAQ.cfm)

In addition, products that certify against these Protection Profiles can also help you meet certain US government procurement rules.  As set forth in the Committee on National Security Systems Policy (CNSSP) #11, National Policy Governing the Acquisition of Information Assurance (IA) and IA-Enabled Information Technology Products (published in June 2013), “All [common off-the-shelf] COTS IA and IA-enabled IT products acquired for use to protect information on NSS shall comply with the requirements of the NIAP program in accordance with NSA-approved processes.”  

Oracle Linux is now the only Linux distribution on the NIAP Product Compliant List.  It is one of only two operating systems on the list.

You may recall that Linux distributions (including Oracle Linux) have previously completed Common Criteria evaluations (mostly against a German standard protection profile); however, these evaluations are now limited because they are only officially recognized in Germany and within the European SOG-IS agreement. Furthermore, the revised Common Criteria Recognition Arrangement (CCRA) announcement on the CCRA News Page from September 8th 2014, states that “After September 8th 2017, mutually recognized certificates will either require protection profile-based evaluations or claim conformance to evaluation assurance levels 1 through 2 in accordance with the new CCRA.”  That means evaluations conducted within the CCRA acceptance rules, such as the Oracle Linux 7.3 evaluation, are globally recognized in the 30 countries that have signed the CCRA. As a result, Oracle Linux 7.3 is the only Linux distribution that meets current US procurement rules.

It is important to recognize that the exact status of the certifications of operating systems under the NIAP OSPP has significant implications for the use of cloud services by U.S. government agencies.  The Federal Risk and Authorization Management Program (FedRAMP) website states that it is a “government-wide program that provides a standardized approach to security assessment, authorization, and continuous monitoring for cloud products and services.” For both FedRamp Moderate and High, the SA-4 Guidance states “The use of Common Criteria (ISO/IEC 15408) evaluated products is strongly preferred.”

FIPS 140-2 Level 1 Validation for Oracle Linux 6 and 7

In addition to the Common Criteria Certification, Oracle Linux cryptographic modules are also now FIPS 140-2 validated. FIPS 140-2 is a prerequisite for NIAP Common Criteria evaluations. “All cryptography in the TOE for which NIST provides validation testing of FIPS-approved and NIST-recommended cryptographic algorithms and their individual components must be NIST validated (CAVP and/or CMVP). At a minimum an appropriate NIST CAVP certificate is required before a NIAP CC Certificate will be awarded.” (See NIAP Policy Letter #5, June 25, 2018 at https://www.niap-ccevs.org/Documents_and_Guidance/ccevs/policy-ltr-5-update3.pdf )

FIPS is also a mandatory standard for all cryptographic modules used by the US government. “This standard is applicable to all Federal agencies that use cryptographic-based security systems to protect sensitive information in computer and telecommunication systems (including voice systems) as defined in Section 5131 of the Information Technology Management Reform Act of 1996, Public Law 104-106.” (See Cryptographic Module Validation Program; What Is The Applicability Of CMVP To The US Government? at https://csrc.nist.gov/projects/cryptographic-module-validation-program ).

Finally, FIPS is required for any cryptography that is a part of a FedRamp certified cloud service. “For data flows crossing the authorization boundary or anywhere else encryption is required, FIPS 140 compliant/validated cryptography must be employed. FIPS 140 compliant/validated products will have certificate numbers. These certificate numbers will be required to be identified in the SSP as a demonstration of this capability. JAB TRs will not authorize a cloud service that does not have this capability.” (See FedRamp Tips & Cues Compilation, January 2018, at https://www.fedramp.gov/assets/resources/documents/FedRAMP_Tips_and_Cues.pdf ).

Oracle includes FIPS 140-2 Level 1 validated cryptography into Oracle Linux 6 and Oracle Linux 7 on x86-64 systems with the Unbreakable Enterprise Kernel and the Red Hat Compatible Kernel. The platforms used for FIPS 140 validation testing include Oracle Server X6-2 and Oracle Server X7-2, running Oracle Linux 6.9 and 7.3. Oracle “vendor affirms” that the FIPS validation is maintained on other x86-64 equivalent hardware that has been qualified in its Oracle Linux Hardware Certification List (HCL), on the corresponding Oracle Linux releases.

Oracle Linux cryptographic modules enable FIPS 140-compliant operations for key use cases such as data protection and integrity, remote administration (SSH, HTTPS TLS, SNMP, and IPSEC), cryptographic key generation, and key/certificate management.

Federal cloud customers who select Oracle Cloud Infrastructure can now opt for a NIAP CC-certified operating system (that also includes FIPS 140-2 validated cryptographic modules) by making Oracle Linux 7 the bedrock of their cloud services solution.

Oracle Linux is engineered for open cloud infrastructure. It delivers leading performance, scalability, reliability, and security for enterprise SaaS and PaaS workloads as well as traditional enterprise applications. Oracle Linux Support offers access to award-winning Oracle support resources and Linux support specialists, zero-downtime updates using Ksplice, additional management tools such as Oracle Enterprise Manager and lifetime support, all at a low cost.

For a matrix of Oracle security evaluations currently in progress as well as those completed, please refer to the Oracle Security Evaluations.

Visit Oracle Linux Security to learn how Oracle Linux can help keep your systems secure and improve the speed and stability of your operations.

 

Partner Webcast – Continuous Integration & Delivery on Oracle SOA Cloud Service

Oracle SOA Cloud Service provides an integration platform as a service (iPaaS) so that you can quickly provision your new platform, start developing and deploying your APIs and integration projects...

We share our skills to maximize your revenue!
Categories: DBA Blogs

University of California San Diego to Streamline Finance with Oracle ERP Cloud

Oracle Press Releases - Wed, 2019-04-24 07:00
Press Release
University of California San Diego to Streamline Finance with Oracle ERP Cloud Cutting-edge research institution boosts efficiency, insights and decision making with modern cloud platform

Redwood Shores, Calif.—Apr 24, 2019

Photo Credit: UC San Diego Publications

The University of California San Diego, one of the country’s top research institutions, is replacing its legacy financial management system with Oracle Enterprise Resource Planning (ERP) Cloud. Oracle ERP Cloud will enable UC San Diego to increase overall productivity and bolster decision making with real-time business insights and easily incorporate emerging technologies going forward.

Established in 1960, UC San Diego is one of the world’s most reputable and innovative research universities with more than 36,000 students and nearly $5 billion in annual revenue. To increase efficiencies and improve decision-making, UC San Diego needed to replace its existing financial management system—which had become complex and expensive to update—with a secure, scalable and configurable business platform that could reduce redundant business processes and enable different campus systems to share financial data securely. After a nine-month competitive bid process, which included participation from more than 100 stakeholders and subject matter experts, the university selected Oracle ERP Cloud.

“Making sense of the data in our heavily-customized legacy ERP system was creating headaches for our finance team and required significant levels of technical support. It wasn’t sustainable long term and was holding us back,” said William McCarroll, Senior Director, Business & Financial Services General Accounting at UC San Diego. “We anticipate that Oracle ERP Cloud will give us better access to innovative new technology, without painful software upgrades, and improve our finance team’s overall efficiency. Ultimately, we are implementing these system changes to keep UC San Diego moving forward as we nurture the next generation of changemakers.”

Oracle ERP Cloud is designed to allow organizations like UC San Diego to increase productivity, lower costs and improve controls. One benefit of Oracle ERP Cloud is that it enables the university to move from overnight (and weekend) batch data processing to real-time business insights that significantly speed up month-end and year-end closing for the finance team. In addition, Oracle ERP Cloud will provide UC San Diego with tools to embrace finance best practices and more easily access and deploy emerging technologies to support changing organizational demands.

“We are seeing many higher education institutions leveraging our modern business applications to create efficiencies while reducing cost and dramatically impacting productivity,” said Rondy Ng, senior vice president of applications development at Oracle. “With Oracle ERP Cloud, UC San Diego will be able to empower its finance team to play a strategic role in the university’s success. We look forward to partnering with UC San Diego as it embraces new innovations.”

Contact Info
Bill Rundle
Oracle
+1.650.506.1891
bill.rundle@oracle.com
About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

About UC San Diego

At the University of California San Diego, we constantly push boundaries and challenge expectations. Established in 1960, UC San Diego has been shaped by exceptional scholars who aren’t afraid to take risks and redefine conventional wisdom. Today, as one of the top 15 research universities in the world, we are driving innovation and change to advance society, propel economic growth and make our world a better place. Learn more at www.ucsd.edu.

Additional Information

For additional information on Oracle ERP Cloud applications, visit Oracle Enterprise Resource Planning (ERP) Cloud’s Facebook and Twitter or the Modern Finance Leader blog.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Bill Rundle

  • +1.650.506.1891

Bloom Filter Efficiency And Cardinality Estimates

Randolf Geist - Tue, 2019-04-23 18:45
I've recently come across an interesting observation I've not seen documented yet, so I'm publishing a simple example here to demonstrate the issue.

In principle it looks like the efficiency of Bloom Filter operations is dependent on the cardinality estimates. This means that, in particular, cardinality under-estimates by the optimizer can make a dramatic difference to how efficiently a corresponding Bloom Filter operation based on such an estimate will work at runtime. Since Bloom Filters are crucial for efficient processing, in particular when using Exadata or the In-Memory column store, this can have a significant impact on the performance of affected operations.

While other operations based on SQL workareas, like hash joins for example, can be affected by such cardinality mis-estimates too, these seem to be capable of adapting at runtime - at least to a certain degree. However, I haven't seen such adaptive behaviour from Bloom Filter operations at runtime (not even when executing the same statement multiple times, with statistics feedback not kicking in).

To demonstrate the issue I'll create two simple tables that get joined and one of them gets a filter applied:

create table t1 parallel 4 nologging compress
as
with
  generator1 as
  (
    select /*+
             cardinality(1e3)
             materialize
           */
            rownum as id
          , rpad('x', 100) as filler
    from
            dual
    connect by
            level <= 1e3
  ),
  generator2 as
  (
    select /*+
             cardinality(1e4)
             materialize
           */
            rownum as id
          , rpad('x', 100) as filler
    from
            dual
    connect by
            level <= 1e4
  )
select
        id
      , id as id2
      , rpad('x', 100) as filler
from (
        select /*+ leading(b a) */
                (a.id - 1) * 1e4 + b.id as id
        from
                generator1 a
              , generator2 b
     )
;

alter table t1 noparallel;

create table t2 parallel 4 nologging compress as select * from t1;

alter table t2 noparallel;

All I did here is create two tables with 10 million rows each, and I'll look at the runtime statistics of the following query:

select /*+ no_merge(x) */ * from (
  select /*+
           leading(t1)
           use_hash(t2)
           px_join_filter(t2)
           opt_estimate(table t1 rows=1)
           --opt_estimate(table t1 rows=250000)
           monitor
         */
          t1.id
        , t2.id2
  from
          t1
        , t2
  where
          mod(t1.id2, 40) = 0
          -- t1.id2 between 1 and 250000
  and     t1.id = t2.id
) x
where rownum > 1;

Note: If you try to reproduce make sure you get actually a Bloom Filter operation - in an unpatched version 12.1.0.2 I had to add a PARALLEL(2) hint to actually get the Bloom Filter operation.

The query filters on T1 so that 250K rows will be returned and then joins to T2. The first interesting observation regarding the efficiency of the Bloom Filter is that the actual data pattern makes a significant difference: When using the commented filter "T1.ID2 BETWEEN 1 and 250000" the resulting cardinality will be the same as when using "MOD(T1.ID2, 40) = 0", but the former will result in perfect filtering by the Bloom Filter regardless of the OPT_ESTIMATE hint used, whereas with the latter the efficiency will be dramatically different.

This is what I get when using version 18.3 (12.1.0.2 showed very similar results) and forcing the under-estimate using the OPT_ESTIMATE ROWS=1 hint - the output is from my XPLAN_ASH script and edited for brevity:

------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Execs | A-Rows | PGA |
------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | 1 | 0 | |
| 1 | COUNT | | | | 1 | 0 | |
|* 2 | FILTER | | | | 1 | 0 | |
| 3 | VIEW | | 1 | 26 | 1 | 250K | |
|* 4 | HASH JOIN | | 1 | 24 | 1 | 250K | 12556K |
| 5 | JOIN FILTER CREATE| :BF0000 | 1 | 12 | 1 | 250K | |
|* 6 | TABLE ACCESS FULL| T1 | 1 | 12 | 1 | 250K | |
| 7 | JOIN FILTER USE | :BF0000 | 10M| 114M| 1 | 10000K | |
|* 8 | TABLE ACCESS FULL| T2 | 10M| 114M| 1 | 10000K | |
------------------------------------------------------------------------------------

The Bloom Filter didn't help much; only a few rows were actually filtered (otherwise my XPLAN_ASH script would have shown "10M" as the actual cardinality instead of "10000K", which is something slightly less than 10M rounded up).

Repeat the same but this time using the OPT_ESTIMATE ROWS=250000 hint:

-------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes |TempSpc| Execs | A-Rows| PGA |
-------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | | 1 | 0 | |
| 1 | COUNT | | | | | 1 | 0 | |
|* 2 | FILTER | | | | | 1 | 0 | |
| 3 | VIEW | | 252K| 6402K| | 1 | 250K | |
|* 4 | HASH JOIN | | 252K| 5909K| 5864K| 1 | 250K | 12877K |
| 5 | JOIN FILTER CREATE| :BF0000 | 250K| 2929K| | 1 | 250K | |
|* 6 | TABLE ACCESS FULL| T1 | 250K| 2929K| | 1 | 250K | |
| 7 | JOIN FILTER USE | :BF0000 | 10M| 114M| | 1 | 815K | |
|* 8 | TABLE ACCESS FULL| T2 | 10M| 114M| | 1 | 815K | |
-------------------------------------------------------------------------------------------

So we end up with exactly the same execution plan but the efficiency of the Bloom Filter at runtime has changed dramatically due to the different cardinality estimate the Bloom Filter is based on.

I haven't spent much time yet with the corresponding undocumented parameters that might influence the Bloom Filter behaviour, but I repeated the same test using the following setting in the session (while ensuring an adequate PGA_AGGREGATE_TARGET setting, otherwise the hash join might start spilling to disk, which means the Bloom Filter size is considered when calculating SQL workarea sizes):

alter session set "_bloom_filter_size" = 1000000;

I got the following result:

-----------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Execs | A-Rows| PGA |
-----------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | 1 | 0 | |
| 1 | COUNT | | | | 1 | 0 | |
|* 2 | FILTER | | | | 1 | 0 | |
| 3 | VIEW | | 1 | 26 | 1 | 250K | |
|* 4 | HASH JOIN | | 1 | 24 | 1 | 250K | 12568K |
| 5 | JOIN FILTER CREATE| :BF0000 | 1 | 12 | 1 | 250K | |
|* 6 | TABLE ACCESS FULL| T1 | 1 | 12 | 1 | 250K | |
| 7 | JOIN FILTER USE | :BF0000 | 10M| 114M| 1 | 815K | |
|* 8 | TABLE ACCESS FULL| T2 | 10M| 114M| 1 | 815K | |
-----------------------------------------------------------------------------------

which shows a slightly increased PGA usage compared to the first output but the same efficiency as when having the better cardinality estimate in place.

Increasing the size further, however, I couldn't convince Oracle to make the Bloom Filter even more efficient, even when the better cardinality estimate was in place.

Summary

Obviously the efficiency / internal sizing of the Bloom Filter vector at runtime depends on the cardinality estimates of the optimizer. Depending on the actual data pattern this can make a significant difference in terms of efficiency. Yet another reason why having good cardinality estimates is a good thing and yet sometimes so hard to achieve, in particular for join cardinalities.

Footnote

On MyOracleSupport I've found the following note regarding Bloom Filter efficiency:

Bug 8932139 - Bloom filtering efficiency is inversely proportional to DOP (Doc ID 8932139.8)

Another interesting behaviour - the bug is only fixed in version 19.1, but the fix is also included in the latest RU(R)s of 18c and 12.2 from January 2019 on.

Chinar Aliyev's Blog

Randolf Geist - Tue, 2019-04-23 17:04
Chinar Aliyev has recently started to pick up on several of my blog posts regarding Parallel Execution and the corresponding new features introduced in Oracle 12c.

It is good to see that Oracle has obviously improved some of these since then and added new ones as well.

Here are some links to the corresponding posts:

New automatic Parallel Outer Join Null Handling in 18c

Improvements regarding automatic parallel distribution skew handling in 18c

Chinar has also put some more thoughts on the HASH JOIN BUFFERED operation:

New thoughts about the HASH JOIN BUFFERED operation

There are also a number of posts on his blog regarding histograms and in particular how to properly calculate the join cardinality in the presence of additional filters and resulting skew, which is a very interesting topic and yet to be handled properly by the optimizer even in the latest versions.

Parse Calls

Jonathan Lewis - Tue, 2019-04-23 12:31

When dealing with the library cache / shared pool it's always worth checking from time to time to see if a new version of Oracle has changed any of the statistics you rely on as indicators of potential problems. Today is also (coincidentally) a day when comments about "parses" and "parse calls" entered my field of vision from two different directions. I've tweeted out references to a couple of quirky little posts I did some years ago about counting parse calls and what a parse call may entail, but I thought I'd finish the day off with a little demo of what the session cursor cache does for you when your client code issues parse calls.

There are two bits of information I want to highlight – activity in the library cache and a number that shows up in the session statistics. Here's the code to get things going:

rem
rem     Script:         12c_session_cursor_cache.sql
rem     Author:         Jonathan Lewis
rem     Dated:          Apr 2019
rem
rem     Note:
rem     start_1.sql contains the one line
rem          select * from t1 where n1 = 0;
rem

create table t1 
as
select 99 n1 from dual
;

execute dbms_stats.gather_table_stats(user,'t1')

spool 12c_session_cursor_cache

prompt  =======================
prompt  No session cursor cache
prompt  =======================

alter session set session_cached_cursors = 0;

set serveroutput off
set feedback off

execute snap_libcache.start_snap
execute snap_my_stats.start_snap

execute snap_libcache.start_snap
execute snap_my_stats.start_snap

@start_1000

set feedback on
set serveroutput on

execute snap_my_stats.end_snap
execute snap_libcache.end_snap


prompt  ============================
prompt  Session cursor cache enabled
prompt  ============================


alter session set session_cached_cursors = 50;

set serveroutput off
set feedback off

execute snap_libcache.start_snap
execute snap_my_stats.start_snap

execute snap_libcache.start_snap
execute snap_my_stats.start_snap

@start_1000

set feedback on
set serveroutput on

execute snap_my_stats.end_snap
execute snap_libcache.end_snap

spool off

I’ve made use of a couple of little utilities I wrote years ago to take snapshots of my session statistics and the library cache (v$librarycache) stats. I’ve also used my “repetition” framework to execute a basic query 1,000 times. The statement is a simple “select from t1 where n1 = 0”, chosen to return no rows.

The purpose of the whole script is to show you the effect of running exactly the same SQL statement many times – first with the session cursor cache disabled (session_cached_cursors = 0) then with the cache enabled at its default size.

Here are some results from an instance of 12.2.0.1 – which I’ve edited down by eliminating most of the single-digit numbers.

=======================
No session cursor cache
=======================
---------------------------------
Session stats - 23-Apr 17:41:06
Interval:-  4 seconds
---------------------------------
Name                                                                         Value
----                                                                         -----
Requests to/from client                                                      1,002
opened cursors cumulative                                                    1,034
user calls                                                                   2,005
session logical reads                                                        9,115
non-idle wait count                                                          1,014
session uga memory                                                          65,488
db block gets                                                                2,007
db block gets from cache                                                     2,007
db block gets from cache (fastpath)                                          2,006
consistent gets                                                              7,108
consistent gets from cache                                                   7,108
consistent gets pin                                                          7,061
consistent gets pin (fastpath)                                               7,061
logical read bytes from cache                                           74,670,080
calls to kcmgcs                                                              5,005
calls to get snapshot scn: kcmgss                                            1,039
no work - consistent read gets                                               1,060
table scans (short tables)                                                   1,000
table scan rows gotten                                                       1,000
table scan disk non-IMC rows gotten                                          1,000
table scan blocks gotten                                                     1,000
buffer is pinned count                                                       2,000
buffer is not pinned count                                                   2,091
parse count (total)                                                          1,035
parse count (hard)                                                               8
execute count                                                                1,033
bytes sent via SQL*Net to client                                           338,878
bytes received via SQL*Net from client                                     380,923
SQL*Net roundtrips to/from client                                            1,003

PL/SQL procedure successfully completed.

---------------------------------
Library Cache - 23-Apr 17:41:06
Interval:-      4 seconds
---------------------------------
Type      Cache                           Gets        Hits Ratio        Pins        Hits Ratio   Invalid    Reload
----      -----                           ----        ---- -----        ----        ---- -----   -------    ------
NAMESPACE SQL AREA                       1,040       1,032   1.0       1,089       1,073   1.0         0         1
NAMESPACE TABLE/PROCEDURE                   17          16    .9         101          97   1.0         0         0
NAMESPACE BODY                               9           9   1.0          26          26   1.0         0         0
NAMESPACE SCHEDULER GLOBAL ATTRIBU          40          40   1.0          40          40   1.0         0         0

PL/SQL procedure successfully completed.

The thing to notice, of course, is the large number of statistics that are (close to) multiples of 1,000 – i.e. the number of executions of the SQL statement. In particular you can see the ~1,000 “parse count (total)” which is not reflected in the “parse count (hard)” because the statement only needed to be loaded into the library cache and optimized once.

The other notable statistics come from the library cache where we do 1,000 gets and pins on the “SQL AREA” – the “get” creates a “KGL Lock” (the “breakable parse lock”) that is made visible as an entry in v$open_cursor (x$kgllk), and the “pin” creates a “KGL Pin” that makes it impossible for anything to flush the child cursor from memory while we’re executing it.

So what changes when we enable the session cursor cache:


============================
Session cursor cache enabled
============================

Session altered.

---------------------------------
Session stats - 23-Apr 17:41:09
Interval:-  3 seconds
---------------------------------
Name                                                                         Value
----                                                                         -----
Requests to/from client                                                      1,002
opened cursors cumulative                                                    1,004
user calls                                                                   2,005
session logical reads                                                        9,003
non-idle wait count                                                          1,013
db block gets                                                                2,000
db block gets from cache                                                     2,000
db block gets from cache (fastpath)                                          2,000
consistent gets                                                              7,003
consistent gets from cache                                                   7,003
consistent gets pin                                                          7,000
consistent gets pin (fastpath)                                               7,000
logical read bytes from cache                                           73,752,576
calls to kcmgcs                                                              5,002
calls to get snapshot scn: kcmgss                                            1,002
no work - consistent read gets                                               1,000
table scans (short tables)                                                   1,000
table scan rows gotten                                                       1,000
table scan disk non-IMC rows gotten                                          1,000
table scan blocks gotten                                                     1,000
session cursor cache hits                                                    1,000
session cursor cache count                                                       3
buffer is pinned count                                                       2,000
buffer is not pinned count                                                   2,002
parse count (total)                                                          1,002
execute count                                                                1,003
bytes sent via SQL*Net to client                                           338,878
bytes received via SQL*Net from client                                     380,923
SQL*Net roundtrips to/from client                                            1,003

PL/SQL procedure successfully completed.

---------------------------------
Library Cache - 23-Apr 17:41:09
Interval:-      3 seconds
---------------------------------
Type      Cache                           Gets        Hits Ratio        Pins        Hits Ratio   Invalid    Reload
----      -----                           ----        ---- -----        ----        ---- -----   -------    ------
NAMESPACE SQL AREA                           5           5   1.0       1,014       1,014   1.0         0         0
NAMESPACE TABLE/PROCEDURE                    7           7   1.0          31          31   1.0         0         0
NAMESPACE BODY                               6           6   1.0          19          19   1.0         0         0

PL/SQL procedure successfully completed.

The first thing to note is that “parse count (total)” still shows up as 1,000 parse calls. However we also see the statistic “session cursor cache hits” at 1,000. Allowing for a little noise around the edges, virtually every parse call has turned into a short-cut that takes us through the session cursor cache directly to the correct cursor.

This difference shows up in the library cache activity where we still see 1,000 pins – we have to pin the cursor to execute it – but we no longer see 1,000 “gets”. In the absence of the session cursor cache the session has to keep searching for the statement, then creating and holding a KGL Lock while we execute the statement – but when the cache is enabled the session will very rapidly recognise that the statement is one we are likely to re-use, so it will continue to hold the KGL lock after we have finished executing the statement and record the location of that lock in a session state object. After the first couple of executions of the statement we no longer have to search for the statement and attach a spare lock to it; we can simply navigate from our session state object to the cursor.

As before, the KGL Lock will show up in v$open_cursor – though this time it will not disappear between executions of the statement. Over the history of Oracle versions the contents of v$open_cursor have become increasingly helpful, so I’ll just show you what the view held for my session by the end of the test:


SQL> select cursor_type, sql_text from V$open_cursor where sid = 250 order by cursor_type, sql_text;

CURSOR_TYPE                                                      SQL_TEXT
---------------------------------------------------------------- ------------------------------------------------------------
DICTIONARY LOOKUP CURSOR CACHED                                  BEGIN DBMS_OUTPUT.DISABLE; END;
DICTIONARY LOOKUP CURSOR CACHED                                  BEGIN snap_libcache.end_snap; END;
DICTIONARY LOOKUP CURSOR CACHED                                  BEGIN snap_my_stats.end_snap; END;
DICTIONARY LOOKUP CURSOR CACHED                                  SELECT DECODE('A','A','1','2') FROM SYS.DUAL
OPEN                                                             begin         dbms_application_info.set_module(
OPEN                                                             table_1_ff_2eb_0_0_0
OPEN-RECURSIVE                                                    SELECT VALUE$ FROM SYS.PROPS$ WHERE NAME = 'OGG_TRIGGER_OPT
OPEN-RECURSIVE                                                   select STAGING_LOG_OBJ# from sys.syncref$_table_info where t
OPEN-RECURSIVE                                                   update user$ set spare6=DECODE(to_char(:2, 'YYYY-MM-DD'), '0
PL/SQL CURSOR CACHED                                             SELECT INDX, KGLSTTYP LTYPE, KGLSTDSC NAME, KGLSTGET GETS, K
PL/SQL CURSOR CACHED                                             SELECT STATISTIC#, NAME, VALUE FROM V$MY_STATS WHERE VALUE !
SESSION CURSOR CACHED                                            BEGIN DBMS_OUTPUT.ENABLE(1000000); END;
SESSION CURSOR CACHED                                            BEGIN DBMS_OUTPUT.GET_LINES(:LINES, :NUMLINES); END;
SESSION CURSOR CACHED                                            BEGIN snap_libcache.start_snap; END;
SESSION CURSOR CACHED                                            BEGIN snap_my_stats.start_snap; END;
SESSION CURSOR CACHED                                            select * from t1 where n1 = 0
SESSION CURSOR CACHED                                            select /*+ no_parallel */ spare4 from sys.optstat_hist_contr

17 rows selected.

The only one of specific interest is the penultimate one in the output – its type is “SESSION CURSOR CACHED” and we can recognise our “select from t1” statement.

Deploying A Micronaut Microservice To The Cloud

OTN TechBlog - Tue, 2019-04-23 10:17

So you've finally done it. You created a shiny new microservice. You've written tests that pass, ran it locally and everything works great. Now it's time to deploy and you're ready to jump to the cloud. That may seem intimidating, but honestly there's no need to worry. Deploying your Micronaut application to the Oracle Cloud is really quite easy and there are several options to choose from. In this post I'll show you a few of those options, and by the time you're done reading you'll be ready to get your app up and running.

If you haven't yet created an application, feel free to check out my last post and use that code to create a simple app that uses GORM to interact with an Oracle ATP instance. Once you've created your Micronaut application you'll need to create a runnable JAR file. For this post I'll assume you followed that example, and any assets I refer to will reflect that assumption. With Micronaut, creating a runnable JAR is as easy as running ./gradlew assemble or ./mvnw package (depending on which build automation tool your project uses). Creating the artifact will take a bit longer than you're probably used to if you haven't used Micronaut before. That's because Micronaut precompiles all necessary metadata for Dependency Injection so that it can minimize runtime reflection to obtain that metadata. Once the task completes you will have a runnable JAR file in the build/libs directory of your project. You can launch your application locally by running java -jar /path/to/your.jar. So to launch the JAR created from the previous blog post, I set some environment variables and run:
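The gist embedded in the original post isn't reproduced in this copy, so here is a minimal sketch of that launch. The variable names assume Micronaut's convention of mapping environment variables such as DATASOURCES_DEFAULT_USERNAME onto datasource configuration properties; the wallet path, credentials and JAR name are hypothetical:

# hypothetical values -- adjust to match your application.yml and build output
export TNS_ADMIN=/path/to/wallet
export DATASOURCES_DEFAULT_USERNAME=mnuser
export DATASOURCES_DEFAULT_PASSWORD=YourStrongPassword1
java -jar build/libs/your-app-0.1.jar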

Which results in the application running locally:

So far, pretty easy. But we want to do more than launch a JAR file locally. We want to run it in the cloud, so let's see what that takes. The first method I want to look at is more of a "traditional" approach: launching a simple compute instance and deploying the JAR file.

Creating A Virtual Network

If this is your first time creating a compute instance you'll need to set up virtual networking.  If you have a network ready to go, skip down to "Creating An Instance" below. 

Your instance needs to be associated with a virtual network in the Oracle Cloud. Virtual cloud networks (hereafter referred to as VCNs) can be pretty complicated, but as a developer you need to know enough about them to make sure that your app is secure and accessible from the internet. To get started creating a VCN, either click "Create a virtual cloud network" from the dashboard:

Or select "Networking" -> "Virtual Cloud Networks" from the sidebar menu and then click "Create Virtual Cloud Network" on the VCN overview page:

In the "Create Virtual Cloud Network" dialog, populate a name and choose the option "Create Virtual Cloud Network Plus Related Resources" and click "Create Virtual Cloud Network" at the bottom of the dialog:

The "related resources" here refers to the necessary Internet Gateways, Route Table, Subnets and related Security Lists for the network. The security list by default will allow SSH, but not much else, so we'll edit that once the VCN is created.  When everything is complete, you'll receive confirmation:

Close the dialog and back on the VCN overview page, click on the name of the new VCN to view details:

On the details page for the VCN, choose a subnet and click on the Security List to view it:

On the Security List details page, click on "Edit All Rules":

And add a new rule that will expose port 8080 (the port that our Micronaut application will run on) to the internet:

Make sure to save the rules and close out. This VCN is now ready to be associated with an instance running our Micronaut application.

Creating An Instance

To get started with an Oracle Cloud compute instance log in to the cloud dashboard and either select "Create a VM instance":

Or choose "Compute" -> "Instances" from the sidebar and click "Create Instance" on the Instance overview page:

In the "Create Instance" dialog you'll need to populate a few values and make some selections. It seems like a long form, but there aren't many changes necessary from the default values for our simple use case. The first part of the form requires us to name the instance, select an Availability Domain, OS and instance type:

 

The next section asks for the instance shape and boot volume configuration, both of which I leave as the default. At this point I select a public key that I can use later on to SSH in to the machine:

Finally, select a VCN that is internet accessible with port 8080 open:

Click "Create" and you'll be taken to the instance details page where you'll notice the instance in a "Provisioning" state.  Once the instance has been provisioned, take note of the public IP address:

Deploying Your Application To The New Instance

Using the instance public IP address, SSH in via the private key associated with the public key used to create the instance:
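The terminal screenshot from the original post isn't included here; the connection is just a standard SSH command (the key path and IP address below are placeholders, and opc is the default user on Oracle Linux images):

ssh -i ~/.ssh/oci_instance_key opc@<instance-public-ip>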

We're almost ready to deploy our application; we just need a few things. First, we need a JDK. I like to use SDKMAN for that, so I first install SDKMAN, then use it to install the JDK with sdk install java 8.0.212-zulu and confirm the installation:
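The screenshots from the original post aren't reproduced here; a sketch of those steps, using SDKMAN's documented install script and the JDK version mentioned above, looks like this:

# install SDKMAN, load it into the current shell, then install and verify the JDK
curl -s "https://get.sdkman.io" | bash
source "$HOME/.sdkman/bin/sdkman-init.sh"
sdk install java 8.0.212-zulu
java -version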

We'll also need to open port 8080 on the instance firewall so that our instance will allow the traffic:
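The exact commands live in the original post's gist; assuming the default Oracle Linux image, which uses firewalld, opening the port looks roughly like this:

# allow inbound traffic on 8080 and reload the firewall rules
sudo firewall-cmd --permanent --zone=public --add-port=8080/tcp
sudo firewall-cmd --reload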

We can now upload our application to the instance with SCP:
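The scp commands aren't shown in this copy; a sketch, with hypothetical file names and a placeholder IP, might be:

# copy the runnable JAR and helper scripts, then the ATP wallet directory
scp -i ~/.ssh/oci_instance_key build/libs/your-app-0.1.jar env.sh run.sh opc@<instance-public-ip>:~
scp -i ~/.ssh/oci_instance_key -r /path/to/wallet opc@<instance-public-ip>:~/wallet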

I've copied the JAR file, my Oracle ATP wallet and 2 simple scripts to help me out. The first script sets some environment variables:
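The script itself isn't reproduced here; a minimal sketch, assuming the same hypothetical variable names as the local launch above and the /wallet location used in the next step, would be:

#!/bin/bash
# env.sh -- hypothetical values; match whatever your application.yml expects
export TNS_ADMIN=/wallet
export DATASOURCES_DEFAULT_USERNAME=mnuser
export DATASOURCES_DEFAULT_PASSWORD=YourStrongPassword1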

The second script is what we'll use to launch the application:
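Again a sketch rather than the original gist (the JAR name is hypothetical; Micronaut listens on port 8080 by default):

#!/bin/bash
# run.sh -- launch the Micronaut application
java -jar ~/your-app-0.1.jar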

Next, move the wallet directory from the user home directory to the root with sudo mv wallet/ /wallet and source the environment variables with . ./env.sh. Now run the application with ./run.sh:

And hit the public IP in your browser to confirm the app is running and returning data as expected!
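You can also check from the command line; the endpoint path below is purely illustrative, so substitute whichever route your controller actually exposes:

# quick sanity check against the running app (hypothetical endpoint)
curl -i http://<instance-public-ip>:8080/your/endpoint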

You've just deployed your Micronaut application to the Oracle Cloud! Of course, a manual VM install is just one deployment method and isn't very maintainable long term for many applications, so in future posts we'll look at some other deployment options that fit into the modern application development cycle.

