Feed aggregator

Oracle Ushers in New Era of Analytics

Oracle Press Releases - Tue, 2019-06-25 12:15
Press Release
Oracle Ushers in New Era of Analytics Oracle announces new vision, new experience and new era of augmented analytics to automate insights

Redwood Shores, Calif.—Jun 25, 2019

Today, Oracle unveiled a new, customer-centric vision for Oracle Analytics at the company’s Analytics Summit. With Oracle’s industry-leading data platform and business applications, Oracle Analytics is uniquely positioned to marry data, analytics and applications, and address the needs of business users, analysts and IT. Oracle Analytics empowers customers with industry-leading AI-powered self-service analytic capabilities for data preparation, visualization, enterprise reporting, augmented analysis, and natural language processing (NLP). 

Key Highlights
  • One Offering: Oracle Analytics. Simplified product offering and clarity of direction by rationalizing 18+ products down to a single brand.
  • Powered by the Autonomous Data Warehouse and Machine Learning: Demonstrating the industry’s leading application analytics built on the Autonomous Data Warehouse and powered by Oracle Analytics Cloud.
  • Enabling Broad Enterprise Adoption: Affordable per user pricing for departmental business users plus per-CPU pricing for broad enterprise scale.
 

“We are committed to helping our customers get the most value from their data and to delivering the best analytics experience,” said T.K. Anand, senior vice president, AI, Data Analytics and Cloud, Oracle. “Today, we are announcing a new vision, product experience, and commitment to customer success that will enable us to collaborate with our entire ecosystem and deliver a new era of enterprise analytics.”

“Our clients are seeking next generation analytical solutions that are built with the enterprise in mind. Today, executives have access to more volumes of data than ever before, but what they really need are industrial strength platforms that can turn all that data into information to drive insights across their organization at different levels,” said Richard Solari, managing director, Deloitte Consulting LLP, and global Oracle analytics and cognitive leader. “Deloitte is committed to creating value for organizations enabled by the Oracle Analytics Cloud. Together, we bridge the gap between data and information and help leaders reach impactful business decisions using Oracle’s next generation analytics platforms and applications.”

Oracle’s analytic capabilities are available in the cloud via Oracle Analytics Cloud, on premises via Oracle Analytics Server, and within applications via Oracle Analytics for Oracle Cloud Applications. These solutions leverage Oracle’s existing analytics capabilities and add new features, including augmented analytics and NLP, which are embedded throughout the platform. In addition, Oracle Analytics now offers an integrated user experience across self-service data discovery and reporting and dashboards, delivering effortless access to insights that can be consumed in the cloud, on the desktop, and mobile.

Oracle Analytics Cloud

Built first for the cloud, Oracle Analytics Cloud is the centerpiece of Oracle Analytics. Oracle Analytics Cloud empowers business users with governed self-service analytic capabilities for data preparation, visualization, augmented analysis, and natural language processing. Oracle Analytics Cloud’s governed self-service experience enables Oracle Analytics users at enterprises around the world to drive faster insights and optimize business results.

“We love analytics, we love BI, and we love the fact that Oracle is putting all of this R&D into the cloud, and we want to benefit from that,” said Bill Roy, senior director, EPM and BI, Western Digital. “We see the cloud as enabling our internal customers to develop their own content and to be self-serving. That’s really where we see the benefit of using Oracle Analytics Cloud.”

“In business today, disruption is constant, causing organizations an array of unprecedented challenges. To succeed and potentially excel in this environment, leaders must exploit data to unlock valuable insights and drive better decisions”, said Todd Randolph, principal, Technology Enablement Practice, KPMG and US Oracle Analytics Leader. “With these new, simplified and powerful Oracle analytics offerings, we believe our clients will continue to adopt our Oracle Analytics Cloud-enabled solutions to support sustainable change through performance insights to create lasting value.”

Oracle Analytics Server

Oracle Analytics Server will comprise all of Oracle’s on-premises BI offerings, delivering competitive value to thousands of existing customers, as well as enabling customers in highly regulated industries or with multi-cloud architectures to experience the latest analytic capabilities on their own terms while ensuring an easy path to the cloud.

“We needed a solution. We went out to the marketplace and the best solution was chosen,” said John Cronin, group CIO, An Post. “Oracle Analytics for An Post has made a huge impact not only for ourselves and our ease of access to information but for our common customers as well. The future is all about analytics, artificial intelligence around analytics, and advanced analytics.”

“Our clients across all industries have realized the importance of data and analytics for decades. What is different now is their expectations on how analytics will be a key enabler to guide their business strategies. With advancements in technical capabilities such as artificial intelligence, machine learning, big data platforms and visualizations, our clients are demanding more out of their analytics investments,” said Hema Kadali, partner, Data and Analytics Leader, PwC. “Leveraging Oracle Analytics, we are helping our clients execute on industry-specific use cases that allow them to innovate, automate and transform their business operations with actionable insights that drive real business outcomes.”

Oracle Analytics for Oracle Cloud Applications

Oracle Analytics for Oracle Cloud Applications will be built on Oracle Analytics Cloud and powered by Oracle Autonomous Data Warehouse, bringing personalized application analytics, benchmarks and machine learning-powered predictive insights to business users, functions and processes.

Contact Info
Carolin Bachmann
Oracle
+1.650.506.1352
carolin.bachmann@oracle.com
About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Safe Harbor

The preceding is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release, timing, and pricing of any features or functionality described for Oracle’s products may change and remains at the sole discretion of Oracle Corporation.

Please see www.deloitte.com/us/about for a detailed description of Deloitte’s legal structure.

Talk to a Press Contact

Carolin Bachmann

  • +1.650.506.1352

Oracle Recognized as a Leader in Gartner Magic Quadrant for Warehouse Management Systems

Oracle Press Releases - Tue, 2019-06-25 09:00
Press Release
Oracle Recognized as a Leader in Gartner Magic Quadrant for Warehouse Management Systems Oracle named a Leader based on completeness of vision and ability to execute

Redwood Shores, Calif.—Jun 25, 2019

Oracle has been named a Leader in Gartner’s 2019 “Magic Quadrant for Warehouse Management Systems”[1] report for the fourth consecutive year. Of the 14 products evaluated, Oracle Warehouse Management (WMS) Cloud is positioned as a Leader based on its ability to execute and completeness of vision.

According to Gartner, “Leaders combine the uppermost characteristics of vision and thought leadership with a strong consistent Ability to Execute. Leaders in the WMS market are present in a high percentage of new WMS deals, and they win a significant number of them. They have robust core WMSs and offer reasonable — although not necessarily leading-edge — capabilities in extended WMS areas, such as labor management, work planning and optimization, slotting, returns management, yard management and dock scheduling, and value-added services. To be a Leader, a vendor doesn’t necessarily need to have the absolute broadest or deepest WMS application. Its offerings must meet most mainstream warehousing requirements in complex warehouses without significant modifications, and a substantial number of high-quality implementations must be available to validate this. Leaders must anticipate where customer demands, markets and technology are moving, and must have strategies to support these emerging requirements ahead of actual customer demand. Leading vendors should have coherent strategies to support SCE convergence, and must invest in and have processes to exploit innovation. Leaders also have robust market momentum, market penetration and market awareness as well as strong client satisfaction — in the vendor’s local markets as well as internationally. Because Leaders are often well-established in leading-edge and complex user environments, they benefit from a user community that helps them remain in the forefront of emerging needs. Key characteristics: Reasonably broad and deep WMS offerings; Proven success in moderate- to high-complexity warehouse environments; Participation in a high percentage of new deals; Large customer installed base; A strong and consistent track record; Consistent performance, and vigorous new client growth and retention; Enduring visibility in the marketplace from both sales and marketing perspectives; Compelling SCE convergence strategy and capabilities; A proven ecosystem of partners; Global scale.”

“Supply chains have changed dramatically in the last five years as businesses have evolved to meet more demanding customer expectations. We now expect to be able to buy on multiple channels, have our orders delivered faster, and receive or return products from anywhere,” said Diego Pantoja-Navajas, vice president, WMS Cloud Development, Oracle. “The leading warehouse management solution built on a modern cloud architecture, Oracle WMS Cloud enables customers to benefit from new innovations in machine learning, blockchain and IoT to meet and exceed customer expectations. We believe this report is a validation of our product strengths, investment in innovation, and customer successes.”

Oracle’s suite of supply chain cloud applications has garnered industry recognition. Oracle was named a Leader in Gartner’s recent “Magic Quadrant for Supply Chain Planning System of Record”[2] and was recognized in the “Magic Quadrant for Transportation Management Systems”[3].

[1] Gartner, Magic Quadrant for Warehouse Management Systems, C. Klappich, Simon Tunstall, 8 May 2019
[2] Gartner, Magic Quadrant for Supply Chain Planning System of Record, Amber Salley, Tim Payne, Alex Pradhan, 21 August 2018
[3] Gartner, Magic Quadrant for Transportation Management Systems, Bart De Muynck, Brock Johns, Oscar Sanchez Duran, 27 March 2019

Gartner Disclaimer
Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner's research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

Additional Information

For additional information on Oracle Supply Chain Management (SCM) Cloud, visit Facebook, Twitter or the Oracle SCM blog.

Contact Info
Bill Rundle
Oracle
+1.650.506.1891
bill.rundle@oracle.com
About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Bill Rundle

  • +1.650.506.1891

SQLcl ALIAS – because you can’t remember everything.

The Anti-Kyte - Tue, 2019-06-25 08:47

I want to find out which file is going to hold any trace information generated by my database session. Unfortunately, I keep forgetting the query that I need to run to find out.
Fortunately I’m using SQLcl, which includes the ALIAS command.
What follows is a quick run-through of this command including :

  • listing the aliases that are already set up in SQLcl
  • displaying the code that an alias will execute
  • creating your own alias interactively
  • deleting an alias
  • using files to manage custom aliases

Whilst I’m at it, I’ll create the alias for the code to find that pesky trace file too.

In the examples that follow, I’m connected to an Oracle 18c XE PDB using SQLcl 18.4 from my Ubuntu 16.04 LTS laptop via the Oracle Thin Client. Oh, and the Java details are : [screenshot of the java -version output omitted]

Meet the ALIAS command

As so often in SQLcl, it’s probably a good idea to start with the help :

help alias

…which explains that :

“Alias is a command which allows you to save a sql, plsql or sqlplus script and assign it a shortcut command.”

A number of aliases are already included in SQLcl. To get a list of them simply type :

alias

…which returns :

locks
sessions
tables
tables2

If we want to see the code that will be run when an alias is invoked, we simply need to list the alias :

alias list tables

tables - tables <schema> - show tables from schema
--------------------------------------------------

select table_name "TABLES" from user_tables

Connected as HR, I can run the alias to return a list of tables that I own in the database :
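The output would be something like this (indicative only – it assumes the standard HR sample schema) :

tables

TABLES
------
COUNTRIES
DEPARTMENTS
EMPLOYEES
JOBS
JOB_HISTORY
LOCATIONS
REGIONS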

Creating an ALIAS

To create an alias of my own, I simply need to specify the alias name and the statement I want to associate it with. For example, to create an alias called whoami :

alias whoami =
select sys_context('userenv', 'session_user')
from dual;

I can now confirm that the alias has been created :

alias list whoami
whoami
------

select sys_context('userenv', 'session_user')
from dual

…and run it…
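Indicatively (assuming a connection as HR) :

whoami

SYS_CONTEXT('USERENV','SESSION_USER')
-------------------------------------
HR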

I think I want to tidy up that column heading. I could do this by adding a column alias in the query itself. However, alias does support the use of SQL*Plus commands…

alias whoami =
column session_user format a30
select sys_context('userenv', 'session_user') session_user
from dual;

…which can make the output look slightly more elegant :
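Again, indicatively :

whoami

SESSION_USER
------------------------------
HR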

A point to note here is that, whilst it is possible to include SQL*Plus statements in an alias for a PL/SQL block (well, sort of)…

alias whoami=set serverout on
exec dbms_output.put_line(sys_context('userenv', 'session_user'));

…when the alias starts with a SQL*Plus statement, it will terminate at the first semi-colon…

Where you do have a PL/SQL alias that contains multiple statement terminators (‘;’) you will need to run any SQL*Plus commands required prior to invoking it.
Of course, if you find setting output on to be a bit onerous, you can save valuable typing molecules by simply running :

alias output_on = set serverout on size unlimited

I can also add a description to my alias so that there is some documentation when it’s listed :

alias desc whoami The current session user

When I now list the alias, the description is included…more-or-less…
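Indicatively, the listing now looks something like this :

alias list whoami

whoami - desc whoami The current session user
---------------------------------------------

column session_user format a30
select sys_context('userenv', 'session_user') session_user
from dual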

I’m not sure if the inclusion of the text desc whoami is simply a quirk of the version and OS that I’m running on. In any case, we’ll come to a workaround for this minor annoyance in due course.

In the meantime, I’ve decided that I don’t need this alias anymore. To remove it, I simply need to run the alias drop command :

alias drop whoami


At this point, I know enough about the alias command to implement my first version of the session tracefile alias that started all this.
The query that I keep forgetting is :

select value
from v$diag_info
where name = 'Default Trace File'
/

To create the new alias :

alias tracefile =
select value "Session Trace File"
from v$diag_info
where name = 'Default Trace File';

I’ll also add a comment at this point :

alias desc tracefile The full path and filename on the database server of the tracefile for this session

My new alias looks like this :
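Based on the listing format we saw earlier, something like this (indicative output) :

alias list tracefile

tracefile - desc tracefile The full path and filename on the database server of the tracefile for this session
---------------------------------------------------------------------------------------------------------------

select value "Session Trace File"
from v$diag_info
where name = 'Default Trace File'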

The aliases.xml file

Unlike the pre-supplied aliases, the code for any alias you create will be held in a file called aliases.xml.

On Windows, this file will probably be somewhere under your OS user’s AppData directory.
On Ubuntu, it’s in $HOME/.sqlcl

With no custom aliases defined the file looks like this :

<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<aliases/>

Note that, even though I have now defined a custom alias, it won’t be included in this file until I end the SQLcl session in which it was created.

Once I disconnect from this session, the file includes the new alias definition :

<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<aliases>
<alias name="tracefile">
<description><![CDATA[desc tracefile The full path and filename on the database server of the tracefile for this session
]]></description>
<queries>
<query>
<sql><![CDATA[select value "Session Trace File"
from v$diag_info
where name = 'Default Trace File']]></sql>
</query>
</queries>
</alias>
</aliases>

Incidentally, if you’ve played around with SQLDeveloper extensions, you may find this file structure rather familiar.

The file appears to be read by SQLcl once on startup. Therefore, before I run SQLcl again, I can tweak the description of my alias to remove the extraneous text…

<description><![CDATA[The full path and filename on the database server of the tracefile for this session]]></description>

Sure enough, next time I start an SQLcl session, this change is now reflected in the alias definition :

Loading an alias from a file

The structure of the aliases.xml file gives us a template we can use to define an alias in the comfort of a text editor rather than on the command line. For example, we have the following PL/SQL block, which reads a bind variable :

declare
v_msg varchar2(100);
begin
if upper(:mood) = 'BAD' then
if to_char(sysdate, 'fmDAY') != 'MONDAY' then -- fm suppresses the blank-padding that 'DAY' adds, so the comparison can match
v_msg := q'[At least it's not Monday!]';
elsif to_number(to_char(sysdate, 'HH24MI')) > 1200 then
v_msg := q'[At least it's not Monday morning!]';
else
v_msg := q'[I'm not surprised. It's Monday morning !]';
end if;
elsif upper(:mood) = 'GOOD' then
v_msg := q'[Don't tell me West Ham actually won ?!]';
else
v_msg := q'[I'm just a simple PL/SQL block and I can't handle complex emotions, OK ?!]';
end if;
dbms_output.new_line;
dbms_output.put_line(v_msg);
end;
/

Rather than typing this in on the command line, we can create a file (called pep_talk.xml) which looks like this :

<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<aliases>
<alias name="pep_talk">
<description><![CDATA[How are you feeling ? Usage is pep_talk <emotion>]]></description>
<queries>
<query>
<sql><![CDATA[
declare
v_msg varchar2(100);
begin
if upper(:mood) = 'BAD' then
if to_char(sysdate, 'fmDAY') != 'MONDAY' then -- fm suppresses the blank-padding that 'DAY' adds, so the comparison can match
v_msg := q'[At least it's not Monday!]';
elsif to_number(to_char(sysdate, 'HH24MI')) > 1200 then
v_msg := q'[At least it's not Monday morning!]';
else
v_msg := q'[I'm not surprised. It's Monday morning !]';
end if;
elsif upper(:mood) = 'GOOD' then
v_msg := q'[Don't tell me West Ham actually won ?!]';
else
v_msg := q'[I'm just a simple PL/SQL block and I can't handle complex emotions, OK ?!]';
end if;
dbms_output.new_line;
dbms_output.put_line(v_msg);
end;
]]></sql>
</query>
</queries>
</alias>
</aliases>

Now, we can load this alias from the file as follows :

alias load pep_talk.xml
Aliases loaded

We can now execute our new alias. First though, we need to remember to turn serveroutput on before we invoke it :
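An indicative run (the message will, of course, depend on the day and time) :

set serverout on
pep_talk bad

At least it's not Monday!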

Once you’ve terminated your SQLcl session, the new alias will be written to aliases.xml.

Exporting custom aliases

There may come a time when you want to share your custom aliases with your colleagues. After all, it’s always useful to know where the trace file is and who doesn’t need a pep talk from time-to-time ?

To “export” your aliases, you can issue the following command from SQLcl :

alias save mike_aliases.xml

This writes the file to the same location as your aliases.xml :

You can then import these aliases to another SQLcl installation simply by sharing the file and then using the alias load command.
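For example, after copying the file to the other machine’s SQLcl directory :

alias load mike_aliases.xml
Aliases loaded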

References

As you can imagine, there are a wide variety of possible uses for the ALIAS command.

As Kris Rice is the original author of this feature, his post is probably worth a read.
Jeff Smith has also written on this topic several times.

Menno Hoogendijk has an example employing some JavaScript wizardry, which he has published on GitHub.

Right, back to my trace files.

New Study: “Digital Natives” Value Brick and Mortar Stores More Than their Parents or Grandparents

Oracle Press Releases - Tue, 2019-06-25 08:00
Press Release
New Study: “Digital Natives” Value Brick and Mortar Stores More Than their Parents or Grandparents Global Study Highlights the Varying Shopping Expectations of Different Generations and the Role of Technology in Personalizing Retail

Redwood City, CA.—Jun 25, 2019

Despite clear differences in expectations among shoppers of different generations, almost half of retailers (44 percent) have made no progress in tailoring the in-store shopping experience, according to a recent study conducted by Oracle NetSuite, Wakefield Research and The Retail Doctor. The global study of 1,200 consumers and 400 retail executives across the U.S., U.K. and Australia dispelled stereotypes around generations and found big differences in generational expectations across baby boomers, Gen X, millennials and Gen Z.

“We have seen decades of diminishing experiences in brick and mortar stores, and the differences identified in these results point to its impact on consumers over the years,” said Bob Phibbs, CEO, The Retail Doctor. “Retailers have fallen behind in offering in-store experiences that balance personalization and customer service but there’s an opportunity to take the reins back. The expectation from consumers is clear and it’s up to retailers to offer engaging and custom experiences that will cater to shoppers across a diverse group of generations.”

Beauty is in the eye of the beholder: Retailers struggle to keep stride with generational shoppers

The in-store shopping experience remains an important part of the retail environment for all generations, but the progress retailers are making to improve the in-store experience is being viewed differently by different generations.

  • Despite the stereotypes of “digital natives”, Gen Z and millennials (43 percent) are most likely to do more in-store shopping this year, followed by Gen X (29 percent) and baby boomers (13 percent).
  • Gen Z and millennials (57 percent) had the most positive view of the current retail environment, feeling it was more inviting, followed by Gen X (40 percent). Baby boomers (27 percent) were more likely than consumers overall to find the current retail environment less inviting.
  • Gen Z valued in-store interaction the least, with 42 percent feeling more annoyed by increased interaction with retail associates. In contrast, millennials (56 percent), Gen X (44 percent) and baby boomers (43 percent) all noted they would feel more welcomed by more in-store interactions.

Retailers view emerging technologies through rose-colored glasses

While more than three quarters of retail executives (79 percent) believe having AI and VR in stores will increase sales, the study found that these technologies are not yet widely accepted by any generation.

  • Overall, only 14 percent of consumers believe that emerging technologies like AI and VR will have a significant impact on their purchase decisions.
  • Emerging tech in retail stores is most attractive to millennials (50 percent) followed by Gen Z (38 percent), Gen X (35 percent) and baby boomers (20 percent).
  • Perceptions of VR varied widely across different generations. Fifty-eight percent of Gen Z said VR would have some influence on their purchase decisions, while 59 percent of baby boomers said VR would have no influence on their purchase decision.

Insta-famous brands reach Gen Z and millennial consumers, but not as much as retailers think

While almost all retail executives (98 percent) think that engaging customers on social media is important to building stronger relationships with them, the study found a big disconnect with consumers across all generations.

  • Overall, only 12 percent of consumers think their engagement with brands on social media has a significant impact on the way they think or feel about a brand.
  • Among those who engage with brands on social media, Gen Z consumers (38 percent) are much more likely to engage with retailers on social media to get to know the brand than millennials (25 percent) and baby boomers (21 percent).
  • Gen Z consumers (65 percent) and millennials (63 percent) believe their engagement with brands on social media platforms has an impact on their relationship with those brands.
  • More than half of baby boomers (53 percent) and 29 percent of Gen X consumers do not engage with brands on social media.

“After all the talk about brick and mortar stores being dead, it’s interesting to see that ‘digital natives’ are more likely to increase their shopping in physical stores this year than any other generation,” said Greg Zakowicz, senior commerce marketing analyst, Oracle NetSuite. “Stepping back, these findings fit with broader trends we have been seeing around the importance of immediacy and underlines why retailers cannot afford to make assumptions about the needs and expectations of different generations. It really is a complex puzzle and as this study clearly shows, retailers need to think carefully about how they meet the needs of different generations.”

To read more about NetSuite’s insights into the report’s finding visit NetSuite’s cloud blog.

Methodology

For this survey, 1,200 consumers and 400 retail executives were surveyed around the overall retail environment, in-store and online shopping experiences and advanced technologies. Both retailers and consumers were surveyed from three global markets including the U.S., U.K. and Australia with retail executives representing organizations between $10-100 million in annual sales.

Contact Info
Danielle Tarp
Oracle
650-506-2904
danielle.tarp@oracle.com
About Wakefield Research

Wakefield is a full-service market research firm that uncovers insights for brands to help them solve problems and grow their business. Wakefield Research is a partner to the world’s leading consumer and B2B brands, including 50 of the Fortune 100. Wakefield Research conducts qualitative and quantitative research in 70 countries. For more information, please visit https://www.wakefieldresearch.com

About The Retail Doctor

The Retail Doctor is a New York-based retail consulting firm created by expert retail consultant and leading business mentor Bob Phibbs. With over 30 years of experience in retail, Bob has worked as a consultant, speaker, and entrepreneur, helping businesses revolutionize their brand and grow their success. Bob is also the author of three highly-praised books, including The Retail Doctor's Guide to Growing Your Business (WILEY). His clients include some of the largest retail brands in the world including Bernina, Brother, Caesars Palace, Hunter Douglas, Lego, Omega and Yamaha. For more information, please visit www.retaildoc.com

About Oracle NetSuite

For more than 20 years, Oracle NetSuite has helped organizations grow, scale and adapt to change. NetSuite provides a suite of cloud-based applications, which includes financials / Enterprise Resource Planning (ERP), HR, professional services automation and omnichannel commerce, used by more than 18,000 customers in 203 countries and dependent territories.

For more information, please visit http://www.netsuite.com

Follow NetSuite’s Cloud blog, Facebook page and @NetSuite Twitter handle for real-time updates.

About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Danielle Tarp

  • 650-506-2904

Belgian Telecom Provider Speeds Delivery of Customer Services with Oracle

Oracle Press Releases - Tue, 2019-06-25 07:00
Press Release
Belgian Telecom Provider Speeds Delivery of Customer Services with Oracle Proximus taps virtualized Oracle SBC Solution to boost deployment versatility, cut costs, and speed deployment

Redwood Shores, Calif.—Jun 25, 2019

Proximus, a leading international communications service provider, has chosen Oracle Communications’ virtualized Oracle Session Border Controller as a core network component to enable the delivery of its residential and enterprise cloud-based communications solutions for voice. As a result, Proximus will be able to deploy its internet communications offerings faster, while decreasing operational expenses and increasing service flexibility.

Oracle’s virtualized SBC platform will be running on Proximus’ telco cloud and used for residential VoIP and SIP trunking for enterprise customers. This will enable them to deliver trusted and first-class, real-time communications services across the Internet. The virtualization of Oracle’s SBC is an important step in Proximus’s overall network strategy to virtualize the majority of its telco and service applications on a multitenant and open telco cloud. In addition, the automated and orchestrated core network will allow for adaptable capacity planning.

“As a digital service provider, we want to deliver the latest technologies to our customers in a way that simplifies and improves their lives and work environments,” said Laurent Claus, director service platforms & cloud, Proximus. “This is why our choice of Oracle was on target. Oracle Communications’ SBC delivers unparalleled operational efficiency and flexibility, which are essential as we continue to scale our offerings and customer base.”

“Given the scale and complexity of Proximus’ network needs, Oracle Communications is a strong fit,” said Greg Collins, founder & principal analyst, Exact Ventures. “As a tier-one communications service provider, Proximus requires the speed, trust and innovation that Oracle can deliver.” 

“Proximus has been a long-time customer of Oracle Communications and this deployment is an exciting next step in their digital transformation journey,” said Doug Suriano, senior vice president and general manager, Oracle Communications. “Matching Proximus’ ambition to deliver innovative services in an easy-to-consume way, we are confident that Oracle’s virtualized Session Border Controller will provide them the security, comprehensive control and scalability needed to bring their customers into the next generation of communications services.”

Contact Info
Katie Barron
Oracle
+1.202.904.1138
katie.barron@oracle.com
Haroun Fenaux
Proximus
+32 476 60 03 33
press@proximus.com
About Proximus

Proximus Group is a telecommunication & ICT company operating in the Belgian and international markets, servicing residential, enterprise and public customers. Proximus’ ambition is to become a digital service provider, opening up a world of digital opportunities so people live better and work smarter. Through its best-quality integrated fixed and mobile networks, Proximus provides access anywhere and anytime to digital services and easy-to-use solutions, as well as to a broad offering of multimedia content. Proximus transforms technologies like the Internet of Things (IoT), Big Data, Cloud and Security into solutions with positive impact on people and society. With 13,391 employees, all engaged to offer customers a superior experience, the Group realized an underlying Group revenue of EUR 5,778 million end-2017.

Proximus (Euronext Brussels: PROX) is also active in Luxembourg through its affiliates Proximus Luxembourg and in the Netherlands through Telindus Netherlands. BICS is a leading international communications enabler, one of the key global voice carriers and the leading provider of mobile data services worldwide.

About Oracle Communications

Oracle Communications provides integrated communications and cloud solutions for Service Providers and Enterprises to accelerate their digital transformation journey in a communications-driven world from network evolution to digital business to customer experience. www.oracle.com/communications

To learn more about Oracle Communications industry solutions, visit: Oracle Communications LinkedIn, or join the conversation at Twitter @OracleComms.

About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Katie Barron

  • +1.202.904.1138

Haroun Fenaux

  • +32 476 60 03 33

Small- and Mid-sized Banks Fight Money Laundering with Oracle

Oracle Press Releases - Tue, 2019-06-25 07:00
Press Release
Small- and Mid-sized Banks Fight Money Laundering with Oracle Small- and Mid-sized Banks Fight Money Laundering with Oracle

Redwood Shores, Calif.—Jun 25, 2019

Oracle announced the availability of Oracle Financial Services Anti Money Laundering (AML) Express Edition targeted at small- and mid-sized banks. It provides a single, unified platform to efficiently detect, investigate, and report suspected money laundering and terrorist financing activity to comply with evolving regulations and guidelines.

Smaller banks need to address regulations and compliance the same as global top-tier banks but must do so with significantly smaller IT budgets and limited resources. AML Express uses new architecture principles to offer a choice of deployment and includes all the core functionality needed to fight financial crime.

“The largest financial institutions in the world have been using Oracle Anti Money Laundering solutions for decades. Today, the same comprehensive financial crime technology is now accessible for small- and mid-sized financial institutions. Lowering the total cost of ownership without compromising on the core functional capabilities is an engineering breakthrough made possible with the use of modern, cloud-compatible architectures,” said Sonny Singh, senior vice president and general manager, Oracle Financial Services.

To address the unique challenges of smaller banks, Oracle Financial Services created this scalable, out-of-the-box AML solution. Key features of AML Express include:

  • Architecture designed for rapid deployment on premises or on cloud infrastructure, allowing firms to transition to their future states faster and at reduced implementation cost
  • In-built library of scenarios that detect the most common money laundering behaviors, coupled with in-built case management abilities that reduce the time and resources needed for scenario configuration and case investigation
  • Modern solution design that allows visual scenario configuration, reducing coding overhead and enabling easy adaptation to ever-changing compliance demands

For more information about AML Express, please click here.

Contact Info
Judi Palmer
Oracle
+1 650 784 7901
judi.palmer@oracle.com
Brian Pitts
Hill+Knowlton Strategies
+1 312 475 5921
brian.pitts@hkstrategies.com
Katie McCracken
CMG
+44 20 7861 0736
kmccracken@cmgrp.com
About Oracle Financial Services

Oracle Financial Services Global Business Unit provides clients in more than 140 countries with an integrated, best-in-class, end-to-end solution of intelligent software and powerful hardware designed to meet every financial service need. Our market leading platforms provide the foundation for banks and insurers’ digital and core transformations and we deliver a modern suite of Analytical Applications for Risk, Finance Compliance and Customer Insight. For more information, visit our website at https://www.oracle.com/industries/financial-services/index.html.

About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Judi Palmer

  • +1 650 784 7901

Brian Pitts

  • +1 312 475 5921

Katie McCracken

  • +44 20 7861 0736

During Extract Upgrade “extract not ready to be upgraded because recovery SCN” returned

VitalSoftTech - Mon, 2019-06-24 23:59

During the upgrade of a Classic Extract process to Integrated Extract I get the "extract not ready to be upgraded because recovery SCN" error. How do I work around this?

The post During Extract Upgrade “extract not ready to be upgraded because recovery SCN” returned appeared first on VitalSoftTech.

Categories: DBA Blogs

Disable scheduler jobs during deployment

Jeff Kemp - Mon, 2019-06-24 19:54

Like those of most active sites, our applications have a healthy pipeline of change requests and bug fixes, and we manage this pipeline by maintaining a steady pace of small releases.

Each release is built, tested and deployed within a 3-4 week timeframe. Probably once or twice a month, on a Thursday evening, one or more deployments will be run, and each deployment is fully scripted with as few steps as possible. My standard deployment script has evolved over time to handle a number of cases where failures have happened in the past; failed deployments are rare now.

One issue we encountered some time ago was when a deployment script happened to be run at the same time as a database scheduler job; the job started halfway through the deployment, when some objects were in the process of being modified. This led to some temporary compilation failures that caused the job to fail. Ultimately the deployment was successful, and the next time the job ran it was able to recover; but we couldn’t be sure that another failure of this sort wouldn’t cause issues in future. So I added a step to each deployment to temporarily stop all the jobs and re-start them after the deployment completes, with a script like this:

prompt disable_all_jobs.sql

begin
  for r in (
    select job_name
    from   user_scheduler_jobs
    where  schedule_type = 'CALENDAR'
    and    enabled = 'TRUE'
    order by 1
  ) loop
    dbms_scheduler.disable
      (name  => r.job_name
      ,force => true);
  end loop;
end;
/

This script simply marks all the jobs as “disabled” so they don’t start during the deployment. A very similar script is run at the end of the deployment to re-enable all the scheduler jobs. This works fine, except for the odd occasion when a job just happens to start running just before the script starts, and is still running concurrently with the deployment. The force => true parameter in the script means those running jobs are allowed to continue.
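For completeness, the matching re-enable script (not shown in the original post) would presumably be a minimal variation along these lines – note that, as sketched, it would also re-enable any job that happened to be disabled before the deployment started:

prompt enable_all_jobs.sql

begin
  for r in (
    select job_name
    from   user_scheduler_jobs
    where  schedule_type = 'CALENDAR'
    and    enabled = 'FALSE'
    order by 1
  ) loop
    -- re-enable each calendar-based job that is currently disabled
    dbms_scheduler.enable (name => r.job_name);
  end loop;
end;
/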

To solve the problem of a job running concurrently with a deployment, I’ve added the following:

prompt Waiting for any running jobs to finish...

whenever sqlerror exit sql.sqlcode;

declare
  max_wait_seconds constant number := 60;
  start_time       date := sysdate;
  job_running      varchar2(100);
begin
  loop

    begin
      select job_name
      into   job_running
      from   user_scheduler_jobs
      where  state = 'RUNNING'
      and    rownum = 1;
    exception
      when no_data_found then
        job_running := null;
    end;

    exit when job_running is null;

    if sysdate - start_time > max_wait_seconds/24/60/60 then

      raise_application_error(-20000,
           'WARNING: waited for '
        || max_wait_seconds
        || ' seconds but job is still running ('
        || job_running
        || ').');

    else
      dbms_lock.sleep(2);
    end if;

  end loop;
end;
/

When the DBA runs the above script, it pauses to allow any running jobs to finish. Our jobs almost always finish in less than 30 seconds, usually sooner. The loop checks for any running jobs; if there are no jobs running it exits straight away – otherwise, it waits for a few seconds then checks again. If a job is still running after a minute, the script fails (stopping the deployment) and the DBA can investigate further to see what’s going on; once the job has finished, they can re-start the deployment.

Stein Mart Boosts Omni-Channel Growth with Oracle Cloud

Oracle Press Releases - Mon, 2019-06-24 07:30
Press Release
Stein Mart Boosts Omni-Channel Growth with Oracle Cloud Merchandise Financial Planning helps national retailer leverage data to optimize inventory management

Redwood Shores, Calif. and Jacksonville, Fla.—Jun 24, 2019

Stein Mart, a national specialty off-price retailer, has gained a holistic view of its inventory and a more streamlined approach to merchandise planning with Oracle Cloud.

By consolidating the planning and forecasting process for its physical stores, online store and warehouses into one solution, Stein Mart will be better equipped to manage its inventory to support the needs of its customers, regardless of how they choose to shop. With Oracle Retail Cloud Services, Stein Mart has the tools to keep its merchandise assortments fresh and relevant for buyers.

“We have been focused on simplifying our merchandising processes while expanding our omni-channel capabilities and new business initiatives. The enhanced functionality of Oracle’s Merchandise Financial Planning solution will help us analyze data faster to create better plans up front so we can buy smarter and manage inventory more effectively,” said Nick Swetonic, Stein Mart’s senior vice president of planning and allocation.

“Today, retailers sell whatever they buy, often at the expense of the bottom line. Tomorrow, they will be able to more accurately predict placement, price, and sizes across every store and market. This is the promise of the Oracle Retail Cloud,” noted Mike Webster, senior vice president and general manager, Oracle Retail. “We are helping companies like Stein Mart refine their approach to inventory and purchasing, so they can continually delight customers while improving results with merchandise that turns quickly.”

Stein Mart partnered with Cognira, experts in analytics, configuration and integration, and retail consulting firm The Parker Avery Group to re-engineer business processes and implement Oracle Retail Merchandise Financial Planning Cloud Service. Both Cognira and Parker Avery are members of the Oracle PartnerNetwork (OPN). Previously, Stein Mart also implemented Oracle Retail Merchandising, Oracle Retail Store Inventory Management, Oracle GoldenGate, Oracle JD Edwards, and Oracle Retail Point of Sale.

Contact Info
Kris Reeves
Oracle PR
+1.925.787.6744
kris.reeves@oracle.com
Linda Tasseff
Stein Mart Investor Relations
+1.904.858.2639
ltasseff@steinmart.com
About Stein Mart

Stein Mart, Inc. is a national specialty off-price retailer offering designer and name-brand fashion apparel, home décor, accessories and shoes at everyday discount prices. Stein Mart provides real value that customers love every day both in stores and online. The Company currently operates 283 stores across 30 states. For more information, please visit www.steinmart.com.

About Oracle Retail

Oracle is the modern platform for retail. Oracle provides retailers with a complete, open, and integrated platform for best-of-breed business applications, cloud services, and hardware that are engineered to work together. Leading fashion, grocery, and specialty retailers use Oracle solutions to accelerate from best practice to next practice, drive operational agility, and refine the customer experience. For more information, visit our website, www.oracle.com/retail.

About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Kris Reeves

  • +1.925.787.6744

Linda Tasseff

  • +1.904.858.2639

We moved to @Medium

Marcelo Ochoa - Sat, 2019-06-22 15:35
Since August 2017 we have moved to Medium; some of the reasons are well described in the blog post "3 reasons we moved our startup blog to Medium". You are invited to check out the new channel. Greetings.

ANSI bug

Jonathan Lewis - Sat, 2019-06-22 07:01

The following note is about a script that I found on my laptop while I was searching for some details about a bug that appears when you write SQL using the ANSI style format rather than traditional Oracle style. The script is clearly one that I must have cut and pasted from somewhere (possibly the OTN/ODC database forum) many years ago without making any notes about its source or resolution. All I can say about it is that the file has a creation date of July 2012 and I can’t find any reference to a problem through Google searches – though the tables, and even a set of specific insert statements, appear in a number of pages that look like coursework for computer studies, and MoS has a similar-looking bug “fixed in 11.2”.

Here’s the entire script:

rem
rem     Script:         ansi_bug.sql
rem     Author:         ???
rem     Dated:          July 2012
rem

CREATE TABLE Student (
  sid INT PRIMARY KEY,
  name VARCHAR(20) NOT NULL,
  address VARCHAR(20) NOT NULL,
  major CHAR(2)
);

CREATE TABLE Professor (
  pid INT PRIMARY KEY,
  name VARCHAR(20) NOT NULL,
  department VARCHAR(10) NOT NULL
);

CREATE TABLE Course (
  cid INT PRIMARY KEY,
  title VARCHAR(20) NOT NULL UNIQUE,
  credits INT NOT NULL,
  area VARCHAR(5) NOT NULL
);

CREATE TABLE Transcript (
  sid INT,
  cid INT,
  pid INT,
  semester VARCHAR(9),
  year CHAR(4),
  grade CHAR(1) NOT NULL,
  PRIMARY KEY (sid, cid, pid, semester, year),
  FOREIGN KEY (sid) REFERENCES Student (sid),
  FOREIGN KEY (cid) REFERENCES Course (cid),
  FOREIGN KEY (pid) REFERENCES Professor (pid)
);

INSERT INTO Student (sid, name, address, major) VALUES (101, 'Nathan', 'Edinburg', 'CS');
INSERT INTO Student (sid, name, address, major) VALUES (105, 'Hussein', 'Edinburg', 'IT');
INSERT INTO Student (sid, name, address, major) VALUES (103, 'Jose', 'McAllen', 'CE');
INSERT INTO Student (sid, name, address, major) VALUES (102, 'Wendy', 'Mission', 'CS');
INSERT INTO Student (sid, name, address, major) VALUES (104, 'Maria', 'Pharr', 'CS');
INSERT INTO Student (sid, name, address, major) VALUES (106, 'Mike', 'Edinburg', 'CE');
INSERT INTO Student (sid, name, address, major) VALUES (107, 'Lily', 'McAllen', NULL);

INSERT INTO Professor (pid, name, department) VALUES (201, 'Artem', 'CS');
INSERT INTO Professor (pid, name, department) VALUES (203, 'John', 'CS');
INSERT INTO Professor (pid, name, department) VALUES (202, 'Virgil', 'MATH');
INSERT INTO Professor (pid, name, department) VALUES (204, 'Pearl', 'CS');
INSERT INTO Professor (pid, name, department) VALUES (205, 'Christine', 'CS');

INSERT INTO Course (cid, title, credits, area) VALUES (4333, 'Database', 3, 'DB');
INSERT INTO Course (cid, title, credits, area) VALUES (1201, 'Comp literacy', 2, 'INTRO');
INSERT INTO Course (cid, title, credits, area) VALUES (6333, 'Advanced Database', 3, 'DB');
INSERT INTO Course (cid, title, credits, area) VALUES (6315, 'Applied Database', 3, 'DB');
INSERT INTO Course (cid, title, credits, area) VALUES (3326, 'Java', 3, 'PL');
INSERT INTO Course (cid, title, credits, area) VALUES (1370, 'CS I', 4, 'INTRO');

INSERT INTO Transcript (sid, cid, pid, semester, year, grade) VALUES (101, 4333, 201, 'Spring', '2009', 'A');
INSERT INTO Transcript (sid, cid, pid, semester, year, grade) VALUES (101, 6333, 201, 'Fall', '2009', 'A');
INSERT INTO Transcript (sid, cid, pid, semester, year, grade) VALUES (101, 6315, 201, 'Fall', '2009', 'A');
INSERT INTO Transcript (sid, cid, pid, semester, year, grade) VALUES (103, 4333, 203, 'Summer I', '2010', 'B');
INSERT INTO Transcript (sid, cid, pid, semester, year, grade) VALUES (102, 4333, 201, 'Fall', '2009', 'A');
INSERT INTO Transcript (sid, cid, pid, semester, year, grade) VALUES (103, 3326, 204, 'Spring', '2008', 'A');
INSERT INTO Transcript (sid, cid, pid, semester, year, grade) VALUES (104, 1201, 205, 'Fall', '2009', 'B');
INSERT INTO Transcript (sid, cid, pid, semester, year, grade) VALUES (104, 1370, 203, 'Summer II', '2010', 'A');
INSERT INTO Transcript (sid, cid, pid, semester, year, grade) VALUES (106, 1201, 205, 'Fall', '2009', 'C');
INSERT INTO Transcript (sid, cid, pid, semester, year, grade) VALUES (106, 1370, 203, 'Summer II', '2010', 'C');
INSERT INTO Transcript (sid, cid, pid, semester, year, grade) VALUES (105, 3326, 204, 'Spring', '2001', 'A');
INSERT INTO Transcript (sid, cid, pid, semester, year, grade) VALUES (105, 6315, 203, 'Fall', '2008', 'A');

SELECT 
        pid, 
        name, title
FROM 
        Professor 
NATURAL LEFT OUTER JOIN 
        (
                Transcript 
        NATURAL JOIN 
                Course
        )
;

SELECT 
        name, title
FROM 
        Professor 
NATURAL LEFT OUTER JOIN 
        (
                Transcript 
        NATURAL JOIN 
                Course
        )
;

SELECT 
        name, title
FROM 
        Professor 
NATURAL LEFT OUTER JOIN 
        (
                Transcript 
        NATURAL JOIN 
                Course
        )
order by pid
;

I’ve run three minor variations of the same query – the one in the middle selects two columns from a three table join using natural joins. The first query does the same but includes an extra column in the select list while the third query selects only the original columns but orders the result set by the extra column.

The middle query returns 60 rows – the first and third, with the “extra” column projected somewhere in the execution plan, return 13 rows.
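For reference, here is what I take to be the intended semantics of the middle query, rewritten with explicit ON clauses instead of NATURAL joins (the join columns – pid between Professor and the inline view, cid between Transcript and Course – are my inference from the table definitions). This version should produce the 13-row result:

SELECT
        p.name, tc.title
FROM
        Professor p
LEFT OUTER JOIN
        (
                SELECT  t.pid, c.title
                FROM    Transcript t
                JOIN    Course c
                ON      c.cid = t.cid
        ) tc
ON
        tc.pid = p.pid
;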

I didn’t even have a note of the then-current version of Oracle when I copied this script, but I’ve just run it on 12.2.0.1, 18.3.0.0, and 19.2.0.0 (using LiveSQL), and the error reproduces on all three versions.

Ubuntu Server: How to activate kernel dumps

Dietrich Schroff - Fri, 2019-06-21 14:25
If you are running Ubuntu Server, you can add kdump to your system to write kernel dumps in case of kernel panics, sudden reboots, etc.

Installing is very easy:
root@ubuntuserver:/etc# apt install linux-crashdump
Reading package lists... Done
Building dependency tree      
Reading state information... Done
The following additional packages will be installed:
  binutils binutils-common binutils-x86-64-linux-gnu crash kdump-tools kexec-tools libbinutils libdw1 libsnappy1v5 makedumpfile
Suggested packages:
  binutils-doc
The following NEW packages will be installed:
  binutils binutils-common binutils-x86-64-linux-gnu crash kdump-tools kexec-tools libbinutils libdw1 libsnappy1v5 linux-crashdump makedumpfile
0 upgraded, 11 newly installed, 0 to remove and 43 not upgraded.
Need to get 2,636 B/5,774 kB of archives.
After this operation, 26.0 MB of additional disk space will be used.
Do you want to continue? [Y/n]
Get:1 http://archive.ubuntu.com/ubuntu bionic-updates/main amd64 linux-crashdump amd64 4.15.0.46.48 [2,636 B]
Fetched 2,636 B in 0s (28.1 kB/s)    
Preconfiguring packages ...
Selecting previously unselected package binutils-common:amd64.
(Reading database ... 66831 files and directories currently installed.)
Preparing to unpack .../00-binutils-common_2.30-21ubuntu1~18.04_amd64.deb ...
Unpacking binutils-common:amd64 (2.30-21ubuntu1~18.04) ...
Selecting previously unselected package libbinutils:amd64.
Preparing to unpack .../01-libbinutils_2.30-21ubuntu1~18.04_amd64.deb ...
Unpacking libbinutils:amd64 (2.30-21ubuntu1~18.04) ...
Selecting previously unselected package binutils-x86-64-linux-gnu.
Preparing to unpack .../02-binutils-x86-64-linux-gnu_2.30-21ubuntu1~18.04_amd64.deb ...
Unpacking binutils-x86-64-linux-gnu (2.30-21ubuntu1~18.04) ...
Selecting previously unselected package binutils.
Preparing to unpack .../03-binutils_2.30-21ubuntu1~18.04_amd64.deb ...
Unpacking binutils (2.30-21ubuntu1~18.04) ...
Selecting previously unselected package libsnappy1v5:amd64.
Preparing to unpack .../04-libsnappy1v5_1.1.7-1_amd64.deb ...
Unpacking libsnappy1v5:amd64 (1.1.7-1) ...
Selecting previously unselected package crash.
Preparing to unpack .../05-crash_7.2.1-1ubuntu2_amd64.deb ...
Unpacking crash (7.2.1-1ubuntu2) ...
Selecting previously unselected package kexec-tools.
Preparing to unpack .../06-kexec-tools_1%3a2.0.16-1ubuntu1_amd64.deb ...
Unpacking kexec-tools (1:2.0.16-1ubuntu1) ...
Selecting previously unselected package libdw1:amd64.
Preparing to unpack .../07-libdw1_0.170-0.4_amd64.deb ...
Unpacking libdw1:amd64 (0.170-0.4) ...
Selecting previously unselected package makedumpfile.
Preparing to unpack .../08-makedumpfile_1%3a1.6.3-2_amd64.deb ...
Unpacking makedumpfile (1:1.6.3-2) ...
Selecting previously unselected package kdump-tools.
Preparing to unpack .../09-kdump-tools_1%3a1.6.3-2_amd64.deb ...
Unpacking kdump-tools (1:1.6.3-2) ...
Selecting previously unselected package linux-crashdump.
Preparing to unpack .../10-linux-crashdump_4.15.0.46.48_amd64.deb ...
Unpacking linux-crashdump (4.15.0.46.48) ...
Processing triggers for ureadahead (0.100.0-20) ...
Setting up libdw1:amd64 (0.170-0.4) ...
Setting up kexec-tools (1:2.0.16-1ubuntu1) ...
Generating /etc/default/kexec...
Setting up binutils-common:amd64 (2.30-21ubuntu1~18.04) ...
Processing triggers for libc-bin (2.27-3ubuntu1) ...
Setting up makedumpfile (1:1.6.3-2) ...
Setting up libsnappy1v5:amd64 (1.1.7-1) ...
Processing triggers for systemd (237-3ubuntu10.12) ...
Processing triggers for man-db (2.8.3-2ubuntu0.1) ...
Setting up libbinutils:amd64 (2.30-21ubuntu1~18.04) ...
Setting up kdump-tools (1:1.6.3-2) ...

Creating config file /etc/default/kdump-tools with new version
Sourcing file `/etc/default/grub'
Sourcing file `/etc/default/grub.d/50-curtin-settings.cfg'
Sourcing file `/etc/default/grub.d/kdump-tools.cfg'
Generating grub configuration file ...
Found linux image: /boot/vmlinuz-4.15.0-45-generic
Found initrd image: /boot/initrd.img-4.15.0-45-generic
done
Created symlink /etc/systemd/system/multi-user.target.wants/kdump-tools.service → /lib/systemd/system/kdump-tools.service.
Setting up linux-crashdump (4.15.0.46.48) ...
Setting up binutils-x86-64-linux-gnu (2.30-21ubuntu1~18.04) ...
Setting up binutils (2.30-21ubuntu1~18.04) ...
Setting up crash (7.2.1-1ubuntu2) ...
Processing triggers for ureadahead (0.100.0-20) ...
Processing triggers for libc-bin (2.27-3ubuntu1) ...
Processing triggers for systemd (237-3ubuntu10.12) ...
During the installation you have to answer the debconf questions for kexec-tools and kdump-tools (the dialog screenshots are not reproduced here).

After the installation the following parameter is added to the kernel cmdline:
grep -r crash /boot* |grep cfg
/boot/grub/grub.cfg:        linux    /boot/vmlinuz-4.15.0-46-generic root=UUID=a83c2a94-91c4-461a-b6a4-c7a81422a857 ro  maybe-ubiquity crashkernel=384M-:128M
/boot/grub/grub.cfg:            linux    /boot/vmlinuz-4.15.0-46-generic root=UUID=a83c2a94-91c4-461a-b6a4-c7a81422a857 ro  maybe-ubiquity crashkernel=384M-:128M
The general syntax of this parameter is:
crashkernel=<range1>:<size1>[,<range2>:<size2>,...][@offset]
range=start-[end], where 'start' is inclusive and 'end' is exclusive
So 384M-:128M means: on every system with at least 384M of RAM, reserve 128M for the crash kernel.

The configuration is done via /etc/default/kdump-tools. This is the parameter that controls the directory the core is dumped into:

cat /etc/default/kdump-tools  |grep DIR
# KDUMP_COREDIR - local path to save the vmcore to.
KDUMP_COREDIR="/var/crash"
The next step is to reboot and verify the kernel cmdline.

#cat /proc/cmdline 
BOOT_IMAGE=/boot/vmlinuz-4.15.0-46-generic root=UUID=a83c2a94-91c4-461a-b6a4-c7a81422a857 ro maybe-ubiquity crashkernel=384M-:128M


To trigger a test crash dump (note: this will crash the machine immediately), just use the following commands:
root@ubuntuserver:/etc# sysctl -w kernel.sysrq=1
kernel.sysrq = 1
root@ubuntuserver:/etc# echo c > /proc/sysrq-trigger
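
After the forced crash the machine reboots and, assuming the default KDUMP_COREDIR shown above, the dump should end up below /var/crash. A quick sanity check (kdump-config is shipped with the kdump-tools package; the exact directory layout may differ between versions):

kdump-config show     # current status, crashkernel reservation and dump target
ls -l /var/crash      # each crash should get its own timestamped directory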

Oracle ERP Cloud Recognized as a Leader in the Gartner Magic Quadrant for Cloud Core Financial Management Suites for Midsize, Large and Global Enterprises

Oracle Press Releases - Thu, 2019-06-20 07:30
Press Release
Oracle ERP Cloud Recognized as a Leader in the Gartner Magic Quadrant for Cloud Core Financial Management Suites for Midsize, Large and Global Enterprises Oracle named a Leader based on completeness of vision and ability to execute

Redwood Shores, Calif.—Jun 20, 2019

Oracle (NYSE: ORCL) has been named a Leader in Gartner’s 2019 “Magic Quadrant for Cloud Core Financial Management Suites for Midsize, Large and Global Enterprises” report1. Oracle ERP Cloud is positioned as a Leader based on its ability to execute and completeness of vision. A complimentary copy of the report is available here.

This is the third consecutive year that Oracle ERP Cloud has been recognized as a Leader in Gartner’s report, and out of 10 products evaluated, Oracle ERP Cloud is positioned highest for ability to execute as well as furthest to the right for completeness of vision.

According to the report, “Leaders demonstrate a market-defining vision of how core financial management systems and processes can be supported and improved by moving them to the cloud. They couple this with a clear ability to execute this vision through products, services and go-to-market strategies. They have a strong presence in the market and are growing their revenue and market share. In this market, Leaders show a consistent ability to secure deals with enterprises of different sizes, and have a good depth of functionality across all areas of core financial management. They have multiple proofs of successful deployments by customers, both in their home region and elsewhere. Their offerings are often used by system integrator partners to support financial transformation initiatives. Leaders typically address a wide market audience by supporting broad market requirements. However, they may fail to meet the specific needs of vertical markets or other, more specialized segments, which might be better addressed by Niche Players in particular.”

“Oracle remains laser-focused on our customers’ success. We are committed to continued significant investments in innovation that can help our 6,000+ ERP Cloud customers drive operational excellence in finance,” said Rondy Ng, Senior Vice President, Applications Development, Oracle. “We are ecstatic to be acknowledged once again as a Leader by Gartner. We believe this report is a validation of our product strengths, investment focus, and customer successes.”

Oracle ERP Cloud includes complete ERP capabilities across Financials, Procurement, and Project Portfolio Management (PPM), as well as Enterprise Performance Management (EPM) and Governance Risk and Compliance (GRC). Together with Supply Chain Management (SCM) and native integration with the broader Oracle Cloud Applications suite, which includes Human Capital Management (HCM) and Customer Experience (CX) SaaS applications, Oracle helps customers to stay ahead of changing expectations, build adaptable organizations, and realize the potential of the latest innovations.

Oracle’s portfolio of financial management and planning cloud offerings has garnered industry recognition. Oracle ERP Cloud was named the sole Leader in Gartner’s 2018 Magic Quadrant for Cloud ERP for Product-Centric Midsize Enterprises.2 Oracle was also named the Leader in the Gartner 2018 Magic Quadrant for Cloud Financial Planning and Analysis Solutions3 (with the highest position for its ability to execute) and was named a Leader in the 2018 Magic Quadrant for Cloud Financial Close Solutions.4

1 Gartner Magic Quadrant for Cloud Core Financial Management Suites for Midsize, Large and Global Enterprises, John Van Decker, Robert Anderson, Greg Leiter, 13 May 2019
2 Gartner Magic Quadrant for Cloud ERP for Product-Centric Midsize Enterprises, Mike Guay, John Van Decker, Christian Hestermann, Nigel Montgomery, Duy Nguyen, Denis Torii, Paul Saunders, Paul Schenck, Tim Faith, 31 October 2018
3 Gartner Magic Quadrant for Cloud Financial Planning and Analysis Solutions, Christopher Iervolino, John Van Decker, 24 July 2018
4 Gartner Magic Quadrant for Cloud Financial Close Solutions, John Van Decker, Christopher Iervolino, 26 July 2018

Gartner Disclaimer

Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner's research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

Additional Information

For additional information on Oracle ERP Cloud applications, visit Oracle Enterprise Resource Planning (ERP) Cloud’s Facebook and Twitter or the Modern Finance Leader blog.

Contact Info
Bill Rundle
Oracle PR
650.506.1891
bill.rundle@oracle.com
About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Baylor University Selects Oracle Cloud Applications to Gain Competitive Advantage

Oracle Press Releases - Thu, 2019-06-20 07:00
Press Release
Baylor University Selects Oracle Cloud Applications to Gain Competitive Advantage Pioneering Texas university shifts business applications to the cloud to enhance user experience, gain real-time insights and improve organizational agility

Redwood Shores, Calif.—Jun 20, 2019

Credit: Baylor University

To compete more aggressively at the pinnacle of higher education, Baylor University—the oldest continuously operating university in Texas—has adopted Oracle Cloud Applications. With cloud-based applications for finance, planning and human resources, Baylor will be able to improve productivity and business insights by transforming administrative operations and employee experience and gaining real-time access to data from across its growing operations.

From its beginning as a small Baptist college in 1845, Baylor has grown to serve more than 16,000 students annually and has become a world-class brand in higher education. Oracle Cloud Applications play a supportive role in Baylor’s aspiration to become a preeminent research university as outlined in the institution’s academic strategic plan, Illuminate.

To stay at the forefront of higher education as it continues to evolve, Baylor is replacing its manual systems with an integrated suite of applications that can provide real-time insights into key business processes. To meet these needs and gain a competitive edge over peer institutions, Baylor selected Oracle Enterprise Resource Planning (ERP) Cloud, Oracle Enterprise Performance Management (EPM) Cloud, and Oracle Human Capital Management (HCM) Cloud.

“Education is evolving and the technology that drives our organization forward needs to reflect modern education best practices,” said Becky King, associate vice president of IT, Baylor University. “Shifting to Oracle Cloud Applications will help us introduce modern best practices that will make our organization more efficient and reach our goal of becoming a top-tier, Christian research institution. Moving core finance, planning and HR systems to one cloud-based platform will also improve business insight and enhance our ability to respond to changing dynamics in education.”

With Oracle ERP Cloud, Oracle EPM Cloud and Oracle HCM Cloud, Baylor will be able to take advantage of the cloud to break down organizational silos, standardize processes and manage financial, planning and workforce data on a single integrated cloud platform. Oracle Cloud Applications’ common and intuitive interface enables rapid user adoption, delivers enhanced employee experience and improves productivity.

“To compete at the leading edge of higher education, institutions need real-time visibility across the entire organization in order to respond to rapidly changing educational needs and expectations,” said Hari Sankar, Group Vice President, Product Management. “With Oracle Cloud Applications, Baylor will be able to make smarter decisions about the direction of the organization while delivering better experiences to end users, improving its agility and enabling it to better compete in higher education.”

For additional information on Oracle Cloud Applications visit oracle.com/cloud/applications.

Contact Info
Bill Rundle
Oracle PR
650.506.1891
bill.rundle@oracle.com
About Baylor University

Baylor University is a private Christian University and a nationally ranked research institution. The University provides a vibrant campus community for more than 17,000 students by blending interdisciplinary research with an international reputation for educational excellence and a faculty commitment to teaching and scholarship. Chartered in 1845 by the Republic of Texas through the efforts of Baptist pioneers, Baylor is the oldest continually operating University in Texas. Located in Waco, Baylor welcomes students from all 50 states and more than 90 countries to study a broad range of degrees among its 12 nationally recognized academic divisions.

About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Safe Harbor

The preceding is intended to outline our general product direction. It is intended for information purposes only, and may not be incorporated into any contract. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. The development, release and timing of any features or functionality described for Oracle's products remains at the sole discretion of Oracle.

Q4 FY19 GAAP EPS Up 36% to $1.07 and NON-GAAP EPS Up 23% to $1.16

Oracle Press Releases - Wed, 2019-06-19 15:00
Press Release
Q4 FY19 GAAP EPS Up 36% to $1.07 and NON-GAAP EPS Up 23% to $1.16 Operating Income Up 3% in USD and 7% in Constant Currency

Redwood Shores, Calif.—Jun 19, 2019

Oracle Corporation (NYSE: ORCL) today announced fiscal 2019 Q4 results and fiscal 2019 full year results. Total Quarterly Revenues were $11.1 billion, up 1% in USD and up 4% in constant currency compared to Q4 last year. Cloud Services and License Support revenues were $6.8 billion, while Cloud License and On-Premise License revenues were $2.5 billion. Total Cloud Services and License Support plus Cloud License and On-Premise License revenues were $9.3 billion, up 3% in USD and 6% in constant currency.

Q4 GAAP Operating Income was up 2% to $4.3 billion and GAAP operating margin was 38%. Non-GAAP Operating Income was up 4% to $5.3 billion and non-GAAP operating margin was 47%. GAAP Net Income was up 14% to $3.7 billion and non-GAAP Net Income was up 3% to $4.1 billion. GAAP Earnings Per Share was $1.07, while non-GAAP Earnings Per Share was $1.16.

Short-term deferred revenues were $8.4 billion. Operating cash flow for fiscal 2019 was $14.6 billion.

For fiscal 2019, Total Revenues were $39.5 billion, slightly higher in USD and up 3% in constant currency. Cloud Services and License Support revenues were $26.7 billion, while Cloud License and On-Premise License revenues were $5.9 billion. Total Cloud Services and License Support plus Cloud License and On-Premise revenues were $32.6 billion, up 2% in USD and 4% in constant currency.

Fiscal 2019 GAAP Operating Income was $13.5 billion, and GAAP operating margin was 34%. Non-GAAP Operating Income was $17.4 billion, and non-GAAP operating margin was 44%. GAAP Net Income was $11.1 billion, while non-GAAP Net Income was $13.1 billion. GAAP Earnings Per Share increased 251% to $2.97, while non-GAAP Earnings Per Share was up 16% to $3.52.

“In Q4, our non-GAAP operating income grew 7% in constant currency—which drove EPS well above the high end of my guidance,” said Oracle CEO, Safra Catz. “Our high-margin Fusion and NetSuite cloud applications businesses are growing rapidly, while we downsize our low-margin legacy hardware business. The net result of this shift away from commodity hardware to cloud applications was a Q4 non-GAAP operating margin of 47%, the highest we’ve seen in five years.”

“Our Fusion ERP and HCM cloud applications suite revenues grew 32% in FY19,” said Oracle CEO, Mark Hurd. “Our NetSuite ERP cloud applications revenues also grew 32% this year. These strong results extend Oracle’s already commanding lead in worldwide Cloud ERP. Our cloud applications businesses are growing faster than our competitors. That said, let me call your attention to the following approved statement from industry analyst IDC.”

Per IDC’s latest annual market share results, Oracle gained the most market share globally out of all Enterprise Applications SaaS vendors three years running—in CY16, CY17 and CY18.

“We added over five thousand new Autonomous Database trials in Q4,” said Oracle Chairman and CTO, Larry Ellison. “Our new Gen2 Cloud Infrastructure offers those customers a compelling array of advanced technology features including our self-driving database that automatically encrypts all your data, backs itself up, tunes itself, upgrades itself, and patches itself when a security threat is detected. It does all of this autonomously—while running—without the need for any human intervention, and without the need for any downtime. No other cloud infrastructure provides anything close to these autonomous features.”

The Board of Directors also declared a quarterly cash dividend of $0.24 per share of outstanding common stock. This dividend will be paid to stockholders of record as of the close of business on July 17, 2019, with a payment date of July 31, 2019.

Q4 Fiscal 2019 Earnings Conference Call and Webcast

Oracle will hold a conference call and webcast today to discuss these results at 2:00 p.m. Pacific. You may listen to the call by dialing (816) 287-5563, Passcode: 425392. To access the live webcast, please visit the Oracle Investor Relations website at http://www.oracle.com/investor. In addition, Oracle’s Q4 results and fiscal 2019 financial tables are available on the Oracle Investor Relations website.

A replay of the conference call will also be available by dialing (855) 859-2056 or (404) 537-3406, Passcode: 9955119.

Contact Info
Ken Bond
Oracle Investor Relations
+1.650.607.0349
ken.bond@oracle.com
Deborah Hellinger
Oracle Corporate Communications
+1.212.508.7935
deborah.hellinger@oracle.com
About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly-Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE:ORCL), visit us at www.oracle.com or contact Investor Relations at investor_us@oracle.com or (650) 506-4073.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

“Safe Harbor” Statement

Statements in this press release relating to Oracle's future plans, expectations, beliefs, intentions and prospects, including statements regarding the growth of our high-margin cloud applications businesses, are "forward-looking statements" and are subject to material risks and uncertainties. Many factors could affect our current expectations and our actual results, and could cause actual results to differ materially. We presently consider the following to be among the important factors that could cause actual results to differ materially from expectations: (1) Our cloud strategy, including our Oracle Software as a Service and Infrastructure as a Service offerings, may not be successful. (2) If we are unable to develop new or sufficiently differentiated products and services, integrate acquired products and services, or enhance and improve our existing products and support services in a timely manner, or price our products and services to meet market demand, customers may not purchase or subscribe to our software, hardware or cloud offerings or renew software support, hardware support or cloud subscriptions contracts. (3) Enterprise customers rely on our cloud, license and hardware offerings and related services to run their businesses and significant coding, manufacturing or configuration errors in our cloud, license and hardware offerings and related services could expose us to product liability, performance and warranty claims, as well as cause significant harm to our brand and reputation, which could impact our future sales. (4) If the security measures for our products and services are compromised and as a result, our customers' data or our IT systems are accessed improperly, made unavailable, or improperly modified, our products and services may be perceived as vulnerable, our brand and reputation could be damaged and we may experience legal claims and reduced sales. (5) Our business practices with respect to data could give rise to operational interruption, liabilities or reputational harm as a result of governmental regulation, legal requirements or industry standards relating to consumer privacy and data protection. (6) Economic, political and market conditions can adversely affect our business, results of operations and financial condition, including our revenue growth and profitability, which in turn could adversely affect our stock price. (7) Our international sales and operations subject us to additional risks that can adversely affect our operating results. (8) We have a selective and active acquisition program and our acquisitions may not be successful, may involve unanticipated costs or other integration issues or may disrupt our existing operations. A detailed discussion of these factors and other risks that affect our business is contained in our SEC filings, including our most recent reports on Form 10-K and Form 10-Q, particularly under the heading "Risk Factors." Copies of these filings are available online from the SEC or by contacting Oracle Corporation's Investor Relations Department at (650) 506-4073 or by clicking on SEC Filings on Oracle’s Investor Relations website at http://www.oracle.com/investor. All information set forth in this press release is current as of June 19, 2019. Oracle undertakes no duty to update any statement in light of new information or future events. 

Contextual Targeting vs Behavioral Targeting

VitalSoftTech - Tue, 2019-06-18 12:19

Let’s suppose these are the olden times and you have to advertise for a new circus in town. Do you paste the posters on the walls of places of entertainment like a movie theater, a bar, horse racing tracks, or a casino? Or do you spend a little time about town and look around for […]

The post Contextual Targeting vs Behavioral Targeting appeared first on VitalSoftTech.

Categories: DBA Blogs

PostgreSQL partitioning (8): Sub-partitioning

Yann Neuhaus - Tue, 2019-06-18 04:19

We are slowly coming to the end of this little series about partitioning in PostgreSQL. In the last post we had a look at indexing and constraints, and today we will have a look at sub-partitioning. Sub-partitioning means you go one step further and partition the partitions as well. It is not required to read all the posts of this series to follow this one, but if you want to, here they are:

  1. PostgreSQL partitioning (1): Preparing the data set
  2. PostgreSQL partitioning (2): Range partitioning
  3. PostgreSQL partitioning (3): List partitioning
  4. PostgreSQL partitioning (4) : Hash partitioning
  5. PostgreSQL partitioning (5): Partition pruning
  6. PostgreSQL partitioning (6): Attaching and detaching partitions
  7. PostgreSQL partitioning (7): Indexing and constraints

Coming back to our range partitioned table, this is how it currently looks:

postgres=# \d+ traffic_violations_p
                                      Partitioned table "public.traffic_violations_p"
         Column          |          Type          | Collation | Nullable | Default | Storage  | Stats target | Description 
-------------------------+------------------------+-----------+----------+---------+----------+--------------+-------------
 seqid                   | text                   |           |          |         | extended |              | 
 date_of_stop            | date                   |           |          |         | plain    |              | 
 time_of_stop            | time without time zone |           |          |         | plain    |              | 
 agency                  | text                   |           |          |         | extended |              | 
 subagency               | text                   |           |          |         | extended |              | 
 description             | text                   |           |          |         | extended |              | 
 location                | text                   |           |          |         | extended |              | 
 latitude                | numeric                |           |          |         | main     |              | 
 longitude               | numeric                |           |          |         | main     |              | 
 accident                | text                   |           |          |         | extended |              | 
 belts                   | boolean                |           |          |         | plain    |              | 
 personal_injury         | boolean                |           |          |         | plain    |              | 
 property_damage         | boolean                |           |          |         | plain    |              | 
 fatal                   | boolean                |           |          |         | plain    |              | 
 commercial_license      | boolean                |           |          |         | plain    |              | 
 hazmat                  | boolean                |           |          |         | plain    |              | 
 commercial_vehicle      | boolean                |           |          |         | plain    |              | 
 alcohol                 | boolean                |           |          |         | plain    |              | 
 workzone                | boolean                |           |          |         | plain    |              | 
 state                   | text                   |           |          |         | extended |              | 
 vehicletype             | text                   |           |          |         | extended |              | 
 year                    | smallint               |           |          |         | plain    |              | 
 make                    | text                   |           |          |         | extended |              | 
 model                   | text                   |           |          |         | extended |              | 
 color                   | text                   |           |          |         | extended |              | 
 violation_type          | text                   |           |          |         | extended |              | 
 charge                  | text                   |           |          |         | extended |              | 
 article                 | text                   |           |          |         | extended |              | 
 contributed_to_accident | boolean                |           |          |         | plain    |              | 
 race                    | text                   |           |          |         | extended |              | 
 gender                  | text                   |           |          |         | extended |              | 
 driver_city             | text                   |           |          |         | extended |              | 
 driver_state            | text                   |           |          |         | extended |              | 
 dl_state                | text                   |           |          |         | extended |              | 
 arrest_type             | text                   |           |          |         | extended |              | 
 geolocation             | point                  |           |          |         | plain    |              | 
 council_districts       | smallint               |           |          |         | plain    |              | 
 councils                | smallint               |           |          |         | plain    |              | 
 communities             | smallint               |           |          |         | plain    |              | 
 zip_codes               | smallint               |           |          |         | plain    |              | 
 municipalities          | smallint               |           |          |         | plain    |              | 
Partition key: RANGE (date_of_stop)
Partitions: traffic_violations_p_2013 FOR VALUES FROM ('2013-01-01') TO ('2014-01-01'),
            traffic_violations_p_2014 FOR VALUES FROM ('2014-01-01') TO ('2015-01-01'),
            traffic_violations_p_2015 FOR VALUES FROM ('2015-01-01') TO ('2016-01-01'),
            traffic_violations_p_2016 FOR VALUES FROM ('2016-01-01') TO ('2017-01-01'),
            traffic_violations_p_2017 FOR VALUES FROM ('2017-01-01') TO ('2018-01-01'),
            traffic_violations_p_2018 FOR VALUES FROM ('2018-01-01') TO ('2019-01-01'),
            traffic_violations_p_2019 FOR VALUES FROM ('2019-01-01') TO ('2020-01-01'),
            traffic_violations_p_2020 FOR VALUES FROM ('2020-01-01') TO ('2021-01-01'),
            traffic_violations_p_2021 FOR VALUES FROM ('2021-01-01') TO ('2022-01-01'),
            traffic_violations_p_default DEFAULT

Let’s assume you expect traffic violations to grow exponentially in 2022: more and more cars will be on the road, and more cars mean more traffic violations. To be prepared for that you want to partition not only by year but also by month. In other words: add a new partition for 2022, but sub-partition it by month. First of all you need a new partition for 2022 that is itself partitioned as well:

create table traffic_violations_p_2022
partition of traffic_violations_p
for values from ('2022-01-01') to ('2023-01-01') partition by range(date_of_stop);

Now we can add the twelve monthly partitions to the partitioned partition we just created:

create table traffic_violations_p_2022_jan
partition of traffic_violations_p_2022
for values from ('2022-01-01') to ('2022-02-01');

create table traffic_violations_p_2022_feb
partition of traffic_violations_p_2022
for values from ('2022-02-01') to ('2022-03-01');

create table traffic_violations_p_2022_mar
partition of traffic_violations_p_2022
for values from ('2022-03-01') to ('2022-04-01');

create table traffic_violations_p_2022_apr
partition of traffic_violations_p_2022
for values from ('2022-04-01') to ('2022-05-01');

create table traffic_violations_p_2022_may
partition of traffic_violations_p_2022
for values from ('2022-05-01') to ('2022-06-01');

create table traffic_violations_p_2022_jun
partition of traffic_violations_p_2022
for values from ('2022-06-01') to ('2022-07-01');

create table traffic_violations_p_2022_jul
partition of traffic_violations_p_2022
for values from ('2022-07-01') to ('2022-08-01');

create table traffic_violations_p_2022_aug
partition of traffic_violations_p_2022
for values from ('2022-08-01') to ('2022-09-01');

create table traffic_violations_p_2022_sep
partition of traffic_violations_p_2022
for values from ('2022-09-01') to ('2022-10-01');

create table traffic_violations_p_2022_oct
partition of traffic_violations_p_2022
for values from ('2022-10-01') to ('2022-11-01');

create table traffic_violations_p_2022_nov
partition of traffic_violations_p_2022
for values from ('2022-11-01') to ('2022-12-01');

create table traffic_violations_p_2022_dec
partition of traffic_violations_p_2022
for values from ('2022-12-01') to ('2023-01-01');
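
As an aside: typing twelve almost identical statements is tedious. The same monthly partitions could be generated in one go with an anonymous code block; this is just a sketch equivalent to the statements above, not part of the original series:

do $$
declare
  months text[] := array['jan','feb','mar','apr','may','jun',
                         'jul','aug','sep','oct','nov','dec'];
begin
  for m in 1..12 loop
    -- build and run one CREATE TABLE per month; %L quotes the date bounds
    execute format(
      'create table traffic_violations_p_2022_%s
         partition of traffic_violations_p_2022
         for values from (%L) to (%L)',
      months[m],
      make_date(2022, m, 1),
      (make_date(2022, m, 1) + interval '1 month')::date);
  end loop;
end $$;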

Looking at psql’s output when we describe the partitioned table, not much has changed: just the keyword “PARTITIONED” now shows up beside our new partition for 2022:

postgres=# \d+ traffic_violations_p
                                      Partitioned table "public.traffic_violations_p"
         Column          |          Type          | Collation | Nullable | Default | Storage  | Stats target | Description 
-------------------------+------------------------+-----------+----------+---------+----------+--------------+-------------
 seqid                   | text                   |           |          |         | extended |              | 
 date_of_stop            | date                   |           |          |         | plain    |              | 
 time_of_stop            | time without time zone |           |          |         | plain    |              | 
 agency                  | text                   |           |          |         | extended |              | 
 subagency               | text                   |           |          |         | extended |              | 
 description             | text                   |           |          |         | extended |              | 
 location                | text                   |           |          |         | extended |              | 
 latitude                | numeric                |           |          |         | main     |              | 
 longitude               | numeric                |           |          |         | main     |              | 
 accident                | text                   |           |          |         | extended |              | 
 belts                   | boolean                |           |          |         | plain    |              | 
 personal_injury         | boolean                |           |          |         | plain    |              | 
 property_damage         | boolean                |           |          |         | plain    |              | 
 fatal                   | boolean                |           |          |         | plain    |              | 
 commercial_license      | boolean                |           |          |         | plain    |              | 
 hazmat                  | boolean                |           |          |         | plain    |              | 
 commercial_vehicle      | boolean                |           |          |         | plain    |              | 
 alcohol                 | boolean                |           |          |         | plain    |              | 
 workzone                | boolean                |           |          |         | plain    |              | 
 state                   | text                   |           |          |         | extended |              | 
 vehicletype             | text                   |           |          |         | extended |              | 
 year                    | smallint               |           |          |         | plain    |              | 
 make                    | text                   |           |          |         | extended |              | 
 model                   | text                   |           |          |         | extended |              | 
 color                   | text                   |           |          |         | extended |              | 
 violation_type          | text                   |           |          |         | extended |              | 
 charge                  | text                   |           |          |         | extended |              | 
 article                 | text                   |           |          |         | extended |              | 
 contributed_to_accident | boolean                |           |          |         | plain    |              | 
 race                    | text                   |           |          |         | extended |              | 
 gender                  | text                   |           |          |         | extended |              | 
 driver_city             | text                   |           |          |         | extended |              | 
 driver_state            | text                   |           |          |         | extended |              | 
 dl_state                | text                   |           |          |         | extended |              | 
 arrest_type             | text                   |           |          |         | extended |              | 
 geolocation             | point                  |           |          |         | plain    |              | 
 council_districts       | smallint               |           |          |         | plain    |              | 
 councils                | smallint               |           |          |         | plain    |              | 
 communities             | smallint               |           |          |         | plain    |              | 
 zip_codes               | smallint               |           |          |         | plain    |              | 
 municipalities          | smallint               |           |          |         | plain    |              | 
Partition key: RANGE (date_of_stop)
Partitions: traffic_violations_p_2013 FOR VALUES FROM ('2013-01-01') TO ('2014-01-01'),
            traffic_violations_p_2014 FOR VALUES FROM ('2014-01-01') TO ('2015-01-01'),
            traffic_violations_p_2015 FOR VALUES FROM ('2015-01-01') TO ('2016-01-01'),
            traffic_violations_p_2016 FOR VALUES FROM ('2016-01-01') TO ('2017-01-01'),
            traffic_violations_p_2017 FOR VALUES FROM ('2017-01-01') TO ('2018-01-01'),
            traffic_violations_p_2018 FOR VALUES FROM ('2018-01-01') TO ('2019-01-01'),
            traffic_violations_p_2019 FOR VALUES FROM ('2019-01-01') TO ('2020-01-01'),
            traffic_violations_p_2020 FOR VALUES FROM ('2020-01-01') TO ('2021-01-01'),
            traffic_violations_p_2021 FOR VALUES FROM ('2021-01-01') TO ('2022-01-01'),
            traffic_violations_p_2022 FOR VALUES FROM ('2022-01-01') TO ('2023-01-01'), PARTITIONED,
            traffic_violations_p_default DEFAULT

This is where the new functions in PostgreSQL 12 become very handy:

postgres=# select * from pg_partition_tree('traffic_violations_p');
             relid             |        parentrelid        | isleaf | level 
-------------------------------+---------------------------+--------+-------
 traffic_violations_p          |                           | f      |     0
 traffic_violations_p_default  | traffic_violations_p      | t      |     1
 traffic_violations_p_2013     | traffic_violations_p      | t      |     1
 traffic_violations_p_2014     | traffic_violations_p      | t      |     1
 traffic_violations_p_2015     | traffic_violations_p      | t      |     1
 traffic_violations_p_2016     | traffic_violations_p      | t      |     1
 traffic_violations_p_2017     | traffic_violations_p      | t      |     1
 traffic_violations_p_2018     | traffic_violations_p      | t      |     1
 traffic_violations_p_2019     | traffic_violations_p      | t      |     1
 traffic_violations_p_2020     | traffic_violations_p      | t      |     1
 traffic_violations_p_2021     | traffic_violations_p      | t      |     1
 traffic_violations_p_2022     | traffic_violations_p      | f      |     1
 traffic_violations_p_2022_jan | traffic_violations_p_2022 | t      |     2
 traffic_violations_p_2022_feb | traffic_violations_p_2022 | t      |     2
 traffic_violations_p_2022_mar | traffic_violations_p_2022 | t      |     2
 traffic_violations_p_2022_apr | traffic_violations_p_2022 | t      |     2
 traffic_violations_p_2022_may | traffic_violations_p_2022 | t      |     2
 traffic_violations_p_2022_jun | traffic_violations_p_2022 | t      |     2
 traffic_violations_p_2022_jul | traffic_violations_p_2022 | t      |     2
 traffic_violations_p_2022_aug | traffic_violations_p_2022 | t      |     2
 traffic_violations_p_2022_sep | traffic_violations_p_2022 | t      |     2
 traffic_violations_p_2022_oct | traffic_violations_p_2022 | t      |     2
 traffic_violations_p_2022_nov | traffic_violations_p_2022 | t      |     2
 traffic_violations_p_2022_dec | traffic_violations_p_2022 | t      |     2
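
pg_partition_tree() is not the only new introspection function in PostgreSQL 12. pg_partition_root() and pg_partition_ancestors() let you walk the hierarchy upwards from any partition; a small sketch (the comments describe the expected results):

select pg_partition_root('traffic_violations_p_2022_jan');
-- returns traffic_violations_p

select relid from pg_partition_ancestors('traffic_violations_p_2022_jan');
-- returns the partition itself, then traffic_violations_p_2022, then traffic_violations_p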

To verify that data is routed correctly to the sub-partitions, let’s add some data for 2022:

insert into traffic_violations_p (date_of_stop)
       select * from generate_series ( date('01-01-2022')
                                     , date('12-31-2022')
                                     , interval '1 day' );

If we did the partitioning correctly, we should see data in the new partitions:

postgres=# select count(*) from traffic_violations_p_2022_nov;
 count 
-------
    30
(1 row)

postgres=# select count(*) from traffic_violations_p_2022_dec;
 count 
-------
    31
(1 row)

postgres=# select count(*) from traffic_violations_p_2022_feb;
 count 
-------
    28
(1 row)
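
Instead of querying each leaf by name, you can also let the rows tell you where they landed: tableoid::regclass resolves to the partition that physically stores each row. A quick sketch, not part of the original post:

select tableoid::regclass as partition, count(*)
from   traffic_violations_p_2022
group  by 1
order  by 1;
-- should list all twelve monthly partitions with their per-month day counts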

Here we go. Of course you could go even further and sub-partition the monthly partitions by day or week. You can also partition by list and then sub-partition the list partitions by range, or partition by range and then sub-partition by list, e.g.:

postgres=# create table traffic_violations_p_list_dummy partition of traffic_violations_p_list for values in ('dummy') partition by range (date_of_stop);
CREATE TABLE
postgres=# create table traffic_violations_p_list_dummy_2022 partition of traffic_violations_p_list_dummy for values from ('2022-01-01') to ('2023-01-01');
CREATE TABLE
postgres=# insert into traffic_violations_p_list (seqid, violation_type , date_of_stop) values (-1,'dummy',date('2022-12-01'));
INSERT 0 1
postgres=# select date_of_stop,violation_type from traffic_violations_p_list_dummy_2022;
 date_of_stop | violation_type 
--------------+----------------
 2022-12-01   | dummy
(1 row)

That’s it for sub-partitioning. In the final post we will look at some corner cases with partitioning in PostgreSQL.

The post PostgreSQL partitioning (8): Sub-partitioning appeared first on Blog dbi services.

Looking for errors in the Clusterware and RAC logs? Dash through using the TFA Collector

VitalSoftTech - Mon, 2019-06-17 09:49

The Oracle Trace File analyzer utility has been originally developed by Oracle to help collect and bundle up all the pertinent diagnostic data in the log files, tracefiles, os statistics, etc.. This is a very common task when Oracle Support engineers request this information to help troubleshoot issues and bugs.

The post Looking for errors in the Clusterware and RAC logs? Dash through using the TFA Collector appeared first on VitalSoftTech.

Categories: DBA Blogs

Can’t Unnest

Jonathan Lewis - Mon, 2019-06-17 09:35

In an echo of a very old “conditional SQL” posting, a recent posting on the ODC general database discussion forum ran into a few classic errors of trouble-shooting. By a lucky coincidence this allowed me to rediscover and publish an old example of parallel execution gone wild before moving on to talk about the fundamental problem exhibited in the latest query.

The ODC thread started with a question along the lines of “why isn’t Oracle using the index I hinted”, with the minor variation that it said “When I hint my SQL with an index hint it runs quickly so I’ve created a profile that applies the hint, but the hint doesn’t get used in production.”

The query was a bit messy and, as is often the case with ODC, the formatting wasn’t particularly readable, so I’ve extracted the where clause from the SQL that was used to generate the profile and reformatted it below. See if you can spot the clue that tells you why there might be a big problem using this SQL to generate a profile to use in the production environment:


WHERE   
        MSG.MSG_TYP_CD = '210_CUSTOMER_INVOICE' 
AND     MSG.MSG_CAPTR_STG_CD = 'PRE_BCS' 
AND     MSG.SRCH_4_FLD_VAL = '123456'   
AND     (
            (    'INVOICENUMBER' = 'INVOICENUMBER' 
             AND MSG.MSG_ID IN (
                        SELECT  *   
                        FROM    TABLE(CAST(FNM_GN_IN_STRING_LIST('123456') AS TABLE_OF_VARCHAR)))
            ) 
         OR (    'INVOICENUMBER' = 'SIEBELORDERID' 
             AND MSG.SRCH_3_FLD_VAL IN (
                        SELECT  *   
                        FROM    TABLE(CAST(FNM_GN_IN_STRING_LIST('') AS TABLE_OF_VARCHAR)))
            )
        ) 
AND     MSG.MSG_ID = TRK.INV_NUM(+) 
AND     (   TRK.RESEND_DT IS NULL 
         OR TRK.RESEND_DT = (
                        SELECT  MAX(TRK1.RESEND_DT)   
                        FROM    FNM.BCS_INV_RESEND_TRK TRK1   
                        WHERE   TRK1.INV_NUM = TRK.INV_NUM
                )
        )

If the SQL by itself doesn’t give you an important clue, compare it with the Predicate Information from the “good” execution plan that it produced:


Predicate Information (identified by operation id):  
---------------------------------------------------  
   2 - filter(("TRK"."RESEND_DT" IS NULL OR "TRK"."RESEND_DT"=))  
   8 - filter(("MSG"."SRCH_4_FLD_VAL"='123456' AND "MSG"."MSG_CAPTR_STG_CD"='PRE_BCS'))  
   9 - access("MSG"."MSG_ID"="COLUMN_VALUE" AND "MSG"."MSG_TYP_CD"='210_CUSTOMER_INVOICE')  
       filter("MSG"."MSG_TYP_CD"='210_CUSTOMER_INVOICE')  
  10 - access("MSG"."MSG_ID"="TRK"."INV_NUM")  
  13 - access("TRK1"."INV_NUM"=:B1)  

Have you spotted the thing that isn’t there in the predicate information?

What happened to the ‘INVOICENUMBER’ = ‘INVOICENUMBER’ predicate and the ‘INVOICENUMBER’ = ‘SIEBELORDERID’ predicate? They’ve disappeared because the optimizer knows the first is always true and the second is always false, so neither needs to be tested at run-time. Moreover, each of these predicates is one half of a conjunct (AND), so in the second case the entire two-part predicate can be eliminated, and the original where clause immediately reduces to:


WHERE   
        MSG.MSG_TYP_CD = '210_CUSTOMER_INVOICE' 
AND     MSG.MSG_CAPTR_STG_CD = 'PRE_BCS' 
AND     MSG.SRCH_4_FLD_VAL = '123456'   
AND     (
                 MSG.MSG_ID IN (
                        SELECT  *   
                        FROM    TABLE(CAST(FNM_GN_IN_STRING_LIST('123456') AS TABLE_OF_VARCHAR)))
        ) 
AND     MSG.MSG_ID = TRK.INV_NUM(+) 
AND     (   TRK.RESEND_DT IS NULL 
         OR TRK.RESEND_DT = (
                        SELECT  MAX(TRK1.RESEND_DT)   
                        FROM    FNM.BCS_INV_RESEND_TRK TRK1   
                        WHERE   TRK1.INV_NUM = TRK.INV_NUM
                )
        )
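
As an aside, you can watch this constant folding in isolation with a trivial example (hypothetical, not from the OP’s system); the always-true comparison simply vanishes from the plan:

explain plan for select * from dual where 'A' = 'A' and dummy = 'X';
select * from table(dbms_xplan.display(format => 'basic +predicate'));

-- the Predicate Information section should show only: filter("DUMMY"='X')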

Looking at this reduced predicate you may note that the IN subquery referencing the fnm_gn_in_string_list() collection could now be unnested and used to drive the final execution plan, and the optimizer will even recognize that it’s a rowsource with at most one row. So here’s the “good” execution plan:


---------------------------------------------------------------------------------------------------------------------------------------------------------------  
| Id  | Operation                               | Name                  | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  |  OMem |  1Mem | Used-Mem |  
---------------------------------------------------------------------------------------------------------------------------------------------------------------  
|   0 | SELECT STATEMENT                        |                       |      1 |        |      2 |00:00:00.08 |      12 |      7 |       |       |          |  
|   1 |  SORT ORDER BY                          |                       |      1 |      1 |      2 |00:00:00.08 |      12 |      7 |  2048 |  2048 | 2048  (0)|  
|*  2 |   FILTER                                |                       |      1 |        |      2 |00:00:00.08 |      12 |      7 |       |       |          |  
|   3 |    NESTED LOOPS OUTER                   |                       |      1 |      1 |      2 |00:00:00.08 |      10 |      7 |       |       |          |  
|   4 |     NESTED LOOPS                        |                       |      1 |      1 |      2 |00:00:00.06 |       6 |      5 |       |       |          |  
|   5 |      VIEW                               | VW_NSO_1              |      1 |      1 |      1 |00:00:00.01 |       0 |      0 |       |       |          |  
|   6 |       HASH UNIQUE                       |                       |      1 |      1 |      1 |00:00:00.01 |       0 |      0 |  1697K|  1697K|  487K (0)|  
|   7 |        COLLECTION ITERATOR PICKLER FETCH| FNM_GN_IN_STRING_LIST |      1 |      1 |      1 |00:00:00.01 |       0 |      0 |       |       |          |  
|*  8 |      TABLE ACCESS BY INDEX ROWID        | FNM_VSBL_MSG          |      1 |      1 |      2 |00:00:00.06 |       6 |      5 |       |       |          |  
|*  9 |       INDEX RANGE SCAN                  | XIE2FNM_VSBL_MSG      |      1 |      4 |      4 |00:00:00.04 |       4 |      3 |       |       |          |  
|* 10 |     INDEX RANGE SCAN                    | XPKBCS_INV_RESEND_TRK |      2 |      1 |      2 |00:00:00.01 |       4 |      2 |       |       |          |  
|  11 |    SORT AGGREGATE                       |                       |      1 |      1 |      1 |00:00:00.01 |       2 |      0 |       |       |          |  
|  12 |     FIRST ROW                           |                       |      1 |      1 |      1 |00:00:00.01 |       2 |      0 |       |       |          |  
|* 13 |      INDEX RANGE SCAN (MIN/MAX)         | XPKBCS_INV_RESEND_TRK |      1 |      1 |      1 |00:00:00.01 |       2 |      0 |       |       |          |  
---------------------------------------------------------------------------------------------------------------------------------------------------------------  

The plan looks great – Oracle predicts a single row driver (operation 5) which can use a very good index (XIE2FNM_VSBL_MSG) in a nested loop, followed by a second nested loop, followed by a filter subquery and a sort of a tiny amount of data. Predictions match actuals all the way down the plan, and the workload is tiny. So what goes wrong in production?

You’ve probably guessed the flaw in this test. Why would anyone include a predicate like ‘INVOICENUMBER’ = ‘INVOICENUMBER’ in production code, or even worse ‘INVOICENUMBER’ = ‘SIEBELORDERID’? The OP has taken a query using bind variables, picked up the actual values that were peeked when the query was executed, and substituted them into the test as literals. This has allowed the optimizer to discard two simple predicates and one subquery, whereas the production query needs a plan that caters for the possibility that the second subquery is the one that has to be executed and the first one bypassed. Here’s the corrected where clause using SQL*Plus bind variables (not the substitution type, the proper type) in place of the original bind variables:


WHERE
        MSG.MSG_TYP_CD = '210_CUSTOMER_INVOICE'
AND     MSG.MSG_CAPTR_STG_CD = 'PRE_BCS'
AND     MSG.SRCH_4_FLD_VAL = :BindInvoiceTo
AND     (
            (    :BindSearchBy = 'INVOICENUMBER' 
             AND MSG.MSG_ID IN (
                        SELECT  *
                        FROM    TABLE(CAST(FNM_GN_IN_STRING_LIST(:BindInvoiceList) AS TABLE_OF_VARCHAR)))
            )
         OR (    :BindSearchBy = 'SIEBELORDERID' 
             AND MSG.SRCH_3_FLD_VAL IN (
                        SELECT  *
                        FROM    TABLE(CAST(FNM_GN_IN_STRING_LIST(:BindSeibelIDList) AS TABLE_OF_VARCHAR)))
            )
        )
AND     MSG.MSG_ID = TRK.INV_NUM(+)
AND     (   TRK.RESEND_DT IS NULL
         OR TRK.RESEND_DT = (
                        SELECT  MAX(TRK1.RESEND_DT)
                        FROM    FNM.BCS_INV_RESEND_TRK TRK1
                        WHERE   TRK1.INV_NUM = TRK.INV_NUM
                )
        )
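
For completeness, this is roughly how those binds could be declared and populated in SQL*Plus before running the test; the datatypes and sizes are guesses rather than values taken from the OP’s system:

variable BindInvoiceTo     varchar2(32)
variable BindSearchBy      varchar2(32)
variable BindInvoiceList   varchar2(128)
variable BindSeibelIDList  varchar2(128)

begin
        :BindSearchBy     := 'INVOICENUMBER';
        :BindInvoiceTo    := '123456';
        :BindInvoiceList  := '123456';
        :BindSeibelIDList := null;
end;
/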

And this, with the “once good” hint in place to force the use of the XIE2FNM_VSBL_MSG index, is the resulting execution plan


---------------------------------------------------------------------------------------------------------  
| Id  | Operation                           | Name                  | E-Rows |  OMem |  1Mem | Used-Mem |  
---------------------------------------------------------------------------------------------------------  
|   0 | SELECT STATEMENT                    |                       |        |       |       |          |  
|   1 |  SORT ORDER BY                      |                       |      1 | 73728 | 73728 |          |  
|*  2 |   FILTER                            |                       |        |       |       |          |  
|   3 |    NESTED LOOPS OUTER               |                       |      1 |       |       |          |  
|*  4 |     TABLE ACCESS BY INDEX ROWID     | FNM_VSBL_MSG          |      1 |       |       |          |  
|*  5 |      INDEX FULL SCAN                | XIE2FNM_VSBL_MSG      |   4975K|       |       |          |  
|*  6 |     INDEX RANGE SCAN                | XPKBCS_INV_RESEND_TRK |      1 |       |       |          |  
|*  7 |    COLLECTION ITERATOR PICKLER FETCH| FNM_GN_IN_STRING_LIST |      1 |       |       |          |  
|*  8 |    COLLECTION ITERATOR PICKLER FETCH| FNM_GN_IN_STRING_LIST |      1 |       |       |          |  
|   9 |    SORT AGGREGATE                   |                       |      1 |       |       |          |  
|  10 |     FIRST ROW                       |                       |      1 |       |       |          |  
|* 11 |      INDEX RANGE SCAN (MIN/MAX)     | XPKBCS_INV_RESEND_TRK |      1 |       |       |          |  
---------------------------------------------------------------------------------------------------------  
 
Predicate Information (identified by operation id):  
---------------------------------------------------  
   2 - filter((((:BINDSEARCHBY='INVOICENUMBER' AND  IS NOT NULL) OR  
              (:BINDSEARCHBY='SIEBELORDERID' AND  IS NOT NULL)) AND ("TRK"."RESEND_DT" IS NULL OR  
              "TRK"."RESEND_DT"=)))  
   4 - filter(("MSG"."SRCH_4_FLD_VAL"=:BINDINVOICETO AND "MSG"."MSG_CAPTR_STG_CD"='PRE_BCS'))  
   5 - access("MSG"."MSG_TYP_CD"='210_CUSTOMER_INVOICE')  
       filter("MSG"."MSG_TYP_CD"='210_CUSTOMER_INVOICE')  
   6 - access("MSG"."MSG_ID"="TRK"."INV_NUM")  
   7 - filter(VALUE(KOKBF$)=:B1)  
   8 - filter(VALUE(KOKBF$)=:B1)  
  11 - access("TRK1"."INV_NUM"=:B1)  

The “unnested driving subquery” approach can no longer be used: we now start with the fnm_vsbl_msg table (accessing it by a highly inefficient execution path, because that’s what the hint demands and this time the optimizer can obey the hint), and for each row we check which of the two subqueries needs to be executed. There is, in fact, no way we can hint this query to operate efficiently [at least, that’s my opinion; I may be wrong].

The story so far

If you’re going to try to use SQL*Plus (or similar) to test a production query with bind variables you can’t just use a sample of literal values in place of the bind variables (though you may get lucky sometimes, of course), you should set up some SQL*Plus variables and assign values to them.

Though I haven’t said it previously in this article, this is an example where a decision that really should have been made by the front-end code has been embedded in the SQL and passed to the database as SQL that cannot be run efficiently. The front-end code should have been written to recognise the choice between invoice numbers and Siebel order ids and send the appropriate query to the database.

Next Steps

Without making a significant change to the front-end mechanism, is it possible to change the SQL into something the optimizer can handle efficiently? Sometimes the answer is yes, so I’ve created a simpler model to demonstrate the basic problem and supply a solution for cases like this one. The key issue is finding a way of working around the OR clauses that are meant to let the optimizer choose between two subqueries but make it impossible for either subquery to be unnested into a small driving data set.

First, some tables:


rem
rem     Script:         or_in_twice.sql
rem     Author:         Jonathan Lewis
rem     Dated:          June 2019
rem
rem     Last tested 
rem             18.3.0.0
rem             12.2.0.1
rem

create table t1
as
with generator as (
        select 
                rownum id
        from dual 
        connect by 
                level <= 1e4 -- > comment to avoid WordPress format issue
)
select
        rownum                          id,
        mod(rownum,371)                 n1,
        lpad(rownum,10,'0')             v1,
        lpad('x',100,'x')               padding
from
        generator       v1,
        generator       v2
where
        rownum <= 1e4 -- > comment to avoid WordPress format issue
;

alter table t1 add constraint t1_pk primary key(id);

create table t2
as
with generator as (
        select 
                rownum id
        from dual 
        connect by 
                level <= 1e4 -- > comment to avoid WordPress format issue
)
select
        rownum                          id,
        mod(rownum,372)                 n1,
        lpad(rownum,10,'0')             v1,
        lpad('x',100,'x')               padding
from
        generator       v1,
        generator       v2
where
        rownum <= 1e4 -- > comment to avoid WordPress format issue
;

create table t3
as
with generator as (
        select 
                rownum id
        from dual 
        connect by 
                level <= 1e4 -- > comment to avoid WordPress format issue
)
select
        rownum                          id,
        mod(rownum,373)                 n1,
        lpad(rownum,10,'0')             v1,
        lpad('x',100,'x')               padding
from
        generator       v1,
        generator       v2
where
        rownum <= 1e4 -- > comment to avoid WordPress format issue
;


Now a query – first setting up a variable in SQL*Plus to allow us to emulate a production query with bind variables. Since I’m only going to use Explain Plan the variable won’t be peekable, so there would still be some scope for this plan not matching a production plan, but it’s adequate to demonstrate the structural problem:


variable v1 varchar2(10)
exec :v1 := 'INVOICE'

explain plan for
select
        t1.v1 
from
        t1
where
        (
            :v1 = 'INVOICE' 
        and t1.id in (select id from t2 where n1 = 0)
        )
or      (
            :v1 = 'ORDERID' 
        and t1.id in (select id from t3 where n1 = 0)
        )
;

select * from table(dbms_xplan.display);

---------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |    10 |   150 |    26   (4)| 00:00:01 |
|*  1 |  FILTER            |      |       |       |            |          |
|   2 |   TABLE ACCESS FULL| T1   | 10000 |   146K|    26   (4)| 00:00:01 |
|*  3 |   TABLE ACCESS FULL| T2   |     1 |     8 |    26   (4)| 00:00:01 |
|*  4 |   TABLE ACCESS FULL| T3   |     1 |     8 |    26   (4)| 00:00:01 |
---------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter(:V1='INVOICE' AND  EXISTS (SELECT 0 FROM "T2" "T2" WHERE
              "ID"=:B1 AND "N1"=0) OR :V1='ORDERID' AND  EXISTS (SELECT 0 FROM "T3"
              "T3" WHERE "ID"=:B2 AND "N1"=0))
   3 - filter("ID"=:B1 AND "N1"=0)
   4 - filter("ID"=:B1 AND "N1"=0)

As you can see, thanks to the OR that effectively gives Oracle the choice between running the subquery against t3 or the one against t2, Oracle is unable to do any unnesting. (In fact different versions of Oracle allow different levels of sophistication with disjuncts (OR) of subqueries, so this is the kind of example that’s always useful to keep for tests against future versions.)
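
If you can execute the query rather than just explain it, there’s a quick way to confirm at run time that only one of the two filter subqueries is ever started: enable rowsource execution statistics and report the plan with its Starts column. A sketch, using the standard dbms_xplan options:

set serveroutput off
alter session set statistics_level = all;

select
        t1.v1
from
        t1
where
        (
            :v1 = 'INVOICE'
        and t1.id in (select id from t2 where n1 = 0)
        )
or      (
            :v1 = 'ORDERID'
        and t1.id in (select id from t3 where n1 = 0)
        )
;

select * from table(dbms_xplan.display_cursor(null,null,'allstats last'));

With :v1 still set to 'INVOICE' you would expect the Starts column to report zero for the t3 subquery.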

Since we know that we are going to use the data set from just one of the two subqueries, with no risk of double-counting and no risk of eliminating required duplicates, one strategy we could adopt for this query is to rewrite the two subqueries as a single subquery with a union all – because we know the optimizer can usually handle a single IN subquery very nicely. So let’s try the following:


explain plan for
select
        t1.v1
from
        t1
where
        t1.id in (
                select  id 
                from    t2 
                where   n1 = 0
                and     :v1 = 'INVOICE'
                union all
                select  id 
                from    t3 
                where   n1 = 0
                and     :v1 = 'ORDERID'
        )
;

select * from table(dbms_xplan.display);

-----------------------------------------------------------------------------------
| Id  | Operation              | Name     | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------
|   0 | SELECT STATEMENT       |          |    54 |  1512 |    77   (3)| 00:00:01 |
|*  1 |  HASH JOIN             |          |    54 |  1512 |    77   (3)| 00:00:01 |
|   2 |   VIEW                 | VW_NSO_1 |    54 |   702 |    51   (2)| 00:00:01 |
|   3 |    HASH UNIQUE         |          |    54 |   432 |    51   (2)| 00:00:01 |
|   4 |     UNION-ALL          |          |       |       |            |          |
|*  5 |      FILTER            |          |       |       |            |          |
|*  6 |       TABLE ACCESS FULL| T2       |    27 |   216 |    26   (4)| 00:00:01 |
|*  7 |      FILTER            |          |       |       |            |          |
|*  8 |       TABLE ACCESS FULL| T3       |    27 |   216 |    26   (4)| 00:00:01 |
|   9 |   TABLE ACCESS FULL    | T1       | 10000 |   146K|    26   (4)| 00:00:01 |
-----------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - access("T1"."ID"="ID")
   5 - filter(:V1='INVOICE')
   6 - filter("N1"=0)
   7 - filter(:V1='ORDERID')
   8 - filter("N1"=0)


Thanks to the FILTERs at operations 5 and 7 this plan will pick the data from just one of the two subqueries, reduce it to a unique list and then use that as the build table to a hash join. Of course, with different data (or suitable hints) that hash join could become a nested loop using a high precision index.
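
For example, with the t1_pk index already in place, you could experiment with hints along the following lines – just a sketch, and depending on version the hints may need query-block qualification before the optimizer will apply them to the post-unnesting join:

explain plan for
select
        /*+ use_nl(t1) index(t1 t1_pk) */
        t1.v1
from
        t1
where
        t1.id in (
                select  id
                from    t2
                where   n1 = 0
                and     :v1 = 'INVOICE'
                union all
                select  id
                from    t3
                where   n1 = 0
                and     :v1 = 'ORDERID'
        )
;

If the hints are honoured the hash join at operation 1 becomes a nested loop driven by the unnested view, reaching t1 through its primary key index.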

But there’s an alternative. When we manually rewrote the two subqueries as a single union all subquery we also moved the bind variable comparisons inside their respective branches – so maybe we don’t need to introduce the union all at all. What would happen if we simply took the original query and moved the “constant” predicates inside their subqueries?


explain plan for
select
        t1.v1
from
        t1
where
        t1.id in (select id from t2 where n1 = 0 and :v1 = 'INVOICE')
or      t1.id in (select id from t3 where n1 = 0 and :v1 = 'ORDERID')
;

select * from table(dbms_xplan.display);


-----------------------------------------------------------------------------------
| Id  | Operation              | Name     | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------
|   0 | SELECT STATEMENT       |          |    54 |  1512 |    77   (3)| 00:00:01 |
|*  1 |  HASH JOIN             |          |    54 |  1512 |    77   (3)| 00:00:01 |
|   2 |   VIEW                 | VW_NSO_1 |    54 |   702 |    51   (2)| 00:00:01 |
|   3 |    HASH UNIQUE         |          |    54 |   432 |    51   (2)| 00:00:01 |
|   4 |     UNION-ALL          |          |       |       |            |          |
|*  5 |      FILTER            |          |       |       |            |          |
|*  6 |       TABLE ACCESS FULL| T3       |    27 |   216 |    26   (4)| 00:00:01 |
|*  7 |      FILTER            |          |       |       |            |          |
|*  8 |       TABLE ACCESS FULL| T2       |    27 |   216 |    26   (4)| 00:00:01 |
|   9 |   TABLE ACCESS FULL    | T1       | 10000 |   146K|    26   (4)| 00:00:01 |
-----------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - access("T1"."ID"="ID")
   5 - filter(:V1='ORDERID')
   6 - filter("N1"=0)
   7 - filter(:V1='INVOICE')
   8 - filter("N1"=0)

In 12.2.0.1 and 18.3.0.0 this query gets the same plan as our “single subquery” rewrite – the optimizer is able to construct the union all subquery itself (although the ordering of the two branches has been reversed) and unnest it without any further manual intervention. (You may find that earlier versions of Oracle don’t manage this transformation, though you might have to go all the way back to 10g to find one that fails.)
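
If you want to see evidence of the transformation the optimizer has applied, one option (a sketch, using standard dbms_xplan format options) is to repeat the explain plan and then request the Outline section of the report, where you would expect to find the hints describing the unnested union all view:

select * from table(dbms_xplan.display(null,null,'basic +outline'));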

Conclusion

Oracle doesn’t like disjuncts (OR) and finds conjuncts (AND) much easier to cope with. Mixing OR and subqueries is a good way to create inefficient execution plans, especially when you try to force the optimizer to handle a decision that should have been taken in the front-end code. The optimizer does, however, get increasingly skilled at handling the mixture as you move through the newer versions, but you may still have to find ways to give it a little help if you see it running filter subqueries when you were expecting it to unnest a subquery to produce a small driving data set.

 

Video : Ranking using RANK, DENSE_RANK and ROW_NUMBER : Problem Solving using Analytic Functions

Tim Hall - Mon, 2019-06-17 02:36

Today’s video is a run-through of ranking data using the RANK, DENSE_RANK and ROW_NUMBER analytic functions.
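
If you want a quick taste of the difference between the three functions before watching, here’s a minimal sketch – it assumes the classic scott.emp demo table is available:

select
        ename,
        sal,
        rank()       over (order by sal desc) rnk,   -- ties share a rank, gaps follow
        dense_rank() over (order by sal desc) drnk,  -- ties share a rank, no gaps
        row_number() over (order by sal desc) rn     -- unique numbering, ties broken arbitrarily
from
        scott.emp
order by
        sal desc
;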

There is more information about these and other analytic functions in the following articles.

The star of today’s video is Chris Saxon, who is one of the folks keeping the masses up to speed at AskTom.

Cheers

Tim…

