Feed aggregator

Huawei Dorado 6000 V3 benchmark

Yann Neuhaus - Wed, 2019-07-10 02:39

I had the opportunity to test the new Dorado 6000 V3 All-Flash storage system.
See what the all-new Dorado 6000 V3 All-Flash Storage system is capable of as storage for your database system.

Before you read

This is a series of different blog posts:
In the first blog post, I talk about “What you should measure on your database storage and why”.
The second blog post will talk about “How to do database storage performance benchmark with FIO”.
The third blog post will show “How good is the new HUAWEI Dorado 6000V3 All-Flash System for databases” measured with the methods and tools from post one and two (aka this one here).

The first two posts give you the theory to understand all the graphics and numbers I will show in the third blog post.

So in this post, we will see the results of testing a Huawei Dorado 6000V3 All-Flash storage system with these techniques.

I uploaded all the files to a github repository: Huawei-Dorado6000V3-Benchmark.

Foreword

The setup was provided by Huawei in Shenzhen, China. I got remote access with a timeout at a certain point. Every test run runs for 10 hours; because of the timeout I was sometimes not able to capture all performance view pictures, which is why some of the pictures are missing. Storage array and servers were provided free of charge, and Huawei did not exercise any influence on the results or conclusions in any way.

Setup

4 servers were provided, each with 4x 16 GBit/s FC adapters directly connected to the storage system.
Each server has 256 GByte of memory installed and 2x 14-core 2.6 GHz Intel E5-2690 CPUs.
Hyperthreading is disabled.
The 10 GBit/s network interfaces are irrelevant for this test because all storage traffic runs over FC.

The Dorado 6000 V3 System has 1 TByte of cache and 50x 900 GByte SSD from Huawei.
Deduplication was disabled.
Tests were made with and without compression.

Theoretical max speed

With 4x16GBit/s a maximal throughput of 64 GBit/s or 8 GByte/s is possible.
In IOPS this means we can transmit 8192 IOPS with 1 MByte block size or 1’048’576 IOPS with 8 KByte block size.
As the section title says, this is the theoretical or raw bandwidth; the usable bandwidth or payload is, of course, smaller: an FC frame is 2112 bytes with 36 bytes of protocol overhead.
So in a 64 GBit/s FC network we can transfer: 64 GBit/s / 8 ==> 8 GByte/s; 8 GByte/s * 1024 ==> 8192 MByte/s (raw); 8192 MByte/s * (100-(36/2.112))/100 ==> 6795 MByte/s (payload).

So we end up with a maximum of 6795 IOPS@1MByte or 869’841 IOPS@8KByte (payload). Not included is the effect that we are using multipathing* over 4x 16 GBit/s, which will also consume some of the bandwidth.

*If somebody out there has a method to calculate the overhead of multipathing in such a setup, please contact me!

Single-Server Results: General

All single server tests were made on devices with data compression enabled. Unfortunately, I no longer have the results from my tests with uncompressed devices for a single server, but you can see the difference in the multi-server section.

8 KByte block size

The 8 KByte block size tests on a single server were very performant.
What we can already tell: the higher the parallelity, the better the storage performs. This is not really a surprise; most storage systems work better the higher the parallel access is.
Especially for 1 thread, we see the difference between having one disk in a diskgroup and being able to use 3967 IOPS, or using e.g. 5 disks with 1 thread and being able to use 16700 IOPS.
The latency for all tests was great, with 0.25 ms to 0.4 ms for read operations and 0.1 to 0.4 ms for write operations.
The 0.1 ms for write is not that impressive, because it is mainly the performance of the write cache, but even when we exceeded the write cache we were not higher than 0.4 ms.

1 MByte block size

On the 1 MByte tests, we see that we already hit the max speed somewhere between 6 devices (parallelity of 6) and 9 devices (parallelity of 2).

As an example of how to interpret the graphic: if you look at the green line (6 devices), we reach the peak performance at a parallelity of 6.
For the dark blue line (7 devices) we hit the peak at a parallelity of 4, and so on.

If we increase the parallelity beyond this point, the latency grows and the throughput may even decrease.
For the 1 MByte tests, we hit a limitation at around 6280 IOPS. This is around 90% of the calculated maximum speed.

So if we go with Oracle ASM, we should bundle at least 5 devices together into a diskgroup.
We also see that when we run a diskgroup rebalance we should go for a small rebalance power. A value smaller than 4 should be chosen; every value over 8 is counterproductive, will consume all possible I/O on your system and will slow down all databases on this server.
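
As a hedged illustration (the diskgroup name and disk path are made up, not taken from this test setup), limiting the rebalance power in ASM looks like this:

-- example only: keep the rebalance power low on a busy system
ALTER DISKGROUP DATA REBALANCE POWER 4;
-- or specify it directly when adding a disk
ALTER DISKGROUP DATA ADD DISK '/dev/mapper/newname11' REBALANCE POWER 4;
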
Monitoring / Verification

To verify the results, I am using Oracle’s IO calibration (dbms_resource_manager.calibrate_io) on the very same devices the performance test was running on. The expectation is that we will see more or less the same results.

On large IO, the 6231 IOPS measured by IO calibration is almost the same as measured by FIO (+/- 1%).
IO calibration measured 604K IOPS for small IO, which is significantly more than the +/- 340k IOPS measured with FIO. This is explainable because IO calibration uses the number of disks for the parallelity and I did this test with 20 disks instead of 10. Sadly, when I realized my mistake, I no longer had access to the system.
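
For reference, a minimal sketch of such an IO calibration run with DBMS_RESOURCE_MANAGER.CALIBRATE_IO could look like the following (the parameter values are only an example, not the exact call used for this test):

SET SERVEROUTPUT ON
DECLARE
  l_max_iops       PLS_INTEGER;
  l_max_mbps       PLS_INTEGER;
  l_actual_latency PLS_INTEGER;
BEGIN
  -- num_physical_disks drives the parallelity of the calibration run
  DBMS_RESOURCE_MANAGER.CALIBRATE_IO(
    num_physical_disks => 10,
    max_latency        => 20,
    max_iops           => l_max_iops,
    max_mbps           => l_max_mbps,
    actual_latency     => l_actual_latency);
  DBMS_OUTPUT.PUT_LINE('max_iops: ' || l_max_iops || ', max_mbps: ' || l_max_mbps);
END;
/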

In the following pictures you see the performance view of the storage system with the data measured by FIO as an overlay. As we can see, the values for the IOPS match perfectly.
The value for latency was lower on the storage part, which is explainable with the different points where we are measuring (once on the storage side, once on the server side).
All screenshots of the live performance view of the storage can be found in the git repository. The values for queue depth, throughput, and IOPS matched the measured results perfectly.


Multi-Server Results with compression: General

The tests for compressed and uncompressed devices were made with 3 parallel servers.

8 KByte block size

For random read with 8 KByte blocks, the IOPS increased almost linearly from 1 to 3 nodes and we hit a peak of 655’000 IOPS with 10 devices / 10 threads. The answer time was between 0.3 and 0.45 ms.
For random write, we hit some kind of limitation at around 250k IOPS. We could not get a higher value than that, which was kind of surprising for me. I would have expected better results here.
From the point where we hit the maximum number of IOPS, we see the same behavior as with 1 MByte blocks: more threads only increase the answer time but do not get you better performance.
So for random write with 8 KByte blocks, the maximum is reached at around 3 devices and 10 threads, or 10 devices and 3 threads, i.e. a parallelity of 30.
As long as we stay under this limit we see answer times between 0.15 and 0.5 ms; over this limit the answer times can increase up to 10 ms.
1 MByte block size

The multi-server tests show some interesting behavior with large reads on this storage system.
We hit a limitation at around 7500 to 7800 IOPS. For sequential write, we could achieve almost double this result with up to 14.5k IOPS.

Of course, I discussed all the results with Huawei to see their view on my tests.
The explanation for the much better performance on write than on read was: writes go straight to the 1 TByte cache, while for reads the system had to fetch everything from disk. This beta firmware version did not have any read cache, and that’s why the results were lower. All firmware versions starting from the end of February also have a read cache.
I accept this answer and hope to retest in the future with the newest firmware, though I still think 7500 IOPS is a little bit low even without a read cache.
Multi-Server Results without compression

Comparing the results for compressed devices to uncompressed devices, we see an increase in IOPS of up to 30% and a decrease in latency of the same order for the 8 KByte block size.
For 1 MByte sequential read, the difference was smaller at around 10%; for 1 MByte sequential write we could gain an increase of around 15-20%.

Multi-Server Results with high parallelity: General

Because the tests with 3 servers did not max out the storage at the 8 KByte block size, I decided to do a max test with 4 parallel servers and with a parallelity from 1-100 instead of 1-10.
The steps were 1,5,10,15,20,30,40,50,75 and 100.
These tests were only performed on uncompressed devices.

8 KByte block size

It took 15 threads (per server) with 10 devices: 60 processes in total to reach the peak performance of the Dorado 6000V3 system.
At this point, we reached 940k IOPS @0.637 ms for 8 KByte random read. Remembering the answer that this firmware version does not have any read cache, this performance is achieved completely from the SSDs and could theoretically be even better with a read cache enabled.
If we increase the parallelity further, we see the same effect as with 1 MByte blocks: the answer time is increasing (dramatically) and the throughput is decreasing.

Depending on the number of parallel devices, we need between 60 parallel processes (with 10 devices) up to 300 parallel processes (with 3 parallel devices).

1 MByte block size

For the large IOs, we see the same picture as with 1 or 3 servers: a combined parallelity of 20-30 can max out the storage system. So be very careful that your large IO tasks do not affect the other operations on the storage system.

Mixed Workload

After these tests, we know the upper limits for this storage in single-case tests. In a normal workload, we will never see only one kind of IO: there will always be a mixture of 8 KByte read & write IOPS side by side with 1 MByte IO. To simulate this, we create two FIO job files. One creates approx. 40k-50k IOPS with random read and random write in a 50/50 split.
This is our baseline; then we add approx. 1000 1 MByte IOPS every 60 seconds and see how the answer time reacts.
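
As a hedged sketch only (device names, rate limits and runtimes are illustrative and not the exact job files used; the step-wise addition of the 1 MByte load was done with additional jobs), the two job files look roughly like this:

# baseline.fio - approx. 50k IOPS of 8 KByte random read/write (50/50)
[global]
ioengine=libaio
direct=1
time_based
runtime=3600
group_reporting=1

[baseline-8k-randrw]
filename=/dev/mapper/device01
bs=8k
rw=randrw
rwmixread=50
rate_iops=25000,25000
iodepth=16

# largeio.fio - adds approx. 1000 IOPS of 1 MByte sequential read
[large-1m-read]
filename=/dev/mapper/device02
bs=1M
rw=read
rate_iops=1000
iodepth=8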


As seen in this picture from the performance monitor of the storage system, the 1 MByte blocks had two effects on the smaller IOPS:
  • The throughput of the small IOPS is decreasing
  • The latency is increasing.
In the middle of the test, we stop the small IOPS to see the latency of just the 1 MByte IOPS.

Both effects are expected and within the expected parameters: Test passed.

So with a base workload of 40k-50k IOPS, we can run e.g. backups in parallel with a bandwidth up to 5.5 GByte/s without interfering with the database work or we can do up to 5 active duplicates on the same storage without interfering with the other databases.

Summary

This storage system showed fantastic performance at 8 KByte block size with very low latency. Especially the high number of parallel processes we can run against it before we hit peak performance makes it a good choice for serving a large number of Oracle databases.

The large IO (1 MByte) performance for write operations was good, but not on the same level as the excellent 8 KByte performance. The sequential read part badly misses the read cache compared to the performance that is possible for writes. But even that is not top of the line compared to other storage systems: I have seen other storage systems with a comparable configuration which were able to deliver up to 12k IOPS@1MByte.

Remember the questions from the first blog post:
-How many devices should I bundle into a diskgroup for best performance?
As many as possible.

-How many backups/duplicates can I run in parallel to my normal database workload without interfering with it?
You can run 5 parallel backups/duplicates with 1000 IOPS each without interfering with a baseline of 40-50k IOPS@8KByte.

-What is the best rebalance power I can use on my system?
2-4 is absolutely enough for this system. More will slow down your other operations on the server.

The article Huawei Dorado 6000 V3 benchmark appeared first on Blog dbi services.

Storage performance benchmarking with FIO

Yann Neuhaus - Wed, 2019-07-10 02:18

Learn how to do storage performance benchmarks for your database system with the open source tool FIO.

Before you read

This is a series of different blog posts:
In the first blog post, I talk about “What you should measure on your database storage and why”.
The second blog post will talk about “How to do database storage performance benchmark with FIO” (aka this one here).
The third blog post will show “How good is the new HUAWEI Dorado 6000V3 All-Flash System for databases” measured with the methods and tools from post one and two.

The first two posts give you the theory to understand all the graphics and numbers I will show in the third blog post.

Install FIO

Many distributions have FIO in their repositories. On a Fedora/RHEL system, you can just use
yum install fio
and you are ready to go.

Start a benchmark with FIO

There are mainly two different ways to start a benchmark with FIO

Command line

Starting from the command line is the way to go when you just want to get a quick feeling for the system performance.
For more complex setups I prefer job files; they are easier to create and debug.
Here is a small example of how to start a benchmark directly from the command line:
fio --filename=/dev/xvdf --direct=1 --rw=randwrite --refill_buffers --norandommap \
--randrepeat=0 --ioengine=libaio --bs=128k --rate_iops=1280 --iodepth=16 --numjobs=1 \
--time_based --runtime=86400 --group_reporting --name=benchtest

FIO Job files

An FIO job file holds a [GLOBAL] section and one or many [JOBS] sections. The global section holds the shared parameters which are used for all the jobs as long as you do not override them in the job sections.
Here is what a typical GLOBAL section from my files looks like:
[global]
ioengine=libaio    #ASYNCH IO
invalidate=1       #Invalidate buffer-cache for the file prior to starting I/O.
                   #Should not be necessary because of direct IO but just to be sure ;-)
ramp_time=5        #First 5 seconds do not count to the result.
iodepth=1          #Number of I/O units to keep in flight against the file
runtime=60         #Runtime for every test
time_based         #If given, run for the specified runtime duration even if the files are completely read or written.
                   #The same workload will be repeated as many times as the runtime allows.
direct=1           #Use non buffered I/O.
group_reporting=1  #If set, display per-group reports instead of per-job when numjobs is specified.
per_job_logs=0     #If set, this generates bw/clat/iops log with per file private filenames.
                   #If not set, jobs with identical names will share the log filename.
bs=8k              #Block size
rw=randread        #I/O Type

Now that we have defined the basics, we can start with the JOBS section:
Example of single device test with different parallelity:


#
#Subtest: 1
#Total devices = 1
#Parallelity = 1
#Number of processes = devices*parallelity ==> 1*1 ==> 1
#
[test1-subtest1-blocksize8k-threads1-device1of1]     #Parallelity 1, Number of device: 1/1
stonewall                               #run this test until the next [JOB SECTION] with the “stonewall” keyword
filename=/dev/mapper/device01           #Device to use
numjobs=1                               #Create the specified number of clones of this job.
                                        #Each clone of job is spawned as an independent thread or process.
                                        #May be used to setup a larger number of threads/processes doing the same thing.
                                        #Each thread is reported separately: to see statistics for all clones as a whole
                                        #use group_reporting in conjunction with new_group.
#
#Subtest: 5
#Total devices = 1
#Parallelity = 5
#Number of processes = devices*parallelity ==> 1*5 ==> 5
#
[test1-subtest5-blocksize8k-threads5-device1of1]     #Parallelity 5, Number of device: 1/1
stonewall
numjobs=5
filename=/dev/mapper/device01

Example of multi device test with different parallelity:

#Subtest: 1
#Total devices = 4
#Parallelity = 1
#Number of processes = devices*parallelity ==> 4
#
[test1-subtest1-blocksize8k-threads1-device1of4]     # Parallelity 1, Number of device 1/4
stonewall
numjobs=1
filename=/dev/mapper/device01
[test1-subtest1-blocksize8k-threads1-device2of4]     # Parallelity 1, Number of device 2/4
numjobs=1
filename=/dev/mapper/device02
[test1-subtest1-blocksize8k-threads1-device3of4]     # Parallelity 1, Number of device 3/4
numjobs=1
filename=/dev/mapper/device03
[test1-subtest1-blocksize8k-threads1-device4of4]     # Parallelity 1, Number of device 4/4
numjobs=1
filename=/dev/mapper/device04
#
#Subtest: 5
#Total devices = 3
#Parallelity = 5
#Number of processes = devices*parallelity ==> 5
#
[test1-subtest5-blocksize8k-threads5-device1of3]     # Parallelity 5, Number of device 1/3
stonewall
numjobs=5
filename=/dev/mapper/device01
[test1-subtest5-blocksize8k-threads5-device2of3]     # Parallelity 5, Number of device 2/3
filename=/dev/mapper/device02
[test1-subtest5-blocksize8k-threads5-device3of3]     # Parallelity 5, Number of device 3/3
filename=/dev/mapper/device03

You can download a complete set of FIO job files for running the described test case from my GitHub repository.
Job files list

To run a complete test with my job files you have to replace the devices. There is a small shell script called “replaceDevices.sh” to replace the devices.

#!/bin/bash
######################################################
# dbi services michael.wirz@dbi-services.com
# Version: 1.0
#
# usage: ./replaceDevices.sh
#
# todo before use: modify newname01-newname10 with
# the name of your devices
######################################################
sed -i -e 's_/dev/mapper/device01_/dev/mapper/newname01_g' *.fio
sed -i -e 's_/dev/mapper/device02_/dev/mapper/newname02_g' *.fio
sed -i -e 's_/dev/mapper/device03_/dev/mapper/newname03_g' *.fio
sed -i -e 's_/dev/mapper/device04_/dev/mapper/newname04_g' *.fio
sed -i -e 's_/dev/mapper/device05_/dev/mapper/newname05_g' *.fio
sed -i -e 's_/dev/mapper/device06_/dev/mapper/newname06_g' *.fio
sed -i -e 's_/dev/mapper/device07_/dev/mapper/newname07_g' *.fio
sed -i -e 's_/dev/mapper/device08_/dev/mapper/newname08_g' *.fio
sed -i -e 's_/dev/mapper/device09_/dev/mapper/newname09_g' *.fio
sed -i -e 's_/dev/mapper/device10_/dev/mapper/newname10_g' *.fio

!!! After you have replaced the filenames, double-check that you have the correct devices, because when you start the test, all data on these devices is lost !!!

grep filename *.fio|awk -F '=' '{print $2}'|sort -u
/dev/mapper/device01
/dev/mapper/device02
/dev/mapper/device03
/dev/mapper/device04
/dev/mapper/device05
/dev/mapper/device06
/dev/mapper/device07
/dev/mapper/device08
/dev/mapper/device09
/dev/mapper/device10

To start the test run:

for job_file in $(ls *.fio)
do
    fio ${job_file} --output /tmp/bench/${job_file%.fio}.txt
done

Multiple Servers

FIO supports running tests on multiple servers in parallel, which is very nice! Often a single server cannot max out a modern all-flash storage system; this could be because of bandwidth problems (e.g. not enough adapters per server) or because one server is just not powerful enough.

You need to start FIO in server mode on all machines you want to test:
fio --server

Then you start the test with
fio --client=serverA,serverB,serverC /path/to/fio_jobs.file

Should you have a lot of servers you can put them in a file and use this as input for your fio command:


cat fio_hosts.list
serverA
serverB
serverC
serverD
...

fio --client=fio_hosts.list /path/to/fio_jobs.file

Results

The output files are not really human readable, so you can use my getResults.sh script which formats the output ready to copy/paste into Excel:


cd /home/user/Huawei-Dorado6000V3-Benchmark/TESTRUN5-HOST1_3-COMPR/fio-benchmark-output
bash ../../getResults.sh
###########################################
START :Typerandread-BS8k
FUNCTION: getResults
###########################################
Typerandread-BS8k
LATENCY IN MS
.399 .824 1.664 2.500 3.332 5.022 6.660 8.316 12.464 16.683
.392 .826 1.667 2.495 3.331 4.995 6.680 8.344 12.474 16.637
.397 .828 1.661 2.499 3.330 4.992 6.656 8.329 12.505 16.656
.391 .827 1.663 2.493 3.329 5.002 6.653 8.330 12.482 16.656
.398 .827 1.663 2.497 3.327 5.005 6.660 8.327 12.480 16.683
.403 .828 1.662 2.495 3.326 4.995 6.663 8.330 12.503 16.688
.405 .825 1.662 2.496 3.325 4.997 6.648 8.284 12.369 16.444
.417 .825 1.661 2.497 3.326 4.996 6.640 8.256 12.303 16.441
.401 .826 1.661 2.500 3.327 4.999 6.623 8.273 12.300 16.438
.404 .826 1.661 2.500 3.327 4.993 6.637 8.261 12.383 16.495
IOPS
2469 6009 5989 5986 5991 5966 5998 6006 6012 5989
5004 12000 11000 11000 11000 11000 11000 11000 12000 12000
7407 17000 18000 17000 17000 18000 18000 17000 17000 17000
10000 23000 23000 24000 23000 23000 24000 23000 24000 23000
12300 29000 29000 29000 30000 29900 29000 29000 30000 29900
14600 35900 35000 35000 36000 35000 35000 35000 35000 35900
16000 42100 41000 41000 42000 41000 42100 42200 42400 42500
16500 42100 41000 41900 42000 41000 42100 42400 42600 42500
19600 48000 47000 47900 47000 47900 48300 48300 48700 48600
21900 54000 53000 53900 53000 53000 54200 54400 54400 54400
###########################################
START :Typerandwrite-BS8k
FUNCTION: getResults
###########################################
Typerandwrite-BS8k
LATENCY IN MS
.461 .826 1.662 2.501 3.332 5.022 6.660 8.317 12.467 16.676
.457 .826 1.668 2.495 3.330 5.002 6.681 8.346 12.473 16.635
.449 .826 1.662 2.499 3.327 4.991 6.664 8.326 12.497 16.649
.456 .828 1.661 2.496 3.331 4.997 6.663 8.329 12.477 16.651
.460 .827 1.663 2.495 3.327 5.001 6.660 8.333 12.484 16.676
.463 .830 1.663 2.495 3.325 4.997 6.661 8.330 12.503 16.684
.474 .827 1.661 2.495 3.324 4.999 6.665 8.334 12.451 16.580
.469 .828 1.661 2.497 3.324 5.002 6.668 8.322 12.489 16.594
.471 .827 1.660 2.499 3.327 4.998 6.663 8.335 12.481 16.609
.476 .825 1.675 2.500 3.328 4.992 6.675 8.334 12.480 16.623
IOPS
2137 5997 5990 5985 5991 5966 5998 6005 6010 5992
4306 12000 11900 11000 11000 11000 11000 11000 12000 12000
6571 17000 17000 17000 18000 18000 17000 17000 17000 18000
8635 23900 23000 23000 23000 23000 23000 23000 24000 24000
10700 29000 29000 29000 30000 29900 29000 29000 30000 29000
12800 35900 35000 35000 36000 35000 35000 35000 35000 35900
14500 41000 41000 41000 42000 41000 41000 41000 42100 42200
14700 41000 41000 41900 42000 41900 41900 42000 42000 42100
16700 48000 48000 47900 47000 47000 47000 47900 47000 48100
18600 54100 53500 53900 53000 54000 53900 53900 53000 54100
...

Copy & paste the result into the Excel template and you get an easy overview of the results:
fio summary excel

Troubleshooting

If you’ve got a libaio error you have to install the libaio libraries:

fio: engine libaio not loadable
fio: failed to load engine
fio: file:ioengines.c:89, func=dlopen, error=libaio: cannot open shared object file: No such file or directory

yum install libaio-devel

The article Storage performance benchmarking with FIO appeared first on Blog dbi services.

Witty Screen Names and Why You Should Use Them

VitalSoftTech - Tue, 2019-07-09 10:08

There are several reasons why someone would require a screen name for social media. Everyone manages their privacy in their unique ways. Some are more comfortable letting on about themselves to the oldest and most trusted friends. Similarly, others tell their grave dark tales to strangers on trains or in these days, social media. Considering […]

The post Witty Screen Names and Why You Should Use Them appeared first on VitalSoftTech.

Categories: DBA Blogs

Change Item Icon Dynamically

Jeff Kemp - Tue, 2019-07-09 04:18

The floating item type has an optional “Icon” property that allows you to render an icon next to the item, which can help users quickly identify what the item is for. This is especially helpful when the form has a lot of items.

The icon attribute can be static, e.g. fa-hashtag, or it can be chosen based on the value of another item, e.g. &P1_FA_ICON..

If you want the icon to change dynamically as the user enters or modifies data, it’s a little bit more complicated. I have a list item based on a table of asset categories, and each asset category has an icon assigned to it. When the user selects an asset category from the list I want it to get the icon from the table and show it in the item straight away.

To do this, I use two Dynamic Actions: (1) a PL/SQL action which updates the hidden Pn_FA_ICON item, and (2) a Javascript action which manipulates the displayed icon next to the list item.

This is my item and its two dynamic actions. The Icon attribute causes the icon to be shown when the page is loaded.

The Execute PL/SQL Code action is a simple PL/SQL block which gets the icon from the reference table for the selected category code. Make sure the “Wait for Result” is “Yes”, and make sure the Items to Submit and Items to Return are set to P260_CATEGORY_CODE and P260_CATEGORY_FA_ICON, respectively.

select x.fa_icon
into   :P260_CATEGORY_FA_ICON
from   asset_categories x
where  x.code = :P260_CATEGORY_CODE;

On examining the source of the page, we see that the select item is immediately followed by a span which shows the icon:

The Execute JavaScript Code action finds the item (in this case, the triggering element), then searches the DOM for the following span with the apex-item-icon class. Once found, it resets the classes on the span with a new set of classes, including the new icon.
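
The code itself is short; a minimal sketch of such a JavaScript action could look like this (the wrapper class and the assumption that the item holds a Font APEX / Font Awesome class name like “fa-car” are mine, not taken from the original):

// sketch only: swap the icon classes on the span next to the list item
var icon = apex.item("P260_CATEGORY_FA_ICON").getValue();
$(this.triggeringElement)
  .closest(".t-Form-itemWrapper")      // assumed Universal Theme wrapper
  .find("span.apex-item-icon")
  .removeClass()
  .addClass("fa apex-item-icon " + icon);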

It’s a little gimmicky but it’s an easy way to delight users, and it might help them to quickly identify data entry mistakes.

Warning: due to the way the JavaScript manipulates the DOM, this method is not guaranteed to work correctly in future releases of APEX, so it will need to be retested after upgrades.

Pepkor Europe Selects Oracle Cloud as a Platform for Growth

Oracle Press Releases - Tue, 2019-07-09 04:00
Press Release
Pepkor Europe Selects Oracle Cloud as a Platform for Growth

London and Redwood Shores, Calif.—Jul 9, 2019

Pepkor Europe, the leading pan-European variety discount retailer, has chosen Oracle Cloud to support the planned future growth of its brands, PEPCO, Poundland and Dealz. Pepkor sells clothing and fast-moving consumer goods such as food, health, beauty products, and general merchandise to families on a budget across Europe.

“The Pepkor Europe brands serve customers in 14 countries through over 2,000 stores, offering a diverse and constantly evolving range of products, delivering great value to our customers, aided by being a high-volume business. We are confident that the centralised and enhanced inventory management capability that Oracle Retail provides, will improve our operational agility and flexibility through better visibility into inventory and margins,” said Andy Bond, chief executive officer, Pepkor Europe. “After a rigorous evaluation, we chose Oracle as our partner for this key element of our infrastructure transformation.”

Pepkor Europe will leverage Oracle Retail Merchandising Cloud Service to unify inventory management and Oracle Enterprise Resource Planning (ERP) Cloud to automate and streamline the organisation’s end-to-end financial management processes.

“Pepkor Europe needed a technology foundation that would match the requirements of its business and deliver a new level of insight and operational efficiency,” said Mike Webster, senior vice president and general manager, Oracle Retail. “From backend financials to managing complex retail operations, only Oracle Cloud can provide the end-to-end solutions Pepkor Europe needs to continue its international expansion while supporting multiple accounting approaches, currencies, languages, and legal entities.”

Contact Info
Kris Reeves
Oracle
+1.925.787.6744
kris.reeves@oracle.com
Nick Wharton
Pepkor Europe
07880 784319
nick.w@pepkor.co.uk
About Oracle Retail

Oracle is the modern platform for retail. Oracle provides retailers with a complete, open, and integrated platform for best-of-breed business applications, cloud services, and hardware that are engineered to work together. Leading fashion, grocery, and specialty retailers use Oracle solutions to accelerate from best practice to next practice, drive operational agility and refine the customer experience. For more information, visit our website at www.oracle.com/retail.

About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

About Pepkor Europe

Pepkor Europe was established in 2015 and comprised three strong, independent value retailers PEPCO, Poundland and Dealz.  Its vertically-integrated global sourcing arm, PGS enables its retail brands to deliver the value its customers demand in general merchandise and apparel.  In FMCG, thanks to its scale, it can offer widely recognised grocery brands at a significant discount.

PEPCO, Poundland & Dealz operate across some of Europe’s largest economies. Pepkor Europe now has 2,473 stores in 14 countries including the UK, the Republic of Ireland, Spain and across the CEE region, employing over 33,000 people.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Talk to a Press Contact

Kris Reeves

  • +1.925.787.6744

Nick Wharton

  • 07880 784319

[Video] Oracle Exadata Cloud Service(ExaCS) Offerings

Online Apps DBA - Tue, 2019-07-09 00:50

[Video] Oracle Exadata Cloud Service(ExaCS) Offerings Exadata Cloud Service is available in 4 different configurations or shapes and 2 models. 1. What are the 4 shapes available in ExaCS? 2. Which is the newly released shape of ExaCS? 3. What are the specifications of each shape? 4. How does the Exadata Machine Model affect the […]

The post [Video] Oracle Exadata Cloud Service(ExaCS) Offerings appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Converting columns from one data type to another in PostgreSQL

Yann Neuhaus - Mon, 2019-07-08 00:19

Usually you should use the data type that best fits the representation of your data in a relational database. But how many times have you seen applications that store dates or numbers as text, or dates as integers? This is not as uncommon as you might think, and fixing it can be quite a challenge, as you need to cast from one data type to another when you want to change the data type used for a specific column. Depending on the current format of the data it might be easy to fix or it might become more complicated. PostgreSQL has a quite clever way of doing that.

Frequent readers of our blog might know that already: We start with a simple, reproducible test setup:

postgres=# create table t1 ( a int, b text );
CREATE TABLE
postgres=# insert into t1 values ( 1, '20190101');
INSERT 0 1
postgres=# insert into t1 values ( 2, '20190102');
INSERT 0 1
postgres=# insert into t1 values ( 3, '20190103');
INSERT 0 1
postgres=# select * from t1;
 a |    b     
---+----------
 1 | 20190101
 2 | 20190102
 3 | 20190103
(3 rows)

What do we have here? A simple table with two columns: Column “a” is an integer and column “b” is of type text. For humans it seems obvious that the second column in reality contains a date but stored as text. What options do we have to fix that? We could try something like this:

postgres=# alter table t1 add column c date default (to_date(b,'YYYYMMDD'));
psql: ERROR:  cannot use column reference in DEFAULT expression

That obviously does not work. Another option would be to add another column with the correct data type, populate that column and then drop the original one:

postgres=# alter table t1 add column c date;
ALTER TABLE
postgres=# update t1 set c = to_date(b,'YYYYMMDD');
UPDATE 3
postgres=# alter table t1 drop column b;
ALTER TABLE

But what is the downside of that? This will probably break the application as the column name changed and there is no way to avoid that. Is there a better way of doing that? Let’s start from scratch:

postgres=# create table t1 ( a int, b text );
CREATE TABLE
postgres=# insert into t1 values ( 1, '20190101');
INSERT 0 1
postgres=# insert into t1 values ( 2, '20190102');
INSERT 0 1
postgres=# insert into t1 values ( 3, '20190103');
INSERT 0 1
postgres=# select * from t1;
 a |    b     
---+----------
 1 | 20190101
 2 | 20190102
 3 | 20190103
(3 rows)

The same setup as before. What other options do we have to convert "b" to a real date without changing the name of the column? Let's try the most obvious way and let PostgreSQL decide what to do:

postgres=# alter table t1 alter column b type date;
psql: ERROR:  column "b" cannot be cast automatically to type date
HINT:  You might need to specify "USING b::date".

This does not work, as PostgreSQL in this case cannot know how to go from one data type to another. But the “HINT” already tells us what we might need to do:

postgres=# alter table t1 alter column b type date using (b::date);
ALTER TABLE
postgres=# \d t1
                 Table "public.t1"
 Column |  Type   | Collation | Nullable | Default 
--------+---------+-----------+----------+---------
 a      | integer |           |          | 
 b      | date    |           |          | 

postgres=# 

For our data in the “b” column that does work. But consider you have data like this:

postgres=# drop table t1;
DROP TABLE
postgres=# create table t1 ( a int, b text );
CREATE TABLE
postgres=# insert into t1 values (1,'01-JAN-2019');
INSERT 0 1
postgres=# insert into t1 values (2,'02-JAN-2019');
INSERT 0 1
postgres=# insert into t1 values (3,'03-JAN-2019');
INSERT 0 1
postgres=# select * from t1;
 a |      b      
---+-------------
 1 | 01-JAN-2019
 2 | 02-JAN-2019
 3 | 03-JAN-2019
(3 rows)

Would that still work?

postgres=# alter table t1 alter column b type date using (b::date);;
ALTER TABLE
postgres=# select * from t1;
 a |     b      
---+------------
 1 | 2019-01-01
 2 | 2019-01-02
 3 | 2019-01-03
(3 rows)

Yes, but in this case it will not:

DROP TABLE
postgres=# create table t1 ( a int, b text );
CREATE TABLE
postgres=# insert into t1 values (1,'First--January--19');
INSERT 0 1
postgres=# insert into t1 values (2,'Second--January--19');
INSERT 0 1
postgres=# insert into t1 values (3,'Third--January--19');
INSERT 0 1
postgres=# select * from t1;
 a |          b           
---+---------------------
 1 | First--January--19
 2 | Second--January--19
 3 | Third--January--19
(3 rows)

postgres=# alter table t1 alter column b type date using (b::date);;
psql: ERROR:  invalid input syntax for type date: "First--January--19"
postgres=# 

As PostgreSQL has no idea how to do the conversion, this will fail, no surprise here. But you still have the option of providing a function that does the conversion in exactly the way you want:

create or replace function f_convert_to_date ( pv_text in text ) returns date
as $$
declare
begin
  return date('20190101');
end;
$$ language plpgsql;

Of course you would add logic to parse the input string so that the function will return the matching date and not a constant as in this example. For demonstration purposes we will go with this fake function:
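
As an illustration only (the demonstration below still uses the constant-returning fake function), a naive parser for this made-up format could look like this:

create or replace function f_convert_to_date ( pv_text in text ) returns date
as $$
declare
  lv_day int;
begin
  -- map the spelled-out day of the made-up 'First--January--19' format
  lv_day := case split_part(pv_text,'--',1)
              when 'First'  then 1
              when 'Second' then 2
              when 'Third'  then 3
            end;
  return to_date(lv_day || ' ' || split_part(pv_text,'--',2) || ' 20' || split_part(pv_text,'--',3)
                ,'DD Month YYYY');
end;
$$ language plpgsql;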

postgres=# alter table t1 alter column b type date using (f_convert_to_date(b));;
ALTER TABLE
postgres=# \d t1
                 Table "public.t1"
 Column |  Type   | Collation | Nullable | Default 
--------+---------+-----------+----------+---------
 a      | integer |           |          | 
 b      | date    |           |          | 

postgres=# select * from t1;
 a |     b      
---+------------
 1 | 2019-01-01
 2 | 2019-01-01
 3 | 2019-01-01
(3 rows)

… and here we go. The column was converted from text to date and we provided the exact way of doing it by calling a function that contains the conversion logic. As long as the output of the function conforms to the data type you want and you did not make any mistakes, you can potentially go from any source data type to any target data type.

There is one remaining question: Will that block other sessions selecting from the table while the conversion is ongoing?

postgres=# drop table t1;
DROP TABLE
postgres=# create table t1 ( a int, b text );
CREATE TABLE
postgres=# insert into t1 select a, '20190101' from generate_series(1,1000000) a;
INSERT 0 1000000
postgres=# create index i1 on t1(a);
CREATE INDEX

In one session we will do the conversion and in the other session we will do a simple select that goes over the index:

-- first session
postgres=# alter table t1 alter column b type date using (f_convert_to_date(b));

Second one at the same time:

-- second session
postgres=# select * from t1 where a = 1;
-- blocks

Yes, that will block, so you should plan such actions carefully when you have a busy system. But this is still better than adding a new column.

The article Converting columns from one data type to another in PostgreSQL appeared first on Blog dbi services.

Telling the PostgreSQL optimizer more about your functions

Yann Neuhaus - Sun, 2019-07-07 05:29

When you reference/call functions in PostgreSQL, the optimizer does not really know much about the cost nor the number of rows that a function returns. This is not really surprising, as it is hard to predict what the function is doing and how many rows will be returned for a given set of input parameters. What you might not know is that you can indeed tell the optimizer a bit more about your functions.

As usual let’s start with a little test setup:

postgres=# create table t1 ( a int, b text, c date );
CREATE TABLE
postgres=# insert into t1 select a,a::text,now() from generate_series(1,1000000) a;
INSERT 0 1000000
postgres=# create unique index i1 on t1(a);
CREATE INDEX
postgres=# analyze t1;
ANALYZE

A simple table containing 1’000’000 rows and one unique index. In addition let’s create a simple function that will return exactly one row from that table:

create or replace function f_tmp ( a_id in int ) returns setof t1
as $$
declare
begin
  return query select * from t1 where a = $1;
end;
$$ language plpgsql;

What is the optimizer doing when you call that function?

postgres=# explain (analyze) select f_tmp (1);
                                         QUERY PLAN                                         
--------------------------------------------------------------------------------------------
 ProjectSet  (cost=0.00..5.27 rows=1000 width=32) (actual time=0.654..0.657 rows=1 loops=1)
   ->  Result  (cost=0.00..0.01 rows=1 width=0) (actual time=0.003..0.004 rows=1 loops=1)
 Planning Time: 0.047 ms
 Execution Time: 0.696 ms
(4 rows)

We know that only one row will be returned but the optimizer is assuming that 1000 rows will be returned. This is the default and documented. So, no matter how many rows will really be returned, PostgreSQL will always estimate 1000. But you have some control and can tell the optimizer that the function will return one row only:

create or replace function f_tmp ( a_id in int ) returns setof t1
as $$
declare
begin
  return query select * from t1 where a = $1;
end;
$$ language plpgsql
   rows 1;

Looking at the execution plan again:

postgres=# explain (analyze) select f_tmp (1);
                                        QUERY PLAN                                        
------------------------------------------------------------------------------------------
 ProjectSet  (cost=0.00..0.27 rows=1 width=32) (actual time=0.451..0.454 rows=1 loops=1)
   ->  Result  (cost=0.00..0.01 rows=1 width=0) (actual time=0.003..0.004 rows=1 loops=1)
 Planning Time: 0.068 ms
 Execution Time: 0.503 ms
(4 rows)

Instead of 1000 rows, we now see that only 1 row was estimated, which is what we specified when we created the function. Of course this is a very simple example and in reality you often might not be able to tell exactly how many rows will be returned from a function. But at least you can provide a better estimate than the default of 1000. In addition you can also specify a cost for your function (based on cpu_operator_cost):

create or replace function f_tmp ( a_id in int ) returns setof t1
as $$
declare
begin
  return query select * from t1 where a = $1;
end;
$$ language plpgsql
   rows 1
   cost 1;

If you use functions remember that you can give the optimizer more information and that there is a default of 1000.

The article Telling the PostgreSQL optimizer more about your functions appeared first on Blog dbi services.

[Video] Oracle Autonomous Database Overview : ADW, ATP, Serverless & Dedicated Infrastructure

Online Apps DBA - Fri, 2019-07-05 02:18

[Video] Oracle Autonomous Database Overview : ADW, ATP, Serverless & Dedicated Infrastructure Oracle Autonomous Database is a combination of Exadata with Database and Infrastructure Automation on Oracle Gen 2 Cloud. Autonomous Databases are of two types based on workload: 1. Autonomous Data Warehouse (ADW) 2. Autonomous Transaction Processing (ATP) Autonomous Databases can be deployed in […]

The post [Video] Oracle Autonomous Database Overview : ADW, ATP, Serverless & Dedicated Infrastructure appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

SQL Server containers and docker network driver performance considerations

Yann Neuhaus - Fri, 2019-07-05 01:45

A few months ago I attended the Franck Pachot session about microservices and databases at SOUG Romandie in Lausanne on May 21st, 2019. He covered some performance challenges that can be introduced by a microservices architecture design, especially when database components come into the game with chatty applications. One year ago, I was in a situation where a customer installed some SQL Server 2017 Linux containers in a Docker infrastructure with user applications located outside of this infrastructure. It is likely an uncommon way to start with containers, but anyway, when you immerse yourself in the Docker world you quickly notice there are a lot of network drivers and considerations you should be aware of. Just for the sake of curiosity, I proposed to my customer to perform some network benchmark tests to get a clear picture of these network drivers and their related overhead, in order to design the Docker infrastructure correctly from a performance standpoint.

The initial customer scenario included a standalone Docker infrastructure, and we considered different approaches to the application network configuration from a performance perspective. We did the same for the second scenario, which concerned a Docker Swarm infrastructure we installed in a second step.

The Initial reference – Host network and Docker host network

The first point was to get an initial reference with no network management overhead, directly from the network host. We used the iperf3 tool for the tests. This is the kind of tool I use with virtual environments as well to ensure the network throughput is what we really expect, and sometimes I got some surprises on this topic. So, let’s go back to the container world: each test was performed from a Linux host outside of the concerned Docker infrastructure, according to the customer scenario.

The link speed of the attached network card of the Docker host is supposed to be 10 GBit/s …

$ sudo ethtool eth0 | grep "Speed"
        Speed: 10000Mb/s

 

… and it is confirmed by the first iperf3 output below:

Let’s say that we tested the Docker host driver as well and we got similar results.

$ docker run  -it --rm --name=iperf3-server  --net=host networkstatic/iperf3 -s

 

Docker bridge mode

The default modus operandi for a Docker host is to create a virtual ethernet bridge (called docker0), attach each container’s network interface to the bridge, and use network address translation (NAT) when containers need to make themselves visible to the Docker host and beyond. Unless specified otherwise, a Docker container will use it by default, and this is exactly the network driver used by the containers in the context of my customer. In fact, we used a user-defined bridge network, but I would say it doesn’t matter for the tests we performed here.

$ ip addr show docker0
5: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:70:0a:e8:7a brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
    inet6 fe80::42:70ff:fe0a:e87a/64 scope link
       valid_lft forever preferred_lft forever

 

The iperf3 Docker container I ran for my tests is using the default bridge network as shown below. The interface with index 24 corresponds to the veth0bfc2dc peer of the concerned container.

$ docker run  -d --name=iperf3-server -p 5204:5201 networkstatic/iperf3 -s
…
$ docker ps | grep iperf
5c739940e703        networkstatic/iperf3              "iperf3 -s"              38 minutes ago      Up 38 minutes                0.0.0.0:5204->5201/tcp   iperf3-server
$ docker exec -ti 5c7 ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
24: eth0@if25: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever

[clustadmin@docker1 ~]$ ethtool -S veth0bfc2dc
NIC statistics:
     peer_ifindex: 24

 

Here is the output after running the iperf3 benchmark:

It’s worth noting that the “bridge” network adds some overhead, with an impact of 13% in my tests, but in fact this is an expected outcome, especially if we refer to the Docker documentation:

Compared to the default bridge mode, the host mode gives significantly better networking performance since it uses the host’s native networking stack whereas the bridge has to go through one level of virtualization through the docker daemon.

 

When the docker-proxy comes into play

The next scenario we wanted to test concerned the closest network proximity we may have between the user applications and the SQL Server containers in the Docker infrastructure. In other words, we assumed the application resides on the same host as the SQL Server container, and we got some surprises from the docker-proxy itself.

Before showing the iperf3 result, I think we have to answer the million-dollar question here: what is the docker-proxy? Have you ever paid attention to this process on your Docker host? Let’s run a pstree command:

$ pstree
systemd─┬─NetworkManager───2*[{NetworkManager}]
        ├─agetty
        ├─auditd───{auditd}
        ├─containerd─┬─containerd-shim─┬─npm─┬─node───9*[{node}]
        │            │                 │     └─9*[{npm}]
        │            │                 └─12*[{containerd-shim}]
        │            ├─containerd-shim─┬─registry───9*[{registry}]
        │            │                 └─10*[{containerd-shim}]
        │            ├─containerd-shim─┬─iperf3
        │            │                 └─9*[{containerd-shim}]
        │            └─16*[{containerd}]
        ├─crond
        ├─dbus-daemon
        ├─dockerd─┬─docker-proxy───7*[{docker-proxy}]
        │         └─20*[{dockerd}]

 

Well, if I understand the Docker documentation correctly, the purpose of this process is to enable a service consumer to communicate with the service-providing container … but it’s only used in particular circumstances. Just bear in mind that controlling access to a container’s service is mostly done through the host netfilter framework, in both the NAT and filter tables, and the docker-proxy mechanism is required only when this method of control is not available:

  • When the Docker daemon is started with --iptables=false or --ip-forward=false, or when the Linux host cannot act as a router because the Linux kernel parameter net.ipv4.ip_forward is set to 0. This is not my case here.
  • When you use localhost in the connection string of your application, which implies using the loopback interface (127.0.0.0/8), and the kernel doesn’t allow routing traffic from it. Therefore, it’s not possible to apply netfilter NAT rules and instead, netfilter sends packets through the filter table’s INPUT chain to the local docker-proxy process listening on the published port
$ sudo iptables -L -n -t nat | grep 127.0.0.0
DOCKER     all  --  0.0.0.0/0           !127.0.0.0/8          ADDRTYPE match dst-type LOCAL

 

In the picture below you will notice I’m using the localhost keyword in my connection string, so the docker-proxy comes into play.

A huge performance impact for sure: about 28%. This performance drop may be explained by the fact that the docker-proxy process is consuming 100% of my CPUs:

The docker-proxy operates in userland and I may simply disable it with the Docker daemon parameter “userland-proxy”: false, but I would say this is a case we would not encounter in practice, because applications will never use localhost in their connection strings. By the way, changing the connection string from localhost to the IP address of the container host gives a very different outcome, similar to the Docker bridge network scenario.
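
For completeness, a sketch of how the userland proxy would be disabled (we did not change this here): the setting goes into the Docker daemon configuration, typically /etc/docker/daemon.json, followed by a restart of the Docker daemon.

{
  "userland-proxy": false
}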

 

Using an overlay network

Using a single Docker host doesn’t fit well with HA or scalability requirements, and in a mission-critical environment I strongly doubt any customer will go this way. I recommended my customer to consider using an orchestrator like Docker Swarm or K8s to anticipate the future container workload coming from future projects. The customer picked Docker Swarm for its easier implementation compared to K8s.

 

After implementing a proof of concept for testing purposes (3 nodes including one manager and two worker nodes), we took the opportunity to measure the potential overhead implied by the overlay network, which is the common driver used by containers through stacks and services in such a situation. Referring to the Docker documentation, overlay networks manage communication among the Docker daemons participating in the swarm and are used by the services deployed on it. Here are the Docker nodes in the swarm infrastructure:

$ docker node ls
ID                            HOSTNAME                    STATUS              AVAILABILITY        MANAGER STATUS      ENGINE VERSION
vvdofx0fjzcj8elueoxoh2irj *   docker1.dbi-services.test   Ready               Active              Leader              18.09.5
njq5x23dw2ubwylkc7n6x63ly     docker2.dbi-services.test   Ready               Active                                  18.09.5
ruxyptq1b8mdpqgf0zha8zqjl     docker3.dbi-services.test   Ready               Active                                  18.09.5

 

An ingress overlay network is created by default when setting up a swarm cluster. User-defined overlay network may be created afterwards and extends to the other nodes only when needed by containers.

$ docker network ls | grep overlay
NETWORK ID    NAME              DRIVER   SCOPE
ehw16ycy980s  ingress           overlay  swarm
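
Creating such a user-defined overlay network is a one-liner; as a sketch (the network name is just an example), the --attachable flag additionally allows standalone containers to connect to it:

$ docker network create --driver overlay --attachable demo_overlay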

 

Here is the result of the iperf3 benchmark:

Well, the same result as in the previous test, with a performance drop of roughly 30%. Compared to the initial reference this is again an expected outcome, but I didn’t imagine how big the impact could be in such a case. The overlay network introduces additional overhead by putting together, behind the scenes, a VXLAN tunnel (a virtual Layer 2 network on top of an existing Layer 3 infrastructure), VTEP endpoints for encapsulation/de-encapsulation, and traffic encryption by default.

Here is a summary of the different scenarios and their performance impact:

Scenario                    Throughput (GB/s)    Performance impact
Host network                10.3                 -
Docker host network         10.3                 -
Docker bridge network       8.93                 0.78
Docker proxy                7.37                 0.71
Docker overlay network      7.04                 0.68

 

In the particular case of my customer, where SQL Server instances sit on the Docker infrastructure and applications reside outside of it, it’s clear that using the Docker host network directly may be a good option from a performance standpoint, assuming this infrastructure remains simple with few SQL Server containers. But in this case, we have to change the SQL Server default listen port with the MSSQL_TCP_PORT parameter, because Docker host networking doesn’t provide port mapping capabilities. According to our tests, we didn’t get any evidence of performance improvement in terms of application response time between the Docker network drivers, probably because those applications are not network bound here, but I can imagine scenarios where they could be. Finally, the scenario encountered here is likely uncommon, and I see containerized apps with database components outside the Docker infrastructure more often, but that doesn’t change the game at all and the same considerations apply … Today I’m very curious to test real microservices scenarios where database and application components are all sitting on a Docker infrastructure.
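
As a hedged sketch of that option (container name, password, port and volume path are placeholders, not the customer setup), running a SQL Server 2017 container on the host network with a non-default port could look like this:

docker run -d --name mssql01 --net=host \
  -e "ACCEPT_EULA=Y" \
  -e "SA_PASSWORD=MyStr0ng!Passw0rd" \
  -e "MSSQL_TCP_PORT=1455" \
  -v /docker/mssql01/data:/var/opt/mssql \
  mcr.microsoft.com/mssql/server:2017-latest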

See you!

 

The article SQL Server containers and docker network driver performance considerations appeared first on Blog dbi services.

Kafka and Football: KSQL, Google Natural Language APIs, BigQuery and DataStudio

Rittman Mead Consulting - Thu, 2019-07-04 02:08

If you missed it, yesterday I wrote a guest blog post for Confluent! The blog post mixes two of my favorite topics: Apache Kafka and Football! The post starts by defining the data ingestion from Twitter and sports news RSS feeds via Kafka Connect, continues with the definition of a KSQL UDF function using the Google Natural Language APIs for sentiment analysis, and then defines the data sink to Google BigQuery and the data visualization with Google Data Studio.
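
As a rough sketch only (class and function names are mine, error handling is simplified, and the actual UDF in the post may differ), such a KSQL UDF wrapping the Google Natural Language API can look like this:

import com.google.cloud.language.v1.Document;
import com.google.cloud.language.v1.LanguageServiceClient;
import com.google.cloud.language.v1.Sentiment;
import io.confluent.ksql.function.udf.Udf;
import io.confluent.ksql.function.udf.UdfDescription;

@UdfDescription(name = "sentiment_score",
                description = "Sentiment score via Google Natural Language API")
public class SentimentScoreUdf {

  @Udf(description = "Return the sentiment score (-1.0 .. 1.0) of the given text")
  public double sentimentScore(final String text) {
    try (LanguageServiceClient language = LanguageServiceClient.create()) {
      // build a plain-text document and ask the API for its overall sentiment
      Document doc = Document.newBuilder()
          .setContent(text)
          .setType(Document.Type.PLAIN_TEXT)
          .build();
      Sentiment sentiment = language.analyzeSentiment(doc).getDocumentSentiment();
      return sentiment.getScore();
    } catch (Exception e) {
      return 0.0; // sketch: treat failures as neutral sentiment
    }
  }
}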

The last bit of the post is dedicated to data analysis with both KSQL and DataStudio on top of the quarterfinal match won by England against Norway. If you are interested in the full article, check it out here!

Categories: BI & Warehousing

Wipe APEX mail queue

Jeff Kemp - Thu, 2019-07-04 01:32

Refreshing any of our non-prod environments (e.g. dev, test, etc.) with a clone from production is a fairly regular process at my client. A recurring issue with this is emails: we’ve had occasion where users have received a second copy of an email immediately after the clone has completed. This was confusing because they thought the event that had triggered the email actually occurred twice.

As it turns out, the duplicate emails were caused by the fact that the emails happened to be waiting in the APEX mail queue in production at the time of the export. After the export, the APEX mail queue was processed normally in production and the users received their emails as expected; after the clone was completed, the database jobs were restarted in the cloned environment which duly processed the emails sitting in the cloned queue and the users effectively got the same emails a second time.

What’s worse, if the same export were to be used for multiple clones, the users might get the same emails again and again!

A good way to solve this sort of issue would be to isolate the non-prod environments behind a specially configured mail server with a whitelist of people who want (and expect) to get emails from the non-prod systems. We don’t have this luxury at this client, however.

Instead, we have a post_clone.sql script which is run by the DBAs immediately after creating the clone. They already stop all the jobs by setting job_queue_processes=0.

In case the mail queue happens to have any emails waiting to be sent, the post clone script now includes the following step:

begin
*** WARNING: DO NOT RUN THIS IN PRODUCTION! ***
  for r in (
    select workspace_id
          ,workspace
    from apex_workspaces
    ) loop
    apex_application_install.set_workspace_id (r.workspace_id);
    apex_util.set_security_group_id
      (p_security_group_id => apex_application_install.get_workspace_id);
    delete apex_mail_queue;
  end loop;
  commit;
end;
/

This script is run as SYS but it could also be run as SYSTEM or as APEX_nnnnnn, depending on your preference.

ADDENDUM: Overriding the From Email Address

Christian Neumüller commented that an additional technique that might be useful is to override the From (sender) email address to indicate which environment each email was sent from. To do this, run something like the following:

begin
  apex_instance_admin.set_parameter('EMAIL_FROM_OVERRIDE',
    'apex-' || sys_context('userenv','db_name') || '@mydomain');
end;

I’ve tested this in APEX 19.1 and it seems to work fine. Regardless of the p_from parameter that the code passes to apex_mail.send, the EMAIL_FROM_OVERRIDE email address is used instead.
Note that this is currently undocumented, so this may stop working or change in a future release.

Oracle ADF A Status Update

Andrejus Baranovski - Wed, 2019-07-03 14:11
Oracle posted an information update for Oracle ADF - "With the continuous investment and usage of Oracle ADF inside Oracle we expect external customers will also continue to enjoy the benefits of Oracle ADF for many more years."

Read the complete post here: https://blogs.oracle.com/jdeveloperpm/oracle-adf-a-status-update

Happy to read the update, sounds positive. Thanks to Oracle for taking time and publishing this information. #adf #middleware #javascript #oracle #cloud #oraclefusion

Serving Prophet Model with Flask — Predicting Future

Andrejus Baranovski - Wed, 2019-07-03 08:23
A solution demonstrating how to serve a Prophet model API on the web with Flask. Prophet is an open-source Python library developed by Facebook to predict time series data.

An accurate forecast and future prediction are crucial for almost any business. This is obvious and doesn’t need explanation. Time series data is data ordered by date, where typically each date is assigned one or more values specific to that date. Machine-learning-powered models can generate forecasts based on time series data, and such forecasts can be an important source of information for business decisions.

Read more in my Towards Data Science post.

Null Display Value on Read-only List Item

Jeff Kemp - Wed, 2019-07-03 04:00

The updated Universal Theme has added new “Floating” item templates which look great, e.g.:

I had a list item which I wanted to leave optional; if the user leaves it null, I wanted it to show a “default” display value (derived at runtime). To implement this, I added a hidden item (P10_DEPTNO_DEFAULT) and on the list item set Null Display Value to &P10_DEPTNO_DEFAULT..

If the page is shown in read-only mode, however, the list item is rendered as a Display Item, and the Null Display Value attribute is ignored:

To solve this, I added a Dynamic Action which injects the default value into the HTML for display (without affecting the value of the underlying item):

  • Event: Page Load
  • Server-side Condition: <page is readonly> AND :P10_DEPTNO IS NULL
  • Action: Execute JavaScript Code
  • Fire on Initialization: No
  • Code:
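
(A hedged sketch only; the container selector and display-only class are assumptions about how the Universal Theme renders the item, not taken from the original post.)

// sketch: show the default display value in the read-only span,
// without touching the (null) session state of P10_DEPTNO
$("#P10_DEPTNO_CONTAINER span.apex-item-display-only")
  .text($v("P10_DEPTNO_DEFAULT"));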

This finds the span for the display-only item and injects the default display value for display:

If you want to see this in action for yourself, here is a demo: https://apex.oracle.com/pls/apex/f?p=APEXTEST:DISPSHOWDEFAULT&c=JK64

Snapchat Usernames that are Interesting and More You

VitalSoftTech - Tue, 2019-07-02 09:52

Snapchat is an application for Android and Apple by Evan Spiegel and Bobby Murphy. It is a social media messenger that allows users to share their photos and videos with their friends. It is essential to have a cool Snapchat username that will help you portray your personality and entertain your friends and family. One […]

The post Snapchat Usernames that are Interesting and More You appeared first on VitalSoftTech.

Categories: DBA Blogs

Oracle Transparent Data Encryption and the world of Multitenant Database (Oracle 12c)

VitalSoftTech - Tue, 2019-07-02 09:45

Step-by-step instructions on how to secure the Oracle Database Datafiles and the Operating System Data Files using Oracle 12c Transparent Data Encryption. Learn more ..

The post Oracle Transparent Data Encryption and the world of Multitenant Database (Oracle 12c) appeared first on VitalSoftTech.

Categories: DBA Blogs

Using DbVisualizer to work with #Oracle, #PostgreSQL and #Exasol

The Oracle Instructor - Tue, 2019-07-02 09:01

As a Database Developer or Database Administrator, it becomes increasingly unlikely that you will work with only one platform.

It’s quite useful to have one single tool to handle multiple different database platforms. And that’s exactly the ambition of DbVisualizer.

As a hypothetical scenario, let’s assume you are a database admin who works on a project to migrate from Oracle to EDB Postgres and Exasol.

The goal might be to replace the corporate Oracle database landscape, moving the OLTP part to EDB Postgres and the DWH / Analytics part to Exasol.

Instead of having to switch constantly between say SQL Developer, psql and EXAplus, a more efficient approach would be using DbVisualizer for all three.

I created one connection for each of the three databases here for my demo. Now let’s see if statements I do in Oracle also work in EDB Postgres and in Exasol:

Oracle

EDB

Exasol

Works the same for all three! The convenient thing here is that I just had to select the Database Connection from the pull down menu while leaving the statement as it is. No need to copy & paste even.

What about schemas and tables?

Oracle

In EDB, I need to create a schema accordingly:

EDB

 

In Exasol, schema and table can be created in the same way:

Exasol

Notice that the data types got silently translated into the proper Exasol data types:

Exasol

There is no DBA_TABLES in Exasol, though:

Exasol

Of course, there’s much more to check and test upon migration, but I think you got an idea of how a universal SQL client like DbVisualizer might help for such purposes.

 

Categories: DBA Blogs

PeopleSoft ReConnect 2019

Jim Marion - Mon, 2019-07-01 09:37

It is about two weeks until PeopleSoft ReConnect, and definitely time to build your schedule. I'm looking forward to a great conference with partners such as Appsian, psadmin.io, SpearMC, Presence of IT, Gideon Taylor, PS Web Solutions, New Resources Consulting, Oracle, and colleagues such as Sasank Vemana. There are so many great sessions available. I personally have several overlapping sessions on my agenda. In fact, I am delivering sessions during timeslots that list sessions I would like to attend.

If you still have room in your schedule, here are the sessions I will be presenting at ReConnect 2019. I hope you aren't leaving early because both of my sessions are on Thursday, the final day of the conference.

See you there!
