Feed aggregator

New Release: ROracle 1.3-2, the DBI-compliant driver for R and Oracle Database

Christopher Jones - Thu, 2019-05-30 17:45

ROracle 1.3-2, the DBI-compliant driver for R and Oracle Database, has been released.

R is a powerful statistical language. ROracle can be used with Oracle Database to perform sophisticated analysis. Many users also use it as part of Oracle R Enterprise.

ROracle 1.3-2 source code, documentation, and binaries (built for R 3.6.0 on Linux and Windows) are available on OTN. This patch update isn't on CRAN due to an ongoing issue with CRAN infrastructure.

Don't all rush! There are only a couple of small changes in this update:

  • you can now specify the Oracle Database column type using the ora.type attribute on a data frame column when calling dbWriteTable

  • ora.encoding is used for DML, when specified

Questions and comments can be posted on the OTN R Technologies forum.

Workarounds for JPPD with view and table(kokbf$), xmltable or json_table functions

XTended Oracle SQL - Thu, 2019-05-30 09:05

You may know that the table() (kokbf$ collection) functions, xmltable and json_table block join predicate push-down (JPPD).

Simple example:

DDL

create table xtest(a, b, c) as
select mod(level,1000),level,rpad('x',100,'x')
from dual
connect by level<=1e4
/
create index itest on xtest(a)
/
create or replace view vtest as
select a,count(b) cnt
from xtest
group by a
/
call dbms_stats.gather_table_stats(user,'xtest');
/


select distinct v.* 
from table(sys.odcinumberlist(1,2,3)) c, vtest v
where v.a = c.column_value;

Plan hash value: 699667151

-------------------------------------------------------------------------------------------------
| Id  | Operation                               | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                        |       |     1 |    19 |    80   (4)| 00:00:01 |
|   1 |  HASH UNIQUE                            |       |     1 |    19 |    80   (4)| 00:00:01 |
|*  2 |   HASH JOIN                             |       |     1 |    19 |    79   (3)| 00:00:01 |
|   3 |    COLLECTION ITERATOR CONSTRUCTOR FETCH|       |     1 |     2 |    29   (0)| 00:00:01 |
|   4 |    VIEW                                 | VTEST |  1000 | 17000 |    49   (3)| 00:00:01 |
|   5 |     HASH GROUP BY                       |       |  1000 |  8000 |    49   (3)| 00:00:01 |
|   6 |      TABLE ACCESS FULL                  | XTEST | 10000 | 80000 |    48   (0)| 00:00:01 |
-------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("V"."A"=VALUE(KOKBF$))

The same happens with json_table:
select/*+ cardinality(c 1) use_nl(v) push_pred(v) */ * 
from json_table('{"a":[1,2,3]}', '$.a[*]' COLUMNS (a int PATH '$')) c
    ,vtest v
where c.a = v.a;

Plan hash value: 664523328

--------------------------------------------------------------------------------
| Id  | Operation              | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------
|   0 | SELECT STATEMENT       |       |     1 |    28 |    78   (2)| 00:00:01 |
|   1 |  NESTED LOOPS          |       |     1 |    28 |    78   (2)| 00:00:01 |
|   2 |   JSONTABLE EVALUATION |       |       |       |            |          |
|*  3 |   VIEW                 | VTEST |     1 |    26 |    49   (3)| 00:00:01 |
|   4 |    SORT GROUP BY       |       |  1000 |  8000 |    49   (3)| 00:00:01 |
|   5 |     TABLE ACCESS FULL  | XTEST | 10000 | 80000 |    48   (0)| 00:00:01 |
--------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   3 - filter("V"."A"="P"."A")

Hint Report (identified by operation id / Query Block Name / Object Alias):
Total hints for statement: 1 (U - Unused (1))
---------------------------------------------------------------------------

   1 -  SEL$F534CA49 / V@SEL$1
         U -  push_pred(v)


The same happens with xmltable:

select/*+ leading(c v) cardinality(c 1) use_nl(v) push_pred(v) */ v.*
from  xmltable('(1,3)' columns a int path '.') c,vtest v
where  c.a = v.a(+);

Plan hash value: 564839666

------------------------------------------------------------------------------------------------------------
| Id  | Operation                          | Name                  | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                   |                       |     1 |    28 |    78   (2)| 00:00:01 |
|   1 |  NESTED LOOPS OUTER                |                       |     1 |    28 |    78   (2)| 00:00:01 |
|   2 |   COLLECTION ITERATOR PICKLER FETCH| XQSEQUENCEFROMXMLTYPE |     1 |     2 |    29   (0)| 00:00:01 |
|*  3 |   VIEW                             | VTEST                 |     1 |    26 |    49   (3)| 00:00:01 |
|   4 |    SORT GROUP BY                   |                       |  1000 |  8000 |    49   (3)| 00:00:01 |
|   5 |     TABLE ACCESS FULL              | XTEST                 | 10000 | 80000 |    48   (0)| 00:00:01 |
------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   3 - filter("V"."A"(+)=CAST(TO_NUMBER(SYS_XQ_UPKXML2SQL(SYS_XQEXVAL(VALUE(KOKBF$),0,0,54525952,0),
              50,1,2)) AS int ))

Hint Report (identified by operation id / Query Block Name / Object Alias):
Total hints for statement: 1 (U - Unused (1))
---------------------------------------------------------------------------

   1 -  SEL$6722A2F6 / V@SEL$1
         U -  push_pred(v)


And compare with this:

create global temporary table temp_collection(a number);

insert into temp_collection select * from table(sys.odcinumberlist(1,2,3));

select/*+ cardinality(c 1) no_merge(v) */
   distinct v.* 
from temp_collection c, vtest v
where v.a = c.a;

Plan hash value: 3561835411

------------------------------------------------------------------------------------------------------------
| Id  | Operation                                | Name            | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                         |                 |     1 |    26 |    41   (3)| 00:00:01 |
|   1 |  HASH UNIQUE                             |                 |     1 |    26 |    41   (3)| 00:00:01 |
|   2 |   NESTED LOOPS                           |                 |     1 |    26 |    40   (0)| 00:00:01 |
|   3 |    TABLE ACCESS FULL                     | TEMP_COLLECTION |     1 |    13 |    29   (0)| 00:00:01 |
|   4 |    VIEW PUSHED PREDICATE                 | VTEST           |     1 |    13 |    11   (0)| 00:00:01 |
|*  5 |     FILTER                               |                 |       |       |            |          |
|   6 |      SORT AGGREGATE                      |                 |     1 |     8 |            |          |
|   7 |       TABLE ACCESS BY INDEX ROWID BATCHED| XTEST           |    10 |    80 |    11   (0)| 00:00:01 |
|*  8 |        INDEX RANGE SCAN                  | ITEST           |    10 |       |     1   (0)| 00:00:01 |
------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   5 - filter(COUNT(*)>0)
   8 - access("A"="C"."A")

You can see that JPPD works fine in the case of global temporary tables, so the first, obvious workaround is to avoid such functions with complex views.
But in such simple queries you have two other simple options:
1. You can avoid JPPD and still get an optimal plan through CVM (complex view merging), by simply rewriting the query using IN or EXISTS:

select * 
from vtest v
where v.a in (select/*+ cardinality(c 1) */ c.* from table(sys.odcinumberlist(1,2,3)) c);

Plan hash value: 1474391442

---------------------------------------------------------------------------------------------------
| Id  | Operation                                 | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                          |       |    10 |   100 |    42   (5)| 00:00:01 |
|   1 |  SORT GROUP BY NOSORT                     |       |    10 |   100 |    42   (5)| 00:00:01 |
|   2 |   NESTED LOOPS                            |       |    10 |   100 |    41   (3)| 00:00:01 |
|   3 |    NESTED LOOPS                           |       |    10 |   100 |    41   (3)| 00:00:01 |
|   4 |     SORT UNIQUE                           |       |     1 |     2 |    29   (0)| 00:00:01 |
|   5 |      COLLECTION ITERATOR CONSTRUCTOR FETCH|       |     1 |     2 |    29   (0)| 00:00:01 |
|*  6 |     INDEX RANGE SCAN                      | ITEST |    10 |       |     1   (0)| 00:00:01 |
|   7 |    TABLE ACCESS BY INDEX ROWID            | XTEST |    10 |    80 |    11   (0)| 00:00:01 |
---------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   6 - access("A"=VALUE(KOKBF$))

The same approach works with json_table and xmltable:

select * 
from vtest t
where t.a in (select/*+ cardinality(v 1) */ v.a from json_table('{"a":[1,2,3]}', '$.a[*]' COLUMNS (a int PATH '$')) v);

Plan hash value: 2910004067

---------------------------------------------------------------------------------------
| Id  | Operation                     | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |       |    10 |   100 |    42   (5)| 00:00:01 |
|   1 |  SORT GROUP BY NOSORT         |       |    10 |   100 |    42   (5)| 00:00:01 |
|   2 |   NESTED LOOPS                |       |    10 |   100 |    41   (3)| 00:00:01 |
|   3 |    NESTED LOOPS               |       |    10 |   100 |    41   (3)| 00:00:01 |
|   4 |     SORT UNIQUE               |       |       |       |            |          |
|   5 |      JSONTABLE EVALUATION     |       |       |       |            |          |
|*  6 |     INDEX RANGE SCAN          | ITEST |    10 |       |     1   (0)| 00:00:01 |
|   7 |    TABLE ACCESS BY INDEX ROWID| XTEST |    10 |    80 |    11   (0)| 00:00:01 |
---------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   6 - access("A"="P"."A")

select v.*
from  vtest v
where exists(select/*+ cardinality(c 1) */ 1 from xmltable('(1,3)' columns a int path '.') c where c.a = v.a);

Plan hash value: 1646016183

---------------------------------------------------------------------------------------------------------------
| Id  | Operation                             | Name                  | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                      |                       |    10 |   100 |    42   (5)| 00:00:01 |
|   1 |  SORT GROUP BY NOSORT                 |                       |    10 |   100 |    42   (5)| 00:00:01 |
|   2 |   NESTED LOOPS                        |                       |    10 |   100 |    41   (3)| 00:00:01 |
|   3 |    NESTED LOOPS                       |                       |    10 |   100 |    41   (3)| 00:00:01 |
|   4 |     SORT UNIQUE                       |                       |     1 |     2 |    29   (0)| 00:00:01 |
|   5 |      COLLECTION ITERATOR PICKLER FETCH| XQSEQUENCEFROMXMLTYPE |     1 |     2 |    29   (0)| 00:00:01 |
|*  6 |     INDEX RANGE SCAN                  | ITEST                 |    10 |       |     1   (0)| 00:00:01 |
|   7 |    TABLE ACCESS BY INDEX ROWID        | XTEST                 |    10 |    80 |    11   (0)| 00:00:01 |
---------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   6 - access("A"=CAST(TO_NUMBER(SYS_XQ_UPKXML2SQL(SYS_XQEXVAL(VALUE(KOKBF$),0,0,54525952,0),50,1,2)) AS int ))


2. Avoid JPPD using lateral():

select/*+ cardinality(c 1) no_merge(lat) */
   distinct lat.* 
from table(sys.odcinumberlist(1,2,3)) c, 
     lateral(select * from vtest v where v.a = c.column_value) lat;

Plan hash value: 18036714

-----------------------------------------------------------------------------------------------------------
| Id  | Operation                               | Name            | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                        |                 |    10 |   190 |    41   (3)| 00:00:01 |
|   1 |  HASH UNIQUE                            |                 |    10 |   190 |    41   (3)| 00:00:01 |
|   2 |   NESTED LOOPS                          |                 |    10 |   190 |    40   (0)| 00:00:01 |
|   3 |    COLLECTION ITERATOR CONSTRUCTOR FETCH|                 |     1 |     2 |    29   (0)| 00:00:01 |
|   4 |    VIEW                                 | VW_LAT_4DB60E85 |    10 |   170 |    11   (0)| 00:00:01 |
|   5 |     SORT GROUP BY                       |                 |    10 |    80 |    11   (0)| 00:00:01 |
|   6 |      TABLE ACCESS BY INDEX ROWID BATCHED| XTEST           |    10 |    80 |    11   (0)| 00:00:01 |
|*  7 |       INDEX RANGE SCAN                  | ITEST           |    10 |       |     1   (0)| 00:00:01 |
-----------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   7 - access("A"=VALUE(KOKBF$))

Let’s see a bit more complex query:

Test tables 2

create table xtest1(id primary key, a) as
  select level,level from dual connect by level<=1000;

create table xtest2(a, b, c) as
   select mod(level,1000),level,rpad('x',100,'x')
   from dual
   connect by level<=1e4
/
create index itest2 on xtest2(a)
/
create or replace view vtest2 as
select a,count(b) cnt
from xtest2
group by a
/


select v.* 
from xtest1 t1,
     vtest2 v
where t1.id in (select/*+ cardinality(c 1) */ * from table(sys.odcinumberlist(1,2,3)) c)
  and v.a = t1.a;

Plan hash value: 4293766070

-----------------------------------------------------------------------------------------------------------
| Id  | Operation                                  | Name         | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                           |              |     1 |    36 |    80   (3)| 00:00:01 |
|*  1 |  HASH JOIN                                 |              |     1 |    36 |    80   (3)| 00:00:01 |
|   2 |   JOIN FILTER CREATE                       | :BF0000      |     1 |    10 |    31   (4)| 00:00:01 |
|   3 |    NESTED LOOPS                            |              |     1 |    10 |    31   (4)| 00:00:01 |
|   4 |     NESTED LOOPS                           |              |     1 |    10 |    31   (4)| 00:00:01 |
|   5 |      SORT UNIQUE                           |              |     1 |     2 |    29   (0)| 00:00:01 |
|   6 |       COLLECTION ITERATOR CONSTRUCTOR FETCH|              |     1 |     2 |    29   (0)| 00:00:01 |
|*  7 |      INDEX UNIQUE SCAN                     | SYS_C0026365 |     1 |       |     0   (0)| 00:00:01 |
|   8 |     TABLE ACCESS BY INDEX ROWID            | XTEST1       |     1 |     8 |     1   (0)| 00:00:01 |
|   9 |   VIEW                                     | VTEST2       |  1000 | 26000 |    49   (3)| 00:00:01 |
|  10 |    HASH GROUP BY                           |              |  1000 |  8000 |    49   (3)| 00:00:01 |
|  11 |     JOIN FILTER USE                        | :BF0000      | 10000 | 80000 |    48   (0)| 00:00:01 |
|* 12 |      TABLE ACCESS FULL                     | XTEST2       | 10000 | 80000 |    48   (0)| 00:00:01 |
-----------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - access("V"."A"="T1"."A")
   7 - access("T1"."ID"=VALUE(KOKBF$))
  12 - filter(SYS_OP_BLOOM_FILTER(:BF0000,"A"))

As you see, CVM can’t help in this case, but we can use lateral():

select/*+ no_merge(lat) */ lat.* 
from xtest1 t1,
     lateral(select * from vtest2 v where v.a = t1.a) lat
where t1.id in (select/*+ cardinality(c 1) */ * from table(sys.odcinumberlist(1,2,3)) c);

Plan hash value: 1798023704

------------------------------------------------------------------------------------------------------------
| Id  | Operation                                | Name            | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                         |                 |    10 |   360 |    42   (3)| 00:00:01 |
|   1 |  NESTED LOOPS                            |                 |    10 |   360 |    42   (3)| 00:00:01 |
|   2 |   NESTED LOOPS                           |                 |     1 |    10 |    31   (4)| 00:00:01 |
|   3 |    SORT UNIQUE                           |                 |     1 |     2 |    29   (0)| 00:00:01 |
|   4 |     COLLECTION ITERATOR CONSTRUCTOR FETCH|                 |     1 |     2 |    29   (0)| 00:00:01 |
|   5 |    TABLE ACCESS BY INDEX ROWID           | XTEST1          |     1 |     8 |     1   (0)| 00:00:01 |
|*  6 |     INDEX UNIQUE SCAN                    | SYS_C0026365    |     1 |       |     0   (0)| 00:00:01 |
|   7 |   VIEW                                   | VW_LAT_A18161FF |    10 |   260 |    11   (0)| 00:00:01 |
|   8 |    SORT GROUP BY                         |                 |    10 |    80 |    11   (0)| 00:00:01 |
|   9 |     TABLE ACCESS BY INDEX ROWID BATCHED  | XTEST2          |    10 |    80 |    11   (0)| 00:00:01 |
|* 10 |      INDEX RANGE SCAN                    | ITEST2          |    10 |       |     1   (0)| 00:00:01 |
------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   6 - access("T1"."ID"=VALUE(KOKBF$))
  10 - access("A"="T1"."A")

There is also another workaround using the undocumented “precompute_subquery” hint:

select v.* 
from xtest1 t1,
     vtest2 v 
where t1.id in (select/*+ precompute_subquery */ * from table(sys.odcinumberlist(1,2,3)) c)
and v.a = t1.a;

Plan hash value: 1964829099

------------------------------------------------------------------------------------------------
| Id  | Operation                       | Name         | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                |              |    30 |   480 |    37   (3)| 00:00:01 |
|   1 |  HASH GROUP BY                  |              |    30 |   480 |    37   (3)| 00:00:01 |
|   2 |   NESTED LOOPS                  |              |    30 |   480 |    36   (0)| 00:00:01 |
|   3 |    NESTED LOOPS                 |              |    30 |   480 |    36   (0)| 00:00:01 |
|   4 |     INLIST ITERATOR             |              |       |       |            |          |
|   5 |      TABLE ACCESS BY INDEX ROWID| XTEST1       |     3 |    24 |     3   (0)| 00:00:01 |
|*  6 |       INDEX UNIQUE SCAN         | SYS_C0026365 |     3 |       |     2   (0)| 00:00:01 |
|*  7 |     INDEX RANGE SCAN            | ITEST2       |    10 |       |     1   (0)| 00:00:01 |
|   8 |    TABLE ACCESS BY INDEX ROWID  | XTEST2       |    10 |    80 |    11   (0)| 00:00:01 |
------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   6 - access("T1"."ID"=1 OR "T1"."ID"=2 OR "T1"."ID"=3)
   7 - access("A"="T1"."A")

It can help even in the most difficult cases, for example when you can’t rewrite the query (in that case you can create a SQL patch or SQL profile that injects “precompute_subquery”), but I wouldn’t recommend it, since “precompute_subquery” is undocumented, can be used only with simple collections, and is limited to 1000 values.
I’d suggest using the lateral() workaround, since it’s the most reliable and very simple.
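
For completeness, a minimal sketch of the SQL-patch approach mentioned above, on 12.2 and later (the patch name here is made up; on 11g/12.1 the analogous call is dbms_sqldiag_internal.i_create_patch, and scoping the hint to the right query block may need extra @query_block syntax, so verify the resulting plan):

declare
  v_patch varchar2(128);
begin
  -- sql_text must match the statement being patched exactly (whitespace included)
  v_patch := dbms_sqldiag.create_sql_patch(
               sql_text  => 'select v.* from xtest1 t1, vtest2 v where t1.id in (select * from table(sys.odcinumberlist(1,2,3)) c) and v.a = t1.a',
               hint_text => 'precompute_subquery',
               name      => 'jppd_precompute_patch');
end;
/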

Categories: Development

[Solved] OIM/OIG – IDCS Integration : SSL Handshake Exception: Received fatal alert: protocol_version

Online Apps DBA - Thu, 2019-05-30 08:12

[Solved] OIM/OIG – IDCS Integration: SSL Handshake Exception TLS 1.2: Protocol_version. To sync users from an on-premise Identity Management system to Oracle Identity Cloud Service (IDCS), the IDCS connector can be used with OIM/OIG. For security reasons, this connection must be configured to use TLS 1.2. If you integrate OIM/OIG with IDCS and get an error like ”javax.net.ssl.SSLException Received […]

The post [Solved] OIM/OIG – IDCS Integration : SSL Handshake Exception: Received fatal alert: protocol_version appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Can you start two (or more) PostgreSQL instances against the same data directory?

Yann Neuhaus - Thu, 2019-05-30 06:41

As PostgreSQL does not have the concept of running multiple instances against the same files on disk (as, e.g., Oracle RAC does), it should not be possible to start two or more instances against the same data directory. If that worked, the result could only be corruption. In this post we will look at how PostgreSQL detects this situation and what mechanisms are built in to avoid having multiple instances working against the same files on disk.

To start with we create a new cluster:

postgres@rhel8pg:/home/postgres/ [PGDEV] mkdir /var/tmp/pgtest
12:16:46 postgres@rhel8pg:/home/postgres/ [PGDEV] initdb -D /var/tmp/pgtest/
The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.

The database cluster will be initialized with locales
  COLLATE:  en_US.utf8
  CTYPE:    en_US.utf8
  MESSAGES: en_US.utf8
  MONETARY: de_CH.UTF-8
  NUMERIC:  de_CH.UTF-8
  TIME:     en_US.UTF-8
The default database encoding has accordingly been set to "UTF8".
The default text search configuration will be set to "english".

Data page checksums are disabled.

fixing permissions on existing directory /var/tmp/pgtest ... ok
creating subdirectories ... ok
selecting dynamic shared memory implementation ... posix
selecting default max_connections ... 100
selecting default shared_buffers ... 128MB
selecting default timezone ... Europe/Zurich
creating configuration files ... ok
running bootstrap script ... ok
performing post-bootstrap initialization ... ok
syncing data to disk ... ok

initdb: warning: enabling "trust" authentication for local connections
You can change this by editing pg_hba.conf or using the option -A, or
--auth-local and --auth-host, the next time you run initdb.

Success. You can now start the database server using:

    pg_ctl -D /var/tmp/pgtest/ -l logfile start

We use a dedicated port and then start it up:

postgres@rhel8pg:/home/postgres/ [PGDEV] export PGPORT=8888
postgres@rhel8pg:/home/postgres/ [PGDEV] pg_ctl -D /var/tmp/pgtest start
waiting for server to start....2019-05-16 12:17:22.399 CEST [7607] LOG:  starting PostgreSQL 12devel on x86_64-pc-linux-gnu, compiled by gcc (GCC) 8.2.1 20180905 (Red Hat 8.2.1-3), 64-bit
2019-05-16 12:17:22.403 CEST [7607] LOG:  listening on IPv6 address "::1", port 8888
2019-05-16 12:17:22.403 CEST [7607] LOG:  listening on IPv4 address "127.0.0.1", port 8888
2019-05-16 12:17:22.409 CEST [7607] LOG:  listening on Unix socket "/tmp/.s.PGSQL.8888"
2019-05-16 12:17:22.446 CEST [7608] LOG:  database system was shut down at 2019-05-16 12:16:54 CEST
2019-05-16 12:17:22.455 CEST [7607] LOG:  database system is ready to accept connections
 done
server started

postgres@rhel8pg:/home/postgres/ [PGDEV] psql -p 8888 -c "select version()" postgres
                                                  version                                                  
-----------------------------------------------------------------------------------------------------------
 PostgreSQL 12devel on x86_64-pc-linux-gnu, compiled by gcc (GCC) 8.2.1 20180905 (Red Hat 8.2.1-3), 64-bit
(1 row)

What happens when we want to start another instance against that data directory?

postgres@rhel8pg:/home/postgres/ [PGDEV] pg_ctl -D /var/tmp/pgtest start
pg_ctl: another server might be running; trying to start server anyway
waiting for server to start....2019-05-16 12:18:26.252 CEST [7629] FATAL:  lock file "postmaster.pid" already exists
2019-05-16 12:18:26.252 CEST [7629] HINT:  Is another postmaster (PID 7607) running in data directory "/var/tmp/pgtest"?
 stopped waiting
pg_ctl: could not start server
Examine the log output.

When PostgreSQL starts up, it looks for a file called “postmaster.pid” in the data directory, which exists once an instance is started. If that file exists, PostgreSQL will not start another instance against the same data directory. Once the instance is stopped, the file is removed:

postgres@rhel8pg:/home/postgres/ [PGDEV] pg_ctl -D /var/tmp/pgtest/ stop
waiting for server to shut down....2019-05-16 12:48:50.636 CEST [7896] LOG:  received fast shutdown request
2019-05-16 12:48:50.641 CEST [7896] LOG:  aborting any active transactions
2019-05-16 12:48:50.651 CEST [7896] LOG:  background worker "logical replication launcher" (PID 7903) exited with exit code 1
2019-05-16 12:48:50.651 CEST [7898] LOG:  shutting down
2019-05-16 12:48:50.685 CEST [7896] LOG:  database system is shut down
 done
server stopped
postgres@rhel8pg:/home/postgres/ [PGDEV] ls -al /var/tmp/pgtest/postmaster.pid
ls: cannot access '/var/tmp/pgtest/postmaster.pid': No such file or directory

So, at least by default, it is not possible to start two or more instances, because PostgreSQL checks whether postmaster.pid already exists. Let's remove that file and try again:

postgres@rhel8pg:/home/postgres/ [PGDEV] rm /var/tmp/pgtest/postmaster.pid
postgres@rhel8pg:/home/postgres/ [PGDEV] pg_ctl -D /var/tmp/pgtest start
waiting for server to start....2019-05-16 12:20:17.754 CEST [7662] LOG:  starting PostgreSQL 12devel on x86_64-pc-linux-gnu, compiled by gcc (GCC) 8.2.1 20180905 (Red Hat 8.2.1-3), 64-bit
2019-05-16 12:20:17.756 CEST [7662] LOG:  could not bind IPv6 address "::1": Address already in use
2019-05-16 12:20:17.756 CEST [7662] HINT:  Is another postmaster already running on port 8888? If not, wait a few seconds and retry.
2019-05-16 12:20:17.756 CEST [7662] LOG:  could not bind IPv4 address "127.0.0.1": Address already in use
2019-05-16 12:20:17.756 CEST [7662] HINT:  Is another postmaster already running on port 8888? If not, wait a few seconds and retry.
2019-05-16 12:20:17.756 CEST [7662] WARNING:  could not create listen socket for "localhost"
2019-05-16 12:20:17.756 CEST [7662] FATAL:  could not create any TCP/IP sockets
2019-05-16 12:20:17.756 CEST [7662] LOG:  database system is shut down
 stopped waiting
pg_ctl: could not start server
Examine the log output.

Again, this does not work, and even the initial instance was shut down because PostgreSQL detected that the lock file was not there anymore:

2019-05-16 12:20:22.540 CEST [7607] LOG:  could not open file "postmaster.pid": No such file or directory
2019-05-16 12:20:22.540 CEST [7607] LOG:  performing immediate shutdown because data directory lock file is invalid
2019-05-16 12:20:22.540 CEST [7607] LOG:  received immediate shutdown request
2019-05-16 12:20:22.540 CEST [7607] LOG:  could not open file "postmaster.pid": No such file or directory
2019-05-16 12:20:22.544 CEST [7612] WARNING:  terminating connection because of crash of another server process
2019-05-16 12:20:22.544 CEST [7612] DETAIL:  The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.
2019-05-16 12:20:22.544 CEST [7612] HINT:  In a moment you should be able to reconnect to the database and repeat your command.
2019-05-16 12:20:22.549 CEST [7664] WARNING:  terminating connection because of crash of another server process
2019-05-16 12:20:22.549 CEST [7664] DETAIL:  The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.

Let's start the first instance again:

postgres@rhel8pg:/home/postgres/ [PGDEV] pg_ctl -D /var/tmp/pgtest/ start
waiting for server to start....2019-05-16 12:22:20.136 CEST [7691] LOG:  starting PostgreSQL 12devel on x86_64-pc-linux-gnu, compiled by gcc (GCC) 8.2.1 20180905 (Red Hat 8.2.1-3), 64-bit
2019-05-16 12:22:20.140 CEST [7691] LOG:  listening on IPv6 address "::1", port 8888
2019-05-16 12:22:20.140 CEST [7691] LOG:  listening on IPv4 address "127.0.0.1", port 8888
2019-05-16 12:22:20.148 CEST [7691] LOG:  listening on Unix socket "/tmp/.s.PGSQL.8888"
2019-05-16 12:22:20.193 CEST [7693] LOG:  database system was interrupted; last known up at 2019-05-16 12:17:22 CEST
.2019-05-16 12:22:21.138 CEST [7693] LOG:  database system was not properly shut down; automatic recovery in progress
2019-05-16 12:22:21.143 CEST [7693] LOG:  redo starts at 0/15D3420
2019-05-16 12:22:21.143 CEST [7693] LOG:  invalid record length at 0/15D3458: wanted 24, got 0
2019-05-16 12:22:21.143 CEST [7693] LOG:  redo done at 0/15D3420
2019-05-16 12:22:21.173 CEST [7691] LOG:  database system is ready to accept connections
 done
server started

postgres@rhel8pg:/home/postgres/ [PGDEV] psql -p 8888 -c "select version()" postgres
                                                  version                                                  
-----------------------------------------------------------------------------------------------------------
 PostgreSQL 12devel on x86_64-pc-linux-gnu, compiled by gcc (GCC) 8.2.1 20180905 (Red Hat 8.2.1-3), 64-bit
(1 row)

Let's change the port for the second instance and then try again to start it against the same data directory:

postgres@rhel8pg:/home/postgres/ [PGDEV] export PGPORT=8889
postgres@rhel8pg:/home/postgres/ [PGDEV] pg_ctl -D /var/tmp/pgtest/ start
pg_ctl: another server might be running; trying to start server anyway
waiting for server to start....2019-05-16 12:24:41.700 CEST [7754] FATAL:  lock file "postmaster.pid" already exists
2019-05-16 12:24:41.700 CEST [7754] HINT:  Is another postmaster (PID 7741) running in data directory "/var/tmp/pgtest"?
 stopped waiting
pg_ctl: could not start server
Examine the log output.

That does not work either, which is good. Let's be a bit nastier and truncate the postmaster.pid file:

postgres@rhel8pg:/home/postgres/ [PGDEV] cat /var/tmp/pgtest/postmaster.pid 
7790
/var/tmp/pgtest
1558002434
8888
/tmp
localhost
  8888001    819201
ready   
postgres@rhel8pg:/home/postgres/ [PGDEV] cat /dev/null > /var/tmp/pgtest/postmaster.pid
postgres@rhel8pg:/home/postgres/ [PGDEV] cat /var/tmp/pgtest/postmaster.pid

The pid file is now empty, and right after emptying it we can see this in the PostgreSQL log file:

2019-05-16 12:30:14.140 CEST [7790] LOG:  lock file "postmaster.pid" contains wrong PID: 0 instead of 7790
2019-05-16 12:30:14.140 CEST [7790] LOG:  performing immediate shutdown because data directory lock file is invalid
2019-05-16 12:30:14.140 CEST [7790] LOG:  received immediate shutdown request
2019-05-16 12:30:14.149 CEST [7795] WARNING:  terminating connection because of crash of another server process
2019-05-16 12:30:14.149 CEST [7795] DETAIL:  The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.
2019-05-16 12:30:14.149 CEST [7795] HINT:  In a moment you should be able to reconnect to the database and repeat your command.
2019-05-16 12:30:14.160 CEST [7790] LOG:  database system is shut down

So even that case is detected, and PostgreSQL protects you from starting another instance against the same data directory. Let's try something else and modify PGDATA in the postmaster.pid file:

postgres@rhel8pg:/home/postgres/ [PGDEV] cat /var/tmp/pgtest/postmaster.pid 
7896
/var/tmp/pgtest
1558002751
8888
/tmp
localhost
  8888001    851969
ready   

postgres@rhel8pg:/home/postgres/ [PGDEV] sed -i  's/\/var\/tmp\/pgtest/\/var\/tmp\/pgtest2/g' /var/tmp/pgtest/postmaster.pid 
postgres@rhel8pg:/home/postgres/ [PGDEV] cat /var/tmp/pgtest/postmaster.pid
7896
/var/tmp/pgtest2
1558002751
8888
/tmp
localhost
  8888001    851969
ready   

Although we changed PGDATA, PostgreSQL will still not start another instance against this data directory:

postgres@rhel8pg:/home/postgres/ [PGDEV] pg_ctl -D /var/tmp/pgtest/ start
pg_ctl: another server might be running; trying to start server anyway
waiting for server to start....2019-05-16 12:35:28.540 CEST [7973] FATAL:  lock file "postmaster.pid" already exists
2019-05-16 12:35:28.540 CEST [7973] HINT:  Is another postmaster (PID 7896) running in data directory "/var/tmp/pgtest"?
 stopped waiting
pg_ctl: could not start server
Examine the log output.

So by default you cannot get PostgreSQL to start two or more instances against the same data directory. There is a comment about this behaviour in src/backend/postmaster/postmaster.c in the source code:

                /*
                 * Once a minute, verify that postmaster.pid hasn't been removed or
                 * overwritten.  If it has, we force a shutdown.  This avoids having
                 * postmasters and child processes hanging around after their database
                 * is gone, and maybe causing problems if a new database cluster is
                 * created in the same place.  It also provides some protection
                 * against a DBA foolishly removing postmaster.pid and manually
                 * starting a new postmaster.  Data corruption is likely to ensue from
                 * that anyway, but we can minimize the damage by aborting ASAP.
                 */

“Once a minute” leaves a window, and we might be able to start a second instance if we are fast enough, so let's try again. This time we start the first one, remove the lock file, and immediately start another one using another port:

export PGPORT=8888
pg_ctl -D /var/tmp/pgtest start
rm -f /var/tmp/pgtest/postmaster.pid
export PGPORT=8889
pg_ctl -D /var/tmp/pgtest start

And here you have it:

postgres@rhel8pg:/home/postgres/ [pg120] ps -ef | grep postgres
postgres  1445     1  0 May27 ?        00:00:00 /usr/lib/systemd/systemd --user
postgres  1456  1445  0 May27 ?        00:00:00 (sd-pam)
root      9780   786  0 06:09 ?        00:00:00 sshd: postgres [priv]
postgres  9783  9780  0 06:09 ?        00:00:00 sshd: postgres@pts/1
postgres  9784  9783  0 06:09 pts/1    00:00:00 -bash
postgres 10302     1  0 06:19 ?        00:00:00 /u01/app/postgres/product/DEV/db_1/bin/postgres -D /var/tmp/pgtest
postgres 10304 10302  0 06:19 ?        00:00:00 postgres: checkpointer   
postgres 10305 10302  0 06:19 ?        00:00:00 postgres: background writer   
postgres 10306 10302  0 06:19 ?        00:00:00 postgres: walwriter   
postgres 10307 10302  0 06:19 ?        00:00:00 postgres: autovacuum launcher   
postgres 10308 10302  0 06:19 ?        00:00:00 postgres: stats collector   
postgres 10309 10302  0 06:19 ?        00:00:00 postgres: logical replication launcher   
postgres 10313     1  0 06:19 ?        00:00:00 /u01/app/postgres/product/DEV/db_1/bin/postgres -D /var/tmp/pgtest
postgres 10315 10313  0 06:19 ?        00:00:00 postgres: checkpointer   
postgres 10316 10313  0 06:19 ?        00:00:00 postgres: background writer   
postgres 10317 10313  0 06:19 ?        00:00:00 postgres: walwriter   
postgres 10318 10313  0 06:19 ?        00:00:00 postgres: autovacuum launcher   
postgres 10319 10313  0 06:19 ?        00:00:00 postgres: stats collector   
postgres 10320 10313  0 06:19 ?        00:00:00 postgres: logical replication launcher   
postgres 10327  9784  0 06:19 pts/1    00:00:00 ps -ef

Conclusion: PostgreSQL does some basic checks to avoid starting two instances against the same files on disk. But if you really want to (and of course you never should), you can get around them, with all the consequences that implies. Don't do it!

The post Can you start two (or more) PostgreSQL instances against the same data directory? appeared first on Blog dbi services.

Latest Blog Posts from ACE Program Members - May 19-25, 2019

OTN TechBlog - Thu, 2019-05-30 05:00
How it all stacks up...

...depends on who's doing the stacking. The blog posts listed below, published by members of the Oracle ACE Program between May 19th and 25th, reveal a bit about how these people get things done.

 

Oracle ACE Director

Edward Whalen
Chief Technologist, Performance Tuning Corporation
Houston, Texas

 

Opal Alapat
Vision Team Practice Lead, interRel Consulting
Arlington, TX

 

Syed Jaffar Hussain
Freelance Consultant, Architect, Trainer
Saudi Arabia

 
Oracle ACE

Rodrigo Mufalani
Principal Database Architect, eProseed
Luxembourg

 

Satoshi Mitani
Database Platform Technical Lead, Yahoo! JAPAN
Tokyo, Japan

 

Sean Stuber
Database Analyst, American Electric Power
Columbus, Ohio

 

Stefan Oehrli
Platform Architect, Trivadis AG
Zurich, Switzerland

 
Oracle ACE Associate

Bruno Reis da Silva
Senior Oracle Database Administrator, IBM
Stockholm, Sweden

 

Tercio Costa
Analista de Dados, Unimed João Pessoa
João Pessoa, Paraíba, Brazil

 

Wadhah Daouehi
Manager/Consultant/Trainer, ORANUX
Riadh city, Tunisia

 

 


Why Oracle Cloud @ Customer is a good option?

Syed Jaffar - Thu, 2019-05-30 03:53
For most organizations, security is one of the major concerns about moving to the cloud. Though the cloud concept has been around for quite some time, a majority of customers are still concerned about putting their data in the cloud. To gain their confidence, and at the same time let them take full advantage of cloud technologies, various cloud vendors have started offering cloud-at-customer solutions. In this blog post, I am going to discuss Oracle's cloud-at-customer solution: its advantages, subscription model, etc.

Oracle Cloud at Customer delivers the full advantages of cloud technologies in your own data center. You subscribe to hardware and software together when you choose the Cloud at Customer option. Though Oracle does the initial setup, configuration, and day-to-day system management, you still have all the security and network advantages of your own data center.

Typically, the Cloud at Customer option consists of the following:


  • The hardware required to run Cloud at customer
  • Control panel software
  • The Oracle Advanced Support Gateway
  • Oracle Cloud services
As a customer, your responsibility involves managing your cloud account and subscribed services. At any time, you can check your account balance and your current Oracle Cloud at Customer service usage. You can also view your usage by region, by service, or for a specific time period.
To check your account balance and usage, Oracle recommends that you sign in to your Oracle Cloud Account in an Oracle data region. From there, you can view your overall account usage and Universal Credits balance. 

In a nutshell, Cloud at Customer brings cloud solutions to your data center, where you can apply all of your data center's rules while taking full advantage of the cloud.

Subscribe to business events in Fusion-based SaaS applications from Oracle Integration Cloud ...

For integration with all Oracle Fusion based Cloud services - like Oracle Sales Cloud and Oracle ERP Cloud - each service provides Business Events which external applications or integrations can...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Build and Deploy a Helidon Microservice Using Oracle Developer Cloud

OTN TechBlog - Wed, 2019-05-29 19:53

Project Helidon was recently introduced by Oracle. It provides a new way to write microservices. This blog will help you understand how to use Oracle Developer Cloud to build and deploy your first Helidon-based microservice on Oracle Container Engine for Kubernetes.

Before we begin, let’s examine a few things:

What is Helidon?

Project Helidon is a set of Java libraries for writing microservices. Helidon supports two programming models: Helidon MP, based on MicroProfile 1.2, and Helidon SE, a small, functional-style API.

Regardless of which model you choose, you'll be writing an application that is a Java SE-based program. Helidon is open source and the code is available on GitHub, where you can also read and learn more about Project Helidon.

Get Started

Helidon doesn’t have downloads. Instead, you’ll need to use the Maven releases. This means that you’ll be using the Maven Archetype to get started with your Helidon microservice project. In this blog, we’ll be using the Helidon SE programming model.

The following basic prerequisites should be installed on your machine to develop with Helidon:

  • Maven
  • Java 8
  • Git CLI (for pushing code to the Oracle Developer Cloud Git repository)

Download the Sample Microservice Project for Helidon with Maven

Open the command prompt if you're using a Windows machine. Then go to (or create) the directory where you'd like to create the sample Helidon microservice project and execute the following Maven command.

mvn archetype:generate -DinteractiveMode=false \
    -DarchetypeGroupId=io.helidon.archetypes \
    -DarchetypeArtifactId=helidon-quickstart-se \
    -DarchetypeVersion=1.1.0 \
    -DgroupId=io.helidon.examples \
    -DartifactId=helidon-quickstart-se \
    -Dpackage=io.helidon.examples.quickstart.se

 

When executed, this Maven command will create the helidon-quickstart-se folder.

The microservice application code, the build files, and the deployment files all reside in the helidon-quickstart-se folder.

 

These are the files and folder(s):

  • src folder –  Contains the microservice application source code
  • app.yaml – Describes the Kubernetes deployment
  • Dockerfile – Provides instructions for building the Docker image
  • Dockerfile.native – Provides instructions for building the Docker image using the Graal VM
  • pom.xml – Project descriptor for the Maven build
  • README.md –  File that contains a description of the project

 

Now let’s create an Oracle Developer Cloud project with a Git repository. We’ll call the Git repository Helidon.git.

 

Navigate to the helidon-quickstart-se folder in your command prompt window and execute the following Git commands to push the Helidon microservice application code to the Git repository you created.

Note: You need to have the Git CLI installed on your development machine to execute Git commands.

git init
git add --all
git commit -m "First commit"
git remote add origin <git repository url>
git push origin master

 

Your Helidon.git repository should have the structure shown below.

 

Configure the Build Job

In Developer Cloud, select Builds in the left navigation bar to display the Builds page. Then click the +Create Job button.  

In the New Job dialog, enter BuildHelidon for the Name and select a Template that has the Docker runtime. Then click the Create button. This build job will build a Docker image for the Helidon microservice code in the Git repository and push it to the DockerHub registry.

In the Git tab, select Git from the Add Git dropdown, select Helidon.git as the Git repository and, for the branch, select master.

 

In the Steps tab, use the Add Step dropdown to add Docker login, Docker build, and Docker push steps.

In the Docker login step, provide your DockerHub Username and Password. Leave the Registry Host empty, since we’re using DockerHub as the registry.

In the Docker build step, enter <DockerHub Username>/helidonmicro for the Image Name and 1.0 for the Version Tag. The full image name shown is <DockerHub Username>/helidonmicro:1.0

 

In the Docker push step, enter <DockerHub Username>/helidonmicro for the Image Name and 1.0 for the Version Tag. Then click the Save button.

Before we create the build job that will deploy the Helidon microservice Docker container, you need to edit the app.yaml file and modify the Docker image name. To edit that file, go to the Git page, select the Helidon.git repository, and click the app.yaml file link.

 

Click the pencil icon to edit the file.

 

Replace the image name with <your DockerHub username>/helidonmicro:1.0, then click the Commit button to commit the code changes to the master branch.

 

To create another build job, navigate to the Builds page and click the +Create Job button. 

In the New Job dialog enter DeployHelidon for the Name, select the template with Kubectl, then click the Create button. This build job will deploy the Docker image built by the BuildHelidon build job to the Kubernetes cluster.

 

The first thing you’ll do to configure the DeployHelidon build job is to specify the repository where the code is found and select the branch where you’ll be working on the files.  To do this, in the Git tab, add Git from the dropdown, select Helidon.git as the Git repository and, for the branch, select master.

In the Steps tab, select OCIcli and Unix Shell from the Add Step dropdown. Take a look at this blog link to see how and where to get the values for the OCIcli configuration. Then, in the Unix Shell build step, enter the following script, substituting your cluster's id for the <your-cluster-id> placeholder (you can get the Kubernetes Cluster Id from the Oracle Cloud Infrastructure console).

mkdir -p $HOME/.kube
oci ce cluster create-kubeconfig --cluster-id <your-cluster-id> --file $HOME/.kube/config --region us-ashburn-1
export KUBECONFIG=$HOME/.kube/config
kubectl create -f app.yaml
sleep 30
kubectl get services helidon-quickstart-se
kubectl get pods
kubectl describe pods

When you’re done, click the Save button.

 

Create the Build Pipeline

Navigate to the Pipelines tab in the Builds page. Then click the +Create Pipeline button.

 

In the Create Pipeline dialog, you can enter the Name as HelidonPipeline. Then click the Create button.

Drag and drop the BuildHelidon and DeployHelidon build jobs and then connect them.

 

Double click the link that connects the build jobs and select Successful as the Result Condition. Then click the Apply button.

 

Click the Build button, as shown, to run the build pipeline. The BuildHelidon build job will be executed first and, if it is successful, then the DeployHelidon build job that deploys the container on the Kubernetes cluster on Oracle Cloud will be executed next.

 

After the jobs in the build pipeline finish executing, navigate to the Jobs tab and click the link for the DeployHelidon build job. Then click the log icon for the executed build. You should see messages confirming that the service and deployment were successfully created. Now search the log for the helidon-quickstart-se service and deployment that were created on the Kubernetes cluster, and find the public IP address and port for accessing the microservice, as shown below.

 

 

Enter the IP address and port that you retrieved from the log into the browser, using the format shown in the following URL:

http://<retrieved IP address>:<retrieved port>/greet

You should see the “Hello World!” message in your browser.

 

 

So, you’ve seen how Oracle Developer Cloud can help you manage the complete DevOps lifecycle for your Helidon-based microservices and how out-of-the-box support for Build and Deploy to Oracle Container Engine for Kubernetes makes it easier.

To learn more about other new features in Oracle Developer Cloud, take a look at the What's New in Oracle Developer Cloud Service document and explore the links it provides to our product documentation. If you have any questions, you can reach us on the Developer Cloud Slack channel or in the online forum.

Happy Coding!

**The views expressed in this post are my own and do not necessarily reflect the views of Oracle

Timestamp Oddity

Jonathan Lewis - Wed, 2019-05-29 12:17

[Editorial note: this is something I started writing in 2013, managed to complete in 2017, and still failed to publish. It should have been a follow-on to another posting on the oddities of timestamp manipulation.]

Just as national language support used to be, timestamps and time-related columns are still a bit of a puzzle to the Oracle world – so much so that OEM could cripple a system if it was allowed to do the check for “failed logins over the last 30 minutes”. And, just like NLS, it’s one of those things that you use so rarely that you keep forgetting what went wrong the last time you used it. Here’s one little oddity that I reminded myself about recently:

rem
rem     Script:         timestamp_anomaly.sql
rem     Author:         Jonathan Lewis
rem     Dated:          April 2013
rem
create table t1 (
        ts_tz   timestamp(9) with time zone,
        ts_ltz  timestamp(9) with local time zone
);

insert into t1 values(systimestamp, systimestamp);
commit;

alter table t1 add constraint ts_ltz_uk unique (ts_ltz);
alter table t1 add constraint ts_tz_uk  unique (ts_tz);


Nothing terribly difficult – just a table with two variants on the timestamp data type and a unique constraint on both: except for one problem. Watch what happens as I create the unique constraints:

SQL> alter table t1 add constraint ts_ltz_uk unique (ts_ltz);

Table altered.

SQL> alter table t1 add constraint ts_tz_uk  unique (ts_tz);
alter table t1 add constraint ts_tz_uk  unique (ts_tz)
                                        *
ERROR at line 1:
ORA-02329: column of datatype TIME/TIMESTAMP WITH TIME ZONE cannot be unique or a primary key

Slightly unexpected – unless you’ve memorized the manuals, of course, which I hadn’t. I wonder if you can create a unique index on timestamp with time zone:


SQL> create unique index ts_tz_uk on t1(ts_tz);

Index created.

You can’t have a unique constraint, but you CAN create a unique index! How curious – did that really happen?

SQL> select index_name, column_name from user_ind_columns where table_name = 'T1';

INDEX_NAME           COLUMN_NAME
-------------------- --------------------
TS_LTZ_UK            TS_LTZ
TS_TZ_UK             SYS_NC00003$

The index is on a column called SYS_NC00003$ – which looks suspiciously like one of those “function-based-index” things:


SQL> select * from user_ind_expressions where table_name = 'T1';

INDEX_NAME           TABLE_NAME           COLUMN_EXPRESSION                        COLUMN_POSITION
-------------------- -------------------- ---------------------------------------- ---------------
TS_TZ_UK             T1                   SYS_EXTRACT_UTC("TS_TZ")                               1

Oracle has silently invoked the sys_extract_utc() function on our (free-floating) timestamp column to normalize it to UTC. This is really not very friendly but it does make sense, of course – it would be rather expensive to enforce uniqueness if there were (at least) 24 different ways of storing the same absolute value – and 24 is a conservative estimate.
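
To see the effect, here's a quick demonstration I've added to the original script (the schema name is assumed, and the output is a sketch of what you'd expect): the same absolute instant entered in two different time zones collides in the index, because both values normalize to 10:00 UTC:

SQL> insert into t1 (ts_tz) values (timestamp '2013-04-01 12:00:00 +02:00');

1 row created.

SQL> insert into t1 (ts_tz) values (timestamp '2013-04-01 11:00:00 +01:00');
insert into t1 (ts_tz) values (timestamp '2013-04-01 11:00:00 +01:00')
*
ERROR at line 1:
ORA-00001: unique constraint (TEST_USER.TS_TZ_UK) violated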

 

 

Securing the Oracle Cloud

Oracle Security Team - Wed, 2019-05-29 09:01

Greetings from sunny Seattle! My name is Eran Feigenbaum and I am the Chief Information Security Officer for the Oracle Cloud. Oracle Cloud Infrastructure (OCI) is what we call a Gen2 cloud: a fundamentally re-designed public cloud, architected for superior customer isolation and enterprise application performance compared with the cloud designs of ten years past. OCI is the platform for Autonomous Data Warehouse and Autonomous Transaction Processing and, in short order, for all Oracle applications (see Oracle CEO Mark Hurd on moving NetSuite to the Oracle Cloud). This is my inaugural post on our relaunched corporate security blog (thank you Mary Ann) and I'm thrilled to begin a substantive discussion with you about public cloud security. But first things first: with this blog I will describe how my group is organized and functions to protect the infrastructure for the literally thousands of applications and services moving to and continuously being developed on Oracle OCI.

My journey to Oracle was paved on over two decades' worth of experience in security. I was lucky to experience the cloud evolution from all sides in my various roles as pen tester, architect, cloud provider and cloud customer. Certainly, the core set of learnings came from nearly a decade of leading security for what is now Google Cloud. This was during a time when cloud business models were very much in their infancy, as were the protection mechanisms for customer isolation. Later, I would understand the challenges differently as the CISO of an e-commerce venture. Jet.com was a cloud-native business, so while we had no physical data centers, I understood well the limitations of first-generation cloud designs in dealing with cloud-borne threats and data protection requirements. So, when it came to joining OCI, the decision was an easy one. In its Gen2 offering, I saw that Oracle was building the future of enterprise cloud: a place where "enterprise-grade" had a meaningful payoff in architecture choices like isolated network virtualization to control threat proliferation, and where, as importantly, DevSecOps was foundational to OCI rather than a transformation challenge. What security leader would not want to be a part of that?

OCI distinguishes itself among cloud providers for having predictable performance and a security-first design, so most of our customers are organizations with high sensitivity to data and information protection. They are building high performance computing applications, and that includes our Oracle internal customers, so security must be continuous, ubiquitous, agile and above all scalable. By extension then, the OCI Security Group is in many ways the modern Security Operations Center (SOC). Our job is to enable the continuous integration and continuous deployment (CI/CD) pipeline.

In building the team, I aimed at three main goals: 1) build a complete organization that could address not only detection and response but proactively ensure the security of services developed and deployed on OCI, 2) create a culture and operating practice of frequent communication and metrics sharing among teams to ensure continuous goal evaluation and 3) align with the practices that Oracle’s corporate security teams had set and refined over four decades of protecting customers’ most sensitive data.

To that end, the Chief Security Office at Oracle Cloud Infrastructure (OCI) consists of six teams. Between them, the OCI Security Group provides a comprehensive and proactive set of security services, technologies, guidance, and processes that ensure a strong security posture and address security risks.

  • Security Assurance: Works collaboratively with the security teams and stakeholders throughout Oracle to drive the development and deployment of security controls, technologies, processes, and guidance for those building on OCI.
  • Product Security: This team really examines and evolves the OCI architecture, both hardware and software/services, to ensure we are taking advantage of innovations and making those changes that enhance our security posture.
  • Offensive Security: The work of this team is really to understand and emulate the methods of bad actors. Some of the work involves research, penetration testing and simulating advanced threats, against our hardware and software. All work is about strengthening our architecture and defensive capability.
  • Defensive Security: These are the first responders of cloud security. They work proactively to spot weaknesses and, in the event of incidents, work to remediate them within the shortest possible window.
  • Security Automation Services: We know that automation is fundamental to scaling but it is also key to shortening detection and response time. The team aggregates and correlates information about risks and methods to develop visualizations and tools that expedite risk reduction.
  • Security Go-To-Market: One of the most common requests I receive is to share information on our security architecture, methods, tooling and best practices. Our internal and external customers want reference architectures and information on how to benefit from our experience. Having this function as part of the group gives the team access to ground truth and aligns with a core value to "put customers first".

While the team organization is set up for completeness of function in service to the CI/CD pipeline, the key to achieving continuous security and security improvement is how well all members operate as a unit. I think of each team as being essential to the others. Each area generates intelligence that informs the other units and propels them in a kind of virtuous cycle, with security automation enabling accelerated revolutions through that cycle.

 Functionally interdependent and mission aligned

As Oracle engineers, for instance, plan for the re-homing or development of new applications and services on OCI, our security architects work with them. Throughout the drawing-board and design phases, we advise on best practices, compliance considerations, tooling and what the process for continuous security will look like during the integration and deployment phases. Security assurance personnel, experts in code review best practices, give guidance and create awareness about the benefits of a security mindset for code development. At implementation and execution time, the offensive security team conducts tests looking for weaknesses and vulnerabilities, which are surfaced both to the development teams and to our defensive security teams for near-term and long-term strategic remediation. This process is continuous, as changes and updates can quickly alter the security posture of an environment or an application, so our aim is rapid response and, most importantly, refining practices and processes that will reduce the risk from those same vulnerabilities for the long term. The latter includes continuous security awareness training, so that a security mindset is the cultural norm even as we scale and grow at a rapid pace.

Agility and scale in security are imperatives for a cloud operator, especially one of Oracle's size and scope, which attracts the most security-sensitive businesses, governments and organizations. Our approach to security automation applies to nearly every activity and process of OCI security. We look for whatever can be replicated and actioned either without human intervention or through self-service mechanisms. Automation provides innovations and tooling that help not only our OCI security group but internal security stakeholders and even customers. Through visibility and self-service mechanisms, we make developers and service owners part of the OCI security mission and consequently improve our ability to maintain consistent security.

I mentioned at the beginning of this post that key to security effectiveness is not only an organizational structure built for the modern cloud but also security functional areas that are interdependent and in constant communication. One of the best ways I have found to do this in my career managing large teams is the Objectives and Key Results (OKR) process. Similar to Key Performance Indicators (KPIs), OKRs enable measurement of success or failure, but unlike KPIs, they encourage leaders, teams and contributors to make big bets and stretch beyond what seems merely achievable toward what can be revolutionary. In his seminal book Measure What Matters (which I talk about to anyone who will listen), John Doerr outlines the structure by which agile enterprises stay aligned to mission even as they adjust to account for changes in business conditions. The key results confirm whether the direction is correct or needs adjusting. The teams of the OCI Security group stay aligned and informed by one another through the OKR system. The focus on cross-communication, deduplication and realignment gives us visibility into the incremental improvements and successes.

With this description of the OCI Security Group, I've given you some insight into how we secure the industry's most technically advanced public cloud. Over the next months, I am eager to delve deeper into the architecture choices and innovations that set us apart. Let the journey of getting to know OCI security begin!

Utilities Test Drive Analytics from Oracle to Manage Influx of Electric Vehicles

Oracle Press Releases - Wed, 2019-05-29 07:00
Press Release
Utilities Test Drive Analytics from Oracle to Manage Influx of Electric Vehicles
Advanced analytics help utilities better plan for energy demand as cars move from gas to grid

Redwood Shores, Calif.—May 29, 2019

The use of electric vehicles (EVs) is growing at a record rate, with the International Energy Agency (IEA) predicting that the number of electric cars on the road will rise from 3.1 million in 2017 to 125 million in 2030. Enabling utilities to intelligently manage this new energy demand on the power grid, Oracle Utilities has unveiled a breakthrough in EV detection.

Tapping deep machine learning, Oracle Utilities Analytics Insights is able to identify the presence of an EV, show the time and frequency of charging and disaggregate the energy being consumed by the vehicle with advanced metering infrastructure (AMI) data. With this intelligence, utilities can reliably plan for the energy infusion needed to power EVs at scale and engage customers to charge at the times that are the least expensive for them and best for the health of the energy grid. The new EV detection capabilities from Oracle Utilities Analytics Insights are currently being piloted by a number of utilities.

“With solar, wind and storage technologies now constituting 90 percent of investment interest, the road is paved for deeper decarbonization of the electricity sector,” said Ben Kellison, director grid research, Wood Mackenzie Power & Renewables. “The case for transport electrification has never been stronger and the rapid growth in investment interest from car manufacturers is a confirmation of the future consumer demand for EVs. Utilities are now faced with an increasingly clean and decentralized system and they need new data and analytic packages to support a new planning paradigm.” 

Impact of the EV Explosion on the Energy Grid

The influx of EVs could represent an average additional growth of 1-4 percent in peak load on the grid over the next few decades, according to a report by McKinsey. While this may seem modest, the impact will be highly volatile and cause unpredictable spikes at the local sub-station and feeder levels in residential areas. This load is projected to reach as high as 30 percent peak growth in certain urban areas that are hotspots for EV adoption.

While this transportation development represents an important step forward in reducing carbon emissions, most electricity grids were created long before EVs were a commercially viable consumer product. As transportation continues to evolve from gas to the grid, utilities must plan for an uptick in energy demand that will vary dramatically by area. 

“With almost every major auto manufacturer releasing new EV models in the coming years, the window of time for utilities to act is closing,” said Dan Byrnes, SVP of product development, Oracle Utilities. “The intelligence our analytics provide is essential for utilities to make needed assessments on grid investments and in tandem, work as trusted advisors to customers who may be in the dark as to how owning an EV is impacting their energy footprint and bill. From utility optimization to proven customer engagement, only Oracle offers a complete package to manage the explosion of EVs.”

Powering Better EV Planning and Engagement

The Oracle EV detection capabilities are powered by more than a decade of research and experience disaggregating household energy data from billions of data points collected from 60 million households across 100 utilities. Oracle’s trained data models can be deployed for each specific household’s usage to understand whether a customer has an EV, how they interact with their EV chargers, and where EVs are clustering on the distribution grid. As such, utilities will be able to better plan for and manage the operational impact of EVs as a new distributed energy resource (DER) on the grid.

From a customer perspective, charging an EV can increase a typical household’s energy usage by 15 percent or more and potentially double usage during peak demand times. With the offering, utilities will have the tools to roll out intuitive, user-friendly EV adoption customer journeys and time-of-use (TOU) plans to engage, educate and reward owners for charging during non-peak times. In the future, these same kinds of engagement programs can also be used by utilities to buy back unused energy from their customers’ EV batteries to help balance energy supply and demand in times of need.

“EVs will have an impact on every part of a utility’s operations—from grid stability and regulatory affairs to customer billing and engagement,” added Byrnes. “With Oracle, our customers have the tools and intelligence they need to make better decisions, maximize outcomes, and increase customer satisfaction every step of the journey.”

Contact Info
Kristin Reeves
Oracle
+1.925.787.6744
kris.reeves@oracle.com
About Oracle

The Oracle Cloud offers a complete suite of integrated applications for Sales, Service, Marketing, Human Resources, Finance, Supply Chain and Manufacturing, plus Highly Automated and Secure Generation 2 Infrastructure featuring the Oracle Autonomous Database. For more information about Oracle (NYSE: ORCL), please visit us at www.oracle.com.

Trademarks

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.


PeopleSoft Administrator Podcast: #184 – nVision Performance

David Kurtz - Wed, 2019-05-29 06:42
I recorded a second podcast with Dan Iverson and Kyle Benson for the PeopleSoft Administrator Podcast, this time about nVision.
(10 May 2019) #184 – nVision Performance
You can listen to the podcast on psadmin.io, or subscribe with your favourite podcast player, or in iTunes.

[Q/A] 1Z0-932 Oracle Cloud Infrastructure Architect Certification Day 1 Training Review

Online Apps DBA - Wed, 2019-05-29 05:22

[Q/A] 1Z0-932 Oracle Cloud Infra Architect: Day 1 Review & Feedback. Oracle Cloud Infrastructure (OCI) is used everywhere, be it for Database, EBS (R12), PeopleSoft, JD Edwards, SOA, OBIEE, WebLogic or Unix/Linux machines. There is huge demand in the market for certified cloud architects. Check what 56 students in our May batch learned on Day 1, including […]

The post [Q/A] 1Z0-932 Oracle Cloud Infrastructure Architect Certification Day 1 Training Review appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Your cloud is just waiting for you

Yann Neuhaus - Wed, 2019-05-29 05:01

After several training courses offered by dbi services at different cloud service providers, and several first-hand experiences, I am enthusiastic about the evolution of digital technologies. Azure, Amazon AWS and Oracle OCI offer excellent infrastructure services whose names change but whose concepts are the same. There are also a few differences between the cloud infrastructure services that can be interesting.

Maybe you too are interested in the cloud, but it all seems unclear. To that I say: think of the cloud as a traditional data center, except that instead of provisioning your machines and connections by hand in your over-air-conditioned room, everything is now done in a few clicks (or command lines).

Let me share some information about Oracle's OCI cloud that should dispel the rumors and let you move forward.

Servers

You can have physical machines and/or virtual machines. These physical and virtual machines come in different “shapes” (templates):

  • Standard shapes: the standard for most applications
  • DenseIO shapes: designed for data-heavy workloads
  • GPU shapes: designed for workloads using graphics processors (Bitcoin mining, algorithms to win at the game of Go, …)
  • High performance computing (HPC) shapes: for physical machines only; designed for massively parallel CPU-intensive workloads

Provisioning time: a few minutes

Network

The connection possibilities are limitless. In particular, you can:

  • Connect your cloud infrastructure to the Internet and/or your data center
  • Connect the Oracle cloud to another cloud (Amazon AWS, for example)
  • Connect your cloud space to your local infrastructure with a “FastConnect” link offering a connection with a bandwidth of at least 10 Gb

These connection options cover every scenario and keep you from being locked in with Oracle; you can even turn your cloud space into an extension of your data center thanks to FastConnect.

The autonomous database service (Autonomous Database)

The autonomous database service serves two types of workload:

  • Standard application workloads (short, very frequent queries)
  • Data warehouse workloads (long, one-off processing jobs)

With an autonomous database, you only configure the number of CPUs and the storage capacity, without service interruption or performance degradation. Moreover, all of the following operations are handled automatically:

  • Database creation
  • Database backup
  • Database feature updates
  • Database security and bug patching
  • Database tuning

You can keep your existing licenses in the cloud or use the licenses included in the service.

Available countries (for now)

List of countries available for your cloud: https://docs.cloud.oracle.com/iaas/Content/General/Concepts/regions.htm

  • United Kingdom
  • Germany
  • Canada
  • United States
  • Japan
  • South Korea
  • Switzerland (September 2019)
Pricing

You can pay as you go or via a subscription. You can compare prices yourself using the online calculators provided by the vendors:

  • Microsoft Azure: https://azure.microsoft.com/en-in/pricing/calculator
  • Oracle Cloud: https://cloud.oracle.com/en_US/cost-estimator
  • Amazon AWS: https://calculator.s3.amazonaws.com/index.html

For example, for a virtual server with 8 CPUs and around 120 GB of memory you would pay monthly:

  • Microsoft Azure (DS13v2): $648.97
  • Oracle Cloud (B88514): $401.00
  • Amazon AWS (f1.x2large): $1,207.00
Conclusion

While other cloud providers were far ahead of Oracle for some years, Oracle now offers a solid and easy-to-understand infrastructure service (Oracle OCI), like the others. Its advantage over some others is that physical machines are available. And finally, we are not tied to Oracle, thanks to the option of private or public networks between the Oracle cloud and other cloud providers.

Finally, even if the cloud frees you from hardware management and cuts your provisioning time to a few minutes, you will still need system engineers to manage access, resources, day-to-day operations, backups and your disaster recovery plan.

Did I forget a point? Feel free to leave your question below and I will be happy to answer it.

The post Ton nuage n’attends que toi appeared first on the dbi services blog.

Dell Boomi Training: Day 1 Review/Introduction & Q/As

Online Apps DBA - Wed, 2019-05-29 02:26

[Dell Boomi Training] Day 1 Review/Introduction & Q/As. Recently we held our Day 1 session of the Dell Boomi training, so learn from our new blog at https://k21academy.com/dellboomi19 about the review of the interactive live session and the introduction to on-premise integration, which covers: ✔ Insights into the Dell Boomi platform & course agenda ✔ Introduction to AtomSphere, Atom in […]

The post Dell Boomi Training: Day 1 Review/Introduction & Q/As appeared first on Oracle Trainings for Apps & Fusion DBA.

Categories: APPS Blogs

Latest Blog Posts from Members of the Oracle ACE Program - May 12-18, 2019

OTN TechBlog - Tue, 2019-05-28 16:24
Getting into it...

Spring fever affects different people in different ways. For instance, while most of us were taking in nature or discussing the final episode of Game of Thrones, these members of the Oracle ACE Program chose to spend time pounding out more than 50 blog posts. Here's what they produced for the week of May 12-18, 2019.

 

ACE Director

Clarisa Maman Orfali
Founder/System Engineer, ClarTech Solutions, Inc.
Irvine, CA

  

Timo Hahn
Principal Software Architect, virtual 7 GmbH
Germany

 
ACE

Dirk Nachbar
Senior Consultant, Trivadis AG
Bern, Switzerland

 

Laurent Leturgez
President/CTO, Premiseo
Lille, France

 

Patrick Joliffe
Manager, Li & Fung Limited
Hong Kong

 

Phil Wilkins
Senior Consultant, Capgemini
Reading, United Kingdom

 

Philippe Fierens
Oracle DBA on Exadata with OVM
Brussels, Belgium

 

Rodrigo Mufalani
Principal Database Architect, eProseed
Luxembourg

 

Satishbabu Gunukula
Sr. Database Architect, Intuitive Surgical
San Francisco, California

 

Sven Weller
CEO, Syntegris Information Solutions Gmbh
Germany

 
ACE Associate

Adam Boliński
Oracle Certified Master
Warsaw, Poland

 

Adrian Ward
Owner, Addidici
Surrey, United Kingdom

 

Bruno Reis da Silva
Senior Oracle Database Administrator, IBM
Stockholm, Sweden

 

Elise Valin-Raki
Solution Manager, Fennia IT
Finland

 

Fred Denis
Oracle/Exadata DBA, Pythian
Brisbane, Australia

 

Sayan Malakshinov
Oracle performance Tuning Expert, TMD (TransMedia Dynamics) Ltd.
Aylesbury, United Kingdom

 

Simo Vilmunen
Technical Architect, Uponor
Toronto, Canada

 
Network design for Oracle Cloud Infrastructure

Syed Jaffar - Tue, 2019-05-28 15:48
Assuming you are planning to migrate your resources from an Oracle Cloud Infrastructure Compute Classic environment to Oracle Cloud Infrastructure, this blog post explains the details of network design for the Cloud Infrastructure environment. It's important to understand and map the network design and details of both environments.

Cloud Infrastructure Compute Classic networking has an IP Networks and Shared Network model. Oracle Cloud Infrastructure, on the other hand, has a network model of Virtual Cloud Networks (VCNs), subnets, and availability domains.

Before migration, you must map the network resources between the environments (source -> target):

  • Shared network -> VCN
  • IP Network -> IP Network
  • VPN -> IPSec VPN
  • FastConnect Classic -> FastConnect

Consider creating the network elements listed below in Oracle Cloud Infrastructure:

  • VCN and Subnet CIDR Prefixes
  • DNS Names 
Use the following procedure to configure the cloud network in the Oracle Cloud Infrastructure environment; a minimal CLI sketch follows the list:


  1. Create one or more VCNs.
  2. Create an Internet gateway and/or NAT gateway. An Internet gateway is a virtual router that allows resources in a public subnet direct access to the public Internet. A NAT gateway allows resources that don't have public IP addresses to access the Internet, without exposing those resources to incoming traffic from the Internet.
  3. Configure a service gateway, if required. A service gateway provides a path for private network traffic between your VCN and a public Oracle service such as Oracle Cloud Infrastructure Object Storage.
  4. Create one or more subnets in each VCN.
  5. Configure local peering gateways between VCNs, if required.
  6. Configure security lists, security rules, and route tables for each subnet.
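
To make the procedure concrete, here is a minimal OCI CLI sketch of steps 1, 2 and 4 (the compartment and VCN OCIDs, CIDR blocks and display names are placeholder assumptions; verify the flags against the CLI documentation for your version):

# Step 1: create a VCN
oci network vcn create --compartment-id <compartment-ocid> \
    --cidr-block 10.0.0.0/16 --display-name migration-vcn

# Step 2: create an Internet gateway in the VCN
oci network internet-gateway create --compartment-id <compartment-ocid> \
    --vcn-id <vcn-ocid> --is-enabled true --display-name migration-igw

# Step 4: create a subnet in the VCN
# (an AD-specific subnet may additionally require --availability-domain)
oci network subnet create --compartment-id <compartment-ocid> \
    --vcn-id <vcn-ocid> --cidr-block 10.0.1.0/24 --display-name migration-subnet

Note that a route rule pointing 0.0.0.0/0 at the Internet gateway still needs to be added to the subnet's route table (step 6) before instances can reach the Internet.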

v$sql_hint.target_level

XTended Oracle SQL - Tue, 2019-05-28 13:48

Today I wanted to give a link to the description of v$sql_hint.target_level to show that no_parallel can be specified for a statement or an object. Though it's pretty obvious, surprisingly I haven't found any articles or posts about it, so this short post describes it.
v$sql_hint.target_level is a bitset, where
1st bit set to 1 means that the hint can be specified on statement level,
2nd – on query block level,
3rd – on object level,
4th – on join level (for multiple objects).
Short example:

with hints as (
   select name,sql_feature
          ,class,inverse
          ,version,version_outline
          ,target_level
         ,decode(bitand(target_level,1),0,'no','yes') Statement_level
         ,decode(bitand(target_level,2),0,'no','yes') Query_block_level
         ,decode(bitand(target_level,4),0,'no','yes') Object_level
         ,decode(bitand(target_level,8),0,'no','yes') Join_level
   from v$sql_hint h
)
select *
from hints
where statement_level='yes'
  and to_number(regexp_substr(version,'^\d+')) >= 18
order by version;

Result:

NAME              SQL_FEATURE     CLASS                VERSION  TARGET_LEVEL STATEMENT_LEVEL QUERY_BLOCK_LEVEL OBJECT_LEVEL JOIN_LEVEL
----------------- --------------- -------------------- -------- ------------ --------------- ----------------- ------------ ----------
PDB_LOCAL_ONLY    QKSFM_DML       PDB_LOCAL_ONLY       18.1.0              1 yes             no                no           no
SUPPRESS_LOAD     QKSFM_DDL       SUPPRESS_LOAD        18.1.0              1 yes             no                no           no
SYSTEM_STATS      QKSFM_ALL       SYSTEM_STATS         18.1.0              1 yes             no                no           no
MEMOPTIMIZE_WRITE QKSFM_EXECUTION MEMOPTIMIZE_WRITE    18.1.0              1 yes             no                no           no
SKIP_PROXY        QKSFM_ALL       SKIP_PROXY           18.1.0              1 yes             no                no           no
CURRENT_INSTANCE  QKSFM_ALL       CURRENT_INSTANCE     18.1.0              1 yes             no                no           no
JSON_LENGTH       QKSFM_EXECUTION JSON_LENGTH          19.1.0              1 yes             no                no           no
QUARANTINE        QKSFM_EXECUTION QUARANTINE           19.1.0              1 yes             no                no           no
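
Coming back to the original point about no_parallel, the same bitset decoding can be applied to a single hint; this is just the query above restricted to one hint name, with the output left for you to check on your own version:

select name, target_level
      ,decode(bitand(target_level,1),0,'no','yes') Statement_level
      ,decode(bitand(target_level,4),0,'no','yes') Object_level
from v$sql_hint
where name = 'NO_PARALLEL';
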
Categories: Development

RMAN Incremental & Demo Part 2 (Level 1)

Zed DBA's Oracle Blog - Tue, 2019-05-28 11:07

This blog post is part of the “RMAN Back to Basics” series, which can be found here.

Incremental Backup

An incremental backup only backs up those data blocks that have changed since the last backup.

Types of Incremental Backups

There are 2 types of Incremental Backups:

  1. Level 0 backups are a base for subsequent backups.  They copy all blocks containing data, similar to a full backup, with the only difference being that full backups are never included in an incremental strategy.  Level 0 backups can be backup sets or image copies.
  2. Level 1 backups are subsequent backups of a level 0, by default backing up all blocks changed after the most recent level 0 or level 1 backup, known as a differential incremental backup.  More on the different types of level 1 backups is discussed in detail here; a minimal syntax sketch follows this list.
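
As a minimal sketch of the RMAN commands involved (differential is the default; cumulative is covered in detail later in this series):

BACKUP INCREMENTAL LEVEL 0 DATABASE;            # base: all blocks containing data
BACKUP INCREMENTAL LEVEL 1 DATABASE;            # differential: blocks changed since last level 0 or 1
BACKUP INCREMENTAL LEVEL 1 CUMULATIVE DATABASE; # cumulative: blocks changed since last level 0
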
Incremental Level 1 Demo

We take an incremental level 1 backup using my script 6_incremental_level_1.sh:

[oracle@dc1sbxdb001 demo]$ ./6_incremental_level_1.sh 
-----------------------
Step 1: Set environment
-----------------------

Setting the Database Environment using oraenv...
The Oracle base remains unchanged with value /u01/app/oracle

ORACLE_SID: ZEDDBA
ORACLE_HOME: /u01/app/oracle/product/12.2.0/dbhome_1

Press Enter to continue

The environment is set to my ZEDDBA database, then next the incremental backup (Level 1) is taken:

----------------------------------------
Step 2: Take Incremental Level 1 Backup
----------------------------------------

Content of 6_incremental_level_1.cmd file:

BACKUP INCREMENTAL LEVEL 1 DATABASE TAG 'INCR LEVEL 1';
HOST 'read Press Enter to LIST BACKUP';
LIST BACKUP;

Press Enter to continue

Calling 'rman target / cmdfile=/media/sf_Software/scripts/demo/6_incremental_level_1.cmd'

Recovery Manager: Release 12.2.0.1.0 - Production on Wed May 22 12:08:27 2019

Copyright (c) 1982, 2017, Oracle and/or its affiliates. All rights reserved.

connected to target database: ZEDDBA (DBID=3520942925)

RMAN> BACKUP INCREMENTAL LEVEL 1 DATABASE TAG 'INCR LEVEL 1';
2> HOST 'read Press Enter to LIST BACKUP';
3> LIST BACKUP;
4> 
Starting backup at 22-MAY-19
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=153 device type=DISK
channel ORA_DISK_1: starting incremental level 1 datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00001 name=/u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_system_gftkr3fv_.dbf
input datafile file number=00002 name=/u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_sysaux_gftkr792_.dbf
input datafile file number=00003 name=/u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_undotbs1_gftkr944_.dbf
input datafile file number=00004 name=/u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_users_gftkr9fc_.dbf
channel ORA_DISK_1: starting piece 1 at 22-MAY-19
channel ORA_DISK_1: finished piece 1 at 22-MAY-19
piece handle=/u01/app/oracle/fast_recovery_area/ZEDDBA/backupset/2019_05_22/o1_mf_nnnd1_INCR_LEVEL_1_ggbcfhbp_.bkp tag=INCR LEVEL 1 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:03
Finished backup at 22-MAY-19

Starting Control File and SPFILE Autobackup at 22-MAY-19
piece handle=/u01/app/oracle/fast_recovery_area/ZEDDBA/autobackup/2019_05_22/o1_mf_s_1008936514_ggbcflon_.bkp comment=NONE
Finished Control File and SPFILE Autobackup at 22-MAY-19

host command complete

We use HOST within RMAN just to wait for input before moving on, for demo purposes.  Next we list the backups within RMAN using ‘LIST BACKUP‘:

List of Backup Sets
===================

BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
1 Full 498.93M DISK 00:00:08 16-MAY-19 
BP Key: 1 Status: AVAILABLE Compressed: NO Tag: FULL BACKUP
Piece Name: /u01/app/oracle/fast_recovery_area/ZEDDBA/backupset/2019_05_16/o1_mf_nnndf_FULL_BACKUP_gfv4k119_.bkp
List of Datafiles in backup set 1
File LV Type Ckp SCN Ckp Time Abs Fuz SCN Sparse Name
---- -- ---- ---------- --------- ----------- ------ ----
1 Full 353825 16-MAY-19 NO /u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_system_gftkr3fv_.dbf
2 Full 353825 16-MAY-19 NO /u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_sysaux_gftkr792_.dbf
3 Full 353825 16-MAY-19 NO /u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_undotbs1_gftkr944_.dbf
4 Full 353825 16-MAY-19 NO /u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_users_gftkr9fc_.dbf

BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
2 Full 8.31M DISK 00:00:00 16-MAY-19 
BP Key: 2 Status: AVAILABLE Compressed: NO Tag: TAG20190516T173912
Piece Name: /u01/app/oracle/fast_recovery_area/ZEDDBA/autobackup/2019_05_16/o1_mf_s_1008437952_gfv4kjko_.bkp
SPFILE Included: Modification time: 16-MAY-19
SPFILE db_unique_name: ZEDDBA
Control File Included: Ckp SCN: 353836 Ckp time: 16-MAY-19

BS Key Size Device Type Elapsed Time Completion Time
------- ---------- ----------- ------------ ---------------
3 77.83M DISK 00:00:01 16-MAY-19 
BP Key: 3 Status: AVAILABLE Compressed: NO Tag: FULL BACKUP PLUS ARCHIVELOG
Piece Name: /u01/app/oracle/fast_recovery_area/ZEDDBA/backupset/2019_05_16/o1_mf_annnn_FULL_BACKUP_PLUS_ARC_gfv4y7m2_.bkp

List of Archived Logs in backup set 3
Thrd Seq Low SCN Low Time Next SCN Next Time
---- ------- ---------- --------- ---------- ---------
1 13 332298 16-MAY-19 354044 16-MAY-19

BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
4 Full 498.94M DISK 00:00:06 16-MAY-19 
BP Key: 4 Status: AVAILABLE Compressed: NO Tag: TAG20190516T174603
Piece Name: /u01/app/oracle/fast_recovery_area/ZEDDBA/backupset/2019_05_16/o1_mf_nnndf_TAG20190516T174603_gfv4yco3_.bkp
List of Datafiles in backup set 4
File LV Type Ckp SCN Ckp Time Abs Fuz SCN Sparse Name
---- -- ---- ---------- --------- ----------- ------ ----
1 Full 354058 16-MAY-19 NO /u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_system_gftkr3fv_.dbf
2 Full 354058 16-MAY-19 NO /u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_sysaux_gftkr792_.dbf
3 Full 354058 16-MAY-19 NO /u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_undotbs1_gftkr944_.dbf
4 Full 354058 16-MAY-19 NO /u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_users_gftkr9fc_.dbf

BS Key Size Device Type Elapsed Time Completion Time
------- ---------- ----------- ------------ ---------------
5 3.50K DISK 00:00:00 16-MAY-19 
BP Key: 5 Status: AVAILABLE Compressed: NO Tag: FULL BACKUP PLUS ARCHIVELOG
Piece Name: /u01/app/oracle/fast_recovery_area/ZEDDBA/backupset/2019_05_16/o1_mf_annnn_FULL_BACKUP_PLUS_ARC_gfv4ym4v_.bkp

List of Archived Logs in backup set 5
Thrd Seq Low SCN Low Time Next SCN Next Time
---- ------- ---------- --------- ---------- ---------
1 14 354044 16-MAY-19 354066 16-MAY-19

BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
6 Full 8.31M DISK 00:00:00 16-MAY-19 
BP Key: 6 Status: AVAILABLE Compressed: NO Tag: TAG20190516T174612
Piece Name: /u01/app/oracle/fast_recovery_area/ZEDDBA/autobackup/2019_05_16/o1_mf_s_1008438372_gfv4ynmv_.bkp
SPFILE Included: Modification time: 16-MAY-19
SPFILE db_unique_name: ZEDDBA
Control File Included: Ckp SCN: 354077 Ckp time: 16-MAY-19

BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
7 Full 8.31M DISK 00:00:00 17-MAY-19 
BP Key: 7 Status: AVAILABLE Compressed: NO Tag: TAG20190517T165453
Piece Name: /u01/app/oracle/fast_recovery_area/ZEDDBA/autobackup/2019_05_17/o1_mf_s_1008521693_gfxpbfs7_.bkp
SPFILE Included: Modification time: 17-MAY-19
SPFILE db_unique_name: ZEDDBA
Control File Included: Ckp SCN: 458035 Ckp time: 17-MAY-19

BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
8 Incr 0 511.64M DISK 00:00:05 20-MAY-19 
BP Key: 8 Status: AVAILABLE Compressed: NO Tag: INCR LEVEL 0
Piece Name: /u01/app/oracle/fast_recovery_area/ZEDDBA/backupset/2019_05_20/o1_mf_nnnd0_INCR_LEVEL_0_gg5njx01_.bkp
List of Datafiles in backup set 8
File LV Type Ckp SCN Ckp Time Abs Fuz SCN Sparse Name
---- -- ---- ---------- --------- ----------- ------ ----
1 0 Incr 571045 20-MAY-19 NO /u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_system_gftkr3fv_.dbf
2 0 Incr 571045 20-MAY-19 NO /u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_sysaux_gftkr792_.dbf
3 0 Incr 571045 20-MAY-19 NO /u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_undotbs1_gftkr944_.dbf
4 0 Incr 571045 20-MAY-19 NO /u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_users_gftkr9fc_.dbf

BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
9 Full 8.31M DISK 00:00:00 20-MAY-19 
BP Key: 9 Status: AVAILABLE Compressed: NO Tag: TAG20190520T171324
Piece Name: /u01/app/oracle/fast_recovery_area/ZEDDBA/autobackup/2019_05_20/o1_mf_s_1008782004_gg5nk4cs_.bkp
SPFILE Included: Modification time: 20-MAY-19
SPFILE db_unique_name: ZEDDBA
Control File Included: Ckp SCN: 571056 Ckp time: 20-MAY-19

BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
10 Incr 1 11.22M DISK 00:00:02 22-MAY-19 
BP Key: 10 Status: AVAILABLE Compressed: NO Tag: INCR LEVEL 1
Piece Name: /u01/app/oracle/fast_recovery_area/ZEDDBA/backupset/2019_05_22/o1_mf_nnnd1_INCR_LEVEL_1_ggbcfhbp_.bkp
List of Datafiles in backup set 10
File LV Type Ckp SCN Ckp Time Abs Fuz SCN Sparse Name
---- -- ---- ---------- --------- ----------- ------ ----
1 1 Incr 575174 22-MAY-19 NO /u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_system_gftkr3fv_.dbf
2 1 Incr 575174 22-MAY-19 NO /u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_sysaux_gftkr792_.dbf
3 1 Incr 575174 22-MAY-19 NO /u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_undotbs1_gftkr944_.dbf
4 1 Incr 575174 22-MAY-19 NO /u01/app/oracle/oradata/ZEDDBA/datafile/o1_mf_users_gftkr9fc_.dbf

BS Key Type LV Size Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
11 Full 8.31M DISK 00:00:00 22-MAY-19 
BP Key: 11 Status: AVAILABLE Compressed: NO Tag: TAG20190522T120834
Piece Name: /u01/app/oracle/fast_recovery_area/ZEDDBA/autobackup/2019_05_22/o1_mf_s_1008936514_ggbcflon_.bkp
SPFILE Included: Modification time: 22-MAY-19
SPFILE db_unique_name: ZEDDBA
Control File Included: Ckp SCN: 575181 Ckp time: 22-MAY-19

Recovery Manager complete.

Press Enter to continue

Next we look at the file size of the backup piece and we can see the level 1 was just 12M compared to the level 0 which was 512M:

Files size on disk:
/u01/app/oracle/fast_recovery_area/ZEDDBA/backupset/2019_05_16:
total 1.1G
-rw-r-----. 1 oracle oinstall 499M May 16 17:39 o1_mf_nnndf_FULL_BACKUP_gfv4k119_.bkp
-rw-r-----. 1 oracle oinstall 78M May 16 17:46 o1_mf_annnn_FULL_BACKUP_PLUS_ARC_gfv4y7m2_.bkp
-rw-r-----. 1 oracle oinstall 499M May 16 17:46 o1_mf_nnndf_TAG20190516T174603_gfv4yco3_.bkp
-rw-r-----. 1 oracle oinstall 4.0K May 16 17:46 o1_mf_annnn_FULL_BACKUP_PLUS_ARC_gfv4ym4v_.bkp

/u01/app/oracle/fast_recovery_area/ZEDDBA/backupset/2019_05_20:
total 512M
-rw-r-----. 1 oracle oinstall 512M May 20 17:13 o1_mf_nnnd0_INCR_LEVEL_0_gg5njx01_.bkp

/u01/app/oracle/fast_recovery_area/ZEDDBA/backupset/2019_05_22:
total 12M
-rw-r-----. 1 oracle oinstall 12M May 22 12:08 o1_mf_nnnd1_INCR_LEVEL_1_ggbcfhbp_.bkp

Press Enter to continue

Finally, we update the demo log table:

-------------------------------------
Step 3: Updating and viewing demo log
-------------------------------------
Calling 'sqlplus / as sysdba'
To updated and view demo log

1 row created.

Commit complete.

WHEN COMMENTS
------------------------------ --------------------------------------------------
16-MAY-19 05.31.24.000000 PM Enable Archive Log Mode
16-MAY-19 05.40.23.000000 PM Full Backup
16-MAY-19 05.46.17.000000 PM Full Backup plus Archive Logs
17-MAY-19 04.55.12.000000 PM Image Copy
20-MAY-19 05.13.44.000000 PM Incremental Level 0
22-MAY-19 12.09.17.000000 PM Incremental Level 1

6 rows selected.

Press Enter to exit shell script

[oracle@dc1sbxdb001 demo]$
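
For reference, the logging step presumably boils down to a simple insert and query along these lines (the demo_log table name and its columns are assumptions inferred from the output above, not taken from the actual script):

INSERT INTO demo_log VALUES (systimestamp, 'Incremental Level 1');
COMMIT;
SELECT * FROM demo_log;
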
Reference Scripts
  1. 6_incremental_level_1.sh
  2. 6_incremental_level_1.cmd

To download both in one zip: 6_incremental_level_1.zip

The rest of the series
  1. Oracle Database File Placement Best Practice & Create Database Demo
  2. RMAN Full Backup & Demo
  3. RMAN Image Copy & Demo
  4. RMAN Incremental & Demo Part 1 (Level 0)
  5. RMAN Incremental & Demo Part 2 (Level 1)
  6. RMAN Incremental with Block Change Tracking & Demo
  7. RMAN Incremental Differential vs Cumulative & Demo
  8. RMAN Incremental Updating Backup & Demo Part 1
  9. RMAN Incremental Updating Backup & Demo Part 2
  10. Flashback
  11. RMAN Block Media Recovery
  12. RMAN Recover database with only FRA
  13. RMAN Obsolete

Please Note: Links to the blog posts will be released daily and updated here.

 

If you found this blog post useful, please like as well as follow me through my various Social Media avenues available on the sidebar and/or subscribe to this oracle blog via WordPress/e-mail.

Thanks

Zed DBA (Zahid Anwar)

Categories: DBA Blogs

Migrating Oracle Cloud Infrastructure Classic Workloads to Oracle Cloud Infrastructure - Migration Tools

Syed Jaffar - Tue, 2019-05-28 03:22
If you are planning to migrate your resources from Oracle Cloud Infrastructure Classic to Oracle Cloud Infrastructure, Oracle provides a variety of tools to achieve this. This blog post will discuss some of the tools which can be used to migrate Oracle Cloud Infrastructure Classic workload resources to Oracle Cloud Infrastructure.

The tools below can be used to identify resources in Oracle Cloud Infrastructure Classic environments and to migrate them to an Oracle Cloud Infrastructure tenancy. Using these tools, one can set up the required network and migrate VMs and block storage volumes to the target systems.

Tools for migrating infrastructure resources: Compute, VMs and Block Storage

  • Oracle Cloud Infrastructure Classic Discovery and Translation Tool: as the name suggests, it is a discovery tool, which assists in discovering the different resources in your Cloud Infrastructure Classic environments, such as your Compute Classic, Object Storage Classic and Load Balancing Classic accounts. Additionally, it is capable of reporting the items in the specified environment, the list of VMs in the source environment and the networking information of the source system.
  • Oracle Cloud Infrastructure Classic VM and Block Storage Tool: This tool automates the process of migrating VMs and Block Storage over to the target environment. 
  • Oracle Cloud Infrastructure Classic Block Volume Backup and Restore Tool: This tool is used to migrate your remote snapshots of storage volumes as well as scheduled backups.

Tools for migrating databases

To migrate databases to Oracle Cloud Infrastructure, you can use the Oracle Cloud Infrastructure Classic Database Migration Tool. This tool uses Oracle RMAN to back up the database and restore it on the target system.

Alternatively, Oracle Data Guard can also be used to migrate single-instance or RAC databases to Oracle Cloud Infrastructure.

Tools for migrating Object Storage

  • The rclone command is used to migrate your object storage data if you don't use the Oracle Cloud Infrastructure Storage Software Appliance (see the sketch after this list).
  • If Oracle Cloud Infrastructure Storage Software Appliance is used to store your object data, then you can migrate your data to your Oracle Cloud Infrastructure Object Storage account by using the Oracle Cloud Infrastructure Storage Gateway.
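
As an illustration, a bulk copy with rclone might look like the following (the classic: and oci: remote names are assumed to be already configured in your rclone config, and the container/bucket names are placeholders):

# copy every object from a Classic container to an OCI Object Storage bucket
rclone copy classic:source-container oci:target-bucket --progress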





Pages

Subscribe to Oracle FAQ aggregator