
Sunday, April 30, 2017

Asynchronous Global Index Maintenance and Recycle Bin

Got Oracle error ORA-38301: "can not perform DDL/DML over objects in Recycle Bin" in an alert.log:
ORCL(3):Errors in file /u01/app/oracle/diag/rdbms/orcl12c/orcl12c/trace/orcl12c_j000_10274.trc:
ORA-12012: error on auto execute of job "SYS"."PMO_DEFERRED_GIDX_MAINT_JOB"
ORA-38301: can not perform DDL/DML over objects in Recycle Bin
ORA-06512: at "SYS.DBMS_PART", line 131
ORA-06512: at "SYS.DBMS_PART", line 131
ORA-06512: at "SYS.DBMS_PART", line 120
ORA-06512: at line 1
The ORA-12012 line shows that the error relates to the automatic scheduler job SYS.PMO_DEFERRED_GIDX_MAINT_JOB, which was introduced in the Oracle Database 12.1 release.
It is responsible for cleaning up global indexes after drop/truncate partition operations: link.
I delved into the issue and found that the job SYS.PMO_DEFERRED_GIDX_MAINT_JOB does not ignore objects in the Recycle Bin.
Here is a short test case to demonstrate that:
SQL> select banner from v$version;

BANNER
--------------------------------------------------------------------------------
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production
PL/SQL Release 12.2.0.1.0 - Production
CORE    12.2.0.1.0      Production
TNS for Linux: Version 12.2.0.1.0 - Production
NLSRTL Version 12.2.0.1.0 - Production

SQL> create table t(x)
  2  partition by range(x)
  3  (
  4    partition values less than(maxvalue)
  5  )
  6  as
  7  select 1
  8    from dual;

Table created.

SQL>
SQL> create index t_i on t(x);

Index created.

SQL>
SQL> alter table t truncate partition for(1) update indexes;

Table truncated.

SQL>
SQL> select status, orphaned_entries
  2    from ind
  3   where index_name = 'T_I';

STATUS   ORP
-------- ---
VALID    YES

SQL>
SQL> drop table t;

Table dropped.
Now I run the job SYS.PMO_DEFERRED_GIDX_MAINT_JOB manually, as if it were running automatically:
SQL> select run_count, failure_count, state from user_scheduler_jobs where job_name='PMO_DEFERRED_GIDX_MAINT_JOB';

 RUN_COUNT FAILURE_COUNT STATE
---------- ------------- --------------------
         4             0 SCHEDULED

SQL>  exec dbms_scheduler.run_job( 'PMO_DEFERRED_GIDX_MAINT_JOB', false)

PL/SQL procedure successfully completed.

SQL>  select run_count, failure_count, state from user_scheduler_jobs where job_name='PMO_DEFERRED_GIDX_MAINT_JOB';

 RUN_COUNT FAILURE_COUNT STATE
---------- ------------- --------------------
         5             1 SCHEDULED

The job SYS.PMO_DEFERRED_GIDX_MAINT_JOB calls the DBMS_PART.CLEANUP_GIDX_INTERNAL procedure.
The same ORA-38301 error can be raised if I call the DBMS_PART.CLEANUP_GIDX procedure:
SQL> exec DBMS_PART.CLEANUP_GIDX (user)
BEGIN DBMS_PART.CLEANUP_GIDX (user); END;

*
ERROR at line 1:
ORA-38301: can not perform DDL/DML over objects in Recycle Bin
ORA-06512: at "SYS.DBMS_PART", line 131
ORA-06512: at "SYS.DBMS_PART", line 131
ORA-06512: at "SYS.DBMS_PART", line 120
ORA-06512: at "SYS.DBMS_PART", line 193
ORA-06512: at line 1
The problem here is that the automatic job stops further processing on the first error.
It means that we may end up with lots of indexes requiring cleanup that are never processed automatically and need manual intervention to reset their state.
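For reference, a minimal sketch of such a manual cleanup (the schema and index names are placeholders):

-- list global indexes that still have orphaned entries
select owner, index_name
  from dba_indexes
 where orphaned_entries = 'YES';

-- clean up a single index
alter index scott.t_i coalesce cleanup;

-- or clean up all such indexes in one schema
exec dbms_part.cleanup_gidx('SCOTT')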

tl;dr. Although the Asynchronous Global Index Maintenance feature is quite useful and can greatly speed up the partition maintenance operations TRUNCATE PARTITION and DROP PARTITION, it still does not ignore objects in the Recycle Bin.
Therefore, the automatic maintenance job PMO_DEFERRED_GIDX_MAINT_JOB may fail and leave some of the indexes that require cleanup unprocessed.

Wednesday, March 15, 2017

SQL*Plus 12.2 and searching path of login.sql

Having installed the Oracle Database 12.2 client, I noticed that the login.sql script, which is placed in a custom directory specified by SQLPATH, is no longer invoked.
Here is my login.sql:
[oracle@localhost]$ cat /tmp/sqlpath/login.sql 
select 'login.sql invoked' output 
  from dual;
The login.sql script is not invoked when I connect through SQL*Plus 12.2, despite the fact that the SQLPATH environment variable is set:
[oracle@localhost]$ export SQLPATH=/tmp/sqlpath
[oracle@localhost]$ sqlplus tc/tc@ora12

SQL*Plus: Release 12.2.0.1.0 Production on Wed Mar 15 09:20:52 2017

Copyright (c) 1982, 2016, Oracle.  All rights reserved.

Last Successful login time: Wed Mar 15 2017 09:16:06 +07:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Advanced Analytics,
Real Application Testing and Unified Auditing options

SQL> 

This issue is documented in SQL*Plus 12.2.0.1.0 Change in Behavior for Search Path of Login.sql (SQL*Plus User Profile Script) (Doc ID 2241021.1).
Unsurprisingly, if I set ORACLE_PATH, then login.sql is invoked:
[oracle@localhost]$ export ORACLE_PATH=$SQLPATH
[oracle@localhost]$ unset SQLPATH
[oracle@localhost]$ sqlplus tc/tc@ora12

SQL*Plus: Release 12.2.0.1.0 Production on Wed Mar 15 09:21:13 2017

Copyright (c) 1982, 2016, Oracle.  All rights reserved.

Last Successful login time: Wed Mar 15 2017 09:20:52 +07:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Advanced Analytics,
Real Application Testing and Unified Auditing options


OUTPUT
---------------------------------------------------
login.sql invoked

SQL> 

The MOS note also mentions that this new behaviour may affect earlier releases once the corresponding PSU or CI is released for them.
I have no idea why Oracle changed the existing login.sql functionality, but it is definitely something to keep in mind if you are going to upgrade to the new release.
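As a practical precaution, a minimal sketch of a profile entry that keeps login.sql working for both pre-12.2 and 12.2 clients (the directory is just a placeholder):

# point both variables at the same login.sql directory
export SQLPATH=/home/oracle/sqlpath
export ORACLE_PATH=$SQLPATH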
Interestingly, SQLcl still honors SQLPATH even when both SQLPATH and ORACLE_PATH are set:
[oracle@localhost]$ cat /tmp/sqlpath/login.sql
select 'login.sql invoked' output 
  from dual;
[oracle@localhost]$ cat /tmp/oracle_path/login.sql
select 'oracle_path invoked'
  from dual;
[oracle@localhost]$ export ORACLE_PATH=/tmp/oracle_path
[oracle@localhost]$ export SQLPATH=/tmp/sqlpath
[oracle@localhost]$ ./sql tc/tc@ora12

SQLcl: Release 12.2.0.1.0 RC on Ср мар 15 09:42:08 2017

Copyright (c) 1982, 2017, Oracle.  All rights reserved.

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Advanced Analytics,
Real Application Testing and Unified Auditing options


OUTPUT           
-----------------
login.sql invoked

Thursday, February 16, 2017

ORA-3180 on Active Data Guard standby database

Got a call asking me to advise on the cause of an ORA-03180 error on an Active Data Guard standby database instance:
SQL> explain plan for select * from dual;
explain plan for select * from dual
                        *
ERROR at line 1:
ORA-00604: error occurred at recursive SQL level 1
ORA-03180: Sequence values cannot be allocated for Oracle Active Data Guard standby.
It is a documented fact that sequences can be used within an Oracle Active Data Guard physical standby database: Using Sequences in Oracle Active Data Guard.
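For context, a minimal sketch of that documented usage; the sequence name is a placeholder and, as documented, it relies on the default CACHE and NOORDER options:

-- on the primary
create sequence seq_adg cache 20 noorder;

-- on the Active Data Guard standby: each new cache range is requested
-- from the primary, which is why the standby needs a connection to it
select seq_adg.nextval from dual;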
What I have not found, either on MOS or in the documentation, is a description of the algorithm the standby uses to identify the primary database.
I have been tinkering with the issue for a while, so this blog post is about my findings.
First, the most useful source of information about this error is the server process trace file. Here is an excerpt from it:
*** 2017-02-09 13:12:59.339
*** SESSION ID:(51.12274) 2017-02-09 13:12:59.339
*** CLIENT ID:() 2017-02-09 13:12:59.339
*** SERVICE NAME:(SYS$USERS) 2017-02-09 13:12:59.339
*** MODULE NAME:(sqlplus@misha2 (TNS V1-V3)) 2017-02-09 13:12:59.339
*** CLIENT DRIVER:(SQL*PLUS) 2017-02-09 13:12:59.339
*** ACTION NAME:() 2017-02-09 13:12:59.339

krsd_get_primary_connect_string: found pcs 'adg3' by reverse lookup
Connected to primary database target adg3
*** 2017-02-09 13:12:59.372643 3981 krsb.c
krsb_stream_dispatch: Error 604 during streaming operation to destination 1
*** 2017-02-09 13:12:59.372842 2178 krsu.c
krsu_rmi_send_recv: Encountered error 604 sending message to connection 1
*** 2017-02-09 13:12:59.372867 2023 krsu.c
krsu_rmi_lwc_send_recv: Encountered error status 604 sending RMI message to adg3
kdn_sseq_so_primary: Encountered send recv exception 604
I was curious about what "by reverse lookup" meant. I speculate that it means that the primary database TNS alias (or the fully formed TNS descriptor) is obtained from one of the LOG_ARCHIVE_DEST_n parameters (let us call them LADn for short).
And conversely, there is a forward lookup by the FAL_SERVER parameter:
krsd_get_primary_connect_string: found pcs 'adg1' by FAL_SERVER lookup
Connected to primary database target adg1
krsd_get_primary_connect_string: found pcs 'adg1' by FAL_SERVER lookup
Connected to primary database target adg1
It seems that Oracle considers either the LADn or the FAL_SERVER parameter when it tries to identify the primary database connect identifier to request a sequence cache.
I have done several tests in my sandbox Data Guard environment, and I think that LADn takes precedence over FAL_SERVER.
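A quick way to see both sets of candidates on the standby is a sketch like the following (trimmed to the columns that matter here):

-- remote destinations that name a DB_UNIQUE_NAME (candidates for the reverse lookup)
select dest_id, db_unique_name, destination, valid_role
  from v$archive_dest
 where db_unique_name is not null;

-- candidates for the forward lookup
show parameter fal_server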
All of the tests were performed on the following Data Guard configuration:
DGMGRL> show configuration;

Configuration - adg

  Protection Mode: MaxAvailability
  Members:
  adg1 - Primary database
    adg2 - Physical standby database 
    adg3 - Physical standby database 

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS   (status updated 10 seconds ago)
I ran my scripts in adg2, which is a physical standby database, and the databases have DBBP 12.1.0.2.161018 applied.
Here is what I was observing while doing my experiments:
1. The first LADn with db_unique_name=<primary_db> is selected. The corresponding LOG_ARCHIVE_DEST_STATE_n and VALID_FOR settings are ignored.
Here is an example where I set VALID_FOR=(ALL_LOGFILES,STANDBY_ROLE) and LOG_ARCHIVE_DEST_STATE_4=DEFER, and, still, that LADn was used to identify the primary database TNS alias:
SQL> alter system set fal_server='adg1' log_archive_dest_4='service=non_existent valid_for=(all_logfiles,standby_role) db_unique_name=adg1' log_archive_dest_state_4=defer;

System altered.

SQL> conn / as sysdba
Connected.
SQL> explain plan for select * from dual;
explain plan for select * from dual
                        *
ERROR at line 1:
ORA-00604: error occurred at recursive SQL level 1
ORA-03180: Sequence values cannot be allocated for Oracle Active Data Guard standby.

-- trace file
krsd_get_primary_connect_string: found pcs 'non_existent' by reverse lookup
OCIServerAttach failed -1
.. Detailed OCI error val is 12154 and errmsg is 'ORA-12154: TNS:could not resolve the connect identifier specified
'
OCIServerAttach failed -1
.. Detailed OCI error val is 12154 and errmsg is 'ORA-12154: TNS:could not resolve the connect identifier specified
'
OCIServerAttach failed -1
.. Detailed OCI error val is 12154 and errmsg is 'ORA-12154: TNS:could not resolve the connect identifier specified
'
OCIServerAttach failed -1
.. Detailed OCI error val is 12154 and errmsg is 'ORA-12154: TNS:could not resolve the connect identifier specified
'
*** 2017-02-09 14:05:22.153950 4929 krsh.c
Error 12154 received logging on to the standby
*** 2017-02-09 14:05:22.153969 1460 krsu.c
krsu_rmi_lwc_connect: Encountered error status 12154 attempting connection to non_existent
non_existent: Encountered connect exception 12154
2. When no LADn with db_unique_name=<primary_db> is present, the FAL_SERVER parameter is used. It is traversed sequentially from left to right:
SQL> alter system set fal_server='x','adg1','y';

System altered.

SQL> explain plan for select * from dual;

Explained.

-- trace file
*** 2017-02-09 14:42:50.944
*** SESSION ID:(47.31048) 2017-02-09 14:42:50.944
*** CLIENT ID:() 2017-02-09 14:42:50.944
*** SERVICE NAME:(SYS$USERS) 2017-02-09 14:42:50.944
*** MODULE NAME:(sqlplus@userhost (TNS V1-V3)) 2017-02-09 14:42:50.944
*** CLIENT DRIVER:(SQL*PLUS) 2017-02-09 14:42:50.944
*** ACTION NAME:() 2017-02-09 14:42:50.944
 
OCIServerAttach failed -1
.. Detailed OCI error val is 12154 and errmsg is 'ORA-12154: TNS:could not resolve the connect identifier specified
'
OCIServerAttach failed -1
.. Detailed OCI error val is 12154 and errmsg is 'ORA-12154: TNS:could not resolve the connect identifier specified
'
OCIServerAttach failed -1
.. Detailed OCI error val is 12154 and errmsg is 'ORA-12154: TNS:could not resolve the connect identifier specified
'
OCIServerAttach failed -1
.. Detailed OCI error val is 12154 and errmsg is 'ORA-12154: TNS:could not resolve the connect identifier specified
'
*** 2017-02-09 14:42:50.960591 4929 krsh.c
Error 12154 received logging on to the standby
*** 2017-02-09 14:42:50.960676 4929 krsh.c
FAL[client, USER]: Error 12154 connecting to x for fetching gap sequence
ORA-12154: TNS:could not resolve the connect identifier specified
krsd_get_primary_connect_string: found pcs 'adg1' by FAL_SERVER lookup
Connected to primary database target adg1 
OCIServerAttach failed -1
.. Detailed OCI error val is 12154 and errmsg is 'ORA-12154: TNS:could not resolve the connect identifier specified
'
OCIServerAttach failed -1
.. Detailed OCI error val is 12154 and errmsg is 'ORA-12154: TNS:could not resolve the connect identifier specified
'
OCIServerAttach failed -1
.. Detailed OCI error val is 12154 and errmsg is 'ORA-12154: TNS:could not resolve the connect identifier specified
'
OCIServerAttach failed -1
.. Detailed OCI error val is 12154 and errmsg is 'ORA-12154: TNS:could not resolve the connect identifier specified
'
*** 2017-02-09 14:42:51.071116 4929 krsh.c
Error 12154 received logging on to the standby
*** 2017-02-09 14:42:51.071188 4929 krsh.c
FAL[client, USER]: Error 12154 connecting to x for fetching gap sequence
ORA-12154: TNS:could not resolve the connect identifier specified
krsd_get_primary_connect_string: found pcs 'adg1' by FAL_SERVER lookup
Connected to primary database target adg1
How could one come across this issue?
1. Have incorrect parameter settings and do not use Data Guard Broker. That is exactly what happened to my client.
2. We can actually face this issue even when using Data Guard Broker. The broker does not change the FAL_SERVER parameter on switchover, at least that is how it works now. See, for example, this link.
One possible workaround is to manually run the DGMGRL "enable configuration" command each time a switchover takes place, as in the sketch below.
I raised an SR with Oracle about this problem but decided not to pursue a permanent solution.
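A minimal sketch of that workaround, with a placeholder connect string, to be run right after each switchover:

[oracle@localhost]$ dgmgrl sys@adg1
DGMGRL> enable configuration;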

Monday, January 30, 2017

DBMS_METADATA.GET_SXML_DDL may produce incorrect DDL for a trigger in an EBR environment

I have recently come across a case in which DBMS_METADATA.GET_SXML_DDL returns incorrect DDL for a trigger in an EBR (Edition-Based Redefinition) environment.
Here is a test case to reproduce the issue, which is present in 12.1.0.2.170117 and 11.2.0.4.161018:
SQL> grant connect, create table, create view, create trigger to tc identified by tc;

Grant succeeded.

SQL> 
SQL> alter user tc enable editions for view,trigger;

User altered.

SQL> 
SQL> conn tc/tc
Connected.
SQL> 
SQL> create table t (
  2    x int)
  3  /

Table created.

SQL> 
SQL> create or replace editioning view ev
  2  as
  3  select *
  4    from t
  5  /

View created.

SQL> 
SQL> create or replace trigger trg
  2  before update of x on ev
  3  declare
  4  begin
  5    null;
  6  end;
  7  /

Trigger created.
Let us look at the DDL of the trigger returned by the DBMS_METADATA.GET_DDL and DBMS_METADATA.GET_SXML_DDL functions.
SQL> select dbms_metadata.get_ddl( 'TRIGGER', 'TRG') from dual;

DBMS_METADATA.GET_DDL('TRIGGER','TRG')
--------------------------------------------------------------------------------

  CREATE OR REPLACE EDITIONABLE TRIGGER "TC"."TRG"
before update of x on ev
declare
begin
  null;
end;
ALTER TRIGGER "TC"."TRG" ENABLE


SQL> select dbms_metadata.get_sxml_ddl( 'TRIGGER', 'TRG') from dual;

DBMS_METADATA.GET_SXML_DDL('TRIGGER','TRG')
--------------------------------------------------------------------------------
  CREATE OR REPLACE TRIGGER "TC"."TRG"
  BEFORE UPDATE OF X ON "TC"."EV"
  declare
begin
  null;
end;
They are pretty much the same. Now I change the status of my trigger. Note the line with the trigger column list, which is where the outputs start to differ:
SQL> alter trigger trg enable;

Trigger altered.

SQL> 
SQL> select dbms_metadata.get_ddl( 'TRIGGER', 'TRG') from dual;

DBMS_METADATA.GET_DDL('TRIGGER','TRG')
--------------------------------------------------------------------------------

  CREATE OR REPLACE EDITIONABLE TRIGGER "TC"."TRG"
before update of x on ev
declare
begin
  null;
end;
ALTER TRIGGER "TC"."TRG" ENABLE


SQL> select dbms_metadata.get_sxml_ddl( 'TRIGGER', 'TRG') from dual;

DBMS_METADATA.GET_SXML_DDL('TRIGGER','TRG')
--------------------------------------------------------------------------------
  CREATE OR REPLACE TRIGGER "TC"."TRG"
  BEFORE UPDATE OF X, X ON "TC"."EV"
  declare
begin
  null;
end;
I have ended up with two X columns in the output of DBMS_METADATA.GET_SXML_DDL, which is, in fact, a non-working DDL statement.
I played with the issue some more and eventually obtained four X columns, and that is not the limit:
SQL> alter trigger trg disable;

Trigger altered.

SQL> 
SQL> select dbms_metadata.get_ddl( 'TRIGGER', 'TRG') from dual;

DBMS_METADATA.GET_DDL('TRIGGER','TRG')
--------------------------------------------------------------------------------

  CREATE OR REPLACE EDITIONABLE TRIGGER "TC"."TRG"
before update of x on ev
declare
begin
  null;
end;
ALTER TRIGGER "TC"."TRG" DISABLE


SQL> select dbms_metadata.get_sxml_ddl( 'TRIGGER', 'TRG') from dual;

DBMS_METADATA.GET_SXML_DDL('TRIGGER','TRG')
--------------------------------------------------------------------------------
  CREATE OR REPLACE TRIGGER "TC"."TRG"
  BEFORE UPDATE OF X, X, X ON "TC"."EV"
  declare
begin
  null;
end;
  ALTER TRIGGER "TC"."TRG" DISABLE


SQL> 
SQL> alter trigger trg enable;

Trigger altered.

SQL> 
SQL> select dbms_metadata.get_ddl( 'TRIGGER', 'TRG') from dual;

DBMS_METADATA.GET_DDL('TRIGGER','TRG')
--------------------------------------------------------------------------------

  CREATE OR REPLACE EDITIONABLE TRIGGER "TC"."TRG"
before update of x on ev
declare
begin
  null;
end;
ALTER TRIGGER "TC"."TRG" ENABLE


SQL> select dbms_metadata.get_sxml_ddl( 'TRIGGER', 'TRG') from dual;

DBMS_METADATA.GET_SXML_DDL('TRIGGER','TRG')
--------------------------------------------------------------------------------
  CREATE OR REPLACE TRIGGER "TC"."TRG"
  BEFORE UPDATE OF X, X, X, X ON "TC"."EV"
  declare
begin
  null;
end;
The DBMS_METADATA.GET_SXML_DDL function uses a SELECT statement similar to the following to obtain the DDL:
SELECT /*+all_rows*/ 
       SYS_XMLGEN(
         VALUE(KU$), 
         XMLFORMAT.createFormat2('TRIGGER_T', '7')), 
       KU$.OBJ_NUM 
  FROM SYS.KU$_TRIGGER_VIEW KU$ 
 WHERE NOT (KU$.BASE_OBJ IS NOT NULL AND BITAND(KU$.BASE_OBJ.FLAGS,128)!=0) 
   AND KU$.SCHEMA_OBJ.NAME = 'TRG' 
   AND KU$.SCHEMA_OBJ.OWNER_NAME = 'TC';
The column list for the trigger comes from this part of the SYS.KU$_TRIGGER_VIEW view:
          cast(multiset(select * from ku$_triggercol_view tv
                        where tv.obj_num=t.obj#
                      ) as ku$_triggercol_list_t
             ),
The code of the KU$_TRIGGERCOL_VIEW view is:
CREATE OR REPLACE FORCE NONEDITIONABLE VIEW "SYS"."KU$_TRIGGERCOL_VIEW" OF "SYS"."KU$_TRIGGERCOL_T"
  WITH OBJECT IDENTIFIER (obj_num,intcol_num,type_num) AS
  select '1','0',
         tc.obj#, tc.col#, tc.type#, tc.position#, tc.intcol#, c.name,
         (select a.name from attrcol$ a where
                        a.obj#=tc.obj# and a.intcol#=tc.intcol#)
  from col$ c, triggercol$ tc, trigger$ t
  where tc.obj#=t.obj#
    and c.obj#=t.baseobject
    and c.intcol#=tc.intcol#
And the final part of the puzzle is the TRIGGERCOL$ table:
SQL> select * from sys.triggercol$ where obj#=(select object_id from dba_objects where owner='TC' and object_name='TRG');

      OBJ#       COL#      TYPE#  POSITION#    INTCOL#
---------- ---------- ---------- ---------- ----------
    214352          1          0          0          1
    214352          1          0          0          1
    214352          1          0          0          1
    214352          1          0          0          1
    214352          1       1024          0          1
A new row appears each time I execute an "ALTER TRIGGER ENABLE/DISABLE" statement. I think Oracle developers should filter these rows, since there is only one row with TYPE#=1024.
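For illustration only, a minimal sketch of the kind of filter meant here, assuming (as the data above suggests) that the TYPE#=1024 row is the one to keep:

select tc.obj#, tc.intcol#, c.name
  from sys.col$ c, sys.triggercol$ tc, sys.trigger$ t
 where tc.obj# = t.obj#
   and c.obj# = t.baseobject
   and c.intcol# = tc.intcol#
   and tc.type# = 1024;  -- assumption: keep only the TYPE#=1024 row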
DBMS_METADATA.GET_DDL returns correct DDL, and it appears to come from the DDL text originally supplied by the user.
Conversely, DBMS_METADATA.GET_SXML_DDL tries to reconstruct the user's DDL and messes things up along the way.