Tuesday, December 21, 2021

How to trace a specific SQL statement executed by a certain user

A question asking exactly this appeared on oracle-l today here. It is widely known how to trace a specific SQL statement at the system level, e.g.:

alter system set events 'sql_trace[sql:<some_sql_id>]';
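
The event stays in effect until the instance is restarted or until it is explicitly disabled, which should work with the off keyword:

alter system set events 'sql_trace[sql:<some_sql_id>] off';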

What if we want to trace the same SQL statement only when it is executed by a specific user? A common approach is to create a LOGON trigger and run ALTER SESSION SET EVENTS in it (a sketch of that approach follows the example below). It turns out that it is also possible with event filters and the username() function. For example, to trace the SQL_ID bqka14bvd2zmb run by the user TC1 we can use:

alter system set events 'sql_trace[sql:bqka14bvd2zmb] {streq:username(),"TC1"}';
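
For comparison, the trigger-based approach could look roughly like this; a minimal sketch, assuming the trigger name is mine, with no error handling:

create or replace trigger trg_trace_tc1
after logon on database
begin
  -- fire only for the user of interest (a sketch, not production code)
  if sys_context('userenv', 'session_user') = 'TC1' then
    execute immediate
      q'{alter session set events 'sql_trace[sql:bqka14bvd2zmb]'}';
  end if;
end;
/

The filter-based event avoids creating any database objects, which is one reason to prefer it.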

Here is a full example, which is also available as a Gist here:

[oracle@db-21 ~]$ sqlplus /nolog @q

SQL*Plus: Release 21.0.0.0.0 - Production on Tue Dec 21 15:54:22 2021
Version 21.4.0.0.0

Copyright (c) 1982, 2021, Oracle.  All rights reserved.

SQL> conn / as sysdba
Connected.
SQL> alter session set container=pdb;

Session altered.

SQL>
SQL> drop user tc1 cascade;

User dropped.

SQL> drop user tc2 cascade;

User dropped.

SQL>
SQL> grant create session, select_catalog_role to tc1 identified by tc1;

Grant succeeded.

SQL> grant create session, select_catalog_role to tc2 identified by tc2;

Grant succeeded.

SQL>
SQL> col sql_id old_v sql_id
SQL> select dbms_sql_translator.sql_id(q'#select 'trace_me' from dual#') sql_id from dual;

SQL_ID
--------------------------------------------------------------------------------
bqka14bvd2zmb

SQL>
SQL> alter system set events 'sql_trace[sql:&sql_id.] {streq:username(),"TC1"}';
old   1: alter system set events 'sql_trace[sql:&sql_id.] {streq:username(),"TC1"}'
new   1: alter system set events 'sql_trace[sql:bqka14bvd2zmb] {streq:username(),"TC1"}'

System altered.

SQL>
SQL> conn tc1/tc1@db-21/pdb
Connected.
SQL>
SQL> select 'trace_me' from dual;

'TRACE_M
--------
trace_me

SQL> col trace_file old_v trace_file
SQL> select value trace_file from v$diag_info where name='Default Trace File';

TRACE_FILE
--------------------------------------------------------------------------------
/u01/app/oracle/diag/rdbms/orcl/orcl/trace/orcl_ora_5455.trc

SQL>
SQL> !cat &trace_file.
Trace file /u01/app/oracle/diag/rdbms/orcl/orcl/trace/orcl_ora_5455.trc
Oracle Database 21c Enterprise Edition Release 21.0.0.0.0 - Production
Version 21.4.0.0.0
Build label:    RDBMS_21.3.0.0.0_LINUX.X64_210727
ORACLE_HOME:    /u01/app/oracle/product/21.0.0/dbhome_1
System name:    Linux
Node name:      db-21.localdomain
Release:        5.4.17-2136.300.7.el8uek.x86_64
Version:        #2 SMP Fri Oct 8 16:23:01 PDT 2021
Machine:        x86_64
VM name:        KVM CPUID feature flags: 0x01000089
Instance name: orcl
Redo thread mounted by this instance: 1
Oracle process number: 48
Unix process pid: 5455, NID: 4026531836, image: oracle@db-21.localdomain


*** 2021-12-21T15:54:23.397300+00:00 (PDB(3))
*** SESSION ID:(82.1864) 2021-12-21T15:54:23.397327+00:00
*** CLIENT ID:() 2021-12-21T15:54:23.397344+00:00
*** SERVICE NAME:(pdb) 2021-12-21T15:54:23.397360+00:00
*** MODULE NAME:(SQL*Plus) 2021-12-21T15:54:23.397377+00:00
*** ACTION NAME:() 2021-12-21T15:54:23.397394+00:00
*** CLIENT DRIVER:(SQL*PLUS) 2021-12-21T15:54:23.397409+00:00
*** CONTAINER ID:(3) 2021-12-21T15:54:23.397426+00:00
*** CLIENT IP:(10.0.2.15) 2021-12-21T15:54:23.397449+00:00

=====================
PARSING IN CURSOR #140557434715864 len=332 dep=1 uid=0 oct=3 lid=0 tim=1528633612 hv=2698389488 ad='85cb8630' sqlid='acmvv4fhdc9zh'
select obj#,type#,ctime,mtime,stime, status, dataobj#, flags, oid$, spare1, spare2, spare3, signature, spare7, spare8, spare9, nvl(dflcollid, 16382), creappid, creverid, modappid, modverid, crepatchid, modpatchid from obj$ where owner#=:1 and name=:2 and namespace=:3 and remoteowner is null and linkname is null and subname is null
END OF STMT
EXEC #140557434715864:c=0,e=25,p=0,cr=0,cu=0,mis=0,r=0,dep=1,og=4,plh=813480514,tim=1528633611
FETCH #140557434715864:c=19,e=19,p=0,cr=3,cu=0,mis=0,r=0,dep=1,og=4,plh=813480514,tim=1528634393
CLOSE #140557434715864:c=0,e=0,dep=1,type=3,tim=1528634426
=====================
PARSING IN CURSOR #140557434718920 len=27 dep=0 uid=120 oct=3 lid=120 tim=1528635472 hv=4140924523 ad='7aa8feb0' sqlid='bqka14bvd2zmb'
select 'trace_me' from dual
END OF STMT
EXEC #140557434718920:c=30,e=30,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=1,plh=1388734953,tim=1528635471
FETCH #140557434718920:c=5,e=6,p=0,cr=0,cu=0,mis=0,r=1,dep=0,og=1,plh=1388734953,tim=1528635612
STAT #140557434718920 id=1 cnt=1 pid=0 pos=1 obj=0 op='FAST DUAL  (cr=0 pr=0 pw=0 str=1 time=1 us cost=2 size=0 card=1)'
FETCH #140557434718920:c=0,e=0,p=0,cr=0,cu=0,mis=0,r=0,dep=0,og=0,plh=1388734953,tim=1528636086
CLOSE #140557434718920:c=0,e=7,dep=0,type=0,tim=1528637068

SQL>
SQL> conn tc2/tc2@db-21/pdb
Connected.
SQL>
SQL> select 'trace_me' from dual;

'TRACE_M
--------
trace_me

SQL> col trace_file old_v trace_file
SQL> select value trace_file from v$diag_info where name='Default Trace File';

TRACE_FILE
--------------------------------------------------------------------------------
/u01/app/oracle/diag/rdbms/orcl/orcl/trace/orcl_ora_5458.trc

SQL>
SQL> !cat &trace_file.
cat: /u01/app/oracle/diag/rdbms/orcl/orcl/trace/orcl_ora_5458.trc: No such file or directory

As expected, no trace file is produced for the TC2 session, which confirms that the filter works. Here I make use of the streq filter and the username function:

SQL> oradebug doc event filter streq

streq: filter to only fire an event when string s1 = s2 (up to <len> characters)

Usage
-------
{streq:  s1              <string>[256],
         s2              <string>[256],
         len             <ub4> }

SQL> oradebug doc event action username
username
        - Return user log-in name
Usage
-------
username( Action returns: <string>[30])
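
The full catalogue of available filters and actions can be browsed the same way; these oradebug doc commands should list them all:

SQL> oradebug doc event filter
SQL> oradebug doc event action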

Saturday, December 18, 2021

Mythbusters: VARRAY faster than CLOB

There has been a tweet recently claiming that VARRAY is faster than CLOB: link. With that data set and SQL*Plus, whether the claim holds largely depends on the hardware underneath the database and on the network between the client and the server. More specifically, VARRAY is faster only over a rather slow network.

A typical production environment that I work with includes one or more databases deployed across different availability zones (AZ) in the cloud. The applications reside in the same AZ as the database server to avoid inter-AZ traffic, which costs extra money. I tested the script from this Gist across two major cloud providers, and VARRAY was never faster than CLOB; in fact, it was significantly slower. See the output from 19.13 below (the script is from the Gist - I just added the last query with DBMS_LOB):
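
For reference, the lob_to_varray function used below converts a CLOB into a SYS.ODCIVARCHAR2LIST collection. Its exact definition is in the Gist; judging by the physical reads in the output, it presumably slices the CLOB with plain SUBSTR, roughly like this reconstruction:

create or replace function lob_to_varray(p_lob clob)
  return sys.odcivarchar2list
is
  v_result sys.odcivarchar2list := sys.odcivarchar2list();
  v_pos    pls_integer := 1;
  v_len    pls_integer := dbms_lob.getlength(p_lob);
begin
  -- slice the CLOB into 4000-character chunks
  while v_pos <= v_len loop
    v_result.extend;
    v_result(v_result.count) := substr(p_lob, v_pos, 4000);
    v_pos := v_pos + 4000;
  end loop;
  return v_result;
end;
/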

[oracle@rac2 ~]$ NLS_LANG=.AL32UTF8 sqlplus tc/tc@rac1:1522/pdb1 @test1

SQL*Plus: Release 19.0.0.0.0 - Production on Sat Dec 18 10:54:03 2021
Version 19.13.0.0.0

Copyright (c) 1982, 2021, Oracle.  All rights reserved.

Last Successful login time: Sat Dec 18 2021 10:53:17 +00:00

Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.13.0.0.0

SQL> set lobprefetch 32767
SQL> set long 10000000
SQL> set longchunksize 10000000
SQL> set timing on;
SQL> set arraysize 1000;
SQL> --set feedback only
SQL> set autotrace trace stat;
SQL> select id,c_lob from t_lob_1_mb where id<=25;

25 rows selected.

Elapsed: 00:00:01.50

Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
         82  consistent gets
       6475  physical reads
          0  redo size
   51655001  bytes sent via SQL*Net to client
      15345  bytes received via SQL*Net from client
         52  SQL*Net roundtrips to/from client
          0  sorts (memory)
          0  sorts (disk)
         25  rows processed

SQL> select id,lob_to_varray(c_lob) c_varray from t_lob_1_mb where id<=25;

25 rows selected.

Elapsed: 00:00:14.10

Statistics
----------------------------------------------------------
         33  recursive calls
          0  db block gets
        199  consistent gets
     653950  physical reads
          0  redo size
   25107664  bytes sent via SQL*Net to client
       9261  bytes received via SQL*Net from client
         61  SQL*Net roundtrips to/from client
         25  sorts (memory)
          0  sorts (disk)
         25  rows processed

SQL>
SQL> select
  2    c_varray
  3  from t_lob_1_mb
  4       outer apply (
  5         select
  6           cast(
  7             collect(
  8                cast(substr(c_lob,(level-1)*4000 + 1,4000) as varchar2(4000))
  9                )
 10             as sys.odcivarchar2list
 11           ) c_varray
 12         from dual
 13         connect by level<=ceil(length(c_lob)/4000)
 14       )
 15  where id<=25;

25 rows selected.

Elapsed: 00:00:14.16

Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
        176  consistent gets
     653950  physical reads
          0  redo size
   25106012  bytes sent via SQL*Net to client
       9086  bytes received via SQL*Net from client
         58  SQL*Net roundtrips to/from client
         50  sorts (memory)
          0  sorts (disk)
         25  rows processed

SQL>
SQL> select
  2    c_varray
  3  from t_lob_1_mb
  4       outer apply (
  5         select
  6           cast(
  7             collect(
  8                dbms_lob.substr(c_lob,4000,(level-1)*4000 + 1)
  9                )
 10             as sys.odcivarchar2list
 11           ) c_varray
 12         from dual
 13         connect by level<=ceil(length(c_lob)/4000)
 14       )
 15  where id<=25;

25 rows selected.

Elapsed: 00:00:03.19

Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
        176  consistent gets
      24900  physical reads
          0  redo size
   25106012  bytes sent via SQL*Net to client
       9071  bytes received via SQL*Net from client
         58  SQL*Net roundtrips to/from client
         50  sorts (memory)
          0  sorts (disk)
         25  rows processed

NB: the DBMS_LOB query is not part of the original tweet, but I wrote about why SUBSTR should not be used against LOBs back in 2019: Temporary LOBs.
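
The temporary LOB overhead that plain SUBSTR incurs can be observed directly, for instance by checking the session's counters in V$TEMPORARY_LOBS before and after a run:

select cache_lobs, nocache_lobs, abstract_lobs
  from v$temporary_lobs
 where sid = sys_context('userenv', 'sid');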

As I said, this is quite a typical cloud environment in which the client is on a different VM than the DB server. I can get even better results with CLOB if I run the same script on the DB server itself, or if I use a proximity placement group (Azure) or a cluster placement group (AWS).

It can be seen that twice as much data is transferred with CLOB as with the other queries (50MB vs 25MB). It is a known issue that has already been observed by several other authors, e.g. LOB reads. In the specific example from this post, the extra CLOB data becomes noticeable only on a slow network, e.g. when I pull data from a different continent over mobile broadband. That is not the case in most environments (including non-production ones) that I work with: there, CLOB is faster than VARRAY (1.50 seconds vs 14.16 seconds). As always, rather than relying on any claim, such as the tweet above, it is better to test for yourself, as this post demonstrates.
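
The transferred volume can also be read from the session statistics rather than autotrace; something along these lines:

select sn.name, ms.value
  from v$mystat ms
  join v$statname sn
    on sn.statistic# = ms.statistic#
 where sn.name in ('bytes sent via SQL*Net to client',
                   'bytes received via SQL*Net from client');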

Friday, December 17, 2021

Using credentials with database links in 21c

Since 21c, it is possible to use credential objects in database links. Here is a short demonstration of this functionality:

SQL> exec dbms_credential.create_credential('TC_CRED', 'TC', 'tc')

PL/SQL procedure successfully completed.

SQL>
SQL> create database link link1 connect with tc_cred using 'localhost/pdb';

Database link created.

SQL> create database link link2 connect with tc_cred using 'localhost/pdb';

Database link created.

SQL>
SQL> select * from dual@link1;

D
-
X

SQL> select * from dual@link2;

D
-
X
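
Existing credential objects and the usernames they map to can be listed through the DBA_CREDENTIALS dictionary view (a quick sanity check, not part of the original demo):

select owner, credential_name, username, enabled
  from dba_credentials;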

The SQL Language Reference has not been updated with the new syntax yet. If we alter the user's password, the existing DB links stop working (I do not consider gradual password rollover here):

SQL> alter user tc identified by tc2;

User altered.

SQL>
SQL> alter session close database link link1;

Session altered.

SQL> alter session close database link link2;

Session altered.

SQL>
SQL> select * from dual@link1;
select * from dual@link1
                   *
ERROR at line 1:
ORA-01017: invalid username/password; logon denied
ORA-02063: preceding line from LINK1

It is enough to update the credential object to make the DB links work again:

SQL> exec dbms_credential.update_credential('TC_CRED', 'PASSWORD', 'tc2')

PL/SQL procedure successfully completed.

SQL>
SQL> select * from dual@link1;

D
-
X

SQL> select * from dual@link2;

D
-
X

Conclusion

This functionality really comes into its own when you reuse one username and password pair in multiple database links. If we want to change the username or password, there is no need to alter each link anymore; we can update a single credential object instead. The functionality has been backported to 19c as well: Bug 29541929 - support credential objects in database links (Doc ID 29541929.8).

Thursday, December 9, 2021

gridSetup.sh executeConfigTools fails with PRVG-13606 : chrony daemon is not synchronized with any external time source

The command failed with the following errors in the log:

/u01/app/19.3.0/grid/gridSetup.sh -executeConfigTools -responseFile /opt/rsp/gi_19.3_config.rsp -silent
...
INFO:  [Dec 8, 2021 8:20:51 AM] Verifying Clock Synchronization ...FAILED
INFO:  [Dec 8, 2021 8:20:51 AM] Skipping line: Verifying Clock Synchronization ...FAILED
INFO:  [Dec 8, 2021 8:20:51 AM]   Verifying Network Time Protocol (NTP) ...FAILED
INFO:  [Dec 8, 2021 8:20:51 AM] Skipping line:   Verifying Network Time Protocol (NTP) ...FAILED
INFO:  [Dec 8, 2021 8:20:51 AM]     Verifying chrony daemon is synchronized with at least one external time
INFO:  [Dec 8, 2021 8:20:51 AM] Skipping line:     Verifying chrony daemon is synchronized with at least one external time
INFO:  [Dec 8, 2021 8:20:51 AM]     source ...FAILED
INFO:  [Dec 8, 2021 8:20:51 AM] Skipping line:     source ...FAILED
INFO:  [Dec 8, 2021 8:20:51 AM]     rac2: PRVG-13606 : chrony daemon is not synchronized with any external time
INFO:  [Dec 8, 2021 8:20:51 AM] Skipping line:     rac2: PRVG-13606 : chrony daemon is not synchronized with any external time
INFO:  [Dec 8, 2021 8:20:51 AM]           source on node "rac2".
INFO:  [Dec 8, 2021 8:20:51 AM] Skipping line:           source on node "rac2".
INFO:  [Dec 8, 2021 8:20:51 AM] Skipping line:
INFO:  [Dec 8, 2021 8:20:51 AM]     rac1: PRVG-13606 : chrony daemon is not synchronized with any external time
INFO:  [Dec 8, 2021 8:20:51 AM] Skipping line:     rac1: PRVG-13606 : chrony daemon is not synchronized with any external time
INFO:  [Dec 8, 2021 8:20:51 AM]           source on node "rac1".
INFO:  [Dec 8, 2021 8:20:51 AM] Skipping line:           source on node "rac1".

It can be easily reproduced by running CVU:

[grid@rac1 ~]$ cluvfy comp clocksync -n rac1 -verbose

Verifying Clock Synchronization ...
  Node Name                             Status
  ------------------------------------  ------------------------
  rac1                                  passed

  Node Name                             State
  ------------------------------------  ------------------------
  rac1                                  Observer

CTSS is in Observer state. Switching over to clock synchronization checks using NTP

  Verifying Network Time Protocol (NTP) ...
    Verifying '/etc/chrony.conf' ...
    Node Name                             File exists?
    ------------------------------------  ------------------------
    rac1                                  yes

    Verifying '/etc/chrony.conf' ...PASSED
    Verifying Daemon 'chronyd' ...
    Node Name                             Running?
    ------------------------------------  ------------------------
    rac1                                  yes

    Verifying Daemon 'chronyd' ...PASSED
    Verifying NTP daemon or service using UDP port 123 ...
    Node Name                             Port Open?
    ------------------------------------  ------------------------
    rac1                                  yes

    Verifying NTP daemon or service using UDP port 123 ...PASSED
    Verifying chrony daemon is synchronized with at least one external time source ...FAILED (PRVG-13606)
  Verifying Network Time Protocol (NTP) ...FAILED (PRVG-13606)
Verifying Clock Synchronization ...FAILED (PRVG-13606)

Verification of Clock Synchronization across the cluster nodes was unsuccessful on all the specified nodes.


Failures were encountered during execution of CVU verification request "Clock Synchronization across the cluster nodes".

Verifying Clock Synchronization ...FAILED
  Verifying Network Time Protocol (NTP) ...FAILED
    Verifying chrony daemon is synchronized with at least one external time
    source ...FAILED
    rac1: PRVG-13606 : chrony daemon is not synchronized with any external time
          source on node "rac1".


CVU operation performed:      Clock Synchronization across the cluster nodes
Date:                         Dec 9, 2021 10:56:45 AM
CVU home:                     /u01/app/19.3.0/grid/
User:                         grid

If we want to get more details, CVU tracing can be enabled:

[grid@rac1 ~]$ rm -rf /tmp/cvutrace
[grid@rac1 ~]$ mkdir /tmp/cvutrace
[grid@rac1 ~]$ export CV_TRACELOC=/tmp/cvutrace
[grid@rac1 ~]$ export SRVM_TRACE=true
[grid@rac1 ~]$ export SRVM_TRACE_LEVEL=1
[grid@rac1 ~]$ cluvfy comp clocksync -n rac1 -verbose

This produces the following lines in the trace file /tmp/cvutrace/cvutrace.log.0:

[main] [ 2021-12-09 10:58:10.179 UTC ] [VerificationUtil.traceAndLogInternal:16755]  [TaskNTP.doChronyTimeSourceCheck:2465] status=SUCCESSFUL; vfyCode=0; output=MS Name/IP address         Stratum Poll Reach Last
Rx Last sample
===============================================================================
^* 169.254.169.123               3   7   377   128  +1141ns[+4603ns] +/-  501us

[main] [ 2021-12-09 10:58:10.179 UTC ] [TaskAnonymousProxy.<init>:119]  Defining proxy task with: 'chrony daemon is synchronized with at least one external time source'
nodeList: 'rac1'
from task: 'TaskNTP'
Called from: 'TaskNTP.performNTPChecks:889'
[main] [ 2021-12-09 10:58:10.179 UTC ] [ResultSet.overwriteResultSet:810]  Overwriting ResultSet, called from: TaskAnonymousProxy.performAnonymousTask:148
[main] [ 2021-12-09 10:58:10.179 UTC ] [CVUVariables.getCVUVariable:607]  variable name : MODE_API
[main] [ 2021-12-09 10:58:10.179 UTC ] [CVUVariables.getCVUVariable:643]  Variable found in the CVU and Command Line context
[main] [ 2021-12-09 10:58:10.179 UTC ] [CVUVariables.resolve:981]  ForcedLookUp not enabled for variable:MODE_API
[main] [ 2021-12-09 10:58:10.179 UTC ] [CVUVariables.secureVariableValueTrace:789]  getting CVUVariableConstant : VAR = MODE_API VAL = FALSE
[main] [ 2021-12-09 10:58:10.180 UTC ] [ResultSet.traceResultSet:1040]

ResultSet AFTER overwrite ===>
        Overall Status->VERIFICATION_FAILED

        Uploaded Overall Status->UNKNOWN

        HasNodeResults: true


        contents of resultTable

Dumping Result data.
  Status     : VERIFICATION_FAILED
  Name       : rac1
  Type       : Node
  Has Results: No
  Exp. value : null
  Act. value : null

  Errors  :
    PRVG-13606 : chrony daemon is not synchronized with any external time source on node "rac1".

The same configuration steps completed successfully on Oracle Linux (OL) 8.4, so I started looking for what had changed. It turns out that the chronyc sources output format changed between releases.
OL 8.5 with chrony-4.1-1.0.1.el8.x86_64:

[root@rac1 ~]# chronyc sources
MS Name/IP address         Stratum Poll Reach LastRx Last sample
===============================================================================
^* 169.254.169.123               3   7   377    10   -590ns[+4002ns] +/-  494us

OL 8.4 with chrony-3.5-2.0.1.el8.x86_64:

210 Number of sources = 1
MS Name/IP address         Stratum Poll Reach LastRx Last sample               
===============================================================================
^* 169.254.169.123               3   6   377    39  +4827ns[  +10us] +/-  560us

The '210 Number of sources' line is absent in OL 8.5, which is presumably what CVU's output parser trips over.
There are two workarounds that can be used.

Use ORA_DISABLED_CVU_CHECKS

[grid@rac1 ~]$ ORA_DISABLED_CVU_CHECKS=TASKNTP cluvfy comp clocksync -n rac1 -verbose

Verifying Clock Synchronization ...
  Node Name                             Status
  ------------------------------------  ------------------------
  rac1                                  passed

  Node Name                             State
  ------------------------------------  ------------------------
  rac1                                  Observer
  Verifying Network Time Protocol (NTP) ...WARNING (PRVG-11640)
Verifying Clock Synchronization ...WARNING (PRVG-11640)

Verification of Clock Synchronization across the cluster nodes was successful.


Warnings were encountered during execution of CVU verification request "Clock Synchronization across the cluster nodes".

Verifying Clock Synchronization ...WARNING
  Verifying Network Time Protocol (NTP) ...WARNING
  rac1: PRVG-11640 : The check "Network Time Protocol (NTP)" was not performed
        as it is disabled


CVU operation performed:      Clock Synchronization across the cluster nodes
Date:                         Dec 9, 2021 11:03:41 AM
CVU home:                     /u01/app/19.3.0/grid/
User:                         grid

Amend chronyc temporarily to produce the desired output

This is only to validate our hypothesis about the cause of the validation error; the original binary is restored once the check passes (see below).

[root@rac1 ~]# mv /usr/bin/chronyc{,.orig}
[root@rac1 ~]# vi /usr/bin/chronyc
[root@rac1 ~]# chmod a+x /usr/bin/chronyc
[root@rac1 ~]# /usr/bin/chronyc sources
210 Number of sources = 1
MS Name/IP address         Stratum Poll Reach LastRx Last sample
===============================================================================
^* 169.254.169.123               3   8   377    21    -32us[-5361ns] +/-  578us
[root@rac1 ~]# cat /usr/bin/chronyc
#!/bin/bash
echo '210 Number of sources = 1'
/usr/bin/chronyc.orig "$@"

Then we can validate the change by running CVU:

[grid@rac1 ~]$ cluvfy comp clocksync -n rac1 -verbose

Verifying Clock Synchronization ...
  Node Name                             Status
  ------------------------------------  ------------------------
  rac1                                  passed

  Node Name                             State
  ------------------------------------  ------------------------
  rac1                                  Observer

CTSS is in Observer state. Switching over to clock synchronization checks using NTP

  Verifying Network Time Protocol (NTP) ...
    Verifying '/etc/chrony.conf' ...
    Node Name                             File exists?
    ------------------------------------  ------------------------
    rac1                                  yes

    Verifying '/etc/chrony.conf' ...PASSED
    Verifying Daemon 'chronyd' ...
    Node Name                             Running?
    ------------------------------------  ------------------------
    rac1                                  yes

    Verifying Daemon 'chronyd' ...PASSED
    Verifying NTP daemon or service using UDP port 123 ...
    Node Name                             Port Open?
    ------------------------------------  ------------------------
    rac1                                  yes

    Verifying NTP daemon or service using UDP port 123 ...PASSED
    Verifying chrony daemon is synchronized with at least one external time source ...PASSED
  Verifying Network Time Protocol (NTP) ...PASSED
Verifying Clock Synchronization ...PASSED

Verification of Clock Synchronization across the cluster nodes was successful.

CVU operation performed:      Clock Synchronization across the cluster nodes
Date:                         Dec 9, 2021 11:05:36 AM
CVU home:                     /u01/app/19.3.0/grid/
User:                         grid
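
Once the check passes and the configuration tools complete, the original binary should be restored; this cleanup step is mine and was not part of the original run:

[root@rac1 ~]# mv -f /usr/bin/chronyc{.orig,}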

Conclusion

The requirement to have an external NTP server is questionable: there are hardware clock sources that chrony can use, such as the TimeSync PTP service on Azure. This post demonstrates how to debug NTP issues during Grid Infrastructure installations and CVU checks. One generic way to work around such issues is to disable the offending check by setting ORA_DISABLED_CVU_CHECKS. If an issue is caused by Oracle utilities expecting certain output, we might as well tweak system programs temporarily to produce the desired output.