DBA Blogs

While studying redo log OP codes, I found uncommon OP codes

Tom Kyte - 43 min 55 sec ago
Hello, I am studying and analyzing Oracle redo logs. They have a pattern for each DML, such as OP 5.1 / OP 11.2 for an INSERT statement. A few days ago I found OP codes that I had not seen before: OP 11.22 and 11.28. Unfortunately, the only information I have about the database producing this redo is that it uses ASM and a 3-node RAC. I would appreciate your opinions on those OP codes.
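Not an answer on what 11.22 and 11.28 specifically are, but two ways to cross-check what the redo actually contains (the file paths below are placeholders): a raw logfile dump shows the layer.opcode of each change vector, and LogMiner decodes most of the same redo into readable operations.
<code>
-- raw dump of a redo or archived log; the trace file shows change vectors as "OP:11.x" etc.
ALTER SYSTEM DUMP LOGFILE '/u01/archive/thread_1_seq_1234.arc';

-- or let LogMiner interpret the same redo
BEGIN
  DBMS_LOGMNR.ADD_LOGFILE('/u01/archive/thread_1_seq_1234.arc', DBMS_LOGMNR.NEW);
  DBMS_LOGMNR.START_LOGMNR(options => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
END;
/

SELECT operation, operation_code, sql_redo
FROM   v$logmnr_contents
WHERE  rownum <= 50;

EXEC DBMS_LOGMNR.END_LOGMNR;
</code>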
Categories: DBA Blogs

weather prediction

Tom Kyte - 43 min 55 sec ago
What info do weather forecasters have that makes them not agree with the computer models?
Categories: DBA Blogs

Checking maximum usage for SGA and PGA

Tom Kyte - 43 min 55 sec ago
Hello Tom, Is there a way to find out the maximum SGA and PGA ever used by an instance? We need to do a strict assessment to check whether the SGA and PGA we have assigned are being completely utilised or not. If they are not 100% utilised, how do we find that out, so that we can release the unused memory back to the OS? In a nutshell: how do we find the maximum SGA and PGA ever used by an instance? Similar to checking the maximum temp space ever used, is it possible to get those details for SGA and PGA? Thanks, Vaibhav
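A couple of starting points (a sketch; these are high-water marks since instance startup, not a complete utilisation assessment):
<code>
-- PGA: high-water mark since instance startup
SELECT name, round(value/1024/1024) mb
FROM   v$pgastat
WHERE  name IN ('maximum PGA allocated', 'total PGA allocated');

-- SGA: current, minimum and maximum sizes of the dynamic components
SELECT component,
       round(current_size/1024/1024) current_mb,
       round(min_size/1024/1024)     min_mb,
       round(max_size/1024/1024)     max_mb
FROM   v$sga_dynamic_components;

-- history of automatic SGA resize operations
SELECT component, oper_type, initial_size, final_size, start_time
FROM   v$sga_resize_ops
ORDER  BY start_time;
</code>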
Categories: DBA Blogs

Replacing BMC Fast Unload with another software

Tom Kyte - 43 min 55 sec ago
Hello, Any advice or suggestion on which tool or third-party software I can use to replace BMC Fast Unload? I have several scripts running that use it, but after an Oracle upgrade I sometimes have problems with the software (compatibility, I guess). I learned that BMC Fast Unload is no longer supported, so I am looking for a solution. I am new to the team and only familiar with running queries/SQL. I am keen to know whether there is a better (or best) software to replace the one used in the scripts, ideally something that won't take much effort or has only minor impact on the scripts. Appreciate any advice/help you can provide.
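Not a like-for-like replacement, but if the existing scripts only need delimited flat-file extracts, plain SQL*Plus (12.2 and later) can spool CSV without any third-party tool; a minimal sketch (the file and table names are just examples):
<code>
-- unload_emp.sql : run as   sqlplus -s scott/tiger@db @unload_emp.sql
SET MARKUP CSV ON QUOTE OFF
SET FEEDBACK OFF TERMOUT OFF
SPOOL emp_unload.csv
SELECT * FROM emp;
SPOOL OFF
EXIT
</code>
For very large tables, Data Pump (expdp) or an external table of type ORACLE_DATAPUMP is usually faster, at the cost of producing a binary file rather than text.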
Categories: DBA Blogs

How can a DBA excel in a Zero Trust environment?

Tom Kyte - 43 min 55 sec ago
Hi, I support a comparatively small project of around twenty OLTP instances on AWS. My customer expects both full auditing and least privilege in all environments -- which hasn't been a problem for code development and artifact promotion. My team lead explains, however, that under Zero Trust, even with full-monty auditing of everything, all DBA activity outside of development is limited to pre-written scripts. No SQLcl, no TOAD, no SQL Developer, no PuTTY. And because we're hosted on AWS, SYS or SYSDBA commands are available only through the RDSADMIN account. I'm asking to learn from you experts whether this description is typical of a Zero Trust shop; and if not, please point me toward sources that might help improve my options. TIA.
Categories: DBA Blogs

Question from USER399341 for Database In-Memory Office Hours - 2021-11-24

Tom Kyte - Wed, 2021-11-24 08:06
The following question was asked, but got attached to the Nov 24th session, which was re-scheduled to the Nov 18th session: Question from USER399341 for Database In-Memory Office Hours - 2021-11-24. I have a table with all transaction amounts (both positive and negative). I have to find a combination of these amounts that equates to a given sum transaction amount stored in another table. There will be tens of thousands of transactions in the first table. I am using Oracle 19c. Appreciate your help.
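Not from the Office Hours session itself, just a sketch of one direct approach: treat it as a subset-sum search with a recursive WITH clause. The table and column names below are made up, and the search space grows exponentially, so with tens of thousands of candidate rows it is only workable after the candidates have been cut down aggressively (or the matching moved into PL/SQL with pruning).
<code>
-- hypothetical tables: txn(txn_id, amount) and target(target_id, total_amount)
WITH combos (txn_ids, running_sum, last_id) AS (
  SELECT CAST(TO_CHAR(txn_id) AS VARCHAR2(4000)), amount, txn_id
  FROM   txn
  UNION ALL
  SELECT c.txn_ids || ',' || t.txn_id,
         c.running_sum + t.amount,
         t.txn_id
  FROM   combos c
  JOIN   txn t ON t.txn_id > c.last_id        -- each transaction used at most once
)
SELECT txn_ids, running_sum
FROM   combos
WHERE  running_sum = (SELECT total_amount FROM target WHERE target_id = :tid);
</code>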
Categories: DBA Blogs

Unnest a nested table with the extracted data in single row

Tom Kyte - Tue, 2021-11-23 13:46
Hi, I have a nested table with three columns within the nested column, and multiple entries for the same ID within it (four in the example below). I want to unnest this table and get the entries as separate columns in a single row. How do I do it? Below is the code:
<code>
create or replace TYPE "TEST" AS OBJECT
( point     NUMBER(3),
  latitude  NUMBER(10),
  longitude NUMBER(10)
);
/

create or replace TYPE "TESTS" IS TABLE OF TEST;
/

CREATE TABLE TEST_TABLE
( "ID"       NUMBER(3,0),
  "LOCATION" "TESTS"
)
SEGMENT CREATION IMMEDIATE
PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 NOCOMPRESS LOGGING
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
        PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
        BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE "SYSTEM"
NESTED TABLE "LOCATION" STORE AS "LOCATIONS"
( PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 LOGGING
  STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
          PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
          BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
  NOCOMPRESS TABLESPACE "SYSTEM"
) RETURN AS VALUE;

Insert into TEST_TABLE (ID, LOCATION) values
  (161, TESTS( TEST(0, 4009716, 50056416),
               TEST(1, 4324450, 51769233),
               TEST(2, 5570283, 51604983),
               TEST(3, 5845666, 49989300) ));

Insert into TEST_TABLE (ID, LOCATION) values
  (162, TESTS( TEST(0, 4862133, 43994149),
               TEST(1, 3183550, 43960533),
               TEST(2, 3970383, 45314300),
               TEST(3, 5032600, 44909200) ));
</code>
Expected output:
<code>
ID  POINT1 LATITUDE1 LONGITUDE1 POINT2 LATITUDE2 LONGITUDE2 POINT3 LATITUDE3 LONGITUDE3 POINT4 LATITUDE4 LONGITUDE4
--- ------ --------- ---------- ------ --------- ---------- ------ --------- ---------- ------ --------- ----------
161      0   4009716   50056416      1   4324450   51769233      2   5570283   51604983      3   5845666   49989300
162      0   4862133   43994149      1   3183550   43960533      2   3970383   45314300      3   5032600   44909200
</code>
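One possible shape for the query (a sketch, not the AskTom answer): unnest the collection with TABLE() and pivot the fixed point values back into columns with conditional aggregation.
<code>
SELECT t.id,
       MAX(CASE l.point WHEN 0 THEN l.point     END) point1,
       MAX(CASE l.point WHEN 0 THEN l.latitude  END) latitude1,
       MAX(CASE l.point WHEN 0 THEN l.longitude END) longitude1,
       MAX(CASE l.point WHEN 1 THEN l.point     END) point2,
       MAX(CASE l.point WHEN 1 THEN l.latitude  END) latitude2,
       MAX(CASE l.point WHEN 1 THEN l.longitude END) longitude2
       -- repeat the pattern for points 2 and 3 (point3/4, latitude3/4, longitude3/4)
FROM   test_table t,
       TABLE(t.location) l
GROUP  BY t.id
ORDER  BY t.id;
</code>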
Categories: DBA Blogs

Json_Transform in Oracle 21c

Tom Kyte - Tue, 2021-11-23 13:46
Team, given this JSON, how do I increment the quantity by two for each element? I tried the below using JSON_TRANSFORM but ended up with an error. Is that possible using the JSON_TRANSFORM function? Kindly help.
<code>
demo@XEPDB1> select json_serialize(y pretty) y
  2  from t
  3  where x =2;

Y
----------------------------------------
{
  "produce" :
  [
    {
      "fruit" : "apple",
      "quantity" : 10
    },
    {
      "fruit" : "orange",
      "quantity" : 15
    }
  ]
}

demo@XEPDB1> select json_serialize(
  2         json_transform( y, set '$.produce[*].quantity' =
  3         '$.produce[*].quantity' + 2 ) pretty )
  4  from t
  5  where x =2;
       '$.produce[*].quantity' + 2 ) pretty )
       *
ERROR at line 3:
ORA-01722: invalid number

demo@XEPDB1>
</code>
I was able to get this done using the JSON object types from PL/SQL. I would like to know whether the same can be done with the JSON_TRANSFORM function in SQL?
<code>
demo@XEPDB1> create or replace function update_json( p_input json )
  2  return json
  3  as
  4     l_data  json_object_t;
  5     l_size  number := 0;
  6     l_ele   json_element_t;
  7     l_array json_array_t;
  8     l_obj   json_object_t;
  9     l_qty   number := 0;
 10  begin
 11     l_data := json_object_t( p_input );
 12
 13     if l_data.has('produce') then
 14        l_ele := l_data.get('produce');
 15        if l_ele.is_array then
 16           l_size := l_ele.get_size()-1;
 17           l_array := json_array_t( l_ele );
 18           for i in 0..l_size
 19           loop
 20              l_obj := treat( l_array.get(i) as json_object_t );
 21              l_qty := l_obj.get_Number('quantity');
 22              l_obj.put( 'quantity', l_qty+2 );
 23           end loop;
 24        end if;
 25     end if;
 26     return l_data.to_json;
 27  end;
 28  /

Function created.

demo@XEPDB1> select json_serialize(y)
  2  from t t1
  3  where x =2;

JSON_SERIALIZE(Y)
--------------------------------------------------------------------------------------
{"produce":[{"fruit":"apple","quantity":10},{"fruit":"orange","quantity":15}]}

demo@XEPDB1> select update_json(y)
  2  from t t1
  3  where x =2;

UPDATE_JSON(Y)
--------------------------------------------------------------------------------
{"produce":[{"fruit":"apple","quantity":12},{"fruit":"orange","quantity":17}]}

demo@XEPDB1>
</code>
Categories: DBA Blogs

SQL Server Migration Assistant for Oracle

Hemant K Chitale - Sat, 2021-11-20 08:08

Here's a Video Demo of the SQL Server Migration Assistant for Oracle: https://youtu.be/zNTF1ncr45g


The tool is available for download here

Categories: DBA Blogs

How to change the file name dynamically for an external table creation (ex ABC_YYYYMMDDHH24MISS)

Tom Kyte - Fri, 2021-11-19 18:06
Team, <i>The version of Oracle we use is "Oracle Database 19c Enterprise Edition Release 19.0.0.0.0".</i> Let's consider our basic scott.emp table for this scenario, limited to a few columns. The external table creation script goes something like this:
<code>
CREATE TABLE employee
( empno NUMBER(4,0),
  ename VARCHAR2(10),
  job   VARCHAR2(9),
  mgr   NUMBER(4,0)
)
ORGANIZATION EXTERNAL
( TYPE oracle_loader
  DEFAULT DIRECTORY scott
  ACCESS PARAMETERS
  ( RECORDS DELIMITED BY NEWLINE
    SKIP 1
    FIELDS TERMINATED BY ","
    MISSING FIELD VALUES ARE NULL
    REJECT ROWS WITH ALL NULL FIELDS
  )
  LOCATION ( 'EMPLOYEE_20211116203018.csv' )
)
REJECT LIMIT UNLIMITED;
</code>
The file name passed here is 'EMPLOYEE_20211116203018.csv'. Its structure is EMPLOYEE_YYYYMMDDHH24MISS (YYYY = 2021, MM = 11, DD = 16, HH24MISS = 203018). If the file name carried only a YYYYMMDD suffix, this could have been handled with dynamic SQL (EXECUTE IMMEDIATE), for example:
<code>
select 'EMPLOYEE_'||to_char(sysdate,'YYYYMMDD')||'.csv' AS tablename from dual;

CREATE TABLE employee
( empno NUMBER(4,0),
  ename VARCHAR2(10),
  job   VARCHAR2(9),
  mgr   NUMBER(4,0)
)
ORGANIZATION EXTERNAL
( TYPE oracle_loader
  DEFAULT DIRECTORY scott
  ACCESS PARAMETERS
  ( RECORDS DELIMITED BY NEWLINE
    SKIP 1
    FIELDS TERMINATED BY ","
    MISSING FIELD VALUES ARE NULL
    REJECT ROWS WITH ALL NULL FIELDS
  )
  LOCATION ( tablename )
)
REJECT LIMIT UNLIMITED;
</code>
But unfortunately the file name includes HH24MISS along with YYYYMMDD, and the timestamp is not a fixed value; it can vary at least in the minutes and seconds. Any code snippet that handles the timestamp as well in the external table creation script is most welcome and appreciated. Regards, Satyam Reddy.
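One possible direction (a sketch, not the AskTom answer): if the loading job already knows the exact file name (for example it receives it as a parameter, or lists the directory), the external table can be re-pointed at run time with ALTER TABLE ... LOCATION before it is queried. The procedure name and the call below are made up for illustration, and the file name should be validated before use since it is concatenated into a DDL string.
<code>
create or replace procedure point_employee_at( p_file_name in varchar2 )
as
begin
   -- repoint the existing external table at the newly arrived file
   execute immediate
      'alter table employee location (''' || p_file_name || ''')';
end;
/

begin
   point_employee_at('EMPLOYEE_20211116203018.csv');
end;
/

select * from employee;
</code>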
Categories: DBA Blogs

Move data to warehouse

Tom Kyte - Thu, 2021-11-18 23:46
I am looking for the best way to move data from an Oracle 19c OLTP system (size 13 TB) to an Oracle 19c warehouse database. We are currently using the standby database to fetch the modified rows and add them to the warehouse system. This process is taking a lot of time.
Categories: DBA Blogs

dblink blocked

Tom Kyte - Thu, 2021-11-18 23:46
Hi, I have a simple SQL like "SELECT max(case when day=:1 then tra else 0 end) tra, max(case when tra=1 and day<:2 then day else '0' end) lasttradingday FROM settlement.t_calendar@dl_otcdb WHERE day <= :3". The query often appears blocked. I cannot find any blocking session in v$session. After killing the session, the query works again when re-run. How could it be blocked? The query runs on 19c, and the remote database is 11g.
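One diagnostic angle (a sketch): while the statement appears hung, check what the local session is actually waiting on. If the event is 'SQL*Net message from dblink', the local 19c session is simply waiting for the remote 11g database to respond, so any blocking has to be looked for in the remote instance's V$SESSION / V$LOCK rather than locally.
<code>
-- on the local 19c database, while the statement appears hung (:hung_sid is the session's SID)
SELECT sid, status, event, state, seconds_in_wait, sql_id
FROM   v$session
WHERE  sid = :hung_sid;

-- if event = 'SQL*Net message from dblink', repeat the investigation on the
-- remote 11g database: find the session created by the database link there
-- and check what it is waiting on or blocked by.
</code>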
Categories: DBA Blogs

RMAN Recover Database

Tom Kyte - Thu, 2021-11-18 23:46
Hi Team, I have a disaster recovery scenario using RMAN.

RMAN backup settings:
- The database is backed up with an RMAN incremental level 0 every Sunday
- The database is backed up with an RMAN incremental level 1 daily (Mon-Sat)
- The archive logs are backed up every 15 minutes
- The controlfile is configured for autobackup in RMAN
- The SPFILE is configured for autobackup in RMAN
- Backup sets are stored on separate backup server storage (SBT)
- The environment runs on Windows x64
- The virtual machine is backed up daily

Scenario:
1. Sunday level 0 database backup was successful (01:00H)
2. Monday level 1 database backup was successful (01:00H)
3. Tuesday level 1 database backup was successful (01:00H)
4. All archivelog backups (15-minute interval) were successful
5. At around 10 AM Tuesday the server crashed, and it turned out that the restore point of the server/machine was Monday.
6. The machine was restored from the Monday backup. At this point the database is back to its Monday state.

My question: since the database was restored back to its Monday state, is it possible to roll the database forward to Tuesday, since the date of failure is Tuesday? What would the strategy be? Thank you AskTom.
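A rough outline of one possible direction (a sketch only, not a tested runbook, and not the AskTom answer): since the level 1 and archivelog backup pieces live on the separate SBT media server, they should still cover Monday 01:00 through shortly before the Tuesday crash, so the Monday-state database can in principle be rolled forward with RMAN. The UNTIL TIME value below is made up.
<code>
RMAN> STARTUP NOMOUNT;
-- the controlfile restored with the Monday VM image does not know about
-- Tuesday's backups; restore a newer controlfile from autobackup first
-- (SET DBID <dbid> may be needed when no recovery catalog is used)
RMAN> RUN {
        ALLOCATE CHANNEL c1 DEVICE TYPE sbt;
        RESTORE CONTROLFILE FROM AUTOBACKUP;
      }
RMAN> ALTER DATABASE MOUNT;
RMAN> RUN {
        ALLOCATE CHANNEL c1 DEVICE TYPE sbt;
        SET UNTIL TIME "TO_DATE('2021-11-16 09:45','YYYY-MM-DD HH24:MI')";
        RECOVER DATABASE;   -- applies Tuesday's level 1 plus archived redo
      }
-- point-in-time (incomplete) recovery, so the database must be opened with RESETLOGS
RMAN> ALTER DATABASE OPEN RESETLOGS;
</code>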
Categories: DBA Blogs

Oracle Groundbreakers APAC Virtual Tour 2021: “Automatic Indexing: An Update On Improvements and New Capabilities”

Richard Foote - Thu, 2021-11-18 17:43
I’m very excited to have another opportunity to present at the upcoming APACOUC Oracle Groundbreakers APAC Virtual Tour 2021 a somewhat expanded version of my new presentation “Automatic Indexing: An Update On Improvements and New Capabilities”. This massive virtual conference runs from 22 November through to 11 December and features some of the very best […]
Categories: DBA Blogs

How to return result of a query in chunks of 100 rows at a time.

Tom Kyte - Thu, 2021-11-18 05:46
Hi Tom, I have a requirement where users want that, on the screen where we display the result set of a query, only 100 rows are shown at a time; when the user clicks a button on the screen (named Next), the next 100 rows of the query result should be displayed, and so on until the result ends. Can I handle this scenario in PL/SQL/SQL? Many thanks to you for providing such an interface to us. Regards, Shreem
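For reference (a sketch, assuming Oracle 12c or later; the table, ordering column and bind name are made up): the row-limiting clause handles the "next 100 rows" pattern directly, with the page number supplied from the screen.
<code>
-- :page_no starts at 0 for the first page of 100 rows
SELECT *
FROM   orders
ORDER  BY order_id          -- a deterministic order (e.g. a unique key) is needed,
                            -- otherwise rows can repeat or go missing between pages
OFFSET :page_no * 100 ROWS
FETCH  NEXT 100 ROWS ONLY;
</code>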
Categories: DBA Blogs

Mapping system GENERATED PARTITION names

Tom Kyte - Thu, 2021-11-18 05:46
I recently converted a few tables to use INTERVAL partitioning, which worked fine. Since the conversion, the partition names have changed to SYS_####. Unbeknownst to me, there were some applications referencing partitions by name (as ridiculous as it sounds), i.e. SELECT count(*) FROM table PARTITION (P_MMDDYYYY), which no longer works. Is there some sort of function or procedure that I can give these developers to map a system-generated partition name back to the P_MMDDYYYY format? I'm getting blow-back because some still insist on using the partition name.
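For context (a sketch; the ORDERS table and the date literal are made up): with interval partitioning the generated SYS_P names never need to be referenced directly. The PARTITION FOR clause addresses a partition by any value it contains, and USER_TAB_PARTITIONS exposes each generated name together with its upper bound if a name-to-date mapping is still wanted.
<code>
-- address the partition by a value instead of by its SYS_Pnnnn name
SELECT COUNT(*)
FROM   orders PARTITION FOR (DATE '2021-11-16');

-- or build a mapping of generated name -> upper bound
SELECT partition_name, high_value
FROM   user_tab_partitions
WHERE  table_name = 'ORDERS'
ORDER  BY partition_position;
</code>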
Categories: DBA Blogs

Find query causing NOLOGGING

Tom Kyte - Thu, 2021-11-18 05:46
Hello team, RMAN's REPORT UNRECOVERABLE is showing that we have NOLOGGING operations in the database. Is there a way to find out which query performed the NOLOGGING operation? Thanks for your help.
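A starting point (a sketch, not a complete answer): V$DATAFILE records the SCN and time of the last unrecoverable (NOLOGGING direct-path) change against each file, which is the same information REPORT UNRECOVERABLE works from; that timestamp can then be correlated with ASH/AWR or audit data to pin down the statement.
<code>
-- which files were touched by NOLOGGING operations, and when
SELECT name, unrecoverable_change#, unrecoverable_time
FROM   v$datafile
WHERE  unrecoverable_time IS NOT NULL
ORDER  BY unrecoverable_time;
</code>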
Categories: DBA Blogs

Ways to find why a SQL Plan Baseline is not being used

Tom Kyte - Thu, 2021-11-18 05:46
Hi Chris/Connor. We have a production 12c database (12.2.0.1) and have enabled a particular SQL plan baseline for a high-load query, but for some reason it's not being used. The baseline is ENABLED and FIXED, and the REPRODUCED column reads YES, but I can't see a date in LAST_EXECUTED or LAST_REPRODUCED, and neither can I see the plan name in the Note section when executing DBMS_XPLAN.DISPLAY_CURSOR. The plan is the only one in the baseline and plan history. I have checked that the tables and indexes named in it exist and are valid (and visible, for the indexes), no remote tables are referenced, and the SQL is being executed from the same schema specified in PARSING_SCHEMA_NAME of DBA_SQL_PLAN_BASELINES. I want to know what options I have to determine why the plan is not being used (i.e. a data dictionary view that shows why a particular plan was rejected, or similar). Does the optimizer record its decision for not using a SQL plan baseline anywhere? Thanks in advance for your help.
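One option (a sketch; the /* spm_diag_run */ comment is only there to force a fresh hard parse, and the query itself is a placeholder for your statement): a 10053 optimizer trace of the statement includes SQL Plan Management diagnostics, and lines starting with "SPM:" in the trace file state whether the baseline was found for the signature and whether its plan could be reproduced.
<code>
ALTER SESSION SET EVENTS '10053 trace name context forever, level 1';

-- re-run the statement so it is hard parsed in this session
SELECT /* spm_diag_run */ ... ;   -- the original high-load query goes here

ALTER SESSION SET EVENTS '10053 trace name context off';

-- locate the trace file and search it for "SPM:"
SELECT value FROM v$diag_info WHERE name = 'Default Trace File';
</code>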
Categories: DBA Blogs

Row lock contention

Tom Kyte - Thu, 2021-11-18 05:46
Hi Tom, This is regarding a locking issue in the database. One fine Monday morning the client asked me to check slowness in the system reported by a customer on Saturday. The ASH report for the issue period looks like this:
<code>
Blocking Sid (Inst)   % Activity   Event Caused                    % Event   User              Program
1132, 3397( 2)              9.12   enq: TX - row lock contention      9.12   ** NOT FOUND **   BLOCKING SESSION NOT FOUND
</code>
From the report it is clear that there was row lock contention for almost 2 hours (the same update query was present in AWR for almost 2 hours). So I ran the query below against DBA_HIST_ACTIVE_SESS_HISTORY to check further:
<code>
SELECT DISTINCT A.SQL_ID,
       TO_CHAR(A.SAMPLE_TIME,'YYYY-MM-DD HH24:MI:SS') SAMPLE_TIME,
       A.BLOCKING_SESSION,
       D.OBJECT_NAME
FROM   DBA_HIST_ACTIVE_SESS_HISTORY A, GV$SQL S, DBA_OBJECTS D
WHERE  A.SQL_ID = S.SQL_ID
AND    BLOCKING_SESSION IS NOT NULL
AND    A.USER_ID <> 0
AND    A.CURRENT_OBJ# = D.OBJECT_ID
AND    A.SAMPLE_TIME BETWEEN TO_TIMESTAMP('06.11.2021 13:00:00', 'dd.mm.yyyy hh24:mi:ss')
                         AND TO_TIMESTAMP('06.11.2021 16:40:59', 'dd.mm.yyyy hh24:mi:ss')
AND    A.EVENT = 'enq: TX - row lock contention'
ORDER  BY SAMPLE_TIME DESC;
</code>
The output confirmed that the blocking session is 1132 (as shown in the ASH report). Now when I queried the same table for session id 1132, sample output is below:
<code>
session id   sql_id
1132         aykcpznxu0k8c
1132         cgj479fjtszs0
1132         8ryy5pw5gjbn6
1132         0pcgv72dynnfv
1132         2npxyk4vq7575
</code>
So here I am stuck, and my questions are:
1. Since multiple SQLs ran under the same session id, how do I uniquely identify the blocking SQL?
2. All of the above SQLs for session id 1132 are SELECT statements. Can a SELECT statement cause a lock if the query runs for 30 minutes and the underlying data changes in between? Thanks
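One way to narrow this down (a sketch, not the AskTom answer): pull the blocker's own ASH samples for the same window, matching on the serial# and instance shown in the ASH report line, and look at what session 1132 was doing sample by sample; the XID column also lets you tie the waiters to the blocking transaction rather than to a single SQL_ID.
<code>
-- what was the blocking session (sid 1132, serial# 3397, instance 2) doing during the window?
SELECT sample_time, sql_id, event, session_state, xid
FROM   dba_hist_active_sess_history
WHERE  session_id = 1132
AND    session_serial# = 3397
AND    instance_number = 2
AND    sample_time BETWEEN TO_TIMESTAMP('06.11.2021 13:00:00','dd.mm.yyyy hh24:mi:ss')
                       AND TO_TIMESTAMP('06.11.2021 16:40:59','dd.mm.yyyy hh24:mi:ss')
ORDER  BY sample_time;
</code>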
Categories: DBA Blogs
