Exploring Kafka


In this blog we will explore the basics of Kafka, an open-source distributed streaming platform originally developed at LinkedIn & donated to the Apache Software Foundation.

Kafka is generally used for building real-time streaming data applications, so that you can stream live data from source systems with minimal delay. Following are some capabilities of Kafka:

  • Publishing streams of records.
  • Storing streams of records in a fault-tolerant way.
  • Subscribing to/consuming streams of records.

Download Kafka:

Download the latest Kafka from kafka.apache.org and untar it.
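
For reference, extracting the downloaded archive on Linux looks something like this (the file name below is just an example; use the version you actually downloaded):

# untar the downloaded archive and move into the Kafka directory
tar -xzf kafka_2.13-2.4.0.tgz
cd kafka_2.13-2.4.0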


Start Zookeeper Server:

First you have to start the ZooKeeper server, as it is used by Kafka.

bin/zookeeper-server-start.sh config/zookeeper.properties

Once your ZooKeeper server has started, leave that terminal open & do not close it. Just note down the port number.
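
If you want to confirm the port, it is defined in config/zookeeper.properties (2181 by default), for example:

# check which client port ZooKeeper listens on (default is 2181)
grep clientPort config/zookeeper.properties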

Start kafka-server:

Now once the zookeeper server is up and running, you can start the kafka-server.

bin/kafka-server-start.sh config/server.properties

Once your Kafka server has started, leave that terminal open & do not close it.

Create Topic:

A topic is where producer applications push their streams of records. A producer can push records to multiple topics. Let's create a sample topic for the demo.

bin/kafka-topics.sh --zookeeper localhost:2181 --create --topic firsttopic --partitions 1 --replication-factor 1
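
To verify the topic and its partition/replication settings, you can describe it, for example:

# describe the newly created topic
bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic firsttopic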

List all topics:

bin/kafka-topics.sh --list --zookeeper localhost:2181

You can delete a topic using:

bin/kafka-topics.sh --delete --zookeeper localhost:2181 --topic <<topic-name>>
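
Note: on older Kafka versions, topic deletion only takes effect if delete.topic.enable=true is set in the broker configuration. You can check whether it is set, for example:

# no output means the property is not set and the broker default applies
grep delete.topic.enable config/server.properties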

Produce Data:

Send streams of records to a Kafka topic using the console producer.

bin/kafka-console-producer.sh --broker-list localhost:9092 --topic firsttopic

Consuming Data:

Receive streams of records from a Kafka topic using the console consumer.

bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic firsttopic --from-beginning
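
The console consumer registers itself under a consumer group; as a quick check you can list the groups known to the broker, for example:

# list consumer groups registered with the broker
bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 --list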

In real-time applications you have to integrate the Kafka APIs for producing & consuming data. Kafka has five core APIs:

  • Producer API – To publish a stream of records to one or more Kafka topics
  • Consumer API – To subscribe to one or more topics
  • Streams API – To convert input streams into output streams
  • Connector API – To build reusable connectors to external data sources
  • Admin API – For managing Kafka objects

Refer to kafka.apache.org/intro for more details.


Thanks!

Happy Learning! Your feedback would be appreciated!

GitHub Repository – Configuring/Cloning


In this blog we will explore the basics of a Git repository. Git is an open-source version control system for tracking changes in source code during software development. I have used github.com to create a repository for object versioning. We will explore the below topics in this blog.

  • Installing Git on Linux and Windows & configuring it.
  • Generating an SSH key for authenticating with your GitHub account.
  • Cloning an existing GitHub repository into the local environment.
  • Pushing changes from the local cloned repository to the remote repository.

Install git:

  • To install Git on Linux: sudo apt-get install git
  • For Windows you can download it from git-scm.com. Check the version after installation.


The below configuration is required:

git config --global user.name "<<username>>"
git config --global user.email "<<email>>"
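
You can verify the configuration afterwards, for example:

# list the configured global settings
git config --global --list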

Generating SSH Keys:

You can set up SSH keys so that you don't have to provide a username & password for the Git repository every time you push changes. Refer to help.github.com for more details.

The below commands create the public & private SSH keys for your system.

$ ssh-keygen -t rsa -b 4096 -C "your_email@example.com"

$ ssh-add ~/.ssh/id_rsa

Key for Linux & Windows:


You need to copy the public key present in the id_rsa.pub file & add it in your GitHub settings. You can have multiple keys in a GitHub account.
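
A quick way to print the public key and, once it has been added in GitHub, test the connection:

# print the public key so it can be copied into the GitHub SSH keys settings
cat ~/.ssh/id_rsa.pub

# test SSH authentication against GitHub
ssh -T git@github.com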


Cloning an existing GitHub repository:

You can clone the remote repository into your local environment. After that you can make changes & later push them to the remote location.

I created a repository using github.com.


Repository created: I have some files in it. Copy the SSH link so that we can clone the repo.


git clone git@github.com:shobhit-singh/test-dev-repo.git

After making some changes to any file, you can check the status of Git.

git status


Pushing local changes to remote repository:

To push the changes from the local to the remote repository:

git add . --all
git commit -m 'insert comments here'
git push


Checking objects in GitHub:

Commands:

git clone git@github.com:shobhit-singh/test-dev-repo.git
git status
git add . --all
git commit -m 'insert comments here'
git push

For creating a new repository from the command line, refer to this blog: GitHub Repository Creation – Command Line


Thanks!

Happy Learning! Your feedback would be appreciated!

GitHub Repository Creation – Command Line

In this blog we will explore how to create a new repository in GitHub using the API & add a new project to that repository.

For configuring git refer this blog: Exploring Git – Configuring GitHub Repo


Creating GitHub Repository:

Let's create an empty repository in GitHub using the API.

sudo apt install curl
curl -u 'username' https://api.github.com/user/repos -d '{"name":"first-project"}'
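
As a quick check, the same endpoint can be queried with a GET request to list the repositories of the authenticated user:

# list repositories of the authenticated user (the new repo should appear here)
curl -u 'username' https://api.github.com/user/repos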


An empty repository has been created in the GitHub account; copy the remote URL.


Creating Project Files:

I have created a project folder 'first-project' in the local environment, with one file inside it.


Initializing, Adding & Committing:

Initialize the new project using git init, add the project files to the staging area using git add, and commit them.
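
A minimal sketch of these commands (they are also listed in the Commands section below):

# initialize a new Git repository in the project folder
git init
# stage all project files
git add .
# record the first commit
git commit -m "Initial commit"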


Adding URL & Push:

Using the below commands you can push your local code to the new remote Git repository.
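
A sketch of these commands, matching the Commands section below (the remote URL is the SSH URL copied from GitHub):

# point the local repository at the new GitHub repository
git remote add origin git@github.com:shobhit-singh/first-project.git
# confirm the remote URL
git remote -v
# push the local commits to the remote master branch
git push origin master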

Check the files in the GitHub repository.

Commands:

 
$ sudo apt install curl
$ curl -u 'shobhit-singh' https://api.github.com/user/repos -d '{"name":"first-project"}'

$ git init
$ git add .
$ git commit -m "Initial commit"
$ git remote add origin git@github.com:shobhit-singh/first-project.git
$ git remote -v
$ git push origin master

 

To clone an existing repository, refer to this blog: Exploring Git – Configuring GitHub Repo


Thanks!

Happy Learning! Your feedback would be appreciated!

ODI 12C – Slowly Changing Dimensions

In this blog we will explore how to set up a Slowly Changing Dimension (SCD Type 2) in ODI 12c.

Slowly Changing Dimensions: attributes that change slowly over a period of time rather than changing frequently. In order to generate historical reports in a data warehouse, we need to track the history of the dimension.

Type 1: Used when there is no need to store historical data in the dimension table. The process overwrites the old data with the new data in the dimension table. This is a normal incremental update.

Type 2: Used when there is a need to store the entire history of the attribute in the dimension table in the form of separate rows. Using effective dates & a current flag you can identify which record is currently active.

Type 3: Used when there is a need to store the current as well as the previous value of the attribute. In this scenario two separate columns are required in the table to store the current & previous values.


Import the IKM: Oracle Slowly Changing Dimension


Step 1: Create the tables & reverse-engineer them in your model. I have created the below tables in the localhost database.

 

  CREATE TABLE "SRC_EMP_ASSIGNMENT"
   (	EMP_ID VARCHAR2(10 CHAR),
	ASSIGNMENT_ID VARCHAR2(150 CHAR)
   );
  CREATE TABLE "TRG_EMP_ASSIGNMENT"
   (  ROW_WID NUMBER,
      EMP_ID VARCHAR2(10 CHAR),
      ASSIGNMENT_ID VARCHAR2(150 CHAR),
      ASSIGNMENT_TYPE VARCHAR2(5 CHAR),
      CURRENT_FLAG CHAR(1 CHAR),
      EFF_START_DATE DATE,
      EFF_END_DATE DATE
   );
 CREATE SEQUENCE SEQ_TRG_EMP_ASSIGNMENT START WITH 1;

 

Step 2: Open the target data-store and change the OLAP type to “Slowly Changing Dimension“.


Step 3: Now you have to set the SCD behavior of the columns for the target SCD2 data-store.

  • Surrogate Key – A unique key for the table which does not have any business meaning.
  • Natural Key – The unique key from the source data.
  • Current Record Flag – Indicates whether the record is active or old.
  • Starting Timestamp – Indicates the starting date of that entry.
  • Ending Timestamp – Indicates the ending date of that entry.
  • Overwrite on Change – If you don't want to enable SCD2 on a column.
  • Add Row on Change – If you want to enable SCD2 on a column.

In this example, for target table : TRG_EMP_ASSIGNMENT

  • Surrogate Key – ROW_WID
  • Natural Key – EMP_ID
  • Current Record Flag – CURRENT_FLAG
  • Starting Timestamp – EFF_START_DATE
  • Ending Timestamp – EFF_END_DATE
  • Overwrite on Change – ASSIGNMENT_TYPE (we don't want to track history for this)
  • Add Row on Change – ASSIGNMENT_ID

Open the data-store attributes & select the SCD behavior for every column.


Step 4: Create the mapping. See below the hints set on the columns.


Executing Mapping 1st Time: Initial Run

Source Data:

Target: See the end dates of all the records; EFF_END_DATE defaults to 01-01-2400.

Executing Mapping 2nd Time: Incremental Run

Source Data:


Target: 


In the above screenshot you can notice that the previous assignment for EMP_ID 100 has been updated with CURRENT_FLAG=0 and its EFF_END_DATE has been set. A new assignment for EMP_ID 100 has been created with CURRENT_FLAG=1 and the default EFF_END_DATE of 01-01-2400.



Thanks!

Happy Learning! Your feedback would be appreciated!

ODI 12c – Setting CDC/Journalizing


In this blog we will see step by step how to set up change data capture (CDC)/journalizing in ODI 12c.

Change data capture, as the name suggests, is the process of identifying changes in the source data (whether records were inserted, updated or deleted in the source systems) and then applying those changes to the targets. We will explore here the complete process & the objects required to set up CDC in ODI 12c.

ODI  Journalizing Modes:

Simple – Captures changes for individual data-stores in a model; does not support referential integrity between data-stores.

Consistent Set – Also supports referential integrity between data-stores. A group of data-stores journalized in this mode is called a 'consistent set'.


Step 1. Import JKM:

Import the Journalizing Knowledge Model – JKM Oracle Simple


Create the model for your source data-store. Select the 'Journalizing Mode' as Simple & select the correct JKM from the drop-down. Refer to the screenshot below.


Reverse-engineer your tables and make sure each one has a key.

Step 2. Add Subscriber:

Subscriber: an application or process which consumes the captured change data. Once subscribers have consumed the captured changes, the changes are later discarded from the journals.

If you do not add any subscriber to your data-store, the default 'SUNOPSIS' subscriber will be used.


Step 3. Add to CDC:

In this step you add your data-store to the CDC process.


If you open your data-store and go to the Journalizing tab, you will see a message that the table is included in CDC. A clock icon will also appear on your data-store.


Step 4. Start Journal:

Start the journal and select the subscriber. Select the context & run. It will create all the objects through which change data is captured in the J$ tables.


The below objects are created in the database. Two views are created: the JV$ view and the JV$D view.

  • TABLES: J$ACCOUNTS_STAGE & SNP_SUBSCRIBERS
  • TRIGGER: T$ACCOUNTS_STAGE
  • VIEWS: JV$ACCOUNTS_STAGE & JV$DACCOUNTS_STAGE

Subscriber Table:

The subscriber for the ACCOUNTS_STAGE table is registered in SNP_SUBSCRIBERS.


J$ Table:

I have manually inserted 4 records into my staging table ACCOUNTS_STAGE. After the commit I noticed that 4 records were also inserted into my J$ table, i.e. J$ACCOUNTS_STAGE, with the consumed flag as 0, i.e. the records are not yet consumed. The J$ table J$ACCOUNTS_STAGE has only 5 columns; the last one is the key of the table.

  • JRN_SUBSCRIBER
  • JRN_CONSUMED
  • JRN_FLAG
  • JRN_DATE
  • ACCOUNT_ID


The above records are inserted by the trigger T$ACCOUNTS_STAGE on ACCOUNTS_STAGE.

Step 5. Journalizing Filter in Mapping :

You can set your subscriber here in the filter.

Step 6. Journalizing Data Only:

Tick the checkbox for 'Journalized Data Only'.

JV$ view:

JV$ACCOUNTS_STAGE is used in mappings where you select the option 'Journalized Data Only'. This view is used to insert the records into the I$ flow table.

Records from the J$ table (J$ACCOUNTS_STAGE) are filtered using the condition JRN_CONSUMED = '1'. If you have multiple records for one key, it will pick the latest entry of that key using JRN_DATE.

select L.JRN_SUBSCRIBER AS JRN_SUBSCRIBER,
       L.ACCOUNT_ID AS ACCOUNT_ID,
       max(L.JRN_DATE) AS JRN_DATE
from DEV_SCHEMA.J$ACCOUNTS_STAGE L
where L.JRN_CONSUMED = '1'
group by L.JRN_SUBSCRIBER, L.ACCOUNT_ID

Records in the J$ table are locked using the below SQL update statement. When you start the execution of the mapping, this is the first step.

update	DEV_SCHEMA.J$ACCOUNTS_STAGE
set	JRN_CONSUMED = '1'
where	(1=1)
AND JRN_SUBSCRIBER = 'SUNOPSIS'

JV$D View:

The data view is used to show the change data available for all subscribers in the J$ table, without any consumed filter, when you select the Journal Data menu option from the data-store.

I have manually inserted 1 new record & updated 1 record in my staging table ACCOUNTS_STAGE. Notice that JRN_FLAG comes as 'I' for the update case as well; the trigger code is written that way.


The list of changed data, if available, comes from the JV$D view (JV$DACCOUNTS_STAGE).

At the end of the mapping steps there is one step which cleans the journalized table:

delete from J$ACCOUNTS_STAGE
where 	JRN_CONSUMED = '1'
AND JRN_SUBSCRIBER = 'SUNOPSIS' ;

Thanks!

Happy Learning! Your feedback would be appreciated!

ODI 12c | Version Control System Setup – Git

In this blog we will see step by step how to set up version control in ODI 12c using Git. Git is an open-source version control system for tracking changes in source code during software development. I have used github.com to create a repository for versioning my ODI 12c objects.


Step 1: Create a repository using github.com.

Copy the Git URL for your repository.

https://github.com/shobhit-singh/odi12c-repo.git


Step 2: Open ODI Studio & change your "Version Control Management System" to Git; by default it is set to ODI.


Step 3: Set up Git: go to Settings & edit the connection.


Provide the Git URL of the GitHub repository. Change the authentication type to SSL.


Accept the server certificate.


Step 4: Clone the remote repository into a local folder.


Step 5: Click on Configure to set up the branch. I have selected the master branch & saved the settings.


Adding ODI Object First Time in VCS:

I have created the Model Folder, Sub-Model Folder, Model & Data-store in ODI.


Right-click on the Model Folder & select Add to VCS, as it is a new object being added for the first time.


You can also select the child objects if you want to move them as well. Add comments & click on OK.


You can check the ODI Objects in GitHub.


Re-Adding ODI Object in VCS:

I have changed the data-store 'TEST_TABLE' in ODI by adding a column to it. Now I have to create a new version of this object. See the steps below to create a new version.


Add comments & Click on OK.


Check the version in GitHub: now this object has 2 versions.



Thanks!

Happy Learning! Your feedback would be appreciated!

ODI 12c | Master & Work Repository Setup


In this blog we will see step by step how to set up the ODI 12c master & work repositories.


Prerequisite: First set up the repository schema in a database supported by ODI. I have created the schema in an Oracle database. Refer to this blog for creating a new schema in Oracle: Oracle Database 11g | Create User

CREATE USER ODI_REPO IDENTIFIED BY welcome;
GRANT CONNECT, RESOURCE, DBA TO ODI_REPO;
GRANT CREATE SESSION TO ODI_REPO;
GRANT DBA TO ODI_REPO;
GRANT ALL PRIVILEGES TO ODI_REPO;

You can create a separate schema for the master repository as well as the work repository, but in this blog we are going to use only one schema for both. Refer to this blog for more understanding: ODI Repositories | Master & Work Repository

Check out this blog to understand the relationship between the ODI master & work repositories.

Setup Master Repository

Open the ODI 12c Studio. Go to File -> New. Select “Create  New Master Repository“.


Step 1. Provide all the details for the repository connection, like schema name, password, JDBC details, DBA credentials, etc. See the screenshot below.

jdbc:oracle:thin:@localhost:1522/orcl.168.56.1


After providing all the details, test the connection. Click next.

Step 2: Select ODI authentication as the authentication mode, provide the supervisor password & note it down somewhere, as it is required for the login setup.


Click on Finish after providing the supervisor password. It will start the setup of the master repository, which will take a few minutes to complete.


The master repository has now been created. Next you have to create a login for the master repository; follow the steps below.


Setup Master Repository Login

Step 1: Open the ODI 12c Studio. Go to File -> New. Select “Create  New ODI Repository Login“.


Step 2. Provide the details like the login name (provide any name), the supervisor name & the supervisor password you provided at the time of repository setup.

For the database connection, provide the master repository schema details. Select the master repository radio button and test the connection.


Click on 'OK'. You are done with the master repository login setup. Now you have to log in to your master repository & create the work repository inside it.

In ODI 12c Studio, click on connect repository.

The below screen will come up with all data pre-populated; you just have to click 'OK'. Now you are inside the master repository. Check the repository details under ODI -> 'Repository Connection Information'.


You have now logged in to the ODI master repository. Notice that the Designer navigator is disabled by default. Basically you cannot do development work in the master repository; you have to create a work repository under this master to start the development work.

Now you have to set up a work repository under this master repository; follow the steps below.


Setup Work Repository:

Check out this blog to understand the relationship between the ODI master & work repositories.

Go to Topology Navigator, Repositories -> New Work Repository


Notice that I have provided the same schema here; you can create a separate schema for the work repository, which is recommended.
work2

Provide the password for the work repository & select the 'Development' type, as you are going to develop ETL in this work repository. For production & UAT we generally select 'Execution' as the work repository type, where the Designer navigator is disabled by default. It means you cannot develop objects in an Execution-type work repository.


Click on Finish.


It will ask you to set up the login for the work repository. Click on Yes & provide the login name.


The work repository setup & login are now done. Disconnect from the master repository and log in to the work repository.

Check the repository details ODI->’Repository Connection Information‘.

Notice that the Designer navigator is not disabled here. As this is a development-type work repository, you can start the development work by creating a new project.

Dataserver, Physical Schema, Logical Schema & Context | Oracle Data Integrator 11g

Model & Datastore| Creating and Reverse-Engineering| Oracle Data Integrator 11g



Thanks!

Happy Learning! Your feedback would be appreciated!

Oracle Database 12c Installation & Setup| Windows 10


In this blog we will see how to set up Oracle Database 12c on a Windows 10 64-bit PC. First you have to download the latest Oracle Database 12c using the below link.

https://www.oracle.com/database/technologies/oracle12c-windows-downloads.html


I have downloaded the Oracle Database 12c (12.2.1.4.0) version.

Go to the database directory & start the setup. It will automatically start the Oracle Universal Installer.

  • Creating the database as a container database.
  • Setup is in progress.
  • Checking the status of the database after installation using lsnrctl status.
  • Oracle Database 12c is now up & running.
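
A quick way to check the listener and connect to the database from a command prompt (assuming the Oracle binaries are on your PATH) is to run lsnrctl status and then connect locally as SYSDBA:

lsnrctl status
sqlplus / as sysdba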

Refer to this blog for creating a pluggable database & schema: Container & Pluggable Database – Oracle 12c


Thanks!

Happy Learning! Your feedback would be appreciated!

PL/SQL Scenario Questions – Part 1


In this blog we will explore some PL/SQL – Scenario questions.

Visit Part 2: PL/SQL Scenario Questions – Part 2


Scenario 1. Write a PL/SQL program to demonstrate the overloading concept of OOP in PL/SQL.

Solution 1: See the below example of the PL/SQL package PKG_TEST; it has two procedures. Both procedures have the same name & the same input parameter name, but the parameter types are different. This is an example of overloading in PL/SQL.

CREATE OR REPLACE PACKAGE PKG_TEST
AS
  PROCEDURE PRC_SAMPLE(INPUT_ID IN VARCHAR2);
  PROCEDURE PRC_SAMPLE(INPUT_ID IN NUMBER);
END PKG_TEST;
/
CREATE OR REPLACE PACKAGE BODY PKG_TEST
AS
  PROCEDURE PRC_SAMPLE(INPUT_ID IN VARCHAR2)
  IS
  BEGIN
   DBMS_OUTPUT.PUT_LINE ('Calling VARCHAR2 procedure');
  END PRC_SAMPLE;
  PROCEDURE PRC_SAMPLE(INPUT_ID IN NUMBER)
  IS
  BEGIN
    DBMS_OUTPUT.PUT_LINE ('Calling number procedure');
  END PRC_SAMPLE;
END PKG_TEST;
/

Scenario 2. Write a PL/SQL block which raises a user-defined exception if the day is a weekend.

SET SERVEROUTPUT ON;
DECLARE
exp_weeknds EXCEPTION;
BEGIN
IF TO_CHAR(SYSDATE, 'DY')IN ('SUN','SAT') THEN
raise exp_weeknds;
ELSE
DBMS_OUTPUT.put_line( 'Record processed!');
END IF;
EXCEPTION
WHEN exp_weeknds THEN
dbms_output.put_line('Cannot process the record as its weekend!');
END ;
/

Scenario 3. Write a PL/SQL program to demonstrate the hiding concept of OOP in PL/SQL.

Solution 3: See the below example of the PL/SQL package PKG_TEST; it has two procedures in its body but only one is declared in the package spec. The one which is not declared but is present in the body is hidden from the outside world & can be used internally only.

CREATE OR REPLACE PACKAGE PKG_TEST
AS
PROCEDURE PRC_TEST(INPUT_ID IN VARCHAR2);
END PKG_TEST;
/
CREATE OR REPLACE PACKAGE BODY PKG_TEST
AS
PROCEDURE PRC_INTERNAL(INPUT_ID IN VARCHAR2)
IS
BEGIN
-- This procedure is not declared in package specification.
DBMS_OUTPUT.PUT_LINE ('Calling PRC_INTERNAL procedure');
END PRC_INTERNAL;
PROCEDURE PRC_TEST(INPUT_ID IN VARCHAR2)
IS
BEGIN
DBMS_OUTPUT.PUT_LINE ('Calling PRC_TEST procedure');
PRC_INTERNAL('100');
END PRC_TEST;
END PKG_TEST;
/

Now try calling the procedure which was not declared in the package specification.

EXEC PKG_TEST.PRC_INTERNAL('100');


Calling the procedure PRC_TEST, which calls the internal procedure PRC_INTERNAL:

Scenario 4. Can we write a COMMIT in a PL/SQL exception block?

Solution 4: Yes, but it is not recommended to write the COMMIT there. If you want to log the exception, you can use a pragma autonomous transaction instead. The below example shows that we can use COMMIT in an exception block.

CREATE TABLE err_logs
(
error_number  VARCHAR2(25),
error_message VARCHAR2(1000)
);

DECLARE
lv_err_code VARCHAR2(25);
lv_err_msg  VARCHAR2(1000);
ln_num      NUMBER;
BEGIN
SELECT to_number('X') INTO ln_num FROM dual;
EXCEPTION
WHEN OTHERS THEN
dbms_output.put_line('Writing exception logs!');
lv_err_code := SQLCODE;
lv_err_msg  := SQLERRM ;
INSERT INTO err_logs(error_number,error_message)
VALUES(lv_err_code,lv_err_msg);
COMMIT;
END;
/

Scenario 5. What is pragma autonomous transaction?

Solution 5: It fires a transaction which is independent of the calling transaction and returns to the calling transaction without affecting its state.

CREATE OR REPLACE PROCEDURE prc_log_errors (lv_err_code  IN  VARCHAR2,lv_err_msg IN VARCHAR2) AS
PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
INSERT INTO err_logs(error_number,error_message)
VALUES(lv_err_code,lv_err_msg);
COMMIT;
END prc_log_errors;
/

In the below example, DML is performed before the exception occurs. Since we call the procedure declared with the autonomous-transaction pragma, which has a COMMIT in it, the exception is logged while the data of my main transaction is still rolled back.

CREATE TABLE test_table (TEST_ID NUMBER);

DECLARE
lv_err_code VARCHAR2(25);
lv_err_msg  VARCHAR2(1000);
ln_num      NUMBER;
BEGIN
INSERT INTO test_table(TEST_ID)
VALUES(1);
INSERT INTO test_table(TEST_ID)
VALUES(2);
SELECT to_number('X') INTO ln_num FROM dual;
commit;
EXCEPTION
WHEN OTHERS THEN
dbms_output.put_line('Writing exception logs!');
lv_err_code := SQLCODE;
lv_err_msg  := SQLERRM ;
prc_log_errors(lv_err_code,lv_err_msg);
rollback;
END;
/

Scenario 6. Can you fire DDL statements in a function?

Solution 6: Yes, using dynamic SQL, but you cannot then use such a function in a SQL statement. See the below example.

CREATE OR REPLACE FUNCTION FNC_SQUARE(
INPUT_ID IN NUMBER)
RETURN NUMBER
IS
ln_out NUMBER;
BEGIN
BEGIN
EXECUTE immediate 'drop table TEST_TABLE';
dbms_output.put_line('Table dropped');
EXCEPTION
WHEN OTHERS THEN
NULL;
END;
EXECUTE immediate 'CREATE TABLE TEST_TABLE (TEST_ID NUMBER)';
dbms_output.put_line('Table created');
ln_out:= INPUT_ID*INPUT_ID;
RETURN ln_out;
END;
/

Now calling that function in a SQL statement:

The same applies to DML: you can use it in a function, but you cannot call that function from a SQL statement. I have tried calling a function from SQL which contains DML & a COMMIT.

Scenario 7. Can you call a procedure from a SQL statement?

Solution 7: No

CREATE OR REPLACE PROCEDURE PRC_SQUARE(
INPUT_NUM IN NUMBER,OUTPUT_NUM OUT NUMBER )
IS
BEGIN
OUTPUT_NUM:= INPUT_NUM*INPUT_NUM;
END;
/

SELECT PRC_SQUARE(2) FROM dual;

Call procedure using below method.

declare
  ln_out NUMBER;
begin
  PRC_SQUARE(6, ln_out);
  dbms_output.put_line('Square is: ' || ln_out);
end;
/

A function you can use in SQL statements, as it always has a return type.

create or replace FUNCTION FNC_SQUARE(
INPUT_ID IN NUMBER)
RETURN NUMBER
IS
ln_out NUMBER;
BEGIN
ln_out:= INPUT_ID*INPUT_ID;
RETURN ln_out;
END;
/

Thanks!

Happy Learning! Your feedback would be appreciated!

ODI 12C Installation & Setup | Windows 10 64-Bit

In this blog we will see how to set up ODI 12c on your Windows 10 64-bit PC. First you have to download the latest ODI 12c using the below link. I have downloaded the Oracle Data Integrator 12c (12.2.1.4.0) version.

Step 1. Let's begin the installation by downloading & unzipping the file.

Oracle Data Integrator Download Link


Step 2. Prerequisite: Check your Java version & set the Java environment variables.

java -version
set JAVA_HOME="C:\Program Files\Java\jdk1.8.0_221"
set path=C:\Program Files\Java\jdk1.8.0_221\bin;%path%

Step 3. Start the installation using below commands:

cd C:\Users\shobhit\Downloads\fmw_12.2.1.4.0_odi_Disk1_1of1
java -jar fmw_12.2.1.4.0_odi.jar


It will automatically start the installation.


The installation steps are self-explanatory; you just have to check all the details like the path & click on Next.


Installation is complete. Go to the Windows Start menu, type ODI Studio and open the tool.

Step 4. Set up the master repository, work repository & logins. Refer to the below blog to set up the master repository, work repository & logins.

Oracle Data Integrator 11g (ODI) Repositories | Master & Work Repository | Connection & Setup


Repository configuration is done & now you can start building projects using ODI 12c.

If you encounter an issue while opening ODI 12c Studio, please follow the below steps.

Delete all the temp files present in the below directory & try opening the ODI studio again.

C:\Users\<<Username>>\AppData\Roaming\odi\system12.2.1.4.0


Thanks!

Happy Learning! Your feedback would be appreciated!