Exploring Docker

Problem Statement: Let's say we have to set up multiple applications (App 1, App 2 & App 3) on one machine, and all of these applications require different versions of software/library dependencies. How can we set up the applications so that there is no conflict between the library versions they need?

In earlier times the approach was to set up a different physical machine for each application. Following are some disadvantages:

  • Huge cost involved in setting up separate machines.
  • Wastage of resources on the individual machines.
  • Difficult to scale & manage the applications.

Old Approach: Set up VMs (Virtual Machines) on top of one host machine & run the applications separately. See the diagram below.

In this kind of hypervisor-based virtualization model each VM has its own OS, i.e. separate resources are allocated to every VM (RAM, HDD and CPU). Following are some disadvantages:

  • The entire OS loads first and only then the app starts, so boot time is slow.
  • VMs are generally huge in size (GBs).
  • Wastage of unused resources inside each VM.
  • Deployment is not easy.

Latest Approach: Containerization – the process of packaging an application with its required files and libraries/dependencies so that it runs efficiently in an isolated user space called a container.

It's a form of OS virtualization where containers share the host OS kernel. Containers are lightweight as they hold only the required files & libs & consume resources only when required.


What is Docker?

Docker is a container-based technology which enables developers to create, run & deploy applications as efficient, lightweight containers. You create the containers using Docker images, i.e. read-only templates. You can get images for operating systems, programming languages, databases, DevOps tools etc.

Visit Docker Hub for more details.

Containers share the host OS kernel

Docker Installation:

Docker is an easy-to-install application, available for a variety of Linux distributions, macOS & Windows. There are several methods to install Docker on your machine; you can refer to all of them at docs.docker.com/

In this blog we will install Docker using the convenience shell script on an Ubuntu box.

curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh

Check the Docker version after installation using docker -v or docker --version:
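For example (the reported version will vary with your installation):

docker -v
docker --version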

Testing Docker Installation:

Docker is now installed on the system; let's run our first container to test the installation using the official Docker image "hello-world". You can refer to the complete documentation of this image here.

Pulling the 'hello-world' image from Docker Hub. You can check all downloaded images in your local repository using docker images.

Now we have the hello-world image present in the local repository. We can use the docker run command to launch a container from this image.

Listing all the containers using the docker ps -a command. The commands used in this test are summarised below.
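A quick recap of the test, step by step:

docker pull hello-world   # download the image from Docker Hub
docker images             # list images in the local repository
docker run hello-world    # launch a container from the image
docker ps -a              # list all containers, including stopped ones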

We are able to test the Docker installation by running the first container successfully. Let's move to the next step and create custom images containing our own code.


Creating Custom Docker Image:

There are many options to create a custom Docker image. Here we are going to use a Dockerfile to create multiple custom images.

Here I'm going to create two custom images using the Python 2.7 & 3.8 base images and will provide a common Python script for both apps. Later we will create containers from these images.

I have created one Dockerfile; first I'm using the Python 2.7 image as the base.
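A minimal Dockerfile along these lines would do the job (the script name app.py is just a placeholder for the sample script below):

# Dockerfile
# Use the official Python 2.7 image as the base
FROM python:2.7
# Copy the sample script into the image
COPY app.py /app/app.py
# Run the script when the container starts
CMD ["python", "/app/app.py"]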

Sample python script:
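Any small script that reports the interpreter version works here; a sketch (saved as app.py, matching the Dockerfile above) that runs on both Python 2.7 and 3.8:

# app.py - prints the Python version so we can see which interpreter ran inside the container
import sys
print("Hello from Python " + sys.version)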

Creating the custom image for Python 2.7 using the docker build command:
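Run from the directory containing the Dockerfile and app.py, the build command looks roughly like this:

docker build -t python2.7app .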

python2.7app custom image is created

After this I have edited the Dockerfile & changed the Python base image version to 3.8.

Creating the second custom Docker image for Python 3.8:
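Again a rough sketch of the build command after switching the base image to python:3.8:

docker build -t python3.8app .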

python3.8app custom image is created

Checking all the Docker images using docker images: we now have our own custom images, python2.7app & python3.8app.

Launching Docker Containers:

Containers are created from the two images & the code is executed in each. A sketch of the commands is shown below.
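For example (the container names app27 and app38 are arbitrary choices):

docker run --name app27 python2.7app   # runs app.py under Python 2.7
docker run --name app38 python3.8app   # runs app.py under Python 3.8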

Checking all the containers using the docker ps -a command.

Check the details of a container such as IP, ports etc. using the docker inspect command.
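For example (the format template below pulls just the IP address of a container attached to the default bridge network):

docker inspect <container-id>
docker inspect --format '{{ .NetworkSettings.IPAddress }}' <container-id>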

Pushing custom images to Docker Hub:

First you have to sign up at Docker Hub: hub.docker.com. You will get your Docker ID, i.e. your username.

docker tag <image-id> username/<image-name>

docker login

# provide your docker hub credentials

To push the image:

docker push username/<image-name>

You can check your image at hub.docker.com.

Removing the Docker Images/Containers:

# Remove all stopped containers 
docker rm $(docker ps -a -q)
# Remove single container 
docker rm <container-id>
# Remove single image
docker rmi <image-id>

Thanks!

Happy Learning! Your feedback would be appreciated!

Customer-Orders Sample Schema | Oracle 12c

Customer Orders is a new sample schema for Oracle Database launched in Aug'19; it's a simple retail application database. You can use it for learning SQL, writing blog posts or any kind of application demo. In this schema you will also get to know the usage of JSON in Oracle Database.

Below is the relational diagram of CO schema created using Oracle Data Modeler.


The Customer Orders schema requires Oracle Database 12c or a higher version. Refer to the blogs below for installing Oracle Database 12c & setting up the pluggable database.

Installing CO schema

Download the schemas from the GitHub link: db-sample-schemas. After downloading the schemas you have to run the SQL below:

@co_main <CO_password> <connect string> <tablespace> <temp tablespace>

I have created the tablespaces in my pluggable database, which are required for running the co_main script for schema creation.

CREATE SMALLFILE TABLESPACE "USERS" DATAFILE 'C:\APP\SHOBHIT\VIRTUAL\ORADATA\ORCL\PDB\USERS01.dbf' SIZE 100M AUTOEXTEND ON NEXT 100M ;
CREATE TEMPORARY TABLESPACE TEMP TEMPFILE 'C:\APP\SHOBHIT\VIRTUAL\ORADATA\ORCL\PDB\TEMP1' SIZE 2G;

Now clone the Oracle sample schemas from GitHub, or download the ZIP extract from GitHub and unzip it, as sketched below.
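A minimal clone command for the official repository:

git clone https://github.com/oracle/db-sample-schemas.git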

Executing the co_main script under my pluggable database.
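Filled in with example values following the usage shown above (all of these are placeholders; use your own password, connect string and tablespaces):

@co_main MyPassword1 localhost:1521/pdb USERS TEMP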

This command will first drop the CO schema & then create it; after that it runs the DDL script & the DML script. You can check the script logs in the co_install txt file in the same directory where your schema scripts are present.

Connecting to the CO schema using SQL Developer.

Checking out the JSON data in the PRODUCTS table:
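For example, a quick look at the raw JSON column (the column names match those used in the JSON_TABLE queries later in this post):

SELECT product_name, product_details FROM products;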

Sample JSON data for one of the products:

{
    "colour": "green",
    "gender": "Women's",
    "brand": "FLEETMIX",
    "description": "Excepteur anim adipisicing aliqua ad. Ex aliquip ad tempor cupidatat dolore ipsum ex anim Lorem aute amet.",
    "sizes": [0, 2, 4, 6, 8, 10, 12, 14, 16, 18, 20],
    "reviews": [{
        "rating": 8,
        "review": "Laborum ipsum adipisicing magna nulla tempor incididunt."
    }, {
        "rating": 10,
        "review": "Cupidatat dolore nulla pariatur quis quis."
    }, {
        "rating": 9,
        "review": "Pariatur mollit dolor in deserunt cillum consectetur."
    }, {
        "rating": 3,
        "review": "Dolore occaecat mollit id ad aliqua irure reprehenderit amet eiusmod pariatur."
    }, {
        "rating": 10,
        "review": "Est pariatur et qui minim velit non consectetur sint fugiat ad."
    }, {
        "rating": 6,
        "review": "Et pariatur ipsum eu qui."
    }, {
        "rating": 6,
        "review": "Voluptate labore irure cupidatat mollit irure quis fugiat enim laborum consectetur officia sunt."
    }, {
        "rating": 8,
        "review": "Irure elit do et elit aute veniam proident sunt."
    }, {
        "rating": 8,
        "review": "Aute mollit proident id veniam occaecat dolore mollit dolore nostrud."
    }]
}

Parsing the JSON data using JSON_TABLE:

select p.product_name, j.gender, j.colour,j.brand,j.sizes
from   products p,
       json_table (
         p.product_details, '$'
         columns ( 
             colour varchar2(4000) path '$.colour',
             gender varchar2(4000) path '$.gender',
             brand  varchar2(4000) path '$.brand',
             sizes varchar2(4000) FORMAT JSON path '$.sizes'
))j;
SELECT p.product_name,  j.value
FROM products p,
  json_table ( p.product_details, '$.sizes[*]' 
columns ( value PATH '$' ) 
)j;

For more details please visit: announcing-a-new-sample-schema-customer-orders


Thanks!

Happy Learning! Your feedback would be appreciated!

Container & Pluggable Database | Oracle 12c

In this blog we will explore the container database concept in Oracle Database 12c. We are going to create a pluggable database manually & a schema under it.


I have installed Oracle Database 12c on Windows 10 & logged in using sys as dba. Refer to this blog for installing & setting up Oracle Database 12c: Oracle Database 12c Installation & Setup | Windows 10

In the connected session, V$DATABASE shows the database which was created at the time of installation of Oracle Database 12c. If you refer to the screenshot below, the column value CDB='YES', i.e. it is a container database (CDB).

v$database output
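A minimal query to check this yourself:

SELECT name, cdb, con_id FROM v$database;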

The multitenant architecture enables an Oracle database to function as a multitenant container database (CDB). All Oracle databases before 12c were non-CDBs.

A Container Database (CDB) has the following containers in it:

  • Root: the container named CDB$ROOT. It holds the Oracle-supplied metadata & is the main database holding all control files, redo logs etc.
  • Seed PDB: named PDB$SEED. A system-supplied template that the CDB can use to create new PDBs.
  • User-created PDB: a pluggable database created by the user for business requirements. The actual schemas for code & data are created here.

You can run the command below to check which container you are connected to. I'm running this command in my sys as dba connection. Here it shows that I am connected to CDB$ROOT, i.e. the root container.

sho con_name;
sho con_name output

Create Pluggable Database :

Here we will see how we can create a pluggable database under our root container. See below a sample SQL for creating the pluggable database manually. You have to provide the DB name and an admin user name, i.e. the user which will be the admin of your PDB.

You can also create a PDB using DBCA (Database Configuration Assistant); just search for it in Windows.

CREATE PLUGGABLE DATABASE <database-name> ADMIN USER <username> IDENTIFIED BY <password>
DEFAULT TABLESPACE USERS
DATAFILE '<location>'
SIZE <size> AUTOEXTEND ON
FILE_NAME_CONVERT=(
'<location of pdbseed pdb>',
'<location of new pdb>');
Pluggable Database Setup

I have created the pluggable database using my sys connection, i.e. from the root. You can check the data files created in the location you provided.

Datafiles

After creating the pluggable database, its initial status should be MOUNTED. Check the data in v$pdbs.

v$pdbs output
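You can check this with a query like:

SELECT name, open_mode FROM v$pdbs;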

You have to run the command below to put your pluggable database in read/write mode.

alter pluggable database PDB open;
v$pdbs output

Now your pluggable database is altered & it is showing "READ WRITE" mode. You can also check the status of the PDBs using sho pdbs;

Creating Pluggable Database Schema:

In this step we are going to create a database schema under our pluggable container. We just need to switch the session to that container.

alter session set container=PDB;
sho con_name;

CREATE USER DEV_SCHEMA IDENTIFIED BY DEV_SCHEMA;
GRANT CONNECT, RESOURCE, DBA TO DEV_SCHEMA;
GRANT CREATE SESSION TO DEV_SCHEMA;
GRANT ALL PRIVILEGES TO DEV_SCHEMA;

Setting up SQL Developer Connection:

Refer this blog for setting up the new sample schema ‘Customer-Orders’ : Customer-Orders Sample Schema | Oracle 12c


Thanks!

Happy Learning! Your feedback would be appreciated!

Oracle SQL – Scenario Questions Part 4

In this blog we will explore some more Oracle SQL scenario questions. This time we will cover more about indexes.


Scenario 1: We create the table T_ACCOUNT, insert some data & create an index on three columns (ACCOUNT_ID, ACCOUNT_TYPE, ACCOUNT_STATUS).

Now, running the query below, will the index be picked up by the Oracle optimizer or not?

SELECT * FROM T_ACCOUNT where ACCOUNT_TYPE ='C' and ACCOUNT_STATUS='O';
CREATE TABLE T_ACCOUNT
  ( account_id     NUMBER ,
    account_name   VARCHAR2(100) ,
    account_type   VARCHAR2(10) ,
    account_status CHAR(1),
    open_date      DATE );
CREATE INDEX IDX_ACCOUNT_1 ON T_ACCOUNT(ACCOUNT_ID,ACCOUNT_TYPE,ACCOUNT_STATUS);

Explanation: The order of the columns in the index makes a difference. IDX_ACCOUNT_1 is most useful when the leading column ACCOUNT_ID appears in the predicate, as in the queries below. If the leading column is missing, the optimizer typically falls back to a full table access.

SELECT * FROM T_ACCOUNT where ACCOUNT_ID=100 and ACCOUNT_TYPE ='C' and ACCOUNT_STATUS='O';
SELECT * FROM T_ACCOUNT where ACCOUNT_ID=100 and ACCOUNT_TYPE ='C';
SELECT * FROM T_ACCOUNT where ACCOUNT_ID=100;

Example: here I have used the columns as per the order of the index, and the table is accessed via the index.

EXPLAIN PLAN FOR SELECT * FROM T_ACCOUNT where ACCOUNT_ID=100 and ACCOUNT_TYPE ='C';
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

Now using columns that do not include the leading index column; here the table is accessed with a full scan.

EXPLAIN PLAN FOR SELECT * FROM T_ACCOUNT where ACCOUNT_TYPE ='C' and ACCOUNT_STATUS='O';
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

Scenario 2: Can you create a unique index on a column that already has some duplicate records?

Explanation: No. The CREATE UNIQUE INDEX statement fails with ORA-01452 (cannot CREATE UNIQUE INDEX; duplicate keys found) when duplicate values already exist in the column, as shown in the sketch below.
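A small illustration (the table name T_DUP is only for this example):

CREATE TABLE T_DUP ( ID NUMBER );
INSERT INTO T_DUP VALUES (1);
INSERT INTO T_DUP VALUES (1);
CREATE UNIQUE INDEX IDX_T_DUP ON T_DUP(ID); -- fails with ORA-01452: cannot CREATE UNIQUE INDEX; duplicate keys found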

Scenario 3: We create a table with a primary key, so Oracle creates a default index on that column. What will happen if you then try to create a unique index on the same column?

Explanation: You cannot create an index on an already indexed column list; Oracle raises ORA-01408 (such column list already indexed), as illustrated below.
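A short illustration (the table name T_PK is only for this example):

CREATE TABLE T_PK ( ID NUMBER PRIMARY KEY );  -- Oracle creates an index on ID automatically
CREATE UNIQUE INDEX IDX_T_PK ON T_PK(ID);     -- fails with ORA-01408: such column list already indexed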

Instead, you can first create the index & then enforce the constraint.

DROP TABLE T_SAMPLE;
CREATE TABLE T_SAMPLE
  ( ID NUMBER ,
  DATA_SOURCE NUMBER
  );
CREATE UNIQUE INDEX IDX_T_SAMPLE ON T_SAMPLE(ID);

ALTER TABLE T_SAMPLE ADD CONSTRAINT PK_T_SAMPLE PRIMARY KEY(ID);

You can, however, create an index on a subset of the columns when the primary key spans more than one column.

CREATE TABLE T_SAMPLE
  ( ID NUMBER ,
  DATA_SOURCE NUMBER,
  CONSTRAINT T_SAMPLE_pk PRIMARY KEY (ID,DATA_SOURCE)
  );
CREATE INDEX IDX_T_SAMPLE ON T_SAMPLE(ID);

Scenario 4: We create a table & an index on the two columns ID and CODE. In a query I'm using a condition like ID||CODE = '100E'. Will the optimizer pick up the index? If not, how do we solve this, since sometimes we have to write conditions like this?

CREATE TABLE T_SAMPLE ( ID NUMBER, CODE VARCHAR2(1));
CREATE INDEX T_SAMPLE_IDX ON T_SAMPLE (ID,CODE);

INSERT INTO T_SAMPLE
SELECT level,
  DECODE(mod(level, 2), 0, 'E', 'O')
FROM dual
  CONNECT BY level <= 100000;
BEGIN
  dbms_stats.gather_table_stats(USER, 'T_SAMPLE');
END;
/

select * from T_SAMPLE where Id =100 and code='E';-- Index Picked
select * from T_SAMPLE where Id||code='100E';-- Index Not Picked

Explanation: The optimizer will not pick the index in this case. You have to create a function-based index.

After creating the function-based index:

CREATE INDEX T_SAMPLE_IDX2 ON T_SAMPLE (ID||CODE);
SELECT * FROM T_SAMPLE WHERE Id||code='100E' ;

Scenario 5: See the scenario below. The column is of NUMBER datatype & while querying you have provided the value as a string. Will the index be picked by the optimizer?

DROP TABLE T_SAMPLE;
CREATE TABLE T_SAMPLE( ID NUMBER);
CREATE UNIQUE INDEX T_SAMPLE_IDX ON T_SAMPLE  (ID);

INSERT INTO T_SAMPLE
SELECT level
FROM dual
  CONNECT BY level <= 100000;
BEGIN
  dbms_stats.gather_table_stats(USER, 'T_SAMPLE');
END;
/
select * from T_SAMPLE where Id =100 ; -- Index Picked
select * from T_SAMPLE where Id =to_char('100'); -- Index Picked

The index still gets picked up by the optimizer: when a NUMBER column is compared with a character value, Oracle implicitly converts the character value, not the indexed column, so the index on ID remains usable.

Scenario 6: Now the opposite case. The column is of VARCHAR2 datatype & while querying you have provided the value as a number, without quotes. Will the index be picked by the optimizer?

DROP TABLE T_SAMPLE;

CREATE TABLE T_SAMPLE( ID VARCHAR2(4000));
CREATE UNIQUE INDEX T_SAMPLE_IDX ON T_SAMPLE (ID);

INSERT INTO T_SAMPLE
SELECT level
FROM dual
  CONNECT BY level <= 100000;

BEGIN
  dbms_stats.gather_table_stats(USER, 'T_SAMPLE');
END;
/

select * from T_SAMPLE where Id ='100'; -- Index Picked
select * from T_SAMPLE where Id = 100 ; -- Table Access Full

Here the second query forces Oracle to apply TO_NUMBER to the indexed VARCHAR2 column itself, so the index cannot be used and the table is fully scanned.

Visit below blogs for other SQL Scenario questions.


Thanks!

Happy Learning! Your feedback would be appreciated!

Exploring Kafka

In this blog we will explore the basics of Kafka, an open-source distributed streaming platform originally developed at LinkedIn & donated to the Apache Software Foundation.

Kafka is generally used for building real-time streaming data applications, so that you can stream live data from source systems without any delay. Following are some capabilities of Kafka:

  • Publishing streams of records.
  • Storing streams of records in a fault-tolerant way.
  • Subscribing to/consuming streams of records.

Download Kafka:

Download the latest Kafka from kafka.apache.org and un-tar it.
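A rough sketch of the commands (the exact archive name depends on the version you download):

$ tar -xzf kafka_<version>.tgz
$ cd kafka_<version>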


Start Zookeeper Server:

First you have to start the ZooKeeper server, as it is used by Kafka.

$ bin/zookeeper-server-start.sh config/zookeeper.properties


Once your ZooKeeper server has started, leave that terminal open & do not close it. Just note down the port number (2181 by default).

Start kafka-server:

Now that the ZooKeeper server is up and running, you can start the Kafka server.

$ bin/kafka-server-start.sh config/server.properties


Once your Kafka server has started, leave that terminal open & do not close it.

Create Topic:

A topic is where producer applications push streams of records; a producer can push records to multiple topics. Let's create a sample topic for the demo.

$ bin/kafka-topics.sh --zookeeper localhost:2181 --create --topic firsttopic --partitions 1 --replication-factor 1


List all topics:

$ bin/kafka-topics.sh --list --zookeeper localhost:2181

You can delete topics using:

$ bin/kafka-topics.sh --delete --zookeeper localhost:2181 --topic <<topic-name>>

Produce Data:

Send streams of records to a Kafka topic using the console producer.

$ bin/kafka-console-producer.sh --broker-list localhost:9092 --topic firsttopic


Consuming Data:

Receive streams of records from a Kafka topic using the console consumer.

$ bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic firsttopic --from-beginning

In real-time applications you have to integrate the Kafka APIs for producing & consuming data. Kafka has five core APIs:

  • Producer API  – To publish a stream of records to one or more Kafka topics
  • Consumer API – To subscribe to one or more topics
  • Streams API – To convert input stream into output stream
  • Connector API – Used for reusable connectors, external data source setup
  • Admin API  – For managing Kafka objects

Refer to kafka.apache.org/intro for more details.


Thanks!

Happy Learning! Your feedback would be appreciated!

GitHub Repository – Configuring/Cloning

In this blog we will explore the basics of a Git repository. Git is an open-source version control system for tracking changes in source code during software development. I have used github.com/ to create a repository for object versioning. We will explore the topics below in this blog.

  • Installing Git on Linux & Windows and configuring it.
  • Generating an SSH key for authenticating with the GitHub account.
  • Cloning an existing GitHub repository into the local environment.
  • Pushing changes from the local cloned repository to the remote repository.

Install git:

  • To install Git on Linux: sudo apt-get install git
  • For Windows you can download it from git-scm.com. Check the version after installation using git --version.


The configuration below is required.

git config --global user.name "<<username>>"
git config --global user.email "<<email>>"

Generating SSH keys:

You can set up SSH keys so that you don't have to provide your username & password for the Git repository every time you push changes. Refer to help.github.com for more details.

The commands below create the public & private SSH keys on your system and add the key to the SSH agent.

$ ssh-keygen -t rsa -b 4096 -C "your_email@example.com"

$ ssh-add ~/.ssh/id_rsa

Key for Linux & Windows:


The public key is present in the id_rsa.pub file; you need to copy it & add it in your GitHub settings. You can have multiple keys in a GitHub account.


Cloning existing git-hub repository:

You can clone a remote repository into your local environment. After that you can make changes & later push them back to the remote location.

I created a repository using github.com/.


Repository created: I have some files in it. Copy the SSH link so that we can clone the repo.


git clone git@github.com:shobhit-singh/test-dev-repo.git

After making some changes in any file you can check the status with git.

git status


Pushing local changes to remote repository:

To push the changes from the local to the remote repository:

git add . --all
git commit -m 'insert comments here'
git push


Checking object in git-hub:


Commands:

git clone git@github.com:shobhit-singh/test-dev-repo.git
git status
git add . --all
git commit -m 'insert comments here'
git push

For creating new repository from command line refer this blog: GitHub Repository Creation – Command Line


Thanks!

Happy Learning! Your feedback would be appreciated!

GitHub Repository Creation – Command Line

In this blog we will explore how we can add a new repository in GitHub using the API & add a new project to that repository.

For configuring git refer this blog: Exploring Git – Configuring GitHub Repo


Creating GitHub Repository:

Let's create an empty repository in GitHub using the API.

sudo apt install curl
curl -u 'username' https://api.github.com/user/repos -d '{"name":"first-project"}'


An empty repository has been created in the GitHub account; copy the remote URL.


Creating Project Files:

I have created the project folder 'first-project' in the local environment, and inside it one file is created.


Initializing, Adding & Committing:

Initializing the new project using git init, adding the project to the stage using git add & committing it, as shown below.
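The commands for this step (also listed in the summary at the end of this post):

$ git init
$ git add .
$ git commit -m "Initial commit"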


Adding the remote URL & pushing:

Using the commands below you can push your local code to the new remote Git repository.
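Again taken from the command summary at the end of this post:

$ git remote add origin git@github.com:shobhit-singh/first-project.git
$ git remote -v
$ git push origin master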


Check the files in the GitHub repository.


Commands:

 
$ sudo apt install curl
$ curl -u 'shobhit-singh' https://api.github.com/user/repos -d '{"name":"first-project"}'

$ git init
$ git add .
$ git commit -m "Initial commit"
$ git remote add origin git@github.com:shobhit-singh/first-project.git
$ git remote -v
$ git push origin master

 

To clone an existing repository, refer to this blog: Exploring Git – Configuring GitHub Repo


Thanks!

Happy Learning! Your feedback would be appreciated!