Setting the storage driver in Docker
Reference:
https://docs.docker.com/storage/storagedriver/select-storage-driver/
https://docs.docker.com/storage/storagedriver/
| Linux distribution | Recommended storage drivers | Alternative drivers |
|---|---|---|
| Docker Engine – Community on Ubuntu | overlay2, or aufs (for Ubuntu 14.04 running on kernel 3.13) | overlay¹, devicemapper², zfs, vfs |
| Docker Engine – Community on Debian | overlay2 (Debian Stretch), aufs or devicemapper (older versions) | overlay¹, vfs |
| Docker Engine – Community on CentOS | overlay2 | overlay¹, devicemapper², zfs, vfs |
| Docker Engine – Community on Fedora | overlay2 | overlay¹, devicemapper², zfs, vfs |
Get the current storage driver:
docker info
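To print only the storage driver field rather than the full report (assuming a Docker version that supports the --format flag):
docker info --format '{{.Driver}}'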

Set the storage driver explicitly using the daemon configuration file. This is the method that Docker recommends.
sudo vi /etc/docker/daemon.json
Add the storage driver details to the daemon configuration file:
{
"storage-driver": "devicemapper"
}
Restart Docker after editing the file.
sudo systemctl restart docker
sudo systemctl status docker
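If the daemon fails to come back up, a malformed daemon.json is a common cause. A quick sanity check, assuming python3 is available on the host:
python3 -m json.tool /etc/docker/daemon.json
docker info --format '{{.Driver}}'
The second command should now print devicemapper.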
Installing Docker on CentOS

Install the required packages; these are prerequisites for the Docker installation on CentOS:
sudo yum install -y device-mapper-persistent-data lvm2
Add the Docker CE repo:
sudo yum-config-manager \
--add-repo \
https://download.docker.com/linux/centos/docker-ce.repo
Install the Docker CE packages and containerd.io:
sudo yum install -y docker-ce-18.09.5 docker-ce-cli-18.09.5 containerd.io
Start and enable the Docker service:
sudo systemctl start docker
sudo systemctl enable docker
Add test_user to the docker group, giving the user permission to run docker commands:
sudo usermod -a -G docker test_user
Log out and back in, then test the installation by running a simple container:
docker run hello-world
I am very excited: my book is out for sale.
Looking for Oracle RAC administration jobs?
This book provides complete coverage of Oracle RAC administration interview questions and answers. It helps you crack your interview and land your dream career as an Oracle RAC Administrator, and it is a perfect companion for standing out in today’s competitive job market.
Sections to be discussed:
Basic to advance RAC administration interview Questions
RAC installation Questions
RAC Upgrade/Patching Questions
RAC Data Guard Configuration Questions
RAC troubleshooting Questions
390 Oracle RAC administration interview questions to help you get hired as an Oracle Database RAC administrator.
Using Kafka Connect to Capture Data from a Relational Database (sqlite3)
Use any Kafka Docker image to install and start Kafka.
Reference:
https://docs.confluent.io/current/connect/userguide.html
https://github.com/bitnami/bitnami-docker-kafka
https://docs.confluent.io/3.1.1/connect/connect-jdbc/docs/sink_connector.html
JDBC driver download for SQLite3:
https://bitbucket.org/xerial/sqlite-jdbc/downloads/
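Kafka Connect has to find the SQLite JDBC driver on its classpath. A minimal sketch, assuming a standard Confluent package layout (the jar version below is hypothetical; use whatever you downloaded):
# path and jar version are assumptions; adjust to your download and install layout
cp sqlite-jdbc-3.25.2.jar /usr/share/java/kafka-connect-jdbc/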
- Start Kafka.
confluent start
- Install SQLite3.
apt-get update
apt-get install sqlite3
- Create a New Database and Populate It with a Table and Some Data
Create a new database called “test.db”.
root@shanoj_srv1:/# sqlite3 test.db
- Create a new table in the SQLite database called “accounts”.
CREATE TABLE accounts (id INTEGER PRIMARY KEY AUTOINCREMENT NOT NULL, name VARCHAR (255));
- Insert values into the table to begin populating it.
INSERT INTO accounts(name) VALUES('sabu');
INSERT INTO accounts(name) VALUES('ronnie');
.quit
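To confirm the rows landed before wiring up the connector, you can also query the table non-interactively:
sqlite3 test.db 'SELECT * FROM accounts;'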
- Stop Kafka Connect.
confluent stop connect
- Make the necessary changes to the files below:
root@shanoj_srv1:/# vi /etc/schema-registry/connect-avro-standalone.properties

bootstrap.servers=localhost:9092
key.converter=io.confluent.connect.avro.AvroConverter
key.converter.schema.registry.url=http://localhost:8081
value.converter=io.confluent.connect.avro.AvroConverter
value.converter.schema.registry.url=http://localhost:8081
# The internal converter used for offsets and config data is configurable and must be specified.
internal.key.converter=org.apache.kafka.connect.json.JsonConverter
internal.value.converter=org.apache.kafka.connect.json.JsonConverter
internal.key.converter.schemas.enable=false
internal.value.converter.schemas.enable=false
# Local storage file for offset data
offset.storage.file.filename=/tmp/connect.offsets
root@shanoj_srv1:/# vi /etc/kafka-connect-jdbc/source-quickstart-sqlite.properties

# A simple example that copies all tables from a SQLite database. The first few settings are
# required for all connectors: a name, the connector class to run, and the maximum number of
# tasks to create:
name=test-source-sqlite-jdbc-autoincrement
connector.class=io.confluent.connect.jdbc.JdbcSourceConnector
tasks.max=1
# The remaining configs are specific to the JDBC source connector. In this example, we connect to a
# SQLite database stored in the file test.db, use an auto-incrementing column called 'id' to
# detect new rows as they are added, and output to topics prefixed with 'test-sqlite-jdbc-', e.g.
# a table called 'users' will be written to the topic 'test-sqlite-jdbc-users'.
connection.url=jdbc:sqlite:test.db
mode=incrementing
incrementing.column.name=id
- Start Kafka Connect in standalone mode.
root@shanoj_srv1:/# connect-standalone -daemon /etc/schema-registry/connect-avro-standalone.properties /etc/kafka-connect-jdbc/source-quickstart-sqlite.properties
- Verify that the connector was created.
root@shanoj_srv1:/# cat /logs/connectStandalone.out | grep -i "finished"
[2019-08-15 15:45:49,421] INFO Finished creating connector test-source-sqlite-jdbc-autoincrement (org.apache.kafka.connect.runtime.Worker:225)
[2019-08-15 15:45:49,504] INFO Source task WorkerSourceTask{id=test-source-sqlite-jdbc-autoincrement-0} finished initialization and start (org.apache.kafka.connect.runtime.WorkerSourceTask:143)
[2019-08-15 15:46:49,484] INFO Finished WorkerSourceTask{id=test-source-sqlite-jdbc-autoincrement-0} commitOffsets successfully in 6 ms (org.apache.kafka.connect.runtime.WorkerSourceTask:373)
root@shanoj_srv1:/# curl -s localhost:8083/connectors
- Examine the Kafka topic created.
root@shanoj_srv1:/# kafka-topics --list --zookeeper localhost:2181 | grep test-sqlite-jdbc
test-sqlite-jdbc-accounts
Start a Kafka Consumer and Write New Data to the Database
- Open a Kafka consumer.
root@shanoj_srv1:/# kafka-avro-console-consumer --new-consumer --bootstrap-server localhost:9092 --topic test-sqlite-jdbc-accounts --from-beginning
Open a new tab with a new terminal session, and open a shell in the container:
root@shanoj_srv1:/# sudo docker exec -it sqlite-test /bin/bash
- Transfer to the tmp directory.
root@shanoj_srv1:/# cd /tmp
- Access the SQLite database test.db.
root@shanoj_srv1:/# sqlite3 test.db
- Insert a new value into the accounts table.
root@shanoj_srv1:/tmp# sqlite3 test.db
SQLite version 3.8.7.1 2014-10-29 13:59:56
Enter ".help" for usage hints.
sqlite> INSERT INTO accounts(name) VALUES('rama');
sqlite> INSERT INTO accounts(name) VALUES('lev');
sqlite> INSERT INTO accounts(name) VALUES('sriram');
sqlite> INSERT INTO accounts(name) VALUES('joby');
sqlite> INSERT INTO accounts(name) VALUES('shanoj');
sqlite>
- Return to the previous session with the consumer and verify the data has been written.
root@ip-10-0-1-100:/# kafka-avro-console-consumer --new-consumer --bootstrap-server localhost:9092 --topic test-sqlite-jdbc-accounts --from-beginning
{"id":3,"name":{"string":"rama"}}
{"id":4,"name":{"string":"lev"}}
{"id":5,"name":{"string":"sriram"}}
{"id":6,"name":{"string":"joby"}}
{"id":7,"name":{"string":"shanoj"}}
Install and Configure PostgreSQL 9.x: RHEL/CentOS
1. Download and install the repository RPM using the appropriate package manager:
~ $ rpm -Uvh https://yum.postgresql.org/9.4/redhat/rhel-7-x86_64/pgdg-centos94-9.4-3.noarch.rpm
Retrieving https://yum.postgresql.org/9.4/redhat/rhel-7-x86_64/pgdg-centos94-9.4-3.noarch.rpm
warning: /var/tmp/rpm-tmp.IZow7N: Header V4 DSA/SHA1 Signature, key ID 442df0f8: NOKEY
Preparing...                          ################################# [100%]
Updating / installing...
   1:pgdg-redhat-repo-42.0-4          ################################# [100%]
2. Apply any necessary updates.
[root@tcox6 ~]# yum update
3. Install the PostgreSQL 9.4 server and associated contrib modules and utilities. Once installed, run the database initialization routine before starting the database.
[root@tcox6 ~]# yum install postgresql94-server postgresql94-contrib
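The initialization routine mentioned in step 3 can be run as follows (a sketch, assuming the PGDG 9.4 package layout):
[root@tcox6 ~]# /usr/pgsql-9.4/bin/postgresql94-setup initdb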
4. Enable the PostgreSQL 9.4 server to run on system start and then start the database server.
[root@tcox6 ~]# systemctl enable postgresql-9.4
ln -s '/usr/lib/systemd/system/postgresql-9.4.service' '/etc/systemd/system/multi-user.target.wants/postgresql-9.4.service'
[root@tcox6 ~]# systemctl start postgresql-9.4
5. Check whether SELinux is running in enforcing mode on your system. If so, run the command below to allow external HTTP DB connections to the server through the SELinux configuration.
# cat /etc/selinux/config
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
# enforcing - SELinux security policy is enforced.
# permissive - SELinux prints warnings instead of enforcing.
# disabled - No SELinux policy is loaded.
SELINUX=enforcing
# SELINUXTYPE= can take one of these three values:
# targeted - Targeted processes are protected,
# minimum - Modification of targeted policy. Only selected processes are protected.
# mls - Multi Level Security protection.
SELINUXTYPE=targeted
# setsebool -P httpd_can_network_connect_db 1
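You can confirm the boolean took effect with getsebool:
# getsebool httpd_can_network_connect_db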
6. Log in as the ‘postgres’ user and run the ‘psql’ command. Once at the database prompt, set a password for the ‘postgres’ user.
[root@tcox6 ~]# su - postgres
Last login: Wed Sep 2 13:35:21 UTC 2015 on pts/0
-bash-4.2$ psql
psql (9.4.4)
Type "help" for help.
postgres=# \password postgres
Enter new password:
Enter it again:
postgres=# quit
postgres-# \q
-bash-4.2$ exit
logout
Oracle Exadata Interview Questions and Answers:

1) What are the advantages of Exadata?
The Exadata cluster allows for consistent performance while allowing for increased throughput. As load increases on the cluster, performance remains consistent by utilizing inter-instance and intra-instance parallelism. Simply moving to Exadata should not be expected to improve performance by itself, although in most cases it will, especially if the current database host is overloaded.

2) What is the secret behind Exadata’s higher throughput?
Exadata ships less data through the pipes between the storage and the database nodes and the other nodes in the RAC cluster. Its ability to run massively parallel processes across all the nodes in the cluster also provides a much higher level of throughput. It also has much bigger pipes in the cluster, using an InfiniBand interconnect for inter-instance data block transfers that is as much as 5X faster than Fibre Channel networks.

3) What are the key hardware components?
- DB Server
- Storage Server Cells
- High Speed InfiniBand Switch
- Cisco Switch

4) What are the key software features?
- Smart Scan
- Smart Flash Cache
- Storage Index
- Exadata Hybrid Columnar Compression (EHCC)
- IORM (I/O Resource Manager)

5) What is a Cell and Grid Disk?
Cell and grid disks are logical components of the physical Exadata storage. A cell, or Exadata Storage Server cell, is a combination of disk drives put together to store user data. Each cell disk corresponds to a LUN (Logical Unit) which has been formatted by the Exadata Storage Server software. Typically, each cell has 12 disk drives mapped to it.
Grid disks are created on top of cell disks and are presented to Oracle ASM as ASM disks. Space is allocated in chunks from the outer tracks of the cell disk moving inwards. One can have multiple grid disks per cell disk.

6) What is IORM?
IORM stands for I/O Resource Manager. It manages the I/O demand based on the configuration and the amount of resources available, and it ensures that none of the I/O cells become oversubscribed with I/O requests. This is achieved by managing the incoming requests at a consumer-group level. Using IORM, you can divide the I/O bandwidth between multiple databases. To implement IORM, resource groups, consumers, and plans need to be created first.

7) What is Hybrid Columnar Compression?
Hybrid Columnar Compression, also called HCC, is a feature of Exadata used for compressing data at the column level for a table. It creates compression units which consist of logical groupings of column values, typically spanning several data blocks; each data block holds data from columns for multiple rows. This algorithm has the potential to reduce the storage used by the data and reduce disk I/O, enhancing performance for queries. The different types of HCC compression include:
• Query Low
• Query High
• Archive High
• Archive Low

8) What is Flash Cache?
Four 96GB PCIe flash memory cards are present on each Exadata Storage Server cell, providing very fast access to the data stored on them. This reduces data access latency by retrieving data from flash memory rather than having to access it from disk. A total flash storage of 384GB per cell is available on the Exadata appliance.

9) What is Smart Scan?
It is a feature of the Exadata software which enhances database performance many times over. It processes queries in an intelligent way, retrieving specific rows rather than complete blocks. It applies filtering criteria at the storage level based on the selection criteria specified in the query. It also performs column projection, which is the process of sending only the columns required by the query back to the database host/instance.

10) What is the parallelism instance parameter used in Exadata?
The parameter PARALLEL_FORCE_LOCAL can be specified at the session level for a particular job.

11) How do you test the performance of Exadata?
You can use the "calibrate" command at the CellCLI command line.

12) What are the ways to migrate onto Exadata?
Depending on the downtime allowed, there are several options:
- Oracle Data Guard
- Traditional Export/Import
- Tablespace transportation
- GoldenGate replication after a data restore onto Exadata

13) What types of operations does Exadata "offload"?
Some of the operations that are offloaded from the database host to the cell servers are:
- Predicate filtering
- Column projection filtering
- Join processing
- Backups

14) What is cellcli?
It is the command-line utility used to manage the cell storage.

15) How do you obtain info on the cell disks?
At the CellCLI command line you can issue the "list celldisk" command.

16) How would you create a grid disk?
At the CellCLI command line you would issue the "create griddisk all ..." command.

17) What are the cellinit.ora and the cellip.ora files used for?
These files hold the hostnames and IP addresses of all the nodes in the cluster. They are used to run commands on remote database and cell server nodes from a local host.

18) Which package can be used to estimate the compression ratio of a table?
DBMS_COMPRESSION

19) What are the background services of the cell server?
- MS – Management Server
- cellsrv – Cell Server
- RS – Restart Server

20) How many disks come within a storage cell?
12

21) What is the purpose of the spine switch?
The spine switch is used to connect or add more Exadata machines to the cluster.

22) How do you migrate a database from a normal setup to Exadata?
There are many methods we can use to migrate a DB to Exadata. Below are some of them:
1. Export/Import
2. Physical standby
3. Logical standby
4. Transportable tablespace
5. Transportable database
6. GoldenGate
7. RMAN cold and hot backup restoration
8. Oracle Streams

23) Can we use a flash disk as an ASM disk?
Yes.

24) Which protocol is used for communication between the database server and the storage server?
The iDB protocol.

25) Which OS is supported on Exadata?
Database servers have two options for the OS, either Linux or Solaris, which can be finalized at the time of configuration. Cell storage comes with Linux only.

26) What is ASR?
ASR (Auto Service Request) is the tool to manage the Oracle hardware. Whenever any hardware fault occurs, ASR automatically raises an SR with Oracle Support and sends a notification to the respective customer.

27) How do you upgrade the firmware of Exadata components?
It can be done through the ILOM of the DB or cell server.

28) Where can we define which cell storage can be used by a particular database server?
The cellip.ora file contains the list of storage servers which are accessed by the DB server.

29) What are the Exadata health check tools?
1. Exachk
2. sundiagtest
3. OSWatcher
4. OEM 12c

30) What is EHCC?
EHCC is Exadata Hybrid Columnar Compression, which is used to compress data in the database.

31) What is offloading and how does it work?
It refers to the fact that part of the traditional SQL processing done by the database can be "offloaded" from the database layer to the storage layer. The primary benefit of offloading is the reduction in the volume of data that must be returned to the database server, which is one of the major bottlenecks of most large databases.

32) What is the difference between cellcli and dcli?
cellcli can be used on the respective cell storage only. dcli (Distributed Command Line Utility) can be used to replicate a command across multiple storage servers as well as DB servers.

33) What is IORM and what is its role in Exadata?
IORM stands for I/O Resource Manager, which manages the I/Os of multiple databases on the storage cells.

34) How can we check whether Oracle best practices have been configured on Exadata?
We can execute Exachk and verify the best-practice setup on the Exadata machine.

35) How many networks are required in Exadata?
1. Public/Client network — for application connectivity
2. Management network — for Exadata H/W management
3. Private network — for cluster interconnectivity and storage connectivity

36) What is the command to enable Query High compression on a table?
SQL> alter table table_name move compress for query high;

37) How do you take a backup of the cell storage software?
It is not required to take a backup, as it happens automatically. Exadata uses an internal USB drive called the Cellboot Flash Drive to take a backup of the software.

38) What is the difference between write-through and write-back flash cache mode?
1. writethrough –> flash cache is used only for reading
2. writeback –> flash cache is used for both reading and writing

39) Which feature of Exadata is used to eliminate disk I/O?
Flash Cache

40) What is the capacity of an InfiniBand port?
40 Gbps

41) What is the difference between high capacity and high performance disks?
1. High capacity disks come with more storage space and lower rpm (7.5k)
2. High performance disks come with less storage and higher rpm (15k)

42) When should one execute Exachk?
Before and after any configuration change in the Database Machine.

43) What is a grid disk?
Grid disks are created on top of cell disks and are presented to Oracle ASM as ASM disks. Space is allocated in chunks from the outer tracks of the cell disk moving inwards. One can have multiple grid disks per cell disk.

44) Which network is used for RAC interconnectivity?
The InfiniBand network.

45) What is Smart Scan?
It is a feature of the Exadata software which enhances database performance many times over. It processes queries in an intelligent way, retrieving specific rows rather than complete blocks. It applies filtering criteria at the storage level based on the selection criteria specified in the query. It also performs column projection, which is the process of sending only the columns required by the query back to the database host/instance.

46) What is the parallelism instance parameter used in Exadata?
The parameter PARALLEL_FORCE_LOCAL can be specified at the session level for a particular job.

47) Which statistic can be used to check the flash hit ratio at the database level?
Cell flash cache read hits.

48) Which disk group is used to keep the OCR files on Exadata?
+DBFS_DG

49) How many Exadata wait events are contained in the 11.2.0.3 release?
There are 53 Exadata-specific wait events.

50) What is the difference between DBRM and IORM?
DBRM is a feature of the database, while IORM is a feature of the storage server software.

51) Which ASM parameters are responsible for auto disk management in Exadata?
_AUTO_MANAGE_MAX_ONLINE_TRIES — controls the maximum number of attempts to make a disk online
_AUTO_MANAGE_EXADATA_DISKS — controls the auto disk management feature
_AUTO_MANAGE_NUM_TRIES — controls the maximum number of attempts to perform an automatic operation

52) How do you enable flash cache compression?
CellCLI> ALTER CELL flashCacheCompress=true

53) How many Exadata Storage Server nodes are included in the Exadata Database Machine X4-8?
14 storage nodes.

54) What is the client or public network in Exadata?
The client or public network is used to establish connectivity between the database and the application.

55) What are the steps involved in initial Exadata configuration?
- Initial network preparation
- Configure Exadata servers
- Configure Exadata software
- Configure database hosts to use Exadata
- Configure ASM and database instances
- Configure ASM disk groups for Exadata

56) What is the iDB protocol?
iDB stands for intelligent database protocol. It is a network-based protocol which is responsible for communication between the storage cells and the database servers.

57) What is LIBCELL?
Libcell stands for Library Cell, which is linked with the Oracle kernel. It allows the Oracle kernel to talk with the storage server over the network instead of using operating system reads and writes.

58) Which package is used by the compression adviser utility?
The DBMS_COMPRESSION package.

59) What is the primary goal of the storage index?
Storage indexes are a feature unique to the Exadata Database Machine whose primary goal is to reduce the amount of I/O required to service I/O requests for Exadata Smart Scan.

60) What is Smart Scan offloading?
Offloading and Smart Scan are two terms that are used somewhat interchangeably. Exadata Smart Scan offloads processing of queries from the database server to the storage server. Processors on the Exadata Storage Server process the data on behalf of the database SQL query, and only the data requested in the query is returned to the database server.

61) What is checkip and what is it used for?
checkip is an OS-level script which contains the IP addresses and hostnames that will be used by Exadata in the configuration phase. It checks network readiness, such as proper DNS configuration, and it also checks that there is no IP duplication in the network by pinging addresses which are not supposed to respond initially.

62) Which script is used to reclaim the disk space of an unused operating system?
For Linux: reclaimdisks.sh
For Solaris: reclaimdisks.pl

63) How does the database server communicate with the storage cells?
The database server communicates with the storage cells through the InfiniBand network.

64) Can I have multiple cell disks for one grid disk?
No. A cell disk can contain multiple grid disks, but a grid disk cannot span multiple cell disks.

65) How many FMods are available on each flash card?
Four FMods (Flash Modules) are available on each flash card.

66) What is the smart flash log?
The smart flash log is a temporary storage area on the Exadata Smart Flash Cache used to store redo log data.

67) Which parameter is used to enable and disable Smart Scan?
cell_offload_processing

68) How do you check the InfiniBand topology?
We can verify the InfiniBand switch topology by executing the verify-topology script from one of the database servers.

69) Can we use HCC in a non-Exadata environment?
No, HCC is only available for data stored on an Exadata storage server.

70) What is a resource plan?
It is a collection of plan directives that determines how database resources are to be allocated.

71) What is DBFS?
DBFS stands for Database File System, which can be built on an ASM disk group using a database tablespace.

72) What is the purpose of the InfiniBand spine switch?
The spine switch is used to connect multiple Exadata Database Machines.

73) What is offload block filtering?
The Exadata storage server filters out the blocks that are not required for the incremental backup in progress, so only the blocks that are required for the backup are sent to the database.

74) Which protocol is used by ASR to send notifications?
SNMP

75) Is manual intervention possible in the storage index?
No.

76) What are the options for cell_flashcache for any object?
KEEP, DEFAULT, NONE

77) What is the default size of the smart flash log?
512MB per module. Each storage cell has 4 modules, so it is 4 x 512MB per cell.

78) What is flash cache and how does it work?
The flash cache is a hardware component configured in the Exadata storage cell server which delivers high performance in read and write operations. The primary task of the smart flash cache is to hold frequently accessed data in flash, so that the next time the same data is required, the physical read can be avoided by reading the data from the flash cache.
Oracle Multi-tenant Application Containers – Create Application Container:
Application containers:
An application container consists of an application root and one or more
application PDBs. The application root stores the common data and metadata
for a specific application. You can upgrade or patch the application once in
the application root, and then synchronize the application PDBs with the root.
Reference book:
Oracle Database 12c Release 2 Multitenant by Franck Pachot, Vit Špinka, Anton Els.
Create Application root:
Log in to the CDB root as SYSDBA and create an application root named shan_app.
[oracle@multisrv1 ~]$ ps -ef | grep pmon
grid 4986 1 0 22:10 ? 00:00:00 asm_pmon_+ASM
oracle 5086 1 0 22:10 ? 00:00:00 ora_pmon_CDB1
oracle 6241 6216 0 22:39 pts/0 00:00:00 grep pmon
[oracle@multisrv1 ~]$ sqlplus sys/oracle@cdb1 as sysdba

SQL*Plus: Release 12.2.0.1.0 Production on Sun Feb 3 22:39:37 2019
Copyright (c) 1982, 2016, Oracle. All rights reserved.

Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

SQL> CREATE PLUGGABLE DATABASE shan_app AS APPLICATION CONTAINER ADMIN USER shan_appadm IDENTIFIED BY oracle;
SQL> ALTER PLUGGABLE DATABASE shan_app OPEN;
SQL> col name format a10
SQL> SELECT CON_ID, NAME, OPEN_MODE FROM V$PDBS WHERE APPLICATION_ROOT='YES';

    CON_ID NAME       OPEN_MODE
---------- ---------- ----------
         5 SHAN_APP   READ ONLY
— save the state of shan_app (otherwise, it will be closed when you reboot):
ALTER PLUGGABLE DATABASE shan_app SAVE STATE;
View the tablespaces and the datafiles created for the application container shan_app.
Obtain the CON_ID value from the query in the previous step.
Observe that the tablespaces SYSTEM, SYSAUX, and UNDOTBS1 are created.
SQL> SELECT FILE_NAME, TABLESPACE_NAME FROM CDB_DATA_FILES WHERE CON_ID=5;

FILE_NAME
--------------------------------------------------------------------------------
TABLESPACE_NAME
------------------------------
+DATA/CDB1/80F305B39D922445E0535800A8C090A6/DATAFILE/system.275.999230255
SYSTEM
+DATA/CDB1/80F305B39D922445E0535800A8C090A6/DATAFILE/sysaux.276.999230255
SYSAUX
+DATA/CDB1/80F305B39D922445E0535800A8C090A6/DATAFILE/undotbs1.274.999230255
UNDOTBS1

Switch the current container to the application root and check which privileges are granted to SHAN_APPADM.
Observe that the application root administrator is only granted the role PDB_DBA. This role has three privileges granted to it.

ALTER SESSION SET CONTAINER=SHAN_APP;
-- check the roles granted to the user:
col grantee format a10
col granted_role format a15
SELECT GRANTEE, GRANTED_ROLE, COMMON FROM DBA_ROLE_PRIVS WHERE GRANTEE='SHAN_APPADM';
-- check the privileges granted to the role:
col role format a10
col privilege format a30
SELECT ROLE, PRIVILEGE, ADMIN_OPTION, COMMON, INHERITED FROM ROLE_SYS_PRIVS WHERE ROLE='PDB_DBA';
Configure the tnsnames.ora file to allow connecting to the application root through the listener.
vi tnsnames.ora

SHAN_APP =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = srv1)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = shan_app.localdomain)
    )
  )
Test the configuration
conn sys/oracle@shan_app as sysdba
Install an application in the application root
Log in to the application root as SYSDBA.
conn sys/oracle@shan_app as sysdba
Begin installing the HR_APP application.
Note: When you install a new application in an application root, you always start with this statement.
ALTER PLUGGABLE DATABASE APPLICATION hr_app BEGIN INSTALL '1.0';
Create the application tablespace (HR_TBS) and create the application owner user (HR).
The application owner (HR) should have enough privileges to control the application objects, such as the application tables, indexes, sequences, PL/SQL program units, etc.
The application owner is a common user in the application root.
CREATE TABLESPACE hr_tbs;
CREATE USER HR IDENTIFIED BY oracle
  DEFAULT TABLESPACE HR_TBS
  QUOTA UNLIMITED ON HR_TBS
  CONTAINER = ALL;
GRANT CREATE SESSION, CREATE TABLE, CREATE SEQUENCE, CREATE VIEW, CREATE PROCEDURE, CREATE TRIGGER TO HR;
Switch the current schema to the application user.
ALTER SESSION SET CURRENT_SCHEMA=hr;
Run the script file.
@<run the sql for the application objects creation>.sql
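The script contents are elided above; as a minimal hypothetical example, an application object created by such a script might look like this (the SHARING clause controls how the object is shared with the application PDBs):
CREATE TABLE hr.departments (
  id   NUMBER PRIMARY KEY,
  name VARCHAR2(50)
) SHARING=METADATA TABLESPACE hr_tbs;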
If all the commands in the script run successfully, end the application installation.
ALTER PLUGGABLE DATABASE APPLICATION hr_app END INSTALL '1.0';
Verify that the application has been successfully created.
column app_name format a15
column app_version format a10
column app_status format a15
SELECT APP_NAME, APP_VERSION, APP_STATUS FROM DBA_APPLICATIONS WHERE APP_IMPLICIT='N';
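To make the installed application usable, you create application PDBs inside the application root and synchronize them with it; a minimal sketch, with a hypothetical PDB name:
-- connected to the application root (shan_app):
CREATE PLUGGABLE DATABASE hr_pdb1 ADMIN USER pdbadm IDENTIFIED BY oracle;
ALTER PLUGGABLE DATABASE hr_pdb1 OPEN;
-- inside the new application PDB, pull in the installed application:
ALTER SESSION SET CONTAINER = hr_pdb1;
ALTER PLUGGABLE DATABASE APPLICATION hr_app SYNC;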
Using Resource Manager with CDB and PDBs
With Resource Manager we can:
– Distribute CPU time among the users and applications
– Limit the amount of PGA memory
– Limit degree of parallelism
– Set priorities of parallel statements
– Limit the number of parallel execution servers
– Create an active session pool
– Limit session idle time
– Prevent execution of long-running operations
For more details, refer to My Oracle Support (MOS):
Managing OS Resources Among PDBs Using PDB Performance Profiles – 12.2 New Feature (Doc ID 2171135.1)
Verify that the PDBs are available and opened in read/write mode.
SQL> col name format a10
SQL> SELECT NAME, OPEN_MODE FROM V$PDBS;

NAME       OPEN_MODE
---------- ----------
PDB$SEED   READ ONLY
PDB1       READ WRITE
PDB2       READ WRITE
Create the resource plan BALANACED_PLAN and configure the plan directives in it.
This plan gives one share each to PDB1 and PDB2, so both have the same priority for receiving CPU time.
SQL> ALTER SESSION SET CONTAINER = CDB$ROOT;
Session altered.
SQL> EXEC DBMS_RESOURCE_MANAGER.CLEAR_PENDING_AREA();
PL/SQL procedure successfully completed.
SQL> EXEC DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
PL/SQL procedure successfully completed.
SQL> EXEC DBMS_RESOURCE_MANAGER.CREATE_CDB_PLAN('BALANACED_PLAN', 'One share to PDB1 and PDB2');
PL/SQL procedure successfully completed.
SQL> EXEC DBMS_RESOURCE_MANAGER.CREATE_CDB_PLAN_DIRECTIVE('BALANACED_PLAN', 'PDB1', shares => 1);
PL/SQL procedure successfully completed.
SQL> EXEC DBMS_RESOURCE_MANAGER.CREATE_CDB_PLAN_DIRECTIVE('BALANACED_PLAN', 'PDB2', shares => 1);
PL/SQL procedure successfully completed.
Create the resource plan BIASED_PLAN and configure the plan directives in it.
This plan gives three shares to PDB1 and one share to PDB2.
SQL> EXEC DBMS_RESOURCE_MANAGER.CREATE_CDB_PLAN('BIASED_PLAN', 'PDB1 is given high priority');
PL/SQL procedure successfully completed.
SQL> EXEC DBMS_RESOURCE_MANAGER.CREATE_CDB_PLAN_DIRECTIVE('BIASED_PLAN', 'PDB1', shares => 3);
PL/SQL procedure successfully completed.
SQL> EXEC DBMS_RESOURCE_MANAGER.CREATE_CDB_PLAN_DIRECTIVE('BIASED_PLAN', 'PDB2', shares => 1);
PL/SQL procedure successfully completed.
SQL> EXEC DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();
PL/SQL procedure successfully completed.
SQL> EXEC DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
PL/SQL procedure successfully completed.
Make sure the plans and their associated directives were successfully created.
SQL> col plan format a20
SQL> SELECT PLAN FROM CDB_CDB_RSRC_PLANS WHERE CON_ID = 1 AND PLAN IN ('BALANACED_PLAN','BIASED_PLAN') ORDER BY 1;

PLAN
--------------------
BALANACED_PLAN
BIASED_PLAN

SQL> col pluggable_database format a30
SQL> SELECT PLAN, PLUGGABLE_DATABASE, SHARES FROM CDB_CDB_RSRC_PLAN_DIRECTIVES WHERE CON_ID = 1 AND PLAN IN ('BALANACED_PLAN','BIASED_PLAN') ORDER BY 1, 2;

PLAN                 PLUGGABLE_DATABASE             SHARES
-------------------- ------------------------------ ----------
BALANACED_PLAN       ORA$AUTOTASK
BALANACED_PLAN       ORA$DEFAULT_PDB_DIRECTIVE      1
BALANACED_PLAN       PDB1                           1
BALANACED_PLAN       PDB2                           1
BIASED_PLAN          ORA$AUTOTASK
BIASED_PLAN          ORA$DEFAULT_PDB_DIRECTIVE      1
BIASED_PLAN          PDB1                           3
BIASED_PLAN          PDB2                           1

8 rows selected.
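Note that creating a CDB plan does not activate it. To actually put one of the plans into effect (not shown above), set it in the root:
ALTER SESSION SET CONTAINER = CDB$ROOT;
ALTER SYSTEM SET RESOURCE_MANAGER_PLAN = 'BALANACED_PLAN' SCOPE=BOTH;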
Delete the resource plans.
begin
  DBMS_RESOURCE_MANAGER.CLEAR_PENDING_AREA();
  DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
  DBMS_RESOURCE_MANAGER.DELETE_CDB_PLAN('BALANACED_PLAN');
  DBMS_RESOURCE_MANAGER.DELETE_CDB_PLAN('BIASED_PLAN');
  DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();
  DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
end;
/
Oracle RMAN Backup and Recovery in CDB & PDBs
The following points are demonstrated in this document:
1. Performing RMAN Whole CDB Backup
2. Performing RMAN Whole and Partial PDB Backup
3. Performing RMAN Recovery from SYSTEM PDB Datafile Loss
4. Performing RMAN Recovery from non-SYSTEM CDB Datafile Loss
5. Performing PITR in a Pluggable Database
6. Performing PITR on a Tablespace in a PDB
7. Performing Flashback for the CDB from Dropping a Common User
8. Performing Flashback for PDBs
9. Using Data Recovery Advisor commands
Performing RMAN Whole CDB Backup:
1. Run RMAN, connect as target to CDB1, and make the configurations shown below.
[oracle@multisrv1 ~]$ ps -ef | grep pmon
grid 4833 1 0 06:19 ? 00:00:00 asm_pmon_+ASM
oracle 4935 1 0 06:19 ? 00:00:00 ora_pmon_CDB1
oracle 5794 5742 0 06:31 pts/0 00:00:00 grep pmon
[oracle@multisrv1 ~]$ rman target /

Recovery Manager: Release 12.2.0.1.0 - Production on Thu Jan 24 06:31:57 2019
Copyright (c) 1982, 2017, Oracle and/or its affiliates. All rights reserved.
connected to target database: CDB1 (DBID=948697708)

RMAN> CONFIGURE DEFAULT DEVICE TYPE TO disk;
using target database control file instead of recovery catalog
new RMAN configuration parameters:
CONFIGURE DEFAULT DEVICE TYPE TO DISK;
new RMAN configuration parameters are successfully stored

RMAN> CONFIGURE CONTROLFILE AUTOBACKUP ON;
new RMAN configuration parameters:
CONFIGURE CONTROLFILE AUTOBACKUP ON;
new RMAN configuration parameters are successfully stored
2. Take a backup of all the datafiles of the database (the root and all its PDBs),
the control files, the SPFILE, and the archived redo log files.
RMAN> BACKUP DATABASE PLUS ARCHIVELOG;
3. List the backupset files generated by RMAN.
RMAN> LIST BACKUP ;
Performing RMAN Whole and Partial PDB Backup
1. Login to RMAN and take a whole PDB backup of PDB2.
[oracle@multisrv1 ~]$ rman target /

Recovery Manager: Release 12.2.0.1.0 - Production on Thu Jan 24 06:31:57 2019
Copyright (c) 1982, 2017, Oracle and/or its affiliates. All rights reserved.
connected to target database: CDB1 (DBID=948697708)

RMAN> BACKUP PLUGGABLE DATABASE pdb2;
2. List the backupset files generated by RMAN as a backup to PDB2.
RMAN> list backup of pluggable database PDB2;
3. Login to RMAN again and take a backup of the tablespace users in PDB2 (partial backup).
rman target /
BACKUP TABLESPACE pdb2:users;
LIST BACKUP OF TABLESPACE pdb2:users;
Performing RMAN Recovery from SYSTEM PDB Datafile Loss
Recovery procedures in a CDB are much the same as in a non-CDB database.
Note: If the PDB was open when the file was lost, you need to shut down the CDB and mount it before you proceed
with the recovery procedure. The recovery procedure is similar to the traditional procedure for recovering from
the loss of a SYSTEM datafile in a non-CDB database.
Run RMAN and connect to CDB1 and proceed with the traditional procedure to restore the missing SYSTEM data file.
-- mount the CDB
rman target /
RMAN> SHUTDOWN ABORT
RMAN> STARTUP MOUNT
-- execute the following commands:
RMAN> RESTORE TABLESPACE pdb2:SYSTEM;
RMAN> RECOVER TABLESPACE pdb2:SYSTEM;
-- OR the following commands:
RMAN> RESTORE PLUGGABLE DATABASE pdb2;
RMAN> RECOVER PLUGGABLE DATABASE pdb2;
RMAN> ALTER DATABASE OPEN;
RMAN> SELECT NAME, OPEN_MODE FROM V$PDBS WHERE NAME='PDB2';
Performing RMAN Recovery from non-SYSTEM Root Datafile Loss
1. Perform the recovery of a non-essential (non-SYSTEM) datafile in CDB1.
We will use the Data Recovery Advisor commands to discover and repair the failure.
2. Run RMAN and connect to CDB1 as target. Discover the failure.
rman target / RMAN> LIST FAILURE; RMAN> LIST FAILURE DETAIL;
3. Obtain the recommendation from the Data Recovery Advisor to remediate the issue.
RMAN> ADVISE FAILURE;
4. Preview the suggested script to fix the issue.
RMAN> REPAIR FAILURE PREVIEW;
5. Execute the script.
When it prompts for confirmation, type YES then press ENTER.
RMAN> REPAIR FAILURE;
Note: in a real-life scenario, it is always advisable to take a backup of the entire CDB after such a recovery procedure is implemented.
• There is not much difference between using RMAN to take backups in a CDB database and a non-CDB database. The recovery procedure is also nearly the same.
• The Data Recovery Advisor provides an easy approach to discovering and recovering from datafile loss.
Point-in-time Recovery
Performing PITR in a Pluggable Database
1. Make sure the local undo mode is enabled.
sqlplus / as sysdba

col PROPERTY_NAME format a25
col PROPERTY_VALUE format a10
SELECT PROPERTY_NAME, PROPERTY_VALUE FROM DATABASE_PROPERTIES WHERE PROPERTY_NAME = 'LOCAL_UNDO_ENABLED';
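The RUN block in step 3 needs a target SCN. If you do not already have one, a minimal sketch to capture the current SCN before the destructive change, run from the root:
sqlplus / as sysdba
-- note this value; it is the <scn> used in the RUN block below
SELECT CURRENT_SCN FROM V$DATABASE;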
2. Run RMAN, connect as target to the CDB, and close PDB2.
rman target /
ALTER PLUGGABLE DATABASE pdb2 CLOSE IMMEDIATE;
3. Perform the PDB PITR on PDB2.
Note that this recovery is at the PDB level. The other containers are not affected.
RUN {
  SET UNTIL SCN = <scn>;
  RESTORE PLUGGABLE DATABASE pdb2;
  RECOVER PLUGGABLE DATABASE pdb2;
}
ALTER PLUGGABLE DATABASE pdb2 OPEN RESETLOGS;
4. Verify that the data has been restored in the table.
sqlplus SHANOJUSER/oracle@pdb2
SELECT COUNT(*) FROM SHANOJUSER.TB1;
(For testing, I had inserted some rows into table TB1 beforehand.)
5. Take a backup of the whole CDB. This is recommended after every PITR.
rman target /
BACKUP DATABASE PLUS ARCHIVELOG DELETE ALL INPUT;
Performing PITR on a Tablespace in a PDB
We will perform PITR at the tablespace level. This method provides higher availability than PITR at the pluggable database level, because the other tablespaces remain active while you recover the lost tablespace. However, it involves creating a temporary (auxiliary) instance, so it takes longer to execute and requires much more disk space than PITR at the PDB level.
1. Verify that the table TB1 which is owned by SHANOJUSER is saved in the users tablespace.
sqlplus SHANOJUSER/oracle@pdb2
SELECT TABLESPACE_NAME FROM USER_TABLES WHERE TABLE_NAME='TB1';
2. As the users tablespace is the default tablespace, you need to switch the default tablespace of the PDB to some other tablespace before you proceed with the PITR procedure on it.
Create a staging tablespace and set it as the default tablespace.
conn / as sysdba
ALTER SESSION SET CONTAINER=pdb2;
-- verify the default tablespace is the users tablespace:
SELECT PROPERTY_VALUE FROM DATABASE_PROPERTIES WHERE PROPERTY_NAME = 'DEFAULT_PERMANENT_TABLESPACE';
-- verify the users tablespace is online:
SELECT STATUS FROM DBA_TABLESPACES WHERE TABLESPACE_NAME='USERS';
-- create a staging tablespace and set it as the default tablespace:
CREATE TABLESPACE stagingtbs;
ALTER DATABASE DEFAULT TABLESPACE stagingtbs;
3. Take a backup of the whole CDB. This is among the best practices after adding or dropping a datafile in the database.
rman target /
DELETE BACKUPSET;
BACKUP DATABASE;
4. Delete the existing rows in the table SHANOJUSER.TB1 then insert some testing rows into it.
sqlplus SHANOJUSER/oracle@pdb2

DELETE SHANOJUSER.TB1;
COMMIT;
BEGIN
  FOR I IN 1..10 LOOP
    INSERT INTO SHANOJUSER.TB1 (ID, NOTES) VALUES (I, TO_CHAR(sysdate,'DD-MM-YY HH24:MI'));
  END LOOP;
  COMMIT;
END;
/
SELECT COUNT(*) FROM SHANOJUSER.TB1;
5. Obtain the current database SCN and take a note of it.
conn / as sysdba
SELECT CURRENT_SCN FROM V$DATABASE;
6. Switch the redo log files so that archived redo log files will be generated.
ALTER SYSTEM SWITCH LOGFILE;
7. Delete the rows in SHANOJUSER.TB1 and commit the deletion.
conn SHANOJUSER/oracle@pdb2
DELETE SHANOJUSER.TB1;
COMMIT;
8. Make the users tablespace offline.
conn / as sysdba
ALTER SESSION SET CONTAINER=PDB2;
ALTER TABLESPACE USERS OFFLINE IMMEDIATE;
9. Prepare for the PITR of the users tablespace.
a. Create the directory in which the auxiliary files will be created:
mkdir /home/oracle/backup
ls /home/oracle/backup
b. Run rman and connect as target to the CDB
rman target /
10. Perform the PITR of the users tablespace in the database PDB2.
This statement can be executed online. It makes RMAN create an auxiliary instance, from which the users tablespace is recovered.
Make sure the auxiliary destination directory exists.
RECOVER TABLESPACE pdb2:USERS UNTIL SCN <scn> AUXILIARY DESTINATION='/home/oracle/backup';
11. Make the users tablespace online.
sqlplus / as sysdba
ALTER SESSION SET CONTAINER=PDB2;
ALTER TABLESPACE USERS ONLINE;
12. Verify that the data has been restored in the table.
conn SHANOJUSER/oracle@pdb2
SELECT COUNT(*) FROM SHANOJUSER.TB1;
13. Take a backup of the whole CDB. This is recommended after every PITR.
rman target /
DELETE BACKUPSET;
BACKUP DATABASE PLUS ARCHIVELOG DELETE ALL INPUT;
14. Make the users tablespace the default tablespace again and drop the staging tablespace.
sqlplus / as sysdba
ALTER SESSION SET CONTAINER=pdb2;
ALTER DATABASE DEFAULT TABLESPACE users;
DROP TABLESPACE stagingtbs INCLUDING CONTENTS AND DATAFILES;
Performing Flashback for the CDB from Dropping a Common User
We will enable database flashback in CDB1, then use it to recover from dropping a common user. Common users are created at the CDB level, so to recover a dropped common user with flashback, you must flash back the entire CDB.
1. Create a common user.
sqlplus / as sysdba
CREATE USER C##AJAYUSER1 IDENTIFIED BY oracle CONTAINER=ALL;
GRANT CREATE SESSION TO C##AJAYUSER1 CONTAINER=ALL;
2. Make sure the Fast Recovery Area is enabled.
SHOW PARAMETER DB_RECOVERY_FILE_DEST
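If the Fast Recovery Area is not yet configured, a minimal sketch (the size and destination below are assumptions; adjust for your system):
ALTER SYSTEM SET DB_RECOVERY_FILE_DEST_SIZE=10G SCOPE=BOTH;
-- the destination can be an ASM disk group or a filesystem path:
ALTER SYSTEM SET DB_RECOVERY_FILE_DEST='+FRA' SCOPE=BOTH;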
Enable flashback in the CDB.
Note: In recent releases, enabling flashback does not strictly require a database restart; the steps below nevertheless do it from the MOUNT state.
-- verify the flashback is off:
SELECT FLASHBACK_ON FROM V$DATABASE;
-- enable the flashback:
SHUTDOWN IMMEDIATE
STARTUP MOUNT
ALTER SYSTEM SET DB_FLASHBACK_RETENTION_TARGET=1440 SCOPE=BOTH;
ALTER DATABASE FLASHBACK ON;
ALTER DATABASE OPEN;
3. Obtain the current SCN and take a note of it.
SELECT CURRENT_SCN FROM V$DATABASE;
4. Drop the common user C##AJAYUSER1
-- verify the user exists:
col username format A20
SELECT USERNAME, COMMON, CON_ID FROM CDB_USERS WHERE USERNAME='C##AJAYUSER1';

DROP USER C##AJAYUSER1 CASCADE;
5. Switch the logfile multiple times to generate flashback logs.
— execute the following statement multiple times:
ALTER SYSTEM SWITCH LOGFILE;
6. Flashback the CDB database up to the obtained SCN.
SHUTDOWN IMMEDIATE
STARTUP MOUNT
FLASHBACK DATABASE TO SCN <scn>;
7. Open the database in READ ONLY mode and verify that the common user is recovered.
Note: Opening the CDB in READ ONLY mode does not automatically open the PDBs. You have to open them manually.
ALTER DATABASE OPEN READ ONLY;
ALTER PLUGGABLE DATABASE ALL OPEN READ ONLY;
-- you should see the common user recovered:
SELECT USERNAME, COMMON, CON_ID FROM CDB_USERS WHERE USERNAME='C##AJAYUSER1';
8. Open the CDB with RESETLOGS option.
SHUTDOWN IMMEDIATE
STARTUP MOUNT
ALTER DATABASE OPEN RESETLOGS;
9. Take a backup of the whole CDB. This is recommended every time you open the database with the RESETLOGS option.
rman target /
DELETE BACKUPSET;
BACKUP DATABASE PLUS ARCHIVELOG DELETE ALL INPUT;
Performing Flashback for PDBs
We will flash back a PDB to a specific restore point.
1. Create a guaranteed restore point in PDB2.
sqlplus / as sysdba
CREATE RESTORE POINT pre_change FOR PLUGGABLE DATABASE pdb2 GUARANTEE FLASHBACK DATABASE;
col name format a20
SELECT SCN, NAME, CON_ID, PDB_RESTORE_POINT, GUARANTEE_FLASHBACK_DATABASE, CLEAN_PDB_RESTORE_POINT FROM V$RESTORE_POINT;
2. Delete all the rows in SHANOJUSER.TB1 (in PDB2). This is the destructive operation that you need to recover from.
conn SHANOJUSER/oracle@pdb2
DELETE SHANOJUSER.tb1;
COMMIT;
3. Flashback PDB2 to the restore point.
conn / as sysdba
ALTER PLUGGABLE DATABASE pdb2 CLOSE IMMEDIATE;
FLASHBACK PLUGGABLE DATABASE pdb2 TO RESTORE POINT pre_change;
ALTER PLUGGABLE DATABASE pdb2 OPEN RESETLOGS;
4. Verify that the data has been restored in the table.
conn SHANOJUSER/oracle@pdb2
SELECT COUNT(*) FROM SHANOJUSER.TB1;
5. Drop the restore point.
Note: It is important to drop guaranteed restore points once they have served their purpose. Otherwise, you will eventually run out of space in the FRA.
conn / as sysdba
ALTER SESSION SET CONTAINER = pdb2;
DROP RESTORE POINT pre_change;
6. Take a backup of the whole CDB.
rman target /
DELETE BACKUPSET;
BACKUP DATABASE PLUS ARCHIVELOG DELETE ALL INPUT;
Notes:
• You can perform PITR for a single PDB while keeping the CDB and the other PDBs up and running.
• It is easier to execute a PITR for a PDB when local undo is enabled than when shared undo is being used.
• You can perform PITR on a tablespace within a PDB, keeping the PDB and the other tablespaces available to users.
However, this process involves creating an auxiliary instance, which means a longer recovery time and more disk space.
• Flashback of a PDB is a more efficient recovery method than PITR, but it is not a full replacement for it.
Flashback is best used to recover to a point a short time in the past, whereas PITR is best for recovery to a point further back in time.