ASM & RMAN Interview Questions with Answers
1) What is the difference between backing up the current control file and backing up a control file copy?
Backing up the "current control file" backs up the control file that is currently open by the instance, whereas backing up a "control file copy" backs up a copy of the control file created earlier, either with the SQL command "ALTER DATABASE BACKUP CONTROLFILE TO ..." (historically run from SVRMGRL) or with the RMAN command "COPY CURRENT CONTROLFILE TO ...".
In other words, the control file copy is not the current control file. BACKUP CURRENT CONTROLFILE creates a backup set containing the current control file, and you do not have to supply a file name; BACKUP CONTROLFILE COPY creates a backup set from an existing copy of the control file, and you must supply its file name.
2) How much overhead is there in running the BACKUP VALIDATE DATABASE and RESTORE VALIDATE DATABASE commands to check for block corruption with RMAN?
Can I run these commands at any time?
BACKUP VALIDATE reads the datafiles to check that they can be backed up and are free of corruption, and RESTORE VALIDATE reads the existing backups to check that they are restorable. Neither command writes anything or changes the database, so the only overhead is the read I/O and they can be run at any time.
3) Is there a way to force RMAN to use backups once they are marked obsolete?
REPORT OBSOLETE is just a report; the backups are still there and still usable until you actually delete them (DELETE OBSOLETE).
4) Can I use the same snapshot controlfile to back up multiple databases (one after another) running on the same server?
The snapshot controlfile is only used temporarily, like a scratch file, but only one RMAN session can access it at a time, so sharing one snapshot controlfile would tend to serialize your backups.
5) Why does Oracle not keep RMAN information after recreating the controlfile?
Because the controlfile is recreated from scratch, there is nothing from which CREATE CONTROLFILE could "make up" the missing data. It is the same as dropping and recreating a table and then wondering why it is empty: recreating from scratch means the previous contents are naturally gone.
Using an RMAN recovery catalog is the suggested way to protect against this situation.
6) What is the advantage of using PIPE in RMAN backups? In what circumstances would one use PIPE to back up and restore?
The PIPE interface lets third parties (anyone, really) build an alternative front end to RMAN, because it permits any program that can connect to an Oracle instance to control RMAN programmatically.
7) How do you turn the debug feature on in RMAN?
run {
allocate channel c1 type disk;
debug on;
}
RMAN> list backup of database;
You will now see debug output for the command. You can turn debugging off at any time by issuing:
RMAN> debug off;
8) Assume I have a "FULL" backup of users01.dbf containing an EMPLOYEES table that occupies 1000 blocks of data.
If I truncate EMPLOYEES and then take an incremental level 1 backup of the USERS tablespace, will RMAN include the 1000 blocks that once contained data in the incremental backup?
No. The blocks themselves were not written to; the only changes made by the truncate were to the data dictionary (and segment/file metadata), so RMAN does not see those blocks as changed and will not include them.
9) Where should the catalog be created?
The recovery catalog used by RMAN should be created in a separate database, not in the target database.
The reason is that the catalog must be available when the target database is down or being restored; if it lived inside the target database, it would be unavailable, or lost, exactly when you need it.
8) How many times does Oracle ask before dropping a catalog?
Twice: once for the actual DROP CATALOG command and once more to confirm it.
9) What are the various reports available with RMAN?
The LIST commands show what is recorded in the repository, for example:
RMAN> list backup;
RMAN> list archivelog all;
and the REPORT commands analyse it against your retention policy and backup needs, for example REPORT OBSOLETE, REPORT NEED BACKUP, REPORT SCHEMA and REPORT UNRECOVERABLE.
10) What is the use of the snapshot controlfile in terms of RMAN backup?
RMAN uses the snapshot controlfile to get a read-consistent copy of the controlfile; it uses this for operations such as resynchronizing the catalog (otherwise the controlfile would be a 'moving target', constantly changing, and RMAN would either be blocked or block the database).
11) Can RMAN write to disk and tape in parallel? Is it possible?
RMAN cannot write to tape directly; you need a media manager for that. As for writing to disk and tape in parallel, not in a single backup command as far as I know; you would run two separate backups, or back up an existing disk backup to tape afterwards (BACKUP BACKUPSET) to get a similar result.
12) What is the difference between DELETE INPUT and DELETE ALL in a backup command?
Typically LOG_ARCHIVE_DEST_n points to more than one disk location where the archived logs are written. When RMAN backs up archived logs, it reads each log from one of those locations. With DELETE INPUT, only the copies that were actually read for the backup are deleted; with DELETE ALL INPUT, the copies in all enabled log_archive_dest_n locations are deleted.
DELETE ALL applies only to archived logs, for example:
RMAN> delete expired archivelog all;
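As a minimal sketch (assuming archived logs are written to two local destinations):
RMAN> backup archivelog all delete input;
deletes only the copies of the logs that were actually read for this backup, whereas
RMAN> backup archivelog all delete all input;
removes the logs from every enabled LOG_ARCHIVE_DEST_n location once the backup completes.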
13) Is it possible to restore a backup set (actually the backup pieces) from a different location than the one RMAN has recorded for them?
With 9.2 and earlier it is not possible. As a workaround you would create a link at the location where the backup was originally recorded, so that when restoring, RMAN thinks everything is where it was.
Starting in 10.1 it is possible to catalog the backup pieces in their new location into the controlfile and recovery catalog. This makes them available for restore by RMAN without creating the link.
14) What is the difference between REPORT OBSOLETE and REPORT OBSOLETE ORPHAN?
REPORT OBSOLETE lists backups that are no longer needed according to the user's retention policy, whereas REPORT OBSOLETE ORPHAN lists backups that are unusable because they belong to incarnations of the database that are not direct ancestors of the current incarnation.
15) How to increase the size of the redo logs
1. Add new log file groups with the new size: ALTER DATABASE ADD LOGFILE GROUP ...
2. Issue ALTER SYSTEM SWITCH LOGFILE until one of the new groups is CURRENT and the old groups are INACTIVE.
3. Drop the old log file groups: ALTER DATABASE DROP LOGFILE GROUP ... (and remove their files at the OS level).
A hedged worked example follows below.
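Assuming an instance with three existing groups that should be replaced by 500 MB groups (group numbers, sizes and paths are illustrative only):
SQL> alter database add logfile group 4 ('/u01/oradata/orcl/redo04.log') size 500m;
SQL> alter database add logfile group 5 ('/u01/oradata/orcl/redo05.log') size 500m;
SQL> alter database add logfile group 6 ('/u01/oradata/orcl/redo06.log') size 500m;
SQL> alter system switch logfile;
Repeat the switch and check V$LOG until each old group shows STATUS = 'INACTIVE', then:
SQL> alter database drop logfile group 1;
SQL> alter database drop logfile group 2;
SQL> alter database drop logfile group 3;
and finally remove the old redo files at the operating system level.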
16) What is the difference between ALTER DATABASE RECOVER and the SQL*Plus RECOVER command?
ALTER DATABASE RECOVER is useful when you want to control the recovery yourself step by step, whereas the SQL*Plus RECOVER command is useful when you prefer automated recovery.
What is the difference between the views V$BACKUP_SET and RC_BACKUP_SET with respect to RMAN?
V$BACKUP_SET is used to check backup details when no recovery catalog is used, i.e. the backup information is stored in the controlfile, whereas RC_BACKUP_SET is a recovery catalog view used when the catalog is the central repository of backup information.
17) Can I cancel a script from inside the script? How do I cancel a SELECT on a Windows client?
Use Ctrl+C.
18) How to find the number of Oracle instances running on a Windows machine
C:\> net start | find "OracleService"
19) How to create an init.ora from the spfile when the database is down?
CREATE PFILE does not need an open database, only a SYSDBA connection:
SQL> connect sys/oracle as sysdba
SQL> create pfile from spfile;
(the reverse, CREATE SPFILE FROM PFILE, works the same way while the instance is down).
20) When you shut down the database, how does Oracle maintain the user session, i.e. of SYSDBA?
You still have your dedicated server process, for example:
sys@ORA920> !ps -auxww | grep ora920
sys@ORA920> shutdown
sys@ORA920> !ps -auxww | grep ora920
Even after the shutdown the dedicated server process is still listed: when you connect as SYSDBA you start a dedicated server process, and that is where your session lives.
21) What is the ORA-00204 error? What will you do in that case?
ORA-00204 means a disk I/O failure was detected while reading the control file.
Basically you check whether the control file is available, whether its permissions are right, and whether the spfile/init.ora points to the right location. If all checks pass and you are still getting the error, overlay the corrupted control file with one of the multiplexed copies: say you have control01.ctl, control02.ctl and control03.ctl and the errors are on control03.ctl, then just copy control01.ctl over control03.ctl and you should be all set.
ALTER DATABASE BACKUP CONTROLFILE TO TRACE requires the database to be mounted; if it cannot be mounted, the only other options are to restore the control file from a backup or to copy a multiplexed control file over the bad one.
22) Why do we need SCOPE=BOTH clause?
BOTH indicates that the change is made in memory and in the server parameter file. The new setting takes effect immediately and persists after the database is shut down and started up again. If a server parameter file was used to start up the database, then BOTH is the default. If a parameter file was used to start up the database, then MEMORY is the default, as well as the only scope you can specify.
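For example (parameter names and values are illustrative):
SQL> alter system set db_file_multiblock_read_count = 32 scope=both;
changes the value immediately and records it in the spfile, while a static parameter can only be changed in the spfile and is picked up at the next restart:
SQL> alter system set processes = 500 scope=spfile;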
23) How to know the number of CPUs on Oracle
Log in as SYSDBA:
SQL> show parameter cpu_count
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------
cpu_count                            integer     2
24) What are the possible reasons for spfile corruption, and how do you recover?
The spfile should not become corrupt under normal circumstances; if it does, it is a bug or a failure of some component in your system, for example a file system error.
You can recover easily, however:
a) the alert log contains the non-default parameters from your last startup;
b) the spfile should be in your backups;
c) strings spfile$ORACLE_SID.ora > init$ORACLE_SID.ora, then edit the resulting file to clean it up.
25) How will you check whether flashback is enabled or not?
SQL> select flashback_on from v$database;
26) Revoking the CREATE TABLE privilege from a user gives ORA-01952. What is the issue, and what do you do in that case?
SQL> revoke create table from pay_payment_master;
ORA-01952: system privileges not granted to 'PAY_PAYMENT_MASTER'
This happens because the privilege was not granted to the user directly; it came through the CONNECT role. You cannot revoke from a user a privilege it only has via a role, and if you simply remove the role the user can no longer create a session (connect) to the database.
So the approach is to revoke the CONNECT role and then grant back the individual privileges the user still needs, excluding CREATE TABLE.
27) What kind of information is stored in UNDO segments?
Only the before-image of the data is stored in UNDO segments. If a transaction is rolled back, the information in UNDO is applied to restore the original data. UNDO is never multiplexed.
28) How to remove an Oracle service in a Windows environment?
You can add or remove the Oracle service using ORADIM, which is available in %ORACLE_HOME%\bin:
C:\> oradim -delete -sid <SID>
or
C:\> oradim -delete -srvc <service_name>
29) Why ORA-28000: the account is locked? What will you do in that case?
The Oracle 10g default profile locks an account after 10 failed password attempts, which produces ORA-28000: the account is locked.
One option is to raise or remove the limit on failed login attempts in the profile:
SQL> alter profile default limit failed_login_attempts unlimited;
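In practice the quicker fix is usually to unlock the affected account rather than change the profile; the user name here is only an example:
SQL> alter user scott account unlock;
SQL> select username, account_status from dba_users where username = 'SCOTT';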
30) How to reduce the physical reads shown in the statistics?
Increase the buffer cache. For example, with a 300 MB buffer cache one SQL statement showed 100 physical reads; after increasing the cache to 400 MB the same statement showed 0 physical reads because the blocks were already cached.
31) How many redo log groups are required for an Oracle database?
At least 2 redo log groups are required for an Oracle database to work normally.
32) My spfile is corrupt and now I cannot start my database running on my laptop. Is there a way to rebuild the spfile?
If you are on Unix:
$ cd $ORACLE_HOME/dbs
$ strings spfile$ORACLE_SID.ora > temp_pfile.ora
Edit temp_pfile.ora and clean up anything that looks wrong, then:
SQL> startup pfile=temp_pfile.ora
SQL> create spfile from pfile='temp_pfile.ora';
SQL> shutdown
SQL> startup
On Windows, you can try the same approach: extract the text of the spfile into a pfile (do not try this on a production database first; check it on a test database, it can be dangerous), clean it up and save it, start the Oracle service, and then use SQL*Plus from the command line to start the database with that pfile and recreate the spfile from it.
33) What is a fractured block? What happens when you restore a file containing a fractured block?
A fractured block is one whose header and footer are not consistent at a given SCN. In a user-managed backup, an operating system utility can back up a datafile at the same time that DBWR is updating the file, so the utility may read a block in a half-updated state: the first half of the copied block is new while the second half still contains older data. Such a block is fractured.
For non-RMAN backups, the ALTER TABLESPACE ... BEGIN BACKUP or ALTER DATABASE BEGIN BACKUP command is the solution to the fractured block problem: while a tablespace is in backup mode, the first change to a data block causes the database to log a copy of the entire block image, so that media recovery can reconstruct the block if it finds it was fractured.
If you restore a file containing a fractured block and Oracle reads that block, the block is treated as corrupt.
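A minimal user-managed (non-RMAN) backup sketch for a single tablespace, assuming a tablespace named USERS:
SQL> alter tablespace users begin backup;
(copy the datafiles of USERS with an operating system utility)
SQL> alter tablespace users end backup;
Any blocks changed while the tablespace is in backup mode are protected by the full block images written to redo, so a fractured copy can be repaired during recovery.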
34) You backed up the control file with ALTER DATABASE BACKUP CONTROLFILE TO TRACE and also with ALTER DATABASE BACKUP CONTROLFILE TO 'D:\Backup\control01.ctl'. What have you lost in each case?
Recreating the control file from the trace (CREATE CONTROLFILE script) loses all the RMAN backup information, whereas the binary control file backup (BACKUP CONTROLFILE TO 'file') retains all the backup information.
35) If a backup is taken after a SHUTDOWN ABORT, what kind of backup is that?
It is an inconsistent backup.
If you are in NOARCHIVELOG mode and need a consistent (cold) backup, bring the instance back up and shut it down cleanly first, for example:
SQL> startup force
SQL> shutdown immediate
and then take the backup.
36) What is split brain?
In a RAC environment the server nodes communicate with each other over the high-speed private interconnect network. A split-brain situation happens when all links of the private interconnect fail to respond, yet the instances are still up and running; each instance then thinks the other nodes/instances are dead and that it should take over ownership.
In a split-brain situation the instances independently access and modify the same blocks, and the database would end up with changes overwritten, which could lead to data corruption. To avoid this, various algorithms are implemented to handle the split-brain scenario.
In RAC, the IMR (Instance Membership Recovery) service is one of the algorithms used to detect and resolve the split-brain syndrome. When one instance fails to communicate with the other instances, or becomes inactive for any reason and is unable to issue the control file heartbeat, the split brain is detected and the detecting instance evicts the failed instance from the database. This process is called node eviction.
37) What does the #!/bin/ksh at the beginning of a shell script do? Why should it be there?
Ans: On the first line of an interpreter script, the "#!" is followed by the name of the program that should be used to interpret the contents of the file.
For instance, if the first line contains "#!/bin/ksh", then the contents of the file are executed as a Korn shell script.
38) What command is used to find the status of Oracle 10g Clusterware (CRS) and the various components it manages
(ONS, VIP, listener, instances, etc.)?
Ans: crsctl check crs shows the state of the Clusterware daemons, and crs_stat -t shows the status of the resources it manages (ONS, VIPs, listeners, instances and so on); ocrcheck, by contrast, checks the integrity of the OCR.
39) How would you find the interconnect IP address from any node within an Oracle 10g RAC configuration?
Use the oifcfg command, for example oifcfg getif to list the configured interfaces. Use the oifcfg -help command to display online help for OIFCFG. The elements of OIFCFG commands, some of which are optional depending on the command, are:
* nodename - name of the Oracle Clusterware node, as listed in the output of the olsnodes command
* if_name - name by which the interface is configured in the system
* subnet - subnet address of the interface
* if_type - type of interface: public or cluster_interconnect
40) What is the purpose of the voting disk in Oracle 10g Clusterware?
The voting disk records node membership information. Oracle Clusterware uses the voting disk to determine which instances are members of the cluster.
The voting disk must reside on a shared disk. For high availability, Oracle recommends a minimum of three voting disks.
If you configure a single voting disk, you should use external mirroring to provide redundancy.
You can have up to 32 voting disks in your cluster.
41) Data Guard Protection Modes:
In some situations, a business cannot afford to lose data at any cost.
In other situations, some applications require maximum database performance and can tolerate a potential loss of data.
Data Guard provides three distinct modes of data protection to satisfy these varied requirements:
*Maximum Protection—> This mode offers the highest level of data protection.
Data is synchronously transmitted to the standby database from the primary database and transactions are not committed on the primary database unless the redo data is available on at least one standby database configured in this mode.
If the last standby database configured in this mode becomes unavailable, processing stops on the primary database.
This mode ensures no-data-loss, even in the event of multiple failures.
*Maximum Availability—> This mode is similar to the maximum protection mode, including zero data loss.
However, if a standby database becomes unavailable (for example, because of network connectivity problems),
processing continues on the primary database.
When the fault is corrected, the standby database is automatically resynchronized with the primary database.
This mode achieves no-data-loss in the event of a single failure (e.g. network failure, primary site failure . . .)
*Maximum Performance—> This mode offers slightly less data protection on the primary database, but higher performance than maximum availability mode.
In this mode, as the primary database processes transactions, redo data is asynchronously shipped to the standby database.
The commit operation of the primary database does not wait for the standby database to acknowledge receipt of redo data before completing write operations on the primary database.
If any standby destination becomes unavailable, processing continues on the primary database and there is little effect on primary database performance.
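The mode is switched on the primary database and can be verified afterwards; a sketch (switching to maximum protection additionally requires the primary to be mounted rather than open):
SQL> alter database set standby database to maximize availability;
SQL> select protection_mode, protection_level from v$database;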
42) Connection hanging - what are the possibilities?
Possibilities for an Oracle connection hanging include:
External issues - the network being down, Kerberos security issues, SSO or a firewall issue can cause an Oracle connection to hang.
One way to test this is to set sqlnet.authentication_services=(none) in your sqlnet.ora file and retry connecting.
Listener not running - start by checking the listener (lsnrctl status) and diagnosing Oracle network connectivity in general.
No RAM - over-allocation of server resources, usually RAM, so that there is not enough memory to spawn another connection to Oracle.
Contention - it is not uncommon for an end-user session to "hang" while it is trying to grab a shared data resource that is held by another end user.
The end user often calls the help desk trying to understand why the transaction will not complete, and the Oracle professional must quickly identify the source of the contention.
43) What is Partition Pruning ?
Partition Pruning: Oracle optimizes SQL statements to mark the partitions or subpartitions that need to be accessed and eliminates (prunes) unnecessary partitions or subpartitions from access by those SQL statements. In other words, partition pruning is the skipping of unnecessary index and data partitions or subpartitions in a query.
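A small illustration with a made-up table (names, dates and partition bounds are illustrative only):
SQL> create table sales (sale_id number, sale_date date, amount number)
     partition by range (sale_date)
     (partition p2023 values less than (to_date('2024-01-01','YYYY-MM-DD')),
      partition p2024 values less than (to_date('2025-01-01','YYYY-MM-DD')));
SQL> select sum(amount) from sales where sale_date >= to_date('2024-06-01','YYYY-MM-DD');
Because the predicate is on the partitioning key, the optimizer only touches partition P2024, which the execution plan reports as PARTITION RANGE SINGLE instead of scanning every partition.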
44) FAN in RAC.
With Oracle RAC in place, database client applications can leverage a number of high availability features including:
Fast Connection Failover (FCF): Allows a client application to be immediately notified of a planned or unplanned database service outage by subscribing to Fast Application Notification (FAN) events.
Run-time Connection Load-Balancing: Uses the Oracle RAC Load Balancing Advisory events to distribute work appropriately across the cluster nodes and to quickly react to changes in cluster configuration, overworked nodes, or hangs.
Connection Affinity (11g recommended/required): Routes connections to the same database instance based on previous connections to an instance to limit performance impacts of switching between instances.
RAC supports web session and transaction-based affinity for different client scenarios.
45) Why an extra standby redo log group?
Determine the appropriate number of standby redo log file groups. Minimally, the configuration should have one more standby redo log file group than the number of online redo log file groups on the primary database:
(maximum number of logfiles for each thread + 1) * maximum number of threads
Using this equation reduces the likelihood that the primary instance's log writer (LGWR) process will be blocked because a standby redo log file cannot be allocated on the standby database. For example, if the primary database has 2 log files per thread and 2 threads, then 6 standby redo log file groups are needed on the standby database.
In other words: if you have groups #1 and #2 on the primary and only #1 and #2 on the standby, and LGWR on the primary has just finished #1, switched to #2, and now needs to switch back to #1 because #2 is full, the standby must catch up first; the primary cannot reuse #1 while the standby is still archiving its own #1. With an extra group #3 on the standby, the standby can start writing to #3 while its #1 is being archived, so the primary can reuse its #1 without delay.
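Applying the formula to the example above gives (2 + 1) * 2 = 6 standby groups. They are added on the standby with statements like the following (group number, thread, size and path are illustrative):
SQL> alter database add standby logfile thread 1 group 10 ('/u01/oradata/stby/srl10.log') size 200m;
and the result can be checked in V$STANDBY_LOG.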
46) How to take a voting disk backup?
In 10gR2 RAC the voting disk can be backed up online with dd, but in 11g you should not use dd online (use the Oracle-provided commands instead).
First, as the root user, stop Oracle Clusterware (with the crsctl stop crs command) on all nodes if you want to add or restore a voting disk.
Then determine the current voting disks by issuing the following command:
crsctl query css votedisk
and issue the dd or ocopy command to back up a voting disk, as appropriate.
Syntax for backing up voting disks:
On Linux or UNIX systems:
dd if=voting_disk_name of=backup_file_name
where voting_disk_name is the name of the active voting disk and backup_file_name is the name of the file to which we want to back up the voting disk contents.
On Windows systems, use the ocopy command:
ocopy voting_disk_name backup_file_name
47) What is the Oracle recommendation for backing up the voting disk?
Oracle recommends using the dd command to back up the voting disk with a minimum block size of 4KB.
48) How do you restore a voting disk?
To restore the backup of your voting disk, use dd on Linux and UNIX systems or ocopy on Windows systems.
On Linux or UNIX systems:
dd if=backup_file_name of=voting_disk_name
On Windows systems, use the ocopy command:
ocopy backup_file_name voting_disk_name
where backup_file_name is the name of the voting disk backup file and voting_disk_name is the name of the active voting disk.
49) How can we add and remove multiple voting disks?
If we have multiple voting disks, then we can remove the voting disks and add them back into our environment using the following commands,
where path is the complete path of the location where the voting disk resides:
crsctl delete css votedisk path
crsctl add css votedisk path
50) How do we stop Oracle Clusterware? When do we stop it?
Before making any modification to the voting disks, as the root user,
stop Oracle Clusterware using the crsctl stop crs command on all nodes.
51) How do we add voting disk?
To add a voting disk, issue the following command as the root user,
replacing the path variable with the fully qualified path name for the voting disk we want to add:
crsctl add css votedisk path -force
52) How do we move voting disks?
To move a voting disk, issue the following commands as the root user,
replacing the path variable with the fully qualified path name for the voting disk we want to move:
crsctl delete css votedisk path -force
crsctl add css votedisk path -force
53) How do we remove voting disks?
To remove a voting disk,
issue the following command as the root user, replacing the path variable with the fully qualified path name for the voting disk we want to remove:
crsctl delete css votedisk path -force
54) What should we do after modifying voting disks?
After modifying the voting disk,
restart Oracle Clusterware using the crsctl start crs command on all nodes, and verify the voting disk location using the following command:
crsctl query css votedisk
55) When can we use -force option?
If our cluster is down, then we can include the -force option to modify the voting disk configuration,
without interacting with active Oracle Clusterware daemons.
However, using the -force option while any cluster node is active may corrupt our configuration.
56) How to find the cluster interconnect IP address from the Oracle database?
The easiest way to find the cluster interconnect is to view the "hosts" file, which is located under /etc on UNIX and C:\WINDOWS\system32\drivers\etc on Windows.
The following are ways to find the cluster interconnect through the Oracle database:
1) Query X$KSXPIA
The following query provides the interconnect IP address registered with the Oracle database:
SQL> select IP_KSXPIA from x$ksxpia where PUB_KSXPIA = 'N';
IP_KSXPIA
----------------
192.168.10.11
This query should be run on all instances to find the private interconnect IP address used on their respective nodes.
2) Query GV$CLUSTER_INTERCONNECTS view
Querying GV$CLUSTER_INTERCONNECTS view lists the interconnect used by all the participating instances of the RAC database.
SQL> select INST_ID, IP_ADDRESS from GV$CLUSTER_INTERCONNECTS;
INST_ID IP_ADDRESS
---------- ----------------
1 192.168.10.11
2 192.168.10.12
57) How to identify the master node in RAC?
Grep the crsd log file:
# /u1/app/../crsd> grep MASTER crsd.log | tail -1
(or) the cssd log:
# .../cssd> grep -i "master node" ocssd.log | tail -1
You can also use the V$GES_RESOURCE view to identify the master node.
58) How to monitor block transfers over the interconnect between nodes in RAC?
The V$CACHE_TRANSFER and V$FILE_CACHE_TRANSFER views are used to examine RAC statistics.
The types of blocks that use the cluster interconnect in a RAC environment are monitored with the V$CACHE_TRANSFER series of views:
V$CACHE_TRANSFER: this view shows the types and classes of blocks that Oracle transfers over the cluster interconnect on a per-object basis.
The FORCED_READS and FORCED_WRITES columns can be used to determine the types of objects the RAC instances are sharing.
Values in the FORCED_WRITES column show how often a certain block type is transferred out of a local buffer cache because the current version was requested by another instance.
59) What is Global Cache Service monitoring?
Global Cache Service (GCS) monitoring estimates how heavily the GCS is used relative to buffer cache activity: divide the sum of the GCS requests (global cache gets + global cache converts + global cache cr blocks received + global cache current blocks received) by the number of logical reads (consistent gets + db block gets) for a given statistics collection interval.
A Global Cache Service request is made when a user attempts to access a buffer cache to read or modify a data block and the block is not in the local cache; a remote cache read, a disk read or a change of access privileges is the inevitable result.
These operations are all related to logical reads, and logical reads form a superset of the Global Cache Service operations.
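A hedged sketch of the inputs to that ratio, taken from V$SYSSTAT (the statistic names changed between releases; for example in 10g the 'global cache ...' statistics were renamed to 'gc cr blocks received' and 'gc current blocks received', so adjust the names to your version):
SQL> select name, value from v$sysstat
     where name in ('consistent gets', 'db block gets',
                    'gc cr blocks received', 'gc current blocks received');
Sum the received-block statistics (plus the gets/converts statistics if your release exposes them) and divide by the sum of 'consistent gets' and 'db block gets' over the collection interval.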
===========================================================
RMAN Interview Questions:
2. Difference between using recovery catalog and control file?
When new incarnation happens, the old backup information in control file will be lost. It will be preserved in recovery catalog.
In recovery catalog, we can store scripts.
Recovery catalog is central and can have information of many databases.
3. Can we use the same target database as the catalog?
No. The recovery catalog should not reside in the target database (the database to be backed up), because if the target is lost the catalog would be lost with it, and the catalog must remain available while the target database is being restored in the mounted state.
4. How do u know how much RMAN task has been completed?
By querying v$rman_status or v$session_longops
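A commonly used sketch against V$SESSION_LONGOPS:
SQL> select sid, serial#, opname, round(sofar/totalwork*100, 2) pct_done
     from v$session_longops
     where opname like 'RMAN%' and totalwork <> 0 and sofar <> totalwork;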
5. From where do the LIST and REPORT commands get their input?
From the RMAN repository, i.e. the target database control file, or the recovery catalog if one is configured.
6. Command to delete archived logs older than 7 days?
RMAN> delete archivelog all completed before 'sysdate-7';
7. How many days of backups does RMAN keep by default?
Retention is not time-based by default: the default retention policy is REDUNDANCY 1 (keep at least one backup of each file). A time-based policy has to be configured explicitly, e.g. CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 7 DAYS;
8. What is the use of the CROSSCHECK command in RMAN?
CROSSCHECK checks whether the backups and copies recorded in the RMAN repository still exist at the OS level; records whose files are missing are marked EXPIRED.
9. What are the differences between the CROSSCHECK and VALIDATE commands?
CROSSCHECK only checks that the recorded backup pieces still exist on disk or tape, whereas VALIDATE (BACKUP VALIDATE / RESTORE VALIDATE) actually reads the datafiles or backups and verifies that they are free of corruption and usable.
10. Which one is better, a differential (incremental) backup or a cumulative (incremental) backup?
A differential backup backs up all blocks changed after the most recent level 1 or level 0 incremental, so it is smaller and faster to take.
A cumulative backup backs up all blocks changed after the most recent level 0, so it is larger but makes recovery faster because fewer incrementals need to be applied. Which is better depends on whether backup time or recovery time matters more; see the sketch below.
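As a sketch of the two commands:
RMAN> backup incremental level 1 database;
takes a differential level 1 (the default), while
RMAN> backup incremental level 1 cumulative database;
takes a cumulative level 1.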
11. What is Level 0, Level 1 backup?
A level 0 incremental backup, which is the base for subsequent incremental backups, copies all blocks containing data, backing the datafile up into a backup set just as a full backup would. A level 1 incremental backup can be either of the following types:
A differential backup, which backs up all blocks changed after the most recent incremental backup at level 1 or 0
A cumulative backup, which backs up all blocks changed after the most recent incremental backup at level 0
12. Can we perform level 1 backup without level 0 backup?
If no level 0 backup is available, then the behavior depends upon the compatibility mode setting. If compatibility < 10.0.0, RMAN generates a level 0 backup of the file contents at the time of the backup. If compatibility is >= 10.0.0, RMAN copies all blocks changed since the file was created, and stores the results as a level 1 backup. In other words, the SCN at the time the incremental backup is taken is the file creation SCN.
13. Will RMAN put the database/tablespace/datafile in backup mode?
Nope.
14. What is the snapshot control file?
A temporary, read-consistent copy of the control file that RMAN creates when it needs a consistent view of it, for example when resynchronizing with the recovery catalog.
15. What is the difference between a backup set and a backup piece?
A backup set is the logical unit of an RMAN backup and a backup piece is the physical file; one backup set consists of one or more backup pieces.
16. Which RMAN command is used to create a standby database?
RMAN> duplicate target database for standby ...;
17. How to do cloning by using RMAN?
RMAN> duplicate target database …
18. You lose one datafile and the DB is running in ARCHIVELOG mode. You have a full database backup from a week/day ago but no backup of this (newly created) datafile. How do you restore/recover the file?
Create the datafile and then recover it:
SQL> alter database create datafile ‘…path..’ size n;
RMAN> recover datafile file_id;
19. What is obsolete backup & expired backup?
A status of "expired" means that the backup piece or backup set is not found in the backup destination.
A status of "obsolete" means the backup piece is still available, but it is no longer needed. The backup piece is no longer needed since RMAN has been configured to no longer need this piece after so many days have elapsed, or so many backups have been performed.
20. What is the difference between hot backup & RMAN backup?
For a user-managed hot backup, we have to put the database (or tablespace) in BEGIN BACKUP mode and then copy the files; RMAN never puts the database in backup mode.
21. How to put manual/user-managed backup in RMAN (recovery catalog)?
By using catalog command.
RMAN> CATALOG START WITH '/tmp/backup.ctl';
22. What are the new features in Oracle 11g RMAN?
Among others: active database duplication (DUPLICATE ... FROM ACTIVE DATABASE, no pre-existing backup needed), multisection backups of large datafiles (SECTION SIZE), improved block media recovery (RECOVER ... BLOCK), a configurable archived log deletion policy, virtual private catalogs, and the ZLIB backup compression algorithm.
23. What is the difference between an auxiliary channel and a maintenance channel?
An auxiliary channel is a connection to an auxiliary instance, used by operations such as DUPLICATE and tablespace point-in-time recovery, whereas a maintenance channel (ALLOCATE CHANNEL FOR MAINTENANCE) is allocated against the target to run maintenance commands such as CROSSCHECK and DELETE for a particular device type.
===========================================================
RMAN Question & Answers
What is RMAN and how do you configure it?
RMAN is an Oracle Database client that performs backup and recovery tasks on your databases and automates administration of your backup strategies.
It greatly simplifies the DBA's job by managing the backing up, restoring and recovering of the production database's files.
The tool integrates with sessions running on an Oracle database to perform a range of backup and recovery activities, including maintaining an RMAN repository of historical data about backups.
There is no additional installation required for this tool; it is installed by default with the Oracle database software.
The RMAN environment consists of the utilities and databases that play a role in backing up your data.
We can access RMAN through the command line or through Oracle Enterprise Manager.
2) Why to use RMAN?
RMAN gives you access to several backup and recovery techniques and features not available with user-managed backup and recovery. The most noteworthy are the following:
Automatic specification of files to include in a backup
Establishes the name and locations of all files to be backed up
Maintain backup repository
Backups are recorded in the control file, which is the main repository of RMAN metadata
Additionally, you can store this metadata in a recovery catalog
Incremental backups
Incremental backup stores only blocks changed since a previous backup
Thus, they provide more compact backups and faster recovery, thereby reducing the need to apply redo during datafile media recovery
Unused block compression:
In unused block compression, RMAN can skip data blocks that have never been used
Block media recovery
We can repair a datafile with only a small number of corrupt data blocks without taking it offline or restoring it from backup
Binary compression
A binary compression mechanism integrated into Oracle Database reduces the size of backups
Encrypted backups
RMAN uses backup encryption capabilities integrated into Oracle Database to store backup sets in an encrypted format
Corrupt block detection
RMAN checks for the block corruption before taking its backup
3) How RMAN works?
RMAN backup and recovery operation for a target database are managed by RMAN client
RMAN uses the target database control file to gather metadata about the target database and to store information about its own operations
The RMAN client itself does not perform backup, restore, or recovery operations
When you connect the RMAN client to a target database, RMAN allocates server sessions on the target instance and directs them to perform the operations
The work of backup and recovery is performed by server sessions running on the target database
A channel establishes a connection from the RMAN client to a target or auxiliary database instance by starting a server session on the instance
The channel reads data into memory, processes it, and writes it to the output device
When you take a database backup using RMAN, you need to connect to the target database using RMAN Client
The RMAN client can use Oracle Net to connect to a target database, so it can be located on any host that is connected to the target host through Oracle Net
For backup you need to allocate explicit or implicit channel to the target database
An RMAN channel represents one stream of data to a device, and corresponds to one database server session.
These sessions dynamically collect information about the files from the target database control file before taking the backup or while restoring.
For example, if you issue 'BACKUP DATABASE' from RMAN, it first gets all the datafile information from the controlfile.
Then it divides the datafiles among the allocated channels (roughly equal amounts of work based on datafile size).
Then it takes the backup in 2 steps:
Step1:
The channel will read all the Blocks of the entire datafile to find out all the formatted blocks to backup
Note:
RMAN does not back up unformatted (never-used) blocks.
Step2:
In the second step it takes back up of the formatted blocks
Example:
This is the biggest advantage of using RMAN, as it only backs up the required blocks.
Say a 100 MB datafile contains only 10 MB of useful data and the remaining 90 MB is free; RMAN will back up only those 10 MB.
4) What O/S and oracle user privilege required using RMAN?
RMAN always connects to the target or auxiliary database using the SYSDBA privilege.
Its connections to a database are specified and authenticated in the same way as SQL*Plus connections to a database.
The O/S user should be part of the DBA group.
For remote connections it needs password file authentication, so the target database should have the initialization parameter REMOTE_LOGIN_PASSWORDFILE set to EXCLUSIVE or SHARED.
5) RMAN terminology:
A target database:
An Oracle database to which RMAN is connected with the TARGET keyword
A target database is a database on which RMAN is performing backup and recovery operations
RMAN always maintains metadata about its operations on a database in the control file of the database
A recovery Catalog:
A separate database schema used to record RMAN activity against one or more target databases
A recovery catalog preserves RMAN repository metadata if the control file is lost, making it much easier to restore and recover following the loss of the control file
The database may overwrite older records in the control file, but RMAN maintains records forever in the catalog unless deleted by the user
Backup sets:
RMAN can store backup data in a logical structure called a backup set, which is the smallest unit of an RMAN backup
One backup set contains one or more datafiles, a section of a datafile, or archived logs.
Backup Piece:
A backup set contains one or more binary files in an RMAN-specific format
This file is known as a backup piece
Each backup piece is a single output file
The size of a backup piece can be restricted; if the size is not restricted, the backup set will comprise one backup piece
Backup piece size should be restricted to no larger than the maximum file size that your filesystem will support
Image copies:
An image copy is a copy of a single file (datafile, archivelog, or controlfile)
It is very similar to an O/S copy of the file
It is not a backupset or a backup piece
No compression is performed
Snapshot Controlfile:
When RMAN needs to resynchronize from a read-consistent version of the control file, it creates a temporary snapshot control file
The default name for the snapshot control file is port-specific
Database Incarnation:
Whenever you perform incomplete recovery or perform recovery using a backup control file, you must reset the online redo logs when you open the database
The new version of the reset database is called a new incarnation
The reset database command directs RMAN to create a new database incarnation record in the recovery catalog
This new incarnation record indicates the current incarnation
6) What is RMAN Configuration and how to configure it?
The RMAN backup and recovery environment is preconfigured for each target database
The configuration is persistent and applies to all subsequent operations on this target database, even if you exit and restart RMAN
RMAN configured settings can specify backup devices, configure connections to backup devices (channels), policies affecting backup strategy, the encryption algorithm, the snapshot controlfile location, and more.
By default there are few default configuration are set when you login to RMAN
You can customize them as per your requirement
Any time you can check the current setting by using the "Show all” command
CONFIGURE command is used to create persistent settings in the RMAN environment, which apply to all subsequent operations, even if you exit and restart RMAN
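A few typical persistent settings, as a sketch (values and paths are illustrative):
RMAN> configure retention policy to redundancy 2;
RMAN> configure controlfile autobackup on;
RMAN> configure device type disk parallelism 2 backup type to backupset;
RMAN> configure channel device type disk format '/backup/orcl/%U';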
7) How to check RMAN configuration?
RMAN>Show all;
8) How to reset to the default configuration?
A single setting can be reset with CONFIGURE ... CLEAR in RMAN. To reset all persistent settings to their defaults, connect to the target database from SQL*Plus as SYSDBA and run:
SQL> execute dbms_backup_restore.resetConfig;
RMAN Catalog Database
9) What is Catalog database and How to configure it?
This is a separate database which contains catalog schema
You can use the same target database as the catalog database but it’s not at all recommended
10) How Many catalog database I can have?
You can have multiple catalog databases for the same target database, but an RMAN session can connect to only one catalog database at a time. It is not recommended to have multiple catalog databases.
11) Is this mandatory to use catalog database?
No! It’s an optional one
12) What is the advantage of the catalog database?
The catalog database is secondary storage for the backup metadata.
It is very useful if you lose the current controlfile, because all the backup information is also in the catalog schema.
Secondly, older backup information is aged out of the controlfile depending on CONTROL_FILE_RECORD_KEEP_TIME, whereas the RMAN catalog database maintains the full history.
13) What is the difference between catalog database & catalog schema?
Catalog database is like any other database which contains the RMAN catalog user's schema
14) What happens if the catalog database is lost?
Since the catalog database is optional, there is no direct effect on the target database.
Create a new catalog database and register the target database with it; all the backup information in the target database's current controlfile will be uploaded to the catalog schema.
Any backup information that has already aged out of the target controlfile has to be cataloged manually (CATALOG those backup pieces).
RMAN backup:
15) What are the database files that RMAN can back up?
RMAN can back up datafiles, controlfiles (including the standby database controlfile), archived logs and the spfile.
16) What are the database files that RMAN cannot back up?
RMAN cannot back up the pfile, online redo logs, network configuration files, password files, external tables or the contents of the Oracle home.
17) Can I have archivelogs and datafile backup in a single backupset?
No. We cannot put datafiles and archived logs in the same backup set.
18) Can I have datafiles and contolfile backup in a single backup set?
Yes
If controlfile autobackup is not ON, RMAN takes a backup of the controlfile along with datafile 1 whenever you back up the database or the SYSTEM tablespace.
19) Can I regulate the size of backup piece and backup set?
Yes!
You can set max size of the backupset as well as the backup piece
By default one RMAN channel creates a single backupset with one backup piece in it
You can use the MAXPIECESIZE channel parameter to set limits on the size of backup pieces
You can also use the MAXSETSIZE parameter on the BACKUP and CONFIGURE commands to set a limit for the size of backup sets
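For example (sizes are illustrative):
RMAN> configure channel device type disk maxpiecesize 2g;
RMAN> configure maxsetsize to 10g;
or for a single run:
RMAN> backup database maxsetsize 10g;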
20) What is the difference between backup set backup and Image copy backup?
A backup set is an RMAN-specific proprietary format, whereas an image copy is a bit-for-bit copy of a file
By default, RMAN creates backup sets
21) What is RMAN consistent backup and inconsistent backup?
A consistent backup occurs when the database is in a consistent state
That means a backup of the database taken after a SHUTDOWN IMMEDIATE, SHUTDOWN NORMAL or SHUTDOWN TRANSACTIONAL. If the database is shut down with ABORT, the backup is not consistent.
A backup taken while the database is up and running is called an inconsistent backup.
When a database is restored from an inconsistent backup, Oracle must perform media recovery before the database can be opened, applying any pending changes from the redo logs
You can not take inconsistent backup when the database is in Noarchivelog mode
22) Can I take an RMAN backup when the database is down?
No!
You can take an RMAN backup only when the target database is OPEN or MOUNTED, because RMAN keeps its backup metadata in the controlfile, and the controlfile is only accessible in mount or open mode.
23) Do I need to place the database in begin backup mode while taking RMAN inconsistent backup?
RMAN does not require extra logging or backup mode because it knows the format of data blocks
RMAN is guaranteed not to back up fractured blocks
No extra redo is generated during RMAN backup
24) Can I compress RMAN backups?
RMAN supports binary compression of backup sets. The supported algorithms are BZIP2 (the default) and ZLIB. It is not recommended to compress an RMAN backup with any other OS or third-party utility on top of that.
Note: RMAN compressed backup with BZIP2 gives great compression but is CPU intensive. Using ZLIB compression requires the Oracle Database 11g Advanced Compression Option and is only supported with an 11g database; the feature is not backward compatible with 10g databases.
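A short sketch (the ZLIB line assumes an 11g database with the Advanced Compression Option, as noted above):
RMAN> backup as compressed backupset database;
RMAN> configure compression algorithm 'ZLIB';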
25) Can I encrypt RMAN backups?
RMAN supports backup encryption for backup sets. You can use wallet-based transparent encryption, password-based encryption, or both. Use the CONFIGURE ENCRYPTION command to configure persistent transparent encryption, and the SET ENCRYPTION command at the RMAN session level to specify password-based encryption.
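For example (the password is a placeholder):
RMAN> configure encryption for database on;
RMAN> set encryption on identified by MyBackupPwd only;
RMAN> backup database;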
26) Can RMAN back up to tape?
Yes, RMAN can be used for tape backups, but it cannot write directly to tape itself; you need third-party Media Management software installed.
Oracle has published an API specification to which Media Management Vendors who are members of Oracle's Backup Solutions Partner program have access; the vendors (MMVs) write an interface library that the Oracle server uses to write to and read from tape.
Starting with Oracle 10g R2, Oracle also has its own media management software for database backup to tape, called Oracle Secure Backup (OSB).
27) How does RMAN interact with the media manager?
Before performing a backup or restore to a media manager, you must allocate one or more channels (or configure default channels) for use with the media manager; these handle the communication with it.
RMAN does not issue specific commands to load, label or unload tapes. When backing up, RMAN gives the media manager a stream of bytes and associates a unique name with this stream; when RMAN needs to restore the backup, it asks the media manager to retrieve the byte stream. All details of how and where that stream is stored are handled entirely by the media manager.
28) What is a proxy copy backup to tape?
Proxy copy is functionality, supported by some media managers, in which the media manager handles the entire data movement between the datafiles and the backup devices.
Such products may use technologies such as high-speed connections between storage and media subsystems to reduce the load on the primary database server.
RMAN provides a list of the files requiring backup or restore to the media manager, which in turn makes all decisions regarding how and when to move the data.
29) What is Oracle Secure Backup?
Oracle Secure Backup is a media manager provided by Oracle that provides reliable and secure data protection through file system backup to tape.
All major tape drives and tape libraries in SAN, Gigabit Ethernet and SCSI environments are supported.
30) Can I restore or duplicate my previous version database using a later version of Oracle?
For example, is it possible to restore a 9i backup while using the 10g executables?
It is possible to use the 10.2 RMAN executable to restore a 9.2 database (same for 11.2 to 11.1 or 11.1 to 10.2, etc) even if the restored datafiles will be stored in ASM
RMAN is configured so that a higher release is able to restore a lower release, but it is strongly suggested you use only the same version
31) Can I restore or duplicate between two different patchset levels?
As with different Oracle versions, you can also restore between two different patchset levels; depending on the direction, you may need to open the database with:
SQL> alter database open resetlogs upgrade;
or
SQL> alter database open resetlogs downgrade;
32) Can I restore or duplicate between two different versions of the same operating system?
For example, can I restore my 9.2.0.1.0 RMAN backup taken against a host running Solaris 9 to a different machine where 9.2.0.1.0 is installed but where that host is running Solaris 10?
If the same Oracle Server installation CDs (media pack) can be used to install 9.2.0.1.0 on Solaris 9 and Solaris 10, this type of restore is supportable
33) Is it possible to restore or duplicate when the bit level (32-bit or 64-bit) of Oracle does not match?
For example, is it possible to restore or duplicate a 9.2 64-bit database to a 9.2 32-bit installation?
It is preferable to keep the same bit version when performing a restore/recovery
However, excluding the use of duplicate command, the use of the same operating system platform should allow for a restore/recovery between bit levels (32 bit or 64 bit) of Oracle
Note, this may be specific to the particular operating system and any problems with this should be reported to Oracle Support
If you will be running the 64-bit database against the 32-bit binary files or vice versa, after the recovery has ended the database bit version must be converted using utlirp.sql
If you do not run utlirp.sql you will see errors including but not limited to:
ORA-06553: PLS-801: INTERNAL ERROR [56319]
34) Can I restore or duplicate my RMAN backup between two different platforms such as Solaris to Linux?
In general, you cannot restore or duplicate between two different platforms
35) What are the corruption types?
Datafile block corruption - physical/logical
Table/index inconsistency
Extents inconsistencies
Data dictionary inconsistencies
Scenarios:
Goal: How to identify all the corrupted segments in the database reported by RMAN?
Solution:
Step 1: Identify the corrupt blocks (Datafile Block Corruption - Intra block corruption)
RMAN> backup validate check logical database;
To make it faster, it can be configured to use PARALLELISM with multiple channels:
RMAN> run {
allocate channel d1 type disk;
allocate channel d2 type disk;
allocate channel d3 type disk;
allocate channel d4 type disk;
backup validate check logical database;
}
Step2: Using the view v$database_block_corruption:
SQL> select * from v$database_block_corruption;
FILE#  BLOCK#  BLOCKS  CORRUPTION_CHANGE#  CORRUPTION_TYPE
-----  ------  ------  ------------------  ---------------
    6      10       1       8183236781662  LOGICAL
    6      42       1                   0  FRACTURED
    6      34       2                   0  CHECKSUM
    6      50       1       8183236781952  LOGICAL
    6      26       4                   0  FRACTURED
5 rows selected.
Datafile Block Corruption - intra-block corruption
This refers to intra-block corruptions that may cause different errors such as ORA-1578, ORA-8103, ORA-1410, ORA-600 etc.
Oracle classifies corruptions as physical and logical.
To identify both physical and logical block corruption, use the "CHECK LOGICAL" option; it checks the complete database for both kinds of corruption without actually taking a backup.
Solution1:
$ rman target /
RMAN> backup check logical validate database;
$ rman target /
RMAN> backup check logical database;
Solution 2:
Check the view V$DATABASE_BLOCK_CORRUPTION to identify the block corruption detected by RMAN.
Solution 3: DBVerify - identify datafile block corruption
DBVERIFY identifies physical and logical intra-block corruption by default. It cannot be run against the whole database in a single command, but it does not need a database connection either:
dbv file=<datafile_name> blocksize=<block_size>
RMAN Vs DBVerify - Datafile Intra Block Corruption
When the logical option is used by RMAN, it does exactly the same checks as DBV does for intra block corruption.
RMAN can be run with PARALLELISM using multiple channels making it faster than DBV which can not be run in parallel in a single command
DBV checks for empty blocks. In 10g RMAN may not check blocks in free extents when Locally Managed Tablespaces are used. In 11g RMAN checks for both free and used extents.
Both DBV and RMAN (11g) can check for a range of blocks. RMAN: VALIDATE DATAFILE 1 BLOCK 10 to 100;. DBV: start=10 end=100
RMAN keeps corruption information in the control file (v$database_block_corruption, v$backup_corruption). DBV does not.
RMAN may not report the corruption details like what is exactly corrupted in a block reported as a LOGICAL corrupted block. DBV reports the corruption details in the screen or in a log file.
DBV can scan blocks with a higher SCN than a given SCN.
DBV does not need a connection to the database.
Identify TABLE / INDEX inconsistency
A table/index inconsistency is when an entry in the table does not exist in the index or vice versa. The common errors are ORA-8102, ORA-600 [kdsgrp1] and ORA-1499 from "analyze validate structure cascade".
The tool to identify table/index inconsistencies is the ANALYZE command:
SQL> analyze table <table_name> validate structure cascade;
1) What is the difference between to back up the current control file and to backup up control file copy?
If you backup “current control file” you backup control file which is currently open by an instance where as If you backup “controlfile file copy" you backup the copy of control file which is created either with SVRMGRL command "alter system backup controlfile to .." or with RMAN command "copy current controlfile to ...".
In the other words, the control file copy is not current controlfile backup current controlfile creates a BACKUPSET containing controlfile.
You don't have to give the FILENAME where as backup controlfile copy creates a BACKUPSET from a copy of controlfile.
You have to give the FILENAME.
2) How much of overhead in running BACKUP VALIDATE DATABASE and RESTORE VALIDATE DATABASE commands to check for block corruptions using RMAN?
Can I run these commands anytime?
Backup validate works against the backups not against the live database so no impact on the live database, same for restore validate they do not impact the real thing (it is reading the files there only).
3) Is there a way to force rman to use these obsolete backups or once it is marked obsolete?
As per my understanding it is just a report, they are still there until you delete them.
4) Can I use the same snapshot controlfile to backup multiple databases(one after another) running on the same server?
This file is only use temporarily like a scratch file. Only one rman session can access the snapshot controlfile at any time so this would tend to serialize your backups if you do that.
5) Why does not oracle keep RMAN info after recreating the controlfile?
Creating the new controlfile from scratch how do you expect the create controlfile to "make up" the missing data?
that would be like saying similarly we have drop and recreated my table and now it is empty similarly here recreating from the scratch means the contents there will be naturally will be gone.
Use the rman catalog to deal this situation. It is just a suggestion.
6) What is the advantage of using PIPE in rman backups? In what circumstances one would use PIPE to backup and restore?
It lets 3rd parties (anyone really) build an alternative interface to RMAN as it permits anyone
that can connect to an Oracle instance to control RMAN programmatically.
7) How To turn Debug Feature on in rman?
run {
allocate channel c1 type disk;
debug on;
}
rman>list backup of database;
now you will see a output
You can always turn debug off by issuing
rman>debug off;
8) Assuming I have a "FULL" backup of users01.dbf containing employees table that contains 1000 blocks of data.
If I truncated employees table and then an incremental level 1 backup of user’s tablespace is taken, will RMAN include 1000 blocks that once contained data in the incremental backup?
The blocks were not written to the only changes made by the truncate was to the data dictionary (or file header) so no, it won't see them as changed blocks since they were not changed.
9)Where should the catalog be created?
The recovery catalog to be used by Rman should be created in a separate database other than the target database.
The reason is that the target database will be shutdown while datafiles are restored.
8)How many times does oracle ask before dropping a catalog?
The default is two times one for the actual command, the other for confirmation.
9) What are the various reports available with RMAN?
rman>list backup;
rman> list archive;
10) What is the use of snapshot controlfile in terms of RMAN backup?
Rman uses the snapshot controlfile as a way to get a read consistent copy of the controlfile,
it uses this to do things like RESYNC the catalog (else the controlfile is a ‘moving target’, constantly changing and Rman would get blocked and block the database)
11) Can RMAN write to disk and tape Parallel? Is it possible?
Rman currently won't do tape directly, y
ou need a media manager for that, regarding disk and tape parallel not as far as I know, you would run two backups separately (not sure).
May be trying to maintain duplicate like that could get the desired.
12) What is the difference between DELETE INPUT and DELETE ALL command in backup?
Generally speaking, LOG_ARCHIVE_DEST_n points to several disk locations where we archive the files.
When RMAN backs up archived logs it reads them from one of those locations. With DELETE INPUT, only the copies in the location that was actually backed up are deleted; with DELETE ALL, the archived logs are deleted from all enabled log_archive_dest_n locations.
DELETE ALL applies only to archived logs.
A related maintenance command: RMAN> delete expired archivelog all;
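A minimal sketch of both variants:
RMAN> backup archivelog all delete input;
RMAN> backup archivelog all delete all input;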
13) Is it possible to restore a backupset (actually backup pieces) from a different location from where RMAN has recorded them to be?
With 9.2 and earlier it is not possible to restore a backupset (actually backup pieces) from a
different location to where RMAN has recorded them to be. As a workaround you would have to create a link using the location of where the backup was originally located.
Then when restoring, RMAN will think everything is the same as it was.
Starting in 10.1 it is possible to catalog the backup pieces in their new location into the
controlfile and recovery catalog. This means they are available for restoration by RMAN without creating the link.
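A hedged sketch of re-cataloging relocated backup pieces in 10.1 and later (the paths are only illustrations):
RMAN> catalog backuppiece '/new/location/backup_piece_01.bkp';
RMAN> catalog start with '/new/location/';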
14) What is the difference between REPORT OBSOLETE and REPORT OBSOLETE ORPHAN?
REPORT OBSOLETE reports backups that are no longer needed according to the user's retention policy, whereas REPORT OBSOLETE ORPHAN reports backups that are unusable because they belong to incarnations of the database that are not direct ancestors of the current incarnation.
15) How to Increase Size of Redo Log
1. Add new log file groups with the new size:
ALTER DATABASE ADD LOGFILE GROUP ... SIZE ...;
2. Issue ALTER SYSTEM SWITCH LOGFILE until one of the new groups is CURRENT and the old groups are INACTIVE.
3. Now you can drop the old log file groups:
ALTER DATABASE DROP LOGFILE GROUP ...;
16) What is the difference between ALTER DATABASE RECOVER and the SQL*Plus RECOVER command?
ALTER DATABASE RECOVER is useful when you, as a user, want to control the recovery, whereas the SQL*Plus RECOVER command is useful when you prefer automated recovery.
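A minimal sketch of the two styles (assuming the database is mounted and media recovery is needed):
User-controlled, where you feed or cancel each step yourself:
SQL> alter database recover database until cancel;
SQL> alter database recover cancel;
Automated, where SQL*Plus applies the suggested logs for you:
SQL> set autorecovery on
SQL> recover database;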
What is the difference between the V$BACKUP_SET and RC_BACKUP_SET views with respect to RMAN?
V$BACKUP_SET is used to check backup details when we are not using a recovery catalog, that is, when the backup information is stored only in the control file, whereas RC_BACKUP_SET is used when a catalog is the central repository of backup information.
17) Can I cancel a script from inside the script? How do I cancel a SELECT on a Windows client?
Use Ctrl-C.
18) How to Find the Number of Oracle Instances Running on Windows Machine
C:\> net start | find "OracleService"
19) How to create an init.ora from the spfile when the database is down?
CREATE PFILE FROM SPFILE works even against an idle (down) instance, so you can do it the same way as usual:
SQL> connect sys/oracle as sysdba
SQL> shutdown;
SQL> create pfile from spfile;
(and SQL> create spfile from pfile; works the same way in the other direction)
20) When you shut down the database, how does Oracle maintain the user session, i.e. of SYSDBA?
You still have your dedicated server process. For example, on Unix:
sys@ORA920> !ps -auxww | grep ora920
sys@ORA920> shutdown
sys@ORA920> !ps -auxww | grep ora920
You can see that you still have your dedicated server. When you connect as SYSDBA you fire up a dedicated server process, and that is where your session lives.
21) What is the ORA-00204 error? What will you do in that case?
A disk I/O failure was detected on reading the control file.
Basically, check whether the control file is available, whether its permissions are right, and whether the spfile/init.ora points to the right location. If all checks pass and you still get the error, then overlay a multiplexed control file copy on the corrupted one.
Say you have three control files control01.ctl, control02.ctl and control03.ctl and you are getting errors on control03.ctl: just copy control01.ctl over control03.ctl and you should be all set.
ALTER DATABASE BACKUP CONTROLFILE TO TRACE requires the database to be mounted; if it is not mounted, the only other options are to restore the control file from a backup or to copy a multiplexed control file over the bad one.
22) Why do we need SCOPE=BOTH clause?
BOTH indicates that the change is made in memory and in the server parameter file. The new setting takes effect immediately and persists after the database is shut down and started up again. If a server parameter file was used to start up the database, then BOTH is the default. If a parameter file was used to start up the database, then MEMORY is the default, as well as the only scope you can specify.
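For example (the parameter and value are only an illustration):
SQL> alter system set db_file_multiblock_read_count=32 scope=both;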
23) How to know Number of CPUs on Oracle
Login as SYSDBA
SQL>show parameter cpu_count
NAME TYPE VALUE
cpu_count integer 2
24) Could you please tell me what the possible reasons for spfile corruption are, and how to recover?
It should not become corrupt under normal circumstances; if it did, it would be a bug or a failure of some component in your system, for example a file system error.
You can easily recover, however:
a) your alert log has the non-default parameters in it from your last restart;
b) the spfile should be in your backups;
c) strings spfile.ora > init$ORACLE_SID.ora, and then editing the resulting file to clean it up, is another option.
25) How you will check flashback is enabled or not?
Select flashback_on from v$database;
26) Revoking the CREATE TABLE privilege from a user gives ORA-01952. What is the issue? What do you do in that case?
SQL> revoke create table from Pay_payment_master;
ORA-01952: system privileges not granted to 'PAY_PAYMENT_MASTER'
This is because the privilege was not granted to this user directly; it came through the "CONNECT" role.
If you remove the CONNECT role from the user, the user will no longer be able to create a session (connect) to the database.
So basically you have to revoke the CONNECT role and then directly grant the user the privileges it still needs, other than CREATE TABLE.
27) What kind of information is stored in UNDO segments?
Only the before image of the data is stored in UNDO segments. If a transaction is rolled back, the information from UNDO is applied to restore the original data. UNDO is never multiplexed.
28) How to Remove an Oracle Service in a Windows environment?
We can add or remove an Oracle service using oradim, which is available in ORACLE_HOME\bin:
C:\> oradim -delete -sid <sid>
or
C:\> oradim -delete -srvc <service_name>
29) Why ORA-28000: the account is locked? What will you do in that case?
The Oracle 10g default is to lock an account after 10 bad password attempts, giving ORA-28000: the account is locked.
In that case, one of the solutions is to increase the default limit on failed login attempts:
SQL> alter profile default limit FAILED_LOGIN_ATTEMPTS unlimited;
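In most cases you would also simply unlock the affected account (the username here is illustrative):
SQL> alter user scott account unlock;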
30) How to Reduce Physical Reads (as reported in the statistics)?
You need to increase the buffer cache.
Consider a situation where the buffer cache of the database is 300MB and one SQL statement showed 100 physical reads; after increasing the buffer cache to 400MB, the same SQL showed 0 physical reads.
31) How many redo log groups are required for an Oracle DB?
At least 2 redo log groups are required for an Oracle database to work normally.
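To check the current groups, members and sizes:
SQL> select group#, thread#, members, bytes/1024/1024 mb, status from v$log;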
32) My spfile is corrupt and now I cannot start my database running on my laptop. Is there a way to build the spfile again?
If you are on Unix:
$ cd $ORACLE_HOME/dbs
$ strings spfile$ORACLE_SID.ora > temp_pfile.ora
Edit temp_pfile.ora and clean up anything that looks "wrong" in it, then:
SQL> startup pfile=temp_pfile.ora
SQL> create spfile from pfile;
SQL> shutdown
SQL> startup
On Windows, you can try the same approach: edit a text copy of the spfile [do not try this on the prod DB first; check it on a test DB, as it can be dangerous], create a pfile from it and save it,
then start the Oracle service, use SQL*Plus to start the database with that pfile, and recreate the spfile.
33) What is a fractured block? What happens when you restore a file containing fractured block?
A block in which the header and footer are not consistent at a given SCN.
In a user-managed backup, an operating system utility can back up a datafile at the same time that DBWR is updating the file.
It is possible for the operating system utility to read a block in a half-updated state, so that the block that is copied to the backup media is updated in its first half,
while the second half contains older data. In this case, the block is fractured.
For non-RMAN backups, the ALTER TABLESPACE ... BEGIN BACKUP or ALTER DATABASE BEGIN BACKUP command is the solution for the fractured block problem.
When a tablespace is in backup mode, and a change is made to a data block, the database logs a copy of the entire block image before the change so that the database can reconstruct this block if media recovery finds that this block was fractured.
The block that the operating system reads can be split, that is, the top of the block is written at one point in time while the bottom of the block is written at another point in time.
If you restore a file containing a fractured block and Oracle reads the block, then the block is considered corrupt.
34) You recreated the control file with ALTER DATABASE BACKUP CONTROLFILE TO TRACE. Compared with ALTER DATABASE BACKUP CONTROLFILE TO 'location', what have you lost in that case?
You lose all of the RMAN backup information recorded in the control file when you recreate it from the trace (text) backup, whereas ALTER DATABASE BACKUP CONTROLFILE TO 'D:\Backup\control01.ctl' takes a binary copy.
All backup information is retained when you restore from that binary control file backup.
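For reference, a minimal sketch of both forms of control file backup (the paths are only illustrations):
SQL> alter database backup controlfile to trace as '/tmp/create_cf.sql';
SQL> alter database backup controlfile to '/backup/control_bkp.ctl' reuse;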
35) If a backup is issued after a "shutdown abort" command, what kind of backup is that?
It is an inconsistent backup.
If you are in NOARCHIVELOG mode, make sure the database was shut down cleanly with SHUTDOWN IMMEDIATE before taking the backup. If the instance was aborted, one option is:
startup force;
followed by
shutdown immediate;
36) What is split brain?
In a RAC environment, server nodes communicate with each other using the high-speed private interconnect network. A split brain situation happens when all the links of the private interconnect fail to respond to each other but the instances are still up and running, so each instance thinks that the other nodes/instances are dead and that it should take over ownership.
In a split brain situation, instances independently access the data and modify the same blocks, and the database would end up with changes overwritten, which could lead to data corruption. To avoid this, various algorithms are implemented to handle the split brain scenario.
In RAC, the IMR (Instance Membership Recovery) service is one of the efficient algorithms used to detect and resolve the split-brain syndrome. When one instance fails to communicate with the other instances, or when one instance becomes inactive for any reason and is unable to issue the control file heartbeat, the split brain is detected and the detecting instance will evict the failed instance from the database. This process is called node eviction.
37) What does the #!/bin/ksh at the beginning of a shell script do? Why should it be there?
Ans: On the first line of an interpreter script, the "#!" is followed by the name of the program which should be used to interpret the contents of the file.
For instance, if the first line contains "#! /bin/ksh", then the contents of the file are executed as a Korn shell script.
38) What command is used to find the status of Oracle 10g Clusterware (CRS) and the various components it manages
(ONS, VIP, listener, instances, etc.)?
Ans: $ crs_stat -t (and crsctl check crs to check the Clusterware daemons themselves)
39) How would you find the interconnect IP address from any node within an Oracle 10g RAC configuration?
Using the oifcfg command (for example, oifcfg getif lists each configured interface with its subnet and whether it is public or cluster_interconnect).
Use the oifcfg -help command to display online help for OIFCFG. The elements of OIFCFG commands, some of which are
optional depending on the command, are:
*nodename—Name of the Oracle Clusterware node as listed in the output from the olsnodes command
*if_name—Name by which the interface is configured in the system
*subnet—Subnet address of the interface
*if_type—Type of interface: public or cluster_interconnect
40) What is the Purpose of the voting disk in Oracle 10g Clusterware?
The voting disk records node membership information.
Oracle Clusterware uses the voting disk to determine which instances are members of a cluster.
The voting disk must reside on a shared disk. For high availability, Oracle recommends that you have a minimum of three voting disks.
If you configure a single voting disk, then you should use external mirroring to provide redundancy.
You can have up to 32 voting disks in your cluster.
41) Data Guard Protection Modes :
In some situations, a business cannot afford to lose data at any cost.
In other situations, some applications require maximum database performance and can tolerate a potential loss of data.
Data Guard provides three distinct modes of data protection to satisfy these varied requirements:
*Maximum Protection—> This mode offers the highest level of data protection.
Data is synchronously transmitted to the standby database from the primary database and transactions are not committed on the primary database unless the redo data is available on at least one standby database configured in this mode.
If the last standby database configured in this mode becomes unavailable, processing stops on the primary database.
This mode ensures no-data-loss, even in the event of multiple failures.
*Maximum Availability—> This mode is similar to the maximum protection mode, including zero data loss.
However, if a standby database becomes unavailable (for example, because of network connectivity problems),
processing continues on the primary database.
When the fault is corrected, the standby database is automatically resynchronized with the primary database.
This mode achieves no-data-loss in the event of a single failure (e.g. network failure, primary site failure . . .)
*Maximum Performance—> This mode offers slightly less data protection on the primary database, but higher performance than maximum availability mode.
In this mode, as the primary database processes transactions, redo data is asynchronously shipped to the standby database.
The commit operation of the primary database does not wait for the standby database to acknowledge receipt of redo data before completing write operations on the primary database.
If any standby destination becomes unavailable, processing continues on the primary database and there is little effect on primary database performance.
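The mode is set on the primary database, for example (one of PROTECTION, AVAILABILITY or PERFORMANCE):
SQL> alter database set standby database to maximize availability;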
42) Connection hanging? what are the possibilities?
possibilities for Oracle hanging include:
External issues - The network being down, Kerberos security issues, SSO or a firewall issue can cause an Oracle connection to hang.
One way to test this is to set sqlnet.authentication_services=(none) in your sqlnet.ora file and retry connecting.
Listener is not running - Start by checking the listener (check lsnrctl status) and diagnosing Oracle network connectivity.
No RAM - Over allocation of server resources, usually RAM, whereby there is not enough RAM to spawn another connection to Oracle.
Contention - It is not uncommon for an end-user session to "hang" when it is trying to grab a shared data resource that is held by another end-user.
The end-user often calls the help desk trying to understand why they cannot complete their transaction, and the Oracle professional must quickly identify the source of the contention.
43) What is Partition Pruning ?
Partition Pruning: Oracle optimizes SQL statements to mark the partitions or subpartitions that need to be accessed and eliminates (prunes) unnecessary partitions or subpartitions from access by those SQL statements. In other words, partition pruning is the skipping of unnecessary index and data partitions or subpartitions in a query.
44) FAN in RAC.
With Oracle RAC in place, database client applications can leverage a number of high availability features including:
Fast Connection Failover (FCF): Allows a client application to be immediately notified of a planned or unplanned database service outage by subscribing to Fast Application Notification (FAN) events.
Run-time Connection Load-Balancing: Uses the Oracle RAC Load Balancing Advisory events to distribute work appropriately across the cluster nodes and to quickly react to changes in cluster configuration, overworked nodes, or hangs.
Connection Affinity (11g recommended/required): Routes connections to the same database instance based on previous connections to an instance to limit performance impacts of switching between instances.
RAC supports web session and transaction-based affinity for different client scenarios.
45) Why extra standby redo log group?
Determine the appropriate number of standby redo log file groups.
Minimally, the configuration should have one more standby redo log file group
than the number of online redo log file groups on the primary database....
(maximum number of logfiles for each thread + 1) * maximum number of threads
Using this equation reduces the likelihood that the primary instance's log
writer (LGWR) process will be blocked because a standby redo log file cannot be
allocated on the standby database. For example, if the primary database has 2
log files for each thread and 2 threads, then 6 standby redo log file groups
are needed on the standby database."
I think it says that if you have groups #1 and #2 on primary and #1, #2 on
standby, and if LGWR on primary just finished #1, switched to #2, and now it
needs to switch to #1 again because #2 just became full, the standby must catch
up, otherwise the primary LGWR cannot reuse #1 because the standby is still
archiving the standby's #1. Now, if you have the extra #3 on standby, the
standby in this case can start to use #3 while its #1 is being archived. That
way, the primary can reuse the primary's #1 without delay.
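A hedged example of adding one standby redo log group (group number, path and size are only illustrations; standby logs should be the same size as the online logs):
SQL> alter database add standby logfile group 10 ('/u01/oradata/prod/srl10.log') size 200M;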
46) How to take a voting disk backup?
In 10gR2 RAC the voting disk can be backed up online with dd, but in 11g you cannot use dd online (use the Oracle-provided commands to do it).
First, as the root user, stop Oracle Clusterware (with the crsctl stop crs command) on all nodes if you want to add/restore a voting disk.
Then, determine the current voting disks by issuing the following command:
crsctl query css votedisk
Then issue the dd or ocopy command to back up a voting disk, as appropriate.
Give the syntax of backing up voting disks:-
On Linux or UNIX systems:
dd if=voting_disk_name of=backup_file_name
where,
voting_disk_name is the name of the active voting disk
backup_file_name is the name of the file to which we want to back up the voting disk contents
On Windows systems, use the ocopy command:
ocopy voting_disk_name backup_file_name
47) What is the Oracle Recommendation for backing up the voting disk?
Oracle recommends using the dd command to back up the voting disk with a minimum block size of 4KB.
48) How do you restore a voting disk?
To restore the backup of your voting disk, issue the dd command on Linux and UNIX systems or the ocopy command on Windows systems.
On Linux or UNIX systems:
dd if=backup_file_name of=voting_disk_name
On Windows systems, use the ocopy command:
ocopy backup_file_name voting_disk_name
where,
backup_file_name is the name of the voting disk backup file
voting_disk_name is the name of the active voting disk
49) How can we add and remove multiple voting disks?
If we have multiple voting disks, then we can remove the voting disks and add them back into our environment using the following commands,
where path is the complete path of the location where the voting disk resides:
crsctl delete css votedisk path
crsctl add css votedisk path
50) How do we stop Oracle Clusterware? When do we stop it?
Before making any modification to the voting disk, as root user,
stop Oracle Clusterware using the crsctl stop crs command on all nodes.
51) How do we add voting disk?
To add a voting disk, issue the following command as the root user,
replacing the path variable with the fully qualified path name for the voting disk we want to add:
crsctl add css votedisk path -force
52) How do we move voting disks?
To move a voting disk, issue the following commands as the root user,
replacing the path variable with the fully qualified path name for the voting disk we want to move:
crsctl delete css votedisk path -force
crsctl add css votedisk path -force
53) How do we remove voting disks?
To remove a voting disk,
issue the following command as the root user, replacing the path variable with the fully qualified path name for the voting disk we want to remove:
crsctl delete css votedisk path -force
54) What should we do after modifying voting disks?
After modifying the voting disk,
restart Oracle Clusterware using the crsctl start crs command on all nodes, and verify the voting disk location using the following command:
crsctl query css votedisk
55) When can we use -force option?
If our cluster is down, then we can include the -force option to modify the voting disk configuration,
without interacting with active Oracle Clusterware daemons.
However, using the -force option while any cluster node is active may corrupt our configuration.
56) How to find the Cluster Interconnect IP address from the Oracle Database?
The easiest way to find the cluster interconnect is to view the "hosts" file. The "hosts" file is located under: UNIX .......... /etc
Windows ...... C:\WINDOWS\system32\drivers\etc
Following are the ways to find the cluster interconnect through Oracle database:
1) Query X$KSXPIA
The following query provides the interconnect IP address registered with Oracle database:
SQL> select IP_KSXPIA from x$ksxpia where PUB_KSXPIA = 'N';
IP_KSXPIA
----------------
192.168.10.11
This query should be run on all instances to find the private interconnect IP address used on their respective nodes.
2) Query GV$CLUSTER_INTERCONNECTS view
Querying GV$CLUSTER_INTERCONNECTS view lists the interconnect used by all the participating instances of the RAC database.
SQL> select INST_ID, IP_ADDRESS from GV$CLUSTER_INTERCONNECTS;
INST_ID IP_ADDRESS
---------- ----------------
1 192.168.10.11
2 192.168.10.12
57) How to Identify master node in RAC ?
Grep crsd log file
# /u1/app/../crsd>grep MASTER crsd.log | tail -1
(or)
cssd >grep -i "master node" ocssd.log | tail -1
OR You can also use V$GES_RESOURCE view to identify the master node.
58) How to monitor block transfers over the interconnect between nodes in RAC?
The v$cache_transfer and v$file_cache_transfer views are used to examine RAC statistics.
The types of blocks that use the cluster interconnect in a RAC environment are monitored with the v$cache_transfer series of views:
v$cache_transfer: This view shows the types and classes of blocks that Oracle transfers over the cluster interconnect on a per-object basis.
The forced_reads and forced_writes columns can be used to determine the types of objects the RAC instances are sharing.
Values in the forced_writes column show how often a certain block type is transferred out of a local buffer cache due to the current version being requested by another instance.
59) what is global cache service monitoring?
Global Cache Services (GCS) Monitoring
The use of the GCS relative to the number of buffer cache reads, or logical reads can be estimated
by dividing the sum of GCS requests (global cache gets + global cache converts + global cache cr blocks received + global cache current blocks received )
by the number of logical reads (consistent gets + db block gets ) for a given statistics collection interval.
A global cache service request is made in Oracle when a user attempts to access a buffer cache to read or modify a data block and the block is not in the local cache.
A remote cache read, disk read or change access privileges is the inevitable result.
These are logical read related. Logical reads form a superset of the global cache service operations.
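A hedged sketch of approximating that ratio from v$sysstat (statistic names vary by release; the 10g 'gc ... blocks received' names are assumed here and the gets/converts components are left out for simplicity):
SQL> select sum(case when name in ('gc cr blocks received','gc current blocks received')
                then value else 0 end) /
            sum(case when name in ('consistent gets','db block gets')
                then value else 0 end) as gcs_share_of_logical_reads
     from v$sysstat;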
===========================================================
Rman Interview Questions:
2. Difference between using recovery catalog and control file?
When new incarnation happens, the old backup information in control file will be lost. It will be preserved in recovery catalog.
In recovery catalog, we can store scripts.
Recovery catalog is central and can have information of many databases.
3. Can we use same target database as catalog?
No. The recovery catalog should not reside in the target database (the database to be backed up), because the catalog information would be unavailable while that database is down or being restored.
4. How do u know how much RMAN task has been completed?
By querying v$rman_status or v$session_longops
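For example, a sketch of a progress query against v$session_longops for running RMAN channels:
SQL> select sid, serial#, opname, round(sofar/totalwork*100,1) pct_done
     from v$session_longops
     where opname like 'RMAN%' and totalwork > 0 and sofar <> totalwork;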
5. From where list & report commands will get input?
6. Command to delete archive logs older than 7days?
RMAN> delete archivelog all completed before sysdate-7;
7. How many days backup, by default RMAN stores?
8. What is the use of crosscheck command in RMAN?
Crosscheck will be useful to check whether the catalog information is intact with OS level information.
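For example, after backup pieces or archived logs have been removed at the OS level:
RMAN> crosscheck backup;
RMAN> crosscheck archivelog all;
RMAN> delete expired backup;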
9. What are the differences between crosscheck and validate commands?
10. Which is one is good, differential (incremental) backup or cumulative (incremental) backup?
A differential backup, which backs up all blocks changed after the most recent incremental backup at level 1 or 0
A cumulative backup, which backs up all blocks changed after the most recent incremental backup at level 0
11. What is Level 0, Level 1 backup?
A level 0 incremental backup, which is the base for subsequent incremental backups, copies all blocks containing data, backing the datafile up into a backup set just as a full backup would. A level 1 incremental backup can be either of the following types:
A differential backup, which backs up all blocks changed after the most recent incremental backup at level 1 or 0
A cumulative backup, which backs up all blocks changed after the most recent incremental backup at level 0
12. Can we perform level 1 backup without level 0 backup?
If no level 0 backup is available, then the behavior depends upon the compatibility mode setting. If compatibility < 10.0.0, RMAN generates a level 0 backup of the file contents at the time of the backup. If compatibility is >= 10.0.0, RMAN copies all blocks changed since the file was created, and stores the results as a level 1 backup. In other words, the SCN at the time the incremental backup is taken is the file creation SCN.
13. Will RMAN put the database/tablespace/datafile in backup mode?
Nope.
14. What is snapshot control file?
15. What is the difference between backup set and backup piece?
Backup set is logical and backup piece is physical.
16. RMAN command to backup for creating standby database?
RMAN> duplicate target database to standby database ....
17. How to do cloning by using RMAN?
RMAN> duplicate target database …
18. You loss one datafile and DB is running in ARCHIVELOG mode. You have full database backup of 1 week/day old and don’t have backup of this (newly created) datafile. How do you restore/recover file?
create the datafile and recover that datafile.
SQL> alter database create datafile ‘…path..’ size n;
RMAN> recover datafile file_id;
19. What is obsolete backup & expired backup?
A status of "expired" means that the backup piece or backup set is not found in the backup destination.
A status of "obsolete" means the backup piece is still available, but it is no longer needed. The backup piece is no longer needed since RMAN has been configured to no longer need this piece after so many days have elapsed, or so many backups have been performed.
20. What is the difference between hot backup & RMAN backup?
For hot backup, we have to put database in begin backup mode, then take backup.
RMAN won’t put database in backup mode.
21. How to put manual/user-managed backup in RMAN (recovery catalog)?
By using catalog command.
RMAN> CATALOG START WITH '/tmp/backup.ctl';
22. What are new features in Oracle 11g RMAN?
23. What is the difference between auxiliary channel and maintenance channel?
===========================================================
RMAN Question & Answers
What is RMAN and How to configure it?
RMAN is an Oracle Database client
It performs backup and recovery tasks on your databases and automates administration of your backup strategies
It greatly simplifies the dba jobs by managing the production database's backing up, restoring, and recovering database files
This tool integrates with sessions running on an Oracle database to perform a range of backup and recovery activities, including maintaining an RMAN repository of historical data about backups
There is no additional installation required for this tool
It is installed by default with the Oracle database installation
The RMAN environment consists of the utilities and databases that play a role in backing up your data
We can access RMAN through the command line or through Oracle Enterprise Manager
2) Why to use RMAN?
RMAN gives you access to several backup and recovery techniques and features not available with user-managed backup and recovery. The most noteworthy are the following:
Automatic specification of files to include in a backup
Establishes the name and locations of all files to be backed up
Maintain backup repository
Backups are recorded in the control file, which is the main repository of RMAN metadata
Additionally, you can store this metadata in a recovery catalog
Incremental backups
Incremental backup stores only blocks changed since a previous backup
Thus, they provide more compact backups and faster recovery, thereby reducing the need to apply redo during datafile media recovery
Unused block compression:
In unused block compression, RMAN can skip data blocks that have never been used
Block media recovery
We can repair a datafile with only a small number of corrupt data blocks without taking it offline or restoring it from backup
Binary compression
A binary compression mechanism integrated into Oracle Database reduces the size of backups
Encrypted backups
RMAN uses backup encryption capabilities integrated into Oracle Database to store backup sets in an encrypted format
Corrupt block detection
RMAN checks for the block corruption before taking its backup
3) How RMAN works?
RMAN backup and recovery operation for a target database are managed by RMAN client
RMAN uses the target database control file to gather metadata about the target database and to store information about its own operations
The RMAN client itself does not perform backup, restore, or recovery operations
When you connect the RMAN client to a target database, RMAN allocates server sessions on the target instance and directs them to perform the operations
The work of backup and recovery is performed by server sessions running on the target database
A channel establishes a connection from the RMAN client to a target or auxiliary database instance by starting a server session on the instance
The channel reads data into memory, processes it, and writes it to the output device
When you take a database backup using RMAN, you need to connect to the target database using RMAN Client
The RMAN client can use Oracle Net to connect to a target database, so it can be located on any host that is connected to the target host through Oracle Net
For backup you need to allocate explicit or implicit channel to the target database
An RMAN channel represents one stream of data to a device, and corresponds to one database server session.
This session dynamically collect information of the files from the target database control file before taking the backup or while restoring
For example if you give ' Backup database ' from RMAN, it will first get all the datafiles information from the controlfile
Then it will divide all the datafiles among the allocated channels. (Roughly equal size of work as per the datafile size)
Then it takes the backup in 2 steps
Step1:
The channel will read all the Blocks of the entire datafile to find out all the formatted blocks to backup
Note:
RMAN do not take backup of the unformatted blocks
Step2:
In the second step it takes back up of the formatted blocks
Example:
This is the best advantage of using RMAN as it only takes back up of the required blocks
Let's say that in a datafile of 100 MB size there may be only 10 MB of useful data and the rest, 90 MB, is free; then RMAN will only take a backup of those 10 MB
4) What O/S and oracle user privilege required using RMAN?
RMAN always connects to the target or auxiliary database using the SYSDBA privilege
Its connections to a database are specified and authenticated in the same way as SQL*Plus connections to a database
The O/S user should be part of the DBA group
For remote connection it needs the password file Authentication
Target database should have the initialization parameter REMOTE_LOGIN_PASSWORDFILE set to EXCLUSIVE or SHARED
5) RMAN terminology:
A target database:
An Oracle database to which RMAN is connected with the TARGET keyword
A target database is a database on which RMAN is performing backup and recovery operations
RMAN always maintains metadata about its operations on a database in the control file of the database
A recovery Catalog:
A separate database schema used to record RMAN activity against one or more target databases
A recovery catalog preserves RMAN repository metadata if the control file is lost, making it much easier to restore and recover following the loss of the control file
The database may overwrite older records in the control file, but RMAN maintains records forever in the catalog unless deleted by the user
Backup sets:
RMAN can store backup data in a logical structure called a backup set, which is the smallest unit of an RMAN backup
One backup set contains one or more datafiles, a section of a datafile, or archived logs
Backup Piece:
A backup set contains one or more binary files in an RMAN-specific format
This file is known as a backup piece
Each backup piece is a single output file
The size of a backup piece can be restricted; if the size is not restricted, the backup set will comprise one backup piece
Backup piece size should be restricted to no larger than the maximum file size that your filesystem will support
Image copies:
An image copy is a copy of a single file (datafile, archivelog, or controlfile)
It is very similar to an O/S copy of the file
It is not a backupset or a backup piece
No compression is performed
Snapshot Controlfile:
When RMAN needs to resynchronize from a read-consistent version of the control file, it creates a temporary snapshot control file
The default name for the snapshot control file is port-specific
Database Incarnation:
Whenever you perform incomplete recovery or perform recovery using a backup control file, you must reset the online redo logs when you open the database
The new version of the reset database is called a new incarnation
The reset database command directs RMAN to create a new database incarnation record in the recovery catalog
This new incarnation record indicates the current incarnation
6) What is RMAN Configuration and how to configure it?
The RMAN backup and recovery environment is preconfigured for each target database
The configuration is persistent and applies to all subsequent operations on this target database, even if you exit and restart RMAN
RMAN configured settings can specify backup devices, configure a connection to a backup device, policies affecting backup strategy, the encryption algorithm, the snapshot controlfile location, and others
By default there are few default configuration are set when you login to RMAN
You can customize them as per your requirement
Any time you can check the current setting by using the "Show all” command
CONFIGURE command is used to create persistent settings in the RMAN environment, which apply to all subsequent operations, even if you exit and restart RMAN
7) How to check RMAN configuration?
RMAN>Show all;
8) How to reset to default configuration?
To reset to the default configuration settings, connect to the target database from SQL*Plus and run:
SQL> connect @target_database;
SQL> execute dbms_backup_restore.resetConfig;
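Alternatively, individual settings can be returned to their defaults from within RMAN using the CLEAR option, for example:
RMAN> configure retention policy clear;
RMAN> configure backup optimization clear;
RMAN> configure controlfile autobackup clear;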
RMAN Catalog Database
9) What is Catalog database and How to configure it?
This is a separate database which contains catalog schema
You can use the same target database as the catalog database but it’s not at all recommended
10) How Many catalog database I can have?
You can have multiple catalog databases for the same target database
But at a time you can connect to only 1 catalog database via RMAN. It is not recommended to have multiple catalog databases
11) Is this mandatory to use catalog database?
No! It’s an optional one
12) What is the advantage of catalog database?
Catalog database is a secondary storage of backup metadata
It’s very useful in case you lost the current controlfile, as all the backup information are there in the catalog schema
Secondly, older backup information is aged out of the controlfile depending on CONTROL_FILE_RECORD_KEEP_TIME
The RMAN catalog database maintains the full history of backup data
13) What is the difference between catalog database & catalog schema?
Catalog database is like any other database which contains the RMAN catalog user's schema
14) What happen if catalog database lost?
Since the catalog database is optional, there is no direct effect from the loss of the catalog database
Create a new catalog database and register the target database with it. All the backup information from the target database's current controlfile will be updated to the catalog schema
If any backup information has already aged out of the target database controlfile, then you need to manually catalog those backup pieces
RMAN backup:
15) What are the database file's that RMAN can backup?
RMAN can backup the controlfile, datafiles, archived logs, the standby database controlfile, and the spfile
16) What are the database file's that RMAN cannot backup?
RMAN cannot take backups of the pfile, online redo logs, network configuration files, password files, external tables, or the contents of the Oracle home
17) Can I have archivelogs and datafile backup in a single backupset?
No. We can not put datafiles and archive logs in the same backupset
18) Can I have datafiles and contolfile backup in a single backup set?
Yes
If the controlfile autobackup is not ON then RMAN takes backup of controlfile along with the datafile 1, whenever you take backup of the database or System tablespace
19) Can I regulate the size of backup piece and backup set?
Yes!
You can set max size of the backupset as well as the backup piece
By default one RMAN channel creates a single backupset with one backup piece in it
You can use the MAXPIECESIZE channel parameter to set limits on the size of backup pieces
You can also use the MAXSETSIZE parameter on the BACKUP and CONFIGURE commands to set a limit for the size of backup sets
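For example (sizes are only illustrations):
RMAN> configure channel device type disk maxpiecesize 2G;
RMAN> configure maxsetsize to 10G;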
20) What is the difference between backup set backup and Image copy backup?
A backup set is an RMAN-specific proprietary format, whereas an image copy is a bit-for-bit copy of a file
By default, RMAN creates backup sets
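For example (the datafile number is only an illustration):
RMAN> backup as backupset datafile 4;
RMAN> backup as copy datafile 4;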
21) What is RMAN consistent backup and inconsistent backup?
A consistent backup occurs when the database is in a consistent state
That means backup of the database taken after a shutdown immediate, shutdown normal or shutdown transactional
If the database is shut down with the abort option then it is not a consistent backup
A backup when the database is up and running is called an inconsistent backup
When a database is restored from an inconsistent backup, Oracle must perform media recovery before the database can be opened, applying any pending changes from the redo logs
You can not take inconsistent backup when the database is in Noarchivelog mode
22) Can I take RMAN backup when the database is down?
No!
You can take RMAN backup only when the target database is Open or in Mount stage
It is because RMAN keeps the backup metadata in the controlfile
The controlfile is accessible only when the database is mounted or open
23) Do I need to place the database in begin backup mode while taking RMAN inconsistent backup?
RMAN does not require extra logging or backup mode because it knows the format of data blocks
RMAN is guaranteed not to back up fractured blocks
No extra redo is generated during RMAN backup
24) Can I compress RMAN backups?
RMAN supports binary compression of backup sets
The supported algorithms are BZIP2 (default) and ZLIB
It is not recommended to compress the RMAN backup using any other OS or third party utility
Note:
RMAN compressed backup with BZIP2 provides great compression but is CPU intensive
Using ZLIB compression requires the Oracle Database 11g Advanced Compression Option and is only supported with an 11g database
The feature is not backward compatible with 10g databases
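For example (algorithm names follow the 11g BZIP2/ZLIB naming used above):
RMAN> backup as compressed backupset database;
RMAN> configure compression algorithm 'ZLIB';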
25) Can I encrypt RMAN backup?
RMAN supports backup encryption for backup sets
You can use wallet-based transparent encryption, password-based encryption, or both
You can use the CONFIGURE ENCRYPTION command to configure persistent transparent encryption
Use the SET ENCRYPTION command at the RMAN session level to specify password-based encryption
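A minimal sketch of both approaches (the password is only an illustration):
RMAN> configure encryption for database on;
RMAN> set encryption on identified by MyPassword only;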
26) Can RMAN take backup to Tape?
Yes!
We can use RMAN for tape backup
But RMAN cannot write directly to tape
You need to have third party Media Management Software installed
Oracle has published an API specification which Media Management Vendors who are members of Oracle's Backup Solutions Partner program have access to
Media Management Vendors (MMVs) then write an interface library which the Oracle server uses to write and read to and from tape
Starting from Oracle 10g R2, Oracle has its own media management software for database backup to tape, called OSB (Oracle Secure Backup)
27) How RMAN Interact with Media manager?
Before performing a backup or restore to a media manager, you must allocate one or more channels or configure default channels for use with the media manager to handle the communication with it
RMAN does not issue specific commands to load, label, or unload tapes
When backing up, RMAN gives the media manager a stream of bytes and associates a unique name with this stream
When RMAN needs to restore the backup, it asks the media manager to retrieve the byte stream
All details of how and where that stream is stored are handled entirely by the media manager
28) What is Proxy copy backup to tape?
Proxy copy is functionality, supported by a few media managers, in which they handle the entire data movement between datafiles and the backup devices
Such products may use technologies such as high-speed connections between storage and media subsystems to reduce the load on the primary database server
RMAN provides a list of files requiring backup or restore to the media manager, which in turn makes all decisions regarding how and when to move the data
29) What is Oracle Secure backup?
Oracle Secure Backup is a media manager provided by Oracle that provides reliable and secure data protection through file system backup to tape
All major tape drives and tape libraries in SAN, Gigabit Ethernet, and SCSI environments are supported
30) Can I restore or duplicate my previous version database using a later version of Oracle?
For example, is it possible to restore a 9i backup while using the 10g executables?
It is possible to use the 10.2 RMAN executable to restore a 9.2 database (same for 11.2 to 11.1 or 11.1 to 10.2, etc) even if the restored datafiles will be stored in ASM
RMAN is configured so that a higher release is able to restore a lower release, but it is strongly suggested you use only the same version
31) Can I restore or duplicate between two different patchset levels?
As you can restore between different Oracle versions, you can also do so between two different patchset levels
Alter database open resetlogs upgrade;
OR
alter database open resetlogs downgrade;
32) Can I restore or duplicate between two different versions of the same operating system?
For example, can I restore my 9.2.0.1.0 RMAN backup taken against a host running Solaris 9 to a different machine where 9.2.0.1.0 is installed but where that host is running Solaris 10?
If the same Oracle Server installation CDs (media pack) can be used to install 9.2.0.1.0 on Solaris 9 and Solaris 10, this type of restore is supportable
33) Is it possible to restore or duplicate when the bit level (32 bit or 64 bit) of Oracle does not match?
For example, is it possible to restore or duplicate my 9.2 64-bit database to a 9.2 32-bit installation?
It is preferable to keep the same bit version when performing a restore/recovery
However, excluding the use of duplicate command, the use of the same operating system platform should allow for a restore/recovery between bit levels (32 bit or 64 bit) of Oracle
Note, this may be specific to the particular operating system and any problems with this should be reported to Oracle Support
If you will be running the 64-bit database against the 32-bit binary files or vice versa, after the recovery has ended the database bit version must be converted using utlirp.sql
If you do not run utlirp.sql you will see errors including but not limited to:
ORA-06553: PLS-801: INTERNAL ERROR [56319]
34) Can I restore or duplicate my RMAN backup between two different platforms such as Solaris to Linux?
In general, you cannot restore or duplicate between two different platforms
35) What are the corruption types?
*Datafile Block Corruption - Physical/Logical
*Table/Index Inconsistency
*Extents Inconsistencies
*Data Dictionary Inconsistencies
Scenarios:
Goal: How to identify all the corrupted segments in the database reported by RMAN?
Solution:
Step 1: Identify the corrupt blocks (Datafile Block Corruption - Intra block corruption)
RMAN> backup validate check logical database;
To make it faster, it can be configured to use PARALLELISM with multiple channels:
RMAN> run {
allocate channel d1 type disk;
allocate channel d2 type disk;
allocate channel d3 type disk;
allocate channel d4 type disk;
backup validate check logical database;
}
Step2: Using the view v$database_block_corruption:
SQL> select * from v$database_block_corruption;
FILE# BLOCK# BLOCKS CORRUPTION_CHANGE# CORRUPTIO
------------------------------------------------------------------------------------------------------------
6 10 1 8183236781662 LOGICAL
6 42 1 0 FRACTURED
6 34 2 0 CHECKSUM
6 50 1 8183236781952 LOGICAL
6 26 4 0 FRACTURED
5 rows selected.
Datafile Block Corruption - Intra block corruption
It refers to intra block corruptions that may cause different errors like ORA-1578, ORA-8103, ORA-1410, ORA-600 etc.
Oracle classifies the corruptions as Physical and Logical
To identify both Physical and Logical Block Corruptions use the "CHECK LOGICAL" option
It checks the complete database for both corruptions without actually doing a backup
Solution1:
$ rman target /
RMAN> backup validate check logical database;
(checks only; no backup pieces are written)
To perform the same logical checking while actually taking a backup:
RMAN> backup check logical database;
Solution2:
Check the view V$DATABASE_BLOCK_CORRUPTION to identify the block corruptions detected by RMAN
Solution3: DBVerify - Identify Datafile Block Corruptions
DBVERIFY identifies Physical and Logical Intra Block Corruptions by default
DBVerify cannot be run for the whole database in a single command
It does not need a database connection either
dbv file= blocksize=
RMAN Vs DBVerify - Datafile Intra Block Corruption
When the logical option is used by RMAN, it does exactly the same checks as DBV does for intra block corruption.
RMAN can be run with PARALLELISM using multiple channels making it faster than DBV which can not be run in parallel in a single command
DBV checks for empty blocks. In 10g RMAN may not check blocks in free extents when Locally Managed Tablespaces are used. In 11g RMAN checks for both free and used extents.
Both DBV and RMAN (11g) can check for a range of blocks. RMAN: VALIDATE DATAFILE 1 BLOCK 10 to 100;. DBV: start=10 end=100
RMAN keeps corruption information in the control file (v$database_block_corruption, v$backup_corruption). DBV does not.
RMAN may not report the corruption details like what is exactly corrupted in a block reported as a LOGICAL corrupted block. DBV reports the corruption details in the screen or in a log file.
DBV can scan blocks with a higher SCN than a given SCN.
DBV does not need a connection to the database.
Identify TABLE / INDEX Inconsistency
Table / Index inconsistency is when an entry in the table does not exist in the index or vice versa. The common errors are ORA-8102, ORA-600 [kdsgrp1], and ORA-1499 from "analyze validate structure cascade".
The tool to identify TABLE / INDEX inconsistencies is the ANALYZE command:
analyze table <owner>.<table_name> validate structure cascade;
When an inconsistency is identified, the above analyze command will produce error ORA-1499 and a trace file.
35) What Happens When A Tablespace/Database Is Kept In Begin Backup Mode?
One danger in making online backups is the possibility of inconsistent data within a block
For example, assume that you are backing up block 100 in datafile users.dbf
Also, assume that the copy utility reads the entire block while DBWR is in the middle of updating the block
In this case, the copy utility may read the old data in the top half of the block and the new data in the bottom half of the block
The result is called a fractured block, meaning that the data contained in this block is not consistent at a given SCN
Therefore Oracle internally manages the consistency as below:
1. The first time a block is changed in a datafile that is in hot backup mode, the entire block is written to the redo log files, not just the changed bytes
Normally only the changed bytes (a redo vector) are written
In hot backup mode, the entire block is logged the first time
This is because you can get into a situation where the process copying the datafile and DBWR are working on the same block simultaneously
Let's say they are, and the OS blocking read factor is 512 bytes (the OS reads 512 bytes from disk at a time). The backup program goes to read an 8k Oracle block. The OS gives it 4k. Meanwhile, DBWR has asked to rewrite this block, and the OS schedules the DBWR write to occur right now. The entire 8k block is rewritten. The backup program starts running again (multi-tasking OS here) and reads the last 4k of the block. The backup program has now gotten a fractured block: the head and tail are from two points in time.
We cannot deal with that during recovery. Hence, we log the entire block image so that during recovery, this block is totally rewritten from redo and is consistent with itself at least. We can recover it from there.
2. The datafile headers which contain the SCN of the last completed checkpoint are not updated while a file is in hot backup mode. This lets the recovery process understand what archived redo log files might be needed to fully recover this file.
===========================================================================================================================================================================================================================================
ASM Interview questions:
*****************************
1) What are the background processes in ASM
Ans:
RBAL - Rebalance master: It opens all the device files as part of disk discovery and coordinates the ARB processes for rebalance activity.
ARBx - Actual Rebalancer: They perform the actual rebalancing activities.
The number of ARBx processes depends on the ASM_POWER_LIMIT init parameter.
ASMB - ASM Bridge: This process is used to provide information to and from the Cluster Synchronization Service (CSS) used by ASM to manage the disk resources.
It is also used to update statistics and provide a heartbeat mechanism.
2) What is the use of ASM (or) Why ASM preferred over filesystem?
ANS: ASM provides striping and mirroring.
3) What are the init parameters related to ASM?
ANS:
INSTANCE_TYPE = ASM
ASM_POWER_LIMIT = 11
ASM_DISKSTRING = '/dev/rdsk/*s2', '/dev/rdsk/c1*'
ASM_DISKGROUPS = DG_DATA, DG_FRA
4) What is rebalancing (or) what is the use of ASM_POWER_LIMIT?
ANS:
ASM_POWER_LIMIT is a dynamic parameter which controls the speed of rebalancing data across disks.
The value can be 1 (lowest) to 11 (highest).
5) What are different types of redundancies in ASM & explain?
ANS:
External redundancy,
Normal redundancy,
High redundancy.
6) How to copy file to/from ASM from/to filesystem?
ANS:
By using ASMCMD cp command
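For example (the disk group, file names and paths are only illustrations):
ASMCMD> cp +DG_DATA/orcl/datafile/users.259.123456789 /tmp/users01.dbf
ASMCMD> cp /tmp/users01.dbf +DG_DATA/orcl/datafile/users01.dbf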
7) How to find out the databases, which are using the ASM instance?
ANS:
ASMCMD> lsct
DB_Name Status Software_Version Compatible_version Instance_Name Disk_Group
amxdcmp1 CONNECTED 11.2.0.2.0 11.2.0.2.0 amxdcmp1 DG1_DCM_DATA
amxddip1 CONNECTED 11.2.0.2.0 11.2.0.2.0 amxddip1 DG1_DDI_DATA
ASMCMD>
(or)
SQL> select DB_NAME from V$ASM_CLIENT;
8) What are different types of stripings in ASM & their differences?
ANS:
Fine-grained striping
Coarse-grained striping
ASMCMD> lsdg
State Type Rebal Sector Block AU Total_MB Free_MB Req_mir_free_MB Usable_file_MB Offline_disks Voting_files Name
MOUNTED EXTERN N 512 4096 1048576 6835200 1311391 0 1311391 0 N DG1_DCM_DATA/
MOUNTED EXTERN N 512 4096 1048576 486400 154487 0 154487 0 N DG1_DDI_DATA/
ASMCMD>
SQL> select NAME,ALLOCATION_UNIT_SIZE/1024/1024 "MB" from v$asm_diskgroup;
NAME MB
------------------------------ ----------
DG1_DCM_DATA 1
DG1_DDI_DATA 1
9) What is an allocation unit, what is the default value of au_size, and how do you change it?
ANS:
Every ASM disk is divided into allocation units (AU).
An AU is the fundamental unit of allocation within a disk group.
A file extent consists of one or more AUs. An ASM file consists of one or more file extents.
The default AU size is 1 MB; it can only be set at disk group creation time, using the AU_SIZE attribute, for example:
CREATE DISKGROUP disk_group_2 EXTERNAL REDUNDANCY DISK '/dev/sde1' ATTRIBUTE 'au_size' = '32M';
10) What process does the rebalancing?
ANS:
RBAL, ARBn
11) How to add/remove disk to/from diskgroup?
ANS:
add disk:
ALTER DISKGROUP DG1_ZABBIX_DATA ADD DISK
'/zabbix_u03/oradata/zbxprd1/ZBX_DATA_DISK009' name ZBX_DATA_DISK009,
'/zabbix_u04/oradata/zbxprd1/ZBX_DATA_DISK010' name ZBX_DATA_DISK010,
'/zabbix_u05/oradata/zbxprd1/ZBX_DATA_DISK011' name ZBX_DATA_DISK011;
remove disk:
alter diskgroup DG1_CIE_DATA drop disk
DG_CIE_DATA_DISK001,
DG_CIE_DATA_DISK002,
DG_CIE_DATA_DISK003,
DG_CIE_DATA_DISK004;
*******************************************************************************************************************************************
Oracle RMAN Interview Questions/FAQs:
**************************************
1) Difference between catalog and nocatalog?
ANS: CATALOG is used when you use a repository database as the catalog.
NOCATALOG is used when you use the controlfile to register your backup information.
The default is NOCATALOG.
2) Difference between using recovery catalog and control file?
ANS:
When new incarnation happens, the old backup information in control file will be lost.
It will be preserved in recovery catalog.
In recovery catalog, we can store scripts.
Recovery catalog is central and can have information of many databases.
3) Can we use same target database as catalog?
ANS:
No.
The recovery catalog should not reside in the target database (the database to be backed up),
because the catalog information would be unavailable while that database is down or being restored.
4) How do u know how much RMAN task has been completed?
ANS:
By querying v$rman_status or v$session_longops
5) From where list & report commands will get input
LIST:
The primary purpose of the LIST command is to list backups and copies. For example, you can list:
Backups and proxy copies of a database, tablespace, datafile, archived redo log, or control file
Backups that have expired
Backups restricted by time, path name, device type, tag, or recoverability
Archived redo log files and disk copies
REPORT:
You can use the REPORT command to answer important questions, such as:
Which files need a backup?
Which files have had unrecoverable operations performed on them?
Which backups are obsolete and can be deleted?
What was the physical schema of the target database or a database in the Data Guard environment at some previous time?
Which files have not been backed up recently?
6) Command to delete archive logs older than 7days?
ANS:
RMAN> delete archivelog all completed before sysdate-7;
7) What is the use of crosscheck command in RMAN?
ANS:
Crosscheck will be useful to check whether the catalog information is intact with OS level information.
8) What are the differences between crosscheck and validate commands
ANS:
Use the CROSSCHECK command to synchronize the physical reality of backups and copies with their logical records in the RMAN repository.
Use the VALIDATE command to check for corrupt blocks and missing files, or to determine whether a backup set can be restored.
9) Which is one is good, differential (incremental) backup or cumulative (incremental) backup?
ANS:
A differential backup, which backs up all blocks changed after the most recent incremental backup at level 1 or 0
A cumulative backup, which backs up all blocks changed after the most recent incremental backup at level 0
10) What is Level 0, Level 1 backup?
ANS:
A level 0 incremental backup, which is the base for subsequent incremental backups, copies all blocks containing data,
backing the datafile up into a backup set just as a full backup would.
A level 1 incremental backup can be either of the following types:
A differential backup, which backs up all blocks changed after the most recent incremental backup at level 1 or 0
A cumulative backup, which backs up all blocks changed after the most recent incremental backup at level 0
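For illustration, the corresponding RMAN commands (assuming channels are already configured):
RMAN> BACKUP INCREMENTAL LEVEL 0 DATABASE;
RMAN> BACKUP INCREMENTAL LEVEL 1 DATABASE;               -- differential is the default
RMAN> BACKUP INCREMENTAL LEVEL 1 CUMULATIVE DATABASE;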
11) Can we perform level 1 backup without level 0 backup?
ANS:
If no level 0 backup is available, then the behavior depends upon the compatibility mode setting.
If compatibility < 10.0.0, RMAN generates a level 0 backup of the file contents at the time of the backup.
If compatibility is >= 10.0.0, RMAN copies all blocks changed since the file was created, and stores the results as a level 1 backup.
In other words, the SCN at the time the incremental backup is taken is the file creation SCN.
12) Will RMAN put the database/tablespace/datafile in backup mode ?
RMAN does not require you to put the database in backup mode.
13) What is snapshot control file?
ANS:
The snapshot controlfile is a copy of the controlfile that RMAN utilizes during long-running operations (such as backups).
RMAN needs a read-consistent view of the controlfile for the backup operation, but by its nature the control file is extremely volatile.
Instead of putting a lock on the control file and causing enqueue contention, RMAN makes a copy of the controlfile called the snapshot controlfile.
The snapshot is refreshed at the beginning of every backup.
14) What is controlfile autobackup?
ANS:
If CONFIGURE CONTROLFILE AUTOBACKUP is set to ON, RMAN automatically backs up the control file and server parameter file after every backup and after database structural changes.
The control file autobackup contains metadata about the previous backup, which is crucial for disaster recovery.
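A sketch of enabling it and (optionally) setting a disk format (the path shown is an assumption):
RMAN> CONFIGURE CONTROLFILE AUTOBACKUP ON;
RMAN> CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '/u01/backup/%F';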
15) What is the difference between backup set and backup piece?
ANS:
Backup set is logical and backup piece is physical.
16) What is obsolete backup & expired backup?
A status of "expired" means that the backup piece or backup set is not found in the backup destination.
A status of "obsolete" means the backup piece is still available, but it is no longer needed.
The backup piece is no longer needed since RMAN has been configured to no longer need this piece after so many days have elapsed,
or so many backups have been performed.
17) What is the difference between hot backup & RMAN backup?
For hot backup, we have to put database in begin backup mode, then take backup.
RMAN won’t put database in backup mode.
18) How to put manual/user-managed backup in RMAN (recovery catalog)?
By using catalog command.
RMAN> CATALOG START WITH '/tmp/backup.ctl';
19) What is the difference between auxiliary channel and maintenance channel ?
AUXILIARY:
Specifies a connection between RMAN and an auxiliary database instance.
An auxiliary instance is used when executing the DUPLICATE or TRANSPORT TABLESPACE command,
and when performing TSPITR with RECOVER TABLESPACE . When specifying this option, the auxiliary instance must be started but not mounted.
See Also: DUPLICATE to learn how to duplicate a database, and CONNECT to learn how to connect to a duplicate database instance
CHANNEL:
Specifies a connection between RMAN and the target database instance.
The channel_id is the case-sensitive name of the channel.
The database uses the channel_id to report I/O errors.
Each connection initiates a database server session on the target or auxiliary instance: this session performs the work of backing up, restoring, or recovering RMAN backups.
You cannot make a connection to a shared server session.
Whether ALLOCATE CHANNEL allocates operating system resources immediately depends on the operating system.
On some platforms, operating system resources are allocated at the time the command is issued.
On other platforms, operating system resources are not allocated until you open a file for reading or writing.
Each channel operates on one backup set or image copy at a time.
RMAN automatically releases the channel at the end of the job.
*******************************************************************************************************************************************
Oracle RAC Interview Questions/FAQs Part1 :
----------------------------------------------
1) What is the use of RAC?
ANS:
Oracle RAC allows multiple computers to run Oracle RDBMS software simultaneously while accessing a single database, thus providing clustering.
2) What are the prerequisites for RAC setup ?
3) What are Oracle Clusterware/Daemon processes and what they do?
Ans:
ocssd, crsd, evmd, oprocd, racgmain, racgimon
4) What are the special background processes for RAC (or) what is difference in stand-alone database & RAC database background processes?
ANS:
DIAG, LCKn, LMD, LMSn, LMON
5) What are structural changes in 11g R2 RAC?
Ans:
http://satya-racdba.blogspot.com/2010/07/new-features-in-9i-10g-11g-rac.html
Grid & ASM are on one home,
Voting disk & ocrfile can be on the ASM,
SCAN,
By using srvctl, we can manage diskgroups, home, ons, eons, filesystem, srvpool, server, scan, scan_listener, gns, vip, oc4j, GSD
6) What is cache fusion?
Ans:
Transferring of data between RAC instances by using private network.
Cache Fusion is the remote memory mapping of Oracle buffers,
shared between the caches of participating nodes in the cluster.
When a block of data is read from a datafile by an instance within the cluster and another instance needs the same block,
it is faster to ship the block image over the interconnect from the instance that already has the block in its SGA than to read it again from disk.
7) What is the purpose of Private Interconnect?
Ans:
Clusterware uses the private interconnect for cluster synchronization (network heartbeat) and daemon communication between the clustered nodes. This communication is based on the TCP protocol.
RAC uses the interconnect for cache fusion (UDP) and inter-process communication (TCP).
8) What are the Clusterware components?
Ans:
Voting Disk - Oracle RAC uses the voting disk to manage cluster membership by way of a health check and arbitrates cluster ownership among the instances in case of network failures. The voting disk must reside on shared disk.
Oracle Cluster Registry (OCR) - Maintains cluster configuration information as well as configuration information about any cluster database within the cluster. The OCR must reside on shared disk that is accessible by all of the nodes in your cluster.
The daemon OCSSd manages the configuration info in OCR and maintains the changes to cluster in the registry.
Virtual IP (VIP) - When a node fails, the VIP associated with it is automatically failed over to some other node
and new node re-arps the world indicating a new MAC address for the IP.
Subsequent packets sent to the VIP go to the new node, which will send error RST packets back to the clients.
This results in the clients getting errors immediately.
crsd – Cluster Resource Services Daemon
cssd – Cluster Synchronization Services Daemon
evmd – Event Manager Daemon
oprocd / hangcheck_timer – Node hang detector
9) What is OCR file?
Ans:
RAC configuration information repository that manages information about the cluster node list and instance-to-node mapping information.
The OCR also manages information about Oracle Clusterware resource profiles for customized applications.
Maintains cluster configuration information as well as configuration information about any cluster database within the cluster.
The OCR must reside on shared disk that is accessible by all of the nodes in your cluster.
The daemon OCSSd manages the configuration info in OCR and maintains the changes to cluster in the registry.
10) What is Voting file/disk and how many files should be there?
Ans:
Voting Disk File is a file on the shared cluster system or a shared raw device file.
Oracle Clusterware uses the voting disk to determine which instances are members of a cluster.
Voting disk is akin to the quorum disk, which helps to avoid the split-brain syndrome.
Oracle RAC uses the voting disk to manage cluster membership by way of a health check and arbitrates cluster ownership among the instances
in case of network failures. The voting disk must reside on shared disk.
11) How to take backup of OCR file?
Ans:
#ocrconfig -manualbackup
#ocrconfig -export file_name.dmp
#ocrdump -backupfile my_file
$cp -p -R /u01/app/crs/cdata /u02/crs_backup/ocrbackup/RAC1
12) How to recover OCR file?
Ans:
#ocrconfig -restore backup_file.ocr
#ocrconfig -import file_name.dmp
13) What is local OCR?
Ans:
/etc/oracle/local.ocr
/var/opt/oracle/local.ocr
14) How to check backup of OCR files?
Ans:
#ocrconfig –showbackup
15) How to take backup of voting file?
Ans:
dd if=/u02/ocfs2/vote/VDFile_0 of=$ORACLE_BASE/bkp/vd/VDFile_0
crsctl backup css votedisk -- from 11g R2
16) How do I identify the voting disk location?
Ans:
# crsctl query css votedisk
17) How do I identify the OCR file location?
check /var/opt/oracle/ocr.loc or /etc/ocr.loc
Ans:
# ocrcheck
18) If voting disk/OCR file got corrupted and don’t have backups, how to get them?
Ans:
We have to install Clusterware.
19) Who will manage OCR files?
Ans:
cssd will manage OCR.
20) Who will take backup of OCR files?
Ans:
crsd will take backup.
21) What is split brain syndrome?
Ans:
Split brain arises when the cluster nodes lose communication with each other but keep running, so two or more instances attempt to control the cluster database independently.
In a two-node environment, for example, both instances try to manage updates simultaneously, which can corrupt the shared data; the voting disk is used to resolve this.
22) What are the various IPs used in RAC? Or how many IPs do we need in RAC?
Ans:
Public IP, Private IP, Virtual IP, SCAN IP
23) What is the use of virtual IP?
Ans:
When a node fails,
the VIP associated with it is automatically failed over to some other node and new node re-arps the world indicating a new MAC address for the IP.
Subsequent packets sent to the VIP go to the new node, which will send error RST packets back to the clients.
This results in the clients getting errors immediately.
Without using VIPs or FAN, clients connected to a node that died will often wait for a TCP timeout period (which can be up to 10 min) before getting an error.
As a result, you don't really have a good HA solution without using VIPs.
24) What is the use of SCAN IP (SCAN name) and will it provide load balancing?
Ans:
Single Client Access Name (SCAN) is a feature introduced in Oracle Real Application Clusters (RAC) 11g Release 2
that provides a single name for clients to access an Oracle Database running in a cluster.
The benefit is that client connect strings using SCAN do not need to change if you add or remove nodes in the cluster.
Yes, SCAN provides connect-time load balancing: the SCAN listeners hand each connection off to the local listener of the least loaded instance offering the requested service.
25) How many SCAN listeners will be running?
Ans:
Three SCAN listeners only.
26) What is FAN?
Ans:
Applications can use Fast Application Notification (FAN) to enable rapid failure detection, balancing of connection pools after failures,
and re-balancing of connection pools when failed components are repaired.
The FAN process uses system events that Oracle publishes when cluster servers become unreachable or if network interfaces fail.
27) What is FCF?
Ans:
Fast Connection Failover provides high availability to FAN integrated clients, such as clients that use JDBC, OCI, or ODP.NET.
If you configure the client to use fast connection failover, then the client automatically subscribes to FAN events and can react to database UP and DOWN events.
In response, Oracle gives the client a connection to an active instance that provides the requested database service.
30) What is TAF and TAF policies?
Ans:
Transparent Application Failover (TAF) - A runtime failover for high availability environments,
such as Real Application Clusters and Oracle Real Application Clusters Guard, TAF refers to the failover and re-establishment of application-to-service connections.
It enables client applications to automatically reconnect to the database if the connection fails, and optionally resume a SELECT statement that was in progress.
This reconnect happens automatically from within the Oracle Call Interface (OCI) library.
31) What are nodeapps?
Ans:
VIP, listener, ONS, GSD
32) What is gsd (Global Service Daemon)? [ http://www.datadisk.co.uk/html_docs/rac/rac_cs.htm ]
ANS:
GSD runs on each node, with one GSD process per node.
The GSD coordinates with the cluster manager to receive requests from clients such as DBCA, EM, and the SRVCTL utility to execute administrative tasks such as instance startup or shutdown.
The GSD is not an Oracle instance background process and is therefore not started with the Oracle instance.
33) How to do load balancing in RAC?
Client Side Connect-Time Load Balance:
---------------------------------------
The client load balancing feature enables clients to randomize connection requests among the listeners.
This is done by client Tnsnames Parameter: LOAD_BALANCE.
The (load_balance=yes) instructs SQLNet to progress through the list of listener addresses in the address_list section of the net service name in a random sequence. When set to OFF, instructs SQLNet to try the addresses sequentially until one succeeds.
Client Side Connect-Time failover
-------------------------------------
This is done by client Tnsnames Parameter: FAILOVER
The (failover=on) enables clients to connect to another listener if the initial connection to the first listener fails. Without connect-time failover, Oracle Net attempts a connection with only one listener.
Server Side Listener Connection Load Balancing.
-------------------------------------------------
With server-side load balancing, the listener directs a connection request to the best instance currently providing the service.
Init parameter remote_listener should be set. When set, each instance registers with the TNS listeners running on all nodes within the cluster.
There are two types of server-side load balancing:
--------------------------------------------------
Load Based — Server-side load balancing redirects connections depending on node load. This is the default.
Session Based — Session based load balancing takes into account the number of sessions connected to each node and then distributes the connections to balance the number of sessions across the different nodes.
From 10g Release 2 the service can be set up to use the load balancing advisory. This means connections can be routed using SERVICE TIME and THROUGHPUT. Connection load balancing means the goal of a service can be changed to reflect the type of connections using the service.
Transparent Application Failover (TAF) :
----------------------------------------------
Transparent Application Failover (TAF) is a feature of the Oracle Call Interface (OCI) driver at client side. It enables the application to automatically reconnect to a database, if the database instance to which the connection is made fails. In this case, the active transactions roll back.
Tnsnames Parameter: FAILOVER_MODE
e.g (failover_mode=(type=select)(method=basic))
Failover Mode TYPE can be either SESSION or SELECT.
With SESSION, only the session is failed over to the next available node; with SELECT, the in-progress SELECT query is also resumed.
TAF can be configured with just server side service settings by using dbms_service package.
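A hedged tnsnames.ora sketch combining client-side load balancing, connect-time failover and TAF (the host names, port and service name are assumptions):
PROD =
  (DESCRIPTION =
    (LOAD_BALANCE = yes)
    (FAILOVER = on)
    (ADDRESS = (PROTOCOL = TCP)(HOST = node1-vip)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = node2-vip)(PORT = 1521))
    (CONNECT_DATA =
      (SERVICE_NAME = prod)
      (FAILOVER_MODE = (TYPE = select)(METHOD = basic)(RETRIES = 30)(DELAY = 5))
    )
  )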
Fast Connection Failover (FCF):
-----------------------------------
Fast Connection Failover is a feature of Oracle clients that have integrated with FAN HA Events.
Oracle JDBC Implicit Connection Cache, Oracle Call Interface (OCI), and Oracle Data Provider for .Net (ODP.Net) include fast connection failover.
With fast connection failover, when a down event is received, cached connections affected by the down event are immediately marked invalid and cleaned up.
34) What are the uses of services? How to find out the services in cluster?
Ans:
Applications should use the services to connect to the Oracle database.
Services define rules and characteristics (unique name, workload balancing, failover options, and high availability) to control how users and applications connect to database instances.
35) How to find out the nodes in cluster (or) how to find out the master node?
Ans:
# olsnodes -- whichever node is displayed first is the master node of the cluster.
select MASTER_NODE from v$ges_resource;
To find out which is the master node, you can see ocssd.log file and search for "master node number".
36) How to know the public IPs, private IPs, VIPs in RAC?
Ans:
# olsnodes -n -p -i
node1-pub 1 node1-prv node1-vip
node2-pub 2 node2-prv node2-vip
37) What utility is used to start DB/instance?
Ans:
srvctl start database –d database_name
srvctl start instance –d database_name –i instance_name
38) How can you shutdown single instance?
Ans:
Change cluster_database=false
srvctl stop instance –d database_name –i instance_name
39) What is HAS (High Availability Service) and the commands?
Ans:
HAS includes ASM & database instance and listeners.
crsctl check has
crsctl config has
crsctl disable has
crsctl enable has
crsctl query has releaseversion
crsctl query has softwareversion
crsctl start has
crsctl stop has [-f]
40) How many nodes are supported in a RAC Database?
Ans:
From 10g Release 2, Oracle Clusterware supports 100 nodes in a cluster, and a RAC database can have 100 instances.
41) What is fencing?
Ans:
I/O fencing prevents updates by failed instances: the failure is detected and the failed node is fenced off, which prevents split brain in the cluster.
When a cluster node fails, the failed node needs to be fenced off from all the shared disk devices or diskgroups.
This methodology is called I/O fencing, sometimes called disk fencing or failure fencing.
42) Why Clusterware installed in root (why not oracle)?
Because Oracle Clusterware works closely with the operating system, system administrator access is required for some of the installation tasks.
In addition, some of the Oracle Clusterware processes must run as the special operating system user, root.
43) What are the wait events in RAC?
Ans:
http://satya-racdba.blogspot.com/2012/10/wait-events-in-oracle-rac-wait-events.html
http://orainternals.wordpress.com/2009/12/23/rac-performance-tuning-understanding-global-cache-performance/
gc buffer busy
gc buffer busy acquire
gc current request
gc cr request
gc cr failure
gc current block lost
gc cr block lost
gc current block corrupt
gc cr block corrupt
gc current block busy
gc cr block busy
gc current block congested
gc cr block congested.
gc current block 2-way
gc cr block 2-way
gc current block 3-way
gc cr block 3-way
(gc current/cr block n-way, n is number of nodes)
gc current grant 2-way
gc cr grant 2-way
gc current grant busy
gc current grant congested
gc cr grant congested
gc cr multi block read
gc current multi block request
gc cr multi block request
gc cr block build time
gc current block flush time
gc cr block flush time
gc current block send time
gc cr block send time
gc current block pin time
gc domain validation
gc current retry
ges inquiry response
gcs log flush sync
44) What are the initialization parameters that must have same value for every instance in an Oracle RAC database?
Ans:
http://satya-racdba.blogspot.com/2012/09/init-parameters-in-oracle-rac.html
ACTIVE_INSTANCE_COUNT
ARCHIVE_LAG_TARGET
COMPATIBLE
CLUSTER_DATABASE
CLUSTER_DATABASE_INSTANCE
CONTROL_FILES
DB_BLOCK_SIZE
DB_DOMAIN
DB_FILES
DB_NAME
DB_RECOVERY_FILE_DEST
DB_RECOVERY_FILE_DEST_SIZE
DB_UNIQUE_NAME
INSTANCE_TYPE
PARALLEL_MAX_SERVERS
REMOTE_LOGIN_PASSWORD_FILE
UNDO_MANAGEMENT
45) What is the difference between cr block and cur (current) block?
46) New features in Oracle Clusterware 12c ?
Oracle Flex ASM - This feature of Oracle Clusterware 12c claims to reduce the per-node overhead of running an ASM instance.
Instances can use a remote node's ASM during any planned/unplanned downtime of the local ASM instance; ASM metadata requests can be served by a non-local ASM instance.
ASM Disk Scrubbing - From RAC 12c, ASM comes with disk scrubbing feature so that logical corruptions can be discovered.
Also Oracle 12c ASM can automatically correct this in normal or high redundancy diskgroups.
Oracle ASM Disk Resync & Rebalance enhancements.
Application Continuity (AC) - is transparent to the application; in case the database or the infrastructure becomes unavailable, this new feature, which works with JDBC drivers, masks recoverable outages.
It recovers the database session beneath the application so that the outage appears to be only delayed connectivity or execution.
Transaction guard (improvements of Fast Application Notification).
IPv6 Support - Oracle RAC 12c now supports IPv6 for Client connectivity, Interconnect is still on IPv4.
Per Subnet multiple SCAN - RAC 12c, per-Subnet multiple SCAN can be configured per cluster.
Each RAC instance opens the Container Database (CDB) as a whole so that versions would be same for CDB as well as for all of the Pluggable Databases (PDBs). PDBs are also fully compatible with RAC.
Oracle installer will run root.sh script across nodes. We don't have to run the scripts manually on all RAC nodes.
new "ghctl" command for patching.
47) New features in Oracle 9i/10g/11g RAC ? [ http://satya-racdba.blogspot.in/2010/07/new-features-in-9i-10g-11g-rac.html ]
Oracle Real Application Clusters New features
Oracle 9i RAC:
---------------------
OPS (Oracle Parallel Server) was renamed as RAC
CFS (Cluster File System) was supported
OCFS (Oracle Cluster File System) for Linux and Windows
watchdog timer replaced by hangcheck timer
Oracle 10g R1 RAC :
-------------------
Cluster Manager replaced by CRS
ASM introduced
Concept of Services expanded
ocrcheck introduced
ocrdump introduced
AWR was instance specific
Oracle 10g R2 RAC :
-------------------
CRS was renamed as Clusterware
asmcmd introduced
CLUVFY introduced
OCR and Voting disks can be mirrored
Can use FAN/FCF with TAF for OCI and ODP.NET
Oracle 11g R1 RAC :
---------------------
--> Oracle 11g RAC parallel upgrades - Oracle 11g have rolling upgrade features whereby RAC database can be upgraded without any downtime.
-->Hot patching - Zero downtime patch application.
-->Oracle RAC load balancing advisor - Starting from 10g R2 we have RAC load balancing advisor utility.
11g RAC load balancing advisor is only available with clients who use .NET, ODBC, or the Oracle Call Interface (OCI).
-->ADDM for RAC - Oracle has incorporated RAC into the automatic database diagnostic monitor, for cross-node advisories.
The script addmrpt.sql gives a report for a single instance and will not report on all instances in RAC; this is known as instance ADDM.
Using the new package DBMS_ADDM, we can generate a report for all instances of the RAC database; this is known as database ADDM.
--> Optimized RAC cache fusion protocols - moves on from the general cache fusion protocols in 10g to deal with specific scenarios where the protocols could be further optimized.
--> Oracle 11g RAC Grid provisioning - The Oracle grid control provisioning pack allows us to "blow-out" a RAC node without the time-consuming install, using a pre-installed "footprint".
Oracle 11g R2 RAC :
-----------------------
--> We can store everything on the ASM. We can store OCR & voting files also on the ASM.
--> ASMCA
--> Single Client Access Name (SCAN) - eliminates the need to change tns entry when nodes are added to or removed from the Cluster.
RAC instances register with the SCAN listeners as remote listeners. SCAN is a fully qualified name.
Oracle recommends assigning 3 addresses to SCAN, which create three SCAN listeners.
--> Clusterware components: crfmond, crflogd, GIPCD.
--> AWR is consolidated for the database.
--> 11g Release 2 Real Application Cluster (RAC) has server pooling technologies so it’s easier to provision and manage database grids.
This update is geared toward dynamically adjusting servers as corporations manage the ebb and flow between data requirements for data warehousing and applications. By default, LOAD_BALANCE is ON.
--> GSD (Global Service Daemon), gsdctl introduced.
--> GPnP profile.
--> Cluster information in an XML profile.
--> Oracle RAC OneNode is a new option that makes it easier to consolidate databases that aren’t mission critical, but need redundancy.
--> raconeinit - to convert database to RacOneNode.
--> raconefix - to fix RacOneNode database in case of failure.
--> racone2rac - to convert RacOneNode back to RAC.
--> Oracle Restart - the feature of Oracle Grid Infrastructure's High Availability Services (HAS) to manage associated listeners, ASM instances and Oracle instances.
--> Oracle Omotion - Oracle 11g release2 RAC introduces new feature called Oracle Omotion, an online migration utility.
This Omotion utility will relocate the instance from one node to another, whenever instance failure happens.
Omotion utility uses Database Area Network (DAN) to move Oracle instances.
Database Area Network (DAN) technology helps seamless database relocation without losing transactions.
--> Cluster Time Synchronization Service (CTSS) is a new feature in Oracle 11g R2 RAC, which is used to synchronize time across the nodes of the cluster. CTSS can act as a replacement for the NTP protocol.
--> Grid Naming Service (GNS) is a new service introduced in Oracle RAC 11g R2. With GNS, Oracle Clusterware (CRS) can manage Dynamic Host Configuration Protocol (DHCP) and DNS services for dynamic node registration and configuration.
--> Cluster interconnect: Used for data blocks, locks, messages, and SCN numbers.
--> Oracle Local Registry (OLR) - introduced in Oracle 11g R2 as part of Oracle Clusterware. The OLR is a node's local repository, similar to the OCR (but local), and is managed by OHASD. It contains data for the local node only and is not shared among the other nodes.
--> Multicasting is introduced in 11gR2 for private interconnect traffic.
--> I/O fencing prevents updates by failed instances, and detecting failure and preventing split brain in cluster. When a cluster node fails, the failed node needs to be fenced off from all the shared disk devices or diskgroups. This methodology is called I/O Fencing, sometimes called Disk Fencing or failure fencing.
--> Re-bootless node fencing (restart) - instead of fast-rebooting the node, a graceful shutdown of the Clusterware stack is attempted first.
--> Clusterware log directories: acfs*
--> HAIP (IC VIP).
--> Redundant interconnects: NIC bonding, HAIP.
--> RAC background processes: DBRM – Database Resource Manager, PING – Response time agent.
--> Virtual Oracle 11g RAC cluster - Oracle 11g RAC supports virtualization.
*************************************************************************************************************************************************************
Oracle GoldenGate Interview Questions/FAQs :
**********************************************
1) What are processes/components in GoldenGate?
Ans:
Manager, Extract, Replicat, Data Pump
2) What is Data Pump process in GoldenGate ?
The Data Pump (not to be confused with the Oracle Export/Import Data Pump) is an optional secondary Extract group that is created on the source system. When a Data Pump is not used, the Extract process writes to a remote trail located on the target system using TCP/IP. When a Data Pump is configured, the Extract process writes to a local trail, and from there the Data Pump reads the trail and writes the data over the network to the remote trail located on the target system.
The main advantage is protection against network failure: without a local trail and Data Pump, the Extract process holds the captured data only in memory before sending it over the network, so a network failure could cause the Extract process to abort (abend); with a Data Pump, Extract keeps writing to the local trail and transmission simply resumes when the network recovers. The Data Pump can also perform complex data transformation or filtering. It is likewise useful when consolidating data from several sources into one central target, where a Data Pump on each individual source system can write to a common trail file on the target. A minimal parameter file sketch is shown below.
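As an illustration only, a Data Pump parameter file sketch (the group, schema, user, host and trail names are assumptions):
EXTRACT dpump1
USERID ggadmin, PASSWORD ggadmin
RMTHOST target-host, MGRPORT 7809
RMTTRAIL ./dirdat/rt
TABLE hr.*;
The group would then be registered in GGSCI with ADD EXTRACT dpump1, EXTTRAILSOURCE ./dirdat/lt and ADD RMTTRAIL ./dirdat/rt, EXTRACT dpump1 (trail paths again assumed).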
3) What is the command line utility in GoldenGate (or) what is ggsci?
ANS: Golden Gate Command Line Interface essential commands – GGSCI
GGSCI -- (Oracle) GoldenGate Software Command Interpreter
4) What is the default port for GoldenGate Manager process?
ANS:
7809
5) What are the important files in GoldenGate?
GLOBALS, ggserr.log, dirprm, etc ...
6) What is checkpoint table?
ANS:
The checkpoint table is a table created in the target database to store Replicat checkpoints.
GoldenGate maintains its own checkpoints, which record a known position in the trail file from where the Replicat process will restart processing after any kind of error or shutdown.
This ensures data integrity, and a record of these checkpoints is maintained either in files stored on disk or in a table in the database, which is the preferred option.
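A sketch of creating one from GGSCI (the GoldenGate user and table name are assumptions); the same table can also be referenced through the CHECKPOINTTABLE entry in the GLOBALS file:
GGSCI> DBLOGIN USERID ggadmin, PASSWORD ggadmin
GGSCI> ADD CHECKPOINTTABLE ggadmin.ggs_checkpoint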
7) How can you see GoldenGate errors?
ANS:
ggsci> VIEW GGSEVT
ggserr.log file
*************************************************************************************************************************************************************
Oracle Data Guard Interview Questions/FAQs :
************************************************
1) How to setup Data Guard?
2) What are different types of modes in Data Guard and which is default?
ANS:
Maximum performance:
This is the default protection mode.
It provides the highest level of data protection that is possible without affecting the performance of a primary database.
This is accomplished by allowing transactions to commit as soon as all redo data generated by those transactions has been written to the online log.
Maximum protection:
This protection mode ensures that no data loss will occur if the primary database fails.
To provide this level of protection, the redo data needed to recover a transaction must be written to both the online redo log and to at least one standby database before the transaction commits.
To ensure that data loss cannot occur, the primary database will shut down, rather than continue processing transactions.
Maximum availability:
This protection mode provides the highest level of data protection that is possible without compromising the availability of a primary database.
Transactions do not commit until all redo data needed to recover those transactions has been written to the online redo log and to at least one standby database.
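For reference, a sketch of switching and checking the protection mode on the primary (assuming redo transport is already configured to support the chosen mode):
SQL> ALTER DATABASE SET STANDBY DATABASE TO MAXIMIZE AVAILABILITY;
SQL> SELECT protection_mode, protection_level FROM v$database;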
3) How many standby databases we can create (in 10g/11g)?
ANS:
Till Oracle 10g, 9 standby databases are supported.
From Oracle 11g R2, we can create 30 standby databases.
4) What are the parameters we’ve to set in primary/standby for Data Guard ?
ANS:
DB_UNIQUE_NAME
LOG_ARCHIVE_CONFIG
LOG_ARCHIVE_MAX_PROCESSES
DB_CREATE_FILE_DEST
DB_FILE_NAME_CONVERT
LOG_FILE_NAME_CONVERT
LOG_ARCHIVE_DEST_n
LOG_ARCHIVE_DEST_STATE_n
FAL_SERVER
FAL_CLIENT
STANDBY_FILE_MANAGEMENT
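A hedged sketch of setting a few of these on the primary (the DB_UNIQUE_NAMEs prod and prodsb are assumptions):
SQL> ALTER SYSTEM SET log_archive_config='DG_CONFIG=(prod,prodsb)' SCOPE=BOTH;
SQL> ALTER SYSTEM SET log_archive_dest_2='SERVICE=prodsb ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=prodsb' SCOPE=BOTH;
SQL> ALTER SYSTEM SET fal_server='prodsb' SCOPE=BOTH;
SQL> ALTER SYSTEM SET standby_file_management=AUTO SCOPE=BOTH;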
5) What is the use of fal_server & fal_client, and is it mandatory to set them?
ANS:
FAL_SERVER
specifies the FAL (fetch archive log) server for a standby database. The value is an Oracle Net service name, which is assumed to be configured properly on the standby database system to point to the desired FAL server.
FAL_CLIENT
specifies the FAL (fetch archive log) client name that is used by the FAL service, configured through the
FAL_SERVER initialization parameter, to refer to the FAL client.
The value is an Oracle Net service name, which is assumed to be configured properly on the FAL server system to point to the FAL client (standby database).
6) What are differences between physical, logical, snapshot standby and ADG (or) what are different types of standby databases?
Physical standby – in mount state, MRP will apply archives
ADG – in READ ONLY state, MRP will apply archives
Logical standby – in READ ONLY state, LSP will run
Snapshot standby databases – Physical standby database can be converted to snapshot standby database, which will be in READ WRITE mode, can do any kind of testing, then we can convert back snapshot standby database to physical standby database and start MRP which will apply all pending archives.
7) How to find out backlog of standby?
select round((sysdate - a.next_time)*24*60) as "Backlog (min)", m.sequence#-1 "Seq Applied", m.process, m.status
from v$archived_log a,
     (select process, sequence#, status from v$managed_standby where process like '%MRP%') m
where a.sequence# = (m.sequence# - 1);
8) If you didn't have access to the standby database and you wanted to find out what error has occurred in a data guard configuration, what view would you check in the primary database to check the error message?
ANS:
You can check the v$dataguard_status view.
select message from v$dataguard_status;
9) How can u recover standby which far behind from primary (or) without archive logs how can we make standby sync?
ANS:
By using RMAN incremental backup.
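A rough outline of the SCN-based incremental approach (the SCN, file names and tag are placeholders):
-- on the standby, note the SCN to roll forward from:
SQL> SELECT current_scn FROM v$database;
-- on the primary:
RMAN> BACKUP INCREMENTAL FROM SCN 1234567 DATABASE FORMAT '/tmp/stby_%U' TAG 'FOR_STANDBY';
-- transfer the pieces to the standby server, then on the standby:
RMAN> CATALOG START WITH '/tmp/stby_';
RMAN> RECOVER DATABASE NOREDO;
(In practice a fresh standby controlfile may also be needed before restarting MRP, for example if datafiles were added on the primary.)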
10) What is snapshot standby (or) How can we give a physical standby to user in READ WRITE mode and let him do updates and revert back to standby?
ANS:
Till Oracle 10g: create a guaranteed restore point, open the standby read write, let the user do the updates, flashback to the restore point, and start MRP.
From Oracle 11g, convert physical standby to snapshot standby, let him do updates, convert to physical standby, start MRP.
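The 11g conversion, as a sketch (MRP is cancelled and the standby is mounted before converting):
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
SQL> ALTER DATABASE CONVERT TO SNAPSHOT STANDBY;
-- after the read-write testing, restart the database in MOUNT state, then:
SQL> ALTER DATABASE CONVERT TO PHYSICAL STANDBY;
-- restart in MOUNT state again and start MRP:
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;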
11) What are new features in 11g Data Guard?
ANS:
Here are some Data Guard categories and their enhancements:
1) Data Protection
Advanced Compression
Lost-write protection
Fast-Start Failover
2) Increase ROI
Active Data Guard
Snapshot Standby
3) High Availability
Faster Redo Apply
Faster failover & switchover
Automatic Failover using ASYNC
4) Manageability
Mixed Windows/Linux
12) What are the uses of standby redo log files
A standby redo log is required for the maximum protection and maximum availability modes and the LGWR ASYNC transport mode is recommended for all databases. Data Guard can recover and apply more redo data from a standby redo log than from archived redo log files alone.
If the real-time apply feature is enabled, log apply services can apply redo data as it is received, without waiting for the current standby redo log file to be archived.
This results in faster switchover and failover times because the standby redo log files have been applied already to the standby database by the time the failover or switchover begins.
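Standby redo logs are added with a command like the following (group number, path and size are assumptions; the size should match the online redo logs):
SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 4 ('/u01/oradata/prod/srl04.log') SIZE 200M;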
13) What is dg_config ?
ANS:
Specify the DG_CONFIG attribute to identify the DB_UNIQUE_NAME for the primary database and each standby database in the Data Guard configuration.
The default value of this parameter enables the primary database to send redo data to remote destinations and enables standby databases to receive redo data.
14) What is RTA (real time apply) mode MRP?
ANS:
With real-time apply, the LGWR process on the primary writes redo to the standby at the same time as it writes to the online redo log; on the standby server the RFS process writes this redo into a standby redo log file, and MRP applies it as it arrives.
There is no loss of any committed transaction whatsoever in a real-time apply scenario.
In real-time apply, once a transaction is committed on the primary, the committed changes are available on the standby in real time, even without switching the log at the primary.
MRP - Managed recovery process - For Data Guard, the background process that applies archived redo log to the standby database.
15) What is the difference between normal MRP (managed apply) and RTA MRP (real time apply)?
ANS:
The difference between Redo Apply & Real-Time Apply
------------------------------------------------------
Normally, by default, Archiver processes will be responsible for Redo Transport from Primary to Standby.
Once a log switch happens on the Primary, the online redo log is archived in the Local Archive destination as pointed to by Log_archive_dest_1
by an Archiver process.
Another Archiver process will then transmit the redo to the remote standby destination as indicated by Log_archive_dest_2.
Data Guard Remote File Server (RFS) Process on the Standby then writes redo data from the Standby redo log file to archive redo log file.
Log apply services then makes use of Managed Recovery Process (MRP) process to apply the redo to the standby database.
This method of propagating redo from the primary to standby is called Redo Apply and it happens only on log switch at the Primary.
When using Redo Apply mode, the status of MRP in v$managed_standby view will show as WAIT_FOR_LOG.
Real Time Apply, in contrast, uses either LGWR or Archiver on the Primary to write redo data to Standby Redo log on the Standby and Log Apply Services can apply the redo data in real-time without the need of the current standby redo log being archived. Once a transaction is committed on the Primary, the committed changes will be available on the Standby in Real Time even without switching the log.
When using Real Time Apply mode, the status of MRP in v$managed_standby view will show as APPLYING_LOG.
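For reference, real-time apply is typically started with the following command (this assumes standby redo logs exist on the standby):
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT FROM SESSION;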
16) What is the difference between SYNC/ASYNC, LGWR/ARCH, and AFFIRM/NOAFFIRM ?
ANS:
Specifies that network I/O is to be done synchronously (SYNC) or asynchronously (ASYNC) when archival is performed using the log writer process (LGWR).
Specifies whether redo transport services use archiver processes (ARCn) or the log writer process (LGWR) to collect transaction redo data and transmit it to standby destinations. If neither the ARCH nor the LGWR attribute is specified, the default is ARCH.
Controls whether redo transport services use synchronous or asynchronous I/O to write redo data to disk
AFFIRM—specifies that all disk I/O to archived redo log files and standby redo log files is performed synchronously and completes successfully before the log writer process continues.
NOAFFIRM—specifies that all disk I/O to archived redo log files and standby redo log files is performed asynchronously; the log writer process on the primary database does not wait until the disk I/O completes before continuing.
17) What is StaticConnectIdentifier property used for?
ANS:
StaticConnectIdentifier is a new database property in 11g R2 which allows the user to specify a static connect identifier that the DGMGRL client will use to start database instances.
18) What is failover/switchover (or) what is the difference between failover & switchover
ANS:
Switchover – This is done when both primary and standby databases are available. It is pre-planned.
Failover – This is done when the primary database is NO longer available (ie in a Disaster). It is not pre-planned.
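When the Data Guard Broker is configured, both operations are single commands in DGMGRL (the standby name prodsb is an assumption):
DGMGRL> SWITCHOVER TO 'prodsb';
DGMGRL> FAILOVER TO 'prodsb';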
29) What are the background processes involved in Data Guard?
ANS:
MRP (managed recovery), LSP (logical standby apply), RFS (remote file server), LNS (redo transport network server) and the ARCn archiver processes.
21)
*********************************************************************
Oracle Export/Import (exp/imp) - Data Pump (expdp/impdp) Interview Questions/FAQs :
*********************************************************************
1) What is use of CONSISTENT option in exp?
Cross-table consistency. Implements SET TRANSACTION READ ONLY. Default value N.
2) What is use of DIRECT=Y option in exp?
Setting DIRECT=Y extracts data by reading it directly, bypassing the SQL command-processing layer (the evaluating buffer), so it should be faster. Default value N.
3) What is use of COMPRESS option in exp?
Specifies how export manages the initial extent for the table data, so that the data is imported into one extent.
This parameter is helpful during database re-organization.
Export the objects (especially tables and indexes) with COMPRESS=Y:
if a table was spanning 20 extents of 1M each (which is not desirable from a performance point of view), exporting it with COMPRESS=Y generates DDL with an INITIAL extent of 20M, so the extents are coalesced on import.
It is sometimes desirable to export with COMPRESS=N when you do not have contiguous space in the tablespace and do not want the import to fail.
4) How to improve exp performance?
ANS:
a). Set the BUFFER parameter to a high value. Default is 256KB.
b). Stop unnecessary applications to free the resources.
c). If you are running multiple sessions, make sure they write to different disks.
d). Do not export to NFS (Network File Share). Exporting to disk is faster.
e). Set the RECORDLENGTH parameter to a high value.
f). Use DIRECT=yes (direct mode export).
5) How to improve imp performance?
ANS:
a). Place the file to be imported in separate disk from datafiles.
b). Increase the DB_CACHE_SIZE.
c). Set LOG_BUFFER to big size.
d). Stop redolog archiving, if possible.
e). Use COMMIT=n, if possible.
f). Set the BUFFER parameter to a high value. Default is 256KB.
g). It's advisable to drop indexes before importing to speed up the import process or set INDEXES=N and building indexes later on after the import.
Indexes can easily be recreated after the data was successfully imported.
h). Use STATISTICS=NONE
i). Disable the INSERT triggers, as they fire during import.
j). Set Parameter COMMIT_WRITE=NOWAIT(in Oracle 10g) or COMMIT_WAIT=NOWAIT (in Oracle 11g) during import.
6) What is use of INDEXFILE option in imp?
ANS:
Will write DDLs of the objects in the dumpfile into the specified file.
7) What is use of IGNORE option in imp?
ANS:
Will ignore the errors during import and will continue the import.
8) What are the differences between expdp and exp (Data Pump or normal exp/imp)?
ANS:
Data Pump is server centric (files will be at server).
Data Pump has APIs, from procedures we can run Data Pump jobs.
In Data Pump, we can stop and restart the jobs.
Data Pump will do parallel execution.
Tapes & pipes are not supported in Data Pump.
Data Pump consumes more undo tablespace.
Data Pump import will create the user, if user doesn’t exist.
9) Why expdp is faster than exp (or) why Data Pump is faster than conventional export/import?
Data Pump is block mode, exp is byte mode.
Data Pump will do parallel execution.
Data Pump uses direct path API.
10) How to improve expdp performance?
ANS:
Using parallel option which increases worker threads. This should be set based on the number of cpus.
11) How to improve impdp performance?
ANS:
Using parallel option which increases worker threads. This should be set based on the number of cpus.
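A sketch of a parallel export and import (directory object, dump file names and credentials are assumptions; the %U wildcard lets each worker write to or read from its own file):
$ expdp system/password DIRECTORY=dp_dir DUMPFILE=full_%U.dmp FULL=Y PARALLEL=4
$ impdp system/password DIRECTORY=dp_dir DUMPFILE=full_%U.dmp FULL=Y PARALLEL=4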
12) In Data Pump, where the jobs info will be stored (or) if you restart a job in Data Pump, how it will know from where to resume?
Whenever a Data Pump export or import is running, Oracle creates a master table named after the JOB_NAME, which is deleted once the job is done. From this table, Oracle finds out how much of the job has completed and from where to continue, etc.
Default export job name will be SYS_EXPORT_XXXX_01, where XXXX can be FULL or SCHEMA or TABLE.
Default import job name will be SYS_IMPORT_XXXX_01, where XXXX can be FULL or SCHEMA or TABLE.
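Running jobs can be listed and re-attached roughly like this (the job name shown is just the default naming pattern):
SQL> SELECT owner_name, job_name, operation, job_mode, state FROM dba_datapump_jobs;
$ expdp system/password ATTACH=SYS_EXPORT_FULL_01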
13) What is the order of importing objects in impdp?
Tablespaces
Users
Roles
Database links
Sequences
Directories
Synonyms
Types
Tables/Partitions
Views
Comments
Packages/Procedures/Functions
Materialized views
14) How to import only metadata?
ANS:
CONTENT= METADATA_ONLY
15) How to import into different user/tablespace/datafile/table?
ANS:
REMAP_SCHEMA
REMAP_TABLESPACE
REMAP_DATAFILE
REMAP_TABLE
REMAP_DATA
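For example (schema, tablespace, directory and dump file names are assumptions):
$ impdp system/password DIRECTORY=dp_dir DUMPFILE=scott.dmp REMAP_SCHEMA=scott:hr REMAP_TABLESPACE=users:hr_data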
16) Using Data Pump, how to export in higher version (11g) and import into lower version (10g), can we import to 9i?
ANS:
Import Data Pump can always read Data Pump export dumpfile sets created by older versions of the database, so a normal expdp on 10g followed by impdp on 11g works.
The VERSION parameter in Data Pump is for the other way around: if you want to import data taken from 11g into a 10g database, you need to specify VERSION while taking the export.
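A sketch of taking an 11g export that a 10.2 database can import (names are assumptions):
$ expdp system/password DIRECTORY=dp_dir DUMPFILE=scott_v102.dmp SCHEMAS=scott VERSION=10.2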
17) How to do transport tablespaces (and across platforms) using exp/imp or expdp/impdp?
ANS: [http://satya-dba.blogspot.in/2010/01/oracle-transportable-tablespaces-tts.html ]
We can use the transportable tablespaces feature to copy/move subset of data (set of user tablespaces), from an Oracle database and plug it in to another Oracle database. The tablespaces being transported can be either dictionary managed or locally managed.
With Oracle 8i, Oracle introduced transportable tablespace (TTS) technology that moves tablespaces between databases. Oracle 8i supports tablespace transportation between databases that run on same OS platforms and use the same database block size.
With Oracle 9i, TTS (Transportable Tablespaces) technology was enhanced to support tablespace transportation between databases on platforms of the same type, but using different block sizes.
With Oracle 10g, TTS (Transportable Tablespaces) technology was further enhanced to support transportation of tablespaces between databases running on different OS platforms (e.g. Windows to Linux, Solaris to HP-UX), which has same ENDIAN formats. Oracle Database 10g Release 1 introduced cross platform transportable tablespaces (XTTS), which allows data files to be moved between platforms of different endian format. XTTS is an enhancement to the transportable tablespace (TTS). If ENDIAN formats are different we have to use RMAN (e.g. Windows to Solaris, Tru64 to AIX).
select * from v$transportable_platform order by platform_id;
18) How to find out which schemas (or what contents) are in an export dumpfile?
strings dumpfile.dmp | grep SCHEMA_LIST
(or)
$ strings myfile.dmp|more
*************************************************************************************************************************************************************
Oracle Performance Related Interview Questions/FAQs :
**********************************************************
1) What you’ll check whenever user complains that his session/database is slow?
2) What is the use of statistics?
ANS:
Optimizer statistics are a collection of data that describe more details about the database and the objects in the database. These statistics are used by the query optimizer to choose the best execution plan for each SQL statement.
3) How to generate explain plan?
ANS:
EXPLAIN PLAN FOR <SQL statement>;
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
4) How to check explain plan of already ran SQLs?
ANS:
select * from TABLE(dbms_xplan.display_cursor('&SQL_ID'));
5) How to find out whether the query has ran with RBO or CBO?
ANS:
It's very simple: from the SQL text alone you cannot tell whether the CBO or the RBO was used.
If optimizer_mode=choose, then SQL statements will use the CBO when the tables they are accessing have statistics collected,
and will use the RBO when the tables they are accessing have no statistics.
6) What are top 5 wait events (in AWR report) and how you will resolve them?
ANS:
http://satya-dba.blogspot.in/2012/10/wait-events-in-oracle-wait-events.html
db file sequential read => tune indexing, tune SQL (to do less I/O), tune disks, increase buffer cache. This event is indicative of disk contention on index reads. Make sure all objects are analyzed. Redistribute I/O across disks. The wait that comes from the physical side of the database. It related to memory starvation and non selective index use. Sequential read is an index read followed by table read because it is doing index lookups which tells exactly which block to go to.
db file scattered read => disk contention on full table scans. Add indexes, tune SQL, tune disks, refresh statistics, and create materialized view. Caused due to full table scans may be because of insufficient indexes or unavailability of updated statistics.
db file parallel read => tune SQL, tune indexing, tune disk I/O, increase buffer cache. If you are doing a lot of partition activity then expect to see this wait event. It could be a table or index partition.
db file parallel write => if you are doing a lot of partition activity then expect to see this wait event. It could be a table or index partition.
db file single write => if you see this event than probably you have a lot of data files in your database.
control file sequential read
control file parallel write
log file sync => committing too often, archive log generation is more. Tune applications to commit less, tune disks where redo logs exist, try using nologging/unrecoverable options, log buffer could be too large.
log file switch completion => May need more log files per group.
log file parallel write => Deals with flushing out the redo log buffer to disk. Disks may be too slow or have an I/O bottleneck. Look for log file contention.
log buffer space => Increase LOG_BUFFER parameter or move log files to faster disks. Tune application, use NOLOGGING, and look for poor behavior that updates an entire row when only a few columns change.
log file switch (checkpoint incomplete) => May indicate excessive db files or slow IO subsystem.
log file switch (archiving needed) => Indicates archive files are written too slowly.
redo buffer allocation retries => shows the number of times a user process waited for space in the redo log buffer.
redo log space wait time => shows cumulative time (in 10s of milliseconds) waited by all processes waiting for space in the log buffer.
buffer busy waits/ read by other session => Increase DB_CACHE_SIZE. Tune SQL, tune indexing, we often see this event along with full table scans, if the SQL is inserting data, consider increasing FREELISTS and/or INITRANS, if the waits are on segment header blocks, consider increasing extent sizes.
free buffer waits => insufficient buffers, processes holding buffers too long, or the I/O subsystem is overloaded. Also check whether DBWR is keeping up; the database writes may be getting clogged up.
cache buffers lru chain => Freelist issues, hot blocks.
no free buffers => Insufficient buffers, dbwr contention.
latch free
latch: session allocation
latch: in memory undo latch => If excessive could be bug, check for your version, may have to turn off in memory undo.
latch: cache buffer chains => check hot objects.
latch: cache buffer handles => Freelist issues, hot blocks.
direct path write => You won't see these unless you are doing some appends or data loads.
direct Path reads => could happen if you are doing a lot of parallel query activity.
direct path read temp or direct path write temp => this wait event shows Temp file activity (sort,hashes,temp tables, bitmap) check pga parameter or sort area or hash area parameters. You might want to increase them.
library cache load lock
library cache pin => if many sessions are waiting, tune shared pool, if few sessions are waiting, lock is session specific.
library cache lock => need to find the session holding the lock, look for DML manipulating an object being accessed, if the session is trying to recompile PL/SQL, look for other sessions executing the code.
undo segment extension => If excessive, tune undo.
wait for a undo record => Usually only during recovery of large transactions, look at turning off parallel undo recovery.
enqueue wait events => Look at V$ENQUEUE_STAT
SQL*Net message from client
SQL*Net message from dblink
SQL*Net more data from client
SQL*Net message to client
SQL*Net break/reset to client
7) What are the init parameters related to performance/optimizer?
ANS:
optimizer_mode = choose
optimizer_index_caching = 90
optimizer_index_cost_adj = 25
optimizer_max_permutations = 100
optimizer_use_sql_plan_baselines=true
optimizer_capture_sql_plan_baselines=true
optimizer_use_pending_statistics = true;
optimizer_use_invisible_indexes=true
_optimizer_connect_by_cost_based=false
_optimizer_compute_index_stats= true;
8) What are the values of optimizer_mode init parameters and their meaning?
ANS:
optimizer_mode can be set to CHOOSE, RULE, FIRST_ROWS, FIRST_ROWS_n or ALL_ROWS.
CHOOSE (up to 9i) lets Oracle pick the CBO or RBO based on the availability of statistics, RULE forces the rule-based optimizer,
FIRST_ROWS / FIRST_ROWS_n optimize for the fastest return of the first row(s), and ALL_ROWS (the default from 10g) optimizes for best overall throughput.
9) What is the use of AWR, ADDM, ASH?
10) How to generate AWR report and what are the things you will check in the report?
11). How to generate ADDM report and what are the things you will check in the report?
12). How to generate ASH report and what are the things you will check in the report?
13) How to generate TKPROF report and what are the things you will check in the report?
ANS:
The tkprof tool is a tuning tool used to determine cpu and execution times for SQL statements. Use it by first setting timed_statistics to true in the initialization file and then turning on tracing for either the entire database via the sql_trace parameter or for the session using the ALTER SESSION command. Once the trace file is generated you run the tkprof tool against the trace file and then look at the output from the tkprof tool. This can also be used to generate explain plan output.
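A rough end-to-end sketch (the trace file name depends on your instance and process id; the sort and explain options are optional):
SQL> ALTER SESSION SET sql_trace = TRUE;
-- run the statements to be traced, then locate the trace file and run:
$ tkprof prod_ora_12345.trc tkprof_report.txt sys=no sort=exeela explain=scott/tiger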
14)
*********************************************************************
UNIX Interview Questions/FAQs for Oracle DBAs:
************************************************
1) What’s the difference between soft link and hard link?
Ans:
A symbolically (soft) linked file and the target file can be located on the same or different file systems, whereas for a hard link both must be located on the same file system, because they share the same inode number and an inode table is unique to a file system.
2) How you will read a file from shell script?
Ans:
while IFS= read -r line
do
  echo "$line"
done < file_name
3) What's the use of umask?
ANS:
umask decides the default permissions for newly created files and directories.
4) What is crontab and what are the arguments?
Ans:
The entries have the following elements:
field allowed values
----- --------------
minute 0-59
hour 0-23
day of month 1-31
month 1-12
day of week 0-7 (both 0 and 7 are Sunday)
user Valid OS user
command Valid command or script
* * * * * command
| | | | |_________day of the week (0-6, 0=Sunday)
| | | |___________month (1-12)
| | |_____________day of the month (1-31)
| |_______________hour (0-23)
|_________________minute (0-59)
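For example, a weekly backup entry running at 22:00 every Sunday (the script path is an assumption):
00 22 * * 0 /home/oracle/scripts/rman_full_backup.sh > /tmp/rman_full_backup.log 2>&1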
5) How to find operating system (OS) version?
Ans:
uname -a
6) How to find out the run level of the user?
Ans:
who -r (on Linux, the runlevel command also shows it)
7) How to delete 7 days old trace files?
Ans:
find ./trace -name "*.trc" -mtime +7 -exec rm {} \;
8) What is top command?
Ans:
top is an operating system command; it displays the top processes that are consuming the most CPU and memory.
9) How to get the 10th line of a file (by using grep)?
10)
*************************************************************************************************************************************************************
Architecture:
Oracle DBA Interview Questions/FAQs Part1 :
1) What is an instance?
ANS:
SGA + background processes.
2) What is SGA?
ANS:
System/Shared Global Area.
3) What is PGA (or) what is pga_aggregate_target?
ANS:
Program Global Area. pga_aggregate_target specifies the total memory available for the PGAs of all server processes of the instance.
4) What are new memory parameters in Oracle 10g?
ANS:
SGA_TARGET, PGA_TARGET
5) What are new memory parameters in Oracle 11g?
ANS:
MEMORY_TARGET
6) What are the mandatory background processes?
ANS:
DBWR LGWR SMON PMON CKPT RECO.
7) What are the optional background processes?
ANS:
ARCH, MMAN, MMNL, MMON, CTWR, ASMB, RBAL, ARBx etc.
8) What are the new background processes in Oracle 10g?
ANS:
MMAN MMON MMNL CTWR ASMB RBAL ARBx
9) What are the new features in Oracle 9i?
http://satya-dba.blogspot.com/2009/01/whats-new-in-9i.html
10) What are the new features in Oracle 10g?
http://satya-dba.blogspot.com/2009/01/whats-new-in-10g.html
11) What are the new features in Oracle 11g?
http://satya-dba.blogspot.com/2009/01/whats-new-in-11g.html
12) What are the new features in Oracle 11g R2?
http://satya-dba.blogspot.com/2009/09/whats-new-in-11g-release-2.html
13) What process will get data from datafiles to DB cache?
ANS:
Server process
14) What background process will write data to datafiles?
ANS:
DBWR
15) What background process will write undo data?
ANS:
DBWR
16) What are physical components of Oracle database?
ANS:
Oracle database is comprised of three types of files. One or more datafiles, two or more redo log files, and one or more control files.
Password file and parameter file also come under physical components.
17) What are logical components of Oracle database?
ANS:
Blocks, Extents, Segments, Tablespaces.
18) What is segment space management?
ANS:
AUTO (ASSM, using bitmaps) and MANUAL (using freelists).
19) What is extent management?
ANS:
Local (LMTS) and Dictionary managed (DMTS).
20) What are the differences between LMTS and DMTS?
Tablespaces that record extent allocation in the dictionary are called dictionary managed tablespaces,
and tablespaces that record extent allocation in the tablespace header are called locally managed tablespaces.
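For illustration, a locally managed tablespace with automatic segment space management is created like this (tablespace name, datafile path and size are assumptions):
SQL> CREATE TABLESPACE app_data DATAFILE '/u01/oradata/prod/app_data01.dbf' SIZE 500M
     EXTENT MANAGEMENT LOCAL AUTOALLOCATE
     SEGMENT SPACE MANAGEMENT AUTO;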
*************************************************************************************************************************************************************
Oracle DBA Interview Questions/FAQs Part2 :
********************************************
1) What is a datafile?
ANS:
Every Oracle database has one or more physical datafiles. Datafiles contain all the database data. The data of logical database structures such as tables and indexes is physically stored in the datafiles allocated for a database.
2) What are the contents of control file?
ANS:
Database name, SCN, LSN, datafile locations, redolog locations, archive mode, DB Creation Time, RMAN Backup & Recovery Details, Flashback mode.
3) What is the use of redo log files?
ANS:
Online redo logs serve to protect the database in the event of an instance failure. Whenever a transaction is committed, the corresponding redo entries temporarily stored in redo log buffers of the system global area are written to an online redo log file by the background process LGWR.
4) What are the uses of undo tablespace or redo segments?
ANS:
Undo records are used to:
Roll back transactions when a ROLLBACK statement is issued
Recover the database
Provide read consistency
Analyze data as of an earlier point in time by using Flashback Query
Recover from logical corruptions using Flashback features
5) How can an undo tablespace guarantee retention of the required undo data?
ANS:
Alter tablespace undo_ts retention guarantee;
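To check the current setting (a quick sketch):
SQL> SELECT tablespace_name, retention FROM dba_tablespaces WHERE contents = 'UNDO';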
6) What is the ORA-01555 (snapshot too old) error and how do you avoid it?
ANS:
http://www.dba-oracle.com/t_ora_01555_snapshot_old.htm
7) What is the use/size of temporary tablespace?
ANS:
Temporary tablespaces are used to manage space for database sort operations and for storing global temporary tables
8) What is the use of password file?
ANS:
If the DBA wants to start up an Oracle instance, there must be a way for Oracle to authenticate the DBA, i.e. to check whether he or she is allowed to do so. This password cannot be stored in the database, because Oracle cannot access the database before the instance is started; the authentication must therefore happen outside of the database. There are two distinct mechanisms to authenticate the DBA: the password file or the operating system.
The init parameter remote_login_passwordfile specifies whether a password file is used to authenticate the DBA. If it is set to either shared or exclusive, a password file will be used.
9) How to create password file?
ANS:
$ orapwd file=orapwSID password=sys_password force=y nosysdba=y
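To verify which users have been granted SYSDBA/SYSOPER through the password file (a quick check):
SQL> SELECT * FROM v$pwfile_users;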
10) How many types of indexes are there?
ANS:
Clustered and non-clustered.
1. B-tree index
2. Bitmap index
3. Unique index
4. Function-based index
Indexes may also be implicit (created automatically for primary key and unique constraints) or explicit (created with CREATE INDEX).
Explicit indexes come in many variants, such as simple, unique, bitmap, function-based, reverse key and cluster indexes, and index-organized tables (IOTs).
11) What is bitmap index & when it’ll be used?
ANS:
Bitmap indexes are preferred in Data warehousing environment.
Preferred when cardinality is low.
12) What is B-tree index & when it’ll be used?
ANS:
B-tree indexes are preferred in OLTP environment.
Preferred when cardinality is high.
13) How you will find out fragmentation of index?
ANS:
AUTO_SPACE_ADVISOR_JOB runs in the daily maintenance window and reports fragmented indexes/tables.
ANALYZE INDEX <index_name> VALIDATE STRUCTURE;
This populates the INDEX_STATS view. Note that it holds only one row (for the last index analysed), so only one index can be checked at a time; see the example query after the list below.
An index should be considered for rebuilding under any of the following conditions:
* The percentage of deleted rows exceeds 30% of the total, i.e. if
del_lf_rows / lf_rows > 0.3.
* If the ‘HEIGHT’ is greater than 4.
* If the number of rows in the index (‘LF_ROWS’) is significantly smaller than ‘LF_BLKS’ this can indicate a large number of deletes, indicating that the index should be rebuilt.
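A minimal sketch of this check (the index name is only an example):
SQL> ANALYZE INDEX emp_idx VALIDATE STRUCTURE;
SQL> SELECT name, height, lf_rows, lf_blks, del_lf_rows,
            ROUND(del_lf_rows / NULLIF(lf_rows, 0) * 100, 2) pct_deleted
     FROM index_stats;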
14) What is the difference between delete and truncate?
ANS:
Truncate will release the space. Delete won’t.
Delete can be used to delete some records. Truncate can’t.
Delete can be rolled back; truncate cannot.
Delete generates undo (delete logs the individual row changes, whereas truncate simply deallocates the data without logging each row; hence data removed by DELETE can be rolled back but data removed by TRUNCATE cannot).
Truncate is a DDL statement whereas DELETE is a DML statement.
Truncate is faster than delete.
15) What's the difference between a primary key and a unique key?
ANS:
Both primary key and unique constraints enforce uniqueness of the column(s) on which they are defined, and both are enforced through a unique index.
A primary key doesn't allow NULLs; a unique key allows NULLs (in Oracle, more than one row may have NULL in the unique key column).
A table can have only one primary key but several unique keys.
16) What is the difference between schema and user?
A user is a database account; a schema is the collection of objects (tables, indexes, views, etc.) owned by that user.
17) What is the difference between SYSDBA, SYSOPER and SYSASM?
ANS:
SYSOPER can’t create and drop database.
SYSOPER can’t do incomplete recovery.
SYSOPER can’t change character set.
SYSOPER can’t CREATE DISKGROUP, ADD/DROP/RESIZE DISK
SYSASM (introduced in 11g) is meant for administering ASM instances, separating storage administration from database administration.
18) What is the difference between SYS and SYSTEM?
SYSTEM can’t shutdown the database.
SYSTEM can’t create another SYSTEM, but SYS can create another SYS or SYSTEM.
19) What is the difference between view and materialized view?
A view is logical: it stores only the query and always returns the latest data.
A materialized view is physical: it stores the data and may not reflect the latest data until it is refreshed.
20)
************************************************************************************************************************************************************
Oracle DBA Interview Questions/FAQs Part3 :
*********************************************
1) What are materialized view refresh types and which is default?
ANS:
Complete, fast, force (default).
2) How to find out when was a materialized view refreshed?
ANS:
Query dba_mviews or dba_mview_analysis or dba_mview_refresh_times
SQL> select MVIEW_NAME, to_char(LAST_REFRESH_DATE,'YYYY-MM-DD HH24:MI:SS') from dba_mviews;
(or)
SQL> select NAME, to_char(LAST_REFRESH,'YYYY-MM-DD HH24:MI:SS') from dba_mview_refresh_times;
(or)
SQL> select MVIEW_NAME, to_char(LAST_REFRESH_DATE,'YYYY-MM-DD HH24:MI:SS') from dba_mview_analysis;
3) What is atomic refresh in mviews?
ANS:
From Oracle 10g, complete refresh of single materialized view can do delete instead of truncate.
To force the refresh to do truncate instead of delete, parameter ATOMIC_REFRESH must be set to false.
ATOMIC_REFRESH = FALSE, mview will be truncated and whole data will be inserted. The refresh will go faster, and no undo will be generated.
ATOMIC_REFRESH = TRUE (default), mview will be deleted and whole data will be inserted. Undo will be generated. We will have access at all times even while it is being refreshed.
SQL> EXEC DBMS_MVIEW.REFRESH('mv_emp', 'C', atomic_refresh => FALSE);
4) How to find out whether database/tablespace/datafile is in backup mode or not?
ANS:
Query V$BACKUP view.
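For example, datafiles currently in backup mode show STATUS = 'ACTIVE':
SQL> SELECT file#, status, time FROM v$backup WHERE status = 'ACTIVE';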
5) What is row chaining?
ANS:
If a row is too large to fit into a single data block, Oracle stores the data for the row in a chain of two or more data blocks. This typically occurs when the row is first inserted.
6) What is row migration?
ANS:
An update increases the amount of data in a row so that the row no longer fits in its data block.
Oracle then looks for another block with enough free space to hold the entire row; if one is available, it moves the whole row to the new block and leaves a pointer to it in the original block (the rowid does not change).
7) What are different types of partitions?
ANS:
With Oracle8, Range partitioning (on single column) was introduced.
With Oracle8i, Hash and Composite(Range-Hash) partitioning was introduced.
With Oracle9i, List partitioning and Composite(Range-List) partitioning was introduced.
With Oracle 11g, Interval partitioning, REFerence partitioning, Virtual column based partitioning, System partitioning and Composite partitioning [Range-Range, List-List, List-Range, List-Hash, Interval-Range, Interval-List, Interval-Interval] was introduced.
8) What is local partitioned index and global partitioned index?
ANS:
A local index is an index on a partitioned table which is partitioned in the exact same manner as the underlying partitioned table. Each partition of a local index corresponds to one and only one partition of the underlying table.
A global partitioned index is an index on a partitioned or non-partitioned table that is partitioned using a different partitioning key from the table and can have a different number of partitions. Global partitioned indexes can be range partitioned (hash partitioning of global indexes is also supported from Oracle 10g).
9) How you will recover if you lost one/all control file(s)?
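ANS:
A short outline (one possible approach, assuming controlfile autobackup is configured; names and paths are illustrative):
If only one multiplexed copy is lost: shut down the instance, copy a surviving control file to the missing location (or adjust the CONTROL_FILES parameter), and start the database.
If all copies are lost, restore from autobackup with RMAN:
RMAN> STARTUP NOMOUNT;
RMAN> RESTORE CONTROLFILE FROM AUTOBACKUP;
RMAN> ALTER DATABASE MOUNT;
RMAN> RECOVER DATABASE;
RMAN> ALTER DATABASE OPEN RESETLOGS;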
10) Why more archivelogs are generated, when database is begin backup mode?
ANS:
When a tablespace is placed in begin backup mode its datafile header checkpoint is frozen, and the first time a block is changed after that the entire block image (not just the change vectors) is written to redo. This produces more redo, more log switches and therefore more archive logs.
Normally only deltas (change vectors) are logged to the redo logs.
When in backup mode, Oracle will write complete changed blocks to the redo log files.
This is done mainly to overcome fractured blocks. The Oracle block size is usually equal to or a multiple of the operating system block size.
e.g. Suppose the Oracle block size is 8k and the OS/backup I/O size is 4k, so each Oracle block spans two OS blocks. While the database is in backup mode, an Oracle block is being updated at the same moment the OS backup is copying it; the copy may contain the first half of the block from before the change and the second half from after it. Such a block in the backup is fractured (inconsistent), so to guard against this Oracle writes the whole block image to the redo log the first time the block is changed, and that image can be used during recovery. This is why redo generation increases.
11) What UNIX parameters you will set while Oracle installation?
ANS:
shmmax, shmmni, shmall, and the semaphore parameters (semmsl, semmns, semopm, semmni).
12) What is the use of inittrans and maxtrans in table definition?
13) What are differences between dbms_job and dbms_scheduler?
Through dbms_scheduler we can schedule OS-level jobs also.
14) What are differences between dbms_scheduler and cron jobs?
Through dbms_scheduler we can schedule database jobs; through cron we can't.
15) Difference between CPU & PSU patches?
CPU - Critical Patch Update - includes only Security related patches.
PSU - Patch Set Update - includes CPU + other patches deemed important enough to be released prior to a minor (or major) version release.
16) What you will do if (local) inventory corrupted [or] opatch lsinventory is giving error?
17) What are the entries/location of oraInst.loc?
ANS:
/etc/oraInst.loc points to the central (global)/local Oracle inventory; it contains the entries inventory_loc (the inventory location) and inst_group (the OS group that owns it).
18) What is the difference between central/global inventory and local inventory?
ANS:
19)
*************************************************************************************************************************************************************
Oracle DBA Interview Questions/FAQs Part4 :
**********************************************
1) What is the use of root.sh & oraInstRoot.sh?
Ans:
Changes ownership & permissions of oraInventory
Creates the oratab file in the /etc directory
In RAC, starts the clusterware stack
2) How can you transport tablespaces across platforms with different endian formats?
Ans:
RMAN (using the CONVERT TABLESPACE / CONVERT DATAFILE command).
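A minimal sketch with RMAN CONVERT (the tablespace name, platform name and format are illustrative; the tablespace must first be made read only, and the metadata is then exported with Data Pump transportable tablespaces):
SQL> ALTER TABLESPACE users READ ONLY;
RMAN> CONVERT TABLESPACE users TO PLATFORM 'Linux IA (32-bit)' FORMAT '/tmp/%U';
Valid platform names can be found in V$TRANSPORTABLE_PLATFORM.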
3) What is transportable tablespace (and across platforms)?
4) What is xtss (cross platform transportable tablespace)?
5) What is the difference between restore point & guaranteed restore point?
6) How to find if your Oracle database is 32 bit or 64 bit?
Ans:
Execute the command "file $ORACLE_HOME/bin/oracle". If you see output like
/u01/db/bin/oracle: ELF 64-bit MSB executable SPARCV9 Version 1
you are on 64-bit Oracle. If your Oracle is 32-bit you will see output like
oracle: ELF 32-bit MSB executable SPARC Version 1
7) How to find opatch Version ?
Ans:
OPatch is the utility used to apply database patches. To find the OPatch version, execute "$ORACLE_HOME/OPatch/opatch version".
8) Suppose I created a table and, after a few days, performed some inserts and updates. How can I find out when DDL or DML operations were last performed on that table?
ANS:
DDL:
select OWNER, OBJECT_NAME, CREATED, LAST_DDL_TIME from dba_objects where OBJECT_NAME='&object_name';
DML:
SQL> select max(ora_rowscn), scn_to_timestamp(max(ora_rowscn)) from PS_PAY_TAX;
MAX(ORA_ROWSCN) SCN_TO_TIMESTAMP(MAX(ORA_ROWSCN))
--------------- ---------------------------------------------------------------------------
6016929147 04-JAN-12 08.41.20.000000000 AM
SQL>
SQL> select table_name, inserts, updates, deletes, timestamp,truncated from user_tab_modifications where table_name='TEST1';
TABLE_NAME INSERTS UPDATE DELETES TIMESTAMP TRU DROP_SEG
--------- -------- ------- -------- ------------------- --- --------
TEST1 4 0 0 04.08.2008 12:03:32 NO 0
=====================================================================
$ strings myfile.dmp|more
*************************************************************************************************************************************************************
Oracle Performance Related Interview Questions/FAQs :
**********************************************************
1) What will you check when a user complains that his session/database is slow?
2) What is the use of statistics?
ANS:
Optimizer statistics are a collection of data that describe more details about the database and the objects in the database. These statistics are used by the query optimizer to choose the best execution plan for each SQL statement.
3) How to generate explain plan?
ANS:
EXPLAIN PLAN FOR <SQL statement>; then display it with SELECT * FROM TABLE(dbms_xplan.display);
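For example (the table name is illustrative):
SQL> EXPLAIN PLAN FOR SELECT * FROM emp WHERE deptno = 10;
SQL> SELECT * FROM TABLE(dbms_xplan.display);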
4) How to check the explain plan of SQLs that have already run?
ANS:
select * from TABLE(dbms_xplan.display_cursor('&SQL_ID'));
5) How to find out whether a query has run with the RBO or the CBO?
ANS:
From the SQL text alone you cannot tell whether the RBO or the CBO was used; it depends on the optimizer mode and statistics.
If optimizer_mode = choose, a statement will use the CBO when the tables it accesses have statistics collected,
and will use the RBO when the tables it accesses have no statistics.
In the execution plan, CBO plans show a Cost value while RBO plans do not.
6) What are top 5 wait events (in AWR report) and how you will resolve them?
ANS:
http://satya-dba.blogspot.in/2012/10/wait-events-in-oracle-wait-events.html
db file sequential read => tune indexing, tune SQL (to do less I/O), tune disks, increase buffer cache. This event indicates disk contention on index reads. Make sure all objects are analyzed and redistribute I/O across disks. It comes from the physical side of the database and is related to memory starvation and non-selective index use. A sequential read is an index read followed by a table read, because the index lookup tells exactly which block to go to.
db file scattered read => disk contention on full table scans. Add indexes, tune SQL, tune disks, refresh statistics, and consider materialized views. Usually caused by full table scans due to insufficient indexes or stale statistics.
db file parallel read => tune SQL, tune indexing, tune disk I/O, increase buffer cache. If you are doing a lot of partition activity, expect to see this wait event; it could be a table or index partition.
db file parallel write => if you are doing a lot of partition activity, expect to see this wait event; it could be a table or index partition.
db file single write => if you see this event, you probably have a lot of data files in your database.
control file sequential read
control file parallel write
log file sync => committing too often, archive log generation is more. Tune applications to commit less, tune disks where redo logs exist, try using nologging/unrecoverable options, log buffer could be too large.
log file switch completion => May need more log files per group.
log file parallel write => Deals with flushing out the redo log buffer to disk. Disks may be too slow or have an I/O bottleneck. Look for log file contention.
log buffer space => Increase LOG_BUFFER parameter or move log files to faster disks. Tune application, use NOLOGGING, and look for poor behavior that updates an entire row when only a few columns change.
log file switch (checkpoint incomplete) => May indicate excessive db files or slow IO subsystem.
log file switch (archiving needed) => Indicates archive files are written too slowly.
redo buffer allocation retries => shows the number of times a user process waited for space in the redo log buffer.
redo log space wait time => shows cumulative time (in 10s of milliseconds) waited by all processes waiting for space in the log buffer.
buffer busy waits/ read by other session => Increase DB_CACHE_SIZE. Tune SQL, tune indexing, we often see this event along with full table scans, if the SQL is inserting data, consider increasing FREELISTS and/or INITRANS, if the waits are on segment header blocks, consider increasing extent sizes.
free buffer waits => insufficient buffers, processes holding buffers too long, or an overloaded I/O subsystem. Also check DBWR; its writes may be getting clogged up.
cache buffers lru chain => Freelist issues, hot blocks.
no free buffers => Insufficient buffers, dbwr contention.
latch free
latch: session allocation
latch: in memory undo latch => If excessive could be bug, check for your version, may have to turn off in memory undo.
latch: cache buffer chains => check hot objects.
latch: cache buffer handles => Freelist issues, hot blocks.
direct path write => you won't see these unless you are doing appends or data loads.
direct path read => can happen if you are doing a lot of parallel query activity.
direct path read temp or direct path write temp => these wait events show temp file activity (sorts, hashes, temporary tables, bitmaps); check the PGA, sort area or hash area parameters, as you might want to increase them.
library cache load lock
library cache pin => if many sessions are waiting, tune shared pool, if few sessions are waiting, lock is session specific.
library cache lock => need to find the session holding the lock, look for DML manipulating an object being accessed, if the session is trying to recompile PL/SQL, look for other sessions executing the code.
undo segment extension => If excessive, tune undo.
wait for a undo record => Usually only during recovery of large transactions, look at turning off parallel undo recovery.
enqueue wait events => look at V$ENQUEUE_STAT.
SQL*Net message from client
SQL*Net message from dblink
SQL*Net more data from client
SQL*Net message to client
SQL*Net break/reset to client
7) What are the init parameters related to performance/optimizer?
ANS:
optimizer_mode = choose
optimizer_index_caching = 90
optimizer_index_cost_adj = 25
optimizer_max_permutations = 100
optimizer_use_sql_plan_baselines=true
optimizer_capture_sql_plan_baselines=true
optimizer_use_pending_statistics = true;
optimizer_use_invisible_indexes=true
_optimizer_connect_by_cost_based=false
_optimizer_compute_index_stats= true;
8) What are the values of optimizer_mode init parameters and their meaning?
ANS:
CHOOSE, RULE, FIRST_ROWS, FIRST_ROWS_n (n = 1, 10, 100 or 1000) and ALL_ROWS (the default from 10g). ALL_ROWS optimizes for overall throughput, FIRST_ROWS_n optimizes for returning the first n rows quickly, and RULE/CHOOSE relate to the deprecated rule-based optimizer.
9) What is the use of AWR, ADDM, ASH?
10) How to generate AWR report and what are the things you will check in the report?
11) How to generate ADDM report and what are the things you will check in the report?
12) How to generate ASH report and what are the things you will check in the report?
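ANS:
The standard scripts shipped under $ORACLE_HOME/rdbms/admin can be run from SQL*Plus (each prompts for the snapshot range / time window and the report format):
AWR:  SQL> @?/rdbms/admin/awrrpt.sql
ADDM: SQL> @?/rdbms/admin/addmrpt.sql
ASH:  SQL> @?/rdbms/admin/ashrpt.sql
In an AWR report, typical starting points are the load profile, the top 5 timed/wait events and the top SQL sections; an ADDM report lists findings and recommendations; an ASH report shows the top events, sessions and SQL over the sampled interval.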
13) How to generate TKPROF report and what are the things you will check in the report?
ANS:
The tkprof tool is a tuning tool used to determine cpu and execution times for SQL statements. Use it by first setting timed_statistics to true in the initialization file and then turning on tracing for either the entire database via the sql_trace parameter or for the session using the ALTER SESSION command. Once the trace file is generated you run the tkprof tool against the trace file and then look at the output from the tkprof tool. This can also be used to generate explain plan output.
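A minimal sketch (trace and output file names are illustrative; the sort keys order statements by parse/execute/fetch elapsed time):
SQL> ALTER SESSION SET tracefile_identifier = 'mytrace';
SQL> ALTER SESSION SET sql_trace = TRUE;
-- run the statements to be traced, then:
SQL> ALTER SESSION SET sql_trace = FALSE;
$ tkprof orcl_ora_12345_mytrace.trc tkprof_out.txt sys=no sort=prsela,exeela,fchela
In the output, look at the parse/execute/fetch counts, CPU and elapsed times, disk reads versus buffer gets, and the row source execution plans.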
14)
*********************************************************************
THE END
====================================================================