
Oracle Technology Day on Data Integration -- 18th May 2011 Bangalore at Oberoi's

One of the conferences I attended over the last few months was Oracle Technology Day on Data Integration, held on May 18, 2011. There were a couple of presentations on Data Integration and GoldenGate. The most interesting topic for me was GoldenGate 11g zero-downtime upgrades and migration. This is one of the beautiful features of GoldenGate, through which you can achieve near-zero downtime for upgrades or migration (this does not mean that no downtime is required; downtime can be reduced to nearly zero, but not to zero). Unfortunately, this feature cannot be used for CRM/ERP upgrades, as E-Business Suite is not completely supported by GoldenGate. There are about 20 objects specific to E-Business Suite which are still under development to be included with GoldenGate. As GoldenGate is a product Oracle acquired, this development will take some time. Following is the list of presentations and details of the sessions I attended. Please go through this, share it with your colleagues/friends, and let me know if it was useful.

1. Oracle Keynote: Real-time access to real-time information with Oracle Data Integration 11g solutions
2. Successful strategies in optimizing for real-time BI & Data Warehouse
3. Achieve maximum data availability with Oracle GoldenGate 11g: Zero-downtime upgrades & migration, query offloading and continuous availability
4. Next Generation Data-Centric architectures with Oracle Data Integrator Enterprise Edition 11g

For more information on GoldenGate, refer to the links below.


I have a question about Flashback Data Archive tables in GoldenGate replication. Are these tables usually excluded in an extract, as in "TABLEEXCLUDE .SYS_FBA_*"?
What is the approach for replicating, and for the initial load of, Flashback Data Archive tables?
Here is my problem. I exported a user schema using Oracle Data Pump and imported it into the destination database. GoldenGate abended, saying some tables don't exist on the destination. I checked, and there were about 200 tables that were not exported because they are FBDA tables and Data Pump just ignores them. So, I recreated them on the destination with scripts.
So, how is this done? Source and destination have their own FBDA. Should these tables be neither recreated with scripts on the destination nor replicated? Or do they have to be replicated with their contents?
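For what it's worth, excluding the FBDA history tables in an Extract can be sketched as below. This is only an illustration, not an answer from the post: the group name, login, trail path, and HR schema are all placeholders, and whether exclusion or full replication is right depends on whether each side maintains its own Flashback Data Archive.

```
EXTRACT ext1
USERID ggadmin, PASSWORD ggadmin
EXTTRAIL ./dirdat/aa
-- capture the whole schema, but skip the Flashback Data Archive history tables
TABLEEXCLUDE HR.SYS_FBA_*;
TABLE HR.*;
```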
Raghavendr V Kulkarni said…
Here I am, Raghavendr V Kulkarni. I am a student of Qspiders. Two days back I finished the SQL classes at Qspiders.
Yesterday I talked to you about the SQL project.

Please search and send that project to me, sir.
Thank you, sir.
My email id.

Popular posts from this blog

SQL Interview Questions on Subqueries

SUB Queries:
1. List the employees working in the research department
2. List employees who are located in New York and Chicago
3. Display the department name in which ANALYSTS are working
4. Display employees who are reporting to JONES
5. Display all the employees who are reporting to Jones Manager
6. Display all the managers in SALES and ACCOUNTING department
7. Display all the employee names in the Research and Sales departments who have at least 1 person reporting to them
8. Display all employees who do not have any reportees
9. List employees who have at least 2 people reporting to them
10. List the department names which have more than 5 employees
11. List department names having at least 3 salesmen
12. List employees from research and accounting having at least 2 people reporting to them
13. Display second max salary
14. Display 4th max salary
15. Display 5th max salary  -- Answer for nth Max Salary
Co-Related Subqueries:
16. Write a query to get 4th max salary from EMP table
17. Write a query to get 2nd…

'Linux-x86_64 Error: 28: No space left on device' While trying to start the database -- Error

SQL> startup mount pfile='/tmp/initdlfasp12.ora'
ORA-27102: out of memory
Linux-x86_64 Error: 28: No space left on device

As you can see, this is on Linux x86 with a 64-bit processor. We got this error after we changed the SGA on a 10gR2 database, so I was sure it was something to do with the OS.

The kernel parameter to check for this is shmall.

shmall is the total amount of shared memory, in pages, that the system can use at one time.

$ cat /proc/sys/kernel/shmall

$ getconf PAGE_SIZE

As per Oracle, SHMALL should be set to the total amount of physical RAM divided by the page size.

Our system has 64 GB of memory, so kernel.shmall = (64 * 1024 * 1024 * 1024) / 4096 = 16777216.
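The calculation above can be sketched as a small shell snippet. The 64 GB figure is from this system; the 4096-byte page size is an assumption you should confirm with `getconf PAGE_SIZE`:

```shell
# Compute kernel.shmall = physical RAM in bytes / page size
RAM_BYTES=$((64 * 1024 * 1024 * 1024))  # 64 GB, as on this system
PAGE_SIZE=4096                          # assumed; verify with: getconf PAGE_SIZE
SHMALL=$((RAM_BYTES / PAGE_SIZE))
echo "kernel.shmall = $SHMALL"          # prints: kernel.shmall = 16777216
```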

Once this value is calculated, you can modify the Linux system configuration file directly.

$ su - root
# vi /etc/sysctl.conf

kernel.shmall = 16777216

Then reload the settings:

# sysctl -p

Once this was done, the database started without any problem.

Answers for SUB Queries

1. SQL> select empno, ename from emp where deptno=(select deptno from dept where dname='RESEARCH');

2. SQL> select empno, ename from emp where deptno in (select deptno from dept where loc in ('NEW YORK','CHICAGO'));

3. SQL> select dname from dept where deptno in ( select deptno from emp where job ='ANALYST');

4. SQL> select empno, ename, mgr from emp where mgr = (select empno from emp where ename='JONES');

5. SQL> select empno, ename, mgr from emp where mgr = (select mgr from emp where ename='JONES');

6. SQL> select empno, ename, job from emp where deptno in (select deptno from dept where dname in ('SALES','ACCOUNTING'));

7. SQL> select empno, ename, job from emp where deptno in (select deptno from dept where dname in ('SALES','RESEARCH')) and empno in (select mgr from emp);

8. SQL> select empno, ename from emp where empno not in (select mgr from emp where mgr is not null);

9. select…