ORA-22990: LOB locators cannot span transactions
Below is the error reported in the BPEL opmn logs:
############
<2010-02-07 10:50:59,852><:logerror>Error while invoking bean "cube engine": Cannot update lob column.The process domain was unable to update the lob column "3" in the datastore. The exception reported is: ORA-22990: LOB locators cannot span transactions
Please check that the machine hosting the datasource is physically connected to the network. Otherwise, check that the datasource connection parameters (user/password) is currently valid.
sql statement: BEGIN INSERT INTO cube_scope( cikey, domain_ref, scope_bin ) VALUES ( ?, ?, '0' ) RETURNING scope_bin INTO ?; END;
ORABPEL-04040
Cannot update lob column.
The process domain was unable to update the lob column "3" in the datastore. The exception reported is: ORA-22990: LOB locators cannot span transactions
Please check that the machine hosting the datasource is physically connected to the network. Otherwise, check that the datasource connection parameters (user/password) is currently valid.
sql statement: BEGIN INSERT INTO cube_scope( cikey, domain_ref, scope_bin ) VALUES ( ?, ?, '0' ) RETURNING scope_bin INTO ?; END;
<2010-02-07 10:50:59,795><:insert>java.io.IOException: ORA-22990: LOB locators cannot span transactions
##############
Solution given in Metalink:
Problem Description
-------------------
While inserting a large BLOB using JDBC, the application fails with:
ORA-22990: LOB locators cannot span transactions at
oracle.jdbc.dbaccess.DBError.SQLToIOException(DBError.java, Compiled Code)
Explanation
-----------
The Oracle oracle.sql.BLOB OutputStream writes the data in chunks. Since autocommit defaults
to true, the first chunk is committed, so the write of the next chunk of the BLOB fails
because it appears to belong to a different transaction.
Under these conditions, the ORA-22990 exception occurs with any version of the Oracle JDBC driver.
Solution Description
--------------------
Call setAutoCommit(false) on the connection. Then explicitly commit the
transaction only after all of the BLOB chunks have been written to the row and
the stream.close() method has been executed (see the sketch below).
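As a rough illustration of the first approach (not taken from the Metalink note), the sketch below assumes the Oracle JDBC driver, an open java.sql.Connection named conn, the blobTest table from the example further down, and a file path testFile; the id value 1 is arbitrary. The row is created with an empty LOB, re-selected FOR UPDATE to obtain a locator bound to the transaction, written through the locator's output stream, and committed only after the stream is closed.
// Assumed imports: java.io.*, java.sql.*, oracle.sql.BLOB
conn.setAutoCommit(false);
Statement ins = conn.createStatement();
ins.executeUpdate("INSERT INTO blobTest VALUES (1, empty_blob())");
// Re-select the row FOR UPDATE to get a LOB locator usable in this transaction.
ResultSet rs = ins.executeQuery("SELECT data FROM blobTest WHERE id = 1 FOR UPDATE");
rs.next();
oracle.sql.BLOB blob = (oracle.sql.BLOB) rs.getBlob(1);
// Copy the file into the LOB in chunks; nothing is committed in between.
OutputStream out = blob.getBinaryOutputStream();
FileInputStream in = new FileInputStream(testFile);
byte[] buf = new byte[blob.getChunkSize()];
int n;
while ((n = in.read(buf)) != -1) {
    out.write(buf, 0, n);
}
out.close();
in.close();
rs.close();
ins.close();
// Commit only once the output stream has been closed.
conn.commit();
Because autocommit is off, all chunk writes happen inside one transaction, which is exactly what the ORA-22990 check requires.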
If the Oracle 10g JDBC driver (or a later version) is used, a second solution
is to use the standard JDBC API (the setBinaryStream method of the
java.sql.PreparedStatement interface). In this case, autocommit can
remain set to true.
Here is an example:
PreparedStatement stmt = conn.prepareStatement("INSERT INTO blobTest VALUES (?, ?)");
File fd = new File(testFile);
FileInputStream fis = new FileInputStream(fd);
stmt.setInt(1, 1);
// Stream the file contents into the BLOB column within a single execute.
stmt.setBinaryStream(2, fis, (int) fd.length());
stmt.executeUpdate();
fis.close();
stmt.close();
where blobTest is a table defined as follows:
SQL> create table blobTest (id number(4), data blob);