Friday, April 29, 2011

Using Jenkins for SOA Deployment Automation

  Last month, my client wanted me to set up a framework for automating builds. The build scripts were already in place, and because of the volume of changes and code fixes that different teams were checking in, the situation called for daily or more frequent builds. To make my customer's job easier, I went looking for a GUI tool for automating the deployments. In my previous project we had used CruiseControl. After comparing different automation tools, I decided on Jenkins (formerly Hudson), a lightweight, easy-to-use tool with a strong support base.
To get started, download the jenkins.war file from the Jenkins project site.

There are two ways in which you can use Jenkins:
  1. Run Jenkins in the Winstone servlet container
The easiest way to run Jenkins is through the built-in Winstone servlet container:
java -jar jenkins.war
Accessing Jenkins
To access Jenkins, open a web browser and go to http://myServer:8080, where myServer is the name of the system running Jenkins.
  2. Deploy Jenkins into WebLogic Server
The jenkins.war cannot be deployed to WebLogic Server without some changes. These are necessary because WebLogic's proprietary class loaders behave differently from those of Tomcat, JBoss, et al.

Once the web app is up and running, create a job:
Dashboard --> Create New Job

Add build steps to execute shell scripts and build files.

Once the build is initiated, the log entries and progress of the build can be monitored in the Console Output.
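Besides clicking Build Now in the dashboard, each Jenkins job can also expose a remote build-trigger URL (Job --> Configure --> "Trigger builds remotely"), which is handy for kicking off builds from deployment scripts. A minimal sketch of constructing that URL; the server name, job name, and token below are made-up examples:

```python
from urllib.parse import quote

def build_trigger_url(base_url, job_name, token=None):
    """Construct the Jenkins remote build-trigger URL for a job."""
    url = "%s/job/%s/build" % (base_url.rstrip("/"), quote(job_name))
    if token:
        url += "?token=" + quote(token)
    return url

# e.g. build_trigger_url("http://myServer:8080", "SOA Build", "s3cret")
#      -> "http://myServer:8080/job/SOA%20Build/build?token=s3cret"
```

An HTTP GET/POST on the resulting URL (with the job's token configured) queues a build, so the same nightly script that runs the build files can trigger Jenkins too.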

If you ask me which automation tool to pick for your project, my answer will be Jenkins.

Thursday, April 28, 2011

B2B Callout - B2B runtime error: java.lang.IndexOutOfBoundsException

Recently I was approached with a B2B callout error. The transaction fails with a generic B2B error, B2B-50029: B2B runtime error: java.lang.IndexOutOfBoundsException.

Error Stack trace

[soa_server1] [ERROR] [] [oracle.soa.b2b.engine] Informational -: B2B-50029: B2B runtime error: java.lang.IndexOutOfBoundsException: Index: 0, Size: 0
            at oracle.tip.b2b.callout.B2BCalloutHandler.handleOutgoingCallout(
            at oracle.tip.b2b.msgproc.Request.outgoingRequestPostColab(
            at oracle.tip.b2b.msgproc.Request.outgoingRequest(
            at oracle.tip.b2b.engine.Engine.processOutgoingMessageImpl(
            at oracle.tip.b2b.engine.Engine.processOutgoingMessage(


Looking at the error log, it was evident that the callout Java class was not populating the output list, which was causing the issue. When the B2B outbound handler tries to look up the output list, it throws the java.lang.IndexOutOfBoundsException: Index: 0, Size: 0 exception.

After you are done manipulating the message, populate the output list before exiting the execute method:

           CalloutMessage cmOut = new CalloutMessage(s);
           output.add(cmOut);


The code below should fix the issue: we get the input, populate the output, and return control back to the B2B engine.

    public void execute(CalloutContext calloutContext, List input,
                        List output) throws CalloutDomainException,
                                            CalloutSystemException {
        b2blog("Callout execute() called - Start");
        try {
            CalloutMessage cmIn = (CalloutMessage)input.get(0);
            String s = cmIn.getBodyAsString();
            b2blog("Callout execute() - All Parameters = " + cmIn.getParameters());
            b2blog("Callout execute() - CalloutMessage body = " + s);

            // add the logic of the task that needs to be done,
            // such as archiving or manipulating the payload

            CalloutMessage cmOut = new CalloutMessage(s);
            output.add(cmOut); // populate the output list before returning
            b2blog("Callout execute() - End");
        } catch (Exception e) {
            b2blog("Callout execute() - Exception = " + e);
            throw new CalloutDomainException(e);
        }
    }

The b2blog method is explained in one of my previous blogs on B2B logging.

Wednesday, April 27, 2011

B2B routing messages to SOA Composite using Document Routing ID

One of the queries I recently got from my client was with regard to the B2B engine routing messages to a SOA composite process. They had implemented routing with document definitions for 4010 and 4010V to different composites, and wanted to route both 4010 and 4010V messages to a single composite process. What was the easiest way to implement these changes across processes and different trading partners?
The changes that need to be done are listed below.
  1. Add a common document routing ID for the 4010 and 4010V document definitions.
1)      Log in to the B2B Console. Go to Administration --> Document
Document Protocols --> EDI_X12 --> 4010 --> 850 --> GEO_850_def
and 4010VICS --> 850 --> GEO_850_def

2)      Update the routing. Go to the Routing sub-tab and set
Document Routing ID --> GEO_850_Routing

  2. In the composite.xml of the composite process, update the binding.b2b docRef value.
Old Entry in composite.xml
<service name="Read850Msg" ui:wsdlLocation="Read850Msg.wsdl">
    <interface.wsdl interface="Read850Msg/#wsdl.interface(B2B_receive_ptt)"/>
    <binding.b2b docRef="EDI_X12--4010--850--GEO_850_def"/>
</service>

Modified entry  in composite.xml

<service name="Read850Msg" ui:wsdlLocation="Read850Msg.wsdl">
    <interface.wsdl interface="Read850Msg/#wsdl.interface(B2B_receive_ptt)"/>
    <binding.b2b docRef="DOC_ROUTING_ID--GEO_850_Routing"/>
</service>

a.    Deploy the process.
b.    Test B2B with both 4010 and 4010V files/messages. Both will get routed to the single composite process.
Note: In case the files are not getting routed, you will need to re-deploy the agreements for the trading partners for which testing is being carried out.
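Since the same docRef edit has to be repeated across several composite.xml files, it can be scripted rather than done by hand. A rough Python sketch (a naive string substitution, not a full XML parse; file reading/writing is left out) that rewrites any binding.b2b docRef to the routing-ID form:

```python
import re

def set_routing_docref(composite_xml, routing_id):
    """Rewrite every binding.b2b docRef attribute in a composite.xml
    string to the DOC_ROUTING_ID--<routing_id> form."""
    return re.sub(r'(<binding\.b2b\s+docRef=")[^"]*(")',
                  r'\g<1>DOC_ROUTING_ID--%s\g<2>' % routing_id,
                  composite_xml)
```

Running this over each process's composite.xml (before re-deployment) turns an entry like docRef="EDI_X12--4010--850--GEO_850_def" into docRef="DOC_ROUTING_ID--GEO_850_Routing".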

Thursday, April 21, 2011

Getting started with custom commands in WLST @ SOA Suite 11g

I recently had a requirement to use the WebLogic Scripting Tool (WLST) because it gives a lot of flexibility to configure SOA composite applications. WLST has an OFFLINE option for doing deployments and for composite application management.

You can get the full list of WLST commands from the Oracle WLST command reference.

Some of the commands which I loved using are:
1.    sca_deployComposite: Deploy a SOA composite application.
2.    sca_undeployComposite: Undeploy a SOA composite application.
3.    sca_startComposite: Start a previously stopped SOA composite application.
4.    sca_stopComposite: Stop a SOA composite application.
5.    sca_activateComposite: Activate a previously retired SOA composite application.
6.    sca_retireComposite: Retire a SOA composite application.
7.    sca_listDeployedComposites: List the deployed SOA composite applications.

Oracle SOA Suite, MDS, and services such as SSL and logging supply custom WLST commands.
To use those custom commands, you must invoke the WLST script from the Oracle home. Do not use the WLST script in the WebLogic Server home.
The script is located at:

(Windows) MIDDLEWARE_HOME\Oracle_SOA1\common\bin\wlst.cmd

If you instead use the WLST script from the WebLogic Server home, i.e. from
MIDDLEWARE_HOME\wlserver_10.3\common\bin

it will throw the following error.

Traceback (innermost last):
  File "<console>", line 1, in ?
NameError: sca_undeployComposite

To demonstrate, I will use the sca_undeployComposite command. For undeploying composite processes using WLST while the soa-infra is offline, use the following custom command.

>>  sca_undeployComposite

sca_undeployComposite(serverURL, compositeName, revision, user, password)
serverURL: URL of the server that hosts the SOA Infrastructure application (for example, http://localhost:8001).
compositeName: Name of the SOA composite application.
revision: Revision ID of the SOA composite application.
user: Optional. User name to access the composite deployer servlet when basic authentication is configured.
password: Optional. Password to access the composite deployer servlet when basic authentication is configured.

How to use it

(UNIX) ORACLE_HOME/common/bin/
(Windows) ORACLE_HOME\common\bin\wlst.cmd

Initializing WebLogic Scripting Tool (WLST) ...

Welcome to WebLogic Server Administration Scripting Shell

Type help() for help on available commands

wls:/offline> connect()
Please enter your username [weblogic] :weblogic
Please enter your password [welcome1] :
Please enter your server URL [t3://localhost:7001] : t3://localhost:7001
Connecting to t3://localhost:7001 with userid weblogic ...
Successfully connected to Admin Server 'AdminServer' that belongs to domain 'soa_domain'.

wls:/soa_domain/serverConfig> sca_undeployComposite("http://localhost:8001","GeoComposite", "1.0","weblogic","weblogic123")
serverURL = http://localhost:8001
user = weblogic
partition = default
compositeName = GeoComposite
revision = 1.0
timeout= -1
set user and password...
compositeDN = default/GeoComposite!1.0
Creating HTTP connection to host:localhost, port:8001
Received HTTP response from the server, response code=200
---->Undeploying composite (default/GeoComposite!1.0) success.
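The same steps can be scripted instead of typed interactively, which fits nicely with the Jenkins setup described earlier. A sketch of such a script; all values are placeholders for your own environment, and note that it runs only under the SOA-home WLST (the sca_* commands do not exist in plain Python or the WebLogic Server home WLST):

```python
# undeploy_composite.py -- run with the SOA-home WLST, not plain Python:
#   MIDDLEWARE_HOME/Oracle_SOA1/common/bin/wlst.cmd undeploy_composite.py
# All values below are placeholders for your own environment.

serverURL = "http://localhost:8001"
composite = "GeoComposite"
revision  = "1.0"
user      = "weblogic"
password  = "weblogic123"

# the sca_* custom commands are available once the SOA-home WLST starts
sca_undeployComposite(serverURL, composite, revision, user, password)

# a sca_deployComposite call with the new build's SAR file could follow here
```

A Jenkins build step can then invoke wlst.cmd (or wlst.sh) with this script to automate the undeploy/deploy cycle.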

That’s pretty much it for the day. Happy Easter and a long weekend!

Monday, April 11, 2011

Compilation issue with MDS repository

Recently, while migrating code from the DEV to the Test environment, a customer had issues with respect to the MDS repository. Since the customer was using MDS for the first time, there were many queries on how the migration would work and the amount of work involved in the process. They had used the MDS store as the common repository for all the shared artifacts (DVMs, XSDs, XRefs). The different processes refer to these artifacts from MDS.

Build scripts are used for the compilation and deployment of processes to the server. During compilation, the build script looks up adf-config.xml to get the MDS store connection details. Everything works fine when the customer uses the connection string/username/password in adf-config.xml.

As a prerequisite for migrating the codebase, all the shared artifacts in MDS were migrated to the new environment. For compilation, the team was planning to use the new environment's MDS details, but the customer was not willing to expose the MDS store credentials in the different process files.

These are some approaches that we tried out for resolving the issue and completing the deployment to the new environment.
1.    The first approach was to specify a jndi-datasource, as mentioned in the release notes.
If the MDS database has a JNDI name, then use the following entries in adf-config.xml:
     <property name="jndi-datasource" value="jdbc/mds/MDS_LocalTxDataSource"/>
     <property name="partition-name" value="soa-infra"/>

Here jdbc/mds/MDS_LocalTxDataSource is the JNDI name of the mds-soa datasource.

But this did not work out, as the build scripts were not able to look up the JNDI name from the server.

The error stack trace while compiling the process:

     [scac]     at oracle.adf.share.config.ADFContextMDSConfigHelperImpl.createMDSSession(
     [scac]     ... 26 more
     [scac] Caused by: oracle.mds.config.MDSConfigurationException: MDS-01330: unable to load MDS configuration document
     [scac] MDS-01329: unable to load element "persistence-config"
     [scac] MDS-01370: MetadataStore configuration for metadata-store-usage "mstore-usage_2" is invalid.
     [scac] MDS-00922: The ConnectionManager "oracle.mds.internal.persistence.db.JNDIConnectionManagerImpl" cannot be instantiated.
     [scac] MDS-00929: unable to look up name "jdbc/mds/MDS_LocalTxDataSource" in JNDI context
     [scac] Need to specify class name in environment or system property, or as an applet parameter, or in an application resource file:  java.naming.factory.initial

This approach was a failure; jndi-datasource support is not yet fully incorporated into the 11g build scripts.
2.       Use a file-based repository and complete the deployment. The compilation process looks up adf-config.xml for the MDS store details to validate the XSDs referred to by the process. Place all the XSDs referred to by the process on a machine and specify the path in metadata-path.

          A file-based repository can be referred to in adf-config.xml as below.

            <metadata-store class-name="oracle.mds.persistence.stores.file.FileMetadataStore">
              <property value="/GEO_BUILD/mdsutil" name="metadata-path"/>
              <property value="seed" name="partition-name"/>
            </metadata-store>

3.       Use the connection details of any dev MDS repository and complete the deployment. The compilation process looks up adf-config.xml for the MDS store details to validate the XSDs referred to by the process.

            <metadata-store class-name="oracle.mds.persistence.stores.db.DBMetadataStore">
              <property value="DEV_MDS" name="jdbc-userid"/>
              <property value="password" name="jdbc-password"/>
              <property value="jdbc:oracle:thin:@//localhost:1522/GEODB" name="jdbc-url"/>
              <property value="soa-infra" name="partition-name"/>
            </metadata-store>

Approaches 2 and 3 work fine because the adf-config.xml details are used only during compilation/validation. At runtime, the BPEL engine refers to the artifacts directly from the MDS store configured in the environment.
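Since approaches 2 and 3 differ only in the metadata-store element of adf-config.xml, the swap itself is easy to automate in the build, which also keeps DB credentials out of the checked-in file. A rough Python sketch (regex-based, assumes a single DBMetadataStore element; the path is a made-up example):

```python
import re

# match the whole DB-backed metadata-store element, across lines
DB_STORE = re.compile(
    r'<metadata-store class-name="oracle\.mds\.persistence\.stores\.db\.DBMetadataStore">'
    r'.*?</metadata-store>',
    re.S)

def use_file_store(adf_config, metadata_path):
    """Swap the DB-backed MDS store for a file-based one (approach 2),
    so no DB credentials appear in adf-config.xml at build time."""
    file_store = (
        '<metadata-store class-name="oracle.mds.persistence.stores.file.FileMetadataStore">\n'
        '  <property value="%s" name="metadata-path"/>\n'
        '  <property value="seed" name="partition-name"/>\n'
        '</metadata-store>' % metadata_path)
    return DB_STORE.sub(file_store, adf_config)
```

A build step can run this over adf-config.xml before compilation and restore the original afterwards, since the runtime never reads this file anyway.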

Hope this helps.