It's been a long time since I stopped blogging. I always wanted to get back to it, but was so busy that I couldn't spend time recollecting and writing things down. It's always good to be back. :)
I have been working with iPaaS platforms for some time now. It is really interesting to see so many cloud-based iPaaS solutions competing with the on-premise solutions. Working on a cloud-based solution is different from working on a robust on-premise solution. The vast array of options in the tech stack is always good to have when you have complex integrations in house. But as your cloud presence increases, the number of connectors and the flexibility you had with the on-premise solution decrease, leaving the architects and dev team to worry about how to implement these integrations. According to recent research there are 2500+ SaaS applications, and the list keeps growing; the only way to keep pace with that many applications is to have a cloud-based solution for integration, and that's where iPaaS solutions start adding value. Informatica Cloud, Dell Boomi, MuleSoft, SnapLogic, Jitterbit, and now Oracle ICS and SOA CS are competing with each other for their share of the business.
I loved the way Informatica Cloud has evolved. Even though their initial focus was on Salesforce, they have built a lot of capabilities around other SaaS applications as well. One thing I feel Informatica can improve on is high availability: currently the cloud agent architecture will fail if the agent is down. If they build in that high availability feature, it can add value to their selling point. Informatica Cloud brings the ETL capability they are famous for into the Mapping Designer, and the ActiveVOS BPEL/BPM feature into the Informatica Process Designer/Guide Designer. The Salesforce AppExchange plugin for Guide Designer also helps customers create workflows in Salesforce without moving into the Informatica Cloud console. Informatica still lacks a good API gateway. If they are able to build in that capability, they will be able to compete with MuleSoft and Oracle on all fronts.
The connector concept is similar across all iPaaS vendors, moving away from the adapter model, which tightly couples the object as well. A connector gives access to all objects, which makes it easier to configure and set up the process. The MuleSoft integration platform needs more of a Java programming background, as it is more of a Spring-based application framework. Informatica has been able to bring more of a drag-and-drop, zero-programming model into their cloud products, with the focus on configuration. I will write in more detail on how the other iPaaS vendors compete and their strengths in future blogs.
Random Cerebrations on SOA
My take on SOA,Oracle BPEL, ESB, OSB, BPM,iPaas, cloud computing, AWS ,MEAN stack.
Monday, February 1, 2016
Thursday, March 19, 2015
SOA Software/Akana API Gateway
There is wide adoption of RESTful services across the enterprise. Initially we had exposed services on OSB, even though the JSON support was not that great with OSB. There used to be a lot of custom Java code that needed to be maintained for these services. All Oracle 11g products were fully focused on XML, so the JSON support wasn't great, and the overhead of converting the payload from XML to JSON and back was a bit taxing on service performance. The other issue we had was onboarding of app developers, who are the real consumers of these APIs. Even though we had a wiki, it had to be kept updated based on the enhancements that happened during each sprint; if the process was missed, the wiki became outdated. New app developers had to be helped to troubleshoot and resolve integration issues with the services, and these steps were repeated for each app. There was no way they could interact with the developer and then use a knowledge base of issues already documented. That's when we started exploring the option of onboarding an API gateway, which should help us interact with our consumers. SOA Software, now rebranded as Akana, was chosen. I have been working extensively on the SOA Software/Akana API Gateway for the past 8 months. The tool had some initial hiccups being molded into the client's environment, but once it became stable, it has been really good. The more familiarity you get with the tool, the easier it becomes to debug issues.
The architecture is made up of 3 components:
• Community Manager: API enablement tool / app developer portal.
• Policy Manager: database of run-time policies, access contracts, service definitions, and related metadata.
• Network Director: the proxy service that receives virtual service requests, queries Policy Manager for run-time instructions on how to deal with the requests, then sends the requests to the physical service. The physical service responds, Network Director applies any policies, and then sends back the response.
The Community Manager helps app developers interact easily with the API developers using a board and a ticketing dashboard. The Policy Manager helps in adding all the non-functional requirements: API security, traffic monitoring, throttling, QoS management, and caching. Network Director acts as the proxy, or gateway, which internally uses the Policy Manager.
It really simplifies the virtualizing of services. There is a small process engine to do aggregation or orchestration of different services. The out-of-the-box REST, JSON, and OAuth 2.0 support helps in exposing RESTful services quickly and keeping up with the new standards.
Thursday, February 5, 2015
Coherence Adapter in OSB 12c
Recently I started working with 12c. At the WebLogic level everything looks the same except for the new look and feel. The Coherence adapter looked new, so I was trying it out in OSB. The only issue I faced was how to assign the Coherence key.
You can use an Insert action to populate the jca.coherence.Key value in the JCA headers. The rest of the steps are similar to how you do it in BPEL, which is well documented.
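For reference, a sketch of what that Insert action configuration might look like in the proxy's routing request stage (the XPath location and the sample key value 4009 are assumptions for illustration; verify the header structure against the outbound log of your own service):

```xml
<!-- OSB Insert action (sketch):
     Location:    as last child of
     XPath:       ./ctx:transport/ctx:request/tran:headers
     In variable: outbound
     Expression (the header element to insert): -->
<jca:jca.coherence.Key xmlns:jca="http://www.bea.com/wli/sb/transports/jca">4009</jca:jca.coherence.Key>
```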
Log:
outbound: <con:endpoint name="BusinessService$HelloWorld$GetFromCache" xmlns:con="http://www.bea.com/wli/sb/context">
<con:service>
<con:operation>Get</con:operation>
</con:service>
<con:transport>
<con:mode>request-response</con:mode>
<con:qualityOfService>best-effort</con:qualityOfService>
<con:request xsi:type="jca:JCARequestMetaDataXML" xmlns:jca="http://www.bea.com/wli/sb/transports/jca" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
<tran:headers xsi:type="jca:JCARequestHeadersXML" xmlns:tran="http://www.bea.com/wli/sb/transports">
<jca:SOAPAction>"Get"</jca:SOAPAction>
<jca:Content-Type>text/xml</jca:Content-Type>
<jca:jca.coherence.Key>4009</jca:jca.coherence.Key>
</tran:headers>
</con:request>
</con:transport>
<con:security>
<con:doOutboundWss>true</con:doOutboundWss>
</con:security>
</con:endpoint>
Friday, March 28, 2014
JMS Message Selectors and OSB
JMS topics are a good way to keep publishing messages while multiple subscribers fetch messages based on their criteria. How we define the criteria is one of the main decision points to consider while defining the message structure and the headers of the message. In one of the implementations done recently, there was a requirement where consumers wanted to filter messages based on a status that was part of the body. Instead of reading all messages and parsing the status, they were looking for a way to easily filter the messages.
As per the JMS spec:

A JMS message selector allows a client to specify, by header field references and property references, the messages it is interested in. Only messages whose header and property values match the selector are delivered. What it means for a message not to be delivered depends on the MessageConsumer being used. Message selectors cannot reference message body values.

A message selector matches a message if the selector evaluates to true when the message's header field values and property values are substituted for their corresponding identifiers in the selector.

A message selector is a String whose syntax is based on a subset of the SQL92 conditional expression syntax. If the value of a message selector is an empty string, the value is treated as a null and indicates that there is no message selector for the message consumer.

A property value may duplicate a value in a message's body, or it may not. Although JMS does not define a policy for what should or should not be made a property, application developers should note that JMS providers will likely handle data in a message's body more efficiently than data in a message's properties. For best performance, applications should use message properties only when they need to customize a message's header. The primary reason for doing this is to support customized message selection.
WebLogic customized the message selector for WebLogic JMS. JMS_BEA_SELECT is a built-in function in the WebLogic JMS SQL syntax. You specify the syntax type, which must be set to xpath (XML Path Language), and an XPath expression.
Eg: JMS_BEA_SELECT('xpath', '/message/status/text()') = 'REQUEST_SUBMITTED'
But from the OSB point of view, the message selector goes by the strict JMS spec, and the custom message selector function JMS_BEA_SELECT is not supported, as OSB is designed to be more generic and needs to communicate with JMS topics from different vendors. So the only supported option is to define custom message properties on the message and have consumers filter on them, even if it means duplicating fields between the header and the body of the message.
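A minimal sketch of that pattern with the standard javax.jms API (the JNDI names and the `status` property name are assumptions for illustration; this needs a running JMS provider and configured JNDI to execute):

```java
import javax.jms.*;
import javax.naming.InitialContext;

public class StatusFilterExample {
    public static void main(String[] args) throws Exception {
        InitialContext ctx = new InitialContext();
        TopicConnectionFactory cf =
            (TopicConnectionFactory) ctx.lookup("jms/ConnectionFactory"); // assumed JNDI name
        Topic topic = (Topic) ctx.lookup("jms/Topic/GeoTestTopic");

        TopicConnection conn = cf.createTopicConnection();
        TopicSession session = conn.createTopicSession(false, Session.AUTO_ACKNOWLEDGE);

        // Producer side: duplicate the body's status field as a message property,
        // since selectors cannot reference the message body.
        TextMessage msg = session.createTextMessage(
            "<message><status>REQUEST_SUBMITTED</status></message>");
        msg.setStringProperty("status", "REQUEST_SUBMITTED");
        session.createPublisher(topic).publish(msg);

        // Consumer side: standard SQL92-subset selector on the property,
        // portable across JMS providers (and usable from OSB).
        TopicSubscriber sub =
            session.createSubscriber(topic, "status = 'REQUEST_SUBMITTED'", false);
        conn.start();
        Message received = sub.receive(1000);
        conn.close();
    }
}
```

The trade-off is exactly the duplication the spec warns about: the status lives in both the body and a property, but only the property is visible to selectors.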
Wednesday, March 19, 2014
WebLogic JMS Topic does not support durable subscriptions - workaround
One of our service consumers was facing issues with durable subscriptions. They were able to successfully connect as durable subscribers on a single-node topic, but they started facing issues once they connected to clustered environments where the topic was configured as a Uniform Distributed Topic.
Error Snippet
-------------
weblogic.jms.common.JMSException:
[JMSClientExceptions:055030]This topic does not support durable subscriptions
at weblogic.jms.client.JMSSession.createDurableSubscriber(JMSSession.java:2482)
at weblogic.jms.client.JMSSession.createDurableSubscriber(JMSSession.java:2460)
at
weblogic.jms.client.WLSessionImpl.createDurableSubscriber(WLSessionImpl.java:1204)
at com.artesia.video.flip.ArtesiaConnectionInitializer.run(ArtesiaConnectionInitializer.java:168)
Topic configuration
--------------------------
Name: GeoDownloadTopic
Type: Uniform Distributed Topic
JNDI Name: jms/Topic/GeoTestTopic
Subdeployment: GEOJMSTopic_SD
Targets: GEOJMSServer_1, GEOJMSServer_2, GEOJMSServer_3, GEOJMSServer_4
To use a durable subscription you will need to use the JNDI name of the individual member topic on each node.
Eg: GEOJMSServer_1@jms.Topic.GeoDownloadTopic
How to get the JNDI name of each node:
- Click the Servers node to expand it and expose the names of the servers currently being administered through the console.
- Click the name of the server whose JNDI tree you want to view.
- Scroll down to the bottom of the Configuration pane and click the "View JNDI Tree" link.
- The JNDI tree will appear in a new browser window. You can click an individual object name to view information about its bind name and hash code.
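A sketch of the durable subscriber side of this workaround with the javax.jms API, assuming the member-topic name shown above is visible in the JNDI tree (connection factory JNDI name, client id, and subscription name are made-up values; this needs a running WebLogic instance to execute):

```java
import javax.jms.*;
import javax.naming.InitialContext;

public class DurableMemberSubscriber {
    public static void main(String[] args) throws Exception {
        InitialContext ctx = new InitialContext();
        TopicConnectionFactory cf =
            (TopicConnectionFactory) ctx.lookup("jms/ConnectionFactory"); // assumed JNDI name
        // Look up the member topic on one JMS server instead of the UDT itself;
        // subscribing to the UDT directly raises JMSClientExceptions:055030.
        Topic member = (Topic) ctx.lookup("GEOJMSServer_1@jms.Topic.GeoDownloadTopic");

        TopicConnection conn = cf.createTopicConnection();
        conn.setClientID("geo-subscriber-1"); // durable subscriptions require a client id
        TopicSession session = conn.createTopicSession(false, Session.AUTO_ACKNOWLEDGE);
        TopicSubscriber sub = session.createDurableSubscriber(member, "geo-durable-sub");
        conn.start();
        Message m = sub.receive(1000);
        conn.close();
    }
}
```

Note that this pins the subscriber to one cluster member, so for full coverage each member topic needs its own durable subscription.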