Monday, February 29, 2016

Namespace prefix can not be resolved


I was using the following function in SOA 12c to set the title for a composite instance:

setCompositeInstanceTitle

However, when I used this function it gave me a compilation error:

namespace prefix "oraext" can not be resolved



The reason for this error is that even though we get this functionality in JDeveloper by applying a patch, the patch does not add the namespace entry for the prefix.

By default, when you use this function it appears as follows:

oraext:setCompositeInstanceTitle()

So you need to make sure that you add the oraext prefix declaration explicitly in your BPEL code.

Open your BPEL code

Go to the source view and add the following namespace to the already existing list of namespaces:

xmlns:oraext="http://www.oracle.com/XSL/Transform/java/oracle.tip.pc.services.functions.ExtFunc"

Save the changes and rebuild
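Put together, the relevant parts of the .bpel source look roughly like this (the process name, title value, and variable name below are just placeholders):

```xml
<process name="DemoProcess"
         xmlns="http://schemas.xmlsoap.org/ws/2003/03/business-process/"
         xmlns:oraext="http://www.oracle.com/XSL/Transform/java/oracle.tip.pc.services.functions.ExtFunc">
  <!-- ... existing namespace declarations and activities ... -->
  <assign name="SetInstanceTitle">
    <copy>
      <from expression="oraext:setCompositeInstanceTitle('MyCompositeTitle')"/>
      <to variable="titleVar"/>
    </copy>
  </assign>
</process>
```

With the xmlns:oraext declaration in place, the compilation error goes away.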

Tuesday, February 23, 2016

Generate archive file name manually in SOA Suite



In one of my projects I was supposed to read large files, which I implemented using chunked reading in BPEL.

The issue I was facing was that the file was not getting archived; I achieved the archiving using a file move operation. However, the client was very specific about the naming convention of the archive file.

I tried to search for some built-in commands but was not successful, so I used the following expression, which generates the same file naming convention that the archive feature produces by default:

concat(xp20:format-dateTime(xp20:current-dateTime(),'[Y0001][M01][D01]'),"_",xp20:format-dateTime(xp20:current-dateTime(),'[H01][m01][s01]'),"_",xp20:format-dateTime(xp20:current-dateTime(),'[f001]'))
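Used in an assign activity, this expression can build the target file name for the move operation. A sketch (the activity and variable names are illustrative, and the double quotes inside the expression are switched to single quotes so it can sit inside an XML attribute):

```xml
<assign name="BuildArchiveFileName">
  <copy>
    <from expression="concat(xp20:format-dateTime(xp20:current-dateTime(), '[Y0001][M01][D01]'), '_',
                             xp20:format-dateTime(xp20:current-dateTime(), '[H01][m01][s01]'), '_',
                             xp20:format-dateTime(xp20:current-dateTime(), '[f001]'))"/>
    <!-- produces a name along the lines of 20160223_134501_123 -->
    <to variable="archiveFileName"/>
  </copy>
</assign>
```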


Friday, February 05, 2016

Distributed Queue and OSB cluster



I tried working on a scenario where we have a distributed queue in an OSB cluster, and I tried to subscribe to the queue using a proxy service.
However, it didn't work for me the same way the distributed topic did (it was creating multiple instances). I need to do some more R&D on this topic before publishing. Let me know if you have any pointers.


Distributed Topic and SOA Cluster



If you are working on a SOA Suite cluster with multiple nodes, you will face a lot of issues configuring a distributed topic.

I faced a lot of issues the first time; some of the common ones were

1> How to point the distributed topic to multiple targets.
2> For one message, multiple subscribers were getting triggered.
3> Messages were going to all the servers.

But all these are just one-time issues. Once you know the correct steps you should be able to get it working on the first shot.

In this post we will discuss the tasks to be taken care of on the admin side and the developer side.

The first step is to create a distributed topic.

First of all, it is important to understand that there is a difference between a topic and a distributed topic. A topic can be targeted to just one of the managed servers, whereas a distributed topic can be targeted to multiple servers.

Steps to be followed for creating a distributed topic are

1> Create JMS servers pointing to individual managed server.



Next create a JMS module

And then a subdeployment

While creating the subdeployment, make sure you are targeting it to the JMS servers you created in the previous step.




Now go ahead and create a distributed topic

Go for advanced targeting of the topic and point it to the subdeployment you created.

While creating a distributed topic, one important point is to make sure you specify the forwarding policy as Partitioned; otherwise the message will be replicated to all the servers.
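The same setting can be seen in the JMS module descriptor that the console generates. A sketch of the relevant fragment (the topic, subdeployment, and JNDI names here are illustrative):

```xml
<uniform-distributed-topic name="DemoDistributedTopic">
  <sub-deployment-name>DemoSubdeployment</sub-deployment-name>
  <jndi-name>jms/DemoDistributedTopic</jndi-name>
  <forwarding-policy>Partitioned</forwarding-policy>
</uniform-distributed-topic>
```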



Now you are good from the admin side, but one task is required on the developer side as well: add a singleton property to your SOA process in composite.xml.

<binding.jca>
  <property name="singleton">true</property>
</binding.jca>

In case you are using OSB as the publisher and subscriber,

make sure you enable the quality of service "exactly once".



Clustering in OSB is again more confusing, so I will write a separate post on my next exercise to make it clearer.



Thursday, February 04, 2016

Configuring Work Manager in SOA OSB 12c



The prioritization of work in WebLogic Server is based on an execution model that takes into account user/admin defined parameters and the actual performance of the server. WebLogic Server allows developers to configure work managers to prioritize pending work and improve process performance. In this exercise we will try to understand how we can configure work managers in SOA/OSB 12c to improve the performance of services.


We will have a look at the OSB configuration first, as it is straightforward, and then we will check the SOA configuration.

A Work Manager can be configured in OSB for proxy as well as business services.

A Work Manager configured on a proxy service is used to limit the number of threads running the proxy service, and a Work Manager configured on a business service is used to limit the number of threads that process responses from the back-end system. It is important to understand that work managers are used to prioritize work, unlike throttling, which restricts the number of records. So there is a chance that you might lose some data when you have throttling enabled, but with a work manager you will not lose data; only the prioritization of the service will change.

As per the Oracle documentation:

https://docs.oracle.com/middleware/1213/osb/develop/GUID-7A9661AE-6FE5-4A92-A418-694A84D0B0BF.htm#OSBDV89434


The Work Manager (dispatch policy) configuration for a business service should depend on how the business service is invoked. If a proxy service invokes the business service using a service callout, a publish action, or routing with exactly-once QoS (as described in Pipeline Actions), consider using different Work Managers for the proxy service and the business service instead of using the same for both. For the business service Work Manager, configure the Min Thread Constraint property to a small number (1-3) to guarantee an available thread.

Now, without going much into theoretical details, we will see how to configure a work manager in OSB 12c. The concept is similar to that of 11g; however, the screens have changed for OSB applications.

So we will first go ahead and log in to the admin console:

http://host:port/console

Go to Environment-->Work Managers



Click New and first create a minimum threads constraint.

A minimum threads constraint ensures that whatever the load on the server, this minimum number of threads will be allocated to the service.



Give it a logical name and assign the minimum thread count;

by default it is -1, which means infinite.



Click Next and target it to the server.

Next, go back to Work Managers and this time create a work manager.



Give it some logical name



Next, target it to the server and finish the wizard.


You can now go to your work manager and select the constraint you defined.



Save the changes.
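These console steps end up as a self-tuning entry in the domain's config.xml. A rough sketch of what gets created (the names, target, and count are illustrative, and the exact element layout may differ between versions):

```xml
<self-tuning>
  <min-threads-constraint>
    <name>DemoMinThreadsConstraint</name>
    <target>osb_server1</target>
    <count>3</count>
  </min-threads-constraint>
  <work-manager>
    <name>DemoWorkManager</name>
    <target>osb_server1</target>
    <min-threads-constraint>DemoMinThreadsConstraint</min-threads-constraint>
  </work-manager>
</self-tuning>
```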

Now that you have the work manager created, you just need to attach it to your business service in OSB.

Open up the osb console

http://host:port/sbconsole

Go to your business service, open the transport details, and select the work manager you created in the previous steps.



Save the changes and reactivate your session.

For configuring work managers in SOA we do not have many options.

SOA services use a default work manager called wm/SOAWorkManager.


You can configure your own constraints and update the work managers to use those constraints.



Wednesday, February 03, 2016

Processing large files in Oracle SOA


Recently I was working on a requirement to process a very large payload. Though we now have the MFT feature, which we can use to process files, we will go ahead and look at three approaches to processing them: through BPEL, OSB, and MFT.

For BPEL and OSB it is the same concept, that is, to use chunking; we will see a demo for BPEL, and you can replicate the same in OSB as well.

Before starting, I will just clarify that chunking is different from debatching. When you debatch a file you actually create multiple instances for the file, whereas when you chunk-read you read the whole file in chunks within a single instance. This is a confusion many people have, so I thought I would clarify it. Now, with that, we will go ahead and see how to create a BPEL process to chunk-read a file. Furthermore, with chunked reading the file does not get deleted once it has been read completely, so we will also see how to achieve that.

I will give some details on the implementation; however, working code is already provided by Oracle at the following location:

https://java.net/projects/oraclesoasuite11g/downloads/directory/Adapters/File

In fact, I can see that the solution is already explained in detail in the following blog:

https://technology.amis.nl/2014/05/07/processing-large-files-through-soa-suite-using-synchronous-file-read/

With this sample code and the blog you should be able to create a sample for chunk read easily.

I will just add the extra part of deleting the file from the file polling location.

If you implement the chunked read you will find that the file does not get deleted after it has been read.

This is because the file can be deleted only outside the chunk-read loop.

The file adapter provides a feature to delete a file from a location.

Provided you have the file name and file location, you can easily create an adapter to delete the file after chunk reading.

Create a simple file adapter with sync read option

Once it is created, a jca file will be created.

Open the jca file and update the changes as shown in the diagram



The class name is oracle.tip.adapter.file.outbound.FileIoInteractionSpec.
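For reference, the updated .jca file can look roughly like this; the Type property set to DELETE is what turns the sync-read adapter into a delete interaction. The adapter name and the DUMMY placeholder values are illustrative, and exact property names can vary between adapter versions, so treat this as a sketch:

```xml
<adapter-config name="DeleteFile" adapter="File Adapter"
                xmlns="http://platform.integration.oracle/blocks/adapter/fw/metadata">
  <connection-factory location="eis/FileAdapter"/>
  <endpoint-interaction portType="SynchRead_ptt" operation="SynchRead">
    <interaction-spec className="oracle.tip.adapter.file.outbound.FileIoInteractionSpec">
      <property name="Type" value="DELETE"/>
      <property name="TargetPhysicalDirectory" value="DUMMY"/>
      <property name="TargetFileName" value="DUMMY"/>
    </interaction-spec>
  </endpoint-interaction>
</adapter-config>
```

The placeholder values are overridden at runtime by the jca.file header properties on the invoke activity.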

Connect the adapter using an invoke activity.

After connecting, the process will look like the following:




Since we have defined a logical name and logical directory, we now pass them in the adapter call.

Add two properties to the invoke activity:

jca.file.TargetDirectory

jca.file.TargetFileName



Add an assign activity and copy the file name and directory to the variables.
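The invoke with those header properties can look roughly like this in BPEL 1.1 syntax (the partner link, port type, and variable names are illustrative):

```xml
<invoke name="InvokeDeleteFile" partnerLink="DeleteFile"
        portType="ns2:SynchRead_ptt" operation="SynchRead"
        inputVariable="deleteFileRequest" outputVariable="deleteFileResponse">
  <!-- override the placeholder values from the .jca file with the actual location -->
  <bpelx:inputProperty name="jca.file.TargetDirectory" variable="fileDirectory"/>
  <bpelx:inputProperty name="jca.file.TargetFileName" variable="fileName"/>
</invoke>
```

In BPEL 2.0 projects the same idea is expressed with bpelx:toProperties instead of bpelx:inputProperty.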



Having said that, there are situations where you will use MFT as well.

I have worked in a situation where the files were placed on a shared drive located on a different server.

I had to explicitly use MFT to transfer the file from the remote NAS path to the local SOA server.

Even if we implement a process using MFT, we have to make a local copy of the file on the server, and a local path is required, which is more or less a chunking mechanism (MFT internally uses chunking to process data if you are using the MFT adapter).

So I thought I had better stick to my chunking process and just use MFT to transfer the file from the shared drive to the local path.