Archive

Posts Tagged ‘SOA’

An overview of a SOA Architecture

09/11/2012

Hi, Folks!

I know it’s been quite a while since I last posted here… lots of projects 🙂

As things keep evolving in the SOA world, I would like to give my two cents about SOA architectures. So, just to begin the subject, I would like to introduce…

What is SOA?

SOA stands for Service-Oriented Architecture (as you probably know by now 😉 ). But, WTH is a service?

There have been plenty of efforts to define a service, but no one seems to be able to really pin it down. People describe a service by how it should be, not by what it actually is. According to Thomas Erl (http://www.soaglossary.com/service_orientation.php), a service should:

  • Have a standardized service contract
  • Promote loose coupling
  • Promote abstraction
  • Be reusable
  • Be autonomous
  • Be stateless
  • Be discoverable
  • Be composable

OK, but what does it mean in terms of real-world services?

It means that you should design your services to be as granular as possible, to promote composability and reusability. It also means that these services should be stateless (i.e., they should not assume that the server is in a state a previous request left it in); that they should be autonomous (i.e., they should not depend on other services to perform their actual task – unless, of course, we are talking about a composition); that they should be discoverable (i.e., you should have some kind of directory where you can find the service that performs the task you are looking for); that they should be loosely coupled to each other; and, last but not least, that they should have a contract you can rely on – one that is standardized, so that your clients will not have trouble using your service.
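Just to make these principles a bit more concrete (and still without naming any technology), here is a minimal sketch of what a service seen purely as a contract could look like. Everything in it – the names, the operations, the supporting type – is made up for illustration, not taken from any standard:

public interface CustomerLookupService {

    // Granular and composable: one well-defined task per operation.
    CustomerRecord findCustomerById(String customerId);

    // Stateless: each call carries everything it needs; nothing depends on a previous call.
    CustomerRecord findCustomerByDocument(String documentNumber);
}

// Minimal supporting type so the sketch is self-contained.
class CustomerRecord {
    String id;
    String name;
}

Whether this contract ends up exposed as WS-*, REST or anything else is a separate decision.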

Note that, so far, no one has talked about WS-* or REST. That’s because SOA is not defined in terms of technology, or tools, or whatever. SOA is based on a set of good practices, meant to ensure ROI (Return on Investment). BTW, of course SOA is not that cheap (studies show that a brand new SOA implementation is 30% more expensive in the initial phases, but it pays off over time).

But, getting back to technology… neither WS-* nor REST defines SOA. You can have lots of WS-* web services in your architecture, or lots of REST services, and still have no SOA. Again: SOA is based on practices, not technology. Of course, vendors all over the world have developed tools to help achieve these goals (some of them don’t even help that much ;)), but the tools are not the end – they are the means.

If you design your architecture such that enough business logic is exposed as services (again, no one is talking about technology here), then you will have SOA.

Buuuut… here comes the fine print at the bottom.

Why does my company always talk about technology and vendors when it comes to SOA?

The truth is that it can be very hard to promote SOA on the terms mentioned above (or, at least, to do so without tools). That’s why companies almost always do SOA with WS-* and tooling. It is really hard to keep directories of services without any kind of tool, and it is very hard to promote service discovery without WSDLs (OK, REST has WADL, but it hasn’t reached the point where every REST service publishes one. Actually, REST doesn’t really need WADL, which is not the case for WS-* services).

Also, even if your company decides to build its own tooling to promote these practices, it can still be very hard to comply with non-functional requirements, such as performance, monitorability, scalability, etc. Remember that these services are granular, and that the traffic over the network is XML/JSON. Imagine lots of services marshalling / unmarshalling that data everywhere.

So, the first tool that came around to solve these problems is the ESB.

WTH is an ESB?

ESB stands for Enterprise Service Bus. As the name says, it is nothing but a bus to your services. But, wait… a bus that takes what to what?

The responsibility of an ESB is to map requests from a client (usually, these clients see a service, with a contract and everything else) to… somewhere. Really, the ESB may lead to a WS-* service, a REST service, the file system, BPEL (I’ll talk about this one later), JMS, and lots of other protocols. Actually, it should be able to transform any protocol into any protocol. And, of course, this very core of an ESB can be leveraged for several other purposes. Take monitorability, for example: since the ESB maps all these protocols, and all the data passes through it, it can take metrics over that traffic – response times, success rates, failure rates – and fire alerts when these metrics reach a given threshold, and so on. It can also add security to these services (by exposing only the ESB to clients and inserting some assertions into the routings), as well as several other things.
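To give a feeling of what that “any protocol to any protocol” mediation looks like in practice, here is a hedged sketch using Apache Camel as a lightweight stand-in for an ESB. None of this comes from a specific product; the endpoint URIs, the XSLT file and the queue name are all invented, and it assumes a Camel setup with the jetty, xslt and jms components available:

import org.apache.camel.builder.RouteBuilder;

public class OrderMediationRoute extends RouteBuilder {
    @Override
    public void configure() {
        // Clients talk HTTP to a stable endpoint exposed by the "bus"...
        from("jetty:http://0.0.0.0:8181/services/orders")
            // ...which is the natural place to hang monitoring on (here just a log)...
            .log("order request received")
            // ...the payload is transformed to whatever the backend expects...
            .to("xslt:transforms/order-to-legacy.xsl")
            // ...and handed over to a completely different protocol (JMS, in this sketch).
            .to("jms:queue:legacy.orders");
    }
}

The point is not Camel itself: whatever product you pick, the value is that metrics, alerts and transformations live in one place instead of being scattered across every service.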

An ESB can also promote security at the application level. ESBs usually have a feature known as throttling, which prevents a client from sending too much data (so much that it could cause the service to fail). Of course, this kind of thing could also be enforced with XML Schemas, but detailed schemas are hard to maintain in an ecosystem of dozens, hundreds or even thousands of web services.
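Just to illustrate the idea (not how any particular ESB implements it), a per-client throttle can be as small as a counter reset at every time window; the class below is an invented sketch along those lines:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

public class SimpleThrottle {

    private final int maxRequestsPerWindow;
    private final Map<String, AtomicInteger> counters = new ConcurrentHashMap<String, AtomicInteger>();

    public SimpleThrottle(int maxRequestsPerWindow) {
        this.maxRequestsPerWindow = maxRequestsPerWindow;
    }

    // Returns true while the client is still under its quota for the current window.
    public boolean tryAcquire(String clientId) {
        AtomicInteger count = counters.get(clientId);
        if (count == null) {
            counters.putIfAbsent(clientId, new AtomicInteger());
            count = counters.get(clientId);
        }
        return count.incrementAndGet() <= maxRequestsPerWindow;
    }

    // Meant to be called by a scheduler at the end of every time window.
    public void resetWindow() {
        counters.clear();
    }
}

An interceptor in front of the service would call tryAcquire() and reject the request (with an HTTP error or a SOAP fault) when it returns false.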

But one thing an ESB does not do is compose services. Remember service composability? That is not the job of an ESB (although some of them even try to do it). Service composition requires a more business-oriented tool, something like…

BPEL

BPEL stands for Business Process Execution Language. It is the industry standard for composing workflows out of several web service calls. It handles not only the workflow itself, but also several other concerns, like:

  • exposing the composition as a web service itself;
  • rolling back failed transactions (through so-called compensating transactions);
  • keeping the state of transactions, both for recovery from server failure and for auditing;
  • optimizing the calls to the several web services it composes;
  • keeping the flow visual, to ease development and/or auditing.

As you may have realized, BPEL specializes in keeping state. I know, web services should be stateless, and BPEL is – from the point of view of the client. But it keeps the state of its compositions for the more pragmatic reasons mentioned above. It is conceptually different from an ESB, since an ESB should not keep the state of requests anywhere.

So, how to organize these together?

A good architecture should address all the non-functional requirements and, at the same time, be able to change whenever needed. It is a bit hard to cover this subject in a single blog post, but given the features mentioned above, a good architecture could look like this:

Yes, I totally suck at drawing =/

As the picture doesn’t do justice to the idea, let me explain it:

Since the ESB does every kind of transformation, protocol A -> protocol B (which doesn’t mean, of course, that A has to be different from B), it is fair to route all data traffic through it, so that metrics, alerts, throttling, and everything else sit in one place. BPEL, WS-* services, REST services, and so on, will be accessible through it, and BPEL will not see the other services directly, but will reference the services already exposed on the ESB.

Of course this architecture has a huge drawback: too much overhead concentrated in a single piece. The ESB here must be taken care of intensively (I hope your company has some babysitters! =D). That means, maybe, a few extra network cards, some gigabytes of RAM, and a cluster of four or five nodes. But believe me, depending on the number of services your company has (and on the very size of the company), it totally pays off. Of course, it may not make sense if you have only a few services, but I’m talking here about a few hundred (or thousand) services.

Conclusion

You should not believe a single word that’s written here =D (Just kidding!)

Architectures don’t come in a box. You need to be extra judicious when analyzing whether you need to put this kind of thing in place in your company. Perhaps you don’t really need BPEL. Perhaps you don’t need an ESB at all. Perhaps you need one but can’t afford it. Perhaps it makes more sense to have REST services than WS-* and that whole bunch of stuff. Every single aspect of SOA comes with positives and negatives. What you need to do is analyze these points and carefully decide whether you really need the positives and whether you can handle the negatives.


Oracle BPEL Hello World

Hi, everybody! Today I’m gonna show you how to do a hello world using Oracle’s BPEL engine. You are gonna need:

  • A properly installed SOA Suite 11g (I’m not gonna show here how to install it, but there is plenty of good material on this subject on the web);
  • JDeveloper 11g with the SOA extensions enabled;
  • A testing tool named SOAP UI.

So, let’s do it: start your SOA Suite and JDeveloper. Once JDeveloper is open, right-click the applications area, as shown in the figure:

Then, select the “SOA Tier” menu and then the SOA Project:

Enter your project’s name and the project technology (in our case, SOA):

Create your project using Composite with BPEL (as it should be just a simple BPEL project; some day I will explain here what the other types are):

Select the synchronous template and mark the “expose as a SOAP service” checkbox:

Then, you should see something similar to the image:

Click on the Assign component and drag it to the diagram. You will see highlighted positions; these are the places where you can drop the component:

Place the assign in the proper position:

Then, double-click the component you just placed. You should see an image similar to the next one:

Click the ‘plus’ icon and you will see the following options:

Then, click the “copy operation” option. You should see the following screen:

Expand both sides until you see the following screen (you should select the options, too):

Then, click OK. You will get back to the previous screen. Select the “general” tab and change the name of the operation to “AssignEcho”, as in the screenshot:

Click OK and you will get back to the BPEL Designer screen. Now, it’s time to deploy our process. Right-click your project, and you should see a menu like the following:

As you probably don’t have a connection in place yet, select the “new connection” option. Then, follow the wizard:

I’m assuming here that your username is weblogic (and that you know the password as well):

Also, I’m assuming that your SOA Suite is running on localhost, port 7001 (or 7002 if it’s SSL), with a domain named soa_domain. These are the defaults.

Then, click “test connection”. If everything is OK, you should see the “8 of 8 tests successful” status message.

Click OK and, once again, you will get back to the BPEL Designer screen. Now, your new connection should be available on the connections list:

Click the newly-created connection and you should see the deploy screen:

While deploying, it should ask for username and password:

Then, access the Enterprise Manager site (for me, it is available at http://localhost:7001/em). Once you input the username and password, you should see the following screen:

Expand the selections according to the image and select your newly-deployed project:

If you click the “test” button, you should see a screen like this:

Personally, I don’t like using the Enterprise Manager to test my services. I’d rather use SOAP UI. Grab the WSDL of your service and create a project in SOAP UI, like this one:

Once you do so, SOAP UI will create a request like the following (if everything is OK, change the question mark to “hello, world!” or anything like that):

If everything is OK, then you should see a screen like this:

And that’s it! Your BPEL process is working: it echoes every phrase you send to it. If you want to go further, try exploring the Assign component and the others to improve your knowledge.
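If you would rather test the process from code instead of SOAP UI, something along the lines of the JAX-WS dispatch client below should work. Be warned that the WSDL URL, the namespace, the port name and the payload element names here are just my assumptions based on the defaults of the synchronous template – take the real values from the WSDL you grabbed in Enterprise Manager:

import java.net.URL;
import javax.xml.namespace.QName;
import javax.xml.soap.MessageFactory;
import javax.xml.soap.SOAPMessage;
import javax.xml.ws.Dispatch;
import javax.xml.ws.Service;

public class BpelEchoClient {

    public static void main(String[] args) throws Exception {
        // Illustrative values only – check your own WSDL for the real ones.
        URL wsdl = new URL("http://localhost:8001/soa-infra/services/default/HelloBpel/hellobpel_client_ep?WSDL");
        QName serviceName = new QName("http://xmlns.oracle.com/HelloBpel", "hellobpel_client_ep");
        QName portName = new QName("http://xmlns.oracle.com/HelloBpel", "HelloBpel_pt");

        Service service = Service.create(wsdl, serviceName);
        Dispatch<SOAPMessage> dispatch =
                service.createDispatch(portName, SOAPMessage.class, Service.Mode.MESSAGE);

        // Build the <process><input>...</input></process> payload the synchronous template expects.
        SOAPMessage request = MessageFactory.newInstance().createMessage();
        request.getSOAPBody()
               .addBodyElement(new QName("http://xmlns.oracle.com/HelloBpel", "process"))
               .addChildElement("input")
               .addTextNode("hello, world!");

        SOAPMessage response = dispatch.invoke(request);
        response.writeTo(System.out); // should contain the echoed phrase
    }
}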

See ya!

Categories: BPEL

Protecting your services with a simple fuse…

Michael Nygard, in his book Release It! Design and Deploy Production-Ready Software, describes a pattern he calls Circuit Breaker. It is based on the idea of fuses: anything that may be dangerous should be wrapped in a safe structure that can block operation requests whenever there is a chance they could harm the application itself or others. It is best described through the image (click to enlarge):

The Circuit Breaker pattern, as described by Michael Nygard

The flow goes like this: the dangerous operation is wrapped by the fuse. The first state, “closed fuse”, keeps a counter of failed invocations and a threshold. When the client makes an invocation, the fuse lets it pass through. If the call succeeds, the counter is reset; if it fails, the counter is incremented. When the counter reaches the threshold, the fuse trips and moves to the “open fuse” state. This state holds a variable that represents an amount of time and another one representing the moment it became the active state. Any call to the dangerous operation in this state fails immediately, without even invoking the operation. Once the fuse has been open for the specified amount of time, it decides the call deserves another chance, so the next invocation goes through the “half open fuse” state, which actually tries to invoke the operation again. If it fails, the fuse goes back to the open state, resetting the timer; if it succeeds, it goes back to the closed state, resetting the counter of failed invocations.
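To make the three states easier to follow, here is a minimal sketch of the pattern in plain Java. The names, the use of Callable and the synchronization are my own choices – this is not Nygard’s code nor the implementation available in the downloads section:

import java.util.concurrent.Callable;

public class CircuitBreaker<T> {

    private enum State { CLOSED, OPEN, HALF_OPEN }

    private final int failureThreshold;
    private final long openTimeoutMillis;

    private State state = State.CLOSED;
    private int failureCount = 0;
    private long openedAt = 0L;

    public CircuitBreaker(int failureThreshold, long openTimeoutMillis) {
        this.failureThreshold = failureThreshold;
        this.openTimeoutMillis = openTimeoutMillis;
    }

    public synchronized T call(Callable<T> dangerousOperation) throws Exception {
        if (state == State.OPEN) {
            if (System.currentTimeMillis() - openedAt >= openTimeoutMillis) {
                state = State.HALF_OPEN; // the call deserves another chance
            } else {
                throw new IllegalStateException("Fuse is open: failing fast without calling the operation");
            }
        }
        try {
            T result = dangerousOperation.call();
            // Success: close the fuse and reset the failure counter.
            state = State.CLOSED;
            failureCount = 0;
            return result;
        } catch (Exception e) {
            failureCount++;
            if (state == State.HALF_OPEN || failureCount >= failureThreshold) {
                // Trip the fuse and restart the timer.
                state = State.OPEN;
                openedAt = System.currentTimeMillis();
            }
            throw e;
        }
    }
}

Wrapping a web service call then just means passing it to call() as a Callable.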

OK, nice pattern, but what does it have to do with SOA?

The magic of this pattern (and of the whole book) is that it sheds light on subjects that most developers don’t give enough attention to. One of these subjects is that you cannot rely on the network. Period. I have never seen a trustworthy network (and I believe you haven’t either!). So, since SOA is a kind of distributed architecture that relies on the network, and we cannot rely on the network, we can’t rely on services either! And since services are untrustworthy, we can apply this pattern to:

  • Ensure that we won’t get stuck waiting for services that might never return;
  • Ensure that, if the server holding the web service is drowning in invocations, at least we are not the ones who will take it down for good;
  • And many, many other good reasons. Read the book 😉

As Michael himself doesn’t give any hints on how to implement such a pattern, I decided to implement it, and you can download it from the downloads section. It is very simple, and it doesn’t restrict the pattern to web service invocations: any other kind of dangerous operation works too. You can modify the code any way you want to fit your needs. My hint is that, combined with things like AOP and interceptors in general, you can make it the ultimate solution so you never, ever have this kind of problem again.

Cheers!

A Simple Explanation on what an ESB is…

Today, I would like to say just a few words about ESBs. As you may know, I’m a Java developer, but I work specifically with SOA (Service-Oriented Architecture) and service-oriented computing in general. So, my day-to-day tools include not only the Java ones, but also some known as belonging to the SOA stack, like BPM, BPEL and ESB.

ESB?

ESB is an acronym for Enterprise Service Bus. It is a tool designed to provide flexibility to SOA, and it often refers to the Message Broker pattern. Its use often provides flexibility to SOA but, indeed, adds more complexity to the overall architecture.

Generally, an ESB must provide or enhance the following features:

  • Services virtualization
  • Services security
  • Services management
  • Services availability
  • (other) SOA Messaging Patterns
  • WS-* specs support

Services virtualization

To succeed in its intent, the Enterprise Service Bus must have the ability to “hide” the underlying services. This capability is achieved either by deploying services on the ESB itself, with abstract service contracts, or by “hiding” them behind a new service contract and then routing messages to the underlying implementation. This capability is very important because, by doing so, the ESB can:

  • Re-route messages (for example, if a service is not available, it may call another one instead)
  • Add security and SLA (Service Level Agreements) layers
  • Group (or split) service capabilities from (or to) several different service contracts
  • Hide implementation details (for example, suppose we don´t want the client to know that the underlying implementation is a JMS service – yes, it can be one!)
  • And the main purpose: provide service decoupling

The last capability is so important that it deserves its own explanation: the structure of a concrete service contract includes a section where the service address is specified. But suppose I cannot guarantee that this address will always be the same. If the service address changes, all clients will be impacted, and that is certainly not what we expect when implementing SOA. Also, suppose a service model changes (which, of course, is highly undesirable if you want to succeed with SOA, but it may happen in the real world). The ESB can completely override service contracts, routing messages to the real implementation and even transforming messages so they become compliant with the real implementation (just a note here: transformations in SOA are quite undesirable, as they decrease system performance, but they may still be necessary).
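Here is a hedged sketch of what that decoupling can look like, again using Apache Camel as a stand-in for a full-blown ESB (the facade path and the backend addresses are invented, and it assumes the jetty and http components are available). The clients only ever see the facade address, so the real implementations can move, change or fail over without anyone noticing:

import org.apache.camel.builder.RouteBuilder;

public class VirtualizedCustomerService extends RouteBuilder {
    @Override
    public void configure() {
        // The contract/address the clients see: it never changes.
        from("jetty:http://0.0.0.0:8181/services/customer")
            // If the primary implementation is down, quietly re-route to the standby one.
            .loadBalance().failover()
                .to("http://primary-host:8080/customerService")
                .to("http://standby-host:8080/customerService")
            .end();
    }
}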

Services security

Suppose you want a given service to be secure. This service needs mutual authentication through certificates but still needs to be very, very, very (very!) fast, as you cannot tolerate too much delay. Now, consider that this same service is going to be consumed both from inside your application (still being used as a service) and from outside. The outside requests must be handled securely, in contrast to the inside ones. You can then place the whole security layer in the ESB and let it do the rest. This approach has a bonus: you are splitting the processing load across layers and machines, as the ESB machine handles the security layer and the service machine handles the logic itself (along with some XML parsing, on both of them).

Services management

Well, services management is a pretty wide term, but as far as I’m concerned, the main pieces an ESB provides in this sense are:

  • The ability to “split” services
  • The ability to “regroup” services (the opposite of the previous item)
  • Message filtering
  • SLA capabilities

The “split” and “regroup” capabilities were already mentioned under services virtualization, as the ability to “rewrite” service contracts. Message filtering comes from the fact that the ESB generally transcends plain web service capabilities and can filter messages. A practical example: suppose you put a query service in place and a malicious user sends a query designed to overflow the server and make it crash. The ESB can cut off the request and/or the response by limiting, for example, the size of the message, allowing only messages below 8 megabytes to pass.
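Outside of an ESB, the same kind of guard can be written by hand; the servlet filter below is an invented sketch of the “8 megabytes” rule (it assumes a Servlet 3.1+ container for getContentLengthLong and only checks the declared Content-Length, so a stricter version would also cap the stream while reading):

import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class MessageSizeFilter implements Filter {

    private static final long MAX_BYTES = 8L * 1024 * 1024; // the 8 MB limit from the example

    public void init(FilterConfig filterConfig) throws ServletException {
    }

    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest request = (HttpServletRequest) req;
        if (request.getContentLengthLong() > MAX_BYTES) {
            // 413 Request Entity Too Large: refuse the message before it reaches the service.
            ((HttpServletResponse) res).sendError(HttpServletResponse.SC_REQUEST_ENTITY_TOO_LARGE);
            return;
        }
        chain.doFilter(req, res);
    }

    public void destroy() {
    }
}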

Services availability

An ESB can enable high availability / load balancing for web services, increasing the failure-recovery ability of the SOA application. Just in case you ask: this is an example of why one of the service design principles is to make services stateless. So, if any of you asks me how to make a stateful service, it is more likely that I will answer something like “you don’t need this” rather than “do x, y, and z”.

SOA Messaging Patterns

There are lots of SOA (and enterprise integration in general) patterns that an ESB implements. I could spend at least a couple of hours writing about them, but I would only be repeating what is already catalogued; you can check these patterns at the SOAPatterns.org website.

WS-* specs support

A good ESB must comply with a few standards, as this is one of the major goals of SOA. The WS-* specs are a set of specifications that address common issues related to web services, like distributed transactions (WS-Transaction), dynamic addressing (WS-Addressing) and security (WS-Security), just to mention a few. You can check a more complete list of specifications here.

So, what am I waiting for? I want to use an ESB!!

Hold on. There are lots and lots of discussions about whether it is a good idea to place an ESB in a SOA architecture, and about how far the benefits go versus how far the headaches go. I would only be reigniting the “to ESB or not to ESB” flame war by exposing my opinion here, so I would rather keep it to myself. If you want to check it for yourself, have a look at this google query.

Rules hot deploy using Drools / Guvnor – final

05/30/2010

In this final part of the tutorial, I will present how to deploy rules to Guvnor.

As a web application (.war), Guvnor provides much of its functionality as servlets. Recently, a feature to communicate via REST was incorporated, so we can create, update or delete content on the server. This API is known as the REST API and responds under the /api URL. For example, to create a package, the syntax used is as follows:

/org.drools.guvnor.Guvnor/api/packages/<package name>.package

Similar to package creation is the creation of rules, done as follows:

/org.drools.guvnor.Guvnor/api/packages/<package name>/<rule name>.drl

As these invocations are done via REST, the HTTP methods map onto these URLs as follows:

  • GET -> read
  • POST -> create
  • PUT -> update
  • DELETE -> delete

That is, following the previous examples (and in the case of my machine), reading the rule created previously could be done by issuing a GET to the URL http://localhost:8080/drools-guvnor/org.drools.guvnor.Guvnor/packages/seguradora/admissaosegurado.drl
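Just as a hedged illustration of such a GET (using the same Commons HttpClient 3.x style as the creation example below; the URL is the one above and the credentials are placeholders for your own Guvnor user):

import org.apache.commons.httpclient.HttpClient;
import org.apache.commons.httpclient.UsernamePasswordCredentials;
import org.apache.commons.httpclient.auth.AuthScope;
import org.apache.commons.httpclient.methods.GetMethod;

public class ReadRuleExample {

    public static void main(String[] args) throws Exception {
        HttpClient client = new HttpClient();
        // Replace with the username/password of your Guvnor installation.
        client.getState().setCredentials(AuthScope.ANY,
                new UsernamePasswordCredentials("admin", "admin"));

        GetMethod get = new GetMethod(
                "http://localhost:8080/drools-guvnor/org.drools.guvnor.Guvnor/packages/seguradora/admissaosegurado.drl");
        get.setDoAuthentication(true);

        client.executeMethod(get);
        System.out.println(get.getResponseBodyAsString()); // the DRL source of the rule
    }
}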

Along the same lines, if the reader is in doubt about how to make such requests, below is an example of how to create a rule using Apache Commons HttpClient:

 
import java.io.IOException;

import org.apache.commons.httpclient.HttpClient;
import org.apache.commons.httpclient.HttpException;
import org.apache.commons.httpclient.HttpMethod;
import org.apache.commons.httpclient.UsernamePasswordCredentials;
import org.apache.commons.httpclient.methods.PostMethod;

/**
 * @param drl The rule to be created, that is, the body of the rule itself
 * @param drlName The rule name
 * @param packageName Guess =)
 */
public static void createDrl(String drl, String drlName, String packageName) throws HttpException, IOException {
    // URL_FOR_RULES_CREATION is a template such as
    // "http://localhost:8080/drools-guvnor/org.drools.guvnor.Guvnor/api/packages/{0}/{1}.drl"
    String uri = URL_FOR_RULES_CREATION.replace("{0}", packageName).replace("{1}", drlName);
    PostMethod post = new PostMethod(uri);
    post.setRequestBody(drl);
    sendRequest(post, USERNAME, PASSWORD); // your Guvnor credentials
}

private static void sendRequest(HttpMethod method, String username, String password) throws HttpException, IOException {
    HttpClient client = new HttpClient();
    // "users" is the authentication realm; HOST is the Guvnor host (e.g. "localhost")
    client.getState().setCredentials("users", HOST, new UsernamePasswordCredentials(username, password));
    method.setDoAuthentication(true);

    client.executeMethod(method);

    int statusCode = method.getStatusCode();
    String response = method.getResponseBodyAsString();

    if (statusCode != 200 || !response.equals("OK")) {
        throw new HttpException("Something went wrong with the invocation. Status code: " + statusCode + ". Response: " + response);
    }
}

However, there is a problem with this approach: even in the current version of Guvnor, building and creating a package/snapshot cannot be done in an automated manner this way (and that prevents the immediate consumption of the rules). I’m not sure whether I can expose my solution to this problem on this blog (due to a number of restrictions). However, if there is any appeal from the community, I will be happy to check these constraints and, if there are no issues, I may publish it here.

How to dynamically create the rule

Rules can be created through a set of classes available in the org.drools.guvnor.client.modeldriven.brl.* package, which ships in the drools-compiler JAR. The creation of a simple rule can be done as follows:

 

/**
 * @param rule Presumably, a POJO that contains the data for the rule to be generated.
 */
public String generateRuleData(Rule rule) {
    RuleModel model = new RuleModel();
    model.name = rule.getName();

    FieldConstraint constraint = evaluate(rule);

    // NOME_DO_FATO is the name of the fact the rule will match against.
    FactPattern fact = new FactPattern(NOME_DO_FATO);
    fact.boundName = VARIABLE_THAT_REPRESENTS_THE_FACT;
    fact.addConstraint(constraint);

    model.lhs = new IPattern[]{fact};
    model.rhs = getActions(rule);

    BRLPersistence persistence = BRDRLPersistence.getInstance();
    String ruleData = persistence.marshal(model);

    return ruleData;
}

/**
 * This method will retrieve a custom model of what will happen inside the rule's body.
 * Remember that IAction is an interface, so evaluate the possible implementations.
 */
protected IAction[] getActions(Rule rule) {
    // In my case, the method retrieves a representation of setting a value onto the fact.
    ActionSetField actionSetField = new ActionSetField(VARIABLE_THAT_REPRESENTS_THE_FACT);

    actionSetField.fieldValues = new ActionFieldValue[] {
            new ActionFieldValue("attribute", "value", "type - String, Integer, etc.")};

    return new IAction[]{actionSetField};
}

protected FieldConstraint evaluate(Rule rule) {
    /* Must return an implementation of the FieldConstraint interface. The available
       implementations (so far) are SingleFieldConstraint and CompositeFieldConstraint,
       which represent, respectively, a single operation on a fact definition and many
       operations. For particular reasons, I won't present this method's implementation
       here, so it is left to the reader. */
    return null;
}

Where:

  • The FactPattern class represents the fact; its constructor takes the name of the fact to be used. The boundName attribute creates a variable to hold the matched fact.
  • The addConstraint method adds the clauses (constraints) of the fact.
  • The lhs and rhs attributes of the RuleModel class receive, respectively, the definition of the fact and the definition of the consequences.

This method will then return the representation of the rule as a String. I won’t go into details here about how to define the fact itself, as I don’t know, so far, of an automated way of creating it in the rule’s own body. Also, since I went into detail about the syntax in the first part of this tutorial, the automated generation of the fact is left to the reader.

Once the generation is done, just synchronize it with the model presented in the first part of this tutorial and that’s it: rule generation and creation in Guvnor are covered.

Solution analysis

Closing the tutorial, let’s move on to a critical analysis of the solution (as I hope to do every time I post a tutorial here).

One of the most serious problems with this solution was already mentioned in the post itself: the impossibility of creating a snapshot to enable the immediate consumption of the rules. This can be a major problem for those who want an immediate solution (although, as already mentioned, I have a solution to this problem and hope to submit it as soon as possible, both to the community and to the team responsible for Guvnor).

Another problem is the creation of a sequence of rules. You can create sequences through the salience rule attribute, which can be embedded in the body of the rules, or by creating rule flows. However, both mechanisms add problems when new rules are created automatically, which ultimately makes the scheme impractical.

And finally, still on the problems, the management of these rules is problematic when done through Guvnor, so some sort of persistence in a database is recommended. This creates a different problem when the rules are consumed, because the rules consumer is optimized to work with URLs or files; if the rules are consumed from a database, it becomes necessary to create your own REST API (or at least a servlet that responds to the GET method). Moreover, the rules consumer only works with the built package, which can generate an additional build problem.

That said, it must be noted that controlling an application’s rules via an API is extremely powerful, because besides being a flexible mechanism (from the standpoint of the system operator), it is also relatively easy to manage.

Furthermore, the mechanism is also extremely fast (in benchmarks I ran between two different machines, after the creation of the rules consumer, the meter showed times between 1 and 10 ms).

Therefore, it is up to the developers and/or the architect of an application to analyze the cost/benefit of all this. I will just leave a reminder here that the problems I mentioned occur at development time, while the advantages come when the application is already in production. That is, I believe the benefits always outweigh the problems, whether with Drools or any other rules engine.

What is BPEL and its purpose?

BPEL is an acronym for Business Process Execution Language. It was created as a way of translating BPMN into the SOA world, or rather, of bringing a notation for business interaction between web services. However, BPEL is a flawed specification, and here I present my reasons:

1) Make the common case fast – or – Keep It Simple, Sir

The BPEL specification aims at achieving orchestration between web services. However, the interaction between services often requires the translation of complex objects into other complex objects. If there are variations in the input or output parameters, parts of the process must be partially or completely redone, which makes it difficult to maintain.

2) Offer Bonus

The BPEL specification provides standard elements that make it easier to address complex issues, such as waiting for a certain time interval before proceeding with certain tasks. However, for cases that are very common in BPMN, such as interaction with humans, there is nothing standard in the specification. This ends up being left to specific tools, such as Oracle BPEL – which makes the process strongly coupled to a vendor.

3) Learn what to use, and when

BPEL is a tool that should be used on systems that have already finished their development cycle. The tool is primarily for service orchestration – which does not mean it should be taken as the only means to do so. What I mean is that if a system is developed in a single programming language, there is a high probability that an integration done within the scope of that language will be better and faster than one using BPEL tools. Thus, BPEL is ideal for orchestrating separate, existing services into new ones – but it should not be used to create basic services.

Having explained my reasons, I should say that I do not reach for BPEL on every SOA project. Before starting a project that claims to be SOA, it is inevitable to ask yourself whether it really is SOA or just SOA-ready. Most SOA-ready systems do not need BPEL, in contrast to truly SOA systems, which are the ones that need it most.

It’s as simple as that. For top-down, meet-in-the-middle or green-field development approaches, the use of BPEL should be avoided as much as possible. Only with bottom-up approaches can the use of the tool be considered (which is not the same as approved). And even for bottom-up, one must answer some questions:

  • How many different languages are used in the project? If the answer to this question is “more than one”, there is a good chance that BPEL fits well in the context. Otherwise, consider adapting what already exists before considering the adoption of this tool.
  • Must there be a single transaction spanning the invocation of these services? If the answer is yes, I issue a warning: it is possible, with specific tools, to use distributed transaction controllers to ensure that this requirement is fulfilled. However, it is always easier to handle this control within the scope of the applications themselves (use the right tool for what it does best).
  • Are there business analysts involved in the project? If so, maybe BPEL fits well, because the closeness between BPMN and BPEL can help the understanding of the process implementation. Otherwise, BPEL may not be justified.

In conclusion… BPEL, like so many other well-established tools, is a fantastic tool. However, you have to know how to use it. The correct use of certain tools in a project may make their use essential. But the use of the wrong tools can make your project slower to develop, much more expensive and harder to maintain. As with any misused tool.

Categories: BPEL

Is SOA dead or not?

03/11/2010

Well, boys and girls… I inaugurate this blog as a way to express some ideas about IT and, if possible, share some knowledge. Much of the knowledge that accumulates in IT does not come with critical analysis; that is, managers, architects, developers and almost everyone involved in any way with IT do not analyze how important a tool / technology really is for their business / system.

An example is the technology mentioned in the title of this post:

What is SOA for? And for whom?

SOA is essentially an integration technology (EDI, EAI and other acronyms go hand in hand with SOA). This means that any move towards SOA should be carefully thought out. Preparing a system to embark on SOA is a task that requires concern with several aspects:

  • What might other applications need from my system?
  • Will the data that will be exposed be sufficient / excessive?
  • Will the consumers of my system be inside or outside the organization? How do I ensure that only the right people see the data exposed by my services?
  • How do I ensure that the system will always stay flexible?

These are the concerns that surface when starting a SOA implementation. And, somehow, they are responsible for the failure of the technology to deliver the ROI and the agility promised by SOA vendors.

But these concerns belong to development, right? How do we ensure that developers will have these concerns? The answer is simple: the developers are not going to worry about it. Nor should they. The key to an effective SOA implementation is SOA governance.

Apparently, it’s simple. And it is. The problem is that some companies refuse to systematically implement a governance strategy, either because they think it is not necessary or because they think the developers alone can meet the requirements. Turns out they can’t: developers are not paid to think about the flexibility of future systems, only of the current ones. And that is what they do. So the deployment of SOA fails every time a project has no SOA governance.

This is very simple logic. 100% of the failed SOA projects I know of lacked SOA governance. And then some people may say: “right, but almost all cases of SOA with governance fail too.” To those people I say only this: show me the evidence. IBM has a page of success stories for anyone who thinks that SOA is dead.

I say the following: SOA is not dead, nor will it die. People just need to apply the knowledge properly.

Links:

Marco Mendes Blog >> “SOA Dead!? … Another of the series … The festival of foolishness that plagues the Internet.” (pt-BR)

SOA is Dead, Long Live Services

Categories: SOA