Event Emitting Microservices (EEMS)

Microservices are the new buzzword. Since that buzz started, I have been following their patterns and approaches. While they definitely have positive sides when adopted in enterprise architecture, I have also been thinking about a slightly modified/enhanced/different version called “Event Emitting Microservices (EEMS)”. You might already know where this is going. Yes, I am talking about microservices that always emit an event after they are done with their atomic job, irrespective of its status.

I am not saying it is a new idea. Many smart minds out there might have already implemented services in this fashion. But wouldn’t it be great if this became one of the common microservice patterns or standards? Event processing is the direction most things are heading these days, and all streaming analytics feeds on events, so this approach would play a role there too. Apart from that, services just being atomic and isolated is not enough; sometimes they need to notify others of their outcome. They have to be expressive in nature too.

Now, let’s see what we can achieve by incorporating the event emitting nature into microservices.

  1. We can overcome the lack of transaction management in SOA and RESTful designs
  2. Multiple subscribers can consume the same event at the same time and cascade their own event chains
  3. We can perform data analysis for predictions, proactive behavior, business monitoring, transaction monitoring, etc.

What do I mean by the above three? Let’s look at each in detail:

Point #1 – Transaction Management

It is a known fact that calls to services over HTTP are not transactional in nature. For example, RESTful invocations cannot be maintained as a typical transaction where we can easily commit or roll back a couple of service calls. That is why SOA and REST have always had some design issues, no matter how well the architects design. To overcome this, big players like IBM and Microsoft, along with OASIS, have come up with many web service standards like WS-TX (Web Services Transaction), WS-ReliableMessaging, WS-Addressing, etc., to save SOA and make it robust.

Unfortunately, not many organizations incorporate these standards in their designs, for many valid reasons. For example, standards like reliable messaging and addressing need certain features to be enabled on both the provider and consumer sides. As part of this, the providers may need to change how they are implemented to support these standards, which may not be accepted by some departments in the organization on the grounds that changes to these web services might disturb the ecosystem. So, all-new web services would have to be created to support this need.

To avoid this and proceed with plain service calls, some crazy stuff like resubmission or auto-retry capabilities needs to be pulled into the enterprise design. Though these are good and useful in some scenarios, from what I have seen they cause more chaos than benefit.

So, how can EEMS overcome this transaction issue?

Basically, it is an EDA approach that guarantees and promises that any given job will be processed successfully and the flow continues further. There is a reason for emphasizing those last few words. What it means exactly is: if any step fails while running a process, it will be retried until it succeeds, and only then does the flow continue with the subsequent steps. So it is not the typical retry behavior we see anywhere except in BPM. This way, we need not roll back the previous steps, as completion of the job is guaranteed.

To be able to do this, the invocation of the microservices should not be done in a single composite service, because it would be difficult for a retry/resubmission framework to continue execution from the point of failure. Instead, they should be invoked in an event driven way. For example, say there are 3 service invocations involved to accomplish a job. We call Service1 directly, and if Service1 is an Event Emitting Micro Service (EEMS), it will emit an event stating its success or failure. Of course, in both cases relevant details will have to be included in the emitted event: the success case should hold the outcome results, if applicable, and the failure case should hold the failure details, as in the sketch below.
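To make this more concrete, here is a minimal sketch of what the emitting side could look like, assuming Apache Kafka as the event backbone. The topic name eems.service1.events and the JSON fields (service, jobId, status, details) are placeholders of my own, not part of any standard:

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class Service1 {

    // Builds the outcome event. The JSON layout is illustrative only.
    private static String outcomeEvent(String jobId, boolean success, String details) {
        return String.format(
            "{\"service\":\"Service1\",\"jobId\":\"%s\",\"status\":\"%s\",\"details\":%s}",
            jobId, success ? "SUCCESS" : "FAILURE", details);
    }

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            String jobId = "job-42";
            boolean success;
            String details;
            try {
                // ... perform the atomic job here ...
                success = true;
                details = "{\"orderId\":\"42\"}";                    // outcome results on success
            } catch (Exception e) {
                success = false;
                details = "{\"error\":\"" + e.getMessage() + "\"}";  // failure details on failure
            }
            // The EEMS contract: emit the event in both cases, irrespective of status.
            producer.send(new ProducerRecord<>("eems.service1.events", jobId,
                    outcomeEvent(jobId, success, details)));
        }
    }
}
```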

Then there should be subscribers to these events to perform the relevant action. For example, in the success case, the subscribing service should invoke Service2 using the outcome of Service1, and this continues down the chain; a sketch of such a subscriber follows.
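Continuing the Kafka assumption, a subscriber that drives the next step might look roughly like this. The helper methods invokeService2 and resubmitService1 are hypothetical stand-ins for the real calls, and the string check on the status is deliberately naive (a real subscriber would parse the JSON properly):

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class Service1EventSubscriber {

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "service2-trigger");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("eems.service1.events"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> record : records) {
                    if (record.value().contains("\"status\":\"SUCCESS\"")) {
                        // Cascade: invoke Service2 with the outcome of Service1.
                        invokeService2(record.value());
                    } else {
                        // Failure: resubmit the job for retry instead of rolling anything back.
                        resubmitService1(record.key());
                    }
                }
            }
        }
    }

    private static void invokeService2(String service1Outcome) { /* call Service2 here */ }

    private static void resubmitService1(String jobId) { /* re-trigger Service1 for this job */ }
}
```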

By doing it this way, there is no need to involve transaction management, as there is a clear way to continue the flow from any point of failure, because the events cascade the service invocations.

But there is one limitation with this approach. This event driven style fits most of the use cases in the current IT landscape, but if a use case is strictly interactive, then responses are expected immediately and they should be prompt, so those cases have to be handled separately. For use cases that have to be synchronous in nature, the only way is to have transaction management, so they have to adopt one of the popular transaction management approaches for SOA and RESTful services, like TCC (Try, Confirm or Cancel), sketched below. You can get more information on this at https://www.atomikos.com/Blog/TransactionsForSOA.
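For readers who have not seen it before, here is a minimal sketch of the shape TCC takes. The interface and class names below are my own illustration, not taken from Atomikos or any particular framework:

```java
// A rough sketch of the TCC (Try, Confirm or Cancel) pattern for synchronous use cases.
public interface TccParticipant {

    // Try: reserve the resources needed for this participant's part of the business action.
    void tryReserve(String txId);

    // Confirm: make the reserved work permanent once every participant's try has succeeded.
    void confirm(String txId);

    // Cancel: release the reservation if any participant's try failed.
    void cancel(String txId);
}

class TccCoordinator {
    // The caller drives all participants through try, then confirm, or cancel on any failure.
    void execute(String txId, java.util.List<TccParticipant> participants) {
        try {
            for (TccParticipant p : participants) p.tryReserve(txId);
            for (TccParticipant p : participants) p.confirm(txId);
        } catch (RuntimeException e) {
            for (TccParticipant p : participants) p.cancel(txId);
        }
    }
}
```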

Point #2 – Concurrency / Multitasking

This is fairly self-explanatory. As you can see, there is an opportunity to have multiple subscribers for an event, so mutually exclusive services or use cases can all subscribe to an event they are interested in and continue their own flows in parallel; see the sketch after this paragraph. This also brings concurrency into the design wherever necessary.
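Sticking with the Kafka assumption, parallel fan-out falls out of consumer groups: each group receives its own copy of every event on the topic. The group names below are placeholders:

```java
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class ParallelSubscribers {

    // Each Kafka consumer group receives its own copy of every event on the topic,
    // so the two flows below process the same events independently and in parallel.
    static KafkaConsumer<String, String> subscriber(String groupId) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", groupId);
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        consumer.subscribe(Collections.singletonList("eems.service1.events"));
        return consumer;
    }

    public static void main(String[] args) {
        KafkaConsumer<String, String> billingFlow = subscriber("billing-flow");
        KafkaConsumer<String, String> notificationFlow = subscriber("notification-flow");
        // Each flow would poll its consumer on its own thread and run its own event chain.
    }
}
```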

Point #3 – Data Analytics / Predictions

We are living in the world of data. Till now we have seen lots of approaches to development, like Test Driven Development (TDD), Domain Driven Design (DDD), etc. But now the new trend is the same DDD, except this time it stands for Data Driven Development. Every organization is now interested in knowing stuff. I read a nice analogy somewhere regarding this: in current IT, data is being fed as oil to so-called analytic engines to produce an energy called “information”. And what is information? Information is wealth. So Data Driven Development is about focusing on capturing all the data about what is happening around and inside the systems or environment. With EEMS, we have support for that too. All the emitted events can be fed to a powerful messaging system like Apache Kafka, which can route them to a distributed file system like HDFS; from there, a big data framework will take care of what to do with them.

So, overall, adding an event emitting nature to a microservice has some benefits after all, and in my opinion, having it as part of the design from the initial stages does only good. Some of the above advantages are more or less inherited from EDA, but they add even more value when incorporated into microservices. We get more fine grained control this way over what to do with the emitted events.

One need not worry about factors like the velocity, volume and variety of these events. We are in the big data world now, and these 3 V’s can be handled by any popular big data framework. Whether to process an event or to ignore it will be handled by the filters/analyzers that are part of these frameworks, as in the small sketch below.
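As an example of such a filter, here is a short Kafka Streams sketch that keeps only failure events for a monitoring pipeline. The topic names and the status check are the same illustrative placeholders used earlier:

```java
import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;

public class FailureEventFilter {

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "eems-failure-filter");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> events = builder.stream("eems.service1.events");

        // Keep only failure events; everything else is ignored by this particular analyzer.
        events.filter((jobId, event) -> event.contains("\"status\":\"FAILURE\""))
              .to("eems.failed.events");

        new KafkaStreams(builder.build(), props).start();
    }
}
```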

I hope I have managed to make some sense of this. I am open to hearing about any improvements that could make this better, or any limitations that I have missed or overlooked.
