A Microflow is the combination of a Microservice and a well-defined transactional integration flow implemented with an Integration Framework Library.
The main reason for using an Integration Framework Library like Mule or Apache Camel is to take advantage of the 65 integration patterns they already implement. The implementations of these patterns are proven, tested, and maintained in these integration libraries, so we do not need to reinvent the wheel with custom implementations when we want to use a Microservices approach instead of an ESB.
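For example, Apache Camel ships the Content-Based Router pattern as part of its routing DSL, so the pattern's logic never has to be hand-coded. The sketch below is a minimal, hypothetical Camel route; the queue names and the header are made up for illustration:

```java
import org.apache.camel.builder.RouteBuilder;

public class OrderRoutingRoute extends RouteBuilder {
    @Override
    public void configure() {
        // Content-Based Router EIP: the pattern implementation comes from Camel,
        // we only declare the routing rules.
        from("jms:queue:orders.in")                                // hypothetical inbound queue
            .choice()
                .when(header("orderType").isEqualTo("priority"))   // hypothetical header
                    .to("jms:queue:orders.priority")
                .otherwise()
                    .to("jms:queue:orders.standard")
            .end();
    }
}
```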
Microflows promote service autonomy and abstraction: a Microflow only exposes a well-defined public interface to communicate with external applications, such as other Microflows. As with Microservices, setting a good application boundary for our Microflows is key. Containers are a good artifact to help us achieve the application separation we are looking for.
Docker is very popular in the containerization world, and we can find images for almost all application runtimes and servers. It is therefore very tempting to use an app server like Mule Server or Apache Server to host our integration applications. I don’t recommend this practice: if we host more than one app in the containerized app server image, we break the desired application independence and practically take the common integration problems to the next level. Instead, we should host a lightweight application process in our containers, where the main dependency is the application runtime (like the JRE or .NET); Docker’s OpenJDK image is a good choice for our purpose.
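As a sketch of what such a lightweight process can look like, the following standalone application assumes Apache Camel 3’s camel-main module and runs as a plain Java process on top of a JRE image; the route inside it is purely illustrative:

```java
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.main.Main;

public class MicroflowApplication {
    public static void main(String[] args) throws Exception {
        // Standalone Camel runtime: no app server, just a JVM process in the container.
        Main main = new Main();
        main.configure().addRoutesBuilder(new RouteBuilder() {
            @Override
            public void configure() {
                // Hypothetical heartbeat route, only here to keep the example runnable.
                from("timer:heartbeat?period=30000")
                    .log("Microflow container is up");
            }
        });
        main.run(args);  // blocks until the container stops the process
    }
}
```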
The Principal Components of a Microflow:
How to achieve inter-service communication is a common discussion when designing Microservices, and it should not be foreign to Microflows either. Many purists may say that the best way for services to communicate is via HTTP with RESTful interfaces. When it comes to system integrations, especially when uptime is not a quality that all systems, applications, and third parties share, there is a need to guarantee successful delivery of the exchanged data. While in Microservices the arguments focus mainly on sync vs. async communication, in Microflows they are framed in terms of system availability and SLAs. Messaging patterns fit most system integration needs very well, regardless of the communication protocol used.
To make our integrations more resilient, we need a buffer between our services when transmitting data. This buffer serves as transient storage for messages that cannot yet be processed by the destination application. Message queues and event streams are good examples of technologies that can be used as transient storage. The Enterprise Integration Patterns language defines several mechanisms we can implement to guarantee message delivery and to set up fault-tolerance techniques in case a message cannot reach its destination.
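A minimal sketch of this buffering idea with Apache Camel could look like the routes below; the broker acts as the transient storage, and the endpoint URIs and queue names are assumptions for illustration:

```java
import org.apache.camel.builder.RouteBuilder;

public class BufferedDeliveryRoute extends RouteBuilder {
    @Override
    public void configure() {
        // Producer side: hand the message to a durable queue instead of calling the
        // destination system directly; the queue is the buffer between services.
        from("direct:submitInvoice")
            .to("jms:queue:invoices.buffer?deliveryPersistent=true");

        // Consumer side: drain the buffer and push to the destination whenever it is
        // reachable; unprocessed messages simply stay on the queue.
        from("jms:queue:invoices.buffer")
            .to("http://erp.internal/api/invoices");
    }
}
```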
A Microflow should not be limited to one single message exchange channel. In many cases we need to expose different channels for integration, leaving to the consumer the decision of which exchange channel best fits its integration use case. I recommend that every Microflow expose an HTTP/S endpoint and a message queue listener as the inbound entry components of the flow.
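A sketch of this dual-channel entry point in Camel might look like the following; the Jetty component, port, and queue names are assumptions, and TLS configuration is omitted:

```java
import org.apache.camel.builder.RouteBuilder;

public class DualChannelInboundRoute extends RouteBuilder {
    @Override
    public void configure() {
        // Channel 1: synchronous HTTP/S entry point (keystore/TLS setup not shown).
        from("jetty:https://0.0.0.0:8443/orders")
            .to("direct:processOrder");

        // Channel 2: asynchronous message queue listener for the same flow.
        from("jms:queue:orders.inbound")
            .to("direct:processOrder");

        // Shared processing pipeline reached from either inbound channel.
        from("direct:processOrder")
            .log("Processing order ${body}")
            .to("jms:queue:orders.validated");  // hypothetical next hop
    }
}
```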
To migrate legacy implementations of integration flows to Microflows, it is necessary to have a good understanding of transaction processing and, better yet, experience with it. Transaction processing helps us identify indivisible operations that must succeed or fail as a complete unit; any other behavior would result in data inconsistency across the integrated systems. These identified indivisible operations are the transactions we will separate to start crafting our Microflows. Each transaction must fulfill the ACID properties to provide reliable execution. There are design patterns that can facilitate transaction design and implementation, such as the Unit of Work.
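To make the Unit of Work idea concrete, here is a minimal, generic sketch not tied to any particular library: every step of the flow registers its work, and all registered steps either commit together or are rolled back together:

```java
import java.util.ArrayList;
import java.util.List;

public class UnitOfWork {

    // Each step of the flow knows how to apply and how to undo its own change.
    public interface Work {
        void commit() throws Exception;
        void rollback();
    }

    private final List<Work> pending = new ArrayList<>();

    public void register(Work work) {
        pending.add(work);
    }

    // Commit everything as one indivisible unit; on failure, undo what already succeeded.
    public void complete() {
        List<Work> done = new ArrayList<>();
        try {
            for (Work work : pending) {
                work.commit();
                done.add(work);
            }
        } catch (Exception e) {
            for (int i = done.size() - 1; i >= 0; i--) {
                done.get(i).rollback();
            }
            throw new IllegalStateException("Transaction failed, changes were rolled back", e);
        }
    }
}
```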
System integrations commonly exchange data among applications distributed across different servers and locations, where no single node is responsible for all data affecting a transaction. Guaranteeing ACID properties in this type of distributed transaction is not a trivial task. The two-phase commit protocol is a good example of an algorithm that ensures the correct completion of a distributed transaction. A main goal when designing Microflows is that each Microflow’s implementation handles a single distributed transaction.
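As an illustration only, the sketch below delegates a distributed transaction to a JTA transaction manager, which runs two-phase commit across the enlisted resources. It assumes the process runs where a JTA provider exposes `java:comp/UserTransaction` (for example inside a container), and the two resource operations are hypothetical placeholders:

```java
import jakarta.transaction.UserTransaction;
import javax.naming.InitialContext;

public class DistributedOrderFlow {

    public void execute() throws Exception {
        // The JTA transaction manager coordinates prepare/commit (2PC) across resources.
        UserTransaction tx = (UserTransaction) new InitialContext()
                .lookup("java:comp/UserTransaction");
        tx.begin();
        try {
            writeOrderToDatabase();  // XA-capable database enlisted in the transaction
            publishOrderEvent();     // XA-capable message broker enlisted as well
            tx.commit();             // both resources commit, or neither does
        } catch (Exception e) {
            tx.rollback();           // both resources are rolled back together
            throw e;
        }
    }

    private void writeOrderToDatabase() { /* work against an enlisted XA datasource */ }

    private void publishOrderEvent() { /* work against an enlisted XA JMS session */ }
}
```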
Database management systems and message brokers are technologies that normally provide the mechanisms to participate in distributed transactions. We should take advantage of this benefit and always be diligent in investigating which integrating systems or components can enlist in our Microflow’s transaction scope. File systems and FTP servers are commonly not transaction friendly; in these cases we need to use a compensating transaction to undo a failed execution and bring the system back to its initial state. We also need to consider what our integration flow must do in case the compensating transaction fails too. Fault-tolerance techniques are key to maintaining system data consistency in these corner scenarios.
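A sketch of a compensating transaction around a non-transactional FTP delivery, again in Camel, might look like this; the endpoint URIs, queue names, and compensation logic are assumptions:

```java
import org.apache.camel.Exchange;
import org.apache.camel.builder.RouteBuilder;

public class FtpDeliveryWithCompensationRoute extends RouteBuilder {
    @Override
    public void configure() {
        from("jms:queue:payroll.files")
            .doTry()
                .to("ftp://bank.example.com/inbox")      // FTP delivery is not transactional
                .to("jms:queue:payroll.confirmations")
            .doCatch(Exception.class)
                // Compensating transaction: undo the (possibly partial) upload so the
                // remote system returns to its initial state.
                .process(this::deleteRemoteFile)
                // Park the original message so the failure is not silently swallowed; if
                // the compensation itself throws, the exchange fails and the route's
                // error handler (for example a dead letter channel) takes over.
                .to("jms:queue:payroll.failed")
            .end();
    }

    private void deleteRemoteFile(Exchange exchange) {
        // Call the FTP server here to remove the file written by the failed delivery.
    }
}
```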
Dead letter queues and retry mechanisms are artifacts we should always consider to improve the fault tolerance of our transaction processing. If we are creating Web APIs, our APIs must provide operations that consumers can use to undo transactions.
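A typical way to wire both mechanisms in Camel is a dead letter channel error handler with redelivery; the queue names and target endpoint below are hypothetical:

```java
import org.apache.camel.builder.RouteBuilder;

public class ResilientProcessingRoute extends RouteBuilder {
    @Override
    public void configure() {
        // Retry the failing step a few times with backoff, then move the message to a
        // dead letter queue where it can be inspected and replayed later.
        errorHandler(deadLetterChannel("jms:queue:customers.dlq")
            .maximumRedeliveries(5)
            .redeliveryDelay(2000)
            .useExponentialBackOff());

        from("jms:queue:customers.inbound")
            .to("http://crm.internal/api/customers");
    }
}
```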
In summary, these are the steps to follow when migrating a legacy integration flow app to Microflows. The steps are not limited to migrations, since they can also be used to design Microflow integrations from a green field:
Finally, we promote each transaction to a Microflow and deploy it to our solution environments. This helps us treat each Microflow independently, making it easier to maintain and support.