I think it's no secret that more and more development teams are taking a microservice-oriented approach (and for all the good reasons). With the power that cloud providers give us, plus the benefits of containers (Docker and Kubernetes), I think it is inevitable that microservices will become a standard. Of course this approach is not problem-proof, but it allows us to move some of the monolith-type application issues somewhere else, where we have more flexibility. For example, we trade separation of functionalities for the cost of having to create and maintain the communication channels between the services. That's a good deal, because dealing with one Godzilla-sized application is usually far more painful than working with one of the already existing communication solutions like RESTful services, SOAP services, gRPC (maybe in one of my future articles I will talk a little bit about this abomination) or message queue systems.
In this post I would like to focus on message queue systems, or more precisely on event-based architecture. I will not beat a dead horse and talk about what events and commands are, how the architecture works, etc. There are already tons of articles explaining all of those things. I would like to focus on a mechanism that is not mandatory in event-based architecture but can make your life much easier. It's called event sourcing (roll credits).
For me, event sourcing consists of two elements:
1) Event storage
2) Event replay system
Event storage
Let's have a look at a simple web page <-> database system with events.
Simple, right? The user edits the model, the API sends an event to some sort of notification system, and the event is processed by an event listener, which applies the changes to the read model.
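As a rough sketch, the listener side might look something like this (the `DomainEvent` and `ReadModel` interfaces are just my illustration, not any particular framework):

```typescript
interface DomainEvent {
  type: string;
  payload: unknown;
}

interface ReadModel {
  apply(event: DomainEvent): Promise<void>;
}

// Called by the notification system for every incoming event:
// the listener's only job is to apply it to the read model
// that the web page queries.
async function onEvent(event: DomainEvent, readModel: ReadModel): Promise<void> {
  await readModel.apply(event);
}
```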
Now let's add the event store. The only difference is that before the API sends the event, it saves it in the event store. At first glance this solution raises the question: why is the API the one saving the event to the event store? Shouldn't there be an additional microservice whose responsibility is to do that? After all, we are doing microservices, right?
In normal circumstances that would be an excellent point. But there is one thing that makes it a bad idea in this scenario, and that is: our event store is supposed to be our ultimate source of truth. Imagine a situation where the event is applied to the read model but the process fails to save it to the event store. Was the event sent or not? And how are we supposed to know? The event store tells us what was sent. If the event was not saved in the event store, then this event did not exist. It is "acceptable" if the process fails to apply the event to the read model, as long as we have the information that the event actually was sent. So if the API cannot assure us that the event was "logged", then the whole process is considered a failure.
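In code, the rule boils down to "append first, publish second". Here is a minimal sketch, assuming hypothetical `EventStore` and `EventBus` interfaces:

```typescript
interface DomainEvent {
  id: string;
  type: string;
  payload: unknown;
}

interface EventStore {
  append(event: DomainEvent): Promise<void>;
}

interface EventBus {
  publish(event: DomainEvent): Promise<void>;
}

async function saveAndPublish(
  event: DomainEvent,
  store: EventStore,
  bus: EventBus
): Promise<void> {
  // 1. Append to the event store first -- it is the source of truth.
  //    If this throws, the whole operation fails and the event
  //    "never happened".
  await store.append(event);

  // 2. Publish only after the event is durably logged. A failure here
  //    is "acceptable": the event exists in the store and can be
  //    replayed later.
  await bus.publish(event);
}
```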
Event replay system
And now comes the fun part. Imagine that the web page is part of an internal system. Some employee is filling in client data for 8 hours every day. And now let's say that there was a bug in the system which caused some nasty data issues for one of the clients. The bug was discovered and fixed. But the data issue is still in the read model. What do we do? We have two obvious options:
1) We can try and retrieve the data from a backup. But what if the last uncorrupted data is in the backup from 2 weeks back? What about the changes made during the last 2 weeks? Are they lost?
2) We could ask the web page user to type in the current model state from whatever documents they have. They could do it. But imagine if we have hundreds of those data issues. I don't think the user will be happy about that.
As you can see, both of them suck. But what if we could pick up all of the events and reapply them? We have all of the events in the event store, remember? All we have to do is query the events we want and reprocess them! There are two ways we can do this (a sketch follows the list):
1) We can move those events to the event queue and let the listener reapply the changes to the read model. I think this is the best way to do it, because we are letting the already existing process do the dirty work. But if we send thousands of events to the event queue, the listener will become the bottleneck.
2) The replay process applies the changes directly to the read model. This solution requires us to have the exact same handling as the listener. We can achieve this by keeping the handlers in a separate NuGet package and using it in both the listener and the replay process. The downside is that we are forced to keep both the listener and the replay process up to date when it comes to event handlers. So there are two deployments whenever handlers are modified.
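Here is a minimal sketch of option 1, assuming a hypothetical `readEvents` query method on the store (real stores usually filter by stream, aggregate id, or timestamp range):

```typescript
interface DomainEvent {
  id: string;
  type: string;
  occurredAt: string;
  payload: unknown;
}

interface EventStoreReader {
  // Hypothetical query method for illustration only.
  readEvents(fromInclusive: string, toInclusive: string): Promise<DomainEvent[]>;
}

interface EventBus {
  publish(event: DomainEvent): Promise<void>;
}

// Option 1: re-publish the stored events and let the existing listener
// rebuild the read model, exactly as it does for live traffic.
async function replay(
  store: EventStoreReader,
  bus: EventBus,
  from: string,
  to: string
): Promise<void> {
  const events = await store.readEvents(from, to);
  for (const event of events) {
    await bus.publish(event); // original order preserved
  }
}
```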
So whenever there is a data issue, all you have to do is run the replay process and everything will be fixed. You can make it an exe file, an internal API, a part of the web page, or whatever you like. As long as you keep your event store clean from any issues, you can rebuild even your whole database.
Before I end this post, I would like to mention some PRO TIPS OF THE DAY regarding event sourcing.
PRO TIP 1: Keep your event storage safe
That sounds quite obvious, right? But I want to emphasize once again that your event store should be your ultimate source of truth. With it you can rebuild your whole database. So keep it safe, make backups, and pray it stays alive.
PRO TIP 2: Use one bigger event instead of multiple smaller ones
Not everyone might agree with me on this one. In my opinion, if you have a model that is modified on the editing page, then after the user saves the model there should be one event created that contains the whole model (with the data included). Multiple small events can choke the event notifier / event bus / listener more easily when a bigger replay is executed (because more events need to be replayed). Plus you store fewer events in your event store (although bigger ones). Unless you have some event size limit in your communication pipeline, I don't think it is worth using smaller events.
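To illustrate, a made-up coarse-grained event could carry the whole model state in one payload (the field names here are just an example):

```typescript
// One save on the editing page produces one event that carries the
// entire client model, not just the fields that changed.
interface ClientEdited {
  id: string;
  type: "ClientEdited";
  occurredAt: string;
  payload: {
    clientId: string;
    name: string;
    address: string;
    // ...the rest of the model state
  };
}
```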
PRO TIP 3: One user action equals one event
If you are creating a new user and you want to automatically create a new order list assigned to that user, do it with one event. Do not ever create 2-3 events like CreateUser, CreateOrderList, AssignOrderListToUser! This is a suicidal practice. Imagine if handling one of those events fails. Or even something worse... your events get processed in an incorrect order. Unless your event bus/notifier can guarantee FIFO ordering 100% of the time, this practice is a really bad idea. There are some tricks to try and correlate those events together and add some ordering, but it is quite difficult to achieve. Not worth the trouble when you can just create one event.
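To make the contrast concrete, here is a sketch with hypothetical event shapes:

```typescript
// The risky variant (illustrative names): three events whose handlers
// depend on being processed in exactly this order.
type RiskyEvents =
  | { type: "CreateUser"; userId: string }
  | { type: "CreateOrderList"; orderListId: string; userId: string }
  | { type: "AssignOrderListToUser"; userId: string; orderListId: string };

// The safer variant: one event for the one user action, carrying
// everything the handler needs, with no ordering dependency.
interface UserCreatedWithOrderList {
  type: "UserCreatedWithOrderList";
  userId: string;
  orderList: { orderListId: string };
}
```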
Thanks for reading :)