Using Azure ServiceBus to keep data in sync across a distributed Blockchain system (1)

I'm currently having fun developing a blockchain system distributed across quite a few organisations. Writing data to a blockchain is inherently slow, and persisting a transaction can fail. You can't treat a blockchain the way you treat a database, writing the data as soon as you get it from the user and returning the response once you have confirmation that the write operation has succeeded. Well, of course you can, it's just that you shouldn't. You should instead cache the request and treat it asynchronously. Return the response to the user immediately ("your request has been received, we're now working to persist it in the blockchain"), persist the data to the blockchain, and maybe, once the blockchain transaction has succeeded, send another confirmation: "it's now in the blockchain".

You could do all this with a regular database and several workers triggered by a timer, but why reinvent the wheel? A messaging system does the same job, only much better than you could. I chose Azure ServiceBus because I was already using the Azure cloud for the rest of my project; Google Pub/Sub, Amazon Simple Queue Service, or even a self-hosted Kafka or RabbitMQ would work more or less the same way.


Why use a messaging service?

Because it simplifies your work a lot. Really. You get the data from your client (a browser or whatever), you post it to the message queue, and your job is done.
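
Here's roughly what that looks like with the azure-servicebus Python SDK (v7). This is just a sketch: the connection string and the queue name "blockchain-writes" are placeholders, and I'm assuming the queue already exists in the namespace.

```python
import json

from azure.servicebus import ServiceBusClient, ServiceBusMessage

CONNECTION_STR = "<your Service Bus connection string>"
QUEUE_NAME = "blockchain-writes"  # assumed to already exist


def enqueue_write_request(payload: dict) -> None:
    """Post the client's request to the queue; the caller can respond immediately."""
    with ServiceBusClient.from_connection_string(CONNECTION_STR) as client:
        with client.get_queue_sender(queue_name=QUEUE_NAME) as sender:
            sender.send_messages(ServiceBusMessage(json.dumps(payload)))
```

Your HTTP layer can answer the user ("request received") as soon as enqueue_write_request() returns; the actual blockchain write happens later, in a worker.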

The queue subscribers (the actual modules that will write data to the blockchain) are notified that there is work for them, and they pick up the posted message. Such a worker does its best to persist the data. If it succeeds, everything is fine. If it fails (for example, writing to the blockchain throws an exception), that's still fine: the message will simply be redelivered a bit later, and so on, until either the save succeeds or a retry threshold is reached. Even then the message is not discarded; it is sent to the dead-letter queue instead, where you can inspect it manually (or in an automated way, if you want), maybe alter or fix it, and re-post it to the processing queue once more.
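
A worker could look roughly like the sketch below, again using the Python SDK and assuming the default PEEK_LOCK receive mode: completing a message removes it from the queue, abandoning it releases the lock so the message is redelivered, and once the queue's MaxDeliveryCount is exceeded Service Bus dead-letters it on its own. write_to_blockchain() is a placeholder for your actual persistence call.

```python
from azure.servicebus import ServiceBusClient

CONNECTION_STR = "<your Service Bus connection string>"
QUEUE_NAME = "blockchain-writes"


def write_to_blockchain(body: str) -> None:
    ...  # placeholder for the slow, possibly failing blockchain call


def run_worker() -> None:
    with ServiceBusClient.from_connection_string(CONNECTION_STR) as client:
        with client.get_queue_receiver(queue_name=QUEUE_NAME) as receiver:
            for msg in receiver:  # blocks and yields messages as they arrive
                try:
                    write_to_blockchain(str(msg))
                    receiver.complete_message(msg)  # done: remove it from the queue
                except Exception:
                    # Release the lock so the message is redelivered later.
                    # After the queue's MaxDeliveryCount is exceeded,
                    # Service Bus moves it to the dead-letter queue by itself.
                    receiver.abandon_message(msg)
```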

Once a message is posted to a queue, you'll never lose it. It's either processed successfully, or you have to discard it on purpose. I think that's really powerful.
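
Draining the dead-letter queue by hand can be as simple as something like this (same assumed connection string and queue name): read each dead-lettered message, fix the payload if needed, re-post it to the main queue, and only then complete it so it disappears from the DLQ.

```python
from azure.servicebus import ServiceBusClient, ServiceBusMessage, ServiceBusSubQueue

CONNECTION_STR = "<your Service Bus connection string>"
QUEUE_NAME = "blockchain-writes"


def resubmit_dead_letters() -> None:
    with ServiceBusClient.from_connection_string(CONNECTION_STR) as client:
        dlq = client.get_queue_receiver(
            queue_name=QUEUE_NAME, sub_queue=ServiceBusSubQueue.DEAD_LETTER
        )
        sender = client.get_queue_sender(queue_name=QUEUE_NAME)
        with dlq, sender:
            for msg in dlq.receive_messages(max_message_count=10, max_wait_time=5):
                fixed_body = str(msg)  # inspect / alter the payload here if needed
                sender.send_messages(ServiceBusMessage(fixed_body))
                dlq.complete_message(msg)  # only now remove it from the DLQ
```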

Besides that, you can do a lot more: have multiple subscribers to the same topic notified when a message is posted (really useful for telling other actors about events, like new data being persisted to the blockchain), let just one of many workers receive a given message (such as when processing an update request for the blockchain data), create filters so that certain message types go to a dedicated worker type, auto-forward messages to other queues, schedule or throttle message flows, detect duplicates, and more. See the full list (for Azure ServiceBus) here - https://docs.microsoft.com/en-us/azure/service-bus-messaging/service-bus-messaging-overview
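
For the fan-out case (every interested service gets its own copy of a "data is now in blockchain" event) you publish to a topic instead of a queue and give each consumer its own subscription. A rough sketch, with made-up topic and subscription names that I'm assuming already exist:

```python
from azure.servicebus import ServiceBusClient, ServiceBusMessage

CONNECTION_STR = "<your Service Bus connection string>"
TOPIC_NAME = "blockchain-events"            # assumed to exist
SUBSCRIPTION_NAME = "notification-service"  # assumed to exist


def publish_confirmation(tx_id: str) -> None:
    """Publish once; every subscription on the topic gets its own copy."""
    with ServiceBusClient.from_connection_string(CONNECTION_STR) as client:
        with client.get_topic_sender(topic_name=TOPIC_NAME) as sender:
            sender.send_messages(ServiceBusMessage(tx_id))


def listen_for_confirmations() -> None:
    with ServiceBusClient.from_connection_string(CONNECTION_STR) as client:
        receiver = client.get_subscription_receiver(
            topic_name=TOPIC_NAME, subscription_name=SUBSCRIPTION_NAME
        )
        with receiver:
            for msg in receiver:
                print("now in blockchain:", str(msg))
                receiver.complete_message(msg)
```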
