
    How we built a sports social networking platform with microservices, Event Sourcing and CQRS

    Sandeep Rajoria, technical architect

    High performance and scalability – those were two major expectations when a client entrusted us to build a new sports social networking platform. In this blog post, our technical architect Sandeep Rajoria explains how we created the architecture of this top-of-the-line platform.

    Since microservices are designed for scalability and high performance, we decided to build this new product on a microservices architecture combined with Event Sourcing and CQRS.

    The aim of this blog is to share some insights and implementation details about the main components of these architecture patterns – this is by no means a tutorial for microservices!

    Okay, so you might have noticed that I have talked about using Event Sourcing, CQRS and microservices all together. The solution we built combines some important attributes from those concepts. Following is a bird’s eye view of the design:

    [Architecture diagram: bird's eye view of the microservices with Event Sourcing and CQRS]

    Here are the main technologies used in the architecture:

    1. Python-based Chalice framework for AWS Lambda through API Gateway
    2. Python-based Domovoi framework to trigger functions on events
    3. DynamoDB and DynamoDB Streams
    4. SQS and SNS
    5. RDS
    6. ElastiCache
    7. PHP/Nginx-based setup on EC2
    8. And a lot of frontend tech like jQuery, Handlebars, materialize-css, etc.

    Let’s talk a little about the main components of the system.

    Event Store

    The event store is the backbone of this whole setup: it serves as an append-only ledger of all the “events” that occur in the system, saved in chronological order. The implementation we are using is not a full-blown event store (for example, we don’t yet have a replay mechanism in place, and we still have to figure out a standard process for populating data for a new microservice), but it still carries all the basic goodness of one.

    Every change to the system is logged in the event store once it has been validated by the command involved. The attributes that get saved include the event type, the payload, the user who initiated it and a timestamp, to name a few important ones.

    A DynamoDB table with streams enabled acts as our event store, and we have a fan-out Lambda which gets invoked on every new entry (every new event) in the DDB table.
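
    To make this concrete, here is a minimal sketch of what appending an event to such a DynamoDB-backed store could look like. The table name, key schema and attribute names are assumptions for illustration, not our actual production schema.

```python
import json
import time
import uuid

import boto3

# Hypothetical table name and schema, for illustration only.
dynamodb = boto3.resource("dynamodb")
event_store = dynamodb.Table("event-store")


def append_event(event_type, payload, user_id):
    """Append an immutable event record to the event store."""
    event = {
        "event_id": str(uuid.uuid4()),          # unique id for the event
        "event_type": event_type,               # e.g. "post.created"
        "payload": json.dumps(payload),         # serialised event data
        "initiated_by": user_id,                # user who triggered the change
        "created_at": int(time.time() * 1000),  # millisecond timestamp, keeps the ledger chronological
    }
    # With streams enabled on the table, this put_item produces a stream
    # record that invokes the fan-out Lambda described below.
    event_store.put_item(Item=event)
    return event
```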

    Pros: Easy to set up, one can utilise the free tier on DynamoDB, and you can physically check the data in the rows.

    Cons: Can only trigger one or at most two Lambda functions, hence a fan-out is required; resetting the cursor (or replaying events) is not available out of the box.

    Other options: Kinesis Streams.

    Fan-out Lambda

    The job of the fan-out Lambda, as the name suggests, is to fan out events to the microservices that are subscribed to (or interested in) specific types of events. We used the Python-based Domovoi framework, which acts as the stream trigger. The main and only job of this function is to pass the events on to the other Lambda functions.
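
    A rough sketch of such a fan-out handler is shown below. The table name, the subscription map and the downstream function names are made up for illustration, and the decorator relies on Domovoi's DynamoDB stream support.

```python
import json

import boto3
import domovoi

app = domovoi.Domovoi()
lambda_client = boto3.client("lambda")

# Hypothetical mapping of event types to subscribed microservice Lambdas;
# in a real setup this would be configuration, not a hard-coded dict.
SUBSCRIPTIONS = {
    "post.created": ["feed-service-projector", "notification-service-projector"],
    "user.followed": ["feed-service-projector"],
}


@app.dynamodb_stream_handler(table_name="event-store", batch_size=100)
def fan_out(event, context):
    """Invoked by DynamoDB Streams; forwards each new event to its subscribers."""
    for record in event.get("Records", []):
        if record["eventName"] != "INSERT":
            continue  # only newly appended events are of interest
        new_image = record["dynamodb"]["NewImage"]
        event_type = new_image["event_type"]["S"]
        for function_name in SUBSCRIPTIONS.get(event_type, []):
            # Asynchronous invoke, so one slow consumer does not block the stream.
            lambda_client.invoke(
                FunctionName=function_name,
                InvocationType="Event",
                Payload=json.dumps({"event": new_image}),
            )
```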

    Pros: Ease of setup, one single git repo with all the code.

    Cons: Too many responsibilities rest on it, code specific to different microservices can end up sitting in the fan-out, and a very high number of events can create a lag in the processing of events.

    Other options: Kinesis, as it can have separate triggers on different streams.

    CQRS Microservice

    Every microservice we build has a command part and a query part. The command part gets the POST/PUT/DELETE request directly from the UI/browser/app, processes it, updates its internal DB (and a caching layer or a read-optimised DB like NoSQL when required) and publishes all the events produced by the request.

    The query part simply reads from the caching layer (a write-through cache) whenever there is a GET request.

    We used the Python-based Chalice framework for all the microservices.
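
    To make the command/query split concrete, below is a trimmed-down Chalice sketch of one such service. The route paths, table names and event shape are illustrative assumptions, and the query side reads its own table here instead of ElastiCache just to keep the sketch self-contained.

```python
import json
import time
import uuid

import boto3
from chalice import BadRequestError, Chalice

app = Chalice(app_name="post-service")

dynamodb = boto3.resource("dynamodb")
posts_table = dynamodb.Table("post-service-posts")  # the service's own write DB (assumed name)
event_store = dynamodb.Table("event-store")         # the shared event store (assumed name)


@app.route("/posts", methods=["POST"])
def create_post():
    """Command side: validate, update the internal DB, publish the resulting event."""
    command = app.current_request.json_body or {}
    if "author_id" not in command or "text" not in command:
        raise BadRequestError("author_id and text are required")

    post = {"id": str(uuid.uuid4()), "author_id": command["author_id"], "text": command["text"]}
    posts_table.put_item(Item=post)  # the service's internal write model

    # "Publishing" here means appending to the event store; the DynamoDB
    # stream and the fan-out Lambda take it from there.
    event_store.put_item(Item={
        "event_id": str(uuid.uuid4()),
        "event_type": "post.created",
        "payload": json.dumps(post),
        "initiated_by": command["author_id"],
        "created_at": int(time.time() * 1000),
    })
    return {"id": post["id"]}


@app.route("/posts/{post_id}", methods=["GET"])
def get_post(post_id):
    """Query side: in production this hits the write-through cache; the sketch
    reads the table directly to stay self-contained."""
    return posts_table.get_item(Key={"id": post_id}).get("Item") or {}
```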

    Pros: Separation of concerns; reads are independent, faster and optimised; ease of implementation.

    Cons: Extra code and complexity, a slight paradigm shift from the traditional client-server architecture, read and write DBs can be out of sync, and Lambda cold starts were a problem initially.

    EC2 server

    The main job of the Ubuntu-based EC2 instance is to host the frontend application along with the PHP-based legacy part of the application, which takes care of registration and authentication. The microservices use a JWT-based token for auth; it is generated by the legacy application, saved in the browser's local storage and passed on every microservice request. Alongside this main job, the instance also pushes events from the legacy system to the event store by placing them on an event queue, from which they are polled and processed through an SNS job mechanism.
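
    The legacy side itself is PHP, but the hand-off it performs is easy to sketch. Below is a Python approximation of the flow, reusing the append_event helper from the event store sketch above; the queue URL and message shape are assumptions, and the exact SQS/SNS wiring in production differs.

```python
import json

import boto3

sqs = boto3.client("sqs")
LEGACY_EVENTS_QUEUE = "https://sqs.eu-west-1.amazonaws.com/123456789012/legacy-events"  # placeholder URL


def push_legacy_event(event_type, payload, user_id):
    """What the PHP legacy app does conceptually: drop an event onto the queue."""
    sqs.send_message(
        QueueUrl=LEGACY_EVENTS_QUEUE,
        MessageBody=json.dumps({
            "event_type": event_type,        # e.g. "user.registered"
            "payload": payload,
            "initiated_by": user_id,
        }),
    )


def drain_legacy_events():
    """Poll the queue and append each legacy event to the event store."""
    response = sqs.receive_message(
        QueueUrl=LEGACY_EVENTS_QUEUE, MaxNumberOfMessages=10, WaitTimeSeconds=5
    )
    for message in response.get("Messages", []):
        body = json.loads(message["Body"])
        # append_event is the helper from the event store sketch above.
        append_event(body["event_type"], body["payload"], body["initiated_by"])
        sqs.delete_message(QueueUrl=LEGACY_EVENTS_QUEUE, ReceiptHandle=message["ReceiptHandle"])
```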

    Pros: Battle-tested VPS servers with all the wonderful debugging tools.

    Cons: Always on, not the most optimal usage of resources.

    Other options: An S3-hosted UI served through Route53.

    All right folks, that’s a wrap for this time. As you can see, we touched upon all the major parts of this architecture and explained a little about how the system works in its entirety. It really did help us achieve the desired performance and scalability, although managing the whole application has its own challenges. We are also working on projects in other domains, breaking an existing monolith into microservices using the same set of patterns, which is turning out to be even more interesting as it comes with its own set of unanticipated problems.

    Sandeep Rajoria, technical architect
