Wednesday 5 October 2016

Serverless Delivery Architecture



Serverless computing has gained momentum as more than just the hot topic of the moment. It lets teams build and deploy applications and services while abstracting away infrastructure, OS, availability, and scalability concerns. It is quickly becoming a must-have for the modern enterprise, and it will soon be a competitive advantage for those already implementing it.
The name itself is a misnomer: serverless architecture doesn't mean getting rid of the data center through some form of magic that powers compute cycles in thin air. The basic idea is to make the unit of compute a single function, effectively providing a lightweight and dynamically scalable compute environment. It is a way of decoupling application components into independent microservices that run automatically when a predetermined event occurs.
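To make "a single function as the unit of compute" concrete, here is a minimal sketch in the style of an AWS Lambda handler. The event shape and the "create_user" action are hypothetical; a real handler's event depends on whichever trigger invokes it.

```python
import json

def handler(event, context=None):
    # The platform invokes this function when the configured event occurs;
    # no server process of our own runs between invocations.
    action = event.get("action")
    if action == "create_user":
        return {"statusCode": 200,
                "body": json.dumps({"status": "user created"})}
    return {"statusCode": 400,
            "body": json.dumps({"error": "unknown action"})}
```

The function itself is the whole deployable unit: the platform handles provisioning, scaling, and teardown around it.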

While event-driven computing patterns have existed for some time, the serverless trend really caught on with the introduction of AWS Lambda in late 2014. Microsoft has a similar offering with Azure WebJobs, and Google recently announced its version with Google Cloud Functions.

So what is the advantage of using a serverless architecture?


So here is an example of a typical server architecture.




This is how a serverless architecture would look:






Obviously this approach simplifies the architecture considerably.

Let's look at the design patterns that enable a serverless architecture.

The serverless architecture is primarily based on two important design patterns: CQRS and Event Sourcing. Both patterns emerged at roughly the same time, the communities working on them were closely interwoven, and the two ideas were not always clearly delineated. But they are two distinct architectural concepts. They often work well together, yet it's useful to think of them as separate concepts. In essence, CQRS says that you break your system into components that are either read-only or that process updates.

As an example, when a new request comes in, in CQRS we'd put it in the form of a command (the C in CQRS), say "Create a User". The command goes to some microservice whose job is to take this kind of command and see if it can be done; it might respond, "I'm not going to execute this because you are not an admin." Commands, as they are defined in CQRS, can be refused, and that refusal is the response.

Suppose we do go ahead and process the request: we create a user in the database, link the user to some roles, send an email confirmation, and so on. Some events come out, such as "the email has been sent" and "the user has been provisioned". That is the responsibility of the command-processing part.

Now the query side, the Q, handles requests like "Give me the status of this user". The idea is that this part of the system is kept very simple. There is a part of the system that has to figure out how to create a user, but once provisioning has started, it updates a status in the query part that reflects the provisioning progress. Queries can therefore scale differently from command processing. In a busy system we can scale them independently, recognizing that queries take less processing power; since no change is happening, we don't have to worry about consistency rules. So the query part stays simple and fast, and scales accordingly. The command part is where we deal with the hard questions: what if a command comes in to deactivate the user during provisioning, what are the rules around that? Does the command still get processed? Does the provisioning get cancelled? And so on.
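The command/query split described above can be sketched as a toy example (all names here are hypothetical): the command side validates requests and emits events, while the query side only reads a simple status store that those events keep up to date.

```python
events = []           # events emitted by the command side
user_status = {}      # read model consumed by the query side

def handle_create_user(command):
    # Command side: the one place allowed to say "no".
    if not command.get("is_admin"):
        return "rejected: you are not an admin"
    user = command["user"]
    events.append({"type": "UserProvisioned", "user": user})
    events.append({"type": "EmailSent", "user": user})
    user_status[user] = "provisioned"   # keep the read model current
    return "accepted"

def query_status(user):
    # Query side: no validation, no consistency rules, just a fast read.
    return user_status.get(user, "unknown")
```

Because the query side never mutates anything, it can be scaled and cached independently of the command side, which is exactly the point of the separation.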


You could have an event-sourced system that is not CQRS. For example, you could have a single module that responds to queries and also processes commands; in that case you wouldn't really be practicing CQRS because you wouldn't be separating the two.
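That combined shape can be sketched as event sourcing without CQRS: a single module both processes commands and answers queries, with current state derived by replaying the event log rather than kept in a separate read model. The event and command names below are hypothetical.

```python
log = []  # the event log is the source of truth

def execute(command):
    # Commands are recorded as events; nothing else is stored.
    if command["type"] == "create_user":
        log.append({"type": "UserCreated", "user": command["user"]})
    elif command["type"] == "deactivate_user":
        log.append({"type": "UserDeactivated", "user": command["user"]})

def status(user):
    # Queries replay the log to rebuild this user's state each time.
    state = "unknown"
    for e in log:
        if e.get("user") == user:
            state = "active" if e["type"] == "UserCreated" else "inactive"
    return state
```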

However, event-driven instantiation of workloads also means taking care of what happens once a function is invoked. If too many events are generated, the system can become overloaded or max out its resources if you are not careful. If each event is treated as an API endpoint, then you must ensure the right policies are in place for authentication, authorization, rate limiting, error handling, and more. It is also important to understand where different events can be sourced from. When a function is triggered, the job is posted to a queue that either pushes the message to a running worker node, or holds the message until a worker node is available.
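The queue-and-worker dispatch just described can be sketched with a plain in-process queue standing in for a managed message queue:

```python
from queue import Queue

jobs = Queue()

def trigger(event):
    # An incoming event is posted to the queue rather than handled inline,
    # so it simply waits if no worker node is available yet.
    jobs.put(event)

def worker():
    # A worker drains whatever messages are waiting when it runs.
    handled = []
    while not jobs.empty():
        handled.append(jobs.get())
    return handled
```

In a real deployment the queue is where back-pressure and rate limiting are applied, so a burst of events fills the queue instead of overwhelming the workers.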




Benefits of Serverless

The primary benefit is efficiency and cost.
You only pay for the time that your functions are running, and since you no longer have to run an app 24/7, this can yield significant cost savings.
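As a back-of-envelope illustration of pay-per-execution pricing, here is a hypothetical cost calculation. The rates below are only placeholders in the style of 2016-era AWS Lambda pricing; check the provider's current price list before relying on them.

```python
# Assumed, illustrative rates -- not authoritative pricing.
PRICE_PER_MILLION_REQUESTS = 0.20      # USD per 1M invocations
PRICE_PER_GB_SECOND = 0.00001667       # USD per GB-second of compute

def monthly_cost(invocations, avg_duration_s, memory_gb):
    request_cost = invocations / 1_000_000 * PRICE_PER_MILLION_REQUESTS
    compute_cost = invocations * avg_duration_s * memory_gb * PRICE_PER_GB_SECOND
    return round(request_cost + compute_cost, 2)
```

Under these assumed rates, a million 200 ms invocations at 128 MB would cost well under a dollar, which is the contrast with paying for an always-on server.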

Challenges of Serverless

  • Functions are deployed to containers in isolation, so sharing code or libraries between functions is not trivial. The Serverless framework addresses this by letting you configure the directory level to include in the deployment package. As a result, you can include code in your function from a folder at a higher directory level, which can also be used by other functions, and the included files will be zipped up with your deployment package.
  • It is difficult to provide environment variables, for example a DB connection string or an encryption key, without inlining them in your code and making them visible to users of your repo. One option is to store environment variable definitions in JSON files in a gitignored _meta folder and inline the values into your functions as part of the deployment process.
  • Long-running jobs (tens of minutes to hours) are not a good fit; typical execution times are expected to be in the seconds range.
  • Logging and monitoring are a big challenge, and log aggregation capabilities are required.
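The gitignored _meta approach mentioned above can be sketched as follows; the file name, folder, and per-stage layout here are assumptions for illustration.

```python
import json

def load_env(meta_path="_meta/variables.json", stage="dev"):
    # Read per-stage variables from a gitignored JSON file, e.g.
    # {"dev": {"DB_CONN": "..."}, "prod": {"DB_CONN": "..."}}.
    # At deploy time these values would be inlined into the function.
    with open(meta_path) as f:
        return json.load(f).get(stage, {})
```

Because the _meta folder never reaches the repo, secrets stay out of version control while the deployment step still sees them.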


Applications of Serverless

One of the most natural applications for serverless is the Internet of Things. Most IoT devices are essentially sensors combined with some control points.
In a typical IoT use case, one gathers data from sensors, aggregates it in some form of gateway, possibly performing some basic control actions, and then submits the data to a system where it is processed and events are triggered.

Serverless functions are an ideal fit for this kind of event processing.
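For instance, a function might be invoked once per batch of aggregated sensor readings forwarded by a gateway. The event shape and field names below are hypothetical:

```python
def handle_sensor_batch(event):
    # Process one batch of gateway-aggregated readings and flag
    # any that exceed a (hypothetical) temperature threshold.
    readings = event.get("readings", [])
    alerts = [r for r in readings if r.get("temperature_c", 0) > 80]
    return {"processed": len(readings), "alerts": len(alerts)}
```

Because invocations scale with the event rate, a fleet of devices reporting in bursts needs no capacity planning on the processing side.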


Conclusion

Serverless compute is here to stay. However, rather than viewing it as a silver bullet applicable to all sorts of use cases, it should be seen as a design pattern meant for specific cases.
