March 23, 2018 at 01:20PM
I’ve been in a number of conversations recently about Functions as a Service (FaaS), and more specifically AWS’ Lambda instantiation of the idea. For the lay person, this is where you provide nothing but program code; “everything else” is taken care of by the environment.
You upload and press play. Sounds great, doesn’t it? Unsurprisingly, some see application development moving inexorably towards a serverless, i.e. FaaS-only, future. As with all things technological, however, there are pluses and minuses to any such model. FaaS implementations tend to be stateless and event-driven: they react to whatever they are asked to do without remembering where they were beforehand.
This means you have to manage state within the application code. FaaS frameworks are vendor-specific by nature, and tend to add transactional latency, so they are better suited to doing small things with huge amounts of data than to lots of little things each with small amounts of data. For a more detailed explanation of the pros and cons, check Martin Fowler’s blog (HT Mike Roberts).
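To make the statelessness point concrete, here is a minimal sketch of what a Lambda-style handler could look like in Python. It is an illustration of my own rather than anything from the post: the table name, event shape and counter logic are hypothetical, and the DynamoDB table simply stands in for wherever the application code chooses to keep state between invocations.

```python
# A minimal sketch of a stateless, event-driven handler (Python, Lambda-style).
# Nothing survives between invocations, so the application code persists the
# state it cares about in an external store -- DynamoDB here. The table name,
# event shape and counter logic are all hypothetical.
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("order-counters")  # hypothetical table


def handler(event, context):
    # React to whatever the event asks for; there is no memory of earlier
    # calls, so the running count lives in DynamoDB rather than in memory.
    customer_id = event["customer_id"]
    response = table.update_item(
        Key={"customer_id": customer_id},
        UpdateExpression="ADD order_count :one",
        ExpressionAttributeValues={":one": 1},
        ReturnValues="UPDATED_NEW",
    )
    return {
        "customer_id": customer_id,
        "order_count": int(response["Attributes"]["order_count"]),
    }
```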
So, yes, horses for courses as always. We may one day arrive in a place where our use of technology is so slick that we don’t have to think about hardware, or virtual machines, or containers, or anything else. But right now, as with so many over-optimistic predictions, we are continuing to fan out into more complexity (cf. the Internet of Things).
Plus, each time we reach a new threshold of hardware advances, we revisit many areas that need to be understood afresh, re-integrated and so on. We are a long way from a place where we don’t have to worry about anything but a few lines of business logic.
A very interesting twist on the whole FaaS thing is its impact on server efficiency. Anecdotally, AWS sees Lambda not only as a new way of helping customers, but also as a model which makes better use of spare capacity in its data centres. This merits some thought, not least because “serverless” models are anything but.
From an architectural perspective, these models involve a software stack optimised for a specific need: think of it as a single, highly distributed application architecture which can be spread over as many server nodes as it needs to get its current jobs done. Unlike relatively clunky and immobile VMs, or containers, which are still somewhat less flexible, serverless capabilities can be orchestrated far more dynamically, using up spare headroom in your server racks.
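As a rough illustration of that dynamism (again, my sketch rather than anything the post describes), the snippet below uses boto3 to fan a batch of work out across asynchronous invocations of a hypothetical “resize-image” Lambda. How many workers actually run, and where they land, is entirely the provider’s concern, which is precisely the spare-capacity angle above.

```python
# A sketch of dynamic fan-out: fire a burst of asynchronous invocations and
# let the platform decide how many workers to run and where to place them.
# The function name ("resize-image") and payload shape are hypothetical.
import json
from concurrent.futures import ThreadPoolExecutor

import boto3

lambda_client = boto3.client("lambda")


def invoke_one(key):
    # InvocationType="Event" is asynchronous (fire-and-forget), so the burst
    # is absorbed by however much spare capacity the provider has on hand.
    return lambda_client.invoke(
        FunctionName="resize-image",  # hypothetical function
        InvocationType="Event",
        Payload=json.dumps({"s3_key": key}).encode("utf-8"),
    )


keys = [f"uploads/photo-{i}.jpg" for i in range(500)]
with ThreadPoolExecutor(max_workers=32) as pool:
    list(pool.map(invoke_one, keys))
```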
All of which is great, at least for cloud providers. A burning question is: why aren’t such capabilities available for private clouds, or indeed traditional data centres? In principle, they should be. Yet despite a number of initiatives, such an option has yet to take off. Which raises a very big question: what’s holding them back?
Don’t get me wrong, there’s nothing wrong with the public cloud model as a highly flexible, low-entry-cost outsourcing mechanism. But nothing technological exists that gives AWS, or any other public cloud provider, some magical advantage over internal systems: the same tools are available to all.
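By way of illustration, and as an example of my own choosing rather than one the post names, open-source frameworks such as OpenFaaS already offer the same programming model on self-hosted infrastructure; its python3 template expects a handler of roughly this shape.

```python
# handler.py -- the shape of a function in a self-hosted FaaS framework.
# OpenFaaS's classic python3 template calls handle(req) with the raw request
# body and returns the result as the response; the greeting is illustrative.
def handle(req):
    name = req.strip() or "world"
    return f"Hello, {name}\n"
```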
As long as we live in a hybrid world, which will be the case for as long as technology keeps changing this fast, we will have to deal with managing IT resources from multiple places, internal and external. Perhaps, as with the Docker success story, we will see a sudden uptake in internal FaaS, with all the advantages, not least efficiency, that come with it.