Just four people with their heads in the cloud.
In this episode, we talk about the past, present, and future of serverless and the cloud with Erica Windisch, principal software engineer at New Relic and founder of IOpipe, and Yan Cui, AWS Serverless Hero and principal consultant at The Burning Monk.
Jess Lee is co-founder of DEV.
Ben Halpern is co-founder and webmaster of DEV/Forem.
Erica Windisch is principal engineer at New Relic.
Yan Cui is an AWS Serverless Hero and a developer advocate at Lumigo. He is also an author, speaker and helps businesses around the world go faster for less by successfully adopting serverless technologies as an independent consultant.
[00:00:01] JL: This season of DevDiscuss is sponsored by Heroku. Heroku is a platform that enables developers to build, run, and operate applications entirely in the cloud. It streamlines development, allowing you to focus on your code, not your infrastructure. Also, you’re not locked into the service. So why not start building your apps today with Heroku?
[00:00:18] BH: Fastly is today’s leading edge cloud platform. It’s way more than a content delivery network. It provides image optimization, cloud security features, and low latency video streaming. Test run the platform for free today.
[00:00:31] JL: Triplebyte is a job search platform that allows you to take a coding quiz for a variety of tracks to identify your strengths, improve your skills, and help you find your dream job. The service is free for engineers and Triplebyte will even cover your flights and hotels for final interviews. So go to triplebyte.com/devdiscuss today.
[00:00:48] BH: Smallstep is an open source security company offering single sign-on SSH. Want to learn more about Zero Trust Security and perimeter list user access? Head over to the Smallstep Organization page on Dev to read a series about Zero Trust SSH access using certificates. Learn more and follow Smallstep on dev.to/smallstep.
[00:01:11] EW: I think that it would behoove people to learn serverless because it’s less to maintain. It’s less expensive for many applications. There are certain applications that are massively enabled by this.
[00:01:34] BH: Welcome to DevDiscuss, the show where we cover the burning topics that impact all our lives as developers. I’m Ben Halpern, a co-founder of Forem.
[00:01:41] JL: And I’m Jess Lee, also a co-founder of Forem. Today, we’re talking about serverless and the cloud with Erica Windisch, Principal Software Engineer at New Relic and founder of IOpipe, and Yan Cui, AWS Serverless Hero and Principal Consultant at THEBURNINGMONK. Thank you both so much for joining us.
[00:01:57] EW: Of course. Thank you for having us.
[00:01:59] YC: Thanks for having us.
[00:01:59] BH: So Erica, can you kick us off by telling us a little bit about your development background?
[00:02:04] EW: Oh, wow! I started developing in notebooks, physical notebooks, working off of manuals for the Commodore 64, which I didn't own, and from magazines. I was interested in the concept of programming and wanted to program, and I wanted a computer, and that's how I convinced my parents to get me one. Then I was reading books like Applied Cryptography and doing some programming on my own in high school. And I started with a company called Cloudscaling, where I became a maintainer of some OpenStack projects. I then became one of the early engineers at Docker before starting up IOpipe, which was acquired by New Relic last year.
[00:02:42] JL: Tell us about IOpipe.
[00:02:44] EW: So IOpipe was a point solution for developers building applications on AWS Lambda. It was specifically for serverless applications, for observability, to help developers really understand what their application is doing inside that black box. It's an environment where it was harder for developers to really get insight into how their application was running after they shipped it. So we wanted to give them not just usage metrics and runtime metrics, but also application metrics, so that they could know, say, that an error that occurred on a background processing job was for a specific user or a specific order number, things like that, so that you could aggregate and correlate that data together.
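The correlation Erica describes, tying a background-job error to a specific user or order, is commonly done by emitting structured logs with application context attached. A minimal sketch of the idea in Python; the field names and the `charge` helper here are illustrative, not IOpipe's actual API:

```python
import json
import logging
import sys

logging.basicConfig(stream=sys.stdout, level=logging.INFO, format="%(message)s")
log = logging.getLogger("orders")

def charge(order_id):
    # Hypothetical payment call; fails here to simulate a background-job error
    raise ValueError("payment declined")

def process_order(order_id, user_id):
    # Emit one structured JSON log line per event, so an error can later be
    # aggregated and correlated back to the specific user and order
    record = {"event": "order_processed", "order_id": order_id, "user_id": user_id}
    try:
        charge(order_id)
    except ValueError as err:
        record["event"] = "order_failed"
        record["error"] = str(err)
    log.info(json.dumps(record))
    return record
```

Because each line is a self-contained JSON object carrying the application context, an observability backend can filter and group on `user_id` or `order_id` rather than grepping free-form text.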
[00:03:30] JL: And New Relic, now that IOpipe was acquired by them, do you get to continue that work or are you working on other projects?
[00:03:36] EW: Yes. So I do continue working in a different capacity with the serverless team. I work now as an architect with that team, but I’m also now an architect for some other projects, such as our synthetics product, which allows you to do testing of your application. So it was similar to things like Pingdom, right? So that’s our version of that sort of tool where you can build a script that tests your service based on its APIs, so that external behavior.
[00:04:07] BH: In describing everything IOpipe does and the work you've progressed to, how much of this would you say is solving problems that were created because the serverless application concept needs an ecosystem around it? And how much of it is new things that are enabled in the first place because of serverless? In other words, how much was just missing from the serverless ecosystem?
[00:04:33] EW: Yeah. I think part of it is that these weren't necessarily new problems. What was interesting was that we found we could deliver value with less noise in serverless, because you're not managing the servers, you're not managing these other infrastructure components. So there's less noise from those other things. And that allowed us to focus more on application-level value, application-level metrics, application-level structured logs, for instance. That's what I think true observability is: being able to have structured logs in context and correlated with, yes, metrics, but metrics is one of the least interesting things for me in something like a serverless application. It's how users use the application, what users are doing in it, how the thing is breaking, where the fault lines are, in context and in correlation to the users and their workflows.
[00:05:32] BH: And I’m sure that really acts in concert with all the observability stuff New Relic does as a whole. It seems like it’s all kind of rowing in the same direction.
[00:05:40] EW: Yeah, generally. This is one of the reasons why I'm spending a little bit less time focused specifically on serverless, just because we are all growing in the same direction. When we were at IOpipe, we had to do things like build an alerting system. We had to build our own error management. We had to build all of these things, and they were kind of table-stakes functionality. Now these are also components at New Relic, but they're much larger things with individual teams for everything. So I have to spend more time on more projects with more people. So I'm spread a lot more thin.
[00:06:14] JL: Yan, can you tell us about your background?
[00:06:17] YC: Yeah, sure. So I guess I probably didn't start as early as Erica. I got my first computer in '97 when I was about, what, 15 or 16. And then I went to university and did my computer science degree, finished that, and worked in investment banking for a few years as an engineer. But I realized pretty quickly that I was building stuff used by 20 people in the bank, and there were all these things happening outside that were way more interesting. So I left banking and went into, I guess, building Facebook games for a few years, then we all transitioned to mobile games because everyone left Facebook to play games on mobile. So I've spent most of my career working as a games developer, building backends for various different kinds of social games, and I've been working with AWS for over 10 years now. The last couple of years I've been primarily focusing on building things using serverless components, including a social network that runs pretty much entirely on Lambda, API Gateway, DynamoDB, and so on. In that time, I've worked for a fair few different companies focused on serverless technologies. Nowadays I spend half my time working with Lumigo as a developer advocate. Lumigo is a troubleshooting platform for serverless applications, focusing on more complicated AWS serverless environments where you've got lots of events. It's not just about API calls. It's a lot of event-driven architecture, with events flying into SNS, SQS, EventBridge, triggering Lambda functions, and trying to get visibility into the whole process. The other half of my time I work as an independent consultant. I work with companies all around the world to help them adopt serverless technologies to be able to go faster and deliver more value with less overhead in terms of managing infrastructure.
[00:08:12] JL: Wow! Going from banking to gaming to now the cloud. What an interesting progression. So you were given the title, “AWS Serverless Hero”. Can you tell us more about that and how you became one?
[00:08:25] YC: Basically, Erica is an AWS Hero as well, a Serverless Hero. They've got this, I guess, internal committee. As far as I can tell, they meet every quarter and nominate people based on the work you're doing in the community. I write a lot of content. I've published maybe close to 200 articles on serverless at this point, on very different aspects, and I've also done a lot of public talks. So I guess they'd seen my content, someone nominated me in one of these meetings, and they decided to make me a Serverless Hero. Since then, every quarter they've announced new heroes for the different specializations. They've got Container Heroes, they've got Machine Learning Heroes, and Serverless Heroes as well. So how do you become one? Honestly, I'm not sure. I think if you've got a lot of content, if you're contributing to the community through content or maybe open source contributions, then it's just a matter of time before the right person notices what you're doing and nominates you. But being named a Serverless Hero doesn't really mean much. You get a free ticket to the event. That's kind of the main perk you get. We do it because it's a new way of doing things that we feel passionate about and that we believe in, and that's why we do it.
[00:09:54] EW: Yeah. I would say that the main perk, on my side, is access to secrets, getting to know things that are happening behind the scenes at AWS services and features that haven’t been released yet, which we can’t talk about. But we do get to know about them sooner than others.
[00:10:10] YC: Yeah. We go to those briefing meetings. So it’ll tell us the roadmap for what’s coming up. That’s really cool as well.
[00:10:16] JL: Yan, can you tell us about the kind of things you’re doing as an independent consultant at THEBURNINGMONK?
[00:10:21] YC: So I work with lots of companies, both, I guess, large enterprises, I've done some work with people like Toyota and with the DVLA in the UK, and also a lot of small and medium-sized companies who are just taking their first step into the cloud, or maybe migrating an existing application. There are all kinds of different use cases you see now with serverless, and being an independent consultant, it's quite nice to see all these different things people are doing. You get a nice, broad perspective on the different contexts people are working in. You've got people doing lots of APIs, definitely lots of them, but you also see lots of people doing IoT things. You see people doing big data and machine learning type workloads with Lambda and Kinesis moving data around. So there's some really interesting stuff happening out there. Like I said, as a consultant, I mostly work with these companies to offer them advice on architecture and provide them with reviews. Occasionally, I work with clients to help them build a POC. Recently I did some work with one of my clients in Belgium who was building a new social network. I built the backend for them in a couple of weeks using AppSync, which is GraphQL as a service on AWS, plus Lambda and DynamoDB. We were able to build something, and the project went into production in just a few weeks. So it's pretty amazing how much work you can get done with just one person when you don't have to worry about infrastructure and all these other things.
[00:11:57] BH: So cloud serverless, it’s all sort of words for the concept of somebody else’s computer really. Can we get in a bit of the history of cloud and the beginnings before all of this was maybe a little bit more obvious to some of us? Yan, do you want to start in going back in time a little bit and talking about the beginnings?
[00:12:17] YC: Sure. So I guess AWS started in the 2004, 2005 timeframe, when they just had their first service, SQS or something like that. I first got involved with AWS in 2009, when I was just leaving banking. In the bank, we were sitting in these gigantic meetings with 10, 15 people talking about how we were going to get a new server in three months' time, the different specs and what needs to be installed. And then that server comes, and everyone memorizes the IP address straight away because it's a very memorable event. Then you move to the cloud, where if you need a new server, you wait five minutes and you've got a new VM running. So the turnaround time becomes very, very different. In the beginning of the cloud, people would talk about OPEX versus CAPEX, Operational Expenditure versus Capital Expenditure, where if you have to run all these giant machines, you have to have a lot of capital to even get started. But the cloud allows you to just rent what you need. So it levels the playing field for a lot of companies to do things that until then they just wouldn't be able to, because they don't have, say, a couple million sitting in the bank to afford all these data centers and all this hardware. As we moved towards a higher level of abstraction, managing servers became quite difficult, and reliably reproducing the same server setup every single time became a challenge. Then you see tools come up to help you manage that better, and then Docker came around. And around the same time, actually, Lambda was announced in 2014. When it was first announced, it didn't really do much. It let you use S3 events to trigger a Lambda function.
And I think the initial idea was, "Well, this is just something that lets you do some asynchronous processing for files that you put into S3." But then they announced the API Gateway integration for Lambda, I think in 2015. That made serverless a real, tangible thing that people could use to build real applications that do a lot of things. And since then, as they've added more and more triggers, more and more ways to integrate Lambda functions tightly with the rest of AWS, serverless has become more of an integral part of what a lot of people do, from building APIs, to building big data pipelines, to building IoT systems, or machine learning, or even DevOps and DevSecOps, where a lot of the automation can be done using Lambda and things like CloudTrail, EventBridge, and so on and so forth.
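The S3-to-Lambda flow Yan describes, a file landing in a bucket triggering some asynchronous processing, can be sketched as a minimal handler. The record shape follows the standard S3 event notification; the processing step itself is a hypothetical placeholder:

```python
import json

def handler(event, context):
    """Minimal AWS Lambda handler for S3 "object created" events.

    S3 delivers one or more records per invocation; each record names
    the bucket and the object key that triggered the function.
    """
    processed = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        # Hypothetical async processing step (resize, transcode, index, ...)
        processed.append(f"{bucket}/{key}")
    return {"statusCode": 200, "body": json.dumps({"processed": processed})}
```

The same handler signature is what the later API Gateway integration reuses; only the shape of `event` changes, which is why adding new triggers made Lambda so broadly applicable.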
[00:15:01] BH: Erica, how did the last 5 to 10 years of the evolution of cloud and serverless land for you? What was your perspective when this component of the evolution started taking place?
[00:15:13] EW: For me, it started earlier than that, because I started in the web hosting industry. I dropped out of school and got into the web hosting industry and data center providers back in 2001. I was there running my own web hosting service and building a lot of these cloud-type services at the same time that Amazon was, and certainly they were more successful at doing so. But in this last decade, I leveraged all the experience I had through the 2000s of building clouds and building data centers and took that into building OpenStack and doing contracts for customers, particularly in Korea, but also here domestically in the US, to build clouds for people. So I started in the 2000s at least with cloud.com build-outs. There was also Eucalyptus and eventually OpenStack, and I became one of the maintainers. There was an OpenStack Oslo project, which I was deeply engaged with, where I was building out drivers for inter-process communication between the nodes of an OpenStack cluster. And then I went to Docker. The whole process there was really interesting because there was so much money being spent and a lot of failures, right? There was a lot of money spent failing at doing cloud, with companies really trying to catch up with Amazon and willing to just throw money at the wall and see what stuck. I think a lot of that really happened with OpenStack, and OpenStack wasn't just cloud. It was also open source. It was a rebirth of open source in a way: it was such a large project, so much money was put into it, and the way it created its own foundation for the work, rather than going with the Apache Foundation, wasn't exactly new, but it was definitely influential, in how Docker chose not to do those same things, and then in how the Cloud Native Computing Foundation did decide to do a lot of those same things and follow a lot of the same paths that OpenStack tried.
So I think that for me a big part of it was just the model, the community, and the paradigm that came with building these massive projects in open source.
[00:17:46] JL: Over nine million apps have been created and ran on Heroku’s cloud service. It scales and grows with you from free apps to enterprise apps, supporting things at enterprise scale. It also manages over two million data stores and makes over 175 add-on services available. Not only that. It allows you to use the most popular open source languages to build web apps. And while you’re checking out their services, make sure to check out their podcast, Code[ish], which explores code, technology, tools, tips, and the life of the developer. Find it at heroku.com/podcast.
[00:18:19] BH: Empower your developers, connect with your customers, and grow your business with Fastly. We use Fastly over here at Dev and have been super happy with the service. It’s the most performant and reliable edge cloud platform. Fastly CDN moves content, data, and applications closer to your users at the edge of the network to help your websites and apps perform faster, safer, and at a global scale. Test run your platform for free today.
[00:18:46] JL: So a moment ago, Ben sort of gave us a catchall definition of serverless in the cloud. He said, “On someone else’s computer,” but I want to get into the actual terminology. Erica, can you explain the difference between serverless and the cloud for our listeners who might not be familiar?
[00:19:01] EW: So fundamentally, the idea of serverless is really using the cloud for more of the things. It is building fewer of the components and systems yourself. I mean, I just spoke about OpenStack and Docker and the open source models they encompass. But one of the things that's really missing in serverless is exactly those things, right? Because it is consuming services that are already pre-baked for you, that solve your problems, rather than necessarily having to build it yourself. There are a lot of advantages to building it yourself, to participating in the open source world. The Cloud Native Computing Foundation is doing a lot of really fantastic things. I mean, OpenStack still exists. Docker still exists. They're all doing their thing. But when you're building on serverless, you're building your application, and you're not necessarily managing Kubernetes or Istio or Prometheus or any of these other things. So especially if your application is simple, you skip a lot of the steps that are otherwise necessary groundwork. I mean, you could spend a month setting up a Kubernetes cluster so you could run an application, or you could potentially build and deploy the application serverlessly in a few days. At the very least, the level of effort to get started is so much lower, just because you're not building the world in order to put up a building. So much of that foundation already exists for you.
[00:20:34] BH: Think about a developer who came up in the '80s or '90s with a lot of hardware interaction, versus a developer who first got into software two or three years ago through a bootcamp. How does the concept of cloud and serverless land for each of these two people, where one is more experienced, but the other sort of came up in serverless? Do you see an evolution of serverless and/or cloud computing now that it's a little more normal to be further from the hardware?
[00:21:11] EW: I think there's definitely a stronger tendency for a lot of developers to go into things like Kubernetes, because it looks like a virtualization or an abstraction of your hardware, an abstraction of systems that you're familiar with. I think that a lot of developers, especially those that have been doing a lot of front-end work, are less challenged by the serverless paradigm, because you're not dealing with managing servers. There's less to learn, so it's easier to learn. It's also restricted in some ways. You can't do everything serverlessly. So it's not that you can skip containers or Docker or Kubernetes entirely. You're going to need those skills eventually. But there's a lot you can do, a lot you can be empowered to do, by learning serverless first without learning all of these other things as well. And for those that came up longer ago, industry veterans, I could see a perspective that they don't need serverless because they already know this other, well-established way. But I think it would behoove people to learn serverless because it's less to maintain. It's less expensive for many applications. There are certain applications that are massively enabled by this. Cloudflare Workers, for instance, is immensely popular. Lambda@Edge is a similar solution. Fastly and others have solutions like this. And these are solutions that don't really have an alternative other than building out infrastructure. You can go build out your own edge network, sure. Go ahead and do it. It's going to cost you money. Or you can build on top of one of these existing CDNs with custom code components, Lambda@Edge or Cloudflare Workers, and you're now enabled to do something you couldn't do before. So I think there's value for both, but I think the value is probably a little bit different.
I think more of the veterans are using it as a point solution to solve particular problems where newer developers might look at it as an easier way to onboard into building applications.
[00:25:05] EW: Yeah. I want to add to this: just because you can do something at a lower level does not make you more productive. And there's also this lock-in myth that exists around serverless. You could write machine code, you could write assembly language, which is many layers of abstraction beneath so much of what we build, but it's generally not productive to do so. And furthermore, you're actually locking yourself in further by writing machine code or assembly than if you write in something like C or Python or Node. Even with C, you're compiling for an architecture. We're currently looking at an x86 to ARM64 migration occurring in this industry. People who compile applications written in compiled languages now have technical debt: they need build systems that work for multiple architectures, because a lot of people just hard-code things that are x86-isms. So you can actually get more productivity by working at these higher, more abstracted layers like serverless. And in some ways, you're actually locked in less, because you're not presuming behaviors that are particular to the architecture underneath. When you deploy Node.js code, it doesn't matter if it's ARM or x86 underneath when it's serverless. AWS could probably move their fleet of Lambda servers to ARM and very few customers would be affected. Not to say nobody, but very, very few customers would be affected by that kind of migration on Lambda. Whereas if they were to try that migration on Fargate or EC2, it's a much bigger and more complex migration for those customers. There, they're building something in a way that they may see as more productive or more traditional, but it's actually more locked in, in a way.
[00:27:03] YC: Yeah. This whole lock-in argument is such a lazy way to drive fear. I mean, you're never really locked in. There's always a way out. It's just a question of how much effort you're going to put in when you decide to switch your platform or go from AWS to GCP. So there's a cost to pay, but there are a lot of companies now spending a lot of engineering time upfront to hedge against this lock-in risk, and they end up paying a lot of that cost upfront rather than when they actually have a problem and need to switch. And the worst thing is that you end up paying additional costs in terms of how much burden you put on your engineering team to basically run everything themselves. The moment you decide that you can't be locked into AWS, that means you can't derive any value from the managed services that the cloud gives you, and 90% of the value you're going to get is the managed services. It's not just renting virtual machines. As an industry, we copy a lot from the manufacturing world, and the manufacturing world used to have this adversarial view of vendors as well. They didn't trust them. This whole thing about vendor lock-in was a thing there too, but then they realized it just wasn't productive. It's just not the best use of resources and time.
When you look at the manufacturing world now, most of the time companies like Toyota will select a small number of vendors and try to integrate as tightly with those vendors as possible, so that they can get the maximum value from those vendors, from those relationships, and it becomes a partnership rather than, "Okay, we always worry about what if our vendors try to do us over." I think a lot of companies in the serverless space certainly understand that, understand the value the cloud is going to give them, and are taking more and using more of it, as Erica said at the start, and that's where they're going to get the most value. And when you look at the real risk that most companies face, it's not, "Okay, what if AWS decides to, I don't know, kill some service?" That's a Google thing. The real risk is that if you have a competitor that can iterate faster than you, you're going to lose the market long before anything AWS or Google or Microsoft does happens to you.
[00:29:26] JL: So clearly there’s a lot to consider with the technologies and the different vendors involved in this industry. Can we maybe go through like a really quick exercise and do some rapid-fire definitions of some of the words I heard you all share just now? Maybe we can go one by one. And Erica, if you want to start, could you tell us what a container is?
[00:29:47] EW: So it’s easy and complicated. It is both a process model for running applications on a server, but it’s also a way of packaging applications in a way that makes them portable between systems.
[00:30:01] JL: And Yan, what is a virtual machine?
[00:30:04] YC: A virtual machine, how can I explain that without saying virtual machine?
[00:30:11] EW: I can take it if you want.
[00:30:13] YC: Yeah, Erica. You have a better definition.
[00:30:15] EW: So like containers, it's a different process model for running your applications on your server. But instead of running on the CPU within its normal domain, the CPU basically steps back and says, "The real machine is actually underneath. It's in the basement." Normally, with hardware machines, everything was on the ground floor. With virtual machines, they moved all the logic into the basement, and then they said, "Now you can have rooms on the first floor."
[00:30:50] JL: I love that metaphor. That’s great. Yan, what is Kubernetes?
[00:30:54] YC: The worst thing that has ever been invented. So Kubernetes: when you're working with containers, you can't just have one container. If that container dies, your application essentially dies, and you can't have that. You still need a high-availability setup where you have multiple containers, and you need some way to allocate containers to your virtual machines, with different strategies to maximize efficiency in terms of how closely and tightly you pack those containers, so that you maximize the utilization of the resources you have, plus routing traffic to the right container, and so on and so forth. So you need a bit more than just the containers themselves. You need some abstraction layer that manages all your container infrastructure, and Kubernetes is one of the container schedulers you can use. But seriously, it is such a complicated piece of tech that you spend so much time just tinkering and fiddling with Kubernetes and very little time actually going and building your application. So for most companies, I would say Kubernetes is a terrible choice.
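The packing problem Yan alludes to, allocating containers onto machines so as to maximize utilization, can be illustrated with a toy first-fit scheduler. This is a deliberately simplified sketch; real schedulers such as kube-scheduler also weigh CPU, affinity, taints, and spreading:

```python
def first_fit(containers, node_capacity):
    """Toy bin-packing scheduler: place each container (name, memory need)
    on the first node with room, opening a new node when none fits."""
    nodes = []  # each node is a list of (name, mem) placements
    for name, mem in containers:
        for node in nodes:
            if sum(m for _, m in node) + mem <= node_capacity:
                node.append((name, mem))
                break
        else:
            # No existing node has room: provision a fresh one
            nodes.append([(name, mem)])
    return nodes

# Four workloads packed onto 2048 MiB nodes
placements = first_fit(
    [("api", 512), ("worker", 1024), ("cache", 512), ("cron", 256)], 2048
)
```

Even this crude heuristic shows why an orchestration layer exists: doing the placement by hand across a changing fleet, while also handling restarts and traffic routing, is exactly the toil Kubernetes automates (at the cost of its own complexity).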
[00:32:01] JL: Thank you for the editorialized version of the definition. Ben, do you have any terminology that you would like to have defined?
[00:32:09] BH: Yeah. Let’s go to Erica for a definition of CDNs and/or edge compute.
[00:32:17] EW: So CDNs, at their core, tend to provide caching of your resources at various edge endpoints around the world. Those edge endpoints basically just mean it's close to you. If you have a service or some content that is in, let's say, Virginia, and somebody asks for it from, say, Australia, where there's a very large latency between those two points, the person in Australia can request it from a server in Australia, which may or may not have that data cached. But if there are lots of users in Australia, it will cache that data, so those lookups are not going across the ocean but are actually staying local to Australia. And then edge compute says, "Well, great. Now that the data and content are there, it's easier to act on the data where it is." Because when you have to run compute tasks on that data, you can't cache the compute. So rather than going all the way back to the US, it says, "Well, why don't we embed a little bit of compute in here?" You make your application itself more distributed, distributing your code along with the content that's being cached in these large CDN networks.
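Erica's description of an edge node boils down to a lookup-or-fetch flow: serve from local cache when possible, otherwise pay the cross-ocean round trip once and keep the result. A toy model (the origin fetch is a stand-in function, not a real CDN API):

```python
class EdgeNode:
    """Toy model of a CDN edge location: serve from local cache when
    possible, otherwise fetch from the distant origin and cache the result."""

    def __init__(self, region, fetch_from_origin):
        self.region = region
        self.fetch_from_origin = fetch_from_origin  # stand-in for a cross-ocean request
        self.cache = {}

    def get(self, path):
        if path in self.cache:
            return self.cache[path], "HIT"   # served locally, low latency
        content = self.fetch_from_origin(path)
        self.cache[path] = content           # later users in this region stay local
        return content, "MISS"

# Hypothetical Sydney edge in front of a Virginia origin
sydney = EdgeNode("ap-southeast-2", lambda path: f"origin-content:{path}")
```

Edge compute is then the next step: instead of only returning cached bytes, platforms like Lambda@Edge or Cloudflare Workers let you run code inside `get`, next to the cached content.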
[00:33:38] BH: Yan, can you give us a definition of AWS Fargate?
[00:33:42] YC: Yeah, sure. Fargate is what AWS would call a technology that runs on top of ECS, the Elastic Container Service. Basically, Fargate is a way for you, I guess, to simplify running containers with ECS, whereby you don't have to maintain and run your own EC2 cluster for ECS to allocate containers onto. Instead, you just specify a task: what containers you need, the amount of memory, the amount of CPU and resources and so on, and the Docker image that you want to run, and they will manage the provisioning of those resources for you on virtual machines that AWS manages for you. In a way, they kind of like to call it "serverless containers", whereby you minimize the amount of, I guess, infrastructure that you have to manage in order to run containers.
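The task Yan describes, an image plus a CPU and memory envelope with no hosts to manage, maps onto an ECS task definition. A sketch of that shape as a plain Python dict; the names and values are illustrative, and in practice the dict would be passed to boto3's `register_task_definition`:

```python
# Illustrative Fargate task definition: you declare the container image and
# the CPU/memory envelope; AWS provisions and manages the underlying hosts.
task_definition = {
    "family": "web-app",                     # hypothetical task family name
    "requiresCompatibilities": ["FARGATE"],  # run on Fargate, not self-managed EC2
    "networkMode": "awsvpc",                 # required network mode for Fargate
    "cpu": "256",                            # CPU units (0.25 vCPU)
    "memory": "512",                         # MiB
    "containerDefinitions": [
        {
            "name": "web",
            "image": "nginx:latest",
            "portMappings": [{"containerPort": 80, "protocol": "tcp"}],
        }
    ],
}

# With AWS credentials configured, registration would look like:
#   import boto3
#   boto3.client("ecs").register_task_definition(**task_definition)
```

Notice what is absent: no instance types, no AMIs, no cluster capacity planning. That absence is what "serverless containers" refers to.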
[00:34:41] BH: All right. So we actually have a pretty long list of AWS services, all of which are interesting to define, but I don’t think we need to get through the entire list we have here. Let me finish off by throwing the definition of managed service over to Erica.
[00:34:59] EW: Well, I think we have to start with the definition of a service. A service is something that’s provided to you, and really a managed service is just one that you’re not running yourself. So a service might be, like, you build a web service which serves web pages. A managed web service would be one where somebody runs that application for you, and you potentially provide the content or the data for it, but somebody else runs the actual code.
[00:35:28] JL: So through our conversation, we learned a little bit about the history of the cloud and where we are today, but what do you two think the future of all of this looks like? Yan, why don’t you start? Will most companies be serverless in the future?
[00:35:41] YC: Definitely not every company. I don’t think serverless is right for every company. I certainly have very strong opinions about using Kubernetes, but certain companies do need it. Serverless is great, but at some point, when you’re paying for someone to manage your infrastructure for you, there is going to be a premium. So for systems that are really high throughput, you may just find that the cost of running stuff on serverless components is too high. If you look at someone like Netflix, and actually I spoke with some of the guys at Netflix before, they were really interested in Lambda, but they did some back-of-the-napkin calculations and found that if they ran everything on Lambda, it would cost them seven times as much, which at that scale is pretty significant. But at the same time, for lots of companies, and for lots of different workloads within even large companies, serverless is the right fit. So I certainly think the right mindset is what we like to call “serverless first”: you try to build things in a serverless way, and if you can’t, then you move things back into containers and virtual machines. So I don’t think serverless is going to be the only way, but I certainly think there should be a much healthier, I guess, proportion of applications being built serverlessly compared to what we are seeing right now. And also, I do think that in the future you’re going to find many container features coming into the serverless space and vice versa. Fargate, and Cloud Run from Google, are all kind of blurring the line between running things with serverless components versus running containers.
[00:37:23] JL: And Erica, where do you think we’re going with all of this?
[00:37:25] EW: Yeah, I felt like when Docker was happening, I knew it was a revolution. I knew that things were changing. Kubernetes came after, and serverless came after-ish. And I think that machine learning is going to start taking a bigger part of what we’re doing, but I think more and more people are also recognizing the constraints of those machine learning models and the ethics of running machine learning. But I do think it will become a bigger part of what we’re doing and how we’re doing it. I think that Arm is going to be a significant change on the server side, and I think it’s a change that will be easier than it could have been a long time ago, just because we have so many high-level languages and high-level tools to build with. These two things I’m talking about are actually deeply related, because NVIDIA is currently in the position of attempting to acquire Arm, and NVIDIA is the largest semiconductor manufacturer of machine learning chips, and Arm is in all of your phones. It’s in more and more servers. It’s even going to be on the desktop. By the end of this year, Apple will be coming out with an Arm-based laptop. So I think Arm is going to be a big thing in the next few years, along with machine learning, and the fact that NVIDIA is going to own the manufacturing, or at least the licensing, for these technologies is going to be extremely impactful. I don’t know what the impacts of that are going to be quite yet, but I’m really curious to see what’s going to happen with it.
[00:39:20] JL: Join over 200,000 top engineers who have used Triplebyte to find their dream job. Triplebyte shows your potential based on proven technical skills by having you take a coding quiz from a variety of tracks and helping you identify high growth opportunities and getting your foot in the door with their recommendation. It’s also free for engineers, since companies pay Triplebyte to make their hiring process more efficient. Go to triplebyte.com/devdiscuss to sign up today.
[00:39:45] BH: Another message from our friends at Smallstep. They’ve published a series called, “If you’re not using SSH certificates, you’re doing SSH wrong,” on their dev organization page. It’s a great overview of Zero Trust SSH user access and includes a number of links and examples to get started right away. Head over to dev.to/smallstep to follow their posts and learn more.
[00:40:10] JL: Now we’re going to move into a segment where we look at responses that you, the audience, have sent us to a question we asked in relation to this episode.
[00:40:17] BH: The question we asked you was, “What do you wish you knew about serverless and the cloud?” Our first response was from Jeriel who wrote in, “What would a personal project in serverless and cloud computing look like? Since there’s always a pricing model for these service platforms, how do you effectively learn these tools without incurring massive costs?”
[00:40:39] YC: Okay. I guess, for personal projects, the chances are it’s going to be free, because all of these services have a free tier. And the pricing model for serverless is you pay for what you use, so when you don’t use it, you don’t pay for it. In fact, not just for personal projects. For a lot of commercial projects, you may find that your dev environments and things like that are also essentially free, because those spaces have no traffic, or not enough to take you out of the free tier anyway. So that’s also one of the benefits of serverless: the fact that it is pay-as-you-go.
[00:41:17] JL: Your response was a perfect segue into a comment from Basti, which was, “The pricing model for various cloud services has always confused me. The ‘pay-for-what-you-use’ model is often ambiguously worded with terms like ‘micro cents per minute’ and whatnot. What does it mean to charge per minute? Does it take into account the total time the service is idle? Or does it take into account the total time it took to process all requests within a month? Or perhaps the price reflects the total uptime of the service itself within a month? It would be great to clear up some of these ambiguities in the computation of ‘processing minutes’, especially when there are some gotchas involved.”
[00:41:53] YC: Unfortunately, there’s no easy answer for that. And I guess when you’re talking about paying for uptime, you’re talking about server-full services and not serverless services, where you’re paying for the uptime of how long you’re running an EC2 instance, for example, or how long you have a VPC running, or an application load balancer, or whatnot. The unfortunate thing is that those nuances are all different per service, and it’s not always clear, even for people who’ve been using it for a very long time, exactly how they calculate it, and different cloud providers and different services may have slightly different caveats. So for example, for EC2, Erica, I don’t know if I’m still right with this. I think EC2 is charging you by the second.
[00:42:45] EW: Yes.
[00:42:46] YC: Is that right? Okay. Yep. Whereas before, it used to be rounded up to the next hour. With Lambda, you’re paying for how long your function actually runs. So if your code runs for five milliseconds, you pay for a hundred milliseconds of compute time. So unfortunately, there’s no, I guess, one answer that can clarify all of them.
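The Lambda rounding Yan mentions is easy to see in a small cost calculation. At the time of this conversation, Lambda rounded each invocation up to the next 100 ms (AWS has since moved to 1 ms granularity); the per-GB-second price below is the commonly cited figure and may vary by region.

```python
# Sketch of Lambda's billed-duration rounding: you pay for memory allocated
# times billed (rounded-up) duration. Prices and rounding are illustrative
# of the period being discussed, not authoritative.
import math

def lambda_cost(duration_ms, memory_mb,
                price_per_gb_second=0.0000166667, rounding_ms=100):
    # Round the actual duration up to the billing granularity...
    billed_ms = math.ceil(duration_ms / rounding_ms) * rounding_ms
    # ...then bill in GB-seconds: allocated memory (GB) x billed time (s).
    gb_seconds = (memory_mb / 1024) * (billed_ms / 1000)
    return gb_seconds * price_per_gb_second
```

So a 5 ms invocation costs exactly as much as a 100 ms one at the same memory size, which is Yan’s point: the billing model, not your code’s actual runtime, sets the floor.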
[00:43:08] EW: Yeah. I will also add, “Don’t forget your network requests.” And honestly, the most expensive thing for me on AWS is always networking, because to build your application correctly, if you’re building server-full, it’s going to be your load balancers, it’s going to be VPC connections. I mean, I configured an application correctly, the way it should be configured, and I was paying over a hundred dollars a month just to have a base networking stack. And it’s really not sustainable, I think, to do that. But this is a complex topic. Something that would be worthwhile is maybe following Corey Quinn on Twitter, because he’s like the AWS billing expert.
[00:43:48] YC: Yeah. Corey had one podcast where he actually mentioned one client of his who was paying some ridiculous amount, something in the region of a couple hundred thousand dollars a month, or per quarter, for a NAT gateway, because this was a system that was sending a massive amount of data out of the VPC, and they were using the NAT gateway, which charges you by the gigabyte. So when you don’t have a lot of traffic, it’s great. It’s cheap. You’re not paying for uptime. But when you’ve got a lot of throughput, it gets really, really expensive. And I think he basically brought that cost down to almost nothing just by moving them to NAT instances from the AWS Marketplace. Especially around networking, there are a lot of cost gotchas that can really, really hurt you.
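The NAT gateway story comes down to simple arithmetic: the gateway bills both per hour and per gigabyte processed, so the bill scales with throughput. The rates below are the commonly cited us-east-1 figures and are illustrative; they differ by region and change over time.

```python
# Back-of-the-napkin NAT gateway cost: a small fixed hourly charge plus a
# per-gigabyte data-processing charge that dominates at high throughput.
# Rates are illustrative (roughly us-east-1), not authoritative.

def nat_gateway_monthly_cost(gb_processed, hourly_rate=0.045,
                             per_gb_rate=0.045, hours=730):
    return hours * hourly_rate + gb_processed * per_gb_rate

light = nat_gateway_monthly_cost(100)        # ~$37/month: mostly the hourly fee
heavy = nat_gateway_monthly_cost(1_000_000)  # ~$45,000/month: data charge dominates
```

This is exactly the shape Yan describes: nearly free at low traffic, and a dominant line item once you push serious data through it, which is why swapping in self-managed NAT instances can collapse the cost.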
[00:44:40] BH: I think it’s also important to note that the way serverless or cloud computing is priced is a totally made-up paradigm. The rules can be set by whoever is the decision maker for that product and that company. And if it’s not coming intuitively to you, it’s probably because this isn’t necessarily intuitive stuff. It’s a communication thing. As we talked about, we have experts in billing alone. There are even new concepts coming out, like cost management services for clouds, independent companies whose only job is to be an abstraction layer on top of the pricing models. So I don’t think there’s any shame in feeling like it’s more difficult than you’d think, or in assuming that just because you know how to do the coding, the pricing stuff is going to be easy, because it’s its own complex universe.
[00:45:36] YC: Yep. Absolutely.
[00:45:37] EW: One hundred percent.
[00:45:39] JL: All right. This was a really fun conversation. Thank you both for your insights and for joining us today.
[00:45:44] EW: Thank you.
[00:45:45] YC: Thanks for having us.
[00:45:55] JL: I want to thank everyone who sent in responses. For all of you listening, please be on the lookout for our next question. We’d especially love it if you would dial into our Google Voice. The number is +1 (929) 500-1513 or you can email us a voice memo so we can hear your responses in your own beautiful voices. This show is produced and mixed by Levi Sharpe. Editorial oversight by Peter Frank and Saron Yitbarek. Our theme song is by Slow Biz. If you have any questions or comments, please email firstname.lastname@example.org and make sure to join our DevDiscuss Twitter chats on Tuesdays at 9:00 PM Eastern, or if you want to start your own discussion, write a post on Dev using the #discuss. Please rate and subscribe to this show on Apple Podcasts.
[00:46:51] SY: Hi there. I’m Saron Yitbarek, founder of CodeNewbie, and I’m here with my two cohosts, Senior Engineers at Dev, Josh Puetz.
[00:46:58] JP: Hello.
[00:46:59] SY: And Vaidehi Joshi.
[00:47:00] VJ: Hi everyone.
[00:47:01] SY: We’re bringing you DevNews. The new show for developers by developers.
[00:47:05] JP: Each season, we’ll cover the latest in the world with tech and speak with diverse guests from a variety of backgrounds to dig deeper into meaty topics, like security.
[00:47:13] WOMAN: Actually, no. I don’t want Google to have this information. Why should they have information on me or my friends or family members, right? That information could be confidential.
[00:47:21] VJ: Or the pros and cons of outsourcing your site’s authentication.
[00:47:24] BH: Really, we need to offer a lot of solutions that users expect while hopefully simplifying the mental models.
[00:47:32] SY: Or the latest bug and hacks.
[00:47:34] VJ: So if listening to us nerd out about the tech news that’s blowing up our Slack channels sounds up your alley, check us out.
[00:47:40] JP: Find us wherever you get your podcasts.
[00:47:42] SY: Please rate and subscribe. Hope you enjoy the show.