This week, we are starting a new interview series as part of the CenturyLink Labs. We are going to try to keep these interviews short (30-40 minutes) and packed with great content every week. Last week, we talked about Deis and Flynn. This week, we are talking with the CTO of Deis, Gabriel Monroy.
Gabriel Monroy, CTO at Deis
Gabriel Monroy is a systems architect with 15 years experience designing, developing and operating large-scale IT infrastructure. Gabriel is the creator of Deis, the leading Docker PaaS. As an early contributor to Docker and CoreOS, Gabriel has deep experience pushing the boundaries of early-stage technology and speaks frequently at industry events on PaaS and distributed systems. Gabriel co-founded OpDemand which was acquired by Engine Yard in 2015. He lives in Boulder, CO, but spends a lot of time in airplanes traveling to tech conferences around the country.
Listen to the Podcast
You can subscribe to this podcast on:
Do you have any statistics about how large the Deis user base is?
Over 100 known installations
Is Deis built to be multi-tenant or single tenant?
Is Deis built to be multi-server or single-server?
Deis is built to be multi-server, but without any database or backend services managed by Deis
What’s next for Deis?
Deis 0.9 was just released with Dockerfile support, domain names, and HA routing
How is Deis different than Flynn or Dokku?
Different from Dokku because it is built for multi-server
In Deis, apps are git deployed like Heroku, but can you talk about how services are managed in Deis?
Deis does not do any database service management, only managed application code
What do you think about ambassadors in Docker?
Deis uses ambassadors for auto-registering and load balancing application instances transparently
How do you scale Deis?
By adding more Deis nodes, which register themselves into the CoreOS cluster via etcd
What is Deis’ relationship with OpDemand?
OpDemand is the parent company that develops the Deis open-source project
And a few more great questions...
If you have any questions for Gabriel, feel free to leave comments in this post.
Here is the full transcript:
Lucas Carlson: I am the Chief Innovation Officer for CenturyLink and I'm interviewing Gabriel Monroy from Deis. He's the CTO and I'm very excited to have him on, hear about him and the Deis project. Could you tell us more about yourself, Gabriel?
Gabriel Monroy: Sure, yeah. First of all, thanks for having me. Appreciate the opportunity. I've been working in IT operations for a long time, about 15 years now. I started off my career working for Intuit. I left there as a senior systems architect. From there, I went to work on a number of start-ups in the New York area and later as a freelance consultant.
Most of the work I did was focused on deployment automation and developer tooling. It was a natural flow into the land of Docker. I've been an early and frequent contributor to the Docker project, and I created Deis and open-sourced it in August of last year.
Lucas: It's a cool project and it's fun to watch. I myself built AppFog, the platform as a service built on Cloud Foundry. Platform as a service is very near and dear to my heart. It's really cool to see this next generation of Micro-PaaS stuff coming out. Could you tell us a little bit about Deis, what it is, why people want to use it? Just give us the pitch.
Gabriel: It's interesting that you bring up the Cloud Foundry perspective here. After Docker came out, it was clear to me that the way that you build application platforms for developers changes after the introduction of a technology like Docker; you're going to do things differently than you might have before.
We were interested in tackling that problem head on and figuring out, "What does a Docker PaaS look like?" For us, from the get-go, we were focused on the developer workflow portion of things and probably less so on the underlying implementation. We thought it was more interesting to figure out, "Now that Docker exists, how can developers operationalize this and use it?" We took a lot of inspiration, especially in the beginning, from Heroku. Heroku provided a fantastic experience for developers and what we wanted to model.
Over time, we see ourselves as more of a Docker PaaS than as a private Heroku, although we are that as well. [laughs]
Lucas: Which is interesting, because the whole Docker project came out of the dotCloud platform as a service. It's interesting how Docker came out, and yet, it's birthing these new PaaSes. How do you see it different than Heroku, or Docker, or dotCloud?
Gabriel: We buy into the whole 12-factor methodology. That's the best way to build applications: scale them horizontally, make them stateless, and follow a number of best-practice rules for distributed systems and service-oriented architectures. That was a natural fit for us. In terms of how we're different from dotCloud, I truthfully wasn't that familiar with dotCloud's platform.
I know they were doing a ton of interesting things with services and databases. From our standpoint, we like the idea that you can have these horizontally scalable, stateless applications connected to a load balancer. Then attach those to backing services that are managed separately from the platform. In a lot of cases, loosely [inaudible 04:15] and often managed using traditional means.
That's one of the things that we purposely don't do in Deis today, because we're really so focused on trying to get the stateless application part of it right.
Lucas: Deis is really focused on the application code, and running the application code. It doesn't handle services. Can you or can you not spin up MySQL or Postgres within Deis?
Gabriel: No, you cannot. As someone with many years of Ops experience, running databases inside containers makes me a little nervous, at least today. That's going to change over time, but if I'm a DBA and I'm telling someone that your data is under var/lib/docker/vfs/GUID, whatever, and you don't have a choice of underlying block device that you can put it on that's distinct from the rest of your containers, there are a lot of problems there. Not to mention, how do you properly do backup and restore, or a disaster recovery policy?
All those things need to be thought out properly. One of the things that we implemented early on, in the early days of Docker, was the -v flag, for bind-mounting files from the host into containers.
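For context, the bind-mounting he describes looks something like this (the paths and image tag here are illustrative, and this requires a running Docker daemon):

```shell
# Bind-mount a host directory into a container so the data lives on a
# block device you control, not under /var/lib/docker.
# Illustrative paths and image tag only.
docker run -d \
  -v /mnt/ssd/pgdata:/var/lib/postgresql/data \
  --name pg postgres
```

The point is that `-v` lets you pick where the data actually lives, which is exactly the choice a DBA wants.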
It's something we needed. We know firsthand that there are a ton of problems with things like UID/GID mappings and stuff like that if you're actually running production databases. We want that stuff to get sorted out a little bit over time. Even after it becomes possible or recommended to run databases, I still think that Deis will have the idea of a service gateway that is used to do service attachments in a Heroku-style, loosely coupled fashion.
Lucas: For me, I think that helps me understand more what you guys are doing. Helps the audience understand better. The two big things that I would take away from your description is that the big part of the difference between Deis and Heroku, or Deis and dotCloud is one, that you can run it on your stuff, that's a huge differentiator. Two, is that it doesn't deal with services.
So, you have to do your services on your own. That makes sense for me. Some of our audience will know things like Flynn and Dokku as other PaaS alternatives for Docker. How do you see it different? Where are its strong points? How do you see it differentiated from some of the other Docker PaaS there?
Gabriel: Yes. First of all, Dokku has been around forever. Jeff Lindsay, its creator, is a talented developer and someone I consider a friend of mine. We took a lot of inspiration from the early days of Dokku. But as people may or may not know, Dokku is a single-host PaaS, and only designed to be a single-host PaaS. There is obviously a key difference right there. But a lot of the concepts carried over, like the way that buildpacks are integrated with Docker.
We took a lot of inspiration there. We contributed to the buildstep project and the [inaudible 07:38] for a while. With regard to Flynn, that's pretty interesting. There are a number of differences between the two projects. There are philosophical differences, technical differences, UX differences, and a difference in general approach to the [inaudible 07:56] system.
Maybe we could spend some time talking through those. On the philosophical side, the Flynn guys did a great job coming up with a specification, like a spec doc that describes how everything would work. That was smart, guys, and, respect, well done. We had a different approach. Our approach was premised on the idea that we don't really know how things are going to play out with technologies as new as Docker. Who could have predicted the stuff that the HashiCorp guys are doing back when we were starting? Or that CoreOS and fleet were going to advance in the way that they did? So rather than saying, "This is the Bible and we are going to go after that."
We started first with: what kind of workflow can get developers moving right away, today? Even if it means the code may not be pretty, or some integration may not be ideal. How do we get people testing our workflow so that we can learn from it and improve the underlying implementation over time? Philosophically, that's a big difference.
Technically, the Flynn PaaS is incredibly ambitious, especially given how few developers they have. They're writing everything from scratch in Go. There's no real external dependency. We aren't doing that. We're more focused on working with other players in the community who are focused full-time on things like CoreOS and fleet. There are some other companies that we're talking with right now that have different scheduling implementations that we hope to integrate with Deis as well. We're happy to use third-party, off-the-shelf components where they make sense, always with an eye towards what is the best workflow for the developer.
Lucas: Is Deis built to be single-tenant or multi-tenant?
Gabriel: We haven't done a ton of work right now to make it multi-tenant and do proper isolation with cgroups. If you're really doing multi-tenant for some of this stuff, you should be doing kernel hardening and a lot deeper resource constraints. Right now, most of the deployments and most of the customers that we work with are single-tenant with multiple applications.
Lucas: For multi-server Deis, when you have to grow beyond a single server, like Dokku is stuck on a single server, how does that work?
Gabriel: It's based on the way that you scale etcd clusters. As you add nodes, typically the way this works is you plug in your discovery endpoints so the etcd nodes can find each other. Really you just add nodes to the cluster, they discover each other, and they automatically get included in the scheduling logic.
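As a sketch of what joining a node looked like in that era, a CoreOS cloud-config pointed every machine at the same etcd discovery URL (the token below is a placeholder, not a real one):

```yaml
#cloud-config
coreos:
  etcd:
    # All nodes share one discovery URL (placeholder token); etcd uses
    # it to find peers, so new machines join the cluster automatically.
    discovery: https://discovery.etcd.io/<token>
    addr: $private_ipv4:4001
    peer-addr: $private_ipv4:7001
  units:
    - name: etcd.service
      command: start
    - name: fleet.service
      command: start
```

Booting another machine with the same file is how "just add nodes" worked in practice.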
Lucas: It seems like there's a lot of CoreOS integration to Deis, and the 0.8 release really made that solid. Can you tell us how does Deis and CoreOS interact? How much is CoreOS, how much is Deis and in what way do they leverage each other?
Gabriel: CoreOS, much like Docker, provides a great set of primitives for building something like an application platform. Docker provides the core containerization technology. CoreOS provides etcd, which is a terrific way to do service discovery, as well as fleet, which is a terrific way to do job scheduling.
I have a lot of faith in those guys. They're doing great work and the fleet project is improving rapidly. We're really interested in evolving as they evolve. Where does Deis begin? That's an interesting question. For CoreOS, the entry point is systemd unit files, writing systemd unit files that may or may not have some special fleet metadata inside of them. That's great if you're into that level of systems administration, but if you're looking for an off-the-shelf platform that just works, you're going to need things like routing containers and ambassador [inaudible 12:41] containers for things like logging and announcing.
You're also going to need authentication and authorization. You're going to need databases to track builds and config changes and releases. There's a whole set of platform stuff that CoreOS is not designed to solve. But if you want a PaaS, you need that stuff.
That's really what we are laser-focused on. We are happy to work with CoreOS, fleet, and also others. Together, they make a great way to schedule Docker containers across a distributed system. Frankly, we just submit the jobs and make sure they're wired up to the router, and again, try to stay focused on that developer workflow. We have some interesting things planned in that space, too.
Lucas: You bring up two interesting things. One is I want to hear what's next for Deis, but the other thing you bring up is ambassadors, which I've written about a little bit. How do Deis and ambassadors fit together? Is it something that you're building into Deis? Or is it something where you're just supporting people, introducing them to their systems at the application level? How do you think about ambassadors?
Gabriel: Part of the idea with Deis is that we do all that stuff for you. Best practices for those types of things evolve over time: they evolve as Docker gets better, and they evolve as best practices in general evolve. Right now, when you scale a container inside of Deis, we actually spin up three containers.
We spin up the container you asked for, we spin up a log container which does per-container log aggregation through a separate log channel, then an announce container, also known as the presence container, which does a health check on the container to make sure it's listening and healthy and publishes that to etcd so that we can publish it on the router. That's how we get things like zero-downtime [inaudible 14:44] and all that.
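As a rough sketch of the presence idea (this is not Deis's actual code; the etcd endpoint, key layout, and API version are all assumptions for illustration), an announce container might health-check its app and publish a TTL'd key that the router watches:

```python
import socket
import urllib.parse
import urllib.request

ETCD = "http://127.0.0.1:4001"  # hypothetical etcd endpoint


def is_listening(host, port, timeout=1.0):
    """Health check: can we open a TCP connection to the app container?"""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


def presence_key(app, container_id):
    """Hypothetical key layout the router could watch for healthy backends."""
    return "/deis/services/%s/%s" % (app, container_id)


def publish(app, container_id, host, port, ttl=30):
    """PUT the backend address into etcd with a TTL, so a dead container
    drops out of the router once it stops refreshing the key."""
    key = presence_key(app, container_id)
    body = urllib.parse.urlencode(
        {"value": "%s:%d" % (host, port), "ttl": ttl}
    ).encode()
    req = urllib.request.Request(
        "%s/v2/keys%s" % (ETCD, key), data=body, method="PUT"
    )
    return urllib.request.urlopen(req)
```

An announce loop would call `is_listening` and `publish` every few seconds; the TTL is what makes the scheme self-cleaning.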
Could you write all that stuff yourself? Yeah, sure. It's a pain not only to get working the first time, but also to make sure that it stays up-to-date as best practices evolve.
Lucas: Does Deis have any integration with the Docker Index, or do you build containers for your application using the buildstep stuff?
Gabriel: This is where I wanted to share a little bit of what we have planned. We started off with the git push Heroku-style workflow. That works great for a ton of people and we're happy with it and it's done its job. Going forward though, we really see the idea of Docker PaaSes as moving beyond the git push process. One of the big advantages of Docker is that you can actually take a built image and promote it step-by-step through a good CI/CD pipeline and not have it change.
Every time you git push into Deis or Heroku or whatever, you're actually going and fetching a runtime from a third party. Although your source code didn't technically change, there are lots of other things that might have changed, which means that your image isn't bit-for-bit identical. Something that we hope to show off at DockerCon is a new workflow that bypasses the git push process.
It's still deeply integrated with the 12-factor model: promoting existing Docker images, whether they are on the public index or a private registry, through Deis the same way you would a git push build.
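The image-promotion idea can be sketched with plain Docker commands (the registry name and tags here are purely illustrative):

```shell
# Build once, then promote the same image through stages unchanged.
docker pull registry.example.com/myapp:build-42
docker tag  registry.example.com/myapp:build-42 registry.example.com/myapp:staging
docker push registry.example.com/myapp:staging

# After tests pass, promote the identical bits to production:
docker tag  registry.example.com/myapp:build-42 registry.example.com/myapp:production
docker push registry.example.com/myapp:production
```

Because only tags move, the image that ran in staging is bit-for-bit the image that runs in production.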
Lucas: Would that enable more services like MySQL and Postgres, or are you still staying away from that?
Gabriel: We're still staying away from that. Could you run them? Sure, but it's not something that we recommend. At the end of the day, unless you've made the containers replicate amongst each other and be ephemeral, and designed that into your database, if the host goes down you're losing your data, and we don't want people to be doing that on the platform.
Lucas: What do you use for your routing layer?
Gabriel: Right now we use nginx. We've had some discussions with some of the folks that are working on Vulcand. We think that's a really promising project going forward. There are some other projects that could potentially allow Deis to be a little more dynamic. Using confd to template out nginx and do soft reloads on it has proven to scale surprisingly well, so we've been very happy with that to date.
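For a feel of the confd-plus-nginx approach (a hand-wavy sketch, not Deis's actual templates; the key paths and file names are made up), a template renders upstream backends from etcd keys and then soft-reloads nginx:

```
# /etc/confd/templates/upstream.conf.tmpl  (illustrative)
upstream myapp {
  {{range getvs "/deis/services/myapp/*"}}server {{.}};
  {{end}}
}

# /etc/confd/conf.d/upstream.toml  (illustrative)
[template]
src        = "upstream.conf.tmpl"
dest       = "/etc/nginx/conf.d/upstream.conf"
keys       = ["/deis/services/myapp"]
reload_cmd = "nginx -s reload"
```

When a presence key appears or expires in etcd, confd rewrites the upstream block and `nginx -s reload` picks it up without dropping connections.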
Lucas: Is there a head-node that you need to have a public address for the router mesh or is the router mesh something distributed as well?
Gabriel: Yeah. Something that we're going to be announcing in our 0.9 release is HA routers. You can run as many routers as you want; we recommend three for most deployments. Using fleet, those are set to conflict with each other, so they'll be spread out across a minimum three-node cluster.
We typically recommend you use something like wildcard DNS to balance traffic across them. Honestly, most of the real-world deployments of Deis have some front-end load balancer in front of the router mesh that's doing SSL termination and caching and is what's actually exposed. Another way to think about the router: one of the things Deis is doing is making it possible to horizontally scale different sets of applications represented by containers.
In order to access those containers you need a router that is aware of which ones are healthy. We don't actually see a lot of people connecting our router mesh directly to the public Internet. We recommend some sort of front-end load balancer; [inaudible 18:59] is a great solution.
Lucas: Is anybody hosting Deis or offering managed Deis? Is OpDemand? I know Deis is the open-source project, and OpDemand is the parent company; that's the company you work for. You guys do on-premise Deis, but are you thinking about, or working on, anything hosted and managed, so that people who want Deis but don't necessarily want to install it themselves might be able to come to you or somebody else?
Gabriel: It's a good question. First of all, OpDemand as you mentioned is the commercial entity behind Deis, the open source project. We sell professional services, support contracts, subscriptions, for Deis deployments. That's how we fund the open source effort.
We've got six guys now, and we're growing. We're hiring. If you're interested in writing Go and distributed systems, come [laughs] find me. Was the question about a hosted version of this?
Lucas: Hosted, so that somebody who wants to get started with Deis but doesn't want to install it themselves can come to some service? Or is there a provider or is OpDemand thinking about something like that?
Gabriel: Ultimately, we probably will get there. For now we're really just focused on getting the platform part of Deis right, and we've got a lot of work to do there. We are working with a ton of customers right now. We do go out and do installations of this on premise typically bare-metal-type environments. We're focused on letting people install this in their own environments, helping them with that to the extent they need help. Maybe down the road we'll look at some hosted solutions.
Lucas: What do you think is the biggest problem in real-life Docker adoption today for organizations, for people just getting started with Docker trying to figure out how to implement it into their workflows? What do you think is the biggest barrier today for Docker?
Gabriel: That's a fantastic question. We work with a ton of customers who face these same challenges, and it really varies. Some companies have a really high tolerance for experimentation and that sort of thing, but I found probably the biggest barrier is that some applications as written currently just don't fit inside of a Docker container.
Part of the idea behind Docker is that you should really be splitting up monolithic services into their natural components and moving towards a service-oriented architecture. The reality is, we go and talk to customers about using Deis, and sometimes we have to spend most of the time getting their apps into a place where they can leverage Docker.
It's different than virtualization. A lot of people compare the advent of Docker to the advent of virtualization, but one of the big differences there is that virtualization allows you to take existing machine images and run them unmodified inside of a VM. That's not really true of Docker.
There is some work that you have to do in most cases to get your application code to run inside a Docker container. I see that as probably the biggest hurdle to adoption.
Lucas: Not to go into too much detail, but what do you think are the most common things that you have to do?
Gabriel: There's a lot of moving from templated configuration files to environment variables. If you don't want to do that, which a lot of people don't, it's implementing shared configuration through service discovery mechanisms like etcd instead, because that's typically a better option than statically injected environment variables.
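The move from templated config files to environment variables is the core 12-factor pattern he's describing. A minimal sketch (the variable name and default are hypothetical, not anything a particular platform mandates):

```python
import os


def database_url():
    """Read connection info from the environment (12-factor style)
    instead of baking it into a templated config file."""
    # DATABASE_URL is a hypothetical variable the platform would inject
    # at deploy time; the fallback is a local development default.
    return os.environ.get("DATABASE_URL", "postgres://localhost:5432/dev")
```

The same image then runs unmodified in every environment; only the injected environment differs.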
The [inaudible 23:12] project, which we contribute to, is a great way to help bridge that gap for non-service-discovery-aware applications, so you can write out template files and reload processes inside a container for an app that doesn't do that in memory.
Lucas: What about persistent file systems? I know this is something that matters more for traditional legacy applications, CRM applications. I'm so glad that you added the -v flag to the command. But when you go into something like CoreOS and you have multiple systems, -v doesn't help much with that persistent file system.
Gabriel: Actually, you guys wrote a great post on this about doing true replicated file systems. This gets back to part of the reason why we're not tackling state inside of containers yet: it's rapidly evolving, and we're not clear where the chips are going to fall.
In my mind, if there was some GlusterFS-style shared file system service underpinning a series of Docker hosts, that could be an incredibly powerful way to get state working. I have some experience with GFS and systems like that, and running those in production is very tricky.
Not only that, the IO performance for those types of file systems is terrible, and that's not exactly what you want for your database. You don't want crappy random IO because you have a shared file system underpinning everything. Again, it just goes back to: if you've got a production database, especially one that's going to be multi-tenant, we recommend you use something like Chef to manage it the old way and connect it up to an iSCSI array, the way you would traditionally do it, and not take risks there. [laughs]
Lucas: For the post that you mentioned, I actually brought up the idea of containerizing BitTorrent Sync and mounting volumes through BitTorrent Sync.
It's not meant for databases either. It's really meant for things like uploaded content, when you upload an image. If you have a CRM and you have to upload a PDF or a picture or something, and you have a distributed system with 100 nodes, you don't want a 1-in-100 chance that you're actually going to see the content you've uploaded.
I hate the idea of uploaded content going into a database. Obviously going into something like object storage is ideal, but that can sometimes get tricky, especially if you're dealing with something like WordPress. There's this middle ground of applications that do want some persistence but don't require transactional IO at the level of a database.
Gabriel: That's an interesting point. To me there's a difference between stateful and ephemeral, and it's an important distinction to point out. The containers that are deployed on Deis, for example, it's not so much that they can't hold state. It's that they have to be able to be ephemeral and disappear at any time.
If you can build around that, it's possible to use containers to store state, but that's often not easy to do with things like file systems. The approach you mentioned could be a good fit for things like a WordPress upload store.
Lucas: I was looking a lot into Gluster and NFS, containerized Gluster, running the Gluster server within a container. You mentioned something that I hadn't thought of, or maybe I misheard: using a Gluster server to run the underlying Docker file system itself, the /var/lib/docker folder. Did I misunderstand that, or is that what you...?
Gabriel: No, no, no. That was what I was proposing, and I honestly haven't thought enough about it to...
Lucas: It's an interesting thought.
Gabriel: ...pretend [laughs] I know what I'm talking about. In theory, if you were able to cluster /var/lib/docker, there may be some issues with the Docker engine sharing stuff inside that folder that may not be designed to be clustered.
If there was a way to maybe tackle that on the Docker engine side, it might be possible to get fast host-level performance for something like /var/lib/docker, and that could be an interesting approach.
Lucas: What do you think about the future of Docker and CoreOS? Do you think that these technologies are going to become staples, standard technologies that developers and Ops guys adopt as commonly as we now use Git, within the next five years? Or do you think that it's going to be something different?
Gabriel: That's a great question. The first way I'd answer that is that those of us who work with Docker and these technologies are in our own bubble. We hear everyone using Docker and working on Docker, and "It's taking over the world" and so on and so forth. If you try to get out of that bubble and talk to some of the guys in the trenches, they're worried about uptime of services that are miles and miles away from being inside a Docker container.
It's very possible and probably likely that Docker is going to become the de facto way of containerizing new green field-type applications. I certainly hope so, and everything seems to be pointing in that direction. There are still so many verticals out there that are just not going to be rewritten anytime soon that aren't for whatever reason a good fit for a Docker container.
They're going to have to be managed with traditional tools for a while. I try to temper my enthusiasm about Docker with the realities of some of these enterprises we've talked about.
Lucas: Do you have any statistics about Deis adoption? Do you have any idea of how many people are using it? We can obviously just see the stars and the forks on GitHub, but do you have any usage information that you can share?
Gabriel: Yeah, there are over 100 Deis clusters out there that we know of. Unfortunately, the way open source [laughs] works, there's really no way to know much more than that. We do know that. I'd love to have a better answer for that, and so would our CEO. [laughs]
Lucas: My last question is you have a great slogan, "Your PaaS. Your rules." Can you explain what that means? What does that mean to you? What is the meaning behind that slogan?
Gabriel: You brought up before that one of the important things about Deis is that it runs on your hardware. You get to choose your network topology and your hosting infrastructure and your hardware providers and all that stuff. More than anything, the slogan is about this idea of a private platform that you run, with your architecture behind it.
There is one thing we don't mean by it. One of the things the Flynn guys do really well is this idea of building modular components that you can sort of swap out. We're more of the mind that we want to package together a solution that you can run wherever you want, but that doesn't require a ton of tinkering. It just works out of the box. That's "Your PaaS, your rules."
Lucas: It's been a lot of fun talking to you. I'm sure the audience is going to be really excited to hear about you and understand this. This is a great introduction. I appreciate this. I'd love to keep in touch. As Deis progresses, we should continue talking about the future of Docker and where things are going.
Gabriel: Thanks so much, Lucas.