Well, hello everybody. My name is Phil Lombardi. Some of you may know me from the DevOps community or from around Boston, some of you may have seen me traveling around to various conferences giving talks, and some of you may have actually used the tools I've written. I'm currently at a company called Datawire (datawire.io), and we are building open-source tools for developer productivity on Kubernetes. The topic of today's talk is Kubernetes in about 45 minutes, and the subtitle is really "everything you need to know to be dangerous." So without further ado, let's get going. You can find me on Twitter, by the way — if you want to tweet at me, go for it.

So why are we all here? You're here for one of a couple of reasons. Either you're curious about what Kubernetes is — you're not using it yet, and you want to learn about Kubernetes and the ecosystem around it, which includes containers, Docker, and things like Envoy. I'll talk a little bit about all of those, but this will primarily be a deep dive into the core things you need to know if you're using Kubernetes. Or you're already invested in Kubernetes, you have developers using it, and you're looking for developer-productivity tooling that might make their lives easier — the second half of the presentation will focus on that kind of thing. Or you're here because this is the last presentation and you just feel guilty about skipping it... you're the one? Okay, I totally understand.

Anyway, the agenda is going to be four parts, and then we can do Q&A, assuming I actually get this done in 45 minutes, which, based on the title, we should be able to do. Part one is containers, Docker, and Kubernetes — I noticed some people in Richard's talk, which was about an hour ago, had questions about what containers even are, so I figured a good primer on containers, Docker, and Kubernetes would be useful. Part two is core concepts for working with Kubernetes: the actual constructs you run into every day. Then I'll talk a little bit about development workflow on Kubernetes, so you can try to make your developers more productive. And finally I'll go into logging, debugging, and resiliency on Kubernetes, and talk about some tools you can use there, with a focus on log aggregation and that kind of stuff.

So: what is a container? This is a common question from people who have been working primarily with virtual machines. They hear about people running code inside containers and wonder, "what is it?" — and the common answer is "well, it's not like a virtual machine," which doesn't really describe anything. What I'll tell you is that a container is a form of virtualization, but unlike a virtual machine you are not abstracting and virtualizing the hardware layer, and you're also not running a full operating system stack inside it. The way to think about a container is process virtualization: you take the networking stack, the filesystem, and the process table from the host operating system and put a view over them, so that a running container sees only what's in its own group; everything else is opaque to it — it doesn't even know the rest exists. If you look at it from outside the container, you'll just see a regular process. Containers themselves are not the deployed artifacts, though; there's this thing called an image, which is how you bundle them up. The container is the runtime; the image is the shippable artifact.
Images are immutable, deployable artifacts, and they're runnable. The beauty of them is that they're treated like static binaries. Think about Go, which has become really popular in the last two or three years: its claim to fame is that you get a single shippable artifact, you can hand it to somebody, and it will run on whatever they're running — no worrying about libraries that have to ship alongside it, no worrying about a runtime; just hand them the Go binary and it's good. Containers and container images are the same concept: you have an image, you can ship it to somebody else or push it somewhere they can pull it from, and they can run it as if it were a single static binary. This has been popularized mostly by the Docker toolchain and the Docker ecosystem, but Docker is certainly not the only form of containers in the world. rkt is another popular alternative — and by "popular" I mean some people are using it, but it's not ubiquitous. Similarly, LXC has been around for a long time, but it's nowhere near as common as Docker. People have adopted Docker as the primary container engine for working with these things.

So, a quick dive into what Docker is. It's a combination of things. It's a tool: if you work with containers, you're familiar with the docker CLI, which provides the ability to run containers, build container images, and do a whole bunch of things around inspecting and managing containers. It's an ecosystem: a bunch of tools have been built up around Docker — things like Docker Machine and Docker Swarm — plus orchestration tools built around it, including Docker Swarm, Kubernetes, Amazon ECS, and so on. And it's a platform: because it's a combination of a tool and an ecosystem, it's what we'd call these days a platform for building things. For our purposes it's important because it's the default container runtime in Kubernetes — when people talk about Kubernetes, they're usually talking about a Docker-based Kubernetes. You can run rkt as an alternative container engine, but as I said, since rkt isn't very popular, you don't usually run into that in the wild.

Why containers? The reason containers have become popular is that developers find them fast and easy to produce. There's a convenient format for describing what a container looks like, and the toolchain is simple: you can specify in ten lines how to package up your entire app, feed that into Docker, which builds the image, and then ship it off to a Docker registry, where someone else on your team or in your organization can just as easily get what you produced. That's a big speed improvement and a big packaging improvement over what developers had previously. In the past, developers might package things as JAR files or zip files and ship those off somewhere, or they might create VM images if they were doing immutable infrastructure. When you go down the VM-image path, producing the image is considerably slower than producing almost anything else: you have to boot the VM, run through its init process, and then start pulling down system updates.
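As a concrete sketch of the "ten lines" claim — here's roughly what a Dockerfile for a hypothetical Python web app might look like (the file names, base image, and port are illustrative assumptions, not from the talk):

```dockerfile
# Build a small image for a hypothetical Python web app.
FROM python:3.11-slim

WORKDIR /app

# Install dependencies first so this layer is cached between builds.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code and declare how to run it.
COPY . .
EXPOSE 8000
CMD ["python", "app.py"]
```

You'd build and share it with `docker build -t blog:1.0 .` followed by a `docker push` to your registry.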
Then you make sure the image is up to date, layer in all the application-specific stuff you want, go through the VM shutdown process, and finally convert the disk the VM was running on into a format that can run on a hypervisor later. That's a slow process compared to creating a Docker image. If you just want to package up, say, a Java application so you can run it again, the VM approach can take you ten minutes even with the fastest tooling — letting something like Packer do the work for you still takes a long time compared to describing the image in a Dockerfile and running it through docker build. It's ten minutes versus 60 seconds or less.

Containers are also a great way to isolate different components in your system. Say you're working with a Java application that consists of the app code plus the libraries that go with it: you can split those up, ship the libraries in a different Docker container, link it in with the app, and update the library container independently. Containers also let you ensure a reproducible runtime for your app along whatever dev/test/prod pipeline you create. One problem people have often had is that while we in operations can make the test, stage, and prod parts of the pipeline more or less completely reproducible, there has long been the problem of "it doesn't build on my machine" or "it doesn't run on my machine." That's an easy thing to fix in the Docker world, because you can package the entire build infrastructure needed for a codebase right into a Docker image and share it with your team. Rather than having people install Gradle or Maven, or pip and all the Python tooling, or the Go toolchain, you package it all into a Docker image and ship that around — then you have reproducible builds at any point in time.

Along with that, it's easy to share. If you're working on a team, the Docker registry system is built around the ability to pull images from a central location, layer changes onto those images, and push them right back up to the registry with a named tag. You can go to the person in your office and say "just pull this image and you'll have it," or tell an external partner or a customer "pull this image and you'll have what we were working on."

So that's Docker, and that's containers. What is Kubernetes? At a very technical level, Kubernetes is all about running a massive number of containers, and it's based on lessons Google learned building out infrastructure like Borg. The idea is that it can schedule whole farms of containers. You can schedule long-running things — the typical web services or microservices that make up your application — but it can also run short-lived processes. A typical use for the short-lived side of Kubernetes is something like running database migrations before a container comes up: you bundle all the database-migration logic into a container that runs before the main container starts, and now you have reproducible database-migration code that can be put into a container and shipped around. Similarly, you can schedule cron jobs: they go into containers, and Kubernetes runs those containers on the schedule. I personally like to think of Kubernetes as a distributed operating system or process manager, where the things being run are containers.
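As a sketch, that migrations-before-startup pattern can be expressed as a Kubernetes Job — the talk doesn't show a manifest, so the image name and command here are hypothetical:

```yaml
# A one-shot Job that runs database migrations before the app is rolled out.
apiVersion: batch/v1
kind: Job
metadata:
  name: blog-db-migrate
spec:
  backoffLimit: 3          # retry a few times on failure
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: migrate
        image: registry.example.com/blog-migrations:1.4.2
        command: ["python", "manage.py", "migrate"]
```

You'd apply it with `kubectl apply -f job.yaml` and gate the app rollout on `kubectl wait --for=condition=complete job/blog-db-migrate`.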
Kubernetes is responsible for managing a whole bunch of worker compute nodes and putting your stuff out onto them. That analogy works for some people and not for others; when I worked with Richard, he ran with "the POSIX of the cloud" as his version. But to get at what Kubernetes is at the level people usually ask about, I'll go with an office-tower analogy. Think of your typical product as the Chrysler Building: your company is building the Chrysler Building, and the business logic is the offices and the workers — the individual functions going on within the building. Kubernetes is the frame your stuff is built on top of. It's not the framework you build your applications with — it's not Django or Ruby on Rails or Spring. It's the wiring underneath, behind the walls. If you think of your application framework as the plaster and the lighting, Kubernetes is more like the load-bearing walls, the steel, the wiring — the thing that lets you send messages from floor 50 to floor 7, or gets your visitors from the lobby all the way up to the top of the tower. That's what Kubernetes is doing: it's the foundation for your app, the foundation for your team to build a platform on.

Why are people adopting Kubernetes? Basically because it has grown the most popular ecosystem of the container-orchestration tools out there. There are a couple of other options in this field: Amazon ECS, Docker Swarm, HashiCorp's Nomad, and Mesos. All of these are good tools, and they all have drawbacks. With Amazon ECS you're locked into running on Amazon, and you can't run it locally. With Docker Swarm you have the reputation Docker has had at times for engineering quality. HashiCorp's Nomad is a bit more of an unknown tool, though it's very customizable and very productive if you like a minimal design. And then there's Apache Mesos, which doesn't really belong in this category, because Mesos is more of a generic scheduling framework. People tend to put Mesos and Kubernetes in the same bucket, but if you think about what Mesos actually does underneath, it's about scheduling resources rather than containers — its design is built around any kind of abstraction, so it could be virtual machines, it could be actual bare metal. They're not quite parallel. The point is, they're all a lot less popular.

Why has Kubernetes taken off? It seems to have figured out how to build the biggest ecosystem. It has a combination of big-name backers: Google obviously started the project and has put a lot of money into it; Red Hat is working on it; Oracle is working on it; Amazon is working on parts of it; Microsoft is heavily involved. It has pulled in all these big companies, which have put a lot of engineering effort into it. It's very open to contributions, so people are able to propose things and get them in fairly quickly. The release cadence for Kubernetes is three months, like clockwork — they don't spend any time dilly-dallying with releases; if a feature isn't ready in three months, it just gets pulled off the release train and they ship. So 1.7, 1.8, 1.9 are all very predictable in when they'll occur, which makes it really easy for people to predict when features are actually going to land. And it's runnable just about anywhere you want to run it: in the cloud — very typical to run it on top of Amazon, on Azure, or on Google's cloud — but you can also run it on bare metal, or locally on your desktop.
You can have a local Kubernetes cluster available to you at any point in time. And really, it's the unprecedented cloud portability that matters: you're no longer necessarily bound to running on Amazon. If you want to put your compute workloads on Google and keep your storage in Amazon, that's an option available to you. It gives you a migration path — a way to get away from Amazon's domination of the marketplace.

Let's talk briefly about the Kubernetes architecture. It sounds more complicated than it actually is. The setup is basically one or more masters — more than one for high availability — followed by one or more nodes that actually take workloads; the nodes are what run the containers. On the master you have the API server, which is what you usually talk to with the kubectl command; you have the scheduler, which is responsible for putting containers out on the nodes; and you have the controller manager, which makes sure the things you asked for actually exist — I'll get to that in a second, because the way Kubernetes handles the world is kind of like a thermostat. The nodes are very simple to think about: Docker is the actual container runtime; the kubelet is the agent that talks to Docker and handles orchestration on that node; and kube-proxy is what enables communication between the nodes — everything flows through kube-proxy to reach the other nodes.

What are the big five things you really need to know when using Kubernetes? There are the things that run your code, the things that let you connect to your code, and the things that let you configure your code. In Kubernetes, that falls out as pods, deployments, services, ConfigMaps, and Secrets — the latter two, ConfigMaps and Secrets, pretty much sharing the same interface, with some subtle differences I'll talk about.

A pod groups containers. You don't tell Kubernetes "run this container" directly; you tell Kubernetes to run a pod, and a pod is a grouping of one or more containers that are strongly related to each other. When I say strongly related, think about a typical application like a blog: you'll have a front-end tier — perhaps that's nginx serving a static site — an API server, maybe a comments service and a stories service, and possibly a Redis instance for caching (persistence we'll leave out for now). That's your application at a glance, and you need to deploy all of those components. When you think of a pod, think of its containers as all local to each other: the components get deployed together onto a single worker within the Kubernetes cluster. So when we talk about pods, what we're really saying is that the containers share the same host. All the containers within a pod have the same IP address, and they share the same port space; because they share a host, they can also do things like use Unix domain sockets and share the filesystem. The idea is that there's locality within a pod. The pod is also the unit of scaling in Kubernetes: if you want to run two or three or 50 or 100 instances of your app, you bundle them into a pod, tell Kubernetes "I want 50 of these out on my worker cluster," and it goes and schedules all those pods across the entire system. And now you have a problem — sorry, hold on, I'm moving ahead too fast. You've got all these pods, and each one has its own individual IP address and its own individual port space.
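As a sketch, a bare pod definition for that blog example might look like this — the image names and ports are hypothetical, not from the talk:

```yaml
# A pod grouping two strongly related containers that share one IP and port space.
apiVersion: v1
kind: Pod
metadata:
  name: blog
  labels:
    app: blog
spec:
  containers:
  - name: frontend
    image: nginx:1.25        # serves the static site on port 80
    ports:
    - containerPort: 80
  - name: comments
    image: registry.example.com/comments:1.0
    ports:
    - containerPort: 5000    # frontend can reach this as localhost:5000
```

Because the containers share the pod's network namespace, the frontend can proxy to the comments container at localhost:5000 with no service discovery involved.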
All the containers in a pod are local to the same worker node, so I like to think of a pod basically like a host — a virtual machine, a regular bare-metal box, your laptop. You can reference localhost: the other stuff in the pod is reachable as localhost, since everything in the pod shares the IP and the port space.

An important thing to note about pods: they're not durable. You can deploy a pod, but Kubernetes is not going to ensure the pod stays running — if the node fails, the pod goes away. Which gets to my last point: you don't really interact with bare pods much. You need to know about them at the abstraction level, because they're the most primitive thing you specify when describing the shape of your application, but very rarely do you write a pod definition, ship it to the Kubernetes API server, and deploy just a bare pod. That's an atypical use case. What people really use is something called a deployment. Deployments are a mechanism to configure, scale, and update applications. Going back to our blog: if you want to run three instances of the blog engine, you tell Kubernetes to create a deployment with three replicas of that pod. The deployment mechanism also lets you specify how you want updates to happen. It can do a full recreate — just put out new pods and shut down the old ones, without thinking about safety — or it can do a specialized rolling update, where it walks through the entire set, say fifty pods, and updates them one by one to a new image, and you can specify a scaling factor for how aggressively it rolls. The point is you only have to tell it what to do: it's a declarative system. You specify what your application looks like via the pod, you specify how you want the update done and how fast, and Kubernetes does all the mechanical work for you internally. In that way it operates like a thermostat: when you set your thermostat to seventy degrees, you don't worry about how to make your house seventy degrees — you let the furnace figure out how to get the temperature up, and the thermostat tells the furnace when to cut off. Kubernetes gets you from current state to desired state.

Great — we've used deployments and we've got 50 of these pods running, and now we actually want to talk to them. As I mentioned earlier, we have a very tricky problem: all those pods have individual IPs and individual ports, so how do you communicate with them as if they were a single application? What you need is a service. This is actually a well-known problem with a well-known solution we're all extremely familiar with: DNS. Services essentially configure an internal DNS server for you. You give the service a name, and you give it a selector. You label the pods you put out through your deployments — say app=blog and env=prod — and the selector on the service matches those labels. When you hit that DNS name — blog — it sends you right into the actual blog app, and it's not a dumb round-robin; it's a weighted round-robin across the individual pods.
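A sketch of the deployment just described — three replicas with a rolling-update policy; the names, labels, and image are hypothetical:

```yaml
# A deployment that keeps three blog pods running and rolls updates gradually.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: blog
spec:
  replicas: 3
  selector:
    matchLabels:
      app: blog
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1    # take down at most one pod at a time
      maxSurge: 1          # allow one extra pod during the roll
  template:
    metadata:
      labels:
        app: blog
        env: prod
    spec:
      containers:
      - name: blog
        image: registry.example.com/blog:1.4.2
```

To ship a new version you change the image tag and apply the file again; Kubernetes walks the pods from current state to desired state on its own.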
So this is kind of what it looks like. You've got the DNS name blog, and there's a long form as well — blog.default.svc.cluster.local — which you only use in special cases, like if you're actually using namespaces in Kubernetes. The "default" part is what would change: it could be something like a team name, or "prod" could be a namespace. There's no real policy around how you should use namespaces; people have different strategies. The idea is that the name resolves to the IP — nothing we haven't seen before, just A records.

Where it starts to get cool is the selector capabilities. You can create two or more services — n services — and start using the label matching. Say we have three pods: blog-0 and blog-1 are labeled app=blog, env=prod, while blog-2 has only app=blog and no env label. Think of it like a Venn diagram: a service routes to every pod that matches all the labels in its selector. So if the blog service selects app=blog, env=prod, someone inside the cluster hitting http://blog ends up at blog-0 or blog-1. But if there's a blog-staging service that selects only app=blog, hitting http://blog-staging does something interesting: it routes across all three pods — blog-0, blog-1, and blog-2 — which is a really useful feature. It sounds like "why would I want to do that?", but think about it: if you were doing canary deployments on Kubernetes, this gives you a mechanism to start feeding some amount of traffic into a new container you just brought up, just by using labels on the service.

There are several types of services in Kubernetes, and they're all pretty useful. The first thing to know is that most of the time you're going to use a ClusterIP. If you map a ClusterIP to something in EC2 terms, it's roughly a private load balancer — a private ELB. It has no public endpoint; the only way to talk to a ClusterIP is from on the cluster. That means service foo can talk to service bar, but you cannot come in from the public internet and talk to either foo or bar if they're both ClusterIPs. The second type is NodePort. A NodePort opens up a port on the underlying host machines and routes traffic in, and this is how you end up getting traffic from the outside world into your cluster. You'd typically use it for an API-gateway-type service or a front-end service of some sort. If you do this, you're responsible for updating some DNS system with the IP addresses of the individual nodes, since it opens that port on all the nodes across the cluster. The third type is the LoadBalancer service, which is interesting because it's only available on certain platforms: you'll run into it on Amazon and on Google, and you may run into it on Azure, though I'm not sure exactly what it does there. It will actually go off and create an ELB for you — or a Google load balancer, or whatever the equivalent is — and it manages all the work of adding the nodes behind it to the service pool. So people can talk to an ELB endpoint — you can put a DNS record on it — traffic comes in, gets routed to one of the worker nodes in the Kubernetes cluster, and is forwarded on to whichever container is actually running the code.

Finally, there's the ExternalName service, which is often overlooked but incredibly important. ExternalName isn't about talking to something inside the Kubernetes cluster at all; it's about talking to an external service outside the cluster. Think of a use case where you need to talk to a Postgres database. In some situations you want that Postgres database to be an RDS database run by Amazon; in other situations you may want to run Postgres as a container, perhaps for development speed. The ExternalName feature lets you put up a name that acts as a redirection point: your service — your actual business logic — looks up the thing it's going to talk to via the ExternalName service, the service hands back the backend, and that backend can change depending on what you want it to be. One day it's an RDS database, another day it's a Postgres container, a third day it's something else entirely. The point is to avoid baking into the application any configuration that says "go talk to this specific RDS CNAME they set up for me" and then having to change it when you switch into development mode. Instead the code says "talk to this ExternalName DNS record," which is a CNAME pointing at one of those other things. Very useful.

So, a summary of services: they create DNS A records for ClusterIP, NodePort, and LoadBalancer services; ExternalName services get CNAMEs instead. And they have powerful label-matching capabilities that let you route to different pods within your Kubernetes cluster.
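Two sketches of what those service manifests might look like — the names, ports, and RDS hostname are made up for illustration:

```yaml
# A ClusterIP service that selects the prod blog pods by label.
apiVersion: v1
kind: Service
metadata:
  name: blog
spec:
  type: ClusterIP
  selector:
    app: blog
    env: prod
  ports:
  - port: 80
    targetPort: 8000
---
# An ExternalName service: "db" resolves to a CNAME outside the cluster.
apiVersion: v1
kind: Service
metadata:
  name: db
spec:
  type: ExternalName
  externalName: blog-prod.abc123.us-east-1.rds.amazonaws.com
```

Application code just connects to "db"; repointing externalName at a different backend later requires no application change.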
It's all based on that Venn-diagram-style label matching. And the third thing, which I find really powerful but a lot of people don't know about: Kubernetes supports SRV records, which are really useful if you need to do a port lookup and don't want to hard-code the port into your application code. Developers will often hard-code something like 5000 or 5673 into their app; instead, they can do an SRV lookup in their code, and the port number they have to talk to comes from Kubernetes. That's really useful if you're changing things up and don't want developers making things brittle by picking random ports and relying on them forever.

ConfigMaps. We've covered the things that run code and the things that let you talk to running code; now, how do you configure code so it does what you want? There are two constructs for this. The first is the ConfigMap. There's an age-old problem: containers — or VMs, if you were doing immutable infrastructure — are immutable, so how do you get configuration data into them? Lots of us have come up with horrible solutions over the years. I know I've written some terrible things that did this with S3, and a couple of years back I wrote some really bad code that talked to Consul and pulled out data at boot time for a VM, without really thinking about versioning or anything like that. ConfigMaps are the answer. At deployment time you specify all the configuration you need and put it into Kubernetes; when a pod that references the ConfigMap comes up, Kubernetes automatically injects the configuration into the pod. It does that either as environment variables or, if you need really advanced configuration, by laying down entire files onto the filesystem through a volume. That's ConfigMaps.

Secrets are the cousin of the ConfigMap, and they're kind of hacky these days — you may or may not want to use them, depending on how concerned your company is about security. The problem with Secrets in Kubernetes is that they're stored in plain text on the master, and so far the answer from upstream has been "make sure you secure the master," which isn't a very satisfying answer. They are working on making that not the case, but that was the state of things last I checked; as of 1.7 — I think 1.8 — they're finally rolling out some security there. The big difference between ConfigMaps and Secrets is really how they're issued out to the nodes. A secret is only shipped from the master, where it lives, over to a node if a pod on that node actually needs it: if a pod says "I need this secret, give it to me," the master ships it over; otherwise it stays only on the master. Further, on the node it lives in tmpfs — in memory — so Kubernetes won't write secrets out to disk when it ships them over; they live there only ephemerally.

I wanted to pull this all together, because a lot of people are really familiar with AWS and EC2, and it's sometimes easier to map concepts over than to understand them feature by feature. A pod maps really nicely to an EC2 instance. A deployment maps really nicely to an auto-scaling group plus a launch configuration — with a little more going on, because a deployment carries policy about how to upgrade, and launch configurations and auto-scaling groups don't really have that; you have to orchestrate it yourself.
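A sketch of the ConfigMap flow described above — define it once, then reference it from a pod spec; the names and values are made up for illustration:

```yaml
# Configuration lives in the cluster, not baked into the image.
apiVersion: v1
kind: ConfigMap
metadata:
  name: blog-config
data:
  LOG_LEVEL: "info"
  COMMENTS_URL: "http://comments"
---
# In the pod (or deployment template), pull every key in as environment variables.
apiVersion: v1
kind: Pod
metadata:
  name: blog
spec:
  containers:
  - name: blog
    image: registry.example.com/blog:1.4.2
    envFrom:
    - configMapRef:
        name: blog-config
```

Changing a value then becomes an edit-and-redeploy of the ConfigMap, with no image rebuild.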
Deployments, by contrast, can do some built-in policy stuff around that. Services map basically to the ELB — I also should have put "DNS record" in there for what they are. And ConfigMaps and Secrets don't have any analog in the EC2 world, other than creating some custom solution with DynamoDB or S3, or running Consul or etcd yourself.

Let's talk a little bit about developer workflow on Kubernetes. Part of our role in operations — DevOps, platform engineers, infrastructure engineers, whatever — is really about aiding our developers: making it so they can ship code quickly and efficiently, do it correctly, and be productive. Kubernetes is awesome for that. It has a lot of power and a lot of flexibility, and it can satisfy just about any use case you throw at it. But great power comes with lots of potential for learning pain, and learning pain is probably the single greatest thing slowing engineers down, because they'll start digging into every little detail that they don't need to think about. So how do we make developers productive? Part of this is really about laying down some standards — I'm not saying there's a particular silver bullet here, but it's about standards, and about making sure configuration and stuff like that is specified in a way that no one has to guess about. How do we do that? Kubernetes has this thing called a manifest. A manifest is basically a giant blob of YAML — and by giant I mean under 200 lines of YAML. It can be split across one file or any number of files, and it can be written in YAML or in JSON; I strongly recommend avoiding the JSON format for a number of reasons, mostly around comments. Think about what you do with the YAML: you write out what you want, declaratively — "this is what I want, this is what the state of my world looks like." It's going to be
a pod, you know; it's going to have four containers — Redis, comments server, posts server, front end. You put that in the file, and then you tell the kubectl command — which is basically the interface people use to talk to Kubernetes — to go apply this thing. Kubernetes will go off and take that config file and do a whole bunch of API calls; there's business logic built into that kubectl command. It will check the state of the world, see what it has to do, compute a diff, and then apply the changes it came up with to bring you from whatever the current state of the world is to the desired state of the world — basically, as I mentioned a little earlier, the thermostat model.

Finally, a thing about manifests: you can write all the config out hard-coded — for Docker images there's a field for putting the Docker image name in there, or the Docker image tag — and you can do that, but you're going to be updating files all the time as changes come in, and in a fast-paced system that may be a negative. I find these days that parameterizing the templates, and then using something to run over them and change the parameters to what you actually want at runtime or deployment time, is a much more satisfactory solution. There are a bunch of mechanisms for doing this: you can do it with Python and Jinja2; Go obviously has templating mechanisms; and I have, in panic or time-crunch situations, done stuff with sed and not been particularly proud of myself.

I'm often asked, when people are starting to build stuff on Kubernetes, how to structure an application in a sane way. This is not about how to lay out your source code, or really about where to put a Dockerfile. I do generally suggest people create a k8s directory — k8s being the abbreviation for Kubernetes — or something similarly named, and start putting YAML files in there.
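Going back to the templating point for a second: the parameterize-then-render flow can be sketched with Python's stdlib `string.Template` as a minimal stand-in for the Jinja2 or Go templating mentioned above — the manifest fields and names here are hypothetical:

```python
from string import Template

# A parameterized manifest: image tag and replica count are filled in
# at deployment time instead of being hard-coded in the YAML.
MANIFEST_TEMPLATE = Template("""\
apiVersion: apps/v1
kind: Deployment
metadata:
  name: $name
spec:
  replicas: $replicas
  template:
    spec:
      containers:
      - name: $name
        image: example/$name:$tag
""")

def render_manifest(name: str, tag: str, replicas: int) -> str:
    """Render the template with concrete values (e.g. supplied by CI)."""
    return MANIFEST_TEMPLATE.substitute(name=name, tag=tag, replicas=replicas)

if __name__ == "__main__":
    # The rendered output is what you'd feed to `kubectl apply -f -`.
    print(render_manifest("frontend", "v1.2.3", 3))
```

The point is only the shape of the workflow — in practice Jinja2 buys you loops, conditionals, and includes on top of this.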
Everything needed to run the application is described in those files. So drop that in there: you create your k8s directory, and you put your manifests in it — either in actual concrete form, with all the values you want hard-coded in, and then you can just do kubectl apply on that entire directory and kubectl will basically know what to do (it'll start creating Deployments correctly, it'll write Services out to the Kubernetes cluster, it'll set everything up just the way it's supposed to) — or you take the approach I mentioned before: have something code-gen the templates, generate them with the values you want, and then, same idea, kubectl apply on the output.

I'm kind of harping on the "avoid hard-coding" thing because it's bad — I've done it before. Really try to do the templating if you can. And a further lesson I've really learned: avoid hard-coding namespaces into your templates; instead, prefer to pass the namespace as a switch to the kubectl command. The problem with hard-coding the namespace is that you're basically telling anyone who runs and deploys your application that you want it in that namespace, and it's really not your policy — it's not your decision to make. People use namespaces for a variety of different things: some people use namespaces as environment policy, segregating prod, staging, and dev; some people use namespaces per team; some people use namespaces per app. There's no single policy, so it's not really polite to just hard-code a namespace in there and call it a day. Try to avoid doing that.

I really strongly recommend sticking to YAML rather than going with JSON. I have a love-hate relationship with YAML, as many of you probably do — it's very easy to read, terrible to write, and it has lots of ambiguity — but at the end of the day, the ability to put comments in there and
describe what some of your stuff is actually there for — or say "there's a reason I picked this particular update policy for this pod, don't change it" — is really valuable, just for documenting what's going on and making sure that's available in source control and all that stuff. I also recommend keeping your Kubernetes manifest in a single file. For a long time I used to split it out across a number of files, and what I realized is that people don't know where to look for things; so I've consolidated on just keeping everything in a single YAML file. Name it whatever you want — service.yaml, hello.yaml, deployment.yaml — it makes it easy to throw comments in there that people can search for, and you can look at how the whole thing is structured in a single editor window. It just makes life easier.

So, development workflows. There's no real silver bullet for workflows; they're different for everybody. Our industry loves to do the "let's pick one awesome single thing that'll work for everybody," and it just never happens to be the case — and that's fine. Kubernetes can do a whole bunch of different workflows. What you really do need is tools that can adapt to changing requirements and the process around what you're doing — you can't pick a tool that will straitjacket you into a certain methodology. Personally, I've had great success doing trunk-based development using parameterized templates. I've kind of mentioned structuring things as a monorepo, which would be the entire company's code living in a single repository in Git (or some other version control, if that's what you're using); or the pseudo-monorepo, which I've actually fallen in love with a little more recently, which is: on a per-application level, I keep all the services that compose the application in a single repository, but there may be several applications in a
company that actually have to be built, so those can each be their own monorepo. And then you really need to offer your developers dev tooling that allows them to go fast. If you make their lives easy, they will be quick to adapt to whatever you're throwing at them. They don't want to be spending time down in the trenches thinking about the infrastructure or the process any more than they have to; they're focused on clearing their plate of whatever their boss is throwing at them, or whatever their JIRA tickets are telling them to do, for business-level functionality.

One tool I've adopted recently — and I'm going to put a little caveat here: I work with the company that built this. But Datawire is a little bit of an interesting company, in that no one's really forced to use the tools we produce; we all kind of do our own thing, and over time consensus arrives at the correct solution. It's a somewhat interesting engineering culture. I started out with the approach of having a single directory where I put my Kubernetes manifests, and then using sed to actually template things out, and I was very happy with that solution — but it was really confusing for other team members to work with, it was hard to debug, and it was kind of brittle to what I was doing. So my co-worker — my boss, really, Rafi — started building this thing called Forge. For a long time I was pretty resistant to working with Forge; it wasn't my code, it wasn't the thing I built. But at the end of the day it makes sharing the projects way easier, it has a pretty flexible process built into it, and it automatically knows how to do template parameterization with Jinja2. So it took me a little bit of time to finally go "I'm just wasting time here" and adopt this thing — but I've adopted it, I really like it, and it works really well. In fact it's recently moved beyond being just my development
tool — I started out using it as only a development tool, but recently I've been moving it into my production workflow: I've wired it into CI and I let it do its thing on the CI server as well. It's awesome, because I can be building an app composed of one, or twenty, or fifty microservices, and it knows how to build them all from source code, turn them into Docker images, push those Docker images, and then deploy onto Kubernetes — based off basically two config files: the config file for Forge itself and the config file for Kubernetes. What's even better is that it's incremental: I can put this inside a monorepo, and if I only change one service, I don't want to rebuild the world and redeploy the world — it will just incrementally recompile that one thing and roll that one thing out to the new world, which is really great for a development workflow. Basically it computes a diff of the changes and pushes those updates to Kubernetes. Works great; love it. If I had more time I'd probably do a demo, but Richard did a demo earlier, so if you were at the earlier presentation you saw me using it briefly.

All right — final topic: logging, debugging, and resiliency in a Kubernetes setup. Kubernetes has built-in log aggregation and log collection for all the containers running in the system: it will capture and store standard out and standard error. This is not going to be good enough for you in operations, so you're going to need to bring something else into the fold, but kubectl logs is more than good enough to hand to your developers and let them go crazy with debugging. For operations, you're going to want to hook into fluentd, which is there, and send logs off to Elasticsearch, where you can do real queries on what's there; you can offer up an API to your developers to do queries; you can avoid having to be a grep wizard.
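Since kubectl logs — and the fluentd-to-Elasticsearch pipeline — only see what containers write to standard out and standard error, the application-side half of this is just writing one structured line per event to stdout. A minimal sketch, assuming JSON-per-line is what your aggregation expects (the field names are hypothetical):

```python
import json
import sys
import time

def log(level: str, message: str, **fields) -> None:
    """Write one JSON log line to stdout, where the container runtime
    captures it and kubectl logs / fluentd can pick it up."""
    record = {"ts": time.time(), "level": level, "msg": message, **fields}
    sys.stdout.write(json.dumps(record) + "\n")
    sys.stdout.flush()  # don't let buffering hide log lines from the collector

# Example: a request-handling log line with structured context.
log("info", "request handled", path="/posts", status=200)
```

Structured lines like this are what make the "real queries" in Elasticsearch possible, instead of grepping free-form text.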
Figuring out what's wrong is only part of it, though — there's more to logging than just application logs. Getting what your app server is doing is cool, but there's also logging along the request line, across all of your applications, and that's really important and often missed. When something fails, how do you know where it failed? The request went through services A, B, C, D, E, F, G, and somewhere something failed and the failure cascaded up — but you have no logging about why it failed, no logging about the parameters, no way to trace it. When you get to this kind of point — and you will get to it really quickly if you're building microservices — you want to start thinking about a service mesh. There have been two talks about this over the last two days, so I won't go into too much detail, but just to recap what a service mesh is: it's a dedicated infrastructure layer for service-to-service communication, to make it reliable and safe. Kubernetes and the CNCF are aligning heavily on Lyft's Envoy, which runs as a sidecar proxy to your services. So you schedule an Envoy proxy to run next to, say, the blogging service; communication calls coming into the blogging service, and calls going out of it, all go through this proxy on the way to wherever they're going. With that you get request-level logging, so you can trace the traffic. It gives you request IDs, and you can then feed that stuff into something like OpenZipkin or Jaeger, which lets you see what's going on in your system — not only what was in the request and where the request went, but also how long it took for the request to complete, so you can do performance analysis on that information as well. Really powerful stuff. Going back to kubectl logs for developers: your developers are going to want
to get at those logs if they can. The kubectl logs command is unfortunately not very powerful. Here's the real problem with it: you have n pods running, and n containers inside those pods, and it can only get you the logs for a single pod and a single container at a time — there's no way to get an aggregate view across the cluster. The correct solution is to spin up Elasticsearch and store all that stuff for them; but if they're just working and you haven't gotten that far in setting up the infrastructure, grab one of these tools — kubetail, ktail, or stern. They're all fantastic and they all do similar things. I use stern personally and I really like it; it works amazingly well. You can basically say "go to these pods, collect all the logs, show them on my screen," rather than having to go through all the various commands to do it yourself or write a loop.

Debugging on Kubernetes. Kubernetes is reasonably complex — not in the sense of the machinery that actually runs it, but in the number of interactions it has between the pieces. There are a lot of failure points: Kubernetes talking to Docker, Docker talking to the internet, Docker talking to private registries, misconfigured things — all of those end up sending you on a troubleshooting loop. It took me a long time to come up with my own set of troubleshooting guides, only to find out that people had already written all of this down. There are two URLs I find really handy — these two are fantastic; use them, because they'll save you a lot of time, and I wish I'd seen them before I started debugging things myself. I will say one of the unfortunate things about Kubernetes is the documentation: while it's all there, it reminds me of the Amazon docs — which, while also all being there, are a nightmare to actually traverse. If you want to find out how to do something, it's not clear at all; you end up basically doing a
blog post search on Google to figure out what you actually want to do. But sometimes you want to do debugging in a cooler way than just looking at logs — say your developers are working on a shared development cluster. There are tools for this. One class of problems is basically: how do I run a local editor against the code that's running in the cluster, or hook up a debugger to the code that's running in the cluster, and keep my native toolchain — without having to SSH in and make do with vi, cat, and grep on a remote host, or trying to pull the logs out of a log aggregation system? Sometimes you actually want to run code and see traffic coming through it. There's this thing called Telepresence — another disclaimer: I work with Datawire and it's one of our tools, but it's open source — and it allows you to do exactly this. It bridges a local laptop or workstation into the Kubernetes cluster, so that traffic on the Kubernetes cluster can talk to the code running on your laptop, and vice versa: the code on your laptop can talk to the stuff in the Kubernetes cluster as if it were all native-feeling, same network, because it's a VPN. Even cooler: it injects the filesystem stuff from the Kubernetes cluster back onto your machine, so you'll have access to whatever the Kubernetes cluster has for volumes and you can work with the same datasets. It makes for a really cool development experience. So yeah — basically it proxies network requests, environment variables, and volumes, and you code locally: you can use your favorite editor (I'm an IntelliJ person; I want to be able to use IntelliJ everywhere), you can hook up a debugger to the code, and you can handle requests coming in and inspect them as if they were
just there. Another use case that's really awesome for this — and it's not in the debugging path — is, say you're working with a team of developers who are spinning up on a project, and every night you do a nightly build of a shared development environment for them: you crank out all the base services they're going to use, and then each person works on their own services, calling out to each other's. It enables a really unique collaborative development experience that I don't think any other tool out there enables: each person can be working on their particular service, and they can call across the room through the Kubernetes cluster — using all the Kubernetes infrastructure you set up — to the stuff running in the cluster, but also to the service running on a colleague's laptop. Really cool, really powerful, kind of a unique development experience; it makes things really fast and enjoyable, especially when you're doing rapid prototyping and just hacking away.

So anyways, wrapping up. Kubernetes is awesome — there's a lot of power, flexibility, and ease in Kubernetes. We as operations engineers and platform engineers — whatever your title may be — our role is to empower developers to work faster, work better, and produce the business logic they're supposed to be working on, and we can help them do that with Kubernetes. Our role is also to make sure the business continues to operate, and much of what Kubernetes and the service mesh stuff allows is the ability to build a resilient, safe platform that your developers can work on quickly. The service mesh stuff can do things like circuit breaking, so that when one service fails it doesn't cause a cascading failure — which lets you let the developers ship code much more quickly, and keeps your pager from blowing up all the time.
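To make the circuit-breaking point a little more concrete, here's roughly what it looks like at the Envoy level — a hypothetical fragment of a v2-era Envoy cluster definition, not something from the talk, with illustrative names and thresholds:

```yaml
# Hypothetical Envoy cluster config: once the upstream "blog_service"
# exceeds these thresholds, Envoy fails fast instead of piling on,
# which is what stops one slow service from cascading.
clusters:
- name: blog_service
  connect_timeout: 0.25s
  type: STRICT_DNS
  lb_policy: ROUND_ROBIN
  hosts:
  - socket_address: { address: blog, port_value: 8080 }
  circuit_breakers:
    thresholds:
    - priority: DEFAULT
      max_connections: 100
      max_pending_requests: 50
      max_retries: 3
```

The thresholds here are made up; the design point is that the limits live in the sidecar, so developers get this protection without writing any circuit-breaking code themselves.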
Anyways — we're done; DevOps Days, I guess, is coming to a close soon. Thank you for listening to me ramble up here for the last, I don't know, 45 minutes. If you're building cloud applications on top of Kubernetes, please check out our tools: forge.sh, telepresence.io, and getambassador.io. Forge is the development build/deploy tool I was talking about; Telepresence is the remote proxying tool; and Ambassador is basically an HTTP API gateway — really a full-on API gateway — built on top of Lyft's Envoy, so you can get all that nice service mesh goodness not only internally in your cluster but all the way from the internet down. Thank you.

[Audience question] I'd be interested to know more in depth how that works. It makes sense if it's just a web service, but if you're going to set up, say, a five-node Elasticsearch cluster or something, the data obviously needs to be present, and if one node goes down and a new one spins up, the data needs to be present there too — can you describe in more detail how that works? [Answer] I've not really worked with building stateful stuff on top of Kubernetes, so I'm going to be the wrong person here — but there are features for doing this. They have this thing called a StatefulSet; if you want to do stateful applications on top of Kubernetes, you should look at StatefulSets to see what they're telling you to do.

[Audience question] Does Kubernetes provide a Docker engine, or do you need to install the Docker engine independently from Kubernetes? [Answer] So the question was: does Kubernetes provide a Docker engine built into it? When you deploy Kubernetes, you pick the container runtime. Docker is one of the container runtimes you can choose; you could also choose rkt, which is another container runtime; and there's also CRI-O, for the container runtime interface, which is a kind of abstraction over all of these. The point is, Kubernetes doesn't mandate the use of
Docker — and in fact they're trying to get away from mandating Docker, to basically make it pluggable across all these container engines. Okay, so the question was: do you need to install Docker independently from Kubernetes on each node — is that correct? They're two independent programs, so yeah: basically, on each node you're going to have to install Docker next to the kubelet and kube-proxy, for every node that you deploy. Does that sound about right? Yes — but there are tools that do the deployments for you. kops is a common tool people use for deploying Kubernetes itself; there's Kubicorn, which is also pretty cool; and kubeadm, which is a very low-level tool. They'll handle all the nitty-gritty of "I need to deploy these three pieces together to make an actual node, or a master."

[Audience question] Does Kubernetes provide any type of interface to tie in with autoscaling? For example, in that big complicated YAML file you were talking about, could I specify the equivalent of AWS or GCE autoscaling rules? [Answer] Autoscaling rules — oh, like scaling for traffic? I don't know; I haven't had to do it, so I don't know. That's a great question; sorry I don't have a good answer for you there.

Okay — Phil, thank you so much; this was super informative. [Applause]
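Following up on that Elasticsearch question: the StatefulSet pointer can be sketched roughly like this — a hypothetical manifest, with the two key pieces being stable per-pod identity (via the headless serviceName) and volumeClaimTemplates, which gives each replica its own persistent volume that survives rescheduling:

```yaml
# Hypothetical StatefulSet sketch: each of the 5 replicas gets a stable
# name (es-0 … es-4) and its own PersistentVolumeClaim, so a replaced
# pod comes back with the same identity and the same data volume.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: es
spec:
  serviceName: es            # headless Service giving each pod a stable DNS name
  replicas: 5
  selector:
    matchLabels:
      app: es
  template:
    metadata:
      labels:
        app: es
    spec:
      containers:
      - name: elasticsearch
        image: example/elasticsearch:6.0
        volumeMounts:
        - name: data
          mountPath: /usr/share/elasticsearch/data
  volumeClaimTemplates:      # one PersistentVolumeClaim stamped out per replica
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 10Gi
```

All names, sizes, and the image are illustrative; a real Elasticsearch deployment needs cluster-discovery configuration on top of this.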