Shattering your application into an umbrella project


[Music] Okay, hi, I'm Luke Imhoff. As you may remember from the lightning talk yesterday, I'm known everywhere as KronicDeth. I'm the maintainer of the IntelliJ Elixir plugin, and I also run the Austin Elixir meetup here in town, but my talk today is about my work at Communication Service for the Deaf.

In September 2015, CSD started a new Phoenix 1.0.2 project. It contained a single OTP application, interpreter_server, with lib, test, and web directories. Between then and ElixirConf in Orlando about a year later, I joined CSD, and the rest of the team and I slowly built up that Phoenix project, adding support for talking to other services, partner_server and lighthouse_server, using RabbitMQ, and talking to an Ember front end using JSON API. Lighthouse is our server for authentication: it stores our users and issues JWT tokens that all the front ends can use to talk to the services. partner_server, on the other hand, is targeted at our partners, the interpreting agencies themselves; it's where they can set up jobs for interpreters to accept, and file the client requests and business requests for those jobs. interpreter_server is targeted at the actual ASL interpreters, where they can accept jobs and set up their certifications, because that's a big issue for us: we want certified ASL interpreters, not just anyone off the street who thinks they can sign.

Then at ElixirConf 2016, Chris McCord gave his keynote on Phoenix 1.3. Now remember, this is based on what he said then, not what he's just told us. He mentioned that the web directory would disappear, and that umbrella projects could optionally be used to separate the web interface from repository access or other business logic. I was excited, but the templates for Phoenix 1.3 weren't even a PR yet. I was actually looking at the GitHub repo as Chris was talking, trying to figure out how to do this, so I was stumped. I'd never used umbrella projects before. I had heard people on podcasts say that they always use umbrella projects, they always use OTP apps. I didn't know what the secret sauce was; it just seemed to be the way I should be doing it.

Thankfully, I was in luck: Wojtek Mach was giving a presentation on umbrella projects. I recommend watching it if you haven't seen it, it's a great talk, but his talk starts from an umbrella from the beginning. Chris's and Wojtek's talks left me with the question of how to get to that great umbrella project from where I started, with a single monolithic application.

From Wojtek's presentation, I learned that umbrella projects still have a top-level mix.exs, but all the code lives in an apps directory. Each directory under apps is a separate OTP application, which is just what the root directory of a normal project would be. So it's just: move the root down to apps, and make more projects under apps.

The first step was to put your project root under apps. This involved a bunch of git moves to preserve history. You'll likely make some mistakes; I know I did. Phoenix 1.3 will use the _web suffix for this OTP app, so we'll do the same here. You'll need to do the move first, because your root is going to have a new mix.exs, and otherwise you'll get file name conflicts. After moving your mix.exs from the root of the project to the apps/my_app_web directory, some of the paths in it need to point back up. In umbrella projects, the build path, config path, and deps path are all shared; that's one of the main things that separates OTP apps in an umbrella from just using path-based dependencies. If you use mix new inside the apps directory, these path rewrites happen automatically, but the mix.exs that you moved with the git move isn't set up that way, so you have to make all these changes manually. The way I figured them out was to just set up a fake umbrella app and copy the changes in; I recommend the same thing if you can't remember these changes from the presentation. The new root needs to be a Mix project too.
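As a sketch of those manual path rewrites, here is roughly what the moved child app's mix.exs ends up looking like. This is a minimal illustration with an assumed app name, my_app_web, not the actual interpreter_server file; compare against a freshly generated umbrella app for the exact list your Elixir version produces.

```elixir
# apps/my_app_web/mix.exs: a child app moved under apps/ in an umbrella.
# The ../.. paths are the ones `mix new` would generate automatically
# when run inside the apps directory; after a git move you add them by hand.
defmodule MyAppWeb.Mixfile do
  use Mix.Project

  def project do
    [
      app: :my_app_web,
      version: "0.0.1",
      # Shared with all sibling OTP apps at the umbrella root:
      build_path: "../../_build",
      config_path: "../../config/config.exs",
      deps_path: "../../deps",
      lockfile: "../../mix.lock",
      deps: deps()
    ]
  end

  defp deps do
    []
  end
end
```

The shared build, config, deps, and lockfile paths are what make the apps one umbrella rather than a pile of path dependencies.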
Unlike a full OTP app, though, its mix.exs project function will only have four options. Three of the options, build_embedded, deps, and start_permanent, are the same as in any other project, but for the root umbrella project we have a top-level option, apps_path, that is set to "apps". So, just like a lot of Elixir, there is no magic: it's explicit, and the convention is simply to call the directory apps. Eventually you may want to fill deps in at the root, for things that need to be available from the root; examples would be credo and dialyxir, because if you don't put them in the root, they won't work from the root directory, only from the individual app directories.

Although each OTP app has its own config directory with its own config files, the overall config for the project is unified. The top-level config directory says, go grab all the other configs, and then each individual app's config points back up here, so it loops back; from any directory, you always get the same set of config files. Since you're going to use Logger in all your OTP apps, this lets you eliminate divergent configurations, which was a problem we had. If you're putting your Logger config in the top-level config, I recommend turning on handle_otp_reports and handle_sasl_reports, so they all go through the same Logger and you don't lose SASL reports that would otherwise only appear in the Erlang log.

After all the moves, renames, and replacements, you're left with an umbrella project with a single OTP app in the apps directory. It's more complicated, not less, than what you started with, and you have to wonder: what was the point? Well, umbrellas only start to pay off when you start to break up that single app, but the question is where to make the first crack.

Shattering the web application: from Chris's keynote on Phoenix 1.3, I knew that the --umbrella option for phoenix.new was going to place the Ecto modules, both the repo and the schemas, in an OTP app separate from Phoenix. But an OTP app isn't enough.
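A minimal sketch of that four-option root mix.exs, with assumed names and versions, not the talk's actual file:

```elixir
# mix.exs at the umbrella root: apps_path plus the usual three options.
defmodule MyApp.Umbrella.Mixfile do
  use Mix.Project

  def project do
    [
      apps_path: "apps",
      build_embedded: Mix.env == :prod,
      start_permanent: Mix.env == :prod,
      deps: deps()
    ]
  end

  # Root-level deps: tools that need to run from the umbrella root,
  # such as credo and dialyxir.
  defp deps do
    [
      {:credo, "~> 0.8", only: [:dev, :test], runtime: false},
      {:dialyxir, "~> 0.5", only: [:dev], runtime: false}
    ]
  end
end
```

There's no magic beyond apps_path; the convention is just the apps directory name.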
If you just treat it like a separate namespace, you're missing the point: we want to be able to test and use the domain logic without the need for Phoenix controllers. This is the bounded context that Chris was talking about. Why? Well, you may not think about it often, but any Phoenix project already has two UIs: the API that we present to the web, but also the UI we use as developers, DevOps, or maintainers, which is IEx. So if you've ever had the problem that debugging, or setting up data, is just a hassle to do in IEx, that's because that UI is bad.

After a lot of refactoring cycles, like three or four, I eventually came up with a transport-neutral behaviour that can hide whether we're getting data from Ecto, RabbitMQ, or even local GenServers that back ports to the SSH port-forward tunnels we use for debugging, and I called it Calcinator.Resources. This is only my supposition of what a domain module could look like in Phoenix 1.3; it doesn't use the function-prefixing system that Chris showed. Instead, I have callbacks and OTP behaviours, so conformance is actually checked at compilation time. The behaviour supports controller-action-like callbacks, but also supports testing with the sandbox. Certain callbacks, like changeset, insert, and update, have two forms, to allow an optimization when calling from the controller-like Calcinator module. Query options encode common options such as pagination, sorting, and which associations to include in the response.

We built Calcinator to target JSON API, but I wanted this to work for anyone, not just people using JSON API, so the query options are more Ecto-centric: they have params and associations, instead of the relationships that JSON API would use. You may find this list useful; you may like some of the things, some not. If you like it completely, you can just use the package on hex. Since I want Calcinator to work with Ecto, RabbitMQ, and really any backing data store, the returns are more complicated than Ecto's, as I want to be able to handle more error conditions.
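Here is a small sketch of what such a transport-neutral behaviour could look like. This is my illustration in the spirit of the talk, not the actual Calcinator.Resources API; the callback names, arities, and types are assumptions.

```elixir
# A hypothetical transport-neutral resources behaviour. Backends
# (Ecto repos, RabbitMQ RPC clients, local GenServers) implement these
# callbacks, so callers never know which data store is behind them.
defmodule Resources do
  @type id :: term
  @type query_options :: %{
          optional(:associations) => [atom],
          optional(:page) => %{number: pos_integer, size: pos_integer},
          optional(:sorts) => [atom]
        }

  # Tagged tuples instead of exceptions, so timeouts and sandbox
  # ownership problems surface as API errors instead of noisy logs.
  @callback get(id, query_options) ::
              {:ok, struct} | {:error, :not_found} | {:error, :timeout} | {:error, :ownership}
  @callback list(query_options) ::
              {:ok, [struct]} | {:error, :timeout} | {:error, :ownership}
  @callback insert(map, query_options) ::
              {:ok, struct} | {:error, term} | {:error, :timeout}
  @callback sandboxed?() :: boolean
end
```

Because it's a behaviour, a backend that forgets a callback gets flagged at compile time rather than at runtime.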
Specifically, I don't want to have to know each data store's exceptions. Usually, when you get an exception from all the way down, there's a Postgrex error in there; I didn't want to have to know that, because I'm not supposed to know that Postgres is backing me. {:error, :ownership} is for any ownership errors, which can happen any time you're interacting over RPC, or when you're doing the actual sandbox ownership stuff for testing, and I wanted it to be an error, not an exception, so that it can be surfaced as an API error during tests. In ConnCase tests in Phoenix, it actually shows up as a bad response, instead of appearing in the log or output where some developer has to notice it; the log is extra noisy during testing. Likewise, {:error, :timeout} allows a GenServer timeout to be shown as an assert-response failure instead of appearing in the SASL log. I found {:ok, struct} or {:error, :not_found} to be easier to match with than struct or nil, so I recommend that for all get-like calls. It turns out the either monad is really useful.

So for a general, platonic Phoenix project, that's as far as I can advise on how to break up your project: a domain OTP app and the Phoenix web OTP app. Now let me cover the specifics of CSD's own project, to give you some more ideas for how to break up your project when converting to an umbrella project.

interpreter_server, as I said before, is CSD's project for allowing sign language interpreters to find and track jobs for multiple agencies. On the front end, it uses Ember to talk to Phoenix controllers that respond with JSON API. On the back end, it uses Ecto to talk to a database it owns, representing the data owned by the interpreters: their profile, their address, and their credentials. But it also has to talk over RabbitMQ to background processes running RPC servers that can access databases owned by two Ruby on Rails servers. I don't know if this is the case for you, but a lot of places seem to have that mix; we got lucky, we got one app in Elixir, we didn't get them all.
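As a tiny illustration of why the tagged tuples are easier to match than struct-or-nil, here is a sketch; FakeRepo is a made-up in-memory store standing in for any backend, not Ecto.

```elixir
# Returning {:ok, struct} | {:error, :not_found} instead of struct | nil.
defmodule FakeRepo do
  @users %{1 => %{id: 1, name: "Alice"}}

  def get(id) do
    case Map.fetch(@users, id) do
      {:ok, user} -> {:ok, user}
      :error -> {:error, :not_found}
    end
  end
end

defmodule Show do
  # Every call site has to handle both branches explicitly; a forgotten
  # nil can't silently flow into later code the way struct | nil can.
  def show(id) do
    case FakeRepo.get(id) do
      {:ok, user} -> "Found #{user.name}"
      {:error, :not_found} -> "No such user"
    end
  end
end
```

The {:error, reason} shape also composes with `with`, which matters later when the happy path is all `with` clauses.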
For debugging purposes, we can access the backend using SSH tunnels held in memory, which allow either IEx or Observer remote connections. The Ember front end uses Ember CLI, and to keep publishing consistent with the Ruby on Rails servers, the Ember front end is published from a Redis cache.

So the correct OTP apps for interpreter_server became obvious when I listed it like this. Now, mind you, this is somewhat of a generalization; it took a while to figure out that this list was the obvious way to break it up. Unfortunately, there's no way to go through all the struggles and still make this a good talk. Each UI should get its own OTP app, but so should each backing store. So let's see how I actually did that.

When shattering your web OTP app into more pieces, you may end up with forced separations, because some code is needed in two OTP apps that no longer share a common dependency, because it hid in the monolith. This was the case for us with interpreter_server_json_api, because its shared views are used both for talking over RabbitMQ for RPC and in the Phoenix web app.

Observer already has its own UI, but the steps necessary to connect it over an SSH port forward to a containerized host also need to be a good user experience, so interpreter_server_observer contains an interactive walkthrough that walks any developer through the commands to copy and paste between two terminals simultaneously. Pretty much, you put in a value, and then it spits out the next command to throw into the remote terminal, and it goes back and forth; it's almost sort of like a text-based adventure. With this, we're able to set up a remote console to our hosting, or get Observer to connect to production or QA containers, instead of one set up by the remote console. This is very much like a Heroku setup, where when you ask for a console you actually get a new container; but since we have RabbitMQ, we can shut down the RPC servers on the console container we get spun up, send a message saying, hey, open an SSH tunnel, and production will respond and connect back to my laptop, and I can run Observer against container infrastructure that's not supposed to support SSH. This may work on Heroku, but we don't use Heroku; we use a provider that's Heroku-ish, a Python implementation that kind of makes it work like Heroku. I haven't tried it on Heroku myself.

interpreter_rpc talks over RabbitMQ, so it owns the connection to RabbitMQ, because you're supposed to pool and have one connection with multiple channels over it. It also supervises all the RPC servers that expose the data owned by interpreter_server to the Ruby on Rails applications, and it has the RPC clients we use to talk to the Ruby on Rails applications.

With all this code moved out of the normal Phoenix app, the Phoenix app, interpreter_server_web, is down to just the controllers and views that the Ember front end needs to consume. The controllers are minimal, because the Calcinator package that I made makes them mostly declarative, with use statements saying which actions we want to support for JSON API, plus a couple of plugs to help with authentication and authorization, such as needed foreign keys. interpreter_server_web does have a few views that aren't JSON API resources, because there are parts of authorization that are mixed into the view layer to hide role-based fields; those don't apply in the inter-backend communication over RPC, which just sends all the data back and forth at once.

During the shattering of interpreter_server_web, authorization was one of the hardest aspects to disentangle into its own application. I was never able to make it completely its own layer, because there's no good way with an Ecto schema struct to indicate that a field was censored without making up some random value for that field. So we ended up having some of the authorization, for entire structs, be its own layer, but some of it is also stuck in ja_serializer views, where we override the attributes callback to hide some fields.
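As an illustration of that one-connection, many-channels ownership, here is my sketch using common calls from the amqp hex package; this is not CSD's actual interpreter_rpc code, and reconnection and the real RPC servers are omitted.

```elixir
# One shared AMQP connection owned by a single process; each RPC server
# opens its own channel on it instead of its own connection.
defmodule RPC.Connection do
  use GenServer

  def start_link(opts \\ []) do
    GenServer.start_link(__MODULE__, opts, name: __MODULE__)
  end

  # Callers get a fresh channel multiplexed over the shared connection.
  def open_channel do
    GenServer.call(__MODULE__, :open_channel)
  end

  def init(_opts) do
    {:ok, connection} = AMQP.Connection.open()
    {:ok, %{connection: connection}}
  end

  def handle_call(:open_channel, _from, %{connection: connection} = state) do
    {:reply, AMQP.Channel.open(connection), state}
  end
end

defmodule RPC.Supervisor do
  use Supervisor

  def start_link(opts \\ []) do
    Supervisor.start_link(__MODULE__, opts, name: __MODULE__)
  end

  def init(_opts) do
    children = [
      worker(RPC.Connection, [])
      # ...one child per RPC server, each calling RPC.Connection.open_channel/0
    ]

    # rest_for_one: if the connection dies, restart the servers that use it.
    supervise(children, strategy: :rest_for_one)
  end
end
```

Owning the connection in one supervised process is what lets the rest of the tree treat channels as cheap.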
Our ember OTP app handles fetching, setting, and invalidating the Redis cache for Ember CLI. interpreter is for data owned by interpreter_server, accessed using an Ecto repo connected to Postgres. lighthouse and partner are two Ruby on Rails applications, but we have their schemas mirrored in Elixir as Ecto schemas, so that we can deserialize the data over RabbitMQ when we get it from those services. Additionally, to be sneaky during integration tests, Scott actually had a thing to have the Ecto repos clear the database, because it's actually faster for us to clear the Rails database than it is for Rails to do it.

As you can see, there are multiple Postgres apps: interpreter, lighthouse, and partner. This is because I would not recommend having the storage format decide how to group your code into OTP apps. Instead, I'd shatter the OTP apps based on the owner of the data, or the use case. So partner is partner-agency focused, interpreter is interpreter focused, and lighthouse is auth focused. If there were a new use case for Redis, I would make a separate OTP app from ember, and if some of the code in ember turned out to be useful to both, I'd make a second OTP app that represents a library of the common shared Redis bits. The three Postgres-backed OTP apps reflect the distinct databases that they're backed by, and in theory we could use that to scale, or to deploy only part of our umbrella application as releases to new containers.

ssh_tunnel is probably the most classic OTP app: it is a supervision tree of GenServers tracking paired external OS processes. It also includes an interface for setting up SSH keys, signing all the things in your ~/.ssh directory, but in the hosting environment. If you're wondering why that's necessary when SSH is a library in the Erlang standard library: it's because the SSH library in the Erlang standard library was accepted because someone wanted to send in a patch, but it was in the days before they really tested all those contributions and made sure they were maintainable.
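For mirroring a Rails-owned schema in Elixir, an embedded Ecto schema is one way to cast RPC payloads into typed structs. This is a sketch under my own assumed field names, not CSD's actual lighthouse schema, and it assumes the ecto package (2.x or later) as a dependency.

```elixir
# A hypothetical Elixir-side mirror of a table owned by a Rails app.
# embedded_schema means no table on the Elixir side; we only use it to
# cast the maps that arrive over RabbitMQ into structs.
defmodule Lighthouse.User do
  use Ecto.Schema

  import Ecto.Changeset

  embedded_schema do
    field :email, :string
    field :name, :string
  end

  # Cast a decoded RPC payload into a validated struct, with the same
  # {:ok, struct} | {:error, changeset} shape as a Repo call.
  def from_rpc(params) when is_map(params) do
    %__MODULE__{}
    |> cast(params, [:email, :name])
    |> validate_required([:email])
    |> apply_action(:insert)
  end
end
```

The Elixir side validates the shape of the data without ever owning the Rails database.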
So it's not really all of the SSH protocol: it's definitely not the part that does tunneling, and until a recent release in OTP 19 it didn't even do the auth mechanisms that most SSH servers would accept. So it was just easier to actually use the physical ssh client binary that was inside the containers from our hosting, to dial back to our machines for Observer and remote sessions.

I need to emphasize this, because it's important when searching for a validation or params-casting library: Ecto is both a way to talk to your database, using Ecto.Repo, but also, and far more generally useful, a way of validating params, even when they don't come from the internet through Phoenix, and it can be used to track changes to those structs. At CSD we use Ecto for converting to and from params in Retort, over RPC over RabbitMQ; for representing the SSH tunnels in memory; for tracking the ssh client processes; and for the more common usage of just accessing the Postgres database.

When applying these considerations to your own projects, understand that making a new OTP app in your umbrella project can just be an intermediary step on the way to making the code a separate, distinct hex package. Not all code needs to, or should, become a hex package, though. A namespace doesn't need any more justification than that you keep repeating the same prefix or suffix on all your functions, or that you need a new place to put a defstruct, since there's only one per module. For moving to a separate OTP app in an umbrella project, the contents of the app need to be testable and useful on their own. If you move a namespace into its own OTP app but you can't test or use it effectively from IEx without bolting another OTP app on top, then it's probably not worth it being a separate OTP app; just leave it as a namespace.

Going from a separate OTP app to a separate repository has both pros and cons. The pro, from my own experience, is that you can shed build time by compiling, testing, and dialyzing that separate repo only when it actually changes.
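As a sketch of that non-database use of Ecto, here is a schemaless changeset validating params that never touch Postgres. The field names are my own invention, standing in for something like the in-memory SSH tunnel params, and this assumes the ecto dependency (2.0 or later, where schemaless changesets landed).

```elixir
# Using Ecto purely for params casting and validation: a {data, types}
# pair instead of a schema module, so no Repo and no database.
import Ecto.Changeset

types = %{host: :string, port: :integer}

params = %{"host" => "qa.example.com", "port" => "2222"}

changeset =
  {%{}, types}
  |> cast(params, Map.keys(types))
  |> validate_required([:host, :port])
  |> validate_number(:port, greater_than: 0, less_than: 65_536)

# changeset.valid? tells us whether the tunnel params are usable,
# and changeset.changes holds the casted values (port as an integer).
```

The same casting and error-reporting machinery you use for forms works for RPC payloads and in-memory structs.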
We have full, clean Dialyzer runs on all our Elixir code. It is great when it works: you change a type signature, and it can safely point out all the places you have to refactor. Since the separate repo only changes sometimes, you don't need to compile, test, and dialyze it like you would all the OTP apps in an umbrella project, unless you do some very clever things to figure out which code hasn't changed and won't affect downstream OTP apps. The con is that you sometimes have the standard commit, release, update dance when it turns out that you have a breaking change in that repository that requires updating all the downstream repositories. But I had this issue in Ruby too; it's just something that happens with open source repos that you also use in your own internal apps.

Jumping from a separate repository to a hex package first requires that the repository be publicly published. It also means increased duties: hopefully you're publishing to hex because you want the community to use your package, and this involves a dedication to supporting the package and any community involved. Don't pollute hex.pm with a project that has a single commit that says "initial commit." You want to build the community around the project, you want it to have updates, you want to make sure you're going to maintain the community around that project.

It is perfectly okay, and no one says otherwise, to stop at any of these stages. Sometimes, as with interpreter_server_json_api, which just contains views specific to our app, there's no reason to go beyond an OTP application, because anything more than that makes maintenance harder: another repo makes maintenance harder, having to publish on hex makes maintenance harder. Although I will say it is really nice that hex has a revert feature, because I've used that a lot.

From interpreter_server, CSD has open-sourced three packages: Alembic, Calcinator, and Retort. The first package, Alembic, deals with JSON API format validation. As you can see from the platform diagram, JSON API appears in a lot of places: servers, controllers, clients. We spotted this so early in the design of interpreter_server that Alembic actually jumped straight from a namespace to an independent hex package, without going through the intermediary steps of being an OTP app in an umbrella project; I actually open-sourced it before I even did the umbrella-fication. This type of component is easy to pick out: find all the places where you interact with the same encoding format, and make it a library.

Sometimes, to find the common code that can be extracted into another OTP application, you need to start ignoring the actual data: ignore the struct, and look at the transformation pipelines. Zooming in on the RPC servers and controllers, you can see there were two types of RPC servers and two types of controllers. The obvious place to unify the servers and the controllers is that they both interact with Ecto, but that leaves the SSH part and the RPC client in the dark. In this view we'll concentrate on listing the resources, with the index action and method. I could kind of make this work, but controllers depend on Plug.
The RPC servers, meanwhile, use their own struct and normal function pipelines, because I didn't want to do a plug builder just for my RPC servers. Only the controllers do authorization. Some of the data is in Ecto repos, and that's got its own interface already, but there's also the SSH tunnels talking to GenServers, and the clients are different, because they need to spawn a client connection first. Finally, the output is different: the RPC result just goes back into a struct, while the render function from Phoenix puts the conn out as rendered HTTP and HTML.

So there are a couple of techniques I combined here to extract Calcinator. First, combine nomenclatures: JSON-RPC may call it a method, but these methods support the same operations as a normal JSON API controller, so just settle on the controller nomenclature of "action." Next is the issue that RPC servers don't do authorization, only controllers do; so borrow the null object pattern from OO and have a default authorization module that does no checks. Although, I will say, ours actually does check that you don't set a user, so once you start using real authorization you can't accidentally forget to check that the user is authorized. Third, the RPC-client-backed controllers have an extra step of making that client, and potentially handling not being able to get a client; but if we think about it, Ecto's repo is really just hiding connection management from us, so you can group those two rows together under "resources." For authorization of the individual structs and the returned list, we use the null authorization for our RPC servers.

Finally, and this took a while to realize, the result and render rows couldn't be broken up, because the convenience of Phoenix.Controller.render is hiding the fact that render is actually doing two things: it's both rendering the view and then encoding it directly into the plug response. To make this a transport-neutral system, those two steps need to remain separate, so that the common format of a JSON API map can be correctly injected into either an outer JSON-RPC map and then encoded, or encoded directly by Poison into the plug conn response.

These steps, calling the action, authorizing the action, getting the resource, authorizing the resource, rendering the view, and then handing it back out to the encoding, are precisely represented in Calcinator's index. The only addition I didn't mention was support for sandbox access, and this is threaded through both our controllers and our RPC servers, because we have tests that go through the way we get concurrent browser testing: we have to thread the sandbox token through a thread-local variable in Ruby, so it's sent back with RPC requests to the Elixir side.

The index action only contains the happy path, because all the matching with `with` handles the :ok cases, and the errors fall through for the caller of this index action to actually handle, because the error handling is unique to either a controller or an RPC server. The reason they're separate is that in JSON-RPC, certain things, like a bad ID format, are things the JSON-RPC spec specifically calls out: you have to flag them in the JSON-RPC part of the payload and not the data part; while the JSON API spec is very HTTP-centric, so it has statuses we must set. But it turns out to be very simple, because it's all just tuples, so we just have a case that we have to handle, and the document here is a JSON API document from Alembic, so errors that are already formatted as JSON API just pass on through.

Converting to an umbrella project isn't all sunshine and roses. If you use Docker, it always assumes the root directory inside the container, so if you need to use one of the sub OTP apps under the apps directory, say apps/foo, you need to pass the -w flag to change to that directory inside the container. If you cd into apps/foo outside and then run your Docker container, it doesn't care; it doesn't do that sort of mapping between where you are on the host and where you are in the container.
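A condensed sketch of two of those ideas together, the null-object authorization default and a `with`-based happy-path index. This is my illustration, not Calcinator's actual code; all module and function names are made up, and the in-memory resources stand in for any backend.

```elixir
# Null object pattern for authorization: RPC servers use a module that
# allows everything, but insists no user was set, so real checks can't
# be silently skipped once a user exists.
defmodule NullAuthorization do
  def can?(nil = _user, _action, _resource), do: :ok
  def can?(_user, _action, _resource), do: raise("NullAuthorization used with a user set")
end

defmodule Index do
  # Happy path only: each failure tuple falls through to the caller,
  # which turns it into a controller response or a JSON-RPC error.
  def index(resources_module, authorization, user, query_options) do
    with :ok <- authorization.can?(user, :index, resources_module),
         {:ok, list} <- resources_module.list(query_options) do
      {:ok, list}
    end
  end
end

defmodule InMemoryResources do
  def list(_query_options), do: {:ok, [%{id: 1}]}
end

defmodule TimeoutResources do
  def list(_query_options), do: {:error, :timeout}
end
```

Because `with` returns the first non-matching value unchanged, the controller and the RPC server can each `case` on the same tuples and map them to their own error formats.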
mix test also behaves differently from the root directory, and we still actually have an issue with a race condition, where our RPC servers are sometimes connected to the repo too early for it to be dropped; when we get to questions, everyone can tell me what I'm doing wrong, that'd be great. Those cons have been far outweighed by the ability to run mix test and mix dialyze on each OTP app, and eventually being able to open source certain pieces, which gets the build time down.

When shattering your own project, identify your independent data stores. This isn't just about the backing technology, such as Postgres, in-memory, or Redis, but about data specific to a given domain or user base, which may have independent sourcing or scaling characteristics. In our case, the number of users we have from the agencies is independent of the number of interpreters we have, because the interpreters are more or less their employees. You want to hide the backing technology, because you may want to change it to optimize for search, caching, or command query responsibility segregation. We're actually contemplating doing CQRS, where the Elixir app would serve reads in addition to the Rails background worker, to get load off the server; because, as some of you may have experienced, the Rails version of your app eats a lot more memory than the Elixir version, so we've got free space on the Elixir version. There's so much space on QA, actually, that when Scott does his lightning talk, that chat bot is running in interpreter_server; it is not a separate running container.

Some conveniences from libraries, such as Phoenix.Controller.render or a use statement that defines the majority of a module, can obscure commonality in your own app. Dive into your dependencies' code; it's right there in deps, and you can read it (you may have a hard time with the C or Erlang stuff) and understand what they're generating and calling on your behalf, to see if you can stop repeating yourself and extract an OTP app more tuned to your project's needs.

You do that by jumping down a layer and calling parts of the library directly, using more function calls and less declarative code. In general, assume that declarative code, such as a use statement, is there as a convenience for new users who want the library to be the final layer of their project; if you need to build upon a library, look for the functions those macros are calling, and hopefully the library author has taken Chris's advice from his metaprogramming book and immediately calls a function after getting into the body of the macro. That might not always be the case.

Finally, separate your UIs into different OTP apps. This allows you to potentially exclude entire OTP apps from releases that don't target a given UI, and it can also point out pieces that really should be in a domain-specific OTP app: if you keep having to repeat yourself in code for UI apps, or if you end up having to keep a lab notebook for IEx because there's no way to quickly insert a new user without like ten lines of code that just have to be sitting in Evernote.

I hope this guidance can help lead you into your bright and shiny future of umbrella projects. If you need any help, I'm KronicDeth on the Elixir Slack, ElixirForum, IRC, and Twitter; don't hesitate to ask for help. Thank you. [Applause]