Tiny DevOps episode #43 Jason Adam — A conversation about trunk-based development

September 13, 2022
Jason Adam is a software developer with a non-traditional background in biology, business development, and data analytics. Now he's active as a developer, and on the lookout for proven practices he can introduce to his team. On this episode we talk about trunk-based development, and the related topics of continuous integration and deployment, infrastructure as code, and much more.
In this episode

  • How trunk-based development differs from GitFlow and other branching strategies
  • Two flavors of trunk-based development
  • How trunk-based development fits into the larger picture of continuous integration and continuous delivery
  • Techniques for working in smaller batches
  • How test-driven development enhances trunk-based development
  • Using feature flags for smaller batches
  • How to keep pull requests small
  • Cherry-picking small changes out of a larger pull request
  • How Infrastructure-as-Code works with CI and CD


Jason Adam
Web site & newsletter: functionalbits.io

Have a topic to discuss on the show? Let me know!
Want a private consultation? Borrow my brain.


Announcer: Ladies and gentlemen, The Tiny DevOps Guy.


Jonathan: In this episode, we're going to talk about trunk-based development. What it is, what it isn't, different variations, techniques to use it, but before I dive into that, I want to give a quick update on the podcast. If you're a regular listener you've noticed that I've been on hiatus for a while. I had a holiday come up. I had an office slash studio renovation come up. I've had life come up, essentially, but I'm excited to announce that I'm back in the saddle again, and I have some exciting episodes in the pipeline.

This episode you're listening to was recorded during my office renovation, so the sound quality may be lower than you're expecting, but I hope that won't detract too much from the content. A little background on the episode. Jason, my guest, reached out on LinkedIn to all of his connections and asked for people with experience with trunk-based development, and I thought that could make a very interesting episode. I invited him on, he was excited to join. That's what we talk about, is just trunk-based development, some of my experiences, different techniques to use trunk-based development.

I hope you'll find the episode interesting. If you have a question you would like to bring onto the show, I'd love to hear from you. You can find my contact details at jhall.io/contact. If you have a topic that would be of general interest to the audience, or maybe you have a topic that you'd like to talk about, but it's really very specific to your situation and wouldn't make for a good episode. I'm happy to talk to you about that too. Go to jhall.io/call, and you can borrow my brain for an hour. It's less expensive than you might think.

Without further ado, let's jump on over to the interview. Jason, welcome to the show. We've been connected on LinkedIn for I think a few months and I always appreciate your content. You always put a thoughtful twist on everything that you post. I just want to shout out to you on that. Thank you for making LinkedIn a better place. Would you tell our audience a little bit about yourself, what you do, and why we're on today to talk about trunk-based development?

Jason: Yes, sure. I come into the development world from a non-traditional background. I didn't study computer science. I didn't go to a bootcamp, anything like that. I studied biology in undergrad, worked in marketing and business development, and got really bored of that. I moved into analytics and data science before it was called that buzzword, and just navigated further and further toward the software development, software engineering side of things and automating. Then two or three years ago I took the full plunge, went full bore for that, and left the data science analytics stuff behind. I still leverage that when I have to, but that's how I ended up here. I currently work at a startup that's building a pretty cool app around connecting personal trainers with clients, doing a bunch of Go-based development.

Jonathan: Cool.

Jason: I'm really interested in some of these concepts I've seen that have been around a long time, and how I can practically take them into a team setting for people who haven't even maybe even heard of them or done them. That's the space I'm coming from in this episode.

Jonathan: That's great. Now at the moment, you're doing software development yourself, right?

Jason: Yes, right now it's a very small team, so I'm just an individual contributor. I actually enjoy that. I'm old for the tech world. I didn't get into it until my 30s, so I'm like an old guy in that regard, but I really enjoy the individual contributor aspect of, "Here's a hard problem," or, "Here's some complex flow or some complex business logic we need to map out, how do we go about doing that?" I learned about myself that I like to lead from the front and not necessarily be a people manager, and so it's a good fit for me in that regard.

Jonathan: Great. The initial question, now, I know before we started recording you said you had more than just one question, and that's great. We can talk as long as you want. Your initial question, the one that sparked this episode on LinkedIn, was about trunk-based development. Do you want to describe the way you're doing branch management right now, and then we'll talk about trunk-based?

Jason: Sure. Yes. The way I'm doing it now is the way I was taught, I don't know, a few jobs ago now, the first job I had where I was actually coding and it was going into a production application and stuff like that. It's GitHub flow, or some modified Git flow, where you have a main branch and feature branches coming off of that, and then the feature branches can get deployed into a lower environment for testing. When the PR is fully completed, we merge it into main and have some pipelines kick off to do a production deploy.

That's been the general flow, with some variations, over the last few years, and I keep running into bottlenecks, constant bottlenecks. I even see a lot of people online saying, "Hey, trunk-based development, plus several other ideas or methodologies together, can produce really fast, incremental, highly tested software." I've been trying to educate myself on those things and bring my skills up to a level where I could feel comfortable taking it to a team and saying, "Hey, let's try this out."

Jonathan: Nice. It sounds like at the moment you have two long-lived branches: main, and, I don't know, your other one is probably called develop, or stage, something like that.

Jason: We don't even have that. Now we just have main and we feature branch off. We try to keep it small, and my understanding from the stuff I've read about trunk-based development is that you can do that, but it needs to be really, really short-lived, and the preference is to just be confident that your commits to main, or trunk, are tested and validated.

Jonathan: Awesome. Actually, it sounds like you're not as far away from trunk-based as maybe I initially thought. Just for context, for anybody listening who isn't familiar with this workflow: a full Git flow usually has two, or sometimes three, maybe even four long-lived branches. You have main, sometimes called production, where your production code is built and deployed. I'll simplify for the sake of conversation right now; you could have many stages, but usually, we'll just say two for today.

Then there's a develop branch. Usually you make a branch off develop, make your changes, and then you merge back into develop. That gets deployed, potentially to a staging server or something, where maybe QA people run tests against it. Once you're confident that you're ready to merge, then you either merge your feature individually from that branch into main, [unintelligible 00:07:13] with cherry-picks or something, or maybe you merge the whole branch all at once if you harden the whole thing together. Git flow has the multiple stages, each with long-lived branches. You're already a step closer to trunk-based than that, which is probably going to make the transition easier, but it sounds like you have long-lived branches. How long do your branches typically survive? Days, weeks, or even longer?
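The full Git flow cycle Jonathan describes can be sketched with plain git commands in a throwaway repository. Branch and file names here are invented for illustration, not taken from the show:

```shell
# Hypothetical Git flow cycle: two long-lived branches (main + develop),
# with feature branches cut from develop. All names are made up.
set -e
dir=$(mktemp -d)
cd "$dir"
git init -q
git config user.email dev@example.com
git config user.name Dev
git checkout -qb main
echo 'v1' > app.txt
git add app.txt && git commit -qm 'initial release'

git checkout -qb develop            # long-lived integration branch
git checkout -qb feature/login      # feature branch off develop
echo 'login' >> app.txt
git add app.txt && git commit -qm 'add login'

git checkout -q develop             # merge the feature back into develop;
git merge -q --no-ff feature/login -m 'merge feature/login'
# develop would now deploy to staging for QA, and only later does it
# get merged (or cherry-picked) into main for a production release.
git checkout -q main
git merge -q develop -m 'release'
git log --oneline -3
```

The extra hop through develop is exactly the staging step that trunk-based development removes.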

Jason: Typically days. We try to keep the scope relatively small so that there's not as much cognitive load to review, you're not pushing as big a change that could potentially have unintended side effects, and it's easier to test. That's the thing we're doing now. I had done it where we had multiple long-lived branches, a main, a dev, even a staging or pre-prod or something like that, and that just-- boy, it was awful, because there are so many branches and they get out of sync, and then multiple pipelines are running into each other, and you're on a waiting list to QA stuff in an environment. I kept feeling like there's got to be a better way. This is so slow. That's what's pushed me this way.

Jonathan: Probably the next step here is to define what trunk-based development is. There are two flavors of trunk-based development, and I think what you're asking about is the pure flavor, which is where you just commit straight to trunk, without pull requests at all, without feature branches. Let me talk about that first, and then I'll talk about the less pure version, which is closer to what you're doing.

Jason: Sure.

Jonathan: I don't have a lot of experience with pure trunk-based development, and there's a simple reason for that, which I'll explain in a minute. But for the teams that do it, I think it works better on smaller teams, though I suppose if you're disciplined enough it could work on a larger team. Usually, you do it with pair programming. You have two people sitting at the keyboard together. You update master, so you have the latest version. You make your changes, probably over the course of 20 minutes to an hour. It's a fairly short session.

You have something incremental that works and you push it to master. At that point, a CI/CD pipeline kicks off and runs the tests against master. Now, hopefully, everybody's being completely disciplined and running their tests locally, and there's never a risk of breaking master, but sometimes you break master accidentally. It goes red, and then basically the whole team stops working until master is working again. Hopefully, that only happens once every few weeks or something, but that's the theory behind so-called pure trunk-based development. You're literally pushing to trunk. If you push and there's a conflict, then you do a rebase or something and try again until it works. That's pure trunk-based development.

What I think most teams do, at least at first, and this is what I usually do, is more like what you described, but with short-lived feature branches. I still branch off of master and I still create pull requests, but those pull requests maybe only take me 10 minutes. It's a very small amount of work. Then it maybe goes through review, or maybe it doesn't. If I was pair programming with somebody, I probably wouldn't go through a second review process; I would just let the tests run and then merge. This is the reason I prefer this approach, although many people disagree with me, and that's fine. There's room to disagree. The reason I prefer this approach is I like the confidence that my tests have passed before I merge to master. With a pull request, even if there's no human actually manually reviewing the code, at least I know that my tests are passing. Of course, that's only useful if you have tests. If you're in a project that doesn't have tests yet, then there's no assurance there, either.

Broadly speaking, those are the two flavors of trunk-based development. One is pure trunk-based development, committing straight to master, or main, or trunk, whatever it's called. The other is this short-lived feature branch version.
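The short-lived-branch flavor can be sketched the same way. Again the names are illustrative; the point is that the branch exists for minutes, not days, and dies immediately after merge:

```shell
# Hypothetical trunk-based cycle with a short-lived feature branch.
set -e
dir=$(mktemp -d)
cd "$dir"
git init -q
git config user.email dev@example.com
git config user.name Dev
git checkout -qb main
echo 'base' > app.txt
git add app.txt && git commit -qm 'base'

# One small, tested change on a branch that lives for minutes or hours.
git checkout -qb tiny-change
echo 'small tested increment' >> app.txt
git add app.txt && git commit -qm 'small increment'
# (The CI pipeline would run the test suite against this branch here,
# giving the pre-merge confidence Jonathan describes.)

git checkout -q main
git merge -q --no-ff tiny-change -m 'merge tiny-change'
git branch -qd tiny-change          # the branch dies right after merging
```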

It sounds like you're in a long-lived feature branch version that's similar in principle, but I think the missing piece for you is what I would call continuous integration. I don't mean a CI server, I mean continuous integration, the practice of integrating your code continuously, which I think Martin Fowler defines as integrating at least once per day. If you buy that definition, then the maximum time a feature branch should live is 24 hours, or maybe 8 hours if you only count work hours. We can talk about that. I don't know if that's the way you want to go here. We could talk about strategies to make that easier.

Jason: Yes, I'm definitely interested in that, because that's the breakdown I've seen online, the two camps you described. The problem I've seen, and not just where I currently work-- we do get some branches that go in within a day, really short, small scope. But on teams I've been on in the past, feature branches can linger over days or even a week, and then you end up with this issue that several other things have already gone into main in that week, so now you're pretty far behind and you've got to rebase, fix any issues, and go through the whole review process again. I for one am a huge advocate for pair programming, because I had a really good mentor when I started programming and we did a lot of pair programming.

I found that immensely helpful for thinking through and talking through how you're working on things, so that aspect of it appeals to me. Yes, I'm definitely in that camp. I like to see all the tests pass as a safety measure before I cut the merge. One thing I'm curious about is how this relates to that. On something going into your main branch, your production code base, we typically cut a release periodically. Not on a cadence, but once a day or something like that. If several merges go in, then we'll cut a new release, and the cutting of the release will trigger a new pipeline for actually deploying to the production environment. So there's even a safety check there, in that if I push something into main, it's not immediately going to the production applications. I don't know what that's considered, but I'm curious to hear more, because I see so many conflicting definitions of what CI is. I guess that would probably be a good starting place.

Jonathan: Okay. Yes, why don't I walk through the rest of the traditional CI/CD pipeline and provide my definitions, which will sometimes differ from other people's. I think the most important thing to start with is understanding that most of these terms define practices, not tools. Tools are there to enable the practices. CI is a great example. Continuous integration: it's all in the name, but we forget that all the time. It's about integrating continuously. If I'm speaking at a conference, I've done this a couple of times, or at a meet-up or whatever, and I ask, "A show of hands, who uses CI?" 80% of the room raises their hand. Then I say, "How many of you have a feature branch that lived more than a day?" Half the hands go down. "How many of you had more than one developer working on that code over the last couple of days?" Hands go down.

I do that, and it turns out most people aren't actually doing CI even though they think they are, because they're using a "CI tool".

If you're not integrating regularly, which should be a minimum of once a day, if not 10 or 15 or 20 times a day, you're not doing continuous integration. However, and this is where the confusion comes from, to enable that you need to be able to run your tests very quickly. If it takes an hour to run your test suite, or worse, six hours to wait for the manual tester to run through a bunch of manual test scenarios, there's no possible way to integrate that quickly. That's where the confusion comes from: we have a tool that has become known as a CI tool, or a CI pipeline, that enables us to do this continuous integration.

The next step from there, logically, is continuous delivery, which is the name of Dave Farley's older book. I know you mentioned before the recording that you're reading his new one. I haven't read it yet, but it's on my list. His older book was called Continuous Delivery, and it's a great book, too. It's about the idea that every time you merge to master, you build a deployable artifact. It sounds like you're close to this, maybe. Maybe it's not automatic, but you have a button, so it's at least easy. The big advantage to this is you never run into the surprise that, "We're ready to release. Oh, but the release is broken for some reason." Maybe some dependency from NPM changed, or something unexpected happened and we can't release. With continuous delivery you have confidence that after every merge, within a few minutes, maybe 20 minutes or something, you've built an artifact, whether that's a Docker image or an APK you can send to the Google Play Store or whatever. Maybe it's not actually going anywhere, but at least it's ready, so it could go somewhere.
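The build-an-artifact-on-every-merge idea usually lives in a pipeline config. This is a hypothetical GitHub-Actions-style fragment, not the setup discussed on the show; the registry URL, image name, and test command are all placeholders:

```yaml
# Hypothetical continuous-delivery pipeline (GitHub Actions syntax).
# Every merge to main produces a deployable artifact; actually
# deploying it remains a separate, deliberately triggered step.
name: build-artifact
on:
  push:
    branches: [main]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: go test ./...   # a fast test suite gates the build
      - run: docker build -t registry.example.com/app:${{ github.sha }} .
      - run: docker push registry.example.com/app:${{ github.sha }}
```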

Jason: That's cool. We actually do do that. That's exactly how our main merge setup is, so any merge to main builds a docker image and saves it in our image repository on AWS, and then it doesn't actually get deployed unless we trigger the release. We sort of do this [unintelligible 00:17:31] [crosstalk]--

Jonathan: Yes, it sounds like you're doing continuous delivery, but you're probably not doing the next one I'm going to describe, which is continuous deployment. That takes it one step further and automatically deploys as soon as the artifact is ready. There are many business reasons you might not want to do that. An easy, obvious example that everybody understands: you're building a mobile app and you have 100 developers each merging 10 times a day, and you don't want to send 1,000 updates to your mobile clients. You probably want to batch those and not release until maybe the end of the week or the end of the month, or something like that. There are many other business cases, too, where you wouldn't want to do continuous deployment, but I honestly can't think of a single scenario where continuous delivery doesn't at least make sense. At least practice the automatic act of building an artifact. I don't know of a scenario where that wouldn't make sense.

That's really a high-level overview of this software release pipeline architecture that you're probably hearing a lot of buzz about. I hope that that helps break down the little pieces. It sounds like you're actually not terribly far away from a good place. You're already doing continuous delivery and you have an easy release process. Should we go back and talk about some of the strategies for making those pull requests smaller?

Jason: The [unintelligible 00:18:57] CI part of having your tests run is really what got me interested in coupling it with test-driven development, because you build that flow at the same time. There's a good synergy between those two. With the CI tool we're using, I've been trying to work on dependency caching and some other strategies to speed things up. They're already pretty fast; any commit takes four or five minutes to build and run the tests, but I want it faster than that, because it's nice to be able to commit really frequently and see things pass. That's something we're actively trying to cut down the times on, to make that feedback loop faster. Yes, I'd be interested in-- say we stick with this feature branching off of main, how do we trim down the scope, or the units of work? Some of that wades into project management territory, but how do we scale it down so that these feature branches are going into main multiple times a day?

Jonathan: I think you have to tackle this from multiple angles. There's no single thing you can do to solve this, because it's an interwoven system. Let me illustrate that with an example. Suppose you could magically, with a snap of your fingers, turn the same work that's now one branch per day into 100 branches per day. You wouldn't be any faster, because you'd still be waiting for somebody to review your code at the end of the day. You can't do this by yourself. It requires several angles. Let me jump straight ahead here: the smallest batch of work you can do in coding is a line of code that is reviewed at the same time it's written; in other words, pair programming.

If you or others on your team are comfortable with pair programming, that is an excellent way to jump ahead of what I'm going to describe in a minute and get the advantage of this faster. But let's talk about some approaches I coach teams on, and use myself, for shorter and smaller pull requests, which I think naturally lead to the idea of pair programming, especially on teams that maybe don't like the idea at first, because you get to a cadence where you're almost pair programming through GitHub. I'm getting ahead of myself. If you're doing TDD, you already have a cycle where your code is proven to work every few minutes at most.

Every one of those times is an opportunity to create a pull request, ask for review, and merge. Now, practically, I don't usually create one pull request for every green cycle in my red-green-refactor cycle, but you could. Normally I hang it more on a function. Maybe I add a little bit of functionality that has five or six tests, and that becomes a pull request. The point is that if you're doing TDD, and I don't know if everybody on your team is, then you have frequent checkpoints that tell you it's effectively safe to merge. It's safe to create a pull request that could be merged. Keep that in mind.
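To make that concrete, here is roughly what one green cycle's worth of mergeable work might look like in Go. The function and its cases are invented for illustration; the point is the size, a tiny function plus the tests that proved it, small enough to be its own pull request:

```go
package main

import "fmt"

// Slugify is one green cycle's worth of functionality: written
// test-first, proven by a handful of cases, and small enough that
// the commit behind it could be its own pull request.
func Slugify(title string) string {
	out := []rune{}
	for _, r := range title {
		switch {
		case r >= 'a' && r <= 'z' || r >= '0' && r <= '9':
			out = append(out, r) // keep lowercase letters and digits
		case r >= 'A' && r <= 'Z':
			out = append(out, r+'a'-'A') // lowercase uppercase letters
		case r == ' ' || r == '-':
			out = append(out, '-') // normalize separators
		}
	}
	return string(out)
}

func main() {
	// The checkpoint: these passing cases say "safe to open a PR".
	for _, c := range []struct{ in, want string }{
		{"Hello World", "hello-world"},
		{"Trunk-Based Dev", "trunk-based-dev"},
	} {
		if got := Slugify(c.in); got != c.want {
			panic(fmt.Sprintf("Slugify(%q) = %q, want %q", c.in, got, c.want))
		}
	}
	fmt.Println("ok") // prints "ok"
}
```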

The main thing, though, is just to merge smaller portions of functionality. Feature flags help with this, although they're not usually necessary; I keep those in my back pocket as a last resort. Maybe you're already using them. They're heavy for this sort of thing. I like to just use if (false): if I have code that's incomplete, I just wrap it in if (false). My tests can still run around it, I can comment out the if (false) for my testing, and then when I'm done, I just delete the if (false). That way my code is integrated, so there's no risk of conflicts with other developers, because it's already in there [unintelligible 00:23:16] also--
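In Go terms, the if (false) trick might look like this sketch. The function names and pricing logic are hypothetical, invented to show the shape of the technique:

```go
package main

import "fmt"

// priceFor returns the price for a plan. The new discount logic is
// merged but disabled: wrapped in if false so it compiles and stays
// integrated (no merge conflicts piling up), yet cannot run.
func priceFor(plan string) int {
	price := 100
	if plan == "pro" {
		price = 200
	}
	if false { // flip to true (or delete the guard) when the feature ships
		price = applyLaunchDiscount(price)
	}
	return price
}

// applyLaunchDiscount is the incomplete feature; its own unit tests
// can still call it directly even while the wiring above is dark.
func applyLaunchDiscount(p int) int {
	return p * 90 / 100
}

func main() {
	fmt.Println(priceFor("basic"), priceFor("pro")) // prints "100 200"
}
```

Unlike a real feature flag, there is nothing to configure and nothing to clean up later except deleting the guard.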

Jason: I haven't done a ton with feature flagging. At my current company, the feature flags we have are typically from the mobile side. Our app is primarily an iOS app. They'll feature flag access to certain things on the backend, or some new feature they want to expose to a subset of users. I didn't really preface this at the beginning: I primarily do backend server-side development for some of the services that power the different applications we have. We haven't really done much at all with feature flags on our side of things. I understand the benefit. We did at my last job; we had some server-side feature flagging when we wanted to try new things and roll out to certain percentages, but not in this current setup.

Jonathan: You definitely can use feature flags on the back end, but like I said a minute ago, I tend to keep that as a last resort. Basically, every time you do a feature flag you're taking on technical debt, which is fine if you know that, but there's code you have to clean up later. I worked at Booking.com, which is famous for their A/B tests, and there was so much uncleaned-up A/B testing and feature-flagging code. It was a disaster.

It can spiral out of control very quickly if you're not very careful.

That's the main reason I keep feature flags as a back-pocket thing. When it's necessary, it's very powerful. For the front end, there are a lot more reasons to use it, but for backend code, I tend to prefer lighter-weight alternatives. The point I was trying to make, I guess, is that you can and should merge incomplete features, as long as they aren't broken. Maybe you're building some new authentication flow or something. You could build the middle part without the two end pieces first and merge that, then add the end, then add the beginning, and then bundle it all together, as long as the whole application continues to work with these bits of unused code in it. Where a feature flag or a simple if (false) is appropriate is when you start to wire things up but don't want to actually expose them yet; that would be the place to use it.

If you have a feature that's only going to take a day or two to build, a full-fledged feature flag is probably overkill. That's really more appropriate if you have something that takes weeks or months, or involves multiple developers, or needs some extra care or validation. That's really the point I wanted to make: don't be afraid to merge incomplete features.

Jason: Well, that makes sense, because that tends to be the part where I have a hard time delineating where I should stop on something, because otherwise nobody wants to review a thousand-line PR; it's too much. I've been trying to push more on that. I read Eric Evans' Domain-Driven Design, and I've been trying to integrate it. I realized after reading it that there were several strategies in there that I had already used or seen, but I didn't realize that's where they came from, like the repository pattern and things like that.

I've been trying to chunk up, let's say, a new feature. For example, like what you said, I'm actually working on some new payment flows for our application, and I'm trying to chunk up the pure domain logic layer as one unit of work, then a different unit of work for the repository layer, a different unit of work for handlers, and stuff like that. Then stitching things together. Trying to follow that.

Jonathan: Sounds like you're on the right track. Another piece I pulled out of Michael Feathers' book Working Effectively with Legacy Code, I think it's in the first chapter: he talks about four reasons to change code. Adding functionality, fixing a bug, refactoring (changing the structure of the code without changing its functionality), or performance improvements. There are a few other reasons, and sometimes there are gray areas between these, but I like to use this as a general guideline: never do more than one of these in a pull request.

That helps keep your pull requests small, too. What that means is, if you're working on a new feature and you discover a bug and you're tempted to fix it in the same pull request, probably don't; create a new pull request to fix that bug. There are practical reasons for that too, aside from keeping your pull request smaller. Imagine that your feature has to be reverted for some reason in the future. You don't want to revert that bug fix at the same time. It's nice to keep those separate for that reason.

Jason: It's a good point.

Jonathan: If you're doing some performance profiling and fixing a performance issue, you don't want to add new features at the same time, because that's going to confuse your results. There are all sorts of good reasons to keep those four separate, but as it relates to this conversation, it's one dimension to think about to keep your pull requests isolated.

I'm never afraid, sometimes I'm annoyed, but I'm never afraid, to cherry-pick some things out of a larger branch into a smaller one. This is actually a common example: I'm working on a new feature, I discover a bug along the way and fix it, and then I later go, "Oh, I should have done that as a separate pull request." Just cherry-pick it out, create a new pull request, and get that merged. Maybe my other one's still sitting there for another hour, but at least the bug fix is out to production. Then I can rebase and it just takes care of itself. Don't be afraid to cherry-pick little bits.
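The cherry-pick rescue Jonathan describes looks like this in git, with made-up branch names: a bug-fix commit buried in a feature branch gets lifted onto its own short-lived branch off main.

```shell
# Hypothetical example: rescue a bug fix from inside a feature branch.
set -e
dir=$(mktemp -d)
cd "$dir"
git init -q
git config user.email dev@example.com
git config user.name Dev
git checkout -qb main
echo 'base' > app.txt
git add app.txt && git commit -qm 'base'

# Feature branch: one feature commit, then an unrelated bug-fix commit.
git checkout -qb big-feature
echo 'feature work' >> app.txt
git add app.txt && git commit -qm 'feature work'
echo 'bug fix' > fix.txt
git add fix.txt && git commit -qm 'fix: unrelated bug'
fix_sha=$(git rev-parse HEAD)

# Lift just the bug fix onto its own branch off main; it can merge
# and ship while the feature PR is still sitting in review.
git checkout -q main
git checkout -qb just-the-fix
git cherry-pick -x "$fix_sha"
git checkout -q main
git merge -q just-the-fix -m 'merge fix'
```

Afterwards, rebasing big-feature onto main makes git recognize the duplicated commit and drop it.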

I also encourage my colleagues to do this when I'm code reviewing. Sometimes if I see that they've touched three different files, and maybe they're related in some ways, but one of them could very easily be its own pull request, I say, "Why don't you just cherry-pick that out into its own pull request? I can read that faster and get it out of the way, and then we can discuss the changes I want to discuss on these other ones," or something like that.

Jason: That's a good idea. My boss has recommended that a few times because I end up with that problem sometimes. I see oh, this code is terrible here and I'm like, "Let me just fix this real quick." It's not anything to do with what I was working on. The typical response has been, "Hey, just yank that into a new one."

Jonathan: That makes review so much easier, especially for something like formatting, or renaming some variables because they were all confusing or something. It's so much easier to review a variable name change as its own isolated pull request, versus that plus a bunch of functional changes at the same time. That's a great one. Those are my top suggestions off the top of my head on how to keep pull requests small. I think it's really important to keep in mind, as I said at the beginning of this part of the conversation, if you're the only one doing this, it's not going to really help; you're going to end up with a pile of pull requests waiting for review. What I always do when I'm working on [unintelligible 00:30:47] on increasing the flow-- that's really what we're talking about here. We're not talking about more code, we're talking about increasing flow.

We need to think of it from a systemic standpoint, a flow engineering standpoint. What we need to do is come to a working agreement as a team that code review is a priority. My basic rule is, unless the database is literally crashing and customers can't log in, or whatever, some sort of incident, code review is always the highest priority when you reach a breaking point. You finish your task and you're thinking, "What's my next task?" See if there are any open pull requests to review; if there are, just review those. The pushback I get on this is, "Oh, but code review takes so long and it's boring." Yes, but if you do it this way, it's not. If everybody's making quick pull requests, you're just going to spend 5 or 10 minutes reviewing a couple of 10-line changes, and you're probably just going to hit okay, or maybe make a suggestion, whatever. It's going to be quick.

You feel like you got something done; you got through three pull requests in 10 minutes. That's fine. That doesn't feel slow and boring, it feels fast and quick. Then you go back and write another pull request. You have to make that your priority. "Unblock your colleagues before you work on your own work" needs to be the rule of the team. Like I said, if you do this well and you get to the point where you're making four pull requests in an hour, and your colleagues are reviewing your four pull requests in an hour, pretty soon you're going to get to the point of, "Why don't we just sit down at the keyboard and do this together?" You're pair programming at that point.

Jason: Yes, that makes sense. That's really helpful. Our team struggles with that a little bit. We get pull requests that go stale, or people won't review them, or it drags out even when it's a simple one. We've set up GitHub's Slack integration, which adds some pretty handy little helpers that will send reminders like, "Hey, there's a PR open, it's been open for a couple of hours, it's waiting on you, go review it." We're trying to put some of those things in place to make it a little easier to remember to hop on when you have a second and give someone's PR a quick review. I think it's starting to get better. The team itself is pretty new. I've been there a few months now, and all of us started within the last three months, other than the team lead, our boss. She was there for the last year, and it was basically just her after some developers left, and then it took some time to hire some new people. The team is all just trying to get our bearings on the right way for us to do this stuff.

Jonathan: I guess one other thing: if you have a pull request that's taking too long to be reviewed and you're tired of waiting for it, cherry-pick something out of it and make it smaller. Your colleagues will be grateful that you have a smaller pull request.
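In practice, that cherry-pick move looks something like the following. This is a self-contained sketch using only standard git commands; the repo, branch, commit, and file names are all made up for the demo (it builds a throwaway repo so the commands are runnable anywhere with git installed).

```shell
#!/bin/sh
# Demo: carving one small commit out of a big stalled branch into its own
# tiny pull request. All names here are invented for the example.
set -e

rm -rf demo-repo
git init -q demo-repo
cd demo-repo
git config user.email "dev@example.com"
git config user.name "Dev"
git commit -q --allow-empty -m "initial commit"
git branch -m main

# A big feature branch that mixes a rename with functional changes.
git checkout -q -b big-feature
echo "renamed variables" > rename.txt
git add rename.txt
git commit -q -m "rename confusing variables"
RENAME_SHA=$(git rev-parse HEAD)       # remember the small commit's SHA
echo "new behavior" > feature.txt
git add feature.txt
git commit -q -m "big functional change"

# Back on main, put just the rename commit on its own branch.
git checkout -q main
git checkout -q -b rename-only
git cherry-pick "$RENAME_SHA" > /dev/null

ls    # rename.txt is here; feature.txt stayed behind on big-feature
```

From here you would push `rename-only` and open a small PR from it, while the big branch keeps the rest.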

Jason: That's a good idea. I'll start doing that. I'm still reading. I think I'm planning on taking-- I got a stipend at work, so I'm planning on taking Dave Farley's TDD course.

Jonathan: Nice.

Jason: I know this is not quite related to trunk-based development, but I think the TDD stuff goes pretty synergistically with it. I would say I do a knockoff version. I don't quite write the test first, but I write interface stubs that don't do anything, then I write tests based on those stubs, and then I fill in the implementation. I almost do it. I want to take some courses to get to the next level of the test-driven design portion of it.

Jonathan: I agree with you that test-driven development closely relates to this. You can do all of these things without TDD, but in my experience it makes them easier and faster, which is the reason I prefer TDD. I feel like it gives me a performance boost. I know there are people who think it's a waste because you're writing tests that you may not "need," but I need those tests. They're telling me that I'm doing what I think I'm doing, and that I'm not writing code down some rabbit hole. That's another topic, but I do agree with you that it's closely related. It is synergistic, as you say; it's a good practice to do with these other things. I don't want to dissuade anybody who hates TDD, though. I don't want you to think that you can't also do these other things. You can do them without TDD as well.

Jason: I never realized how upset people get in the tech world over "do this, don't do that."

Jonathan: I guess it feels personal when somebody's telling you how you should cook your own food or run your own kitchen, but--

Jason: It's a fun- [crosstalk]

Jonathan: I want to learn how to cook the best way, whether it's my way or not. That's how I am--I want to learn the most effective way.

Jason: One thing I've been interested in--and I guess this does relate, in terms of CI/CD pipelines--is infrastructure as code. I'm a big fan of it, and on teams I was on in the past, I worked really hard to bake in infrastructure-as-code checks, or the automated deployment part of it on a push to main via the infrastructure as code. Is that something you typically recommend to teams? Again, I'm not married to a tool, but the process itself is not easy to test or validate until it actually tries to run, so that's something I've found to be tricky until you get the initial infrastructure as code set up. How does that work in a CI environment? How does that piece work with, say, the thing we talked about before--not continuous deployment, but what was the other one we talked about, continuous--?

Jonathan: Delivery

Jason: Delivery. Yes. We have these buildable artifacts; how does IaC fit into that?

Jonathan: Infrastructure as code can fit in at different levels. You could deploy your entire data center with infrastructure as code, in theory. You could set up Terraform or something that goes out and sets up your EC2 instances and your S3 buckets; it does everything for you at that level. It's hard to do automated testing for that. You can to an extent: you could deploy it and then have some scripts that check, did we create the right number of instances, do we have the right buckets there, do they have the right permissions, and so on. It's probably not a very efficient use of testing, especially if you're using Terraform. We can probably trust that the tool does its job and sets things up correctly.

Where I have used testing and infrastructure as code together is when I'm deploying to Kubernetes, maybe with a Helm chart--I don't know if you do that at your work--but that's where infrastructure as code and testing all merge in this confusing center piece of the Venn diagram. In fact, there are tools that will let you test a Helm chart, potentially by starting the chart in Kubernetes and running tests you've defined against it. You could create a test--it could be a bash script, for example--that checks that the right files exist in the right directories with the right permissions and returns a true value, something like that.

You can go that route if you want to. I would consider that kind of an advanced feature. I've only done it at one company, and we only did the bare minimum version of it; we didn't set up complicated Helm tests. What I have seen more often is tests against a Docker image: checking that it doesn't have security vulnerabilities, maybe running some static or dynamic analysis against it, checking that you don't have password files or SSH keys in there, things like that. You could do that testing as well. The truth is, the sky's the limit when it comes to what kind of testing you run. If you can think of something to test, you can find a way to automate it in almost all cases. It is good to do that for infrastructure as code, especially for security, where you don't want credentials leaking or the wrong access given to the wrong people.
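A minimal version of that "no secrets baked into the image" check can be a plain shell function. This is a naive sketch, not a real scanner (dedicated image-scanning tools do this properly); it just searches an exported image filesystem--any directory works for the demo--for secret-looking file names, and all the patterns and paths here are made-up examples.

```shell
#!/bin/sh
# Naive sketch of a "no secrets in the image" check. It greps a directory
# tree (standing in for an exported image filesystem) for file names that
# should never ship in a production image.

scan_for_secrets() {
    dir="$1"
    found=0
    # Example patterns only; a real scanner has a much longer list.
    for pattern in 'id_rsa' '*.pem' '.env' 'credentials'; do
        matches=$(find "$dir" -name "$pattern" 2>/dev/null)
        if [ -n "$matches" ]; then
            echo "FAIL: secret-looking file(s): $matches"
            found=1
        fi
    done
    if [ "$found" -eq 0 ]; then
        echo "OK: no obvious secrets in $dir"
    fi
    return "$found"
}

# Demo: a fake image filesystem with a stray SSH key in it.
rm -rf imagefs
mkdir -p imagefs/root/.ssh
touch imagefs/root/.ssh/id_rsa
scan_for_secrets imagefs || echo "scan failed the build, as it should"
```

Wired into CI, a nonzero exit status from a check like this would fail the build before the image is ever pushed.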

Jason: Yes. It's not something we have just yet, but all my previous roles were very heavy into IaC stuff--mostly Terraform, some other native AWS tooling for it--and I'm trying to push us toward that for some of what we're doing now, because it's notoriously brittle to change or set stuff up in the GUI on our cloud provider. That just makes me nervous. If it's something I think I can automate, I'd rather do that so I don't have to worry about it. In the past, I've had it where the provisioning was part of the final deployment piece, and part of the commit-build-test stage was just, say, a terraform plan that validated that the infrastructure could be provisioned the way you set it up, and then the actual provisioning happened only on the merge to main, or something to that effect. Yes, that makes sense.
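That workflow--validate and plan on every push, apply only after the merge to main--can be sketched as a small CI gate script. This is a sketch under assumptions: the `CI_BRANCH` variable and the `DRY_RUN` guard are invented for the demo (real CI systems expose the branch under their own names), and with `DRY_RUN` left at its default the script only prints the Terraform commands, so it runs even without Terraform installed.

```shell
#!/bin/sh
# Sketch of a CI gate: terraform validate + plan on every push, apply only
# on main. DRY_RUN=1 (the default here) prints each command instead of
# executing it, so the sketch is runnable without terraform installed.

run() {
    echo "+ $*"
    if [ "${DRY_RUN:-1}" = "0" ]; then
        "$@"
    fi
}

ci_gate() {
    branch="$1"
    run terraform init -input=false
    run terraform validate                       # cheap static check
    run terraform plan -input=false -out=tfplan  # show what would change
    if [ "$branch" = "main" ]; then
        run terraform apply -input=false tfplan  # provision only after merge
    else
        echo "branch $branch: plan only, no apply"
    fi
}

ci_gate "${CI_BRANCH:-feature/example}"
```

The key design point is that the same saved plan (`tfplan`) that was reviewed is the one applied, so what lands on main is exactly what the plan showed.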

I don't really have any other questions. Again, there are other topics I've been reading a lot about. I read Clean Architecture a couple of years ago, and I've been reading a lot about hexagonal architecture, which is basically the same thing--different names, same topic: clean architecture, hexagonal, ports and adapters, whatever you want to call it. I feel like there's a lot of synergy between those things--domain-driven design and clean architecture--and true CI/CD and trunk-based development. I'm trying to move in all those directions.

Jonathan: Yes, they're all tools designed to help us constrain context so that we can think about problems in a useful way--so we aren't thinking about all the different problems at once, we're just thinking about one little bit. TDD helps with that. Hexagonal architecture helps with that. Microservices should help with that if done well; they usually aren't.

Jason: Yes.

Jonathan: Et cetera. Et cetera.

Jason: That's true. I did read an interesting thing from Uber; they call it domain-oriented microservice architecture. We did this at my last place: we had a cluster of microservices that were pretty specific, but they were all within what you'd call a bounded context in domain-driven design.

Jonathan: Yes.

Jason: There was a centralized entry point and exit point for the services, but then they could work together to do different things, and there were clear separations on what they were responsible for. That company was much bigger, so it made sense. In a smaller company, that's overkill; it's a lot of work to build out.

Jonathan: Jason, thanks so much for coming on. For people interested in connecting with you, you're on LinkedIn, and I assume you still have a mailing list. How can people get in touch with you?

Jason: Yes. I'm on LinkedIn; I post sporadically on there. I also have a site called functionalbits.io, which I'm planning to morph into-- Right now it exists as a blog. I have an automated newsletter I wrote that pulls from a bunch of high-quality tech blogs, research papers, and repositories, bundles them up in an email every day, and sends them out.

It's something I built for myself on my Raspberry Pi to just serve myself stuff every day to read, and I was like, "I might as well just deploy it up on the cloud somewhere and see if other people would be interested in it."

I'm planning to put out a course sometime in the near future. I've been working on something related to hexagonal architecture and domain-driven design, and how I use those things to build a microservice. It's something small in that regard that I think is a missing piece: people learn very basic stuff, and then there's a lot missing between there and these other concepts. Through experimentation, I can help some folks bridge the gap a little bit.

Jonathan: Very cool. Great. Thank you, Jason, for coming on, and hope to be in touch with you on LinkedIn soon.

Jason: Yes, it's been great. Thanks, Jonathan.

Jonathan: All right. Cheers.
