Tiny DevOps episode #17 Daniel Bartholomae — Borrow My Brain: Integrating Dev and QA

November 2, 2021
In this episode, Daniel Bartholomae, CTO of Optilyz, "borrows my brain" for a consultative discussion about how to improve the integration of QA in a growing startup with just two dev teams.

We discuss the theory of setting up QA to support developers, rather than to act as gatekeepers, and many of the practical implications.

Resources
Borrow my brain
Tiny DevOps Episode #5: George Stocker — A Dogma-Free Approach to TDD
Book: Accelerate by Nicole Forsgren PhD, Jez Humble, and Gene Kim

Guest
Daniel Bartholomae, CTO of Optilyz
Optilyz job openings

Watch this episode on YouTube


Transcript

Intro: Ladies and gentlemen, The Tiny DevOps Guy.

[music]

Jonathan Hall: Hello and welcome to another episode of The Tiny DevOps Guy Podcast where we talk about DevOps for small and tiny teams. I'm your host, Jonathan Hall. Today's episode is going to be different than usual. Rather than me interviewing my guest, today, my guest is going to be borrowing my brain. He'll be interviewing me, and I'll be providing a brief consultation.

If you would like to borrow my brain, go to jhall.io/call. There, you can sign up for a private consultation where I can answer questions about software delivery, DevOps, or heck, my favorite flavor of ice cream. Well, let's get on to today's episode. My guest Daniel has some questions about QA on his team. Welcome, Daniel, thanks for coming on. Maybe you want to do a brief introduction of yourself and the situation you're in.

Daniel: Sure. I'm Daniel, I'm currently the CTO at Optilyz, a startup which has around 40 people at the moment and growing. That's also the environment that I'm usually in. I've been building product engineering teams for startups, usually from a situation where it's founder-led to something a bit more professionalized with multiple teams. One of the big pills I've swallowed is the good old agile pill of making sure that everything shifts left.

One of the things that is always quite interesting for me to figure out is also what the limitations are and how to actually do this. One specific topic that I'd like to talk about today is QA. How can we do meaningful QA in a setup like this? How does it work in an agile world? What things do we maybe not even need at all? Which things might make sense to have in terms of a QA team, just for some specific other reasons?

Jonathan: That's great. That's a good topic. It's something I've dealt with in the past and struggled with in the past. If you don't mind, I'm going to ask you just a few more questions before I start to provide an answer. You said right now there's 40 people. Is that just engineers or is that the entire company?

Daniel: That's the total we have at the moment. Around [unintelligible] of those are, like I said, product folks, so developers, product managers, designers, QA, and we're growing to more or less double that number over the next year. It's definitely something that is growing. I'm planning more for the structure in half a year or a year than worrying too much about the structure right now.

Jonathan: How are they organized right now? Do you have individual teams, or is it a big group? How do you have that organized?

Daniel: We come from a situation where we had a QA team, a front-end team, a back-end team, a design team, and the product manager trying to coordinate these teams. Now, one of the things we already did is move over to, for now, two teams. We want to grow this into three teams, where each team has one person from QA, has a bit of front-end, a bit of back-end, basically covering all the areas so that they can get things done.

Jonathan: Great. Are you using any named process, maybe Scrum or something like that, or is it more of an ad hoc situation?

Daniel: This is a bit up to the team. One of the teams is using a very light version of Scrum. The other team is more in a Kanban mode right now. We don't do this too strictly, to be honest, because the main method of how we organize all this is with bi-weekly demos. Every two weeks there's a demo where each team shows to the rest of the organization, including sales, including customer success, including operations, basically everyone who's interested, what happened over the last two weeks.

This obviously leads to a bit of a move towards Scrum, because you have this natural two-week cadence. Even for the team that works in Kanban, it's just some cadence in there. It doesn't necessarily change the way that the work is organized while working on it.

Jonathan: You said you have one QA per team right now. Does that mean you have a total of two QAs, or is it exactly one per team?

Daniel: Yes, that's exactly the numbers right now.

Jonathan: Perfect, okay. Basically, it's a broad question: what's the best way to organize QA? Is that fair, or do you have maybe a specific problem you're facing or a specific business outcome you're trying to achieve?

Daniel: I would say that there are two parts to this. One is more the theoretical question of what could be a really great setup for this. Then there's also the question of getting there. As mentioned, we come from a world where we had a front-end team, a back-end team, a QA team, and a lot of handovers. Just making sure that everyone is on board with changes, and not doing all the changes at the same time, is quite important to me, while at the same time making sure that we don't lose too much of the benefits that the old structure had as well.

Jonathan: You may have said this and I missed the detail. The two teams you have now, are they organized front-end and back-end, or are they organized on a product level?

Daniel: They are organized more at a product level, though at the moment this translates actually quite closely, because one of the product topics we're working on is mainly in the front-end. The other is solely in the back-end. We basically moved one back-end developer to the front-end team so that they can get their work done. For the back-end team, it's still only back-end development, at least for now, just because the product area they're working on is only back-end.

Jonathan: Okay, very good. Let me relate one of my experiences, which is one of the reasons you wanted to talk today, something I did at an organization a few years ago. Then I'll offer some suggestions for how that might apply and how it might not. I was at a company about two and a half, three years ago. When I joined, I think we had Scrum teams. We had approximately one QA per team.

I think one of the teams may have had two, but it was more or less one per team. These were cross-functional teams. There was a really adversarial relationship between our QA and our developers. Of course, sometimes it was personality-based. Some personalities just didn't get along well. The structure we had was that developers would write code and they would pass it to QA. They were on the same team, but there was still this sort of handoff that was happening. How does that relate to your situation? Do you have the same or--? Maybe I should have asked this earlier, but what's your relationship right now between developers and QA?

Daniel: I would not say it's that adversarial. Everyone is getting along quite well. I think there's a bit of a situation where devs might think that some QA tasks are basically below them, which is something that I've seen happen a lot of times, where developers then do not really want to do it because it feels a bit like a waste of time. On the other hand, the QAs are also really interested in what's happening on the product and on the development side.

Overall I would say the situation is a lot more benign than what you were describing there, but there's still this handover definitely happening, where a ticket moves to a certain column on the board and then the dev is no longer responsible and QA takes over. This sometimes leads to waiting times and sometimes to some rework.

Jonathan: I had another question. What was I going to ask? How much automated testing do you have, if any?

Daniel: This is something which is in the works. One of the QAs is actually spending most of her time at the moment on automating tests. They're writing automated end-to-end tests. We do have different levels of test coverage for our different areas. We are not yet at a spot where we would feel comfortable trusting the tests, so we cannot just stop doing QA. We still discover things in the QA phase. To be honest, there is also a big value around exploratory testing for us at the moment-

Jonathan: Of course.

Daniel: -where some things come up during QA tests which a dev just might never have thought about when writing the code.

Jonathan: What is your deployment process like? Are you using anything like continuous deployment or is it a manual release and deploy process?

Daniel: It's mixed. We still have some manual steps in there. We are still using Jenkins for some of those. The manual steps more or less boil down to pressing one or two buttons. It's something that we most likely will switch over in a soon-ish time frame without much trouble. At the moment it's actually still a manual process. It's not automatically continuously deployed.

Jonathan: How frequently do you deploy?

Daniel: At the moment I would say once or twice per week, roughly. It depends a bit on what comes up, because we do have two different systems. The two teams work on different parts of the codebase. The code that the mainly front-end team is working on actually already has continuous deployment in place. The back-end team doesn't have that yet. What I'm talking about here is mainly the back-end part, which is also the one that actually requires the most testing so far.

Jonathan: Walk me through the lifecycle of a typical user story, how it's touched by developers and QA before it's deployed. Somebody, I don't know if it's a product owner or whom, creates a user story, a developer gets assigned to it, they work on it. When they think it's ready, do they move it to the testers' column, or what happens at that stage?

Daniel: At that stage it's moved to a test column, ready for QA. At that point it's also manually pushed to a testing environment. Then on this testing environment, the QA team takes over, does some manual checking on that, and at some point says, "Yes, this is ready." The ticket gets moved to the next column. It's then ready for release, which again requires a manual testing step before actually pressing the button, because sometimes multiple things are bundled. To be honest, this step also usually doesn't catch anything anymore. Most of the actual testing happens on the feature branch.

Jonathan: If a QA finds something wrong that needs additional attention, how does that story change?

Daniel: At that point, they reach out, usually via Slack, to the developer. The developer finds some time soon-ish. If they're in the middle of something, it might take a couple of hours, but it's usually within a day or so, I would say, that they take care of it, take the ticket back, fix it. Then the process starts again from the same spot.

Jonathan: You did mention that one of your QAs is working on writing automated tests. Do the developers write any automated tests, maybe unit-level tests or something like that?

Daniel: Some. I would say that at the moment, this is not that much of a focus yet. We are also trying to get a bit more skills around TDD at the moment, both by hiring as well as just expanding the knowledge in the team, but it's not there yet, I would say. We always say that you do want to write tests, and everyone is aware of that. It's just not happening as much as it should to actually get a decent amount of coverage.

Jonathan: What is your branching strategy? Are you using Git flow or trunk-based development, or how are you doing that in Git?

Daniel: It's more or less Git flow. It's slightly modified. We do have release branches, for example, but I would say, for all intents and purposes, it's Git flow.

Jonathan: Let me get back to relating what I did, and then talk about how some of that applies to your situation, or how I think it might apply to your situation. I had joined a company with approximately five Scrum teams, one QA per team. On some of the teams in particular, it was a fairly adversarial relationship. At least on paper, it looked very much like what you described. The developer would write some code. They would pass it to the QA. The QA would either pass or fail that code. Of course, it was the fails that turned into the adversarial relationship, where the QA said, "This is not right," and the dev said, "No, I think it is," or something like that could happen sometimes.

Aside from the emotional turmoil that was happening, it wasn't very efficient on paper either. A typical scenario would be that for the beginning of the sprint, the developers were busy and the QAs were bored. [chuckles] Then the last half of the sprint was the reverse. The QAs were frantically trying to test everything the developers did the first week, and the developers were waiting for something to do. Of course, they would sometimes have some rework to do depending on the QA results. This was one of the key things we were trying to overcome when I was there.

What we ended up doing-- this was made more complicated and/or easier, depending on your viewpoint, by the fact that all except one of our QAs were actually working for an outsource company. They were in a different country. They were neither direct employees nor were they in the same room with us. That almost certainly contributed to this adversarial relationship. When you pass your work off to somebody in another country, they're not sitting there with you. You don't have coffee with them every day. It's a different scenario than I'm assuming you're dealing with. What we ended up doing was we decided to-- well, I decided to basically end the contract with our offshore QA team.

We did hire a temporary, six-month-contract freelance test automation engineer to come in and help us build the foundation of a test automation system and train our one in-house QA engineer on how to use it. He'd only been doing manual QA up to that point. We wanted to teach him how to do automated QA. She came in and helped set up, I don't remember the name of the tools right now, but some SaaS-based automation tools for the front-end. This was all web-based front-end testing. We did have some unit tests written in Java for the back-end code. We kept doing that. That didn't really change. It was really the front-end testing that was the area of concern here.

Over the course of her six-month contract, she set this up. She helped cross-train our in-house QA engineer. Then about four months in, maybe four and a half months in, was the end of the contract with our offshore QAs. Of course, they weren't happy to leave. Nobody's ever happy to lose-- they didn't lose their jobs, but they lost the contract. They still had jobs at the agency they were at. The surprising thing that happened, at least to me: we were freaking out. We had expressed all the same concerns you have. We were certain that our automated testing wasn't ready for prime time yet, that QA wasn't going to be catching things anymore.

We were probably going to have more bugs going into production. For the first couple of days, I think it was a daily meeting, then we went to weekly, with the two QAs (our freelance contractor and our in-house QA), me, and some of the other stakeholders, to handle all of the problems that we expected to come. We didn't have any problems. The truth is we canceled the meeting after about three weeks, because we would just get together, stare at each other for five minutes, and we had nothing to talk about, because nothing actually broke, at least that we noticed. As far as I'm aware, there weren't any hidden, dangerous bugs that got through either.

Months later, things were still going smoothly. The one big lesson I can take out of that is that our fear of removing manual QA as a gatekeeping step was much bigger than the reality deserved. I don't know if your situation is different, but I suspect there are some similarities there. After having told that story, one of the things that I would recommend for your team: stop using Git flow. [chuckles] If you can get to trunk-based development, that will make most of everything else simpler. It makes the flow more streamlined. I really like what you said, though, that you're pushing your code to a test environment so that it's tested before it is merged. That's perfect. That's important.

It's important that your QA, whether it's manual or automated, whichever, can test your code before it's merged. In other words, you don't want to go from, say, development to staging, and then everything's in this staging pool together. Then you test it, and if it was broken, maybe you revert or something like that. I've seen some teams do that. In fact, the team that I just described was doing that at one point when I first started. I think what you're doing is right; it's essential that you're testing before you merge. Unless I misunderstood how you described it, that sounds really good to me. To go back to the original question, you wanted to talk about the theory of QA.

How it should work in a situation like this. This is my opinion, my professional opinion. I know it's not the only opinion. There are some people, especially people whose careers are dedicated to QA only, who will disagree with me on this. My opinion is that QA should take a support role. They should-- I'll explain that more in a moment, but they should not be gatekeepers for software delivery. In the same way, I call myself The Tiny DevOps Guy. I see things through a DevOps lens. I look at QA much the same way I look at operations: operations should not be a gatekeeper for the developers, neither should QA, neither should security, neither should any of those other things.

In the same way that I see operations providing a support role to developers, I like to see QA doing the same. The way that looks in more practical terms, or more concrete terms, is I think it's great to have dedicated QA people. They could be their own team. They could be embedded on existing teams. That's less important to me, but they should be able to support the developers. In other words, each user story the developer works on is their responsibility. They are responsible for writing code and making sure that code works correctly. When they need help, because they aren't familiar with some testing tools or techniques or technologies, that's when a QA person, a QA specialist, can help them.

The QA specialists or the QA team can also be responsible for maintaining the testing platforms. This is much the same way I look at-- suppose I'm a developer and I'm building a deployment script for Kubernetes, but I'm not a Kubernetes guru. Who am I going to ask for help? I'm going to ask the operations people who installed and maintain Kubernetes. They can provide guidance for me, but they're not going to do it for me.

They may help, they may write some of the lines of the configuration, the Kubernetes manifest, but they're not going to do it for me; they're going to help me do it. I like that same relationship between dev and QA that I see between dev and operations in a healthy DevOps organization. That's the theory. Does that help? Is that clear? Two questions there.

Daniel: Yes. I think that's clear so far. I do have a couple of questions to follow up on this. One is maybe around just the focus. I mentioned exploratory testing. Is this something that you then did at all in this context? Is this something you realized is not needed? Is this something the developers did? How did exploratory testing go in this context?

Jonathan: We never formalized that on that team. I do think exploratory testing is very important. The way I see this working is that exploratory testing is removed completely from the software delivery life cycle. Your user stories go through the process of being written, tested, and deployed, and exploratory testing happens outside of that. Exploratory testing could happen in a production-like environment after something's been merged. It could even be in production. It could happen on an ad hoc basis during development. I think the answer to that depends on the team more than the workflow. What works for you? What makes sense?

What are you trying to achieve with your exploratory testing? Are you trying to prove that a new workflow in your software is valuable to end-users, for example, or that there aren't any hidden use cases or hidden booby traps? What are you trying to test for? The goal of exploratory testing, in my view, with maybe some exceptions, should generally not be to act as a gatekeeper: "You can't release this until we've done the testing." In that sense, it's separate from the software delivery life cycle.

There could be exceptions. Maybe you're building a brand new feature that you can't beta test, or something that has to be right the first time. Maybe then you're going to do some exploratory testing in some sort of staging or test environment before it gets released. That should be rare. That's not usually the kind of software that, I think, the teams you and I work on are building. Startups need to move fast. The fastest way to test an idea is to have customers looking at it.

Whenever possible, you want to get your changes in front of customers as quickly as possible. Simultaneously, your QA can be doing this exploratory testing to hopefully find things before users do or find areas for improvement. Maybe they're not finding bugs, maybe they're just finding areas for improvement. I hope that answers the question. If it doesn't, let me know and I can clarify some more.
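For readers who want to see the mechanism Jonathan is describing here, and that Daniel picks up next: decoupling deployment from release is usually done with a feature toggle, so code reaches production but is only visible to internal testers until someone flips the flag. A minimal TypeScript sketch; all names (User, isFeatureEnabled, releaseFeature, and the feature itself) are invented for illustration, not taken from Optilyz's codebase:

```typescript
interface User {
  id: string;
  isInternalTester: boolean; // e.g. the "production user" Daniel mentions
}

type FeatureName = "newCampaignEditor"; // hypothetical feature

// In a real system this would live in a flag service or config store.
const releasedFeatures = new Set<FeatureName>();

function isFeatureEnabled(feature: FeatureName, user: User): boolean {
  // Internal testers see the feature as soon as it is deployed...
  if (user.isInternalTester) return true;
  // ...customers only see it once it has been explicitly released.
  return releasedFeatures.has(feature);
}

// Releasing becomes a config change, decoupled from deployment.
function releaseFeature(feature: FeatureName): void {
  releasedFeatures.add(feature);
}
```

With a setup like this, deploys can happen continuously while release, exploratory testing in production, and rollback are all just flag changes.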

Daniel: Yes, that's really helpful. I also am a big fan of just using feature toggles and basically having a production user, which is not one of the customers, who can just test. Because then the deployment is completely separated out from the release and from the testing process. My next question is actually more around the cultural aspect of this. This is something where I think our situation might be slightly different. If I understood correctly, in your case the developers actually did not want to work with QA; there was, as you mentioned, this adversarial relationship.

In my situation, it's a bit different: they really like working with QA. They like not having that part of the responsibility, so that they can focus on other things. The aggravating factor here is that we are already doing a lot of changes. For example, I mentioned that we come from the situation where we had one big team split up into a front-end team, back-end team, QA team, and so on.

We are now moving, or have moved, into a structure where we have cross-functional teams. For me, one of the biggest challenges I'm facing around this is also staggering the changes. Therefore, here, I'd be interested in how you would approach it. If I just now say, "Hey, let's also take away QA for the new teams," then this would be one more thing that everyone would need to cope with at the same time.

Jonathan: That's a great question. Of course, whatever I tell you will probably be irrelevant, because you have to apply it to your situation, to your culture. It's great that you're being mindful of that. I guess my answer would be: what's the biggest pain point right now? Address that. I don't know if that's QA-related or if it's something else, because I don't know your whole situation. I would focus on the biggest problem. Whatever the biggest problem is at the moment, solve that. Let everybody breathe a sigh of relief. Celebrate a little bit that that problem is solved, hopefully, and then move on to the next one.

Don't try to solve everything at once. I told you not to use Git flow. That should be a goal. Maybe that's not your first step. Maybe your first step is completely unrelated to that. As I was taking notes through your first description, there were several things I could think of that I would probably change, but I wouldn't do them all at once. I don't know your business context. I don't know what your biggest pain point is. If QA is your biggest pain point, maybe you address that now. If you've made bigger changes more recently, maybe you need to wait two or three months just to let everybody settle down first before you do that.

If you can get the team on board with a change before you do it, that's even better. I had a difficult time doing that. The team I was working with trusted me, and they trusted that this would be a good change, but they were skeptical all the time. It's scary to take away human testers, no matter how you do it. It's scary because, whether it was working or not, it's what we're comfortable with. Making that change, handing the controls over to a computer, so to speak, seems scary. Be sensitive, and you're already doing it: be sensitive to what the team is ready for, emotionally, in a maturity sense. Focus on the biggest problem first. I don't know if this is that problem or not.

Daniel: Thanks. The last question from my side around this is actually on skills. Let's say that we now have the focus. We know this is what we want to do for QA. Everyone is on board. Everyone says, "Yes, let's move in this direction." Then one thing that can always stand in the way is: do we have the right skill set with everyone? Who will cover which parts? For this, what I would be most interested in is: what kind of skill sets did you see that the developers used most, or where maybe they would say it's good that they had this skill set already, because that made the transition a lot less painful?

Jonathan: I should have mentioned this a moment ago: in my situation, one of the fears or the skepticisms from my developers was, are we capable of doing testing as well as the QA? Aside from that, also, do we want to? I want to write code today, I don't necessarily want to test that code. Even after the fact, I had some of the developers express a little bit of frustration that it was annoying that they had to spend more time writing tests when they really wanted to be writing code. At the same time, they admitted that it was a much better situation than before. Given the option between waiting for QA and writing their own tests, they preferred writing their own tests.

How does that relate to this question of skills? I don't know what tools you are or will be using to write tests. In my view, you should write most of your tests as close to the code as possible: unit-style tests. There's still definitely a place for end-to-end tests, the kinds that use Selenium and tools like that. Hopefully, your developers are comfortable learning those tools. I don't know if you'll be using Cucumber or Gherkin or what you'll be using. Whatever tools you choose, there's going to be a learning curve there, of course. If your testers are already familiar with them, then they can help with the training, hopefully.
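To make "unit-style tests close to the code" concrete, here is a minimal sketch using a Jest-style runner (describe/test/expect). Jest and the applyDiscount example are assumptions for illustration only; the episode doesn't name a test framework. The point is that the test file sits next to the production code and exercises it directly, with no browser or Selenium in the loop:

```typescript
// discount.ts — hypothetical production code under test
export function applyDiscount(price: number, percent: number): number {
  if (percent < 0 || percent > 100) {
    throw new RangeError("percent must be between 0 and 100");
  }
  // Round to cents to avoid floating-point artifacts.
  return Math.round(price * (1 - percent / 100) * 100) / 100;
}

// discount.test.ts — a unit-style test living right next to the code
import { applyDiscount } from "./discount";

describe("applyDiscount", () => {
  test("applies a percentage discount", () => {
    expect(applyDiscount(200, 25)).toBe(150);
  });

  test("rejects out-of-range percentages", () => {
    expect(() => applyDiscount(100, 150)).toThrow(RangeError);
  });
});
```

Run with something like `npx jest`, tests like these finish in milliseconds, which is what makes them part of the everyday development loop rather than a separate QA phase.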

If literally nobody in the company knows those new tools, then, for one thing, question whether to use those tools or choose something else. If you truly need to use a tool that nobody knows, then maybe you need to hire somebody. Maybe you're not making a permanent hire; maybe you bring in a trainer or a freelancer for 6 to 12 months who can help with that sort of thing. Those are all business questions you'll have to answer. There are different ways you can approach that. Just using a random example, you don't necessarily have to hire a Gherkin expert on a permanent basis to come in and teach you all how to do that.

It could be a three-month or six-month contract. It could be some training, whatever. There are different ways to fill that skill gap. Generally speaking, I put a lot of confidence in my developers. I think that people who choose that profession do it because they like solving problems, and learning a new skill set is just, after all, another problem to solve. It's more a question, then, of do they want to solve that problem? Do they want to learn those skills, rather than can they? That's something you'll need to ask your guys or gals. Are they interested in addressing these issues? Does that help at all, or did I skip part of your question?

Daniel: I think that's great. What I take from this is that the main skills you mentioned are around test automation and using the related tools, and also around moving tests closer to the code. Specifically, things around TDD would actually cover many of these topics quite well. If any other skills come to mind that are not related to automated testing, that would be interesting, but based on what you said before, it sounds like the focus that actually was successful was the focus on automated testing.

Jonathan: Yes. TDD is great, although it is a skill to learn. I'm a big advocate of TDD, but I don't know if your developers are experienced with it. It's not something you just pick up overnight. You can learn to write unit tests quickly, but learning to write unit tests in a way that's scalable, where they don't get brittle and crumbly very quickly, is a new skill set to learn. There are some trainings; if you don't have that skill on your team, you need to develop it. Actually, in one of the older episodes of my podcast, I interviewed somebody who does TDD training. I can provide a link to that if you or other listeners are interested.
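For listeners new to it, the TDD cycle Jonathan is referring to is usually summarized as red-green-refactor: write a failing test first, write the simplest code that passes, then clean up with the test as a safety net. A hypothetical TypeScript sketch (the slugify example is invented for illustration):

```typescript
// Step 1 (red): write the test first. It fails because slugify doesn't exist yet.
// slugify.test.ts
import { slugify } from "./slugify";

test("turns a title into a URL-safe slug", () => {
  expect(slugify("Hello, World!")).toBe("hello-world");
});

// Step 2 (green): write the simplest implementation that makes the test pass.
// slugify.ts
export function slugify(title: string): string {
  return title
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, "-") // collapse runs of non-alphanumerics into one dash
    .replace(/^-+|-+$/g, "");    // trim leading and trailing dashes
}

// Step 3 (refactor): restructure freely. Because the test checks observable
// behavior (input in, output out) rather than internals, it doesn't get
// brittle when the implementation changes.
```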

One other thing I would mention: I'm in the middle of reading the book called Accelerate. I don't know if you're familiar with the book. One of the things they mention in that book is, in their studying-- what do they call it? In their investigation, basically, they did surveys of hundreds of companies doing DevOps-related principles and practices. The data they have shows, the way they phrased it, I think, was, "If your tests are not being primarily written by your developers, you see practically no benefit in terms of the output from your IT organization." That's not to say that you shouldn't have QA people writing tests, but they should not be writing the majority of the tests.

Basically, if there's a wall between your coders and your testers, in terms of who's writing the code and who's writing the tests for that code, that wall practically destroys the benefit of the tests in the first place. That's, in my view, another reason to use QA people as a resource for developers. They shouldn't be the ones writing the tests; they should be helping the people who do, if that makes sense. One other thing I would encourage you to do is to work on getting continuous deployment in place quickly. You don't need to have automated tests first. You already have it in place that you can do the manual testing in your review environment and then merge. Keep doing that. That's great.

I spend a lot of time trying to convince people they can do that. They can do CD with manual testing. It's possible. Maybe it's not perfect, but it's possible, so start there. The reason I would encourage you to get CD going as quickly as possible is because I think it helps boost the morale of most teams when they see their code being deployed more frequently. It gives you a little adrenaline boost. It's like an achievement unlocked every time you deploy, and it feels good. It also reduces, of course, the feedback loop, assuming this is going in front of customers, and it should be. Then you have customers exposed to the new features more quickly.

Maybe one of the most important reasons, which I think is often overlooked, that I really like CD, is that it adds a sense of responsibility to the developer who hits that merge button. As a developer, when I hit merge but I know it's going to be two days or three days or a week or a month before my code goes in front of customers, I'm a little bit less careful than if I know that when I hit this, in five minutes customers are going to see my code.

It adds a little bit of responsibility and ownership that I think is essential if you're trying to either reduce or eliminate this manual QA stage. You want to instill that ownership in the developers. Convincing them that hitting merge means customers see your code is one of the first steps towards the developer feeling ownership for the quality of the code they're merging. Does that make sense?

Daniel: Yes. Also, personally, I'm a big fan of introducing these kinds of things early. Again, it's always a question of pacing it out correctly, because not everyone in the team is always comfortable with every methodology. Then it's better to focus on one thing at a time. I've seen for myself, time after time, the benefits of trunk-based development. Personally, my preferred working style is working in a small ensemble of three or four people and, commit by commit, pushing directly to production.

Jonathan: That's amazing.

Daniel: That's a feeling that, if you have lived that-- If you haven't had the chance to try working like that, then I always encourage you to at least give it a try. Find a company that's willing to take you on for a day or two to see how it works. Then after you've worked in this style for a bit, you will not be happy [chuckles] in a different style anymore. Maybe I need to take back my recommendation for this reason. I'm saying don't do it, because it might create some expectations that the company you could be working for might not be able to fulfill just yet.

Jonathan: [laughs] Great. It sounds like you've already seen the light, so it's easy. You don't have to be convinced; you just have to convince the others, is what it sounds like. Great. Is there anything that we haven't covered? Any other questions I can help with?

Daniel: Yes. I think we covered the two main questions. You talked both about the theoretical aspect of how QA could work, and about the experience that it works without exploratory testing, in that example, by focusing on making sure the devs know what to work on, what to look at, feel responsible for it, and know how to write automated tests. Then more of the cultural aspect of pacing things out a bit: making sure that if you introduce a new tool or framework, people are aware of it, know about it, and feel a bit more comfortable before everyone has to use it. Maybe not do five changes in one day, but instead spread them out over about half a year or a year, depending on how many changes there are.

Jonathan: Yes. Exactly. Try to get as much buy-in as you can. If you can convince them it was their idea to change something (of course, that's the whole art of diplomacy, convincing somebody else that your idea was theirs), then it's even easier. It sounds like you have an exciting challenge ahead of you. I hope that it goes well. I hope your teams are enthusiastic, and yes, I wish you the best of luck.

Daniel: Thanks, Jonathan, it was a pleasure talking with you.

Jonathan: If people are interested in your company, how can we follow up?

Daniel: It's Optilyz. Because it's hard to spell, I guess, we'll put the link in the description. We are actually also hiring. I know this is the default line of any CTO in all kinds of podcasts, but if you're interested in joining us on that journey, that would be nice. We're looking forward to hearing from anyone interested.

Jonathan: What's your tech stack for listeners who might be interested?

Daniel: We work purely in JavaScript and TypeScript. At the moment, it's a TypeScript, React, and Redux front-end, and a JavaScript (in the future, TypeScript) back-end running on AWS. From a focus perspective, it's a lot of big data transformations.

Jonathan: At this point, the call was dropped. Since we were nearly done, we didn't re-record the end. Thank you for listening. Once again, if you would be interested in borrowing my brain as well, go to jhall.io/call. I hope to see you next time on Tiny DevOps.

[music]

Outro: This episode is Copyright 2021 by Jonathan Hall. All rights reserved. Find me online at jhall.io. Theme music is performed by Riley Day.

[00:38:23] [END OF AUDIO]
