Tiny DevOps episode #40 Stacy Cashmore — The painful crawl through the morass of past shortcuts
April 20, 2022
Stacy Cashmore has the interesting title of Tech Explorer DevOps at Omniplan, which means she has free rein to do what she thinks she needs to do! In this episode, we talk about a big rewrite decision she made, and the results of that decision, good and bad.
In this episode
- Why "DevOps" does not belong in a job title, and why Stacy put it in her job title anyway.
- What is DevOps, if not a job title?
- How to respond to mistakes we've made
- Why a rewrite is always the wrong decision
- Why a rewrite was the right decision in this case
- The pressure of proving yourself once you convince management to do a rewrite
- DevOps and CI/CD goals for the new system
- Where the problems started: awkward tests, shortcuts, and technical debt
- Working against deadline pressure
- Taking the pragmatic approach to CD
- The drawbacks to not doing "full CD"
- Plans for ongoing improvement
- Things to do differently next time, and lessons learned
Resources
Six Degrees of Kevin Bacon
The DevOps Handbook by Gene Kim, Patrick Debois, John Willis, Jez Humble
The Unicorn Project by Gene Kim
The Phoenix Project by Gene Kim
Guest
Stacy Cashmore
Twitter: @Stacy_Cash
Web site: stacy-clouds.net
Transcript
Speaker 1: Ladies and gentlemen, the Tiny DevOps guy.
[music]
Jonathan: Hello, everybody. Welcome to another exciting episode of Tiny DevOps. I'm your host Jonathan Hall. I'm actually really excited about my guest today because I have been trying to get Stacy on the show for, I feel like 10 years. The show has only been around for nine months but we've had so many hiccups.
She was traveling and caught COVID a few weeks ago, but she's finally here. Stacy thanks for coming on the show. We're going to talk about some mistakes I think you've made on a previous project, but before we do that, would you introduce yourself to our audience?
Stacy: First of all, thanks for having me on. It's taken way too long for me to be here. I apologize for that. Hi, everybody. I'm Stacy Cashmore. I am Tech Explorer DevOps for Omniplan, and we make financial advice software. We're based in Amsterdam in the Netherlands. Tech Explorer DevOps is a little bit of an interesting title. Basically, what it means is I have free rein to do what I think I need to do at the time. I chose the title to give myself that freedom.
I spend my time in the company either helping with the agile transformation that we're trying to do, helping teams improve the way that they work together, or sitting at the keyboard with the developers and the architects, making our new software. Just trying to do the best that we can do.
Jonathan: Let's break that title down quickly. Say it one more time. What's your title?
Stacy: Tech Explorer DevOps
Jonathan: Tech Explorer DevOps. That's cool.
Stacy: When it came to choosing a job title, I was lucky enough to be able to choose my own. I love the idea of the Tech Explorer part. I love playing with tech. I love trying new things. I love seeing what there is to do. And I hate having DevOps in a job title, because I don't think it is a job title.
Jonathan: I think we agree on that.
Stacy: With all of the options that I came up with, whenever I was speaking to somebody it was, "Okay, we have this option, we have that option." I've forgotten what they were exactly, but everybody always asked the same thing: "What does that mean?" The only one where nobody asked "What does that mean?" was when I said DevOps, and everybody just accepted it. It's like, "Okay, I don't like this, but it's probably the one that's going to work best."
Jonathan: It's pragmatic, isn't it?
Stacy: Yes.
Jonathan: See, I would've been the other way around. I probably would have asked you, "What does Explorer mean?" I also would have asked, "Wait, what does DevOps mean? Because that's not really a job title."
Stacy: It is absolutely not a job title. For me, DevOps, it's a way of thinking. It's a way of getting our code out there to the users. The minimum effort is not what I want to say but it's the phrase that's in my head. I want to be able to do it repeatedly. I want to be able to do it with my team. I don't want to have to throw things over a wall. I want the responsibility. I want the possibility to do things, but I also want the responsibility to make sure that what I'm doing, I'm doing well. That for me rounds up the whole DevOps mindset. I know that there are so many different definitions out there. I agree with pretty much all of them that I've read but that's how it works in my head.
Jonathan: It is a nebulous area, isn't it? You listen even to the so-called founders of the movement, Gene Kim and some of the others, and they have different takes too. It's hard to disagree, but on the other hand, it's also so hard to pinpoint: this is what DevOps is. There's not a line. It's a fuzzy area where you go from DevOps to something else, and that's okay.
Stacy: Absolutely.
Jonathan: We were talking before we started recording, and you said you're working on a new talk about, if I'm not misquoting you, some mistakes you've made. I was shocked to hear that you've made mistakes, but maybe you want to--
[laughter]
Stacy: I make mistakes all the time. I'm a programmer. I write bugs and-
Jonathan: Really.
Stacy: -I'm trying to help teams improve. I make mistakes.
Jonathan: [laughs] Welcome to the club. I think we all do. I'm sure every listener today can identify with that. What do you do with your mistakes, Stacy, when you make a mistake, what do you do about it?
Stacy: I think the first thing that I try and do when I make a mistake is own it. When I was in the early stages of my career, all those years ago, there were so many seniors and architects who really didn't like to admit that things went wrong, shall we say. Let's be polite here. They never made mistakes, and if anything happened, it was always about trying to see who you could put the blame on. You got some really nasty cultures out of that. Once I got to a position where I could try and do something about it, I decided straight away: if I make a mistake, I'm going to own up to it. I'm going to be public about it. I'm not going to hide it.
I'm going to try and take learnings from it and, one, improve myself so that I don't make the same mistake next time (making whole new mistakes is so much more fun), and two, let the people who aren't lucky enough to be in my position yet know that making mistakes is not a bad thing. It's how we learn. It's how we progress. It's how you deal with mistakes, and how you learn from them, that is the important part of making them.
Jonathan: Great. If you like to go public with your mistakes, I have a show you should come on, and we can talk publicly about some of your mistakes.
[laughter]
Why don't you set the stage for us here? I don't know the project that you want to talk about but set the stage. What are these mistakes that we're going to discuss, what's their context?
Stacy: The context is that I joined Omniplan in 2020, and there were issues with a project. I came on to try and help, work with the architect, work with the teams, like I was saying. We did some good things, but everything was really tough. We decided in September of that year that the best thing we could do was throw everything away and start again. Me and the other architect that I work with were absolutely convinced that if we did this, we could actually finish the project quicker than if we tried to carry on working with what was there. This is not something I would recommend doing most of the time, because the grass always looks greener when it rarely is.
This is one of the times when I actually went with my gut feeling. I've done this in the past in my career. Always, it's like, "We could just write this quicker if we start from scratch." "No, you can't, don't go there." This time it was, "No, we can do this quicker from scratch." Management went for it, which is a really rare position to be in, IT management agreeing that you can throw everything away and start from scratch. That put us in the wonderful position of getting to do it, but also the really interesting position of now having to prove that what we said was true. We've got to make sure that in 12 months' time, we are not in exactly the same place that we were in previously.
The company was coming from, originally, a Silverlight application. Obviously, Silverlight is end of life. It's out of support, and it means IE, which is also end of life and out of support. They had a real hard deadline for what they needed to do. It also meant that the code we had available wasn't something that we wanted to use. The initial rewrite, the one that we threw away, we kept available for reference, but not to use any of the code. That was an explicit thing that we said: even if you say, "Hey, we can use this code," you don't copy-paste it, you retype it, because when you retype it you think about what you're doing at the time and make sure that everything is a deliberate choice.
That was where we were. It's September 2020, we have not that long to get the project finished, and we've just come back almost to square one. The first thing we had to come up with was how to make sure we don't go off the rails again, because I'm fairly sure I wouldn't be employed if I went back to my manager and said, "We had to start again. Guess what, we need to start again." We've got to make sure we don't go down that road. We also wanted to make sure that we could get things out immediately. In the previous system, there were deploy pipelines, there was continuous integration, but things broke a lot.
When I first joined the company, the tests didn't work as often as they could. That was a mixture of the way that the code was deployed and a mixture of how the teams were working at the time. We decided that for this project, from day one, we want to make sure that every single code change that we put into the trunk goes out to a production-like environment.
I know production-like is a bad word, but we were replacing a system so we couldn't do a small release, and then build on it because we literally had to go from this system to this system. It wasn't something that we could strangle easily.
We got that in place. We worked with the teams to make sure that the teams could work together. We'd already done great work with them. From when I started until September, we had teams that were communicating way better. They were operating as one unit, rather than as lots of individuals, and it was all coming together nicely. From that point on it was like, "Right, now we've got this system, we're going forward. How can we get our code out there to Azure, in a constant way, and also make sure that we have confidence in what we're deploying?"
That last one is really important. Anybody can throw all of your code to production on a daily basis. Whether you're confident it's not going to go boom is an entirely different question.
Jonathan: Right. How big is this team that was working on this project?
Stacy: At the point of 2020, we had three teams with, I think, between five and seven people in each team. We're not huge.
Jonathan: Yes, 15, 20 total, something like that.
Stacy: Yes.
Jonathan: All right. This is the context. You're doing a rewrite, and you're a little bit nervous because you know rewrites are dangerous, but you're convinced this is the right time.
Stacy: Yes.
Jonathan: What happened next?
Stacy: The first couple of months everything went swimmingly. We were architected [crosstalk].
Jonathan: By the way, I think anytime someone says, "I'm going to tell the story about mistakes," and then the opening line is, "I started with a rewrite," we can start to predict how things are going to go, right?
Stacy: The interesting one is that the actual project was a success, but it's not quite what we wanted it to be.
Jonathan: Okay. All right. There is a twist at the end, so stay tuned next time for this exciting conclusion.
[laughter]
Jonathan: All right, sorry, to interrupt. Let's continue. You were nervous about this rewrite. How did things go?
Stacy: Yes. We started off with a group of the tech leads from each of the teams plus me and the architect, figuring out what we were going to do to build the new system. One of the things we decided was we were going to go for CD out of the box, but that was a given, and that includes the infrastructure. We didn't want to have to do manual work, creating infrastructure in order to do a deploy, it had to be something that just was available and easy to do.
The second one that we had is we wanted to keep all of our services. We didn't go to microservices, but we did go to a service-orientated architecture. We had a long discussion and decided against shared models, which is what had always been done in the past. We wanted to make sure that each service was independent from the rest. The only thing shared is that an API would talk to a persistence layer if it needed to fetch or store data. The actual business logic inside had to be protected from the outside world. That went through to the Angular application on the front end as well. That had an API, and then we had internal objects which we would use in all of the Angular code.
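A minimal sketch of that "no shared models" idea (illustrative names only, not Omniplan's actual code): each service owns its own API contract types and maps them onto internal domain objects, instead of referencing a model library shared across services.

```csharp
// Sketch of "no shared models": the service owns its API contract and maps it
// to internal domain types, so a change to one service's internals can't
// ripple through a shared library. Names are illustrative, not real code.
namespace QuoteService.Api
{
    // Contract types owned by this service only; other services never reference them.
    public record QuoteRequest(string CustomerId, decimal Amount);
    public record QuoteResponse(decimal MonthlyPayment);
}

namespace QuoteService.Domain
{
    // Internal object: the business logic is shielded from the outside world.
    internal class Quote
    {
        public decimal Amount { get; }
        public Quote(decimal amount) => Amount = amount;

        public decimal MonthlyPaymentOver(int months) => Amount / months;
    }
}

namespace QuoteService.Api
{
    using QuoteService.Domain;

    public class QuoteEndpoint
    {
        // The endpoint maps the contract onto the internal model and back;
        // persistence would sit behind another boundary in the same way.
        public QuoteResponse Handle(QuoteRequest request)
        {
            var quote = new Quote(request.Amount);
            return new QuoteResponse(quote.MonthlyPaymentOver(120));
        }
    }
}
```

The point is that the contract can change shape without the internal business logic, or any other service, having to change with it.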
In order to facilitate the CD part of all of this, the continuous delivery, we wanted to have high-quality tests with good coverage. That's high-quality in that they needed to test what we thought was being tested, but also high-quality because, I think it was GeePaw Hill who did a wonderful one on Twitter about awkward tests. Once you have an awkward test, it makes it so hard to work with that you stop writing the tests for that code, and you end up with code which isn't covered. We wanted to make sure that we didn't go down the route of awkward tests, which I've got to say with Angular is really hard.
With C#, it can be really hard. With Angular, it can be spectacularly hard, but for the first few months, we managed to go down this path, and it went quite okay. Everybody was learning new things, but we were all on the same path together. Then we got the problem of pressure. Like I say, we're into this project. I joined the company to help this project, and six months after joining the company, we threw all of the code away. Now, obviously, deadlines don't change. We have people wanting to use what we're making. Obviously, that's why we do it, so we've still got this deadline coming up.
I think around the December, January time, we noticed the amount of pressure coming onto the development teams was increasing. At that point, things start to go a little bit awry. You notice that people are desperate to get functionality out because functionality is what people are chasing. It's what you hear from the customers, from the management.
It's what we want to do, we want to get things out. At that point, the pressure got to the point that developers were rushing what they were doing and using phrases which I'm sure I've used early on in my career as well, "We'll come back and add that bit later."
We were starting to get work which maybe wasn't refined as well as it could be, so it wasn't quite as obvious what we should be doing. We started to get work pull requests followed by testing pull requests, which, if you're in a CI/CD world, is really dangerous. Don't do this, people, because it's not like you've got somebody checking that things are working before it goes into production.
Some of our tests started to get really awkward. I think this is the first thing that we realized we were doing wrong: coming in and skipping things for short-term gain. Everybody was working super hard. This is absolutely not a complaint about my colleagues. Everybody is doing the best that they can do, but once you start to get a certain amount of pressure, it doesn't matter how much you let people know that the quality and the agreements that were made are super important. That pressure comes down, people go into a different mode, and mistakes start to get made.
Jonathan: Where was this pressure coming from? Was it just the looming deadlines or was it active pressure?
Stacy: I think for the teams, the main thing was the deadlines.
Jonathan: These deadlines are what led to the pressure. Was there a constant reminder, or was it just the clock ticking? Was there internal conversation about it, like, "Oh, it's getting close. We're not there yet"? What was the atmosphere like?
Stacy: The atmosphere was good, I've got to say. I've been in places where the pressure was put onto developers in a really nasty way. This wasn't that, but the pressure was obviously, there. We did have a deadline ticking down. We had lists of things that we needed to do by certain dates and those dates are getting closer and closer or passing. We work with intelligent people, and I think everybody knew what was going on. Everybody could see what was happening. It gets to people. The biggest issue that comes from something like this is once you've got into that way of working it becomes so easy for that to become the way of working.
It can become habit: "This is how we do things." Not only that. When you have multiple deadlines, you don't have that time to stop. The first time, you do it like this: "We're going to come back and clean it up." But before you can clean it up, you've got the second one, and you start to slip. All of these good intentions start to back up on themselves, and it becomes a vicious circle. There are some things that you can't do because things weren't done previously, and it just makes everything so much harder.
Jonathan: Yes. What happened next? That's the obvious question. Was this resolved or did it lead to bigger problems before it was resolved? [chuckles]
Stacy: Well, it isn't resolved yet. We are live. We don't have the CD pipeline that we want. That is one thing that happened because we had to take the pragmatic view. We now do multiple releases per week, but it isn't on-demand. There is a final gate where somebody goes, "Yes, we are going to release now." We can see the issues that causes us. It causes delays in the pipeline. It causes questions about whether something has gone to production or not, but it gives us a degree of coverage that we wouldn't otherwise have.
The next thing that we're going to do, and this is what we're working on right now, is taking time to take stock of where we are and figure out where we are going to go next. We're 18 months on from the start of the project. We are in production. We built a system in an amount of time that I have not seen in my career. I am super proud of everybody that's been working on this, but it's also now 18 months of code, which means we cannot go back and fix everything. You can't now spend the next three weeks or three months, or however long it would take, going through the code with a fine-tooth comb trying to figure out what we are missing and what we need to add.
Although we're in a better place than we were, undoubtedly, we are not where we want to be. What we are looking at doing now is accepting that some places don't have that coverage where we can have the confidence to just throw things to production, but as long as they're not being changed that's okay. It's tested, it's working, if we are not changing it, it's not going to break. What we are going to do now is before we make a change, we are going to try and figure out exactly what it is that we can impact with this change.
We're going to make sure that that particular area has enough coverage that we can say with confidence, "We can make this change, and if something breaks, we will know that it's broken." The teams are going to have to look there and figure out, "Is this an API test, which is a big, cumbersome test that we need to do? Or is it a test on an individual class, so it can be a faster-running test that's easy to maintain?" This is going to be a case-by-case basis, figuring out exactly what we need to do to move forward.
That's going to be a PR on its own. It's going to be a pull request which is nothing but adding coverage to what we have. Once we've got that, then the real fun can start. Now we know that we are not going to break stuff, or at least we are going to know if we break stuff. Now we can look at: do we need to refactor any code to make this change? Is there anything we need to do to make this change simpler and less dangerous? We can make those changes, and then we can actually do the changes that we want for the code itself.
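As a rough illustration of that coverage-first pull request, a characterisation test like the one below pins down today's behaviour before anything is refactored. This is a hypothetical example using xUnit; the class and the numbers are made up, not Omniplan's real calculations.

```csharp
// Hypothetical coverage-only pull request: before changing the calculation,
// first record what it does today so a later refactor can't silently break it.
using Xunit;

public class RepaymentCalculatorCoverageTests
{
    [Fact]
    public void Existing_linear_repayment_is_unchanged()
    {
        var calculator = new RepaymentCalculator();

        // Characterisation test: the expected value is whatever the current
        // code produces, recorded now so future changes have a safety net.
        decimal monthly = calculator.MonthlyRepayment(principal: 240_000m, termInMonths: 360);

        Assert.Equal(666.67m, monthly, precision: 2);
    }
}

// The class under test would already exist in the real system; it is shown
// here only so the sketch is self-contained.
public class RepaymentCalculator
{
    public decimal MonthlyRepayment(decimal principal, int termInMonths)
        => decimal.Round(principal / termInMonths, 2);
}
```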
Jonathan: Nice. I'm interested in hearing from you because I know that you're experienced with modern DevOps tooling and so on. I think some of the listeners probably are less experienced. Certainly, the industry at large is less experienced I think on average than you are. What are some of the problems? I can imagine if you and I were sitting over a coffee we would just assume that we both understand why you want CD, and why you're not content with a two or three times per week release. What are the actual problems that you see by not having CD in place that you're hoping to overcome by getting there?
Stacy: The big one for me is I want to get confirmation that things are working. Things aren't finished, they're not done, until they're in production. That's number one. I want to get stuff out there. Now, we have introduced feature toggles so that we separate our deploy and our release moments, because we don't want to put new things out there directly. We want control over when something is visible, so for big functionality changes we have that, but we want to get the code out to production as quickly as possible.
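A minimal sketch of that deploy/release separation with a feature toggle, assuming a simple configuration-backed flag; Omniplan's actual toggle mechanism may well differ.

```csharp
// Feature-toggle sketch: the code ships to production with every deploy,
// but the new behaviour stays dark until the flag is flipped in configuration.
// Illustrative only; the real toggle mechanism may differ.
using Microsoft.Extensions.Configuration;

public class AdviceReportService
{
    private readonly IConfiguration _config;

    public AdviceReportService(IConfiguration config) => _config = config;

    public string BuildReport()
    {
        // The release moment is controlled here, independent of the deploy moment.
        bool useNewLayout = _config.GetValue<bool>("FeatureToggles:NewReportLayout");

        return useNewLayout
            ? BuildNewLayout()      // deployed, but only visible once toggled on
            : BuildCurrentLayout();
    }

    private string BuildNewLayout() => "new layout";
    private string BuildCurrentLayout() => "current layout";
}
```

Flipping the flag in configuration then becomes the release moment, while the deploy has already happened along with the rest of the trunk.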
The other one that we want is to know what is running on production. If you are deploying a couple of days a week, the thing that I checked in yesterday, is that on production yet or not? What is currently in the pipeline, and if it breaks, what broke it? If you're bunching up three days' worth of changes, then you've got three days' worth of pull requests that you've got to go through to figure out which one may have broken production. If I'm putting stuff out on a regular basis with each pull request that I make, then at the point where I make a deploy and it breaks, I have a reasonable idea. It's not guaranteed, of course.
It can always be a sleeper from a previous pull request, but you've got a reasonable idea that, "Okay, this is probably what killed it. What did we change? How could this impact what we were doing?" Those things put together are why we want to do it now. For the future, once we get good at what we are doing with the CI/CD process, it allows us to actually do more real-time changes.
We can have an idea, we can flesh it out, we can push it out, and we can see the impact that it has. We can get fast feedback cycles from that, rather than it's thought of one month, refined the next month, developed the month after, and live the month after that. You've suddenly got this really long pipeline, and has your change had an impact? Has the market had an impact? All these things you don't really get. If we can get that fast-rolling change at a really high cadence, it means that we can have a nice way of working and get feedback on a regular basis without too much of a delay.
Jonathan: What manual checks do you have in place right now once something's been merged but not released yet? What additional checks are you going through before you make that final call to deploy?
Stacy: Right now, we have testers who will test on our development and test environments, on top of the automatic checks that we do, and our business consultants, who have the really deep financial knowledge that you need. We are producing reams of numbers, and knowing that those reams of numbers make sense, I'd almost say it's an art form, but it's really a humongous amount of knowledge. You need to be specialized to know what's going on there. These people are checking at the various levels as well.
We have two people in charge of the pipeline before it goes to production, and they are checking that nothing is going boom in the acceptance environment before moving it forward. If we start to see things coming in through Application Insights from the acceptance environment, then there's a good chance that we're not going to go forward from there.
Jonathan: Looking back, is there anything that you think you or the team could have done differently to prevent this less than ideal situation so that you could have proper CD from the beginning?
Stacy: I think one thing that I would've done differently now is, I probably wouldn't have started development that first week. We were taking a team that were used to quite long cycles. There were hardening cycles built into the previous way of working. It is quite a mindset switch for people from that to moving to a CICD flow. You do get that instant feedback, but you've also got to take that instant responsibility. I think maybe we could have done better with education for the developers, for the business consultants, so that everybody was on the same page and got a sense of why we needed to take this so seriously if we're going to do it.
On top of that, the other thing that I think we should have done differently is that at the point where the pressure got the highest and we saw things creeping in, a better intervention would've been useful.
Jonathan: What would that have looked like? If you could do it again?
Stacy: What I would've done there is try to get a lot more involvement from everybody, from the developers up to management, at the point where we saw things happening. It doesn't help your deadlines to say, "We need to stop for a couple of days. We need to take stock. We need to go through this and figure out why this is happening," and get people to understand that it's not something that can continue, rather than saying, "We know that this isn't what it should be, but we'll come back and fix it." But I think both of those things, especially if you're in a high-pressure situation, would probably help. If I was doing this all over again, I would try and do something like that.
I also know that that is an incredibly tough sell, to say to somebody with deadlines coming up, "Let's stop for two days and not produce anything, but take stock and figure out what we need to do to make the progress we need to make, while still keeping these checks in place so that we know that we are doing the right things."
Jonathan: It is a hard sell, although it is often also the correct move. Sometimes you need to stop and make sure you're moving in the right direction or that you're moving as efficiently as possible or many other things that you may want to do that just take time to stop and slow down. What's the saying? Slow is smooth and smooth is fast?
Stacy: Yes, exactly that. The first talk that I ever did as a speaker is this talk, but based on an old company, so different experiences. One of the things that I say in there is that when it comes to taking shortcuts, you can be really fast today. You might be quick in a week or so. Eventually, you are going to slow to that painful crawl as you go through the morass of all of the shortcuts you've taken in the past. By the time you reach that point, it's too late.
Jonathan: Well said.
Stacy: That is a point where you start to talk about rewriting either sections of or complete applications.
Jonathan: Yes. In retrospect, was the rewrite the right decision?
Stacy: Yes. That's something that I am 100% convinced of. I saw what we did between February and July, August. I've seen what we have done in the 18 months since, and it is orders of magnitude better. The quality of the code is orders of magnitude better. Even though we don't have the testing and the coverage that we would want, it is still far better than it was, so everything is better. Like I say, it is a story of my failures, but the project wasn't a failure. The project has done what it's supposed to do. It works well. We can deploy it easily, even if not as often as we'd like. We can create new environments for new customers in a couple of hours.
Even with all of the complex things we're using inside of Azure, we need to change a few templates and roll out one pipeline, and we get all of our infrastructure. Then each service rolls out its own pipeline, and we get that in there too. We have all of this in place, in a very short amount of time, for the number of lines of code that we have. We're just not quite where we want to be when it comes to actually pushing things out as we want to, but we will get there.
Jonathan: If we ever get to where we want to be, we'll be bored anyway, won't we?
Stacy: Absolutely. There's always something new to learn and there's always something new to improve.
Jonathan: Exactly. Great. Thanks, Stacy, for sharing your story, helping us all feel human, I guess. We all make mistakes.
Stacy: I also made mistakes. Be proud of your mistakes and figure out what you can do to change things. The one thing that I would say to anybody in this type of situation is to always make sure that you are experimenting, because in these situations you've got all sorts of different things that you can try, and it doesn't have to be big. It doesn't have to be this amazing, humongous experiment that's going to change the world overnight. If you are trying new things on a regular basis, some of them are going to work, some of them aren't going to work, and you're going to be able to pick out the better ones. You are going to find your way to a much better place.
Jonathan: One thing I'm interested in asking you about, and I maybe should have asked this at the beginning: how did you make the decision to do the rewrite? Because that is such a big decision. As we've already discovered, sometimes you have to wait months to see if it was the right decision, and we hear so many stories of when it was the wrong decision. Why were you convinced, and why were you right, that this was the right decision in this case?
Stacy: I think the biggest indicator to me that we were probably on the right lines with this one was that not only did we see that making small changes took a long time, it took a long time in code, and it affected many different layers of code. Finding the code that you needed to change also took a long time. Some of it was just purely unmaintainable. I've seen classes that were 1,500 lines long. Figuring out which part of that 1,500-line class you've got to change is not fun to do. We also saw that when we made those changes, totally unexpected things broke.
I think at that point, every change was scary, and because every change is scary, it takes even longer to do, because you are really trying to figure out how not to break things. Literally, you could change something over here, and because of the shared models, because of the design of the system, changing something over here might break this, even though you think it's technically unrelated. It's like the six degrees of Kevin Bacon; it's the six degrees of code. It can filter its way through and find it. That was one side of it. The other side was that it was a Visual Studio solution with, I believe, 144 projects in it.
You opened it up and every development machine slowed to a crawl, and the system didn't need it. It wasn't screaming out for that many things. It was all good intentions. There were too many patterns. There were too many ideas thrown at it, and it just got overly complex because of this. It's one of the things we're trying to do in the new system. One of the reasons why we went for the service-orientated architecture is that we wanted to keep the individual parts as simple as possible. We want the complexity that is in there to come from the business logic, not from how it's put together.
I said it was a big decision. It was a scary decision. Looking at it with hindsight, I don't know what's worse. Is it worse to have your management tell you, "Nope, we are not rewriting this. You have to carry on with it as it is," which is not fun, but then it's not on you?
Jonathan: Exactly.
Stacy: Or being told, "No, we see your point. We trust you, rewrite it, prove yourself right."
Jonathan: In the eyes of management, have you been vindicated? Did you prove that?
Stacy: Yes.
Jonathan: Awesome.
Stacy: I think my boss said it best a couple of days ago, at a company update. In his career, and it's the same for mine, I've done lots of big projects, and I've never seen a turnaround from scratch to production, with barely any real issues once we hit production, in that short an amount of time. We're talking hundreds and hundreds of thousands of lines of code in less than two years, and it works. It's reasonably performant.
We have changes that we know we need to make because we made decisions at the beginning to keep some things too simple because we didn't want to introduce complexity until we knew we needed the complexity. There's some things we know we need to change, but they're performant, they're not a problem yet, and we know that we can change them in the future. I think that all comes together, and it vindicated the decision this time.
Jonathan: Good. Stacy, if anybody else is going through something like this, maybe they're in the middle of a rewrite or considering one. Or they're just struggling with these shortcuts that have started to add up. Do you have any advice or maybe resources that they might look at for help?
Stacy: I would recommend to anybody in this situation to take a look at The DevOps Handbook, The Unicorn Project, and The Phoenix Project. Those three books, in particular, are eye-openers for the things to look out for and for the way to think about what you're doing. That's everything from fixing a project which isn't absolutely doomed and can still be saved, to figuring out how to build a project from the ground up in the right way.
Don't feel bad when you don't do as well as the people in the books do, but definitely read them, and take the ideas and the morals from them. The one that I think is in The Phoenix Project: tackle the real bottlenecks. It's so easy to see something that you think is a bottleneck, and you fix it and nothing changes, because whilst it might have been a bottleneck, it wasn't the actual one causing the problems. All you might do is put more work onto the real bottleneck and actually make your situation worse. It's morals and thoughts like this in those three books that really help. Also, "improving the work is more important than doing the work" is a real one to take away.
Jonathan: Great. Well, I know that your LinkedIn says not only that you're a Tech Explorer DevOps but that you're a speaker. Do you have any speaking engagements coming up that are open, if somebody wants to come hear you speak in the next few months?
Stacy: April the 21st, I'm giving a talk at Azure Live. I'm also hosting part of that conference, which is always fun to do, being on the other side of the questions for the speakers. After that, I am speaking at [unintelligible 00:43:09] Europe. I'm speaking at NDC in London and at Scottish Summit. I've got a few different ones coming up over the coming months.
Jonathan: Very good.
Stacy: If you go to my website then I have a list of talks that are coming up.
Jonathan: What's your website?
Stacy: The website is stacy-clouds.net
Jonathan: Great, Stacy. If people want to follow you, I guess the website is the best place to go, or do you have social media? I see you have social media on the website, Twitter, Twitch, GitHub, and LinkedIn, so we can stalk you all over the interwebs.
Stacy: You can stalk me all over the internet from there. I'm mainly active on Twitter, @Stacy_Cash. That's probably the best way to get me.
Jonathan: Great. Thanks, Stacy for taking the time to come on today. It's been a long time coming. I'm so glad we finally got to chat and hear your story. Is there anything you'd like to add finally before we sign off for today?
Stacy: No. I just want to say thank you for having me, and apologies for it taking so long. Good luck, everybody, with the projects that you've got.
Jonathan: Great. Thanks. Until next time, everybody.
[music]