diff --git a/docs/bets/Coding-Bets.md b/docs/bets/Coding-Bets.md
index 9fd9a8bc9..d68c748b3 100644
--- a/docs/bets/Coding-Bets.md
+++ b/docs/bets/Coding-Bets.md
@@ -132,12 +132,12 @@ The idea here is to make a bet that a market exists for a certain product, _and
We're used to the idea of entrepreneurs taking risks on new business ideas (like in the MVP example, above). But it's not really so different when you are developing in a team, or on a personal project. So if you start by taking the view that every piece of work you do is a bet then it really helps to put into perspective what is at stake and what is to gain.
-The best gamblers (the ones who win over time) don't necessarily take bets they'll always win. But they are always judging risk, stake and reward. They try to place bets where the [Balance of Risk](/thinking/Glossary.md#balance-of-risk) is in their favour. As developers, we should adopt the same mind-set:
+The best gamblers (the ones who win over time) don't necessarily take bets they'll always win. But they are always judging risk, stake and reward. They try to place bets where the [Balance of Risk](/thinking/Glossary#balance-of-risk) is in their favour. As developers, we should adopt the same mind-set:
- What are the likely stakes?
- - What is the [Payoff](/thinking/Glossary.md#payoff)?
+ - What is the [Payoff](/thinking/Glossary#payoff)?
- What are the odds?
- - Is the bet worth it? Do the stakes justify the [Payoff](/thinking/Glossary.md#payoff)?
+ - Is the bet worth it? Does the [Payoff](/thinking/Glossary#payoff) justify the stakes?
- How can you maximise the pay-off while minimising the stakes? How long will it take for the pay-off to be worthwhile?
- Are you making a long bet, or lots of small, short bets? You can reduce the overall stakes by splitting work up and doing the riskiest part first.
@@ -149,4 +149,4 @@ But software isn't like this. Largely, we aren't building the exact same thing
What if you _are_ building the same cookie-cutter things over-and-over? Perhaps it's time to change the bet? By using new tools or techniques you would increase the risk, but also the reward would be to learn something new. Alternatively, _build the library_ that automates the drudge-work so you can re-focus on the areas of risk.
-[The Purpose Of The Development Team](Purpose-Development-Team.md) article expands this idea further: that everything we do in a development team is about managing a balance of risks across the portfolio of an entire team's efforts. In the next article though, we'll zoom in more closely and see how we use risk when we make [Debugging Bets](Debugging-Bets.md).
+[The Purpose Of The Development Team](Purpose-Development-Team) article expands this idea further: that everything we do in a development team is about managing a balance of risks across the portfolio of an entire team's efforts. In the next article though, we'll zoom in more closely and see how we use risk when we make [Debugging Bets](Debugging-Bets).
diff --git a/docs/bets/Debugging-Bets.md b/docs/bets/Debugging-Bets.md
index f4ad632d4..f6acf3231 100644
--- a/docs/bets/Debugging-Bets.md
+++ b/docs/bets/Debugging-Bets.md
@@ -14,15 +14,15 @@ tweet: yes
# Debugging Bets
-In [The Purpose Of The Development Team](Purpose-Development-Team.md) we looked at how a development team is all about trying to shift the risk profile in favour of the business. Perhaps by removing the risk of customers not having the features they want, or not signing up, or not learning about the product.
+In [The Purpose Of The Development Team](Purpose-Development-Team) we looked at how a development team is all about trying to shift the risk profile in favour of the business. Perhaps by removing the risk of customers not having the features they want, or not signing up, or not learning about the product.
-Then, in [Coding Bets](Coding-Bets.md) we considered the same thing at task level. That is, in choosing to spend time on a given task we are staking our time to improve our risk position. And, it’s definitely a bet, because sometimes, a piece of coding simply doesn’t end up working the way you want.
+Then, in [Coding Bets](Coding-Bets) we considered the same thing at task level. That is, in choosing to spend time on a given task we are staking our time to improve our risk position. And, it’s definitely a bet, because sometimes, a piece of coding simply doesn’t end up working the way you want.
![Article Series](/img/generated/bets/debugging/bets.svg)
Now, we’re going to consider the exact same thing again but from the point of view of debugging. I’ve been waiting a while to write this, because I’ve wanted a really interesting bug to come along to allow me to go over how you can apply risk to cracking it.
-Luckily one came along today, giving me a chance to write it up and go over this. If you've not looked at Risk-First articles before, you may want to review [Risk-First Diagrams Explained](/thinking/Risk-First-Diagrams.md), since there'll be lots of diagrams to demonstrate the bets I'm making.
+Luckily one came along today, giving me a chance to write it up and go over this. If you've not looked at Risk-First articles before, you may want to review [Risk-First Diagrams Explained](/thinking/Risk-First-Diagrams), since there'll be lots of diagrams to demonstrate the bets I'm making.
## The Problem
@@ -126,7 +126,7 @@ Sadly, this meant that I’d actually had to test and rule out _all of the other
## Some Notes
-1. I started by writing down all the things I knew, and all of my hypotheses. Why? Surely, time was short! I did this _because_ time was short. The reason was, by having all of the facts and hypotheses to hand I was setting up my [Internal Model](/thinking/Glossary.md#internal-model) of the problem, with which I could reason about the new information as I came across it.
+1. I started by writing down all the things I knew, and all of my hypotheses. Why? Surely, time was short! I did this _because_ time was short. The reason was, by having all of the facts and hypotheses to hand I was setting up my [Internal Model](/thinking/Glossary#internal-model) of the problem, with which I could reason about the new information as I came across it.
2. I performed four tests, and ended up ruling out six different hypotheses. That feels like good value-for-time.
3. In each case, I am trading _time_ to change the risk profile of the problem. By reducing to zero the likelihood of some risks, I am increasing the likelihood of those left. So a good test would:
- a. Bisect probability space 50/50. That way the information is maximised.
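The 50/50 figure in note 3a can be made concrete: the expected information from a test is the entropy of its outcome, and that peaks when the outcome is a coin-flip over the remaining hypotheses. A minimal sketch (the function and numbers are illustrative, not from the bug described in the article):

```python
import math

def information_gained(p):
    """Expected information (in bits) from a test whose outcome is
    positive with probability p: the binary entropy function, which
    is maximised at exactly p = 0.5."""
    if p <= 0 or p >= 1:
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

# A test bisecting the remaining probability space yields a full bit;
# a lopsided test tells you far less on average.
print(information_gained(0.5))  # 1.0
print(information_gained(0.9))  # ~0.47
```

Each well-chosen test halves the space of remaining hypotheses, which is why a handful of tests can rule out many hypotheses: above, four tests ruled out six.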
diff --git a/docs/bets/Purpose-Development-Team.md b/docs/bets/Purpose-Development-Team.md
index 228015358..6e7a5199b 100644
--- a/docs/bets/Purpose-Development-Team.md
+++ b/docs/bets/Purpose-Development-Team.md
@@ -40,7 +40,7 @@ Scrum's rule about working-to-a-sprint is well-meaning but not always applicable
## Case 3: Technical Debt
-Sometimes, I am faced with a conflict over whether to pay off [technical debt](/risks/Complexity-Risk.md#technical-debt) or build new functionality. Sometimes the conflict will be with people in my team, or with stake-holders but sometimes it is an internal, personal conflict.
+Sometimes, I am faced with a conflict over whether to pay off [technical debt](/risks/Complexity-Risk#technical-debt) or build new functionality. Sometimes the conflict will be with people in my team, or with stake-holders, but sometimes it is an internal, personal conflict.
![Technical Debt vs Building Features](/img/generated/bets/purpose/technical-debt.svg)
@@ -68,9 +68,9 @@ So, above I’ve given several cases of contradictory tensions within developmen
But could there be a “general theory” somehow that avoids these contradictions? What would it look like? I am going to suggest one here:
-> "The purpose of the development team is to improve the [balance of risk](/thinking/Glossary.md#balance-of-risk) for achieving business goals as much as possible."
+> "The purpose of the development team is to improve the [balance of risk](/thinking/Glossary#balance-of-risk) for achieving business goals as much as possible."
-Now clearly, the troublesome clause in this statement is “[balance of risk](/thinking/Glossary.md#balance-of-risk)”. So, before we apply this to the cases above, let’s explain this concept in some detail by exploring three toy examples: the roulette table, buying stocks and cycling to work. Then we'll see how this impacts the work we do in software development more generally.
+Now clearly, the troublesome clause in this statement is “[balance of risk](/thinking/Glossary#balance-of-risk)”. So, before we apply this to the cases above, let’s explain this concept in some detail by exploring three toy examples: the roulette table, buying stocks and cycling to work. Then we'll see how this impacts the work we do in software development more generally.
## Example 1: The Roulette Table
@@ -81,7 +81,7 @@ Let’s talk about “risk” for a bit. First, we’re going to consider the g
The above chart shows the distribution of returns for this bet. Which hole the ball lands in (entirely randomly) is the independent variable on the x-axis. The return is on the y-axis. Most of the time, it’s a small loss, but there’s that one big win on the 12. (For clarity, in all the charts, I’ve arranged the x-axis in order of “worst outcome” to “best outcome”, but it doesn’t necessarily have to be arranged like this.)
-In roulette, then, the [balance of risk](/thinking/Glossary.md#balance-of-risk) is against us: if we integrate to find the area under this chart, it comes to -1 chips. You could get lucky, but over time the house wins. It’s (fairly) transparent that this is the case when you enter the game, so people are clearly not playing roulette with the rational goal of maximising chips.
+In roulette, then, the [balance of risk](/thinking/Glossary#balance-of-risk) is against us: if we integrate to find the area under this chart, it comes to -1 chips (36 losing holes at -1 chip each, against a single win of +35). You could get lucky, but over time the house wins. It’s (fairly) transparent that this is the case when you enter the game, so people are clearly not playing roulette with the rational goal of maximising chips.
## Example 2: Buying Stocks
@@ -93,11 +93,11 @@ First, a roulette table presents us with a set of very discrete outcomes. Real
The chart above (from [William T Ziemba](https://www.williamtziemba.com)) shows the returns-per-quarter of Ford and Berkshire Hathaway stocks over a number of years, with worst-performing quarters on the left and best-performing on the right.
-Second, while you know ahead-of-time the chances of winning at roulette, you can only guess at the [balance of risk](/thinking/Glossary.md#balance-of-risk) for owning Berkshire Hathaway stock for the next quarter, even if you are armed with the above chart. Generally, owning shares has a net-positive [balance of risk](/thinking/Glossary.md#balance-of-risk): on average you're more likely to make money than lose money, but it's not guaranteed - past performance is no indication of future performance.
+Second, while you know ahead-of-time the chances of winning at roulette, you can only guess at the [balance of risk](/thinking/Glossary#balance-of-risk) for owning Berkshire Hathaway stock for the next quarter, even if you are armed with the above chart. Generally, owning shares has a net-positive [balance of risk](/thinking/Glossary#balance-of-risk): on average you're more likely to make money than lose money, but it's not guaranteed - past performance is no indication of future performance.
Another question relating to this graph might be: which firm is generating the most value? Certainly, the area under the Berkshire Hathaway curve is larger but there is a bigger downside too. Is it possible that Berkshire Hathaway generates more value while taking on more risk?
-When we consider buying a stock, we are going to build a model of the [balance of risks](/thinking/Glossary.md#balance-of-risk) (perhaps on a spreadsheet, or in our heads). This will be dependent on our own preferences and experience (our [Internal Model](/thinking/Glossary.md#internal-model) if you will).
+When we consider buying a stock, we are going to build a model of the [balance of risks](/thinking/Glossary#balance-of-risk) (perhaps on a spreadsheet, or in our heads). This will be dependent on our own preferences and experience (our [Internal Model](/thinking/Glossary#internal-model) if you will).
## Example 3: Cycling To Work
@@ -105,7 +105,7 @@ Gambling is all about winning _chips_, and buying stock is all about winning _mo
![Cycling To Work: Distributions of Returns - Time and Health](/img/numbers/cycling-to-work.png)
-In the above chart, we have two risk profiles for cycling to work. On the left, we have the time taken. After a few week's cycling, we can probably start to build up a good [Internal Model](/thinking/Glossary.md#internal-model) of what this distribution looks like.
+In the above chart, we have two risk profiles for cycling to work. On the left, we have the time taken. After a few weeks' cycling, we can probably start to build up a good [Internal Model](/thinking/Glossary#internal-model) of what this distribution looks like.
On the right, we have _health_. There _isn't_ a good objective measure for this. We might look at our weight, or resting heart-rate or something, or just generally have a good feeling that cycling is making us fitter.
Also, there's probably a worry about having an accident built into this (the steep drop on the left), and again, there is no objective measure for judging how badly that might come off.
@@ -117,15 +117,15 @@ So we have three issues with health:
## Back To Software
-So, we've gone from the Roulette Table example where the whole risk profile is completely known in advance to the Cycling example, where the risk profile is hidden from us, and unknowable. Regardless, we will have our own [Internal Model](/thinking/Glossary.md#internal-model) of the balance of risks which we use to make judgement calls.
+So, we've gone from the Roulette Table example where the whole risk profile is completely known in advance to the Cycling example, where the risk profile is hidden from us, and unknowable. Regardless, we will have our own [Internal Model](/thinking/Glossary#internal-model) of the balance of risks which we use to make judgement calls.
-Just as a decision over how fast to cycle to work changes the [balance of risk](/thinking/Glossary.md#balance-of-risk), the actions and decisions we make in software development do too.
+Just as a decision over how fast to cycle to work changes the [balance of risk](/thinking/Glossary#balance-of-risk), the actions and decisions we make in software development do too.
-The difference is, while the cycling example was chosen to be quite _finely balanced_, in software development we should be looking for actions to take which improve the upside _considerably_ more than they worsen the downside. That is, improving the [balance of risk](/thinking/Glossary.md#balance-of-risk) _as much as possible_.
+The difference is, while the cycling example was chosen to be quite _finely balanced_, in software development we should be looking for actions to take which improve the upside _considerably_ more than they worsen the downside. That is, improving the [balance of risk](/thinking/Glossary#balance-of-risk) _as much as possible_.
![Good and Not-So-Good Actions](/img/numbers/good-not-so-good-actions.png)
-This is shown in the above chart. Let's say you have two possible pieces of development, both with a similar downside (maybe they take a similar time to complete and this what is lost if it doesn't work out). However, the action on the left _significantly_ improves the [balance of risk](/thinking/Glossary.md#balance-of-risk) for the project. Therefore, all else being equal, we should take that bet.
+This is shown in the above chart. Let's say you have two possible pieces of development, both with a similar downside (maybe they take a similar time to complete and this is what is lost if it doesn't work out). However, the action on the left _significantly_ improves the [balance of risk](/thinking/Glossary#balance-of-risk) for the project. Therefore, all else being equal, we should take that bet.
We don't want to just do work that merely shifts us from having one big risk to another, we want to do work that swaps out a large risk for maybe a couple of tiny ones.
@@ -133,16 +133,16 @@ Let's go back to our original cases:
- If I decide to **suspend the current sprint** to fix an outage, then that’s because I’ve decided that the risk of lost business, or the damage to reputation is much greater than the risk of customers walking because we didn’t complete the planned features.
- When the Agile Manifesto stresses **Individuals and Interactions over Processes and Tools**, it’s because it believes focusing on processes and tools leads to much greater risk. This is based on the experience that while focusing on individuals and interactions may appear to be a less efficient way to build software, following strict formal processes massively increases the much worse risk of [building the wrong product](/tags/Feature-Fit-Risk).
-- When we argue for **fixing technical debt against shipping a new feature**, what we are really doing is expressing differences in our models of the [balance of risk](/thinking/Glossary.md#balance-of-risk) from taking these actions. My boss and I might both be trying to minimise the risk of customers defecting to another product but he might believe this is best achieved by [adding new features](/tags/Feature-Risk) in the short term, whilst I might believe that [clearing technical debt](/risks/Complexity-Risk.md#technical-debt) allows us to get features delivered faster in the long term.
-- In the example of **Sustainably vs Quickly**, it's clear that what we should be doing is trying to avoid altering the balance of risks in a way that sacrifices too much Sustainability or Speed. To do this requires judgement in the form of an accurate [Internal Model](/thinking/Glossary.md#internal-model) of the [balance of risks](/thinking/Glossary.md#balance-of-risk).
+- When we argue for **fixing technical debt against shipping a new feature**, what we are really doing is expressing differences in our models of the [balance of risk](/thinking/Glossary#balance-of-risk) from taking these actions. My boss and I might both be trying to minimise the risk of customers defecting to another product but he might believe this is best achieved by [adding new features](/tags/Feature-Risk) in the short term, whilst I might believe that [clearing technical debt](/risks/Complexity-Risk#technical-debt) allows us to get features delivered faster in the long term.
+- In the example of **Sustainably vs Quickly**, it's clear that what we should be doing is trying to avoid altering the balance of risks in a way that sacrifices too much Sustainability or Speed. To do this requires judgement in the form of an accurate [Internal Model](/thinking/Glossary#internal-model) of the [balance of risks](/thinking/Glossary#balance-of-risk).
### Other Scenarios
-In a way, this is not just about development teams. Any time a person is added to an organisation, the hope is that it will improve the [balance of risk](/thinking/Glossary.md#balance-of-risk) for that organisation. The development team are experts in improving the balance of [technical risks](/risks/Risk-Landscape.md) but other teams have other specialities:
+In a way, this is not just about development teams. Any time a person is added to an organisation, the hope is that it will improve the [balance of risk](/thinking/Glossary#balance-of-risk) for that organisation. The development team are experts in improving the balance of [technical risks](/risks/Risk-Landscape) but other teams have other specialities:
- The Finance team are there to avoid the risk of [running out of money](/tags/Funding-Risk) and to ensure that the bills get paid (avoiding [Legal Risks](/tags/Operational-Risk)).
- The Human Resources team are there to make sure staff are hired and managed properly, and that they leave properly. Doing this avoids [inefficiency](/tags/Schedule-Risk), [Reputation Damage](/tags/Trust-And-Belief-Risk), [Morale Issues](/risks/Agency-Risk#morale-failure) and [Legal Risks](/tags/Operational-Risk).
- - The best doctors have accurate [Internal Models](/thinking/Glossary.md#internal-model). They can best diagnose the illnesses and figure out treatments that improve the patient's [balance of risk](/thinking/Glossary.md#balance-of-risk). Medical Students are all taught to 'first, do no harm':
+ - The best doctors have accurate [Internal Models](/thinking/Glossary#internal-model). They can best diagnose the illnesses and figure out treatments that improve the patient's [balance of risk](/thinking/Glossary#balance-of-risk). Medical Students are all taught to 'first, do no harm':
> "given an existing problem, it may be better not to do something, or even to do nothing, than to risk causing more harm than good." - [Primum non nocere, _Wikipedia_](https://en.wikipedia.org/wiki/Primum_non_nocere).
@@ -166,4 +166,4 @@ All of these actions are about _insurance_, which is about limiting downside-ris
If you are faced with a choice between extremes...
-This is just a few simple examples and actually it goes much further than this. In [Estimates](../estimating/Start.md) I apply this idea to software estimating, and the next article, [Coding Bets](Coding-Bets.md), I am going to show how knowledge of the [balance of risk](/thinking/Glossary.md#balance-of-risk) concept can inform the way we go about our day-to-day work as developers...
+These are just a few simple examples and actually it goes much further than this. In [Estimates](../estimating/Start) I apply this idea to software estimating, and in the next article, [Coding Bets](Coding-Bets), I am going to show how knowledge of the [balance of risk](/thinking/Glossary#balance-of-risk) concept can inform the way we go about our day-to-day work as developers...
diff --git a/docs/books/Risk-First-Second-Edition.md b/docs/books/Risk-First-Second-Edition.md
index 81bd7faff..7d6edb47c 100644
--- a/docs/books/Risk-First-Second-Edition.md
+++ b/docs/books/Risk-First-Second-Edition.md
@@ -44,4 +44,4 @@ This is where I will be adding blog materials discussing the content of the new
### Tell Us What You Think!
-Most of the material in the second edition book is published here on this website, so you can simply [start reading](overview/Start.md). If you have any feedback, please get in touch. What's missing? What doesn't make sense? What should be left out? Knowing this will be super-helpful and **you will be credited in the book along with all the other [Contributors](misc/Contributors.md).**
+Most of the material in the second edition book is published here on this website, so you can simply [start reading](overview/Start). If you have any feedback, please get in touch. What's missing? What doesn't make sense? What should be left out? Knowing this will be super-helpful and **you will be credited in the book along with all the other [Contributors](/overview/Contributors).**
diff --git a/docs/books/The-Menagerie.md b/docs/books/The-Menagerie.md
index 0a762fd03..93e37da84 100644
--- a/docs/books/The-Menagerie.md
+++ b/docs/books/The-Menagerie.md
@@ -13,7 +13,7 @@ sidebar_position: 1
# The Menagerie
-[Second Edition Coming Soon!](Risk-First-Second-Edition.md)
+[Second Edition Coming Soon!](Risk-First-Second-Edition)
The software development world is crowded with different practices, metrics, methodologies, tools and techniques. But what unites them all?
@@ -31,4 +31,4 @@ The book aims to develop a _Pattern Language_ for understanding software risk, a
## Read It Here
-"The Menagerie" contains all of the [Overview](overview/Start.md), [Thinking Risk-First](thinking/Start.md) and [Risks](thinking/Start.md) tracks from the Risk-First website, so you can read all the material on-line here if you want to.
+"The Menagerie" contains all of the [Overview](overview/Start), [Thinking Risk-First](thinking/Start) and [Risks](risks/Start) tracks from the Risk-First website, so you can read all the material on-line here if you want to.
diff --git a/docs/estimating/Analogies.md b/docs/estimating/Analogies.md
index 8ac3c0110..9496d4d92 100644
--- a/docs/estimating/Analogies.md
+++ b/docs/estimating/Analogies.md
@@ -16,14 +16,14 @@ tweet: yes
So far, this track of articles has tried to bring the problems of estimating software projects into focus by identifying different _estimation domains_ and analogies for each domain. Let's recap:
-- [Fill-The-Bucket](Fill-The-Bucket.md): This is the easiest domain to work in. All tasks are similar and uncorrelated. We can _extrapolate_ to figure out how much time the next _n_ units will take to do.
-- [Kitchen Cabinet](Kitchen-Cabinet.md): In this domain, there is _hidden work_. We don't know how much there might be. If we can break down tasks into smaller units, then by the _law of averages_ and the _central limit theorem_, we can apply some statistics to figure out when we might finish.
-- [Journeys](Journeys.md): In this domain, work is heterogeneous and interconnected. Different parts depend on each other, and a failure in one part might mean going back to the drawing board entirely. The way to estimate in this domain is to _know the landscape_ and to build in _buffers_.
-- [Fractals](Fractals.md): In this domain, [Parkinson's Law](/risks/Process-Risk.md#bureaucracy) is king. There is always more work to be done. The best thing we can do is try and apply ourselves to the _highest value_ work at any given point, and frequently refer back to reality to find out if we're building the right thing.
+- [Fill-The-Bucket](Fill-The-Bucket): This is the easiest domain to work in. All tasks are similar and uncorrelated. We can _extrapolate_ to figure out how much time the next _n_ units will take to do.
+- [Kitchen Cabinet](Kitchen-Cabinet): In this domain, there is _hidden work_. We don't know how much there might be. If we can break down tasks into smaller units, then by the _law of averages_ and the _central limit theorem_, we can apply some statistics to figure out when we might finish.
+- [Journeys](Journeys): In this domain, work is heterogeneous and interconnected. Different parts depend on each other, and a failure in one part might mean going back to the drawing board entirely. The way to estimate in this domain is to _know the landscape_ and to build in _buffers_.
+- [Fractals](Fractals): In this domain, [Parkinson's Law](/risks/Process-Risk#bureaucracy) is king. There is always more work to be done. The best thing we can do is try and apply ourselves to the _highest value_ work at any given point, and frequently refer back to reality to find out if we're building the right thing.
![Three Dimensions From Fill-The-Bucket](/img/estimates/dimensions.png)
-In Risk-First, one of the main messages has been that it's all about your [Internal Model](/thinking/Glossary.md#internal-model). If you have a good model of the world, then you're likely to be able to [Take Actions](/thinking/Glossary.md#taking-action) in the world that lead you to positions of lower risk.
+In Risk-First, one of the main messages has been that it's all about your [Internal Model](/thinking/Glossary#internal-model). If you have a good model of the world, then you're likely to be able to [Take Actions](/thinking/Glossary#taking-action) in the world that lead you to positions of lower risk.
So the main reason for identifying all these different problem domains for estimation has been to improve that internal model.
@@ -37,7 +37,7 @@ For the rest of this article, I'm going to go out on a limb, and describe, throu
![Journey Planning](/img/estimates/fill-journey.png)
-As we discussed in [Journeys](Journeys.md), there are plenty of problems in getting from A to B. But to help you we have:
+As we discussed in [Journeys](Journeys), there are plenty of problems in getting from A to B. But to help you we have:
- **Maps**: so we can plan our routes via those which already exist, and
- **Closeness**: the closer you are to your destination, the nearer you are to done (which is great for walking and driving, but tends to fall down somewhat when we have to wait for buses or make a detour to the airport).
@@ -106,4 +106,4 @@ So I find the _transport network_ analogy to be a useful one. But actually it t
Maintaining a transport network is a balancing act. In an ideal world, every destination would be connected with every other. In reality, we adopt hub-and-spoke architectures to minimise the cost of maintaining all the connections. In essence, turning our transport network into some kind of _hierarchy_.
-It's time to look at [Fixing Scrum](Fixing-Scrum.md).
+It's time to look at [Fixing Scrum](Fixing-Scrum).
diff --git a/docs/estimating/Fill-The-Bucket.md b/docs/estimating/Fill-The-Bucket.md
index 2e613767d..4e1da6577 100644
--- a/docs/estimating/Fill-The-Bucket.md
+++ b/docs/estimating/Fill-The-Bucket.md
@@ -89,7 +89,7 @@ This kind of measurement and estimating is the bread-and-butter of all kinds of
## Big-O
-Although software development tasks don't often fit into the [Fill-The-Bucket](Fill-The-Bucket.md) domain, lots of things in _data processing_ do. When talking about _algorithms_, we say fence-panel painting is $$O(n)$$. That is, the number of operations taken to complete the job is a linear function _**n**_, the number of fence panels.
+Although software development tasks don't often fit into the [Fill-The-Bucket](Fill-The-Bucket) domain, lots of things in _data processing_ do. When talking about _algorithms_, we say fence-panel painting is $$O(n)$$. That is, the number of operations taken to complete the job is a linear function of _**n**_, the number of fence panels.
The same is true for lots of other algorithms - scanning a linked-list, walking a tree, these are often $$O(n)$$.
@@ -102,11 +102,11 @@ There are plenty of algorithms too which have other efficiencies. Let's say yo
This is the [binary chop algorithm](https://en.wikipedia.org/wiki/Binary_search_algorithm), in which the remaining search-space _halves_ each time you go round step 2. Therefore, doubling the length of the dictionary only increases the number of operations by 1. So this algorithm takes $$O(log_2 n)$$ time: a 1,000-word dictionary needs around 10 steps, while a 1,000,000-word one needs only around 20.
-So [Fill-The-Bucket](Fill-The-Bucket.md) is _still_ an appropriate way of estimating for these algorithms. If you can figure out how long it takes to do steps 1 & 2, and how many times it'll have to do them, you can make a good estimate of the total time. That is, even though the time won't be _linear_, _extrapolation_ still works.
+So [Fill-The-Bucket](Fill-The-Bucket) is _still_ an appropriate way of estimating for these algorithms. If you can figure out how long it takes to do steps 1 & 2, and how many times it'll have to do them, you can make a good estimate of the total time. That is, even though the time won't be _linear_, _extrapolation_ still works.
## Estimating Risk
-Let's say we have a problem in the [Fill-The-Bucket](Fill-The-Bucket.md) domain. How can we use this to estimate risk?
+Let's say we have a problem in the [Fill-The-Bucket](Fill-The-Bucket) domain. How can we use this to estimate risk?
Let's set up a simple scenario, which we've agreed by contract with a client:
@@ -133,22 +133,22 @@ Are you a gambler? If you can just make everyone work a couple of extra hours'
This is a really contrived example, but actually this represents _most of_ how banks, insurance companies, investors etc. work out risk, simply multiplying the probability of something happening by what is lost when it does happen. But let's look at some criticisms of this:
-1. Aren't there other options? We might be able to work nights to get the project done, or hire more staff, or give bonuses for overtime _or something_. In fact, in [Pressure](/tags/Pressure.md) we'll look at some of these factors.
+1. Aren't there other options? We might be able to work nights to get the project done, or hire more staff, or give bonuses for overtime _or something_. In fact, in [Pressure](/tags/Pressure) we'll look at some of these factors.
2. We've actually got a project here which _degrades gracefully_. The costs of taking longer are clearly sign-posted in advance. In reality, the costs of missing a date might be much more disastrous: not getting your game completed for Christmas, missing a regulatory deadline, not being ready for an important demo - these are all-or-nothing outcomes where it's a [stark contrast between in-time and missing-the-bus](/tags/Deadline-Risk).
-3. Software development isn't generally isn't like this - as we will explore in the following sections, software development is _not_ in the [Fill-The-Bucket](Fill-The-Bucket.md) domain, generally.
+3. Software development generally isn't like this - as we will explore in the following sections, software development is _not_ in the [Fill-The-Bucket](Fill-The-Bucket) domain, generally.
## Failure Modes
The problem is, because this approach works well in insurance and operations and other places, there is a _strong tendency_ for project managers to want to apply it to software development.
-But there are lots of ways [Fill-The-Bucket](Fill-The-Bucket.md) goes wrong, and this happens when you are estimating in scenarios that violate the original conditions:
+But there are lots of ways [Fill-The-Bucket](Fill-The-Bucket) goes wrong, and this happens when you are estimating in scenarios that violate the original conditions:
1. The work can be measured in units.
2. Each unit is pretty much the same as another.
3. Each unit is _independent_ of the others.
-In [the financial crisis](/risks/Risk-Landscape.md#example-the-financial-crisis), we saw how estimates of risk failed because they violated point 3.
+In [the financial crisis](/risks/Risk-Landscape#example-the-financial-crisis), we saw how estimates of risk failed because they violated point 3.
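To make the probability-times-loss arithmetic above concrete, here is a minimal sketch. The outcomes and figures are invented for illustration; they are not taken from the contract scenario in the article:

```python
# Insurance-style risk arithmetic, as described above: multiply the
# probability of each outcome by what is lost when it happens, then sum.
# All probabilities and penalty figures below are hypothetical.
outcomes = [
    ("delivered on time", 0.70, 0),        # no loss
    ("one week late",     0.20, 5_000),    # assumed penalty clause
    ("one month late",    0.10, 25_000),   # assumed lost contract value
]

expected_loss = sum(p * loss for _, p, loss in outcomes)
print(f"Expected loss from overrun: {expected_loss:,.0f}")  # 3,500
```

Note that this arithmetic leans entirely on condition 3 above: if the outcomes stop being independent (as in the financial crisis example), the numbers it produces can be badly wrong.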
-Let's have a look at [what happens when we relax these constraints](Kitchen-Cabinet.md).
\ No newline at end of file
+Let's have a look at [what happens when we relax these constraints](Kitchen-Cabinet).
\ No newline at end of file
diff --git a/docs/estimating/Fixing-Scrum.md b/docs/estimating/Fixing-Scrum.md
index e5efeb4b6..026dade17 100644
--- a/docs/estimating/Fixing-Scrum.md
+++ b/docs/estimating/Fixing-Scrum.md
@@ -29,7 +29,7 @@ Work in Scrum is done within periods of time called _Sprints_. Each sprint ends
> "The goal of this activity is to inspect and adapt the product being built... Everyone in attendance gets clear visibility into what is occurring and has an opportunity to help guide the forthcoming development to ensure that the most business-appropriate solution is created." - Essential Scrum (p26), _Rubin_
-In Risk-First, we tend to call this validation step [Meeting Reality](/tags/Meeting-Reality): you are creating a [feedback loop](/thinking/Cadence.md) in order to minimise risk. What is the risk you are minimising? Essentially, we are trying to reduce the risk of the developers _building the wrong thing_, which could be due to misunderstanding of requirements, or perfectionism, or because the piece of work was ill-conceived in the first place. In Risk-First, the risk of building the wrong thing is called [Feature Risk](/tags/Feature-Risk).
+In Risk-First, we tend to call this validation step [Meeting Reality](/tags/Meeting-Reality): you are creating a [feedback loop](/thinking/Cadence) in order to minimise risk. What is the risk you are minimising? Essentially, we are trying to reduce the risk of the developers _building the wrong thing_, which could be due to misunderstanding of requirements, or perfectionism, or because the piece of work was ill-conceived in the first place. In Risk-First, the risk of building the wrong thing is called [Feature Risk](/tags/Feature-Risk).
![Feature Risk mitigated by Meeting Reality](/img/generated/estimating/scrum/scrum1.svg)
@@ -37,7 +37,7 @@ The above diagram demonstrates us mitigating [Feature Risk](/tags/Feature-Risk)
![Schedule Risk for Stakeholders](/img/generated/estimating/scrum/scrum2.svg)
-And that risk is called [Schedule Risk](/tags/Schedule-Risk). It is shown in the diagram above: the _more feedback_ you are receiving, the more _interruption_ you are causing to the people giving feedback. So you are trying to [Balance Risk](../bets/Purpose-Development-Team.md): while having a _daily_ review for a software project involving all stakeholders would be over-kill and waste a lot of everyone's time, having a _yearly_ review would be too-long a feedback loop. Balancing risk here means doing the feedback loop _just often enough_.
+And that risk is called [Schedule Risk](/tags/Schedule-Risk). It is shown in the diagram above: the _more feedback_ you are receiving, the more _interruption_ you are causing to the people giving feedback. So you are trying to [Balance Risk](../bets/Purpose-Development-Team): while having a _daily_ review for a software project involving all stakeholders would be overkill and waste a lot of everyone's time, having a _yearly_ review would be too long a feedback loop. Balancing risk here means doing the feedback loop _just often enough_.
## Time-Boxing To The Rescue
Nevertheless, time-boxing is a foundational principle of Scrum.
So in order to ge
Now, although the above diagram _makes sense_ (estimating as a mitigation to coordination issues), by this point in this track of articles we should be wary of our ability to estimate development tasks _at all_:
- **Sometimes, tasks have a [Fill-The-Bucket](Fill-The-Bucket.md) nature.** If you have a test plan to run through on six different platforms, and last week doing a single platform took two hours, then your estimate of two days for the lot is probably about right.
+ - **Sometimes, tasks have a [Fill-The-Bucket](Fill-The-Bucket) nature.** If you have a test plan to run through on six different platforms, and last week doing a single platform took two hours, then your estimate of two days for the lot is probably about right.
- **Sometimes, it's about finesse.** With [Fractal-Style](Fractals.md) problems you know that three days spent on icon design will yield better results than one day, but either way, there will be a set of icons to look at.
+ - **Sometimes, it's about finesse.** With [Fractal-Style](Fractals) problems you know that three days spent on icon design will yield better results than one day, but either way, there will be a set of icons to look at.
- **But sometimes, problems can telescope, as we discussed in [Kitchen Cabinets](Kitchen-Cabinet.md).** You start thinking the problem of connecting A to B is simple, but then you realise it involves a call to C and to redesign the whole of D and introduce a new micro-service E... your estimate is toast.
+ - **But sometimes, problems can telescope, as we discussed in [Kitchen Cabinets](Kitchen-Cabinet).** You start thinking the problem of connecting A to B is simple, but then you realise it involves a call to C and to redesign the whole of D and introduce a new micro-service E... your estimate is toast.
- **Finally, sometimes, you'll have a problem that's like a [Journey](Journeys.md).** Maybe you're trying to set up a new deployment pipeline? The first step, finding servers turned out to be easy, but now you're trying to license the software to run on them, and it's taking longer. The journey you have to take is _known_, but the steps along it are all different. Will you hit the Sprint Review on time? It's super-hard to say.
+ - **Finally, sometimes, you'll have a problem that's like a [Journey](Journeys).** Maybe you're trying to set up a new deployment pipeline? The first step, finding servers turned out to be easy, but now you're trying to license the software to run on them, and it's taking longer. The journey you have to take is _known_, but the steps along it are all different. Will you hit the Sprint Review on time? It's super-hard to say.
Given that estimating is so problematic, does it make any sense to try to mitigate our [Coordination Risk](/tags/Coordination-Risk) using estimates?
@@ -97,7 +97,7 @@ Perhaps this improves estimating, but for me there are two key problems with thi
## 10X
-I've written before about [how being a "10X Developer" largely comes down to having already visited the terrain](Estimates.md). This implies that _at different times_ we can all be either 1X or 10X Developers.
+I've written before about [how being a "10X Developer" largely comes down to having already visited the terrain](/estimating/Start). This implies that _at different times_ we can all be either 1X or 10X Developers.
But with the power of hindsight, it's clear that at different times, on different projects, _whole teams_ can be either 1X or 10X.
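The gap between the first and third bullet points above can be made concrete with a small simulation. This is a sketch only; the task-time distributions are assumptions chosen for illustration, not measurements from a real project:

```python
import random

def fill_the_bucket():
    # Six similar, independent test runs of roughly two hours each.
    return sum(random.gauss(2.0, 0.25) for _ in range(6))

def kitchen_cabinet(chance_of_nesting=0.5):
    # A task that may telescope: each step can reveal another step.
    hours = random.gauss(2.0, 0.25)
    while random.random() < chance_of_nesting:
        hours += random.gauss(2.0, 0.25)
    return hours

runs = 10_000
for name, task in (("bucket", fill_the_bucket), ("cabinet", kitchen_cabinet)):
    times = sorted(task() for _ in range(runs))
    median, p95 = times[runs // 2], times[int(runs * 0.95)]
    print(f"{name}: median {median:.1f}h, 95th percentile {p95:.1f}h")
```

For the bucket task the 95th percentile sits just above the median, so a single estimate is meaningful; for the telescoping task it can be several multiples of the median, which is why such estimates are so often toast.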
@@ -131,4 +131,4 @@ But actually, we're now _three degrees_ away from the original problem of **tryi
If the thesis that "90% of everything is waste" is true, then Planning Poker is _also_ a waste, and we should devise a planning process to avoid this.
-In the [next article](Risk-First-Analysis.md) we'll look at how we might do that.
+In the [next article](Risk-First-Analysis) we'll look at how we might do that.
diff --git a/docs/estimating/Fractals.md b/docs/estimating/Fractals.md
index e6fa10524..077315bae 100644
--- a/docs/estimating/Fractals.md
+++ b/docs/estimating/Fractals.md
@@ -59,7 +59,7 @@ If your problem doesn't have an exact, defined end-goal, there is simply no way
![Opportunity on the Risk Landscape](/img/estimates/fractal1.png)
-You might have some idea (selling hats for dogs?) of some interesting area of value on the [Risk Landscape](/thinking/Glossary.md#risk-landscape) that you want to occupy, as shown in the above diagram.
+You might have some idea (selling hats for dogs?) of some interesting area of value on the [Risk Landscape](/thinking/Glossary#risk-landscape) that you want to occupy, as shown in the above diagram.
Your best bet is to try and colonise the area of value _as fast as possible_ by using as much readily available software as possible.
@@ -69,7 +69,7 @@ Maybe version one looks something like the diagram above: a few hastily-assemble
![Second Version](/img/estimates/fractal3.png)
-Releasing the first version might fill in some of the blanks, and show you more detail on the [Risk Landscape](/thinking/Glossary.md#risk-landscape). Effectively showing you a more detailed view of the coastline. Feedback from users will provide you with a better understanding of exactly what this fractal problem-space looks like.
+Releasing the first version might fill in some of the blanks, and show you more detail on the [Risk Landscape](/thinking/Glossary#risk-landscape). Effectively showing you a more detailed view of the coastline. Feedback from users will provide you with a better understanding of exactly what this fractal problem-space looks like.
![Third Version](/img/estimates/fractal4.png)
@@ -77,8 +77,8 @@ As you go on [Meeting Reality](/tags/Meeting-Reality), the shape of the problem
Is it possible to estimate problems in the Fractal Shape domain? The best you might be able to do is to match two competing objectives:
-- Building Product: By building functionality you head towards your [Goal](/thinking/Glossary.md#goal) on the [Risk Landscape](/thinking/Glossary.md#risk-landscape). But how do you know this is the right goal?
-- [Meeting Reality](/tags/Meeting-Reality): By putting your product "out there" you find your customers and your niche in the market, and you explore the [Risk Landscape](/thinking/Glossary.md#risk-landscape). But this takes time and effort away from _building product_.
+- Building Product: By building functionality you head towards your [Goal](/thinking/Glossary#goal) on the [Risk Landscape](/thinking/Glossary#risk-landscape). But how do you know this is the right goal?
+- [Meeting Reality](/tags/Meeting-Reality): By putting your product "out there" you find your customers and your niche in the market, and you explore the [Risk Landscape](/thinking/Glossary#risk-landscape). But this takes time and effort away from _building product_.
@@ -88,11 +88,11 @@ The fractal nature of many software development tasks is both a blessing and a c > "Lets explore this point more by means of an extended analogy. Suppose that you wanted to start a new business as a yachting captain... This is in many ways analogous to when a startup company decides that they want to serve the fortune 500, companies that have petabytes and beyond of data. However, you as a startup founder have to operate lean, and you are only willing to spend $10,000 on a boat. If you were to walk up to the owner of the multi-million dollar yacht and say, I’ll give you $10,000 for that boat, you would be laughed off the dock. " - [Kyle Prifogle, _Dear Startup_](https://kyleprifogle.com/dear-startup/) -Buying yachts is _not_ in the Fractal problem space. It's much more [Fill-The-Bucket](Fill-The-Bucket.md): more money means more yacht. So, it's not a great analogy. But the point is that the _expectation_ is for a value-miracle to occur, simply by adopting the practice of MVP or agile development. +Buying yachts is _not_ in the Fractal problem space. It's much more [Fill-The-Bucket](Fill-The-Bucket): more money means more yacht. So, it's not a great analogy. But the point is that the _expectation_ is for a value-miracle to occur, simply by adopting the practice of MVP or agile development. ## Where To Find Fractal Spaces -Not all software development problems are squarely in the [Fractal](Fractals.md) space, but those that are are generally tasks like building user interfaces, games, interactivity and usability. This is where the curse comes in: it's _hard to know what to build_ and _you are never done_. +Not all software development problems are squarely in the [Fractal](Fractals) space, but those that are are generally tasks like building user interfaces, games, interactivity and usability. This is where the curse comes in: it's _hard to know what to build_ and _you are never done_. Although there are some high-profile wins with these types of problems, generally they are _hard_. @@ -106,7 +106,7 @@ Let's look at the conclusions we reached in [Boundary Risk](/tags/Boundary-Risk) If we accept this problem of the fractal nature of human desire, then we have to contend with the fact that our software systems are always going to get continually more complex to serve it. -So that's _four_ different styles of estimating. Let's try and put these together in [Analogies](Analogies.md) +So that's _four_ different styles of estimating. Let's try and put these together in [Analogies](Analogies) diff --git a/docs/estimating/Interference-Checklist.md b/docs/estimating/Interference-Checklist.md index 2c1a7f7bd..4344379f5 100644 --- a/docs/estimating/Interference-Checklist.md +++ b/docs/estimating/Interference-Checklist.md @@ -15,7 +15,7 @@ hide_table_of_contents: true # Interference Checklist -Here is an example "Interference Checklist", which you can use to estimate the risk on your stories / tasks. For an explanation of how this works, check out the previous article [On Story Points](On-Story-Points.md). +Here is an example "Interference Checklist", which you can use to estimate the risk on your stories / tasks. For an explanation of how this works, check out the previous article [On Story Points](On-Story-Points). This is just meant to be used as a starting point - feel free to adapt this to the specifics of your own projects and environments. 
diff --git a/docs/estimating/Journeys.md b/docs/estimating/Journeys.md
index 85c393b1e..f9dc19f17 100644
--- a/docs/estimating/Journeys.md
+++ b/docs/estimating/Journeys.md
@@ -14,9 +14,9 @@ tweet: yes
# Journeys
-A third way to conceive of software development is as a _journey_ on the [Risk Landscape](/thinking/Glossary.md#risk-landscape). For example, in a startup we might start at a place where we have no product, no customers and some funding. We go on a journey of discovery and end up in a place where hopefully we _have_ a product, customers and an income stream.
+A third way to conceive of software development is as a _journey_ on the [Risk Landscape](/thinking/Glossary#risk-landscape). For example, in a startup we might start at a place where we have no product, no customers and some funding. We go on a journey of discovery and end up in a place where hopefully we _have_ a product, customers and an income stream.
-There are many ways we could do this journey, and many destinations. The idea of "pivoting" your startup idea feels very true to the [Journey](Journeys.md) analogy, because that literally means changing direction. _The place where we were headed sucked, lets go over here_.
+There are many ways we could do this journey, and many destinations. The idea of "pivoting" your startup idea feels very true to the [Journey](Journeys) analogy, because that literally means changing direction. _The place where we were headed sucked, let's go over here_.
What does this journey look like in Risk-First terms?
@@ -24,7 +24,7 @@
As this diagram shows, at the start we have plenty of [Feature Fit Risk](/tags/Feature-Fit-Risk): if we have _no_ product, then it definitely doesn't fit our customers' needs! Also we have some amount of [Funding Risk](/tags/Funding-Risk), as at some point the money will run out.
-After that, we use every trick in the book called "product development" to get to a new place on the [Risk Landscape](/thinking/Glossary.md#risk-landscape). This place (hopefully) will have a better risk profile than the one we came from.
+After that, we use every trick in the book called "product development" to get to a new place on the [Risk Landscape](/thinking/Glossary#risk-landscape). This place (hopefully) will have a better risk profile than the one we came from.
If we're successful then yes, we'll have the [Operational Risk](/tags/Operational-Risk) of running a business, but hopefully we'll be in a better position than we started.
## Journey Risks
-In the software development past, _building it yourself_ was the only way to get anything done. It was like London _before road and rail_. Nowadays, you are bombarded with choices. It's actually _worse than London_ because it's not even a two-dimensional geographic space and there are multitudes of different routes and acceptable destinations. Journey planning on the software [Risk Landscape](/thinking/Glossary.md#risk-landscape) is an optimisation problem _par excellence_.
+In the software development past, _building it yourself_ was the only way to get anything done. It was like London _before road and rail_. Nowadays, you are bombarded with choices. It's actually _worse than London_ because it's not even a two-dimensional geographic space and there are multitudes of different routes and acceptable destinations. Journey planning on the software [Risk Landscape](/thinking/Glossary#risk-landscape) is an optimisation problem _par excellence_.
How can we think about estimating in such a domain? There are clearly a number of factors to come into play:
-1. For individual _parts_ of the journey, we could use a [Fill-The-Bucket](Fill-The-Bucket.md) approach, and look at things like _expected travel time_, _mean travel time_ or _reliability_.
+1. For individual _parts_ of the journey, we could use a [Fill-The-Bucket](Fill-The-Bucket) approach, and look at things like _expected travel time_, _mean travel time_ or _reliability_.
2. Chances are, we're going to need to join up several different pieces of transport: maybe some on-foot, some by road, some by rail.
3. It's a really good idea to build in buffers if you're relying on services that are infrequent (like flights or trains).
4. Cost is a factor.
@@ -98,7 +98,7 @@ This should look a _fair bit_ like software architecture: often, we sketch out
At the other extreme, if we're estimating a single story, we can break down work like this. For development tasks which _look like a journey_, this is what I'm doing. _"If I build the Foo component using Spring and the Bar component in HTML, I can join them together with some Java code..."_
-Further, as we solve problems in our code-base, we break them down into smaller and smaller parts. (We'll come back to this in [Hierarchies](/complexity/Hierarchies.md).)
+Further, as we solve problems in our code-base, we break them down into smaller and smaller parts.
So **Journey Estimating** is three things all at once:
## Meta Analysis
-So, we now have a third type of estimating. Again, very different from the [first](Fill-The-Bucket.md) [two](Kitchen-Cabinet.md). But again, there are obvious similarities with what we do in the world of software, because it's so easy to _go the wrong way_ or _overlook a short-cut_.
+So, we now have a third type of estimating. Again, very different from the [first](Fill-The-Bucket) [two](Kitchen-Cabinet). But again, there are obvious similarities with what we do in the world of software, because it's so easy to _go the wrong way_ or _overlook a short-cut_.
I've been on projects where a team has toiled long-and-hard to get a database working, only to find out that there was a better, different one available that would do the job for them with _way less effort_. I've watched people struggle to build their own languages and compilers, only to realise later that actually all they needed was to use the ones that were there already.
@@ -127,6 +127,6 @@ Estimating then becomes the art of:
To achieve point (4), once an estimate is in place, the Risk-First way to proceed would then be to tackle each part in order, from the riskiest and most-likely-to-fail, to the most reliable. This approach front-loads finding out if the plan is suspect.
-But, there is _yet another_ way of looking at what's needed to estimate: [Fractals](Fractals.md).
+But, there is _yet another_ way of looking at what's needed to estimate: [Fractals](Fractals).
\ No newline at end of file
diff --git a/docs/estimating/Kitchen-Cabinet.md b/docs/estimating/Kitchen-Cabinet.md
index bb461a59f..f003b7cb5 100644
--- a/docs/estimating/Kitchen-Cabinet.md
+++ b/docs/estimating/Kitchen-Cabinet.md
@@ -29,9 +29,9 @@ Imagine a scenario where you're helping a friend pack up their kitchen:
How long should you estimate for the job?
(The answer is below)
-This was suggested in a [Hacker News](https://news.ycombinator.com) comment discussing software estimation, and struck a chord with many readers. It's clear that we are no longer in the [Fill-The-Bucket](Fill-The-Bucket.md) domain anymore; our original intuitions about how long things might take are not going to work here.
+This was suggested in a [Hacker News](https://news.ycombinator.com) comment discussing software estimation, and struck a chord with many readers. It's clear that we are no longer in the [Fill-The-Bucket](Fill-The-Bucket) domain anymore; our original intuitions about how long things might take are not going to work here.
-As a developer, this 'feels' more real to me than [Fill-The-Bucket](Fill-The-Bucket.md). _Any_ task I take on has an outside chance of telescoping into something _much worse_. Here's a recent example:
+As a developer, this 'feels' more real to me than [Fill-The-Bucket](Fill-The-Bucket). _Any_ task I take on has an outside chance of telescoping into something _much worse_. Here's a recent example:
- I wanted to test out a CSS change to my website. _1 hour?_
- But in order to avoid wrecking the live version, I would need to test this offline, with [Jekyll](https://jekyllrb.com). _2 hours?_
@@ -54,13 +54,13 @@ The above chart simulates the kitchen cabinet scenario. Have a play and see how
- You have _thirty_ cabinets in the original kitchen?
- You have a _single_ cabinet in the original kitchen, and say a .8 chance-of-nesting?
-When the number of initial cabinets is high, we are closer to the [Fill-The-bucket](Fill-The-Bucket.md) world, with it's normal distribution, and variance-around-a-mean.
+When the number of initial cabinets is high, we are closer to the [Fill-The-Bucket](Fill-The-Bucket) world, with its normal distribution, and variance-around-a-mean.
But when the number of initial cabinets is low, the distribution is "long-tailed" and tends towards the [Exponential Distribution](https://en.wikipedia.org/wiki/Exponential_distribution), which works in a way similar to [radioactive decay](https://en.wikipedia.org/wiki/Radioactive_decay).
We might best be able to talk about moving kitchens in terms of their half-lives. That is, given a bunch of infinity-cabinets, we could say how long it would usually take for _half_ of them to be completed. Then, it'll be the same again for the next half, and so on.
-Whereas [Fill-The-Bucket](Fill-The-Bucket.md) was defined with a _mean_ and _variance_, the exponential distribution is modelled with a single parameter, lambda (λ), which is the rate of decay.
+Whereas [Fill-The-Bucket](Fill-The-Bucket) was defined with a _mean_ and _variance_, the exponential distribution is modelled with a single parameter, lambda (λ), which is the rate of decay (giving a half-life of ln 2 / λ).
@@ -77,7 +77,7 @@ Let's assume that the exponential distribution _does_ model software development
With any estimate, there are risks in both under- and over-estimating:
- - **Too Long**: In estimating too much time, you might not be given the work or your business might [miss the opportunity in the marketplace](/risks/Scarcity-Risk.md#opportunity-risk). A too cautious risk might doom a potentially successful project before it has even started.
+ - **Too Long**: In estimating too much time, you might not be given the work or your business might [miss the opportunity in the marketplace](/risks/Scarcity-Risk#opportunity-risk). A too-cautious estimate might doom a potentially successful project before it has even started.
- **Too Short**: If you estimate too little time, you might miss important coordinating dates with your marketing team, or miss the Christmas window, or run out of "runway".

@@ -128,7 +128,7 @@ If the estimate is accepted, the supplier's [Funding Risk](/tags/Funding-Risk) i

If the supplier is short on opportunities or funds, there is a tendency to under-estimate. That's because the [Feature Risk](/tags/Feature-Risk) is a problem for the supplier _in the future_, whereas their [Funding Risk](/tags/Funding-Risk) is a problem _right now_.

-You can often see suppliers under-bid on projects because of this future discounting, which we discussed before in [Evaluating Risk](/thinking/Evaluating-Risk.md#discounting).
+You can often see suppliers under-bid on projects because of this future discounting, which we discussed before in [Evaluating Risk](/thinking/Evaluating-Risk#discounting).

This analysis also suggests something else: the process of giving and accepting estimates _transfers risk_. This is a key point which we'll return to later.

@@ -136,7 +136,7 @@ This analysis also suggests something else: the process of giving and acceptin

Conversely, too-late risk accrues only _after_ the delivery date has passed. Like too-early risk, there is probably a maximal limit on this too, which occurs at the point the project is cancelled due to lack of funds!

-The problem with projects in the [Kitchen Cabinet](Kitchen-Cabinet.md) domain is that _elapsed time is no indication of remaining time_. The exponential distribution is exactly the same shape at every point in time (we're dealing with half-lives, remember?).
+The problem with projects in the [Kitchen Cabinet](Kitchen-Cabinet) domain is that _elapsed time is no indication of remaining time_. The exponential distribution is exactly the same shape at every point in time (we're dealing with half-lives, remember?).

This means that clients often keep projects running for far longer than they should, assuming success is just around the corner. This is an example of the [Sunk Cost Fallacy](https://en.wikipedia.org/wiki/Sunk_cost).

@@ -146,7 +146,7 @@ There is an alternative to too-early or too-late risk. You can always choose to

Then, instead of worrying about [Scarcity Risks](/tags/Scarcity-Risk), you are letting [Feature Risk](/tags/Feature-Risk) vary to take up the slack.

-So far, we've seen two kinds of estimate: [Fill-The-Bucket](Fill-The-Bucket.md) and [Kitchen-Cabinet](Kitchen-Cabinet.md). Now, it's time to review a third - estimating [Journey Style](Journeys.md), and looking at how we can minimise [Feature Risk](/tags/Feature-Risk) within an available budget.
+So far, we've seen two kinds of estimate: [Fill-The-Bucket](Fill-The-Bucket) and [Kitchen-Cabinet](Kitchen-Cabinet). Now, it's time to review a third - estimating [Journey Style](Journeys), and looking at how we can minimise [Feature Risk](/tags/Feature-Risk) within an available budget.
\ No newline at end of file
diff --git a/docs/estimating/On-Story-Points.md b/docs/estimating/On-Story-Points.md
index d65394d0d..3afe78d73 100644
--- a/docs/estimating/On-Story-Points.md
+++ b/docs/estimating/On-Story-Points.md
@@ -17,9 +17,9 @@ tweet: yes

In Scrum, the idea of a _sprint_ is well named: as a team, you are trying to complete work on a whole bunch of work items (stories) before a deadline.
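Before leaving the Kitchen-Cabinet model behind, it may help to make its long-tailed behaviour concrete. Below is a minimal simulation sketch (Python; it assumes each cabinet takes an hour to pack, and its mechanics are an illustrative guess at the chart's model rather than the article's actual code):

```python
import random

def hours_to_finish(cabinets: int, nesting_chance: float) -> int:
    """Each cabinet takes an hour to pack, but opening it
    sometimes reveals another, nested cabinet of work."""
    hours = 0
    while cabinets > 0:
        hours += 1
        cabinets -= 1
        if random.random() < nesting_chance:
            cabinets += 1  # hidden work discovered
    return hours

# A single cabinet with a .8 chance-of-nesting gives a long-tailed,
# exponential-style distribution: the median ("half-life") sits far
# below the worst cases.
runs = sorted(hours_to_finish(1, 0.8) for _ in range(10_000))
print("median:", runs[5_000], "95th percentile:", runs[9_500])
```

Re-running it with thirty initial cabinets pulls the totals back towards a mean-and-variance shape - the Fill-The-Bucket world again.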
-In a previous article in this series, [Fixing Scrum](Fixing-Scrum.md) we took against the idea of fixed time-boxes generally, because they _introduce more problems than they solve_: as we've seen [in](Journeys.md) [multiple](Kitchen-Cabinet.md) [articles](Fractals.md), estimating is _hard_, trying to use it as a solution to anything is misguided. So you should only estimate when you absolutely need to.
+In a previous article in this series, [Fixing Scrum](Fixing-Scrum), we took against the idea of fixed time-boxes generally, because they _introduce more problems than they solve_: as we've seen [in](Journeys) [multiple](Kitchen-Cabinet) [articles](Fractals), estimating is _hard_, and trying to use it as a solution to anything is misguided. So you should only estimate when you absolutely need to.

-Nevertheless, _knowing how long things will take_ is really the whole purpose of this track on [Estimating](Start.md), and sometimes unavoidable [deadlines](/tags/Deadline-Risk) make it necessary.
+Nevertheless, _knowing how long things will take_ is really the whole purpose of this track on [Estimating](Start), and sometimes unavoidable [deadlines](/tags/Deadline-Risk) make it necessary.

In Scrum, the Estimation process is based on the concept of _story points_, so that will be the focus here, although essentially this discussion is relevant to anyone estimating software development.

@@ -37,7 +37,7 @@ At a basic level, to calculate the number of story points for an item of work, y

- **A Project**: Since the story will be embedded in the context of a project, this is an important input. On some projects, work is harder to complete than on others. Things like the choice of languages or architectures have an effect, as do the systems and people the project needs to interface with.

-- **Team Experience**: Over time, the team become more experienced both working with each other and with the project itself. They learn the [Risk Landscape](/risks/Risk-Landscape.md) and understand where the pitfalls lie and how to avoid them.
+- **Team Experience**: Over time, the team become more experienced both working with each other and with the project itself. They learn the [Risk Landscape](/risks/Risk-Landscape) and understand where the pitfalls lie and how to avoid them.

## Calculating Story Points

@@ -75,7 +75,7 @@ In his essay, "Choose Boring Technology", Dan McKinley describes a theoretical i

> "Let’s say every company gets about three innovation tokens. You can spend these however you want, but the supply is fixed for a long while... If you choose to write your website in NodeJS, you just spent one of your innovation tokens. If you choose to use MongoDB, you just spent one of your innovation tokens. If you choose to use service discovery tech that’s existed for a year or less, you just spent one of your innovation tokens... there are many choices of technology that are boring and good, or at least good enough. MySQL is boring. Postgres is boring. PHP is boring. " - [Choose Boring Technology, _Dan McKinley_](https://mcfunley.com/choose-boring-technology)

-What he's driving at here is of course _risk_: with shiny (i.e. non-boring) technology, you pick up lots of [Hidden Risk](/thinking/Glossary.md#hidden-risk). Innovation Tokens are paying for time spent dealing with [Hidden Risk](/thinking/Glossary.md#hidden-risk).
Dan's contention is that not only do you have the up-front costs of integrating the shiny technology, but you also have a long tail of extra running costs, as you have to manage the new technology through to maturity in your environment. +What he's driving at here is of course _risk_: with shiny (i.e. non-boring) technology, you pick up lots of [Hidden Risk](/thinking/Glossary#hidden-risk). Innovation Tokens are paying for time spent dealing with [Hidden Risk](/thinking/Glossary#hidden-risk). Dan's contention is that not only do you have the up-front costs of integrating the shiny technology, but you also have a long tail of extra running costs, as you have to manage the new technology through to maturity in your environment. Put this way, couldn't story points be some kind of "Innovation Token"? @@ -87,14 +87,14 @@ Sometimes, developers provide _tolerances_ around their story-point estimates, " Another problem in Story Point estimation is bootstrapping. It is expected that, to start with, estimates made by inexperienced teams, or inexperienced team-members, are going to be poor. The expectation is also that over time, through domain experience, the estimates improve. This seems to happen _somewhat_ in my experience. But nowhere near enough. -A common complaint when tasks overrun is that the team were blind-sided by [Hidden Risk](/thinking/Glossary.md#hidden-risk), but in my experience this boils down to two things: +A common complaint when tasks overrun is that the team were blind-sided by [Hidden Risk](/thinking/Glossary#hidden-risk), but in my experience this boils down to two things: - Genuine hidden risk, that no-one could have foreseen (e.g. a bug in a device driver that no-one knew about). - Fake hidden risks, that could have been foreseen with the appropriate up-front effort (e.g. a design approval might take a bit longer than expected due to absence). Couldn't we bootstrap the estimation process by providing an "Interference Checklist" for story points, based on the things that commonly throw spanners into the works? -Below, I've sketched out a small section of what this might look like. The [next article](Interference-Checklist.md) contains a more complete Interference Checklist that I've put together and you can modify for your own purposes. +Below, I've sketched out a small section of what this might look like. The [next article](Interference-Checklist) contains a more complete Interference Checklist that I've put together and you can modify for your own purposes. | **Area** | **Concern** | **Notes** | **Point Value** | | -------------------------------------------- | --------------------------------------------------------------------------------- | --------- | --------------- | @@ -109,7 +109,7 @@ Below, I've sketched out a small section of what this might look like. The [nex By starting discussions with an Interference Checklist, we can augment the "play planning poker" process by _prompting people on things to think about_, like "Do we know what done looks like here?", "Is this going to affect some of our existing functionality?", "How are we going to get it tested?". -A Checklist is a good way of asking questions in order that we can manage risk early on. It's all about turning a [Hidden Risk](/thinking/Glossary.md#hidden-risk) into one we've thought about. +A Checklist is a good way of asking questions in order that we can manage risk early on. It's all about turning a [Hidden Risk](/thinking/Glossary#hidden-risk) into one we've thought about. 
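As a sketch of how such a checklist might feed the estimate, the fragment below simply sums the point values of whichever concerns the team agrees apply (Python; the concerns and their weights are invented placeholders, not the actual checklist from the next article):

```python
# Hypothetical interference checklist: concern -> point value.
CHECKLIST = {
    "unclear definition of done": 3,
    "touches existing functionality": 2,
    "needs sign-off from another team": 2,
    "unfamiliar technology involved": 5,
    "no obvious way to test it": 3,
}

def story_points(base_guess: int, applicable: list) -> int:
    """Add checklist interference on top of the team's base guess."""
    return base_guess + sum(CHECKLIST[concern] for concern in applicable)

# A gut-feel "two-pointer" that needs sign-off and is hard to test:
print(story_points(2, ["needs sign-off from another team",
                       "no obvious way to test it"]))  # prints 7
```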
If the team runs through this list together, and then decides the task is a "five-story-pointer", then surely that is a better, more rigorous approach than just plucking a number out of the air, as planning poker suggests.

@@ -158,11 +158,11 @@ Note that above I just show a small sample of the full Interference Checklist.

## Summing Up

-In my view, the poker planning / story point process fails to produce a reliable estimate. Mainly, this is not entirely the fault of story points - estimating software development tasks is akin to the [Halting Problem](https://en.wikipedia.org/wiki/Halting_problem). In this series of articles, we've looked at how software can at times have [Fractal Complexity](Fractals.md), [be like a journey of discovery](Journeys.md) or [have nested layers of complexity](Kitchen-Cabinet.md) - it is _hard_.
+In my view, the poker planning / story point process fails to produce a reliable estimate. In fairness, this is not entirely the fault of story points - estimating software development tasks is akin to the [Halting Problem](https://en.wikipedia.org/wiki/Halting_problem). In this series of articles, we've looked at how software can at times have [Fractal Complexity](Fractals), [be like a journey of discovery](Journeys) or [have nested layers of complexity](Kitchen-Cabinet) - it is _hard_.

Nevertheless, experience shows us that there are common _modes of failure_ for software estimates: things we try to estimate and fail at. Having an Interference Checklist and Risk Budgets addresses that deficit.

-The next article is a complete [Interference Checklist](Interference-Checklist.md) that you can take and try out on your own projects.
+The next article is a complete [Interference Checklist](Interference-Checklist) that you can take and try out on your own projects.

diff --git a/docs/estimating/Risk-First-Analysis.md b/docs/estimating/Risk-First-Analysis.md
index d759d9eef..e27b1c3e0 100644
--- a/docs/estimating/Risk-First-Analysis.md
+++ b/docs/estimating/Risk-First-Analysis.md
@@ -18,7 +18,7 @@ tweet: yes

# Risk-First Analysis: An Example

-The previous article, [Fixing Scrum](Fixing-Scrum.md), examined Scrum's idea of "Sprints" and concluded:
+The previous article, [Fixing Scrum](Fixing-Scrum), examined Scrum's idea of "Sprints" and concluded:

- The main purpose of a Sprint is to ensure there is a **feedback loop**. Every two weeks (or however long the Sprint is) we have a Sprint Review, and review the code that has been completed during the Sprint. In Risk-First parlance, we call this [Meeting Reality](/tags/Meeting-Reality). It is the process of _testing your ideas against reality_ to make sure they stand up.

@@ -28,19 +28,19 @@ The previous article, [Fixing Scrum](Fixing-Scrum.md), examined Scrum's idea of

![Scrum: Consequences Of Time-Boxing](/img/generated/estimating/planner/scrum-consequences.svg)

-The diagram above shows this behaviour in the form of a [Risk-First Diagram](/thinking/Risk-First-Diagrams.md). Put briefly: _risks_ ([Schedule Risk](/tags/Schedule-Risk), [Feature Risk](/tags/Feature-Risk)) are addressed by actions such as "Development", "Review" or "Planning Poker".
+The diagram above shows this behaviour in the form of a [Risk-First Diagram](/thinking/Risk-First-Diagrams). Put briefly: _risks_ ([Schedule Risk](/tags/Schedule-Risk), [Feature Risk](/tags/Feature-Risk)) are addressed by actions such as "Development", "Review" or "Planning Poker".
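One crude way to picture the structure such diagrams describe is as data: each action strikes out the risks on its left and introduces attendant risks on its right. A minimal sketch (Python; the risk names here are illustrative):

```python
from dataclasses import dataclass, field

@dataclass
class Action:
    """One action box on a Risk-First diagram."""
    name: str
    addresses: list = field(default_factory=list)  # struck out, on the left
    attendant: list = field(default_factory=list)  # gained, on the right

actions = [
    Action("Development", addresses=["Feature Risk"],
           attendant=["Complexity Risk"]),
    Action("Review", addresses=["Feature Risk", "Coordination Risk"]),
]

for a in actions:
    print(f"{a.name}: removes {a.addresses}, adds {a.attendant}")
```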
-If you're new to [Risk-First](https://www.riskfirst.org) then it's probably worth explaining at this point that one of the purposes of this project is to enumerate the different types of risk you could face running a software project. You can begin to learn about them all [here](/risks/Start.md). Suffice to say, we have icons to represent each of these kinds of risks, and the rest of this article will introduce some of them to you in passing.
+If you're new to [Risk-First](https://www.riskfirst.org) then it's probably worth explaining at this point that one of the purposes of this project is to enumerate the different types of risk you could face running a software project. You can begin to learn about them all [here](/risks/Start). Suffice to say, we have icons to represent each of these kinds of risks, and the rest of this article will introduce some of them to you in passing.

##### On a Risk-First diagram, when you address a risk by taking an action, you draw a line through the risk.

## Estimating Is A Poor Tool

-Seen like this, **Planning Poker** is a tool to avoid the [Coordination Risk](/tags/Coordination-Risk) problem of everyone needing to complete their work for the end of the Sprint. But estimating is _really hard_: In this track so far we've looked at three different ways in which software estimation deviates from the straightforward extrapolation (a.k.a, [Fill-The-Bucket](Fill-The-Bucket.md)) we learnt about in maths classes at school:
+Seen like this, **Planning Poker** is a tool to avoid the [Coordination Risk](/tags/Coordination-Risk) problem of everyone needing to complete their work for the end of the Sprint. But estimating is _really hard_: In this track so far we've looked at three different ways in which software estimation deviates from the straightforward extrapolation (a.k.a. [Fill-The-Bucket](Fill-The-Bucket)) we learnt about in maths classes at school:

-- [Kitchen Cabinet](Kitchen-Cabinet.md): In this domain, there is _hidden work_. We don't know how much there might be. If we can break down tasks into smaller units, then by the _law of averages_ and the _central limit theorem_, we can apply some statistics to figure out when we might finish.
-- [Journeys](Journeys.md): In this domain, work is heterogeneous and interconnected. Different parts depend on each other, and a failure in one part might mean going right back to square one. The way to estimate in this domain is to _know the landscape_ and to build in _buffers_.
-- [Fractals](Fractals.md): In this domain, [Parkinson's Law](/risks/Process-Risk.md#bureaucracy) is king. There is always more work to be done. The best thing we can do is try and apply ourselves to the _highest value_ work at any given point, and frequently refer back to reality to find out if we're building the right thing.
+- [Kitchen Cabinet](Kitchen-Cabinet): In this domain, there is _hidden work_. We don't know how much there might be. If we can break down tasks into smaller units, then by the _law of averages_ and the _central limit theorem_, we can apply some statistics to figure out when we might finish.
+- [Journeys](Journeys): In this domain, work is heterogeneous and interconnected. Different parts depend on each other, and a failure in one part might mean going right back to square one. The way to estimate in this domain is to _know the landscape_ and to build in _buffers_.
+- [Fractals](Fractals): In this domain, [Parkinson's Law](/risks/Process-Risk#bureaucracy) is king. There is always more work to be done.
The best thing we can do is try and apply ourselves to the _highest value_ work at any given point, and frequently refer back to reality to find out if we're building the right thing.

![Three Dimensions From Fill-The-Bucket](/img/estimates/dimensions.png)

@@ -67,7 +67,7 @@ How can we convert a planning session away from being estimate-focused and back

- Consideration for what is going on longer-term in the project.
- Consideration of risks besides how long something takes. Sure, that's important, because it affects _value_, but it's not the only thing to worry about.
- _Deciding what is important_ above _what can fit into a sprint_.
-- Making [Bets](../bets/Purpose-Development-Team.md): what actions give the biggest [Payoff](/thinking/Glossary.md#payoff) for the smallest [Stake](/thinking/Glossary.md#stake)?
+- Making [Bets](../bets/Purpose-Development-Team): what actions give the biggest [Payoff](/thinking/Glossary#payoff) for the smallest [Stake](/thinking/Glossary#stake)?

## A Scenario

@@ -104,7 +104,7 @@ Let's move on to task 2, the **Search Function**, as shown in the above diagram.

As with the **Rendering Bug**, above, we lose something: [Feature Risk](/tags/Feature-Risk), which is the risk (to us) that the features our product is supplying don't meet the client's (or the market's) requirements. Writing code is all about identifying and removing [Feature Risk](/tags/Feature-Risk), and building products that fit the needs of their users.

-So as in the Rendering Bug example, we can show [Feature Risk](/tags/Feature-Risk) being eliminated by showing it on the left with a strike-out line. However, it's been established during analysis that the way to implement this feature is to introduce [ElasticSearch](https://www.elastic.co), a third-party piece of software. This in itself is an [Attendant Risk](/thinking/Glossary.md#attendant-risk) of taking that action:
+So as in the Rendering Bug example, we can show [Feature Risk](/tags/Feature-Risk) being eliminated by showing it on the left with a strike-out line. However, it's been established during analysis that the way to implement this feature is to introduce [ElasticSearch](https://www.elastic.co), a third-party piece of software. This in itself is an [Attendant Risk](/thinking/Glossary#attendant-risk) of taking that action:

- Are we going to find that easy to deploy and maintain?
- What impact will this have on hosting charges?

@@ -113,21 +113,21 @@ So as in the Rendering Bug example, we can show [Feature Risk](/tags/Feature-Ris

##### If an action leads to new risks, show them on the right side of the action.

-So, on the right side of the action, we are showing the [Attendant Risks](/thinking/Glossary.md#attendant-risk) we _gain_ from taking the action.
+So, on the right side of the action, we are showing the [Attendant Risks](/thinking/Glossary#attendant-risk) we _gain_ from taking the action.

## Question 3: What Is The Expected Return?

-If we know what we lose and what we gain from each action we take, then it's simple maths to work out what the best actions to take on a project are simply pick the ones with the greatest [Expected Return](../thinking/Glossary.md#expected-return) (as shown in the above diagram).
+If we know what we lose and what we gain from each action we take, then it's simple maths to work out what the best actions to take on a project are: simply pick the ones with the greatest [Expected Return](/thinking/Glossary#expected-return) (as shown in the above diagram).
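As a toy version of that "simple maths" (Python; the scores are invented for illustration - as the Judgement section below notes, real risks rarely quantify this neatly):

```python
# Hypothetical scores: risk removed vs. attendant risk gained.
tasks = {
    "Rendering Bug":   (5, 1),
    "Search Function": (8, 4),  # ElasticSearch brings operational risk
    "Fix CI Pipeline": (6, 1),
}

def expected_return(removed: int, gained: int) -> int:
    return removed - gained

ranked = sorted(tasks, key=lambda t: expected_return(*tasks[t]), reverse=True)
for task in ranked:
    print(task, expected_return(*tasks[task]))
```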
### Upside Risk -It's worth noting - not all risks are bad! [Upside Risk](/thinking/Glossary.md#upside-risk) captures this concept well. If I buy a lottery ticket, there's a big risk that I'll have wasted some money buying the ticket. But there's also the [Upside Risk](/thinking/Glossary.md#upside-risk) that I might win! Both upside and downside risks should be captured in your analysis of [Payoff](/thinking/Glossary.md#payoff). +It's worth noting - not all risks are bad! [Upside Risk](/thinking/Glossary#upside-risk) captures this concept well. If I buy a lottery ticket, there's a big risk that I'll have wasted some money buying the ticket. But there's also the [Upside Risk](/thinking/Glossary#upside-risk) that I might win! Both upside and downside risks should be captured in your analysis of [Payoff](/thinking/Glossary#payoff). -While some projects are expressed in terms of addressing risks (e.g. installing a security system, replacing the tyres on your car) a lot are expressed in terms of _opportunities_ (e.g. create a new product market, win a competition). It's important to consider these longer-term objectives in the [Payoff](/thinking/Glossary.md#payoff). +While some projects are expressed in terms of addressing risks (e.g. installing a security system, replacing the tyres on your car) a lot are expressed in terms of _opportunities_ (e.g. create a new product market, win a competition). It's important to consider these longer-term objectives in the [Payoff](/thinking/Glossary#payoff). ![Goals, Anti-Goals, Risks and Upside Risks](/img/generated/estimating/planner/focus.svg) -The diagram above lays these out: We'll work hard to _improve the probability_ of [Goals](/thinking/Glossary.md#goal) and [Upside Risks](/thinking/Glossary.md#upside-risk) occurring, whilst at the same time taking action to prevent [Anti-Goals](https://riskfirst.org/post/news/2020/01/17/Anti-Goals) and [Downside Risks](/thinking/Glossary.md#risk). +The diagram above lays these out: We'll work hard to _improve the probability_ of [Goals](/thinking/Glossary#goal) and [Upside Risks](/thinking/Glossary#upside-risk) occurring, whilst at the same time taking action to prevent [Anti-Goals](https://riskfirst.org/post/news/2020/01/17/Anti-Goals) and [Downside Risks](/thinking/Glossary#risk). (There's a gentle introduction to the idea of _Anti-Goals_ [here](https://riskfirst.org/post/news/2020/01/17/Anti-Goals) which might be worth the diversion). @@ -151,21 +151,21 @@ On the face of it, it's clear why the Sales Team might feel annoyed - there is a ![Fixing The Build, v2](/img/generated/estimating/planner/ci-impact-2.svg) -The above diagram models that. Fixing the CI Pipeline is now implicated in reducing [Staff Risk](/tags/Staff-Risk), [Coordination Risk](/tags/Coordination-Risk) and [Funding Risk](/tags/Funding-Risk) for the whole business and therefore seems like it might have a better [Expected Return](/thinking/Glossary.md#expected-return). +The above diagram models that. Fixing the CI Pipeline is now implicated in reducing [Staff Risk](/tags/Staff-Risk), [Coordination Risk](/tags/Coordination-Risk) and [Funding Risk](/tags/Funding-Risk) for the whole business and therefore seems like it might have a better [Expected Return](/thinking/Glossary#expected-return). ## Judgement -But is that a fair assessment? How would you determine [Expected Return](/thinking/Glossary.md#expected-return) in this situation? 
It's clear that even though we might be able to _describe_ the risks, it might not be all that easy to _quantify_ them.
+But is that a fair assessment? How would you determine [Expected Return](/thinking/Glossary#expected-return) in this situation? It's clear that even though we might be able to _describe_ the risks, it might not be all that easy to _quantify_ them.

Luckily, we don't really have to. If I am trying to evaluate a single action on my own, all I really need to do is answer one question: do I lose more risk than I gain?

-All I need to do is "weigh up" the change in risks as best as I can. A lot of the time, the [Payoff](/thinking/Glossary.md#payoff) will be obviously worth it, or obviously not.
+All I need to do is "weigh up" the change in risks as best I can. A lot of the time, the [Payoff](/thinking/Glossary#payoff) will be obviously worth it, or obviously not.

## Ensemble

-So far, we've been looking at each task individually, working out which risks we're addressing, and which ones we're exposed to as a result. If you have plenty of spare talent and only a few tasks, then maybe that's enough and you can get to work on all the tasks that have a positive [Payoff](/thinking/Glossary.md#payoff). But if you're constrained, then you should be hunting for the [actions](/thinking/Glossary.md#taking-action) with the biggest [Payoff](/thinking/Glossary.md#payoff) and doing those first.
+So far, we've been looking at each task individually, working out which risks we're addressing, and which ones we're exposed to as a result. If you have plenty of spare talent and only a few tasks, then maybe that's enough and you can get to work on all the tasks that have a positive [Payoff](/thinking/Glossary#payoff). But if you're constrained, then you should be hunting for the [actions](/thinking/Glossary#taking-action) with the biggest [Payoff](/thinking/Glossary#payoff) and doing those first.

-Things change too when you have a whole team engaged in the planning process. Although people will generally agree on what the risks _are_, they often will disagree on the [Probability they will occur, or the impact if they do](/thinking/Track-Risk.md#risk-registers). In cases like these, you might want to allow each stakeholder to "vote up" the risks they consider significant, or vote up the actions they consider to have high [Payoff](/thinking/Glossary.md#payoff). This will be covered in further detail in the [next section](Stop-Estimating-Start-Navigating.md).
+Things change too when you have a whole team engaged in the planning process. Although people will generally agree on what the risks _are_, they often will disagree on the [Probability they will occur, or the impact if they do](/thinking/Track-Risk#risk-registers). In cases like these, you might want to allow each stakeholder to "vote up" the risks they consider significant, or vote up the actions they consider to have high [Payoff](/thinking/Glossary#payoff). This will be covered in further detail in the [next section](Stop-Estimating-Start-Navigating).

But for now, let's talk about the ways in which this is better or worse than Planning Poker.

@@ -173,7 +173,7 @@

![Instead of Estimates](/img/generated/estimating/planner/estimates.svg)

-- **We've made explicit the trade-offs for carrying out pieces of work**.
If [building the right thing](Fixing-Scrum.md#10x) is the most important thing we can do, then making sure the whole team are on the same page with respect to what the pros or cons might be.
+- **We've made explicit the trade-offs for carrying out pieces of work**. If [building the right thing](Fixing-Scrum#10x) is the most important thing we can do, then we need to make sure the whole team are on the same page with respect to what the pros and cons might be.
- **This isn't user stories**: we're not describing a piece of work and asking how long it'll take. We're very clearly figuring out what the advantages and disadvantages are to attempting something. This is fundamentally a different discussion to a Scrum planning session.
- **Estimates are de-emphasised**: We're not coming up with hard estimates, but we _are_ considering risks to deadlines, to budgets, to funding. As shown in the diagram above, there are _plenty_ of risks associated with tasks taking too long.
- **We're not planning, so much as weighing risks**: A lot of project plans fall to pieces because they insist on certain events occurring at certain times. By talking about risk, we're acknowledging what we don't know.

@@ -194,4 +194,4 @@ The model we are describing here is just _a graphic representation of a discussi

One argument made _for_ the Scrum planning game is that it gives everyone on the development team a voice. For many, this might be the biggest contribution of Planning Poker and we definitely don't want to lose that.

-We've not looked at how Risk-First Analysis can be _gamified_ in the way that Planning Poker is - we'll get to that. But first, let's look in more detail at the [Story Point](On-Story-Points.md) idea and see if it can be improved. \ No newline at end of file
+We've not looked at how Risk-First Analysis can be _gamified_ in the way that Planning Poker is - we'll get to that. But first, let's look in more detail at the [Story Point](On-Story-Points) idea and see if it can be improved. \ No newline at end of file
diff --git a/docs/estimating/Stop-Estimating-Start-Navigating.md b/docs/estimating/Stop-Estimating-Start-Navigating.md
index 2658c9183..f7d780ff4 100644
--- a/docs/estimating/Stop-Estimating-Start-Navigating.md
+++ b/docs/estimating/Stop-Estimating-Start-Navigating.md
@@ -13,15 +13,15 @@ sidebar_position: 9

# Stop Estimating, Start Navigating

-This is the _ninth_ article in the [Risk-First](https://riskfirst.org) track on [Estimating](Start.md). We've come a long way:
+This is the _ninth_ article in the [Risk-First](https://riskfirst.org) track on [Estimating](Start). We've come a long way:

-- In the first four articles, [Fill-The-Bucket](Fill-The-Bucket.md), [Kitchen Cabinet](Kitchen-Cabinet.md), [Journeys](Journeys.md) and [Fractals](Fractals.md) we looked at the various reasons why estimating is such a nightmare on software projects. This is summarised in [Analogies](Analogies.md). The upshot is that predictable, well understood, repeatable things can be estimated with some confidence.
However, as soon as software is predictable, repeatable and well-understood, _you're doing it wrong_.
+- In the first four articles, [Fill-The-Bucket](Fill-The-Bucket), [Kitchen Cabinet](Kitchen-Cabinet), [Journeys](Journeys) and [Fractals](Fractals) we looked at the various reasons why estimating is such a nightmare on software projects. This is summarised in [Analogies](Analogies). The upshot is that predictable, well understood, repeatable things can be estimated with some confidence. However, as soon as software is predictable, repeatable and well-understood, _you're doing it wrong_.

-- In article seven, we explored how [Scrum](Fixing-Scrum.md), the popular Agile methodology, fails to understand this crucial problem with estimates (among other failings).
+- In article seven, we explored how [Scrum](Fixing-Scrum), the popular Agile methodology, fails to understand this crucial problem with estimates (among other failings).

-- Then, in [Risk-First Analysis](Risk-First-Analysis.md) we look at how we can work out what to build by examining what [risks](/thinking/Glossary.md#risk) we'd like to address and which [goals](/thinking/Glossary.md#risk) or [Upside Risks](/thinking/Glossary.md#upside-risk) we'd like to see happen.
+- Then, in [Risk-First Analysis](Risk-First-Analysis) we look at how we can work out what to build by examining what [risks](/thinking/Glossary#risk) we'd like to address and which [goals](/thinking/Glossary#goal) or [Upside Risks](/thinking/Glossary#upside-risk) we'd like to see happen.

-So, now we're up to date. It's article nine, and I was going to build on [Risk-First Analysis](Risk-First-Analysis.md) to show how to plan work for a team of people over a week, a month, a year.
+So, now we're up to date. It's article nine, and I was going to build on [Risk-First Analysis](Risk-First-Analysis) to show how to plan work for a team of people over a week, a month, a year.

## Something Happened

diff --git a/docs/overview/Contributors.md b/docs/overview/Contributors.md
index 632d26d13..e959e8422 100644
--- a/docs/overview/Contributors.md
+++ b/docs/overview/Contributors.md
@@ -28,7 +28,7 @@ Ideas, issues and proof-reading:

## Want To Help?

-If you feel something important is missing, or you spot a mistake, [we need help](https://github.com/risk-first/website/blob/master/CONTRIBUTING.md).
+If you feel something important is missing, or you spot a mistake, [we need help](https://github.com/risk-first/website/blob/master/CONTRIBUTING.md).

Although this is a collaborative Github project, it's not meant to be an open-ended discussion of software techniques like [Ward's Wiki](https://wiki.c2.com). In order to be concise and useful, discussions need to be carried out by either:

diff --git a/docs/overview/Quick-Summary.md b/docs/overview/Quick-Summary.md
index 7b455ef00..c044fbe82 100644
--- a/docs/overview/Quick-Summary.md
+++ b/docs/overview/Quick-Summary.md
@@ -15,7 +15,7 @@ tags:

## 1. There are Lots of Ways to Run Software Projects

-There are lots of ways to look at a project in-flight. For example, metrics such as “number of open tickets”, “story points”, “code coverage" or "release cadence" give us a numerical feel for how things are going and what needs to happen next. We also judge the health of projects by the practices used on them, such as [Continuous Integration](/tags/Integration-Testing.md), [Unit Testing](/tags/Automated-Testing) or [Pair Programming](/tags/Pair-Programming).
+There are lots of ways to look at a project in-flight. For example, metrics such as “number of open tickets”, “story points”, “code coverage" or "release cadence" give us a numerical feel for how things are going and what needs to happen next. We also judge the health of projects by the practices used on them, such as [Continuous Integration](/tags/Integration-Testing), [Unit Testing](/tags/Automated-Testing) or [Pair Programming](/tags/Pair-Programming).
Software methodologies, then, are collections of tools and practices: “Agile”, “Waterfall”, “Lean” or “Phased Delivery” all prescribe different approaches to running a project and are opinionated about the way they think projects should be done and the tools that should be used. @@ -25,11 +25,11 @@ A key question then is: **how do we select the right tools for the job?** ## 2. We Can Look at Projects in Terms of Risks -One way to examine the project in-flight is by looking at the [risks](/thinking/Glossary.md#risk) it faces. +One way to examine the project in-flight is by looking at the [risks](/thinking/Glossary#risk) it faces. Commonly, tools such as [RAID logs](https://www.projectmanager.com/blog/raid-log-use-one) and [RAG status](https://pmtips.net/blog-new/what-does-rag-status-mean) reporting are used. These techniques should be familiar to project managers and developers everywhere. -However, the Risk-First view is that we can go much further: that each item of work being done on the project is to manage a particular risk. [Risk](/thinking/Glossary.md#risk) isn't something that just appears in a report, it actually drives *everything we do*. +However, the Risk-First view is that we can go much further: that each item of work being done on the project is to manage a particular risk. [Risk](/thinking/Glossary#risk) isn't something that just appears in a report, it actually drives *everything we do*. For example: @@ -37,7 +37,7 @@ For example: - A task about improving the health indicators could be seen as mitigating _the risk of the application failing and no-one reacting to it_. - Even a task as basic as implementing a new function in the application is mitigating _the risk that users are dissatisfied and go elsewhere_. -One assertion of Risk-First is that **every action you take on a project is to manage a [risk](/thinking/Glossary.md#risk).** +One assertion of Risk-First is that **every action you take on a project is to manage a [risk](/thinking/Glossary#risk).** ## 3. We Can Break Down Risks on a Project Methodically @@ -54,7 +54,7 @@ Software risks are difficult to quantify and mostly the effort involved in doing With this in place, we can: - Talk about the types of risks we face on our projects, using an appropriate language. -- Anticipate [Hidden Risks](/thinking/Glossary.md#hidden-risk) that we hadn't considered before. +- Anticipate [Hidden Risks](/thinking/Glossary#hidden-risk) that we hadn't considered before. - Weigh the risks against each other and decide which order to tackle them. ## 4. We Can Analyse Tools and Techniques in Terms of how they Manage Risk @@ -91,9 +91,9 @@ We have described a model of risk within software projects, looking something li How do we take this further? -One idea explored is the _[Risk Landscape](/risks/Risk-Landscape.md)_: although the software team can't remove risk from their project, they can take actions that move them to a place in the [Risk Landscape](/risks/Risk-Landscape.md) where the risks on the project are more favourable than where they started. +One idea explored is the _[Risk Landscape](/risks/Risk-Landscape)_: although the software team can't remove risk from their project, they can take actions that move them to a place in the [Risk Landscape](/risks/Risk-Landscape) where the risks on the project are more favourable than where they started. 
-From there, we examine basic risk archetypes you will encounter on the software project, to build up a [vocabulary of Software Risk](/risks/Staging-And-Classifying.md) and look at which specific tools you can use to mitigate each kind of risk. +From there, we examine basic risk archetypes you will encounter on the software project, to build up a [vocabulary of Software Risk](/risks/Staging-And-Classifying) and look at which specific tools you can use to mitigate each kind of risk. Then, we look at software practices and how they manage various risks. Beyond this we examine the question: _how can a Risk-First approach inform the use of this practice?_ @@ -107,4 +107,4 @@ Risk-First aims to provide a framework in which we can _analyse these actions_ a ## Next Steps -[Tracks](Tracks.md) explains how the material on this site is structured. +[Tracks](Tracks) explains how the material on this site is structured. diff --git a/docs/overview/Tracks.md b/docs/overview/Tracks.md index a0b74d973..12fcf6420 100644 --- a/docs/overview/Tracks.md +++ b/docs/overview/Tracks.md @@ -21,4 +21,4 @@ There is quite a lot of material on this site so to aid digestion Risk-First is ## Lets Go! -If you're just starting with Risk-First, then let's head to [Thinking Risk-First](/thinking/Start.md) next... \ No newline at end of file +If you're just starting with Risk-First, then let's head to [Thinking Risk-First](/thinking/Start) next... \ No newline at end of file diff --git a/docs/practices/Deployment-And-Operations/Automation.md b/docs/practices/Deployment-And-Operations/Automation.md index 9b3862043..fb9ff191a 100644 --- a/docs/practices/Deployment-And-Operations/Automation.md +++ b/docs/practices/Deployment-And-Operations/Automation.md @@ -47,12 +47,12 @@ practice: > "Automation describes a wide range of technologies that reduce human intervention in processes, mainly by predetermining decision criteria, subprocess relationships, and related actions, as well as embodying those predeterminations in machines.": - [Automation, _Wikipedia_](https://en.wikipedia.org/wiki/Automation) -One of the key ways to measure whether your team is doing _useful work_ is to look at whether, in fact, it can be automated. And this is the spirit of [DevOps](DevOps) - the idea that people in general are poor at repeatable tasks, and anything people do repeatedly _should_ be automated. +One of the key ways to measure whether your team is doing _useful work_ is to look at whether, in fact, it can be automated. And this is the spirit of [DevOps](/methods/DevOps) - the idea that people in general are poor at repeatable tasks, and anything people do repeatedly _should_ be automated. 
See: - - [Automation (Meeting Reality)](/thinking/Meeting-Reality.md#example-automation) - - [The Purpose of Process](/risks/Process-Risk.md#the-purpose-of-process) + - [Automation (Meeting Reality)](/thinking/Meeting-Reality#example-automation) + - [The Purpose of Process](/risks/Process-Risk#the-purpose-of-process) ## See Also diff --git a/docs/practices/Deployment-And-Operations/Configuration-Management.md b/docs/practices/Deployment-And-Operations/Configuration-Management.md index 9db286587..580317a1a 100644 --- a/docs/practices/Deployment-And-Operations/Configuration-Management.md +++ b/docs/practices/Deployment-And-Operations/Configuration-Management.md @@ -43,7 +43,7 @@ Configuration Management (CM) involves systematically handling changes to ensure See: - - [Consider Payoff](/thinking/Consider-Payoff.md) + - [Consider Payoff](/thinking/Consider-Payoff) ## See Also diff --git a/docs/practices/Deployment-And-Operations/Demand-Management.md b/docs/practices/Deployment-And-Operations/Demand-Management.md index 412445d17..23efc8916 100644 --- a/docs/practices/Deployment-And-Operations/Demand-Management.md +++ b/docs/practices/Deployment-And-Operations/Demand-Management.md @@ -43,7 +43,7 @@ TODO: buffers, queues, pools, kanban See: -- [Scarcity Risk](/risks/Scarcity-Risks/Mitigations) +- [Scarcity Risk](/risks/Dependency-Risks/Scarcity-Risks/Mitigations) ## See Also diff --git a/docs/practices/Deployment-And-Operations/Monitoring.md b/docs/practices/Deployment-And-Operations/Monitoring.md index 39739c91c..29eac9474 100644 --- a/docs/practices/Deployment-And-Operations/Monitoring.md +++ b/docs/practices/Deployment-And-Operations/Monitoring.md @@ -41,9 +41,9 @@ practice: Monitoring encompasses a wide range of practices designed to ensure that systems operate efficiently and without interruption. This includes tracking the performance, availability, and security of networks, systems, and applications. Effective monitoring helps in early detection of issues, allowing for prompt resolution and minimizing the impact on operations. 
See: - - [Operations Management](/risks/Operational-Risk.md#operations-management) - - [Monitoring](/risks/Agency-Risk.md#monitoring) - - [Control](/risks/Operational-Risk.md#control) + - [Operations Management](/risks/Operational-Risk#operations-management) + - [Monitoring](/risks/Agency-Risk#monitoring) + - [Control](/risks/Operational-Risk#control) ## See Also diff --git a/docs/practices/Deployment-And-Operations/Release.md b/docs/practices/Deployment-And-Operations/Release.md index 7fbbba572..0eb316449 100644 --- a/docs/practices/Deployment-And-Operations/Release.md +++ b/docs/practices/Deployment-And-Operations/Release.md @@ -40,9 +40,9 @@ Release / Delivery involves the structured and controlled process of moving soft See: -- [Development Process](/thinking/Development-Process.md#a-toy-process) -- [Consider Payoff](/thinking/Consider-Payoff.md#example-4-continue-testing-or-release) -- [Production (Cadence)](/thinking/Cadence.md#production) +- [Development Process](/thinking/Development-Process#a-toy-process) +- [Consider Payoff](/thinking/Consider-Payoff#example-4-continue-testing-or-release) +- [Production (Cadence)](/thinking/Cadence#production) ## See Also diff --git a/docs/practices/Development-And-Coding/Coding.md b/docs/practices/Development-And-Coding/Coding.md index 2195e2dfc..46086cd2e 100644 --- a/docs/practices/Development-And-Coding/Coding.md +++ b/docs/practices/Development-And-Coding/Coding.md @@ -40,7 +40,7 @@ Coding is a core activity in software development, involving the translation of See: - - [Time/Reality Tradeoff](/thinking/Cadence.md#time--reality-trade-off) + - [Time/Reality Tradeoff](/thinking/Cadence#time--reality-trade-off) ## See Also diff --git a/docs/practices/Development-And-Coding/Pair-Programming.md b/docs/practices/Development-And-Coding/Pair-Programming.md index 3953af314..30e269de3 100644 --- a/docs/practices/Development-And-Coding/Pair-Programming.md +++ b/docs/practices/Development-And-Coding/Pair-Programming.md @@ -44,7 +44,7 @@ Pair Programming involves two developers working together on the same code. One See: - - [Crisis Mode](/thinking/Crisis-Mode.md) + - [Crisis Mode](/thinking/Crisis-Mode) ## See Also diff --git a/docs/practices/Development-And-Coding/Prototyping.md b/docs/practices/Development-And-Coding/Prototyping.md index c3c20a5e8..d0fda89df 100644 --- a/docs/practices/Development-And-Coding/Prototyping.md +++ b/docs/practices/Development-And-Coding/Prototyping.md @@ -39,7 +39,7 @@ practice: Prototyping in software development involves creating early models or mockups of the software to test concepts and gather feedback. This practice helps in validating design choices, identifying potential issues, and ensuring that the final product meets the users' needs and expectations. See: - - [Spike Solution (Coding Bets)](../bets/Coding-Bets.md#spike-solutions-a-new-technology-bet) + - [Spike Solution (Coding Bets)](/bets/Coding-Bets#spike-solutions-a-new-technology-bet) ## See Also diff --git a/docs/practices/Development-And-Coding/Refactoring.md b/docs/practices/Development-And-Coding/Refactoring.md index 815912b16..0aabb0994 100644 --- a/docs/practices/Development-And-Coding/Refactoring.md +++ b/docs/practices/Development-And-Coding/Refactoring.md @@ -51,9 +51,9 @@ Refactoring is all about ensuring you have the right abstractions. 
See: - - [Refactoring](/risks/Complexity-Risk.md#refactoring) - - [The Power of Abstractions](/risks/Staging-And-Classifying.md#the-power-of-abstractions) - - [Hierarchies and Modularisation](/risks/Complexity-Risk.md#hierarchies-and-modularisation) + - [Refactoring](/risks/Complexity-Risk#refactoring) + - [The Power of Abstractions](/risks/Staging-And-Classifying#the-power-of-abstractions) + - [Hierarchies and Modularisation](/risks/Complexity-Risk#hierarchies-and-modularisation) ## External References diff --git a/docs/practices/Development-And-Coding/Runtime-Adoption.md b/docs/practices/Development-And-Coding/Runtime-Adoption.md index 16f231289..2534ed76d 100644 --- a/docs/practices/Development-And-Coding/Runtime-Adoption.md +++ b/docs/practices/Development-And-Coding/Runtime-Adoption.md @@ -42,9 +42,9 @@ Adoption of standards and libraries involves implementing and adhering to establ See: - - [Languages and Dependencies](/risks/Complexity-Risk.md#languages-and-dependencies) - - [Software Libraries (Software Dependency Risk)](/risks/Software-Dependency-Risk.md#2-software-libraries) - - [Software-as-a-Service (Software Dependency Risk)](/risks/Software-Dependency-Risk.md#3--software-as-a-service) + - [Languages and Dependencies](/risks/Complexity-Risk#languages-and-dependencies) + - [Software Libraries (Software Dependency Risk)](/risks/Software-Dependency-Risk#2-software-libraries) + - [Software-as-a-Service (Software Dependency Risk)](/risks/Software-Dependency-Risk#3--software-as-a-service) ## See Also diff --git a/docs/practices/Development-And-Coding/Standardisation.md b/docs/practices/Development-And-Coding/Standardisation.md index 91b909971..bc0575815 100644 --- a/docs/practices/Development-And-Coding/Standardisation.md +++ b/docs/practices/Development-And-Coding/Standardisation.md @@ -42,7 +42,7 @@ practice: Standardisation involves creating, implementing, and enforcing standards and guidelines to ensure consistency, compatibility, and quality across software projects. This practice helps in maintaining uniformity, reducing complexity, and improving communication among team members and stakeholders. See: -- [Unwritten Software (Software Dependency Risk)](/risks/Software-Dependency-Risk.md#unwritten-software) +- [Unwritten Software (Software Dependency Risk)](/risks/Software-Dependency-Risk#unwritten-software) ## See Also diff --git a/docs/practices/Development-And-Coding/Tool-Adoption.md b/docs/practices/Development-And-Coding/Tool-Adoption.md index e5084f87b..e342c989e 100644 --- a/docs/practices/Development-And-Coding/Tool-Adoption.md +++ b/docs/practices/Development-And-Coding/Tool-Adoption.md @@ -51,7 +51,7 @@ In general, unless the problem is somehow _specific to your circumstances_ it ma Tools in general are _good_ and _worth using_ if they offer you a better risk return than you would have had from not using them. -But, this is a low bar - some tools offer _amazing_ returns on investment. 
The [Silver Bullets](/complexity/Silver-Bullets.md) article describes in general some of these: +But, this is a low bar - some tools offer _amazing_ returns on investment: - Assemblers - Compilers - Garbage Collection diff --git a/docs/practices/External-Relations/Analysis.md b/docs/practices/External-Relations/Analysis.md index 36979727a..16ac4dedf 100644 --- a/docs/practices/External-Relations/Analysis.md +++ b/docs/practices/External-Relations/Analysis.md @@ -44,7 +44,7 @@ Analysis in software development involves examining and breaking down the requir See: - - [Environmental Scanning](/risks/Operational-Risk.md#scanning-the-operational-context) + - [Environmental Scanning](/risks/Operational-Risk#scanning-the-operational-context) ## See Also diff --git a/docs/practices/External-Relations/Outsourcing.md b/docs/practices/External-Relations/Outsourcing.md index 4afd579d2..031598507 100644 --- a/docs/practices/External-Relations/Outsourcing.md +++ b/docs/practices/External-Relations/Outsourcing.md @@ -44,7 +44,7 @@ Outsourcing in software development involves hiring external vendors or service **Pairing** and **Mobbing** as mitigations to [Coordination Risk](/tags/Coordination-Risk) are easiest when developers are together in the same room. But it doesn't always work out like this. Teams spread in different locations and timezones naturally don't have the same [communication bandwidth](/tags/Communication-Risk) and you _will_ have more issues with [Coordination Risk](/tags/Coordination-Risk). -In the extreme, I've seen situations where the team at one location has decided to "suck up" the extra development effort themselves rather than spend time trying to bring a new remote team up-to-speed. More common is for one location to do the development, while another gets the [Support](Support) duties. +In the extreme, I've seen situations where the team at one location has decided to "suck up" the extra development effort themselves rather than spend time trying to bring a new remote team up-to-speed. More common is for one location to do the development, while another gets the [Support](../Planning-And-Management/Issue-Management) duties. When this happens, it's because somehow the team feel that [Coordination Risk](/tags/Coordination-Risk) is more unmanageable than [Schedule Risk](/tags/Schedule-Risk). 
diff --git a/docs/practices/Planning-And-Management/Approvals.md b/docs/practices/Planning-And-Management/Approvals.md
index 39865996d..3221a2975 100644
--- a/docs/practices/Planning-And-Management/Approvals.md
+++ b/docs/practices/Planning-And-Management/Approvals.md
@@ -45,7 +45,7 @@ Approval / Sign Off in software development involves getting formal approval fro

See:

-- [Processes, Sign-Offs and Agency Risk](/risks/Process-Risk.md#processes-sign-offs-and-agency-risk)_
+- [Processes, Sign-Offs and Agency Risk](/risks/Process-Risk#processes-sign-offs-and-agency-risk)

## See Also

diff --git a/docs/practices/Planning-And-Management/Delegation.md b/docs/practices/Planning-And-Management/Delegation.md
index ad6b062da..74fcb436d 100644
--- a/docs/practices/Planning-And-Management/Delegation.md
+++ b/docs/practices/Planning-And-Management/Delegation.md
@@ -40,8 +40,8 @@ Delegation involves assigning responsibility and authority to others to carry ou

See:

- - [Goal Alignment](/risks/Agency-Risk.md#goal-alignment)=
- - [Risk-First Diagrams](/thinking/Risk-First-Diagrams.md#example-blaming-others)
+ - [Goal Alignment](/risks/Agency-Risk#goal-alignment)
+ - [Risk-First Diagrams](/thinking/Risk-First-Diagrams#example-blaming-others)

## See Also

diff --git a/docs/practices/Planning-And-Management/Design.md b/docs/practices/Planning-And-Management/Design.md
index 0b53ba27e..7c643d2f3 100644
--- a/docs/practices/Planning-And-Management/Design.md
+++ b/docs/practices/Planning-And-Management/Design.md
@@ -47,7 +47,7 @@ Architecture / Design in software development involves creating the high-level s

Design is what you do every time you think of an action to mitigate a risk. And **Big Design Up Front** is where you do a lot of it in one go, for example:

- Where you think about the design of all (or a set of) the requirements in one go, in advance.
- - Where you consider a _set of [Attendant Risks](/thinking/Glossary.md#attendant-risk)_ all at the same time.
+ - Where you consider a _set of [Attendant Risks](/thinking/Glossary#attendant-risk)_ all at the same time.

Compare with "little" design, where we consider just the _next_ requirement, or the _most pressing_ risk.

@@ -55,11 +55,11 @@ Although it's fallen out of favour in Agile methodologies, there are benefits to

## How It Works

-As we saw in [Meet Reality](/thinking/Meeting-Reality.md), "Navigating the [Risk Landscape](/risks/Risk-Landscape.md)", meant going from a position of high risk, to a position of lower risk. [Agile Design](Agile) is much like [Gradient Descent](https://en.wikipedia.org/wiki/Gradient_descent): each day, one small step after another _downwards in risk_ on the [Risk Landscape](/risks/Risk-Landscape.md).
+As we saw in [Meet Reality](/thinking/Meeting-Reality), "Navigating the [Risk Landscape](/risks/Risk-Landscape)" meant going from a position of high risk, to a position of lower risk. [Agile Design](/tags/Agile) is much like [Gradient Descent](https://en.wikipedia.org/wiki/Gradient_descent): each day, one small step after another _downwards in risk_ on the [Risk Landscape](/risks/Risk-Landscape).

But the problem with this is you can get trapped in a [Local Minimum](https://en.wikipedia.org/wiki/Maximum_and_minimum#Search), where there are _no_ easy steps to take to get you to where you want to be.

-In these cases, you have to _widen your horizon_ and look at where you want to go: and this is the process of _design_.
You're not necessarily now taking steps on the [Risk Landscape](/risks/Risk-Landscape.md), but imagining a place on the [Risk Landscape](/risks/Risk-Landscape.md) where you want to be, and checking it against your [Internal Model](/thinking/Glossary.md#internal-model) for validity. +In these cases, you have to _widen your horizon_ and look at where you want to go: and this is the process of _design_. You're not necessarily now taking steps on the [Risk Landscape](/risks/Risk-Landscape), but imagining a place on the [Risk Landscape](/risks/Risk-Landscape) where you want to be, and checking it against your [Internal Model](/thinking/Glossary#internal-model) for validity. ## See Also diff --git a/docs/practices/Planning-And-Management/Pressure.md b/docs/practices/Planning-And-Management/Pressure.md new file mode 100644 index 000000000..78e9657d8 --- /dev/null +++ b/docs/practices/Planning-And-Management/Pressure.md @@ -0,0 +1,45 @@ +--- +title: Pressure +description: The practice of exerting influence on team members to ensure tasks are completed on time and to a high standard. +tags: + - Pressure + - Practice +featured: + class: c + element: 'Pressure' +practice: + aka: + - "Exerting Pressure" + - "Encouraging" + - "Pushing Deadlines" + - "High Expectations" + mitigates: + - tag: Schedule Risk + reason: "Helps in meeting tight deadlines by pushing team members to work efficiently." + - tag: Performance Risk + reason: "Encourages team members to maintain high performance and quality standards." + - tag: Motivation Risk + reason: "Can increase motivation by setting clear expectations and goals." + attendant: + - tag: Key Person Risk + reason: "Excessive pressure can lead to stress and burnout among team members." + - tag: Operational Risk + reason: "May compromise quality if team members rush to meet deadlines." + - tag: Agency Risk + reason: "Can negatively impact team morale and job satisfaction." + related: + - ../Deployment-And-Operations/Release + - ../Collaboration-And-Communication/Stakeholder-Management +--- + + + +## Description + +> "Psychological stress in the workplace refers to the emotional strain and pressure that can arise from job demands, work environment, and organizational practices. Excessive stress can lead to negative health outcomes, reduced productivity, and decreased job satisfaction." - [Psychological stress, _Wikipedia_](https://en.wikipedia.org/wiki/Psychological_stress) + +Applying pressure involves exerting influence on team members to ensure tasks are completed on time and to a high standard. This practice is often used in high-stakes projects where meeting deadlines and maintaining quality is critical. While it can be effective in ensuring timely completion of tasks, it must be applied judiciously to avoid negative consequences such as burnout, decreased quality, and reduced morale. + +## See Also + + \ No newline at end of file diff --git a/docs/practices/Planning-And-Management/Prioritising.md b/docs/practices/Planning-And-Management/Prioritising.md index 3bef8864c..d02ab4f81 100644 --- a/docs/practices/Planning-And-Management/Prioritising.md +++ b/docs/practices/Planning-And-Management/Prioritising.md @@ -42,9 +42,9 @@ Prioritising in software development involves defining the Minimum Viable Produc Prioritisation is a key process in trying to focus on building _useful_ stuff first. It could look like: - - [A Sprint Planning Meeting](Agile): Deciding on the most important things for the team to build in a time period. 
- - [Phased Delivery](Waterfall): Breaking a large project into smaller-scoped projects. - - [A Backlog](Lean): Having tasks or stories in delivery order in a queue. + - [A Sprint Planning Meeting](/tags/Agile): Deciding on the most important things for the team to build in a time period. + - [Phased Delivery](/methods/Waterfall): Breaking a large project into smaller-scoped projects. + - [A Backlog](/methods/Lean): Having tasks or stories in delivery order in a queue. - **Task Decomposition**: Breaking down larger units of a task into smaller items. Often, [Requirements](Requirements-Capture) come _bundled together_ and need to be broken down so that we work on just the most vital parts, as in - [Identifying the MVP](https://en.wikipedia.org/wiki/Minimum_viable_product): Trying to cast out _all_ non-essential functionality. @@ -53,7 +53,7 @@ Prioritisation is a key process in trying to focus on building _useful_ stuff fi - **Big Bang**: Delivering all the functionality in a single go. - **Cycles, or Phases**: Splitting a large project into smaller chunks. - **Sprints**: Delivering with a fixed cadence, e.g. every month or week. -- [Continuous Delivery](DevOps): Delivering functionality one-piece-at-a-time. +- [Continuous Delivery](/methods/DevOps): Delivering functionality one-piece-at-a-time. Usually, risk is mitigated by **Prioritisation**. But sometimes, it's not appropriate: when Sweden changed from driving on the left side of the road to the right (in order to be in line with the rest of Europe), the changeover _had_ to be **Big Bang** and the whole country changed [overnight](https://en.wikipedia.org/wiki/Dagen_H). @@ -61,17 +61,17 @@ Usually, risk is mitigated by **Prioritisation**. But sometimes, it's not appro There are several ways you can prioritise work: -- **Largest Mitigation First**: What's the thing we can do right now to reduce our [Attendant Risk](/thinking/Glossary.md#attendant-risk) most? This is sometimes hard to quantify, given [Hidden Risk](/thinking/Glossary.md#hidden-risk), so maybe an easier metric is... -- **Biggest Win**: What's the best thing we can do right now to reduce [Attendant Risk](/thinking/Glossary.md#attendant-risk) for least additional [Schedule-Risk](/tags/Schedule-Risk)? (i.e. simply considering how much *work* is likely to be involved) +- **Largest Mitigation First**: What's the thing we can do right now to reduce our [Attendant Risk](/tags/Attendant-Risk) most? This is sometimes hard to quantify, given [Hidden Risk](/thinking/Glossary#hidden-risk), so maybe an easier metric is... +- **Biggest Win**: What's the best thing we can do right now to reduce [Attendant Risk](/tags/Attendant-Risk) for least additional [Schedule-Risk](/tags/Schedule-Risk)? (i.e. simply considering how much *work* is likely to be involved) - **Dependency Order**: Sometimes, you can't build Feature A until Feature B is complete. Prioritisation helps to identify and mitigate [Dependency Risk](/tags/Dependency-Risk). -By prioritising, you get to [Meet Reality](/thinking/Meeting-Reality.md) _sooner_ and _more frequently_ and in _small chunks_. +By prioritising, you get to [Meet Reality](/thinking/Meeting-Reality) _sooner_ and _more frequently_ and in _small chunks_. 
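To make the **Biggest Win** heuristic above concrete, here is a minimal sketch in Python (the task names and numbers are illustrative assumptions, not data from this article) that orders a backlog by estimated risk reduction per unit of effort:

```python
# A minimal sketch of the "Biggest Win" heuristic: order work by
# estimated risk reduction per day of effort. All task names and
# numbers below are illustrative assumptions.

tasks = [
    {"name": "Add login rate-limiting", "risk_reduction": 8, "effort_days": 2},
    {"name": "Rewrite reporting module", "risk_reduction": 5, "effort_days": 10},
    {"name": "Automate database backups", "risk_reduction": 9, "effort_days": 3},
]

# Biggest win first: highest risk reduction per day of effort.
for task in sorted(tasks, key=lambda t: t["risk_reduction"] / t["effort_days"], reverse=True):
    ratio = task["risk_reduction"] / task["effort_days"]
    print(f"{task['name']}: {ratio:.1f} risk points per day")
```

A real backlog would also have to fold in the **Dependency Order** constraint (e.g. via a topological sort), which is exactly why prioritisation is rarely a pure calculation.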
See: - - [Operations Management](/risks/Operational-Risk.md#operations-management) - - [Planning](/risks/Operational-Risk.md#planning) - - [Tracking Risks](/thinking/Track-Risk.md#visualising-risks) + - [Operations Management](/risks/Operational-Risk#operations-management) + - [Planning](/risks/Operational-Risk#planning) + - [Tracking Risks](/thinking/Track-Risk#visualising-risks) ## See Also diff --git a/docs/practices/Testing-and-Quality-Assurance/Automated-Testing.md b/docs/practices/Testing-and-Quality-Assurance/Automated-Testing.md index 8059ee6d1..90c548adf 100644 --- a/docs/practices/Testing-and-Quality-Assurance/Automated-Testing.md +++ b/docs/practices/Testing-and-Quality-Assurance/Automated-Testing.md @@ -43,8 +43,8 @@ Unit testing involves writing and running tests for individual units or componen See: - - [Development Process](/thinking/Development-Process.md#a-toy-process) - - [Unit Testing (Meeting Reality)](/thinking/Meeting-Reality.md#example-automation) + - [Development Process](/thinking/Development-Process#a-toy-process) + - [Unit Testing (Meeting Reality)](/thinking/Meeting-Reality#example-automation) ## See Also diff --git a/docs/practices/Testing-and-Quality-Assurance/Integration-Testing.md b/docs/practices/Testing-and-Quality-Assurance/Integration-Testing.md index dd92705cb..985cebf7d 100644 --- a/docs/practices/Testing-and-Quality-Assurance/Integration-Testing.md +++ b/docs/practices/Testing-and-Quality-Assurance/Integration-Testing.md @@ -39,8 +39,8 @@ practice: Integration Testing involves testing combined parts of the software to ensure they work together correctly. This practice helps in identifying and fixing issues that arise when individual components interact, ensuring that the overall system functions as intended. See: -- [Development Process](/thinking/Development-Process.md#a-toy-process)_ -- [Production (Cadence)](/thinking/Cadence.md#production) +- [Development Process](/thinking/Development-Process#a-toy-process) +- [Production (Cadence)](/thinking/Cadence#production) ## See Also diff --git a/docs/practices/Testing-and-Quality-Assurance/Regression-Testing.md b/docs/practices/Testing-and-Quality-Assurance/Regression-Testing.md index 74db2b219..00310edd5 100644 --- a/docs/practices/Testing-and-Quality-Assurance/Regression-Testing.md +++ b/docs/practices/Testing-and-Quality-Assurance/Regression-Testing.md @@ -107,9 +107,9 @@ If none of the other issues warn you against regression testing, this should be One of the biggest problems is that, eventually, it’s probably too much trouble. You have to get both systems up and running at the same time, with the same input data, and deterministic services, and you might have to access the production systems for this, and then get the data out of them, and then run the diff tool and eyeball the numbers. You’ll probably have to clone databases so that A* has the same data as A. You’ll probably have to do that every time you run it as A is a live system... -Regression testing _seems like_ it's going to be a big win. Sometimes, if you're lucky, it might be. But at least now you can see some of the [Hidden Risks](/thinking/Glossary.md#hidden-risk) associated with it. +Regression testing _seems like_ it's going to be a big win. Sometimes, if you're lucky, it might be. But at least now you can see some of the [Hidden Risks](/thinking/Glossary#hidden-risk) associated with it. 
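The mechanics of the comparison described above can be sketched very simply - replay the same recorded inputs through the old system (A) and the new system (A*) and diff the results. A minimal sketch, assuming both systems can be called as pure functions (which, as the text points out, is rarely true of live systems - and that gap is where the hidden risks live):

```python
# A minimal regression-testing sketch: replay recorded inputs through
# the old implementation ("A") and the new one ("A*"), and report any
# differences. Both functions here are stand-in assumptions.

def system_a(x):       # the existing, trusted behaviour
    return x * 2

def system_a_star(x):  # the replacement under test
    return x + x

recorded_inputs = [0, 1, 5, -3, 100]

mismatches = [(i, system_a(i), system_a_star(i))
              for i in recorded_inputs
              if system_a(i) != system_a_star(i)]

for inp, old, new in mismatches:
    print(f"input {inp!r}: A={old!r} A*={new!r}")

print(f"{len(mismatches)} difference(s) across {len(recorded_inputs)} inputs")
```

Everything the article warns about - non-determinism, live data, environment setup - is what breaks the neat assumption that A and A* are pure functions of their inputs.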
-Although [Acceptance Tests](Testing) seem like a harder option, they are much easier to debug, and are probably what you really need: what they tend to do though is surface problems in the original system that you didn't want to fix. But, is that a bad thing? +Although [Automated Acceptance Tests](Automated-Testing) seem like a harder option, they are much easier to debug, and are probably what you really need: what they tend to do though is surface problems in the original system that you didn't want to fix. But, is that a bad thing? Likelihood is, the payback of regression testing is probably slight. But, if you can confidently say that none of these risks is going to present a serious problem to you, then by all means, skip writing acceptance tests and go ahead. diff --git a/docs/practices/Testing-and-Quality-Assurance/Security-Testing.md b/docs/practices/Testing-and-Quality-Assurance/Security-Testing.md index 43f3ab7d6..083b5909b 100644 --- a/docs/practices/Testing-and-Quality-Assurance/Security-Testing.md +++ b/docs/practices/Testing-and-Quality-Assurance/Security-Testing.md @@ -41,7 +41,7 @@ practice: Security Testing involves assessing the security of software applications to identify vulnerabilities and ensure they are protected against threats and attacks. This practice is essential for maintaining the integrity, confidentiality, and availability of software systems. See: - - [Penetration Testing](/risks/Operational-Risk.md#scanning-the-operational-context) + - [Penetration Testing](/risks/Operational-Risk#scanning-the-operational-context) ## See Also diff --git a/docs/practices/Testing-and-Quality-Assurance/User-Acceptance-Testing.md b/docs/practices/Testing-and-Quality-Assurance/User-Acceptance-Testing.md index 84dfcfea8..9b7f7cb14 100644 --- a/docs/practices/Testing-and-Quality-Assurance/User-Acceptance-Testing.md +++ b/docs/practices/Testing-and-Quality-Assurance/User-Acceptance-Testing.md @@ -44,11 +44,11 @@ practice: User Acceptance Testing (UAT) involves having end users test the software to ensure it meets their requirements and expectations. This practice helps in identifying any issues that may not have been caught during previous testing phases and ensures that the final product is user-friendly and functional. See: - - [Consider Payoff](/thinking/Consider-Payoff.md) - - [Development Process](/thinking/Development-Process.md#a-toy-process)_ - - [User Acceptance Testing (Meeting Reality)](/thinking/Meeting-Reality.md#example-user-acceptance-testing-uat) - - [Manual Testing](/thinking/Cadence.md#development-cycle-time) - - [Waterfall (One Size Fits No One)](thinking/One-Size-Fits-No-One.md) + - [Consider Payoff](/thinking/Consider-Payoff) + - [Development Process](/thinking/Development-Process#a-toy-process) + - [User Acceptance Testing (Meeting Reality)](/thinking/Meeting-Reality#example-user-acceptance-testing-uat) + - [Manual Testing](/thinking/Cadence#development-cycle-time) + - [Waterfall (One Size Fits No One)](/thinking/One-Size-Fits-No-One) ## See Also diff --git a/docs/risks/A-Pattern-Language.md b/docs/risks/A-Pattern-Language.md index 41716e8e7..27ebc6290 100644 --- a/docs/risks/A-Pattern-Language.md +++ b/docs/risks/A-Pattern-Language.md @@ -42,6 +42,6 @@ Risk-First isn't an exhaustive guide to every possible software development prac Neither is this a practitioner's guide to using any particular methodology: if you've come here to learn the best way to use Story Points (for example), then you're in the wrong place. 
There are plenty of places you can find that information already. Where possible, this site will link to or reference concepts on Wikipedia or the wider Internet for further reading on each subject. -With those caveats in place, let's go on and explore [The Risk Landscape](/risks/Risk-Landscape.md). +With those caveats in place, let's go on and explore [The Risk Landscape](/risks/Risk-Landscape). diff --git a/docs/risks/Communication-Risks/Channel-Risk.md b/docs/risks/Communication-Risks/Channel-Risk.md index 90a9e282b..8405580a5 100644 --- a/docs/risks/Communication-Risks/Channel-Risk.md +++ b/docs/risks/Communication-Risks/Channel-Risk.md @@ -2,7 +2,7 @@ title: Channel Risk description: Risks due to the inadequacy of the physical channel used to communicate our messages. e.g. noise, loss, interception, corruption. -slug: risks/Channel-Risk +slug: /risks/Channel-Risk featured: class: c element: '' @@ -50,4 +50,4 @@ This works both ways. Let's looks at some of the **Channel Risks** from the poi ![Marketing Communication](/img/generated/risks/communication/communication_marketing.svg) -[Internal Models](/thinking/Glossary.md#internal-model) don't magically get populated with the information they need: they fill up gradually, as shown in the diagram above. Popular products and ideas _spread_, by word-of-mouth or other means. Part of the job of being a good technologist is to keep track of new **Ideas**, **Concepts** and **Options**, so as to use them as [Dependencies](/tags/Dependency-Risk) when needed. +[Internal Models](/thinking/Glossary#internal-model) don't magically get populated with the information they need: they fill up gradually, as shown in the diagram above. Popular products and ideas _spread_, by word-of-mouth or other means. Part of the job of being a good technologist is to keep track of new **Ideas**, **Concepts** and **Options**, so as to use them as [Dependencies](/tags/Dependency-Risk) when needed. diff --git a/docs/risks/Communication-Risks/Communication-Risk.md b/docs/risks/Communication-Risks/Communication-Risk.md index 920942154..f70855397 100644 --- a/docs/risks/Communication-Risks/Communication-Risk.md +++ b/docs/risks/Communication-Risks/Communication-Risk.md @@ -27,7 +27,7 @@ part_of: Operational Risk If we all had identical knowledge, there would be no need to do any communicating at all, and therefore no [Communication Risk](/tags/Communication-Risk). -But people are not all-knowing oracles. We rely on our _senses_ to improve our [Internal Models](/thinking/Glossary.md#internal-model) of the world. There is [Communication Risk](/tags/Communication-Risk) here - we might overlook something vital (like an on-coming truck) or mistake something someone says (like "Don't cut the green wire"). +But people are not all-knowing oracles. We rely on our _senses_ to improve our [Internal Models](/thinking/Glossary#internal-model) of the world. There is [Communication Risk](/tags/Communication-Risk) here - we might overlook something vital (like an on-coming truck) or mistake something someone says (like "Don't cut the green wire"). So, we are going to go on a journey discovering Communication Risk, covering: @@ -59,7 +59,7 @@ But it's not just transmission. [Communication Risk](/tags/Communication-Risk) |Reception | **Bob** doesn't hear the message clearly (maybe there is background noise). | |Decoding | **Bob** might not decode what was said into a meaningful sentence. 
| |Interpretation | Assuming **Bob** _has_ heard, will he correctly **interpret** which type of chips (or chops) **Alice** was talking about? | -|Reconciliation | Does **Bob** believe the message? Will he **reconcile** the information into his [Internal Model](/thinking/Glossary.md#internal-model) and act on it? Perhaps not, if **Bob** forgets, or thinks that there are chips at home already.| +|Reconciliation | Does **Bob** believe the message? Will he **reconcile** the information into his [Internal Model](/thinking/Glossary#internal-model) and act on it? Perhaps not, if **Bob** forgets, or thinks that there are chips at home already.| ## Approach To Communication Risk @@ -72,7 +72,7 @@ There is a symmetry about the steps going on in Shannon's model and we're going - **[Channels](https://en.wikipedia.org/wiki/Communication_channel)**: the medium via which the communication is happening. - **[Protocols](https://en.wikipedia.org/wiki/Communication_protocol)**: the systems of rules that allow two or more entities of a communications system to transmit information. - **[Messages](https://en.wikipedia.org/wiki/Message)**: the information we want to convey. - - **[Internal Models](/thinking/Glossary.md#internal-model)**: the sources and destinations for the messages. Updating internal models (whether in our heads or machines) is the reason why we're communicating. + - **[Internal Models](/thinking/Glossary#internal-model)**: the sources and destinations for the messages. Updating internal models (whether in our heads or machines) is the reason why we're communicating. As we look at these four stages we'll consider the risks of each. diff --git a/docs/risks/Communication-Risks/Internal-Model-Risk.md b/docs/risks/Communication-Risks/Internal-Model-Risk.md index 3bc8745cc..b23e3aa91 100644 --- a/docs/risks/Communication-Risks/Internal-Model-Risk.md +++ b/docs/risks/Communication-Risks/Internal-Model-Risk.md @@ -2,7 +2,7 @@ title: Internal Model Risk description: Risks arising from insufficient or erroneous internal models of reality. -slug: risks/Internal-Model-Risk +slug: /risks/Internal-Model-Risk featured: class: c element: '' diff --git a/docs/risks/Communication-Risks/Invisibility-Risk.md b/docs/risks/Communication-Risks/Invisibility-Risk.md index 7500b8fed..3c0d57c03 100644 --- a/docs/risks/Communication-Risks/Invisibility-Risk.md +++ b/docs/risks/Communication-Risks/Invisibility-Risk.md @@ -2,7 +2,7 @@ title: Invisibility Risk description: Risks caused by the choice of abstractions we use in communication. -slug: risks/Invisibility-Risk +slug: /risks/Invisibility-Risk featured: class: c element: '' @@ -15,9 +15,9 @@ part_of: Communication Risk -Another cost of [Abstraction](/thinking/Glossary.md#abstraction) is [Invisibility Risk](/tags/Invisibility-Risk). While abstraction is a massively powerful technique, it lets the function of a thing hide behind the layers of abstraction and become invisible. +Another cost of [Abstraction](/thinking/Glossary#abstraction) is [Invisibility Risk](/tags/Invisibility-Risk). While abstraction is a massively powerful technique, it lets the function of a thing hide behind the layers of abstraction and become invisible. -As we saw above, [Protocols](Communication-Risk.md#protocols) allow things like the Internet to happen - this is amazing! But the higher level protocols _hide_ the details of the lower ones. HTTP _didn't know anything about_ IP packets, for example. 
+As we saw above, [Protocols](Communication-Risk#protocols) allow things like the Internet to happen - this is amazing! But the higher level protocols _hide_ the details of the lower ones. HTTP _didn't know anything about_ IP packets, for example. Abstractions hide detail, then. But when they hide from you the details you need this is called a [leaky abstraction](https://en.wikipedia.org/wiki/Leaky_abstraction). Since all abstractions hide information, they are all potentially leaky. @@ -25,7 +25,7 @@ Abstractions hide detail, then. But when they hide from you the details you nee [Invisibility Risk](/tags/Invisibility-Risk) is risk due to information not sent. Because humans don't need a complete understanding of a concept to use it, we can cope with some [Invisibility Risk](/tags/Invisibility-Risk) in communication and this saves us time when we're talking. It would be _painful_ to have conversations if, say, the other person needed to understand everything about how cars worked in order to discuss cars. -For people, [Abstraction](/thinking/Glossary.md#abstraction) is a tool that we can use to refer to other concepts, without necessarily knowing how the concepts work. This divorcing of "what" from "how" is the essence of abstraction and is what makes language useful. +For people, [Abstraction](/thinking/Glossary#abstraction) is a tool that we can use to refer to other concepts, without necessarily knowing how the concepts work. This divorcing of "what" from "how" is the essence of abstraction and is what makes language useful. The debt of [Invisibility Risk](/tags/Invisibility-Risk) comes due when you realise that _not_ being given the details _prevents_ you from reasoning about it effectively. Let's think about this in the context of a project status meeting, for example: @@ -47,7 +47,7 @@ _Referring to **f** is a much simpler job than understanding **f**._ We try to mitigate this via documentation but this is a terrible deal: documentation is necessarily a simplified explanation of the abstraction, so will still suffer from [Invisibility Risk](/tags/Invisibility-Risk). -[Invisibility Risk](/tags/Invisibility-Risk) is mainly [Hidden Risk](/thinking/Glossary.md#hidden-risk). (Mostly, _you don't know what you don't know_.) But you can carelessly _hide things from yourself_ with software: +[Invisibility Risk](/tags/Invisibility-Risk) is mainly [Hidden Risk](/thinking/Glossary#hidden-risk). (Mostly, _you don't know what you don't know_.) But you can carelessly _hide things from yourself_ with software: - Adding a thread to an application that doesn't report whether it worked, failed, or is running out of control and consuming all the cycles of the CPU. - Redundancy can increase reliability, but only if you know when servers fail, and fix them quickly. Otherwise, you only see problems when the last server fails. diff --git a/docs/risks/Communication-Risks/Learning-Curve-Risk.md b/docs/risks/Communication-Risks/Learning-Curve-Risk.md index 455b4a1b0..08c46617a 100644 --- a/docs/risks/Communication-Risks/Learning-Curve-Risk.md +++ b/docs/risks/Communication-Risks/Learning-Curve-Risk.md @@ -2,7 +2,7 @@ title: Learning Curve Risk description: Risks due to the difficulty faced in updating an internal model. 
-slug: risks/Learning-Curve-Risk +slug: /risks/Learning-Curve-Risk featured: class: c element: '' @@ -15,7 +15,7 @@ part_of: Communication Risk -If the messages we are receiving force us to update our [Internal Model](/thinking/Glossary.md#internal-model) too much, we can suffer from the problem of "too steep a [Learning Curve](https://en.wikipedia.org/wiki/Learning_curve)" or "[Information Overload](https://en.wikipedia.org/wiki/Information_overload)", where the messages force us to adapt our [Internal Model](/thinking/Glossary.md#internal-model) too quickly for our brains to keep up. +If the messages we are receiving force us to update our [Internal Model](/thinking/Glossary#internal-model) too much, we can suffer from the problem of "too steep a [Learning Curve](https://en.wikipedia.org/wiki/Learning_curve)" or "[Information Overload](https://en.wikipedia.org/wiki/Information_overload)", where the messages force us to adapt our [Internal Model](/thinking/Glossary#internal-model) too quickly for our brains to keep up. Commonly, the easiest option is just to ignore the information channel completely in these cases. @@ -29,6 +29,6 @@ By now it should be clear that it's going to be _both_ quite hard to read and wr But now we should be able to see the reason why it's harder to read than write too: - - When reading code, you are having to shift your [Internal Model](/thinking/Glossary.md#internal-model) to wherever the code is, accepting decisions that you might not agree with and accepting counter-intuitive logical leaps. i.e. [Learning Curve Risk](/tags/Learning-Curve-Risk). _(cf. [Principle of Least Surprise](https://en.wikipedia.org/wiki/Principle_of_least_astonishment))_ - - There is no [Feedback Loop](/thinking/Glossary.md#feedback-loop) between your [Internal Model](/thinking/Glossary.md#internal-model) and the [Reality](/tags/Meeting-Reality) of the code, opening you up to [misinterpretation](Communication-Risk.md#misinterpretation). When you write code, your compiler and tests give you this. + - When reading code, you are having to shift your [Internal Model](/thinking/Glossary#internal-model) to wherever the code is, accepting decisions that you might not agree with and accepting counter-intuitive logical leaps. i.e. [Learning Curve Risk](/tags/Learning-Curve-Risk). _(cf. [Principle of Least Surprise](https://en.wikipedia.org/wiki/Principle_of_least_astonishment))_ + - There is no [Feedback Loop](/thinking/Glossary#feedback-loop) between your [Internal Model](/thinking/Glossary#internal-model) and the [Reality](/tags/Meeting-Reality) of the code, opening you up to [misinterpretation](Communication-Risk#misinterpretation). When you write code, your compiler and tests give you this. - While reading code _takes less time_ than writing it, this also means the [Learning Curve](/tags/Learning-Curve-Risk) is steeper. \ No newline at end of file diff --git a/docs/risks/Communication-Risks/Message-Risk.md b/docs/risks/Communication-Risks/Message-Risk.md index 7b8e16966..0e22ef2d5 100644 --- a/docs/risks/Communication-Risks/Message-Risk.md +++ b/docs/risks/Communication-Risks/Message-Risk.md @@ -2,7 +2,7 @@ title: Message Risk description: Risks caused by the difficulty of composing and interpreting messages in the communication process. -slug: risks/Message-Risk +slug: /risks/Message-Risk featured: class: c element: '' @@ -47,6 +47,6 @@ For people, nothing exists unless we have a name for it. The > "The famous pipe. How people reproached me for it! And yet, could you stuff my pipe? 
No, it's just a representation, is it not? So if I had written on my picture “This is a pipe”, I'd have been lying!" - [Rene Magritte, of _The Treachery of Images_](https://en.wikipedia.org/wiki/The_Treachery_of_Images) -People don't rely on rigorous definitions of abstractions like computers do; we make do with fuzzy definitions of concepts and ideas. We rely on [Abstraction](/thinking/Glossary.md#abstraction) to move between the name of a thing and the _idea of a thing_. +People don't rely on rigorous definitions of abstractions like computers do; we make do with fuzzy definitions of concepts and ideas. We rely on [Abstraction](/thinking/Glossary#abstraction) to move between the name of a thing and the _idea of a thing_. -This brings about [Misinterpretation](Communication-Risk.md#misinterpretation): names are not _precise_, and concepts mean different things to different people. We can't be sure that other people have the same meaning for a name that we have. +This brings about [Misinterpretation](Communication-Risk#misinterpretation): names are not _precise_, and concepts mean different things to different people. We can't be sure that other people have the same meaning for a name that we have. diff --git a/docs/risks/Communication-Risks/Protocol-Risk.md b/docs/risks/Communication-Risks/Protocol-Risk.md index 6bd99c3ca..89944979b 100644 --- a/docs/risks/Communication-Risks/Protocol-Risk.md +++ b/docs/risks/Communication-Risks/Protocol-Risk.md @@ -2,7 +2,7 @@ title: Protocol Risk description: Risks due to the failure of encoding or decoding messages between two parties in communication. -slug: risks/Protocol-Risk +slug: /risks/Protocol-Risk featured: class: c element: '' @@ -16,9 +16,9 @@ part_of: Communication Risk > "A communication protocol is a system of rules that allow two or more entities of a communications system to transmit information. " - [Communication Protocol, Wikipedia](https://en.wikipedia.org/wiki/Communication_protocol) -In this section I want to examine the concept of [Communication Protocols](https://en.wikipedia.org/wiki/Communication_protocol) and how they relate to [Abstraction](/thinking/Glossary.md#abstraction), which is implicated over and over again in different types of risk we will be looking at. +In this section I want to examine the concept of [Communication Protocols](https://en.wikipedia.org/wiki/Communication_protocol) and how they relate to [Abstraction](/thinking/Glossary#abstraction), which is implicated over and over again in different types of risk we will be looking at. -[Abstraction](/thinking/Glossary.md#abstraction) means separating the _definition_ of something from the _use_ of something. It's a widely applicable concept, but our example below will be specific to communication, and looking at the abstractions involved in loading a web page. +[Abstraction](/thinking/Glossary#abstraction) means separating the _definition_ of something from the _use_ of something. It's a widely applicable concept, but our example below will be specific to communication, and looking at the abstractions involved in loading a web page. ### Clients and Servers @@ -45,7 +45,7 @@ http://google.com/preferences The first thing that happens is that the name `google.com` is _resolved_ by DNS. This means that the browser looks up the domain name `google.com` and gets back an [IP Address](https://en.wikipedia.org/wiki/IP_address). An IP Address is a bit like a postal address, but instead of being the address of a building, it is the address of a particular computer. 
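You can observe this resolution step directly. Here is a minimal sketch using Python's standard library (network access required; the hostname is the one from the example above, and the addresses returned will vary by location and over time):

```python
# A minimal sketch of the DNS resolution step described above:
# turning the human-readable name into the IP addresses that the
# network actually uses. Requires network access; results vary.
import socket

infos = socket.getaddrinfo("google.com", 80, proto=socket.IPPROTO_TCP)
addresses = {info[4][0] for info in infos}  # sockaddr is (host, port, ...)
print(addresses)  # e.g. {'216.58.204.78', ...}
```

The fact that the set of addresses can change between runs is the point: the _name_ is the stable abstraction, and the addresses behind it are free to vary.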
-This is an [Abstraction](/thinking/Glossary.md#abstraction): although computers use IP addresses like `216.58.204.78`, I can use a human-readable _name_, `google.com`. +This is an [Abstraction](/thinking/Glossary#abstraction): although computers use IP addresses like `216.58.204.78`, I can use a human-readable _name_, `google.com`. The address `google.com` doesn't even necessarily resolve to that same address each time: Google serves a lot of traffic so there are multiple servers handling the requests and _they have multiple IP addresses for `google.com`_. But as a user, I don't have to worry about this detail. @@ -58,7 +58,7 @@ Each packet consists of two things: - The **IP address**, which tells the network where to send the packet (again, much like you'd write the address on the outside of a parcel). - The **payload**, the stream of bytes for processing at the destination, like the contents of the parcel. -But even this concept of "packets" is an [abstraction](/thinking/Glossary.md#abstraction). Although the network understands this protocol, we might be using Wired Ethernet cables, or WiFi, 4G or _something else_ beneath that. You can think of this as analogous to the parcel being delivered on foot, by plane or by car - it doesn't matter to the sender of the parcel! +But even this concept of "packets" is an [abstraction](/thinking/Glossary#abstraction). Although the network understands this protocol, we might be using Wired Ethernet cables, or WiFi, 4G or _something else_ beneath that. You can think of this as analogous to the parcel being delivered on foot, by plane or by car - it doesn't matter to the sender of the parcel! ### 3. 802.11 - WiFi Protocol @@ -68,7 +68,7 @@ And WiFi is just the first hop. After the WiFi receiver, there will be protocol ### 4. TCP - Transmission Control Protocol -Another [abstraction](/thinking/Glossary.md#abstraction) going on here is that my browser believes it has a "connection" to the server. This is provided by the TCP protocol. +Another [abstraction](/thinking/Glossary#abstraction) going on here is that my browser believes it has a "connection" to the server. This is provided by the TCP protocol. But this is a fiction - my "connection" is built on the IP protocol, which as we saw above is just packets of data on the network. So there are lots of packets floating around which say "this connection is still alive" and "I'm message 5 in the sequence" and so on in order to maintain this fiction. diff --git a/docs/risks/Communication-Risks/Trust-And-Belief-Risk.md b/docs/risks/Communication-Risks/Trust-And-Belief-Risk.md index ec9d37aed..e47f05db1 100644 --- a/docs/risks/Communication-Risks/Trust-And-Belief-Risk.md +++ b/docs/risks/Communication-Risks/Trust-And-Belief-Risk.md @@ -2,7 +2,7 @@ title: "Trust And Belief Risk" description: Risk that a party we are communicating with can't be trusted, as it has agency or is unreliable in some other way. -slug: risks/Trust-And-Belief-Risk +slug: /risks/Trust-And-Belief-Risk featured: class: c element: '' @@ -18,8 +18,8 @@ Although protocols can sometimes handle security features of communication (such Even if the **receiver** trusts the **sender**, they may not _believe_ the message. Let's look at some reasons for that: -- **[Weltanschauung (World View)](https://en.wikipedia.org/wiki/World_view)**: the ethics, values and beliefs in the receiver's [Internal Model](/thinking/Glossary.md#internal-model) may be incompatible to those from the sender. 
+- **[Weltanschauung (World View)](https://en.wikipedia.org/wiki/World_view)**: the ethics, values and beliefs in the receiver's [Internal Model](/thinking/Glossary#internal-model) may be incompatible with those from the sender. - **[Relativism](https://en.wikipedia.org/wiki/Relativism)** is the concept that there are no universal truths. Every truth is from a frame of reference. For example, what constitutes _offensive language_ is dependent on the listener. - **[Psycholinguistics](https://en.wikipedia.org/wiki/Psycholinguistics)** is the study of how humans acquire languages. There are different languages, dialects, and _industry dialects_. We all understand language in different ways, take different meanings and apply different contexts to the messages. -From the point-of-view of [Marketing Communications](Communication-Risk.md#marketing-communications), choosing the right message is part of the battle. You are trying to communicate your idea in such a way as to mitigate Trust & Belief Risk. \ No newline at end of file +From the point-of-view of [Marketing Communications](Communication-Risk#marketing-communications), choosing the right message is part of the battle. You are trying to communicate your idea in such a way as to mitigate Trust & Belief Risk. \ No newline at end of file diff --git a/docs/risks/Complexity-Risk.md b/docs/risks/Complexity-Risk.md index 5b02249bb..345b62677 100644 --- a/docs/risks/Complexity-Risk.md +++ b/docs/risks/Complexity-Risk.md @@ -11,6 +11,8 @@ tags: - Risks - Refactoring - Complexity Risk + - Codebase Risk + - Dead End Risk - Abstraction definitions: - name: Abstraction @@ -20,15 +22,15 @@ part_of: Operational Risk -[Complexity Risk](/tags/Complexity-Risk) is the [risk](/thinking/Glossary.md#risk) to your project due to its underlying "complexity". Here, we will break down exactly what we mean by complexity, look at where it can hide on a software project and discuss some ways in which we can manage this important risk. +[Complexity Risk](/tags/Complexity-Risk) is the [risk](/thinking/Glossary#risk) to your project due to its underlying "complexity". Here, we will break down exactly what we mean by complexity, look at where it can hide on a software project and discuss some ways in which we can manage this important risk. Here we will: - - Look at two ways in which complexity is measured, via [Kolmogorov Complexity](/risks/Complexity-Risk.md#kolmogorov-complexity) and [Graph-Connectivity](/risks/Complexity-Risk.md#connectivity). + - Look at two ways in which complexity is measured, via [Kolmogorov Complexity](/risks/Complexity-Risk#kolmogorov-complexity) and [Graph-Connectivity](/risks/Complexity-Risk#connectivity). - Define [Complexity Risk](/tags/Complexity-Risk), and the related risks of [Codebase Risk](/tags/Codebase-Risk) (complexity in your codebase) and [Dead-End Risk](/tags/Dead-End-Risk) (risk of implementations getting "stuck"). - - Discuss ways to think about complexity: as [mass](/risks/Complexity-Risk.md#complexity-is-mass), [technical debt](/risks/Complexity-Risk.md#technical-debt) and [mess](/risks/Complexity-Risk.md#kitchen-analogy). + - Discuss ways to think about complexity: as [mass](/risks/Complexity-Risk#complexity-is-mass), [technical debt](/risks/Complexity-Risk#technical-debt) and [mess](/risks/Complexity-Risk#kitchen-analogy). - Discuss ways to manage complexity risk, such as modularisation, hierarchy, use of languages and libraries and by avoiding feature creep. 
- - Discuss places where Complexity Risk [manifests](/risks/Complexity-Risk.md#where-complexity-hides) in computing. + - Discuss places where Complexity Risk [manifests](/risks/Complexity-Risk#where-complexity-hides) in computing. ## Codebase Risk @@ -95,7 +97,7 @@ function out() { (7 ) ### Abstraction -What's happening here is that we're _exploiting a pattern_: we noticed that `abcd` occurs several times, so we defined it a single time and then used it over and over, like a stamp. This is called [abstraction](/thinking/Glossary.md#abstraction). +What's happening here is that we're _exploiting a pattern_: we noticed that `abcd` occurs several times, so we defined it a single time and then used it over and over, like a stamp. This is called [abstraction](/thinking/Glossary#abstraction). By applying abstraction, we can improve in the direction of the Kolmogorov lower bound. By allowing ourselves to say that _symbols_ (like `out` and `ABCD`) are worth one complexity point, we've allowed that we can be descriptive in naming `function` and `const`. Naming things is an important part of abstraction, because to use something, you have to be able to refer to it. @@ -227,9 +229,9 @@ The great complexity-reducing mechanism of modularisation is that _you only have ## Analogies -So, we've looked at some measures of software structure complexity. We can say "this is more complex than this" for a given piece of code or structure. We've also looked at three ways to manage it: [Abstraction](/thinking/Glossary.md#abstraction) and [Modularisation](/risks/Complexity-Risk.md#hierarchies-and-modularisation) and via [Dependencies](/risks/Complexity-Risk.md#languages-and-dependencies). +So, we've looked at some measures of software structure complexity. We can say "this is more complex than this" for a given piece of code or structure. We've also looked at three ways to manage it: [Abstraction](/thinking/Glossary#abstraction) and [Modularisation](/risks/Complexity-Risk#hierarchies-and-modularisation) and via [Dependencies](/risks/Complexity-Risk#languages-and-dependencies). -However, we've not really said why complexity entails [Risk](/thinking/Glossary.md#attendant-risk). So let's address that now by looking at three analogies, [Mass](/risks/Complexity-Risk.md#complexity-is-mass), [Technical Debt](/risks/Complexity-Risk.md#technical-debt) and [Mess](/risks/Complexity-Risk.md#kitchen-analogy) +However, we've not really said why complexity entails [Risk](/thinking/Glossary#attendant-risk). So let's address that now by looking at three analogies, [Mass](/risks/Complexity-Risk#complexity-is-mass), [Technical Debt](/risks/Complexity-Risk#technical-debt) and [Mess](/risks/Complexity-Risk#kitchen-analogy) ### Complexity is Mass @@ -255,19 +257,19 @@ At a basic level, [Complexity Risk](/tags/Complexity-Risk) heavily impacts on [S ### Technical Debt -The most common way we talk about [Complexity Risk](/tags/Complexity-Risk) in software is as [Technical Debt](/risks/Complexity-Risk.md#technical-debt): +The most common way we talk about [Complexity Risk](/tags/Complexity-Risk) in software is as [Technical Debt](/risks/Complexity-Risk#technical-debt): > "Shipping first time code is like going into debt. A little debt speeds development so long as it is paid back promptly with a rewrite... The danger occurs when the debt is not repaid. Every minute spent on not-quite-right code counts as interest on that debt. 
Entire engineering organisations can be brought to a stand-still under the debt load of an unconsolidated implementation, object-oriented or otherwise." - [Ward Cunningham, 1992, _Wikipedia, Technical Debt_](https://en.wikipedia.org/wiki/Technical_debt) -Building a low-complexity first-time solution is often a waste: in the first version, we're usually interested in reducing [Feature Risk](/tags/Feature-Risk) as fast as possible. That is, putting working software in front of users to get [feedback](/thinking/Meeting-Reality.md). We would rather carry [Complexity Risk](/tags/Complexity-Risk) than take on more [Schedule Risk](/tags/Schedule-Risk). +Building a low-complexity first-time solution is often a waste: in the first version, we're usually interested in reducing [Feature Risk](/tags/Feature-Risk) as fast as possible. That is, putting working software in front of users to get [feedback](/thinking/Meeting-Reality). We would rather carry [Complexity Risk](/tags/Complexity-Risk) than take on more [Schedule Risk](/tags/Schedule-Risk). -So a quick-and-dirty, over-complex implementation mitigates the same [Feature Risk](/tags/Feature-Risk) and allows you to [Meet Reality](/thinking/Meeting-Reality.md) faster. +So a quick-and-dirty, over-complex implementation mitigates the same [Feature Risk](/tags/Feature-Risk) and allows you to [Meet Reality](/thinking/Meeting-Reality) faster. But having mitigated the [Feature Risk](/tags/Feature-Risk) this way, you are likely exposed to a higher level of [Complexity Risk](/tags/Complexity-Risk) than would be desirable. This "carries forward" and means that in the future, you're going to be slower. As in the case of a real debt, "servicing" the debt incurs a steady, regular cost. ### Kitchen Analogy -It’s often hard to make the case for minimising [Technical Debt](/risks/Complexity-Risk.md#technical-debt): it often feels that there are more important priorities, especially when technical debt can be “swept under the carpet” and forgotten about until later. (See [Discounting](/thinking/Evaluating-Risk.md#discounting-the-future-to-zero).) +It’s often hard to make the case for minimising [Technical Debt](/risks/Complexity-Risk#technical-debt): it often feels that there are more important priorities, especially when technical debt can be “swept under the carpet” and forgotten about until later. (See [Discounting](/thinking/Evaluating-Risk#discounting-the-future-to-zero).) One helpful analogy I have found is to imagine your code-base is a kitchen. After preparing a meal (i.e. delivering the first implementation), _you need to tidy up the kitchen_. This is just something everyone does as a matter of _basic sanitation_. @@ -277,7 +279,7 @@ It's not long before someone comes down with food poisoning. ![Complexity Risk and its implications](/img/generated/risks/complexity/complexity-risk-impact.svg) -We wouldn't tolerate this behaviour in a restaurant kitchen, so why put up with it in a software project? This state-of-affairs is illustrated in the above diagram. Not only does [Complexity Risk](/tags/Complexity-Risk) slow down future development, it can be a cause of [Operational Risks](/tags/Operational-Risk) and [Security Risks](Agency-Risk.md#security). +We wouldn't tolerate this behaviour in a restaurant kitchen, so why put up with it in a software project? This state-of-affairs is illustrated in the above diagram. 
Not only does [Complexity Risk](/tags/Complexity-Risk) slow down future development, it can be a cause of [Operational Risks](/tags/Operational-Risk) and [Security Risks](Agency-Risk#security). ### Feature Creep @@ -320,7 +322,7 @@ Whichever option you choose, this is a [Dead End](#dead-end-risk) because with h Working in a complex environment makes it harder to see developmental dead-ends. -Sometimes, the path across the [Risk Landscape](/risks/Risk-Landscape.md) will take you to dead ends, and the only benefit to be gained is experience. No one deliberately chooses a dead end - often you can take an action that doesn't pay off, but frequently the dead end appears from nowhere: it's a [Hidden Risk](/thinking/Glossary.md#hidden-risk). The source of a lot of this hidden risk is the complexity of the [risk landscape](/thinking/Glossary.md#risk-landscape). +Sometimes, the path across the [Risk Landscape](/risks/Risk-Landscape) will take you to dead ends, and the only benefit to be gained is experience. No one deliberately chooses a dead end - often you can take an action that doesn't pay off, but frequently the dead end appears from nowhere: it's a [Hidden Risk](/thinking/Glossary#hidden-risk). The source of a lot of this hidden risk is the complexity of the [risk landscape](/thinking/Glossary#risk-landscape). [Version Control Systems](https://en.wikipedia.org/wiki/Version_control) like [Git](https://en.wikipedia.org/wiki/Git) are a useful mitigation of [Dead-End Risk](/tags/Dead-End-Risk), because using them means that at least you can _go back_ to the point where you made the bad decision and go a different way. Additionally, they provide you with backups against the often inadvertent [Dead-End Risk](/tags/Dead-End-Risk) of someone wiping the hard-disk. @@ -383,7 +385,7 @@ This is a strong argument for the use of libraries. But when should you use a l ### The Environment -The complexity of software tends to reflect the complexity of the environment it runs in, and complex software environments are more difficult to reason about, and more susceptible to [Operational Risk](/tags/Operational-Risk) and [Security-Risk](Agency-Risk.md#security). +The complexity of software tends to reflect the complexity of the environment it runs in, and complex software environments are more difficult to reason about, and more susceptible to [Operational Risk](/tags/Operational-Risk) and [Security-Risk](Agency-Risk#security). In particular, when we talk about the environment, we are talking about the number of external dependencies that the software has, and the risks we face when relying on those dependencies. diff --git a/docs/risks/Coordination-Risk.md b/docs/risks/Coordination-Risk.md index aa8d2b670..4384ba7f6 100644 --- a/docs/risks/Coordination-Risk.md +++ b/docs/risks/Coordination-Risk.md @@ -16,11 +16,11 @@ part_of: Operational Risk -As in [Agency Risk](/tags/Agency-Risk), we are going to use the term _agent_, which refers to anything with [agency](Agency-Risk.md#software-processes) in a system to make decisions: that is, an agent has an [Internal Model](/thinking/Glossary.md#internal-model) and can [take actions](/thinking/Glossary.md#taking-action) based on it. Here, we work on the assumption that the agents _are_ working towards a common [Goal](/thinking/Glossary.md#goal), even though in reality it's not always the case, as we saw in the section on [Agency Risk](/tags/Agency-Risk). 
+As in [Agency Risk](/tags/Agency-Risk), we are going to use the term _agent_, which refers to anything with [agency](Agency-Risk#software-processes) in a system to make decisions: that is, an agent has an [Internal Model](/thinking/Glossary#internal-model) and can [take actions](/thinking/Glossary#taking-action) based on it. Here, we work on the assumption that the agents _are_ working towards a common [Goal](/thinking/Glossary#goal), even though in reality it's not always the case, as we saw in the section on [Agency Risk](/tags/Agency-Risk). [Coordination Risk](/tags/Coordination-Risk) is the risk that agents can fail to coordinate to meet their common goal and end up making things worse. [Coordination Risk](/tags/Coordination-Risk) is embodied in the phrase "Too Many Cooks Spoil The Broth": more people, opinions or _agents_ often make results worse. -In this section, we'll first build up [a model of Coordination Risk](#a-model-of-coordination-risk), describing exactly coordination means and why we do it. Then, we'll look at some classic [problems of coordination](#problems-of-coordination). Then, we're going to consider agency at several different levels (because of [Scale Invariance](/thinking/Crisis-Mode.md#invariance-2-scale-invariance)) . We'll look at: +In this section, we'll first build up [a model of Coordination Risk](#a-model-of-coordination-risk), describing exactly what coordination means and why we do it. Then, we'll look at some classic [problems of coordination](#problems-of-coordination). Then, we're going to consider agency at several different levels (because of [Scale Invariance](/thinking/Crisis-Mode#invariance-2-scale-invariance)). We'll look at: - [Team Decision Making](#decision-making), - [Living Organisms](#in-living-organisms), @@ -47,16 +47,16 @@ As you can see, by _sharing_, it's possible that the _total benefit_ is greater Just two things are needed for competition to occur: - - Multiple, Individual agents, trying to achieve [Goals](/thinking/Glossary.md#goal). + - Multiple, Individual agents, trying to achieve [Goals](/thinking/Glossary#goal). - Scarce Resources, which the agents want to use as [Dependencies](/tags/Dependency-Risk). ### Coordination via Communication The only way that the agents can move away from competition towards coordination is via [Communication](/tags/Communication-Risk), and this is where their coordination problems begin. -[Coordination Risk](/tags/Coordination-Risk) commonly occurs where people have different ideas about how to achieve a [goal](/thinking/Glossary.md#goal), and they have different ideas because they have different [Internal Models](/thinking/Glossary.md#internal-model). As we saw in the section on [Communication Risk](/tags/Communication-Risk), we can only hope to synchronise [Internal Models](/thinking/Glossary.md#internal-model) if there are high-bandwidth [Channels](Communication-Risk.md#channels) available for communication. +[Coordination Risk](/tags/Coordination-Risk) commonly occurs where people have different ideas about how to achieve a [goal](/thinking/Glossary#goal), and they have different ideas because they have different [Internal Models](/thinking/Glossary#internal-model). As we saw in the section on [Communication Risk](/tags/Communication-Risk), we can only hope to synchronise [Internal Models](/thinking/Glossary#internal-model) if there are high-bandwidth [Channels](Communication-Risk#channels) available for communication. 
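To ground this in code: the smallest possible "scarce resource" example is two agents (threads) sharing one counter. Without coordination, their read-modify-write steps can interleave and updates get lost; a lock is the minimal coordination protocol that prevents this. A sketch, assuming Python threads (the counts are illustrative):

```python
# Two "agents" (threads) competing for one scarce resource (a counter).
# The lock is a minimal coordination protocol: without it, the
# read-modify-write steps can interleave and updates may be lost.
import threading

counter = 0
lock = threading.Lock()

def agent():
    global counter
    for _ in range(100_000):
        with lock:  # remove this, and the final total is no longer guaranteed
            counter += 1

threads = [threading.Thread(target=agent) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 200000 - guaranteed only because the agents coordinate
```

The cost of coordinating is real, too: the lock serialises the agents, trading throughput for correctness - the same trade-off this section describes at the human scale.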
-You might think, therefore, that this is just another type of [Communication Risk](/tags/Communication-Risk) problem, and that's often a part of it, but even with synchronized [Internal Models](/thinking/Glossary.md#internal-model), coordination risk can occur. Imagine the example of people all trying to madly leave a burning building. They all have the same information (the building is on fire). If they coordinate, and leave in an orderly fashion, they might all get out. If they don't, and there's a scramble for the door, more people might die. +You might think, therefore, that this is just another type of [Communication Risk](/tags/Communication-Risk) problem, and that's often a part of it, but even with synchronized [Internal Models](/thinking/Glossary#internal-model), coordination risk can occur. Imagine the example of people all trying to madly leave a burning building. They all have the same information (the building is on fire). If they coordinate, and leave in an orderly fashion, they might all get out. If they don't, and there's a scramble for the door, more people might die. ![Coordination Risk - Mitigated by Communication](/img/generated/risks/coordination/coordination-risk.svg) @@ -85,7 +85,7 @@ Let's unpack this idea, and review some classic problems of coordination, none o ## Decision Making -Within a team, [Coordination Risk](/tags/Coordination-Risk) is at its core about resolving [Internal Model](/thinking/Glossary.md#internal-model) conflicts in order that everyone can agree on a [Goal](/thinking/Glossary.md#goal) and cooperate on getting it done. Therefore, [Coordination Risk](/tags/Coordination-Risk) is worse on projects with more members, and worse in organisations with more staff. +Within a team, [Coordination Risk](/tags/Coordination-Risk) is at its core about resolving [Internal Model](/thinking/Glossary#internal-model) conflicts in order that everyone can agree on a [Goal](/thinking/Glossary#goal) and cooperate on getting it done. Therefore, [Coordination Risk](/tags/Coordination-Risk) is worse on projects with more members, and worse in organisations with more staff. As an individual, do you suffer from [Coordination Risk](/tags/Coordination-Risk) at all? Maybe: sometimes, you can feel "conflicted" about the best way to solve a problem. And weirdly, usually _not thinking about it_ helps. Sleeping too. (Rich Hickey calls this "[Hammock Driven Development](https://www.youtube.com/watch?v=f84n5oFoZBc)"). This is probably because, unbeknownst to you, your subconscious is furiously communicating internally, trying to resolve these conflicts itself, and will let you know when it has come to a resolution. @@ -108,13 +108,13 @@ As an individual, do you suffer from [Coordination Risk](/tags/Coordination-Risk **s** = subordinate -At the top, you have the _least_ consultative styles, and at the bottom, the _most_. At the top, decisions are made with just the leader's [Internal Model](/thinking/Glossary.md#internal-model), but moving down, the [Internal Models](/thinking/Glossary.md#internal-model) of the _subordinates_ are increasingly brought into play. +At the top, you have the _least_ consultative styles, and at the bottom, the _most_. At the top, decisions are made with just the leader's [Internal Model](/thinking/Glossary#internal-model), but moving down, the [Internal Models](/thinking/Glossary#internal-model) of the _subordinates_ are increasingly brought into play. 
-The decisions at the top are faster, but don't do much for mitigating [Coordination Risk](/tags/Coordination-Risk). The ones below take longer (incurring [Schedule Risk](/tags/Schedule-Risk)) but mitigate more [Coordination Risk](/tags/Coordination-Risk). Group decision-making inevitably involves everyone _learning_ and improving their [Internal Models](/thinking/Glossary.md#internal-model). +The decisions at the top are faster, but don't do much for mitigating [Coordination Risk](/tags/Coordination-Risk). The ones below take longer (incurring [Schedule Risk](/tags/Schedule-Risk)) but mitigate more [Coordination Risk](/tags/Coordination-Risk). Group decision-making inevitably involves everyone _learning_ and improving their [Internal Models](/thinking/Glossary#internal-model). The trick is to be able to tell which approach is suitable at which time. Everyone is expected to make decisions _within their realm of expertise_: you can't have developers continually calling meetings to discuss whether they should be using an [Abstract Factory](https://en.wikipedia.org/wiki/Abstract_factory_pattern) or a [Factory Method](https://en.wikipedia.org/wiki/Factory_method_pattern): it would waste time. The critical question is therefore, "what's the biggest risk?" - - Is the [Coordination Risk](/tags/Coordination-Risk) greater? Are we going to suffer [Dead End Risk](/tags/Complexity-Risk) if the decision is made wrongly? What if people don't agree with it? Poor leadership has an impact on [morale](Agency-Risk.md#morale-failure) too. + - Is the [Coordination Risk](/tags/Coordination-Risk) greater? Are we going to suffer [Dead End Risk](/tags/Complexity-Risk) if the decision is made wrongly? What if people don't agree with it? Poor leadership has an impact on [morale](Agency-Risk#morale-failure) too. - Is the [Schedule Risk](/tags/Schedule-Risk) greater? If you have a 1-hour meeting with eight people to decide a decision, that's _one person day_ gone right there: group decision making is _expensive_. So _organisation_ can reduce [Coordination Risk](/tags/Coordination-Risk) but to make this work we need more _communication_, and this has attendant complexity and time costs. @@ -123,7 +123,7 @@ So _organisation_ can reduce [Coordination Risk](/tags/Coordination-Risk) but to Staff in a team have a dual nature: they are **Agents** and **Resources** at the same time. The team [depends](/tags/Dependency-Risk) on staff for their resource of _labour_, but they're also part of the decision making process of the team, because they have [_agency_](/tags/Agency-Risk) over their own actions. -Part of [Coordination Risk](/tags/Coordination-Risk) is about trying to mitigate differences in [Internal Models](/thinking/Glossary.md#internal-model). So it's worth considering how varied people's models can be: +Part of [Coordination Risk](/tags/Coordination-Risk) is about trying to mitigate differences in [Internal Models](/thinking/Glossary#internal-model). So it's worth considering how varied people's models can be: - Different skill levels - Different experiences @@ -135,11 +135,11 @@ The job of harmonising this on a project would seem to fall to the team leader, > "The forming–storming–norming–performing model of group development was first proposed by Bruce Tuckman in 1965, who said that these phases are all necessary and inevitable in order for the team to grow, face up to challenges, tackle problems, find solutions, plan work, and deliver results." 
- [Tuckman's Stages Of Group Development, _Wikipedia_](https://en.wikipedia.org/wiki/Tuckman%27s_stages_of_group_development) -Specifically this describes a process whereby a new group will form and then be required to work together. In the process, they will have many _disputes_. Ideally, the group will resolve these disputes internally and emerge as a team, with a common [Goal](/thinking/Glossary.md#goal). +Specifically this describes a process whereby a new group will form and then be required to work together. In the process, they will have many _disputes_. Ideally, the group will resolve these disputes internally and emerge as a team, with a common [Goal](/thinking/Glossary#goal). -Since [Coordination](/tags/Coordination-Risk) is about [Resource Allocation](Coordination-Risk.md#problems-of-coordination) the skills of staff can potentially be looked at as resources to allocate. This means handling [Coordination Risk](/tags/Coordination-Risk) issues like: +Since [Coordination](/tags/Coordination-Risk) is about [Resource Allocation](Coordination-Risk#problems-of-coordination) the skills of staff can potentially be looked at as resources to allocate. This means handling [Coordination Risk](/tags/Coordination-Risk) issues like: - - People leaving, taking their [Internal Models](/thinking/Glossary.md#internal-model) and expertise with them ([Key Person Risk](Scarcity-Risk.md#staff-risk)). + - People leaving, taking their [Internal Models](/thinking/Glossary#internal-model) and expertise with them ([Key Person Risk](Scarcity-Risk#staff-risk)). - People requiring external training, to understand new tools and techniques ([Learning Curve Risk](/tags/Learning-Curve-Risk)). - People being protective about their knowledge in order to protect their jobs ([Agency Risk](/tags/Agency-Risk)). @@ -154,7 +154,7 @@ Vroom and Yetton's organisational model isn't relevant to just teams of people. - **Tissues**, which contain... - **Cells** of different types. (Even cells are complex systems containing multiple different, communicating sub-systems.) -There is huge attendant [Coordination Risk](/tags/Coordination-Risk) to running a complex multi-cellular system like the human body, but given the success of humanity as a species, you must conclude that these steps on the _evolutionary_ [Risk Landscape](/risks/Risk-Landscape.md) have benefited us in our ecological niche. +There is huge attendant [Coordination Risk](/tags/Coordination-Risk) to running a complex multi-cellular system like the human body, but given the success of humanity as a species, you must conclude that these steps on the _evolutionary_ [Risk Landscape](/risks/Risk-Landscape) have benefited us in our ecological niche. ### Decision Making @@ -176,7 +176,7 @@ Clearly, this is just a _model_, it's not set in stone and decision making style ## In Software Processes -It should be pretty clear that we are applying our [Scale Invariance](/thinking/Crisis-Mode.md#invariance-2-scale-invariance) rule to [Coordination Risk](/tags/Coordination-Risk): all of the problems we've described as affecting teams and organisations also affect software, although the scale and terrain are different. Software processes have limited _agency_ - in most cases they follow fixed rules set down by the programmers, rather than self-organising like people can (so far). 
+It should be pretty clear that we are applying our [Scale Invariance](/thinking/Crisis-Mode#invariance-2-scale-invariance) rule to [Coordination Risk](/tags/Coordination-Risk): all of the problems we've described as affecting teams and organisations also affect software, although the scale and terrain are different. Software processes have limited _agency_ - in most cases they follow fixed rules set down by the programmers, rather than self-organising like people can (so far).

As before, in order to face [Coordination Risk](/tags/Coordination-Risk) in software, we need multiple agents all working together. [Coordination Risks](/tags/Coordination-Risk) (such as race conditions or deadlock) only really occur where _more than one agent is working at the same time_. This means we are considering _at least_ multi-threaded software, and anything above that (multiple CPUs, servers, data-centres and so on).

diff --git a/docs/risks/Dependency-Risks/Agency-Risks/Agency-Risk.md b/docs/risks/Dependency-Risks/Agency-Risks/Agency-Risk.md
index 092cb51b7..7a070a09f 100644
--- a/docs/risks/Dependency-Risks/Agency-Risks/Agency-Risk.md
+++ b/docs/risks/Dependency-Risks/Agency-Risks/Agency-Risk.md
@@ -8,6 +8,7 @@ tags:
 - Goal
 - Agency Risk
 - Agent
+ - Security Risk
definitions:
- name: Agent
 description: blah
@@ -21,7 +22,7 @@ part_of: Dependency Risk

-Coordinating a team is difficult enough when everyone on the team has a single [Goal](/thinking/Glossary.md#goal). But people have their own goals too. Sometimes their goals harmlessly co-exist with the team's goal, other times they don't.
+Coordinating a team is difficult enough when everyone on the team has a single [Goal](/thinking/Glossary#goal). But people have their own goals too. Sometimes their goals harmlessly co-exist with the team's goal, other times they don't.

This is [Agency Risk](/tags/Agency-Risk).

@@ -118,7 +119,7 @@ Working on a pet project usually means you get lots of attention (and more than

> "Morale, also known as Esprit de Corps, is the capacity of a group's members to retain belief in an institution or goal, particularly in the face of opposition or hardship" - [Morale, _Wikipedia_](https://en.wikipedia.org/wiki/Morale)

-Sometimes the morale of the team or individuals within it dips, leading to lack of motivation. Losing morale is a kind of [Agency Risk](/tags/Agency-Risk) because it really means that a team member or the whole team isn't committed to the [Goal](/thinking/Glossary.md#goal) and may decide their efforts are best spent elsewhere. Morale failure might be caused by:
+Sometimes the morale of the team or individuals within it dips, leading to lack of motivation. Losing morale is a kind of [Agency Risk](/tags/Agency-Risk) because it really means that a team member or the whole team isn't committed to the [Goal](/thinking/Glossary#goal) and may decide their efforts are best spent elsewhere. Morale failure might be caused by:

 - **External Factors**: perhaps the employee's dog has died, or they're simply tired of the industry, or are not feeling challenged.
 - **The goal feels unachievable**: in this case people won't commit their full effort to it. This might be due to a difference in the evaluation of the risks on the project between the team members and the leader. In military science, a second meaning of morale is how well supplied and equipped a unit is. This would also seem like a useful reference point for IT projects. If teams are under-staffed or under-equipped, it will impact on motivation too.
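To make the race-condition point in the Coordination-Risk changes above concrete, here is a minimal sketch in plain JavaScript (the bank-balance example and every name in it are invented for illustration, not taken from the docs). Two agents interleave on shared state and, without coordination, one update is silently lost:

```javascript
// Two "agents" share one piece of state. Each reads it, yields
// (as if doing slow work), then writes back a stale calculation.
let balance = 100;

const delay = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function withdraw(amount) {
  const seen = balance;    // 1. read the shared state
  await delay(10);         // 2. yield - the other agent runs here
  balance = seen - amount; // 3. write back, clobbering the other update
}

async function main() {
  await Promise.all([withdraw(30), withdraw(30)]);
  console.log(balance);    // prints 70, not the expected 40
}

main();
```

The mitigation is exactly the coordination described above - a lock, a queue, or a single agent owning the state - each of which trades a little [Schedule Risk](/tags/Schedule-Risk) for a reduction in [Coordination Risk](/tags/Coordination-Risk).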
@@ -160,9 +161,9 @@ This problem may be a long way off. In any case it's not really in our interest ### Teams -[Agency Risk](/tags/Agency-Risk) applies to _whole teams_ too. It's perfectly possible that a team within an organisation develops [Goals](/thinking/Glossary.md#goal) that don't align with those of the overall organisation. For example: +[Agency Risk](/tags/Agency-Risk) applies to _whole teams_ too. It's perfectly possible that a team within an organisation develops [Goals](/thinking/Glossary#goal) that don't align with those of the overall organisation. For example: - - A team introduces excessive [Bureaucracy](Process-Risk.md#bureaucracy) in order to avoid work it doesn't like. + - A team introduces excessive [Bureaucracy](Process-Risk#bureaucracy) in order to avoid work it doesn't like. - A team gets obsessed with a particular technology, or their own internal process improvement, at the expense of delivering business value. - A marginalised team forces their services on other teams in the name of "consistency". (This can happen a lot with "Architecture", "Branding" and "Testing" teams, sometimes for the better, sometimes for the worse.) @@ -210,11 +211,11 @@ We're waking up to the realisation that our software systems need to work the sa ![Security as a mitigation for Agency Risk](/img/generated/risks/agency/security-risk.svg) -[Agency Risk](/tags/Agency-Risk) and [Security Risk](Agency-Risk.md#security) thrive on complexity: the more complex the systems we create, the more opportunities there are for bad actors to insert themselves and extract their own value. The dilemma is, _increasing security_ also means increasing [Complexity Risk](/tags/Complexity-Risk), because secure systems are necessarily more complex than insecure ones. +[Agency Risk](/tags/Agency-Risk) and [Security Risk](Agency-Risk#security) thrive on complexity: the more complex the systems we create, the more opportunities there are for bad actors to insert themselves and extract their own value. The dilemma is, _increasing security_ also means increasing [Complexity Risk](/tags/Complexity-Risk), because secure systems are necessarily more complex than insecure ones. ### Goal Alignment -As we stated at the beginning, [Agency Risk](/tags/Agency-Risk) at any level comes down to differences of [Goals](/thinking/Glossary.md#goal) between the different agents, whether they are _people_, _teams_ or _software_. +As we stated at the beginning, [Agency Risk](/tags/Agency-Risk) at any level comes down to differences of [Goals](/thinking/Glossary#goal) between the different agents, whether they are _people_, _teams_ or _software_. #### Skin In The Game @@ -228,7 +229,7 @@ Another example of this is [The Code of Hammurabi](https://en.wikipedia.org/wiki > "The death of a homeowner in a house collapse necessitates the death of the house's builder... if the homeowner's son died, the builder's son must die also." - [Code of Hammurabi, _Wikipedia_](https://en.wikipedia.org/wiki/Code_of_Hammurabi#Theories_of_purpose) -Luckily, these kinds of exposure aren't very common on software projects! [Fixed Price Contracts](/thinking/One-Size-Fits-No-One.md#waterfall) and [Employee Stock Options](https://en.wikipedia.org/wiki/Employee_stock_option) are two exceptions. +Luckily, these kinds of exposure aren't very common on software projects! [Fixed Price Contracts](/thinking/One-Size-Fits-No-One#waterfall) and [Employee Stock Options](https://en.wikipedia.org/wiki/Employee_stock_option) are two exceptions. 
#### Needs Theory @@ -244,7 +245,7 @@ But _extrinsic motivation_ is a complex, difficult-to-apply tool. In [Map And T ![Collective Code Ownership, Individual Responsibility](/img/generated/risks/agency/cco.svg) -Tools like [Pair Programming](https://en.wikipedia.org/wiki/Pair_programming) and [Collective Code Ownership](https://en.wikipedia.org/wiki/Collective_ownership) are about mitigating [Staff Risks](Scarcity-Risk.md#staff-risk) like [Key Person Risk](https://en.wikipedia.org/wiki/Key_person_insurance#Key_person_definition) and [Learning Curve Risk](/tags/Learning-Curve-Risk), but these push in the opposite direction to _individual responsibility_. +Tools like [Pair Programming](https://en.wikipedia.org/wiki/Pair_programming) and [Collective Code Ownership](https://en.wikipedia.org/wiki/Collective_ownership) are about mitigating [Staff Risks](Scarcity-Risk#staff-risk) like [Key Person Risk](https://en.wikipedia.org/wiki/Key_person_insurance#Key_person_definition) and [Learning Curve Risk](/tags/Learning-Curve-Risk), but these push in the opposite direction to _individual responsibility_. This is an important consideration: in adopting _those_ tools, you are necessarily setting aside certain _other_ tools to manage [Agency Risk](/tags/Agency-Risk) as a result. @@ -252,7 +253,7 @@ This is an important consideration: in adopting _those_ tools, you are necessar We've looked at various different shades of [Agency Risk](/tags/Agency-Risk) and three different mitigations for it. [Agency Risk](/tags/Agency-Risk) is a concern at the level of _individual agents_, whether they are processes, people, systems or teams. -So having looked at agents _individually_, it's time to look more closely at [Goals](/thinking/Glossary.md#goal), and the [Attendant Risks](/thinking/Glossary.md#attendant-risk) when aligning them amongst multiple agents. +So having looked at agents _individually_, it's time to look more closely at [Goals](/thinking/Glossary#goal), and the [Attendant Risks](/thinking/Glossary#attendant-risk) when aligning them amongst multiple agents. On to [Coordination Risk](/tags/Coordination-Risk)... diff --git a/docs/risks/Dependency-Risks/Boundary-Risk.md b/docs/risks/Dependency-Risks/Boundary-Risk.md index 9f51a72ee..55e0397df 100644 --- a/docs/risks/Dependency-Risks/Boundary-Risk.md +++ b/docs/risks/Dependency-Risks/Boundary-Risk.md @@ -25,7 +25,7 @@ As shown in the above diagram, [Boundary Risk](/tags/Boundary-Risk) is the risk - Although I eat cereals for breakfast, I don't have [Boundary Risk](/tags/Boundary-Risk) on them. If the supermarket runs out of cereals when I go, I can just buy some other food and eat that. - However the hot water system in my house uses gas. If that's not available I can't just switch to using oil or solar without cost. There is [Boundary Risk](/tags/Boundary-Risk), but it's low because the supply of gas is plentiful and seems like it will stay that way. -In terms of the [Risk Landscape](/risks/Risk-Landscape.md), [Boundary Risk](/tags/Boundary-Risk) is exactly as it says: a _boundary_, _wall_ or other kind of obstacle in your way to making a move you want to make. This changes the nature of the [Risk Landscape](/thinking/Glossary.md#risk-landscape), and introduces a maze-like component to it. It also means that we have to make _commitments_ about which way to go, knowing that our future paths are constrained by the decisions we make. 
+In terms of the [Risk Landscape](/risks/Risk-Landscape), [Boundary Risk](/tags/Boundary-Risk) is exactly as it says: a _boundary_, _wall_ or other kind of obstacle in your way to making a move you want to make. This changes the nature of the [Risk Landscape](/thinking/Glossary#risk-landscape), and introduces a maze-like component to it. It also means that we have to make _commitments_ about which way to go, knowing that our future paths are constrained by the decisions we make. As we discussed in [Complexity Risk](/tags/Complexity-Risk), there is always the chance we end up at a [Dead End](/tags/Dead-End-Risk), having done work that we need to throw away. In this case, we'll have to head back and make a different decision. @@ -57,7 +57,7 @@ This is a toy example, but in real life this dilemma occurs when we choose betwe The degree of [Boundary Risk](/tags/Boundary-Risk) is dependent on a number of factors: - - **The Sunk Cost** of the [Learning Curve](/tags/Learning-Curve-Risk) we've overcome to integrate the dependency, which may fail to live up to expectations (_cf._ [Feature Fit Risks](/tags/Feature-Fit-Risk)). We can avoid accreting this by choosing the _simplest_ and _fewest_ dependencies for any task, and trying to [Meet Reality](/thinking/Meeting-Reality.md) quickly. + - **The Sunk Cost** of the [Learning Curve](/tags/Learning-Curve-Risk) we've overcome to integrate the dependency, which may fail to live up to expectations (_cf._ [Feature Fit Risks](/tags/Feature-Fit-Risk)). We can avoid accreting this by choosing the _simplest_ and _fewest_ dependencies for any task, and trying to [Meet Reality](/thinking/Meeting-Reality) quickly. - **Life Expectancy**: libraries and products come and go. A choice that was popular when it was made may be superseded in the future by something better. (_cf._ [Market Risk](/tags/Market-Risk)). This may not be a problem until you try to renew a support contract, or try to do an operating system update. Although no-one can predict how long a technology will last, [The Lindy Effect](https://en.wikipedia.org/wiki/Lindy_effect) suggests that _future life expectancy is proportional to current age_. So, you might expect a technology that has been around for ten years to be around for a further ten. - **The level of [Lock In](#ecosystems-and-lock-in)**, where the cost of switching to a new dependency in the future is some function of the level of commitment to the current dependency. As an example, consider the level of commitment you have to your mother tongue. If you have spent your entire life committed to learning and communicating in English, there is a massive level of lock-in. Overcoming this to become fluent in Chinese may be an overwhelming task. - **Future needs**: will the dependency satisfy your expanding requirements going forward? (_cf._ [Feature Drift Risk](/tags/Feature-Drift-Risk)) @@ -79,11 +79,11 @@ But crucially, the underlying abstractions of WordPress and Drupal are different > "... a set of businesses functioning as a unit and interacting with a shared market for software and services, together with relationships among them. These relationships are frequently underpinned by a common technological platform and operate through the exchange of information, resources, and artifacts." - [Software Ecosystem, _Wikipedia_](https://en.wikipedia.org/wiki/Software_ecosystem) -You can think of the ecosystem as being like the footprint of a town or a city, consisting of the buildings, transport network and the people that live there. 
Within the city, and because of the transport network and the amenities available, it's easy to make rapid, useful moves on the [Risk Landscape](/risks/Risk-Landscape.md). In a software ecosystem it's the same: the ecosystem has gathered together to provide a way to mitigate various different [Feature Risks](/tags/Feature-Risk) in a common way.
+You can think of the ecosystem as being like the footprint of a town or a city, consisting of the buildings, transport network and the people that live there. Within the city, and because of the transport network and the amenities available, it's easy to make rapid, useful moves on the [Risk Landscape](/risks/Risk-Landscape). In a software ecosystem it's the same: the ecosystem has gathered together to provide a way to mitigate various different [Feature Risks](/tags/Feature-Risk) in a common way.

Ecosystem size is one key determinant of [Boundary Risk](/tags/Boundary-Risk):

-- **A large ecosystem** has a large boundary circumference. [Boundary Risk](/tags/Boundary-Risk) is lower in a large ecosystem because your moves on the [Risk Landscape](/thinking/Glossary.md#risk-landscape) are unlikely to collide with it. The boundary _got large_ because other developers before you hit the boundary and did the work building the software equivalents of bridges and roads and pushing it back so that the boundary didn't get in their way.
+- **A large ecosystem** has a large boundary circumference. [Boundary Risk](/tags/Boundary-Risk) is lower in a large ecosystem because your moves on the [Risk Landscape](/thinking/Glossary#risk-landscape) are unlikely to collide with it. The boundary _got large_ because other developers before you hit the boundary and did the work building the software equivalents of bridges and roads and pushing it back so that the boundary didn't get in their way.
- In **a small ecosystem**, you are much more likely to come into contact with the edges of the boundary. _You_ will have to be the developer that pushes back the frontier and builds the roads for the others. This is hard work.

### Big Ecosystems Get Bigger

@@ -125,7 +125,7 @@ The above chart is an example of this: look at how the number of public classes

#### Backward Compatibility

-As we saw in [Software Dependency Risk](/tags/Software-Dependency-Risk), The art of good design is to afford the greatest increase in functionality with the smallest increase in complexity possible, and this usually means [Refactoring](https://en.wikipedia.org/wiki/Refactoring). But, this is at odds with [Backward Compatibility](/risks/Protocol-Risk.md#backward-compatibility).
+As we saw in [Software Dependency Risk](/tags/Software-Dependency-Risk), the art of good design is to afford the greatest increase in functionality with the smallest increase in complexity possible, and this usually means [Refactoring](https://en.wikipedia.org/wiki/Refactoring). But, this is at odds with [Backward Compatibility](/risks/Protocol-Risk#backward-compatibility).

Each new version has a greater functional scope than the one before (pushing back [Boundary Risk](/tags/Boundary-Risk)), making the platform more attractive to build solutions in. But this increases the [Complexity Risk](/tags/Complexity-Risk) as there is more functionality to deal with.
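As a minimal sketch of that tension (Node-style JavaScript; `getUser` and `fetchUser` are invented names, not from any real library): a refactored API can rarely just delete its old entry points, so each release carries its predecessors' shapes with it:

```javascript
// v2 refactors to a cleaner promise-based API...
async function getUser(id) {
  // ...the tidier implementation lives here.
  return { id, name: "example" };
}

// ...but the v1 callback-style entry point must be kept alive for
// existing callers. Every retained shim like this is accreted
// complexity that the platform can never easily shed.
function fetchUser(id, callback) {
  getUser(id).then(
    (user) => callback(null, user),
    (err) => callback(err)
  );
}

module.exports = { getUser, fetchUser };
```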
@@ -133,7 +133,7 @@ Each new version has a greater functional scope than the one before (pushing bac You can see in the diagram above the Peter Principle at play: as more responsibility is given to a dependency, the more complex it gets and the greater the learning curve to work with it. Large ecosystems like Java react to [Learning Curve Risk](/tags/Learning-Curve-Risk) by having copious amounts of literature to read or buy to help, but it is still off-putting. -Because [Complexity is Mass](/risks/Complexity-Risk.md#complexity-is-mass), large ecosystems can't respond quickly to [Feature Drift](/tags/Feature-Drift-Risk). This means that when the world changes, new ecosystems are likely to appear to fill gaps, rather than old ones moving in. +Because [Complexity is Mass](/risks/Complexity-Risk#complexity-is-mass), large ecosystems can't respond quickly to [Feature Drift](/tags/Feature-Drift-Risk). This means that when the world changes, new ecosystems are likely to appear to fill gaps, rather than old ones moving in. ## Managing Boundary Risk diff --git a/docs/risks/Dependency-Risks/Deadline-Risk.md b/docs/risks/Dependency-Risks/Deadline-Risk.md index 065d81d20..1fbed67f4 100644 --- a/docs/risks/Dependency-Risks/Deadline-Risk.md +++ b/docs/risks/Dependency-Risks/Deadline-Risk.md @@ -2,7 +2,7 @@ title: Deadline Risk description: What is the point of a deadline? Do they serve a useful purpose? -slug: risks/Deadline-Risk +slug: /risks/Deadline-Risk featured: class: c element: '' @@ -27,7 +27,7 @@ In the first example, you can't _start_ something until a particular event happe ## Events Mitigate Risk... -Having an event occur in a fixed time and place is [mitigating risk](/thinking/Glossary.md#mitigated-risk): +Having an event occur in a fixed time and place is [mitigating risk](/thinking/Glossary#mitigated-risk): - By taking the bus, we are mitigating our own [Schedule Risk](/tags/Schedule-Risk): we're (hopefully) reducing the amount of time we're going to spend on the activity of getting to work. It's not entirely necessary to even take the bus: you could walk, or go by another form of transport. But, effectively, this just swaps one dependency for another: if you walk, this might well take longer and use more energy, so you're just picking up [Schedule Risk](/tags/Schedule-Risk) in another way. - Events are a mitigation for [Coordination Risk](/tags/Coordination-Risk): a bus needn't necessarily _have_ a fixed timetable. It could wait for each passenger until they turned up, and then go. (A bit like ride-sharing works). This would be a total disaster from a [Coordination Risk](/tags/Coordination-Risk) perspective, as one person could cause everyone else to be really really late. @@ -37,7 +37,7 @@ Having an event occur in a fixed time and place is [mitigating risk](/thinking/G ![Action Diagram showing risks mitigated by having an _event_](/img/generated/risks/deadline/dependency-risk-event.svg) -By _deciding to use the bus_ we've [Taken Action](/thinking/Glossary.md#taking-action). By agreeing a _time_ and _place_ for something to happen (creating an _event_, as shown in the diagram above), you're introducing [Deadline Risk](/tags/Deadline-Risk). Miss the deadline, and you miss the bus. +By _deciding to use the bus_ we've [Taken Action](/thinking/Glossary#taking-action). By agreeing a _time_ and _place_ for something to happen (creating an _event_, as shown in the diagram above), you're introducing [Deadline Risk](/tags/Deadline-Risk). Miss the deadline, and you miss the bus. 
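The same idea scales down to code, where an agreed time behaves like a timeout. A minimal sketch in plain JavaScript (`Promise.race` and `setTimeout` are standard; the bus framing just carries the metaphor over):

```javascript
// An event at a fixed time: whichever promise settles first wins.
const deadline = (ms) =>
  new Promise((_, reject) =>
    setTimeout(() => reject(new Error("missed the bus")), ms));

// Deadline Risk sits on both parties: we might be slow, or the
// dependency might be - either way, the agreed event is missed.
const slowDependency = () =>
  new Promise((resolve) => setTimeout(() => resolve("arrived"), 2000));

Promise.race([slowDependency(), deadline(1000)])
  .then(console.log)
  .catch((e) => console.log(e.message)); // -> "missed the bus"
```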
As discussed above, _schedules_ (such as bus timetables) exist so that _two or more parties can coordinate_, and [Deadline Risk](/tags/Deadline-Risk) is on _all_ of the parties. While there's a risk I am late, there's also a risk the bus is late. I might miss the start of a concert, or the band might keep everyone waiting. diff --git a/docs/risks/Dependency-Risks/Dependency-Risk.md b/docs/risks/Dependency-Risks/Dependency-Risk.md index 33210f598..1a4673057 100644 --- a/docs/risks/Dependency-Risks/Dependency-Risk.md +++ b/docs/risks/Dependency-Risks/Dependency-Risk.md @@ -71,7 +71,7 @@ What this shows us is that [Fit Risks](/tags/Feature-Fit-Risk) are as much a pro Dependencies (like the bus) make life simpler for you by taking on complexity for you. -In software, dependencies are a way to manage [Complexity Risk](/tags/Complexity-Risk). The reason for this is that a dependency gives you an [abstraction](/thinking/Glossary.md#abstraction): you no longer need to know _how_ to do something, (that's the job of the dependency), you just need to interact with the dependency properly to get the job done. Buses are _perfect_ for people who can't drive, after all. +In software, dependencies are a way to manage [Complexity Risk](/tags/Complexity-Risk). The reason for this is that a dependency gives you an [abstraction](/thinking/Glossary#abstraction): you no longer need to know _how_ to do something, (that's the job of the dependency), you just need to interact with the dependency properly to get the job done. Buses are _perfect_ for people who can't drive, after all. ![Dependencies help with complexity risk, but come with their own attendant risks](/img/generated/risks/dependency/dependency-risk.svg) @@ -83,13 +83,13 @@ In Rich Hickey's talk, [Simple Made Easy](https://www.infoq.com/presentations/Si But: living systems are not simple. Not anymore. They evolved in the direction of increasing complexity because life was _easier_ that way. In the "simpler" direction, life is first _harder_ and then _impossible_, and then an evolutionary dead-end. -Depending on things makes _your job easier_. But the [Complexity Risk](/tags/Complexity-Risk) hasn't gone away: it's just _transferred_ to the dependency. It's just [division of labour](https://en.wikipedia.org/wiki/Division_of_labour) and dependency hierarchies, as we saw in [Complexity Risk](/risks/Complexity-Risk.md#hierarchies-and-modularisation). +Depending on things makes _your job easier_. But the [Complexity Risk](/tags/Complexity-Risk) hasn't gone away: it's just _transferred_ to the dependency. It's just [division of labour](https://en.wikipedia.org/wiki/Division_of_labour) and dependency hierarchies, as we saw in [Complexity Risk](/risks/Complexity-Risk#hierarchies-and-modularisation). Our economic system and our software systems exhibit the same tendency-towards-complexity. For example, the television in my house now is _vastly more complicated_ than the one in my home when I was a child. But, it contains much more functionality and consumes much less power and space. ## Managing Dependency Risk -Arguably, managing [Dependency Risk](/tags/Dependency-Risk) is _what Project Managers do_. Their job is to meet the project's [Goal](/thinking/Glossary.md#goal) by organising the available dependencies into some kind of useful order. +Arguably, managing [Dependency Risk](/tags/Dependency-Risk) is _what Project Managers do_. 
Their job is to meet the project's [Goal](/thinking/Glossary#goal) by organising the available dependencies into some kind of useful order.

There are some tools for managing dependency risk: [Gantt Charts](https://en.wikipedia.org/wiki/Gantt_chart), for example, arrange work according to the capacity of the resources (i.e. dependencies) available, but also the _dependencies between the tasks_. If task **B** requires the outputs of task **A**, then clearly task **A** comes first and task **B** starts after it finishes. We'll look at this more in [Process Risk](/tags/Process-Risk).

diff --git a/docs/risks/Dependency-Risks/Process-Risk.md b/docs/risks/Dependency-Risks/Process-Risk.md
index 1988310c6..73209fc91 100644
--- a/docs/risks/Dependency-Risks/Process-Risk.md
+++ b/docs/risks/Dependency-Risks/Process-Risk.md
@@ -2,7 +2,7 @@

title: Process Risk
description: Risks due to following a particular protocol of communication with a dependency, which may not work out the way we want.

-slug: risks/Process-Risk
+slug: /risks/Process-Risk

featured:
 class: c
 element: ''
@@ -32,7 +32,7 @@ As the above diagram shows, process exists to mitigate other kinds of risk. For

- **[Operational Risk](/tags/Operational-Risk)**: this encompasses the risk of people _not doing their job properly_. But, by having a process (and asking: did this person follow the process?) you can draw a distinction between a process failure and a personnel failure. For example, accepting funds from a money launderer _could_ be a failure of a bank employee. But, if they followed the _process_, it's a failure of the [Process](/tags/Process-Risk) itself.
- **[Complexity Risk](/tags/Complexity-Risk)**: working _within a process_ can reduce the amount of [Complexity](/tags/Complexity-Risk) you have to think about. We accept that processes are going to slow us down, but we appreciate the reduction in risk this brings. Clearly, the complexity hasn't gone away, but it's hidden within the design of the process. For example, [McDonalds](https://en.wikipedia.org/wiki/McDonald's) tries to design its operation so that preparing each food item is a simple process to follow, reducing complexity (and training time) for the staff.

-These are all examples of [Risk Mitigation](/thinking/Glossary.md#mitigated-risk) for the _owners_ of the process. But often the _consumers_ of the process end up picking up [Process Risks](/tags/Process-Risk) as a result:
+These are all examples of [Risk Mitigation](/thinking/Glossary#mitigated-risk) for the _owners_ of the process. But often the _consumers_ of the process end up picking up [Process Risks](/tags/Process-Risk) as a result:

- **[Invisibility Risk](/tags/Invisibility-Risk)**: it's often not possible to see how far along a process is to completion. Sometimes, you can do this to an extent. For example, when I send a package for delivery, I can see roughly how far it's got on the tracking website. But this is still less-than-complete information and is a representation of reality.
- **[Dead-End Risk](/tags/Dead-End-Risk)**: even if you have the right process, initiating a process offers no guarantee that your efforts won't be wasted and you'll be back where you started from. The chances of this happening increase as you get further from the standard use-case for the process, and the sunk cost increases with the length of time the process takes to complete.
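Returning to the Gantt-chart point in the Dependency-Risk changes above: ordering tasks so that each starts only after its dependencies finish is, at heart, a topological sort. A minimal sketch in plain JavaScript (the task names are invented):

```javascript
// Order tasks so each starts only after the tasks it depends on.
function schedule(tasks) {
  const order = [];
  const done = new Set();
  const visiting = new Set();

  function visit(name) {
    if (done.has(name)) return;
    if (visiting.has(name)) throw new Error("circular dependency: " + name);
    visiting.add(name);
    for (const dep of tasks[name]) visit(dep); // prerequisites first
    visiting.delete(name);
    done.add(name);
    order.push(name);
  }

  Object.keys(tasks).forEach(visit);
  return order;
}

// Task B requires the output of task A, so A must come first.
console.log(schedule({ A: [], B: ["A"], C: ["A", "B"] }));
// -> [ 'A', 'B', 'C' ]
```

The cycle check matters: two tasks that each require the other's output can never be scheduled, which is the planning equivalent of the deadlock mentioned earlier.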
@@ -46,7 +46,7 @@ When we talk about "[Process Risk](/tags/Process-Risk)" we are really referring

Processes tend to work well for the common cases because *practice makes perfect*, but they are really tested when unusual situations occur. Expanding processes to deal with edge-cases incurs [Complexity Risk](/tags/Complexity-Risk), so often it's better to try to have clear boundaries of what is "in" and "out" of the process's domain.

-Sometimes, processes are _not_ used commonly. How can we rely on them anyway? Usually, the answer is to build in extra [feedback loops](/thinking/Glossary.md#feedback-loop):
+Sometimes, processes are _not_ used commonly. How can we rely on them anyway? Usually, the answer is to build in extra [feedback loops](/thinking/Glossary#feedback-loop):

- Testing that backups work, even when no backup is needed.
- Running through a disaster recovery scenario at the weekend.

@@ -67,7 +67,7 @@ Often, [Sign-Offs](/tags/Approvals) boil down to a balance of risk for the signe

This is a nasty situation, but there are a couple of ways to de-risk this:
- Break [Sign-Offs](/tags/Approvals) down into bite-size chunks of risk that are acceptable to those doing the signing-off.
- - Agree far-in-advance the sign-off criteria. As discussed in [Risk Theory](/thinking/Evaluating-Risk.md), people have a habit of heavily discounting future risk, and it's much easier to get agreement on the _criteria_ than it is to get the sign-off.
+ - Agree the sign-off criteria far in advance. As discussed in [Risk Theory](/thinking/Evaluating-Risk), people have a habit of heavily discounting future risk, and it's much easier to get agreement on the _criteria_ than it is to get the sign-off.

## Evolution Of Process

@@ -87,14 +87,14 @@ Let's look at an example of how that can happen in a step-wise way.

![Step 2: team `B` doing `A` for clients `C`. Complexity Risk is transferred to B, but C pick up Staff Risk.](/img/generated/risks/process/step2.svg)

-2. Because `A` is risky, a new team (`B`) is spun up to deal with the [Complexity Risk](/tags/Complexity-Risk), which might let `C` get on with their "proper" jobs. As shown in the diagram above, this is really useful: `C`'s is job much easier (reduced [Complexity Risk](/tags/Complexity-Risk)) as they have an easier path to `A` than before. But the risk for `A` hasn't really gone - they're now just dependent on `B` instead. When members of `B` fail to deliver, this is [Staff Risk](Scarcity-Risk.md#staff-risk) for `C`.
+2. Because `A` is risky, a new team (`B`) is spun up to deal with the [Complexity Risk](/tags/Complexity-Risk), which might let `C` get on with their "proper" jobs. As shown in the diagram above, this is really useful: `C`'s job is much easier (reduced [Complexity Risk](/tags/Complexity-Risk)) as they have an easier path to `A` than before. But the risk for `A` hasn't really gone - they're now just dependent on `B` instead. When members of `B` fail to deliver, this is [Staff Risk](Scarcity-Risk#staff-risk) for `C`.

![Step 3: team `B` formalises the dependency with a Process](/img/generated/risks/process/step3.svg)

3. Problems are likely to occur eventually in the `B`/`C` relationship. Perhaps some members of the `B` team give better service than others, or deal with more variety in requests? In order to standardise the response from `B` and also to reduce scope-creep in requests from `C`, `B` organises bureaucratically so that there is a controlled process (`P`) by which `A` can be accessed.
Members of teams `B` and `C` now interact via some request mechanism like forms (or another protocol). - As shown in the above diagram, because of `P`, `B` can now process requests on a first-come-first-served basis and deal with them all in the same way: the more unusual requests from `C` might not fit the model. These [Process Risks](/tags/Process-Risk) are now the problem of the form-filler in `C`. - - Since this is [Abstraction](/thinking/Glossary.md#abstraction), `C` now has [Invisibility Risk](/tags/Invisibility-Risk) since it can't access team `B` and see how it works. + - Since this is [Abstraction](/thinking/Glossary#abstraction), `C` now has [Invisibility Risk](/tags/Invisibility-Risk) since it can't access team `B` and see how it works. - Team `B` may also use `P` to introduce other bureaucracy like authorisation and sign-off steps or payment barriers. All of this increases [Process Risk](/tags/Process-Risk) for team C. ![Person D acts as a middleman for customers needing some variant of `A`, transferring Complexity Risk](/img/generated/risks/process/step4.svg) @@ -110,8 +110,8 @@ In this example, you can see how the organisation evolves process to mitigate ri Two key take-aways from this: - - **The System Gets More Complex**: with different teams working to mitigate different risks in different ways, we end up with a more complex situation than when we started. Although we've _evolved_ in this direction by mitigating risks, it's not necessarily the case that the end result is _more efficient_. In fact, as we will see in [Map-And-Territory Risk](Map-And-Territory-Risk.md#markets), this evolution can lead to some very inadequate (but nonetheless stable) systems. - - **Organisational process evolves to mitigate risk**: just as we've shown that [actions are about mitigating risk](/thinking/Start.md), we've now seen that these actions get taken in an evolutionary way. That is, there is "pressure" on our internal processes to reduce risk. The people maintaining these processes feel the risk, and modify their processes in response. Let's look at a real-life example: + - **The System Gets More Complex**: with different teams working to mitigate different risks in different ways, we end up with a more complex situation than when we started. Although we've _evolved_ in this direction by mitigating risks, it's not necessarily the case that the end result is _more efficient_. In fact, as we will see in [Map-And-Territory Risk](Map-And-Territory-Risk#markets), this evolution can lead to some very inadequate (but nonetheless stable) systems. + - **Organisational process evolves to mitigate risk**: just as we've shown that [actions are about mitigating risk](/thinking/Start), we've now seen that these actions get taken in an evolutionary way. That is, there is "pressure" on our internal processes to reduce risk. The people maintaining these processes feel the risk, and modify their processes in response. Let's look at a real-life example: ## An Example - Release Processes @@ -134,7 +134,7 @@ But [Parkinson's Law](https://en.wikipedia.org/wiki/Parkinsons_law) takes this o This implies that there is a tendency for organisations to end up with _needless levels of [Process Risk](/tags/Process-Risk)_. -To fix this, design needs to happen at a higher level. In our code, we would [Refactor](/risks/Complexity-Risk.md#technical-debt) these processes to remove the unwanted complexity. 
In a business, it requires re-organisation at a higher level to redefine the boundaries and responsibilities between the teams.
+To fix this, design needs to happen at a higher level. In our code, we would [Refactor](/risks/Complexity-Risk#technical-debt) these processes to remove the unwanted complexity. In a business, it requires re-organisation at a higher level to redefine the boundaries and responsibilities between the teams.

Next in the tour of [Dependency Risks](/tags/Dependency-Risk), it's time to look at [Boundary Risk](/tags/Boundary-Risk).

diff --git a/docs/risks/Dependency-Risks/Reliability-Risk.md b/docs/risks/Dependency-Risks/Reliability-Risk.md
index c671f0013..9dcd1bf85 100644
--- a/docs/risks/Dependency-Risks/Reliability-Risk.md
+++ b/docs/risks/Dependency-Risks/Reliability-Risk.md
@@ -2,7 +2,7 @@

title: Reliability Risk
description: Risks of not getting benefit from a dependency due to its reliability, either now or in the future.

-slug: risks/Reliability-Risk
+slug: /risks/Reliability-Risk

featured:
 class: c
 element: ''
diff --git a/docs/risks/Dependency-Risks/Scarcity-Risks/Funding-Risk.md b/docs/risks/Dependency-Risks/Scarcity-Risks/Funding-Risk.md
index 96c6a7c41..85716f89f 100644
--- a/docs/risks/Dependency-Risks/Scarcity-Risks/Funding-Risk.md
+++ b/docs/risks/Dependency-Risks/Scarcity-Risks/Funding-Risk.md
@@ -3,7 +3,7 @@

title: Funding Risk
description: A particular scarcity risk, due to lack of funding.

-slug: risks/Funding-Risk
+slug: /risks/Funding-Risk

featured:
 class: c

@@ -26,4 +26,4 @@ This grants you some leeway as now you have two variables to play with: the _siz

In startup circles, this "amount of time you can afford it" is called the ["Runway"](https://en.wiktionary.org/wiki/runway): you have to get the product to "take-off" (become profitable) before the runway ends.

-Startups often spend a lot of time courting investors in order to get funding and mitigate this type of [Schedule Risk](/tags/Schedule-Risk). But, as shown in the diagram above, this activity usually comes at the expense of [Opportunity Risk](/tags/Opportunity-Risk) and [Feature Risk](/tags/Feature-Risk), as usually the same people are diverted into raise funds instead of building the project itself. \ No newline at end of file
+Startups often spend a lot of time courting investors in order to get funding and mitigate this type of [Schedule Risk](/tags/Schedule-Risk). But, as shown in the diagram above, this activity usually comes at the expense of [Feature Risk](/tags/Feature-Risk), as usually the same people are diverted into raising funds instead of building the project itself. \ No newline at end of file
diff --git a/docs/risks/Dependency-Risks/Scarcity-Risks/Red-Queen-Risk.md b/docs/risks/Dependency-Risks/Scarcity-Risks/Red-Queen-Risk.md
index 8b80750bb..8efd32f60 100644
--- a/docs/risks/Dependency-Risks/Scarcity-Risks/Red-Queen-Risk.md
+++ b/docs/risks/Dependency-Risks/Scarcity-Risks/Red-Queen-Risk.md
@@ -2,7 +2,7 @@

title: Red Queen Risk
description: The general risk that the competitive environment we operate within changes over time.

-slug: risks/Red-Queen-Risk
+slug: /risks/Red-Queen-Risk

featured:
 class: c

@@ -31,6 +31,6 @@ Now, they didn't _deliberately_ take 15 years to build this game (lots of things

![Red Queen Risk](/img/generated/risks/scarcity/red-queen-risk.svg)

-Personally, I have suffered the pain on project teams where we've had to cope with legacy code and databases because the cost of changing them was too high. 
This is shown in the above diagram: mitigating [Red Queen Risk](#red-queen-risk) (by _keeping up-to-date_) has the [Attendant Risk](/thinking/Glossary.md#attendant-risk) of costing time and money, which might not seem worth it. Any team who is stuck using [Visual Basic 6.0](https://en.wikipedia.org/wiki/Visual_Basic) is here. +Personally, I have suffered the pain on project teams where we've had to cope with legacy code and databases because the cost of changing them was too high. This is shown in the above diagram: mitigating [Red Queen Risk](#red-queen-risk) (by _keeping up-to-date_) has the [Attendant Risk](/thinking/Glossary#attendant-risk) of costing time and money, which might not seem worth it. Any team who is stuck using [Visual Basic 6.0](https://en.wikipedia.org/wiki/Visual_Basic) is here. It's possible to ignore [Red Queen Risk](/tags/Red-Queen-Risk) for a time, but this is just another form of [Technical Debt](/tags/Complexity-Risk) which eventually comes due. diff --git a/docs/risks/Dependency-Risks/Scarcity-Risks/Schedule-Risk.md b/docs/risks/Dependency-Risks/Scarcity-Risks/Schedule-Risk.md index 2040fa73b..550cb0859 100644 --- a/docs/risks/Dependency-Risks/Scarcity-Risks/Schedule-Risk.md +++ b/docs/risks/Dependency-Risks/Scarcity-Risks/Schedule-Risk.md @@ -2,7 +2,7 @@ title: Schedule Risk description: A particular scarcity risk, due to lack of time. -slug: risks/Schedule-Risk +slug: /risks/Schedule-Risk featured: class: c diff --git a/docs/risks/Dependency-Risks/Scarcity-Risks/Staff-Risk.md b/docs/risks/Dependency-Risks/Scarcity-Risks/Staff-Risk.md index be213cbd2..ba6832394 100644 --- a/docs/risks/Dependency-Risks/Scarcity-Risks/Staff-Risk.md +++ b/docs/risks/Dependency-Risks/Scarcity-Risks/Staff-Risk.md @@ -2,7 +2,7 @@ title: Staff Risk description: The aspect of dependency risks related to employing people. -slug: risks/Staff-Risk +slug: /risks/Staff-Risk featured: class: c element: '' @@ -24,7 +24,7 @@ Since staff are a scarce resource, it stands to reason that if a startup has a " You need to consider how long your staff are going to be around, especially if you have [Key Person Risk](https://en.wikipedia.org/wiki/Key_person_insurance#Key_person_definition) on some of them. People like to have new challenges, move on to live in new places, or simply get bored. Replacing staff can be highly risky. -The longer your project goes on for, the more [Staff Risk](Scarcity-Risk.md#staff-risk) you will have to endure, and you can't rely on getting the [best staff for failing projects](/tags/Agency-Risk). +The longer your project goes on for, the more [Staff Risk](Scarcity-Risk#staff-risk) you will have to endure, and you can't rely on getting the [best staff for failing projects](/tags/Agency-Risk). ### Student Syndrome @@ -34,6 +34,6 @@ The longer your project goes on for, the more [Staff Risk](Scarcity-Risk.md#staf Arguably, there is good psychological, evolutionary and risk-based reasoning behind procrastination: if there is apparently a lot of time to get a job done, then [Schedule Risk](/tags/Schedule-Risk) is low. If we're only ever mitigating our _biggest risks_, then managing [Schedule Risk](/tags/Schedule-Risk) in the future doesn't matter so much. Putting efforts into mitigating future risks that _might not arise_ is wasted effort. -Or at least, that's the argument: if you're [Discounting the Future To Zero](/thinking/Evaluating-Risk.md) then you'll be pulling all-nighters in order to deliver any assignment. 
+Or at least, that's the argument: if you're [Discounting the Future To Zero](/thinking/Evaluating-Risk) then you'll be pulling all-nighters in order to deliver any assignment. -So, the problem with [Student Syndrome](#student-syndrome) is that the _very mitigation_ for [Schedule Risk](/tags/Schedule-Risk) (allowing more time) is an [Attendant Risk](/thinking/Glossary.md#attendant-risk) that _causes_ [Schedule Risk](/tags/Schedule-Risk): you'll work within the more generous time allocation more slowly and you'll end up revealing [Hidden Risk](/thinking/Glossary.md#hidden-risk) _later_. And, discovering these hidden risks later causes you to end up being late because of them. +So, the problem with [Student Syndrome](#student-syndrome) is that the _very mitigation_ for [Schedule Risk](/tags/Schedule-Risk) (allowing more time) is an [Attendant Risk](/thinking/Glossary#attendant-risk) that _causes_ [Schedule Risk](/tags/Schedule-Risk): you'll work within the more generous time allocation more slowly and you'll end up revealing [Hidden Risk](/thinking/Glossary#hidden-risk) _later_. And, discovering these hidden risks later causes you to end up being late because of them. diff --git a/docs/risks/Dependency-Risks/Software-Dependency-Risk.md b/docs/risks/Dependency-Risks/Software-Dependency-Risk.md index 79dac948e..ac3d6476e 100644 --- a/docs/risks/Dependency-Risks/Software-Dependency-Risk.md +++ b/docs/risks/Dependency-Risks/Software-Dependency-Risk.md @@ -2,7 +2,7 @@ title: Software Dependency Risk description: Specific dependency risks due to relying on software. -slug: risks/Software-Dependency-Risk +slug: /risks/Software-Dependency-Risk featured: class: c element: '' @@ -50,7 +50,7 @@ With this in mind, we can see that adding a software dependency is a trade-off: ## Programming Languages as Dependencies -In the earlier section on [Complexity Risk](/tags/Complexity-Risk) we tackled [Kolmogorov Complexity](/risks/Complexity-Risk.md#kolmogorov-complexity), and the idea that your codebase had some kind of minimal level of complexity based on the output it was trying to create. This is a neat idea, but in a way, we cheated. Let's look at how. +In the earlier section on [Complexity Risk](/tags/Complexity-Risk) we tackled [Kolmogorov Complexity](/risks/Complexity-Risk#kolmogorov-complexity), and the idea that your codebase had some kind of minimal level of complexity based on the output it was trying to create. This is a neat idea, but in a way, we cheated. Let's look at how. We were trying to figure out the shortest (Javascript) program to generate this output: @@ -96,7 +96,7 @@ function out() { (7 symbols) 1. **Language Matters**: the Kolmogorov complexity is dependent on the language, and the features the language has built in. 2. **Exact Kolmogorov complexity is uncomputable anyway:** Since it's the _theoretical_ minimum program length, it's a fairly abstract idea, so we shouldn't get too hung up on this. There is no function to be able to say, "What's the Kolmogorov complexity of string X?" 3. **What is this new library function we've created?** Is `abcdRepeater` going to be part of _every_ Javascript? If so, then we've shifted [Codebase Risk](/tags/Complexity-Risk) away from ourselves, but we've pushed [Conceptual Integrity Risk](/tags/Conceptual-Integrity-Risk) onto every _other_ user of Javascript, because `abcdRepeater` will be clogging up the JavaScript documentation for everyone, despite being rarely useful. -4. 
**Are there equivalent functions for every single other string?** If so, then compilation is no longer a tractable problem because now we have a massive library of different `XXXRepeater` functions to compile against. So, what we _lose_ in [Codebase Risk](/tags/Codebase-Risk) we gain in [Complexity Risk](/risks/Complexity-Risk.md#space-and-time-complexity).
+4. **Are there equivalent functions for every single other string?** If so, then compilation is no longer a tractable problem because now we have a massive library of different `XXXRepeater` functions to compile against. So, what we _lose_ in [Codebase Risk](/tags/Codebase-Risk) we gain in [Complexity Risk](/risks/Complexity-Risk#space-and-time-complexity).
 5. **Language design, then, is about _ergonomics_:** After you have passed the relatively low bar of providing [Turing Completeness](https://en.wikipedia.org/wiki/Turing_completeness), the key is to provide _useful_ features that enable problems to be solved, without over-burdening the user with features they _don't_ need. And in fact, all software is about this.
 6. **Language Ecosystems _really_ matter**: all modern languages allow extensions via libraries, modules or plugins. If your particular `abcdRepeater` isn't in the main library,
@@ -122,7 +122,7 @@ Adopting complex software dependencies (as shown in the diagram above) might all

Using a software dependency allows us to split a project's complexity into two:

- - The inner complexity of the dependency (how it works internally, its own [internal complexity](/risks/Complexity-Risk.md#kolmogorov-complexity)).
+ - The inner complexity of the dependency (how it works internally, its own [internal complexity](/risks/Complexity-Risk#kolmogorov-complexity)).
 - The complexity of the instructions that we need to write to make the tool work, [the protocol complexity](/tags/Protocol-Risk), which will be a function of the complexity of the tool itself.

![Types of Complexity For a Software Dependency](/img/generated/risks/software-dependency/protocol-complexity.svg)

@@ -131,7 +131,7 @@ As the above diagram shows, the bulk of the complexity of a software tool is hid

### Designing Protocols

-Software is not constrained by _physical_ ergonomics in the same way as a tool is. But ideally, it should have conceptual ergonomics: complexity is hidden away from the user behind the _User Interface_. This is the familiar concept of [Abstraction](/thinking/Glossary.md#abstraction) we've already looked at. As we saw in [Communication Risk](/tags/Learning-Curve-Risk), when we use a new protocol, we face [Learning Curve Risk](/tags/Learning-Curve-Risk).
+Software is not constrained by _physical_ ergonomics in the same way as a tool is. But ideally, it should have conceptual ergonomics: complexity is hidden away from the user behind the _User Interface_. This is the familiar concept of [Abstraction](/thinking/Glossary#abstraction) we've already looked at. As we saw in [Communication Risk](/tags/Communication-Risk), when we use a new protocol, we face [Learning Curve Risk](/tags/Learning-Curve-Risk).

To minimise this, we should apply the [Principle Of Least Astonishment](https://en.wikipedia.org/wiki/Principle_of_least_astonishment) when designing our own protocols:

@@ -192,7 +192,7 @@ In essence, this is Conway's Law:

### 2. Software Libraries

-By choosing a particular software library, we are making a move on the [Risk Landscape](/risks/Risk-Landscape.md) in the hope of moving to a place with more favourable risks. 
Typically, using library code offers a [Schedule Risk](/tags/Schedule-Risk) and [Complexity Risk](/tags/Complexity-Risk) [Silver Bullet](/complexity/Silver-Bullets.md) - a high-speed route over the risk landscape to somewhere nearer where we want to be. But, in return we expect to pick up:
+By choosing a particular software library, we are making a move on the [Risk Landscape](/risks/Risk-Landscape) in the hope of moving to a place with more favourable risks. Typically, using library code offers a [Schedule Risk](/tags/Schedule-Risk) and [Complexity Risk](/tags/Complexity-Risk) Silver Bullet - a high-speed route over the risk landscape to somewhere nearer where we want to be. But, in return we expect to pick up:

 - **[Communication Risk](/tags/Communication-Risk)**: because we now have to learn how to communicate with this new dependency.
 - **[Boundary Risk](/tags/Boundary-Risk)**: because now we are limited to using the functionality provided by this dependency. We have chosen it over alternatives and changing to something else would be more work and therefore costly.

diff --git a/docs/risks/Feature-Risks/Analysis.md b/docs/risks/Feature-Risks/Analysis.md
index d92b3a7e8..1508e2ad6 100644
--- a/docs/risks/Feature-Risks/Analysis.md
+++ b/docs/risks/Feature-Risks/Analysis.md
@@ -51,7 +51,7 @@ Darwin's conception of fitness was not one of athletic prowess, but how well an

 For further reading, you can check out [The Service Quality Model](https://en.wikipedia.org/wiki/SERVQUAL) which the diagram above is derived from. This model analyses the types of _quality gaps_ in services and how consumer expectations and perceptions of a service arise.

-In the [Staging And Classifying](Staging-And-Classifying.md) section we'll come back and build on this model further.
+In the [Staging And Classifying](/risks/Staging-And-Classifying) section we'll come back and build on this model further.

 ### Fit and Audience

diff --git a/docs/risks/Feature-Risks/Conceptual-Integrity-Risk.md b/docs/risks/Feature-Risks/Conceptual-Integrity-Risk.md
index ded6b3188..e60090b24 100644
--- a/docs/risks/Feature-Risks/Conceptual-Integrity-Risk.md
+++ b/docs/risks/Feature-Risks/Conceptual-Integrity-Risk.md
@@ -23,4 +23,4 @@ Sometimes it can go for a lot longer. Here's an example: I once worked on some

 [Feature Phones](https://en.wikipedia.org/wiki/Feature_phone) are another example: although it _seemed_ like the market wanted more and more features added to their phones, [Apple's iPhone](https://en.wikipedia.org/wiki/IPhone) was able to steal huge market share by presenting a much more enjoyable, more coherent user experience, despite being more expensive and having _fewer_ features. Feature Phones had been drowning in increasing [Conceptual Integrity Risk](/tags/Conceptual-Integrity-Risk) without realising it.

-Conceptual Integrity Risk is a particularly pernicious kind of [Feature Risk](/tags/Feature-Risk) which can only be mitigated by good design and [feedback](/thinking/Cadence.md). Human needs are [fractal in nature](../estimating/Fractals.md): the more you examine them, the more complexity you can find. The aim of a product is to capture some needs at a *general* level: you can't hope to anticipate everything. As with the other risks, there is an inherent [Schedule Risk](/tags/Schedule-Risk) as addressing these risks takes _time_. 
\ No newline at end of file
+Conceptual Integrity Risk is a particularly pernicious kind of [Feature Risk](/tags/Feature-Risk) which can only be mitigated by good design and [feedback](/thinking/Cadence). Human needs are [fractal in nature](/estimating/Fractals): the more you examine them, the more complexity you can find. The aim of a product is to capture some needs at a *general* level: you can't hope to anticipate everything. As with the other risks, there is an inherent [Schedule Risk](/tags/Schedule-Risk) as addressing these risks takes _time_. \ No newline at end of file
diff --git a/docs/risks/Feature-Risks/Feature-Drift-Risk.md b/docs/risks/Feature-Risks/Feature-Drift-Risk.md
index 8b31759f0..81e866add 100644
--- a/docs/risks/Feature-Risks/Feature-Drift-Risk.md
+++ b/docs/risks/Feature-Risks/Feature-Drift-Risk.md
@@ -30,4 +30,4 @@ As shown in the diagram, saving your project from Feature Drift Risk means **fur

 Sometimes, the only way to go is to start again with a clean sheet by some **disruptive innovation**.

-[Feature Drift Risk](/tags/Feature-Drift-Risk) is _not the same thing_ as **Requirements Drift**, which is the tendency projects have to expand in scope as they go along. There are lots of reasons they do that, a key one being the [Hidden Risks](/thinking/Glossary.md#hidden-risk) uncovered on the project as it progresses.
+[Feature Drift Risk](/tags/Feature-Drift-Risk) is _not the same thing_ as **Requirements Drift**, which is the tendency projects have to expand in scope as they go along. There are lots of reasons they do that, a key one being the [Hidden Risks](/thinking/Glossary#hidden-risk) uncovered on the project as it progresses.

diff --git a/docs/risks/Feature-Risks/Regression-Risk.md b/docs/risks/Feature-Risks/Regression-Risk.md
index 4284b7176..d31b112b3 100644
--- a/docs/risks/Feature-Risks/Regression-Risk.md
+++ b/docs/risks/Feature-Risks/Regression-Risk.md
@@ -17,9 +17,9 @@ part_of: Feature Risk

 Delivering new features can delight your customers, but breaking existing ones will annoy them!

-[Regression Risk](Feature-Risk.md#regression-risk) is the risk of breaking existing features in your software when you add new ones. As with other feature risks, the eventual result is the same: customers don't have the features they expect.
+Regression Risk is the risk of breaking existing features in your software when you add new ones. As with other feature risks, the eventual result is the same: customers don't have the features they expect.

-Regression Risks increase as your code-base [gains Complexity](/tags/Complexity-Risk). That's because it becomes impossible to keep a complete [Internal Model](/thinking/Glossary.md#internal-model) of the whole thing in your head, and also your software gains "corner cases" or "edge conditions" which don't get tested very often.
+Regression Risks increase as your code-base [gains Complexity](/tags/Complexity-Risk). That's because it becomes impossible to keep a complete [Internal Model](/tags/Internal-Model) of the whole thing in your head, and also your software gains "corner cases" or "edge conditions" which don't get tested very often.

 As shown in the above diagram, you can address Regression Risk with **specification** (defining clearly what the expected behaviour is) and **testing** (both manual and automated), but this takes time and will add extra complexity to your project (either in the form of code for automated tests, written specifications or a more elaborate process for releases). 
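A minimal sketch of that **specification**-plus-**testing** point, using Node's built-in `assert` (`formatPrice` is an invented example function): the test pins the specified behaviour down, so a later change that breaks it fails loudly instead of silently annoying customers:

```javascript
const assert = require("assert");

// An existing feature whose behaviour customers rely on.
function formatPrice(pence) {
  return "£" + (pence / 100).toFixed(2);
}

// Regression tests: the written-down specification, checked on
// every release.
assert.strictEqual(formatPrice(199), "£1.99");
assert.strictEqual(formatPrice(0), "£0.00");
console.log("regression checks passed");
```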
diff --git a/docs/risks/Map-And-Territory-Risk.md b/docs/risks/Map-And-Territory-Risk.md index 950575621..4ef496d7f 100644 --- a/docs/risks/Map-And-Territory-Risk.md +++ b/docs/risks/Map-And-Territory-Risk.md @@ -16,17 +16,17 @@ part_of: Operational Risk -As we discussed in the [Communication Risk](Communication-Risk.md#misinterpretation) section, our understanding of the world is informed by abstractions we create and the names we give them. +As we discussed in the [Communication Risk](Communication-Risk#misinterpretation) section, our understanding of the world is informed by abstractions we create and the names we give them. For example, Risk-First is about naming _risks_ within software development, so we can discuss and understand them better. -Our [Internal Models](/thinking/Glossary.md#internal-model) of the world are constructed from these abstractions and their relationships. +Our [Internal Models](/thinking/Glossary#internal-model) of the world are constructed from these abstractions and their relationships. ![Maps and Territories, and Communication happening between them](/img/generated/risks/map-and-territory/communication.svg) -As the diagram above shows, there is a translation going on here: observations about the arrangement of _atoms_ in the world are _communicated_ to our [Internal Models](/thinking/Glossary.md#internal-model) and stored as patterns of _information_ (measured in bits and bytes). +As the diagram above shows, there is a translation going on here: observations about the arrangement of _atoms_ in the world are _communicated_ to our [Internal Models](/thinking/Glossary#internal-model) and stored as patterns of _information_ (measured in bits and bytes). -We face [Map And Territory Risk](/tags/Map-And-Territory-Risk) because we base our behaviour on our [Internal Models](/thinking/Glossary.md#internal-model) rather than reality itself. It comes from the expression "Confusing the Map for the Territory", attributed to Alfred Korzybski: +We face [Map And Territory Risk](/tags/Map-And-Territory-Risk) because we base our behaviour on our [Internal Models](/thinking/Glossary#internal-model) rather than reality itself. It comes from the expression "Confusing the Map for the Territory", attributed to Alfred Korzybski: > "Polish-American scientist and philosopher Alfred Korzybski remarked that "the map is not the territory" and that "the word is not the thing", encapsulating his view that an abstraction derived from something, or a reaction to it, is not the thing itself. Korzybski held that many people _do_ confuse maps with territories, that is, confuse models of reality with reality itself." - [Map-Territory Relation, _Wikipedia_](https://en.wikipedia.org/wiki/Map–territory_relation) @@ -39,7 +39,7 @@ As the above diagram shows, there are two parts to this risk, which we are going ## Fitness -In the [Feature Risk](/tags/Feature-Risk) section we looked at ways in which our software project might have risks due to having _inappropriate_ features ([Feature Fit Risk](/tags/Feature-Fit-Risk)), _broken_ features ([Feature Implementation Risk](/tags/Implementation-Risk)) or _too many of the wrong_ features ([Conceptual Integrity Risk](/tags/Conceptual-Integrity-Risk)). Let's see how these same categories also apply to [Internal Models](/thinking/Glossary.md#internal-model). 
+In the [Feature Risk](/tags/Feature-Risk) section we looked at ways in which our software project might have risks due to having _inappropriate_ features ([Feature Fit Risk](/tags/Feature-Fit-Risk)), _broken_ features ([Feature Implementation Risk](/tags/Implementation-Risk)) or _too many of the wrong_ features ([Conceptual Integrity Risk](/tags/Conceptual-Integrity-Risk)). Let's see how these same categories also apply to [Internal Models](/thinking/Glossary#internal-model).

### Example: The SatNav

@@ -51,22 +51,22 @@ This wasn't borne of stupidity, but experience: SatNavs are pretty reliable. _S

There are two [Map and Territory Risks](/tags/Map-And-Territory-Risk) here:

-- The [Internal Model](/thinking/Glossary.md#internal-model) of the _SatNav_ contained information that was wrong: the track had been marked up as a road, rather than a path.
-- The [Internal Model](/thinking/Glossary.md#internal-model) of the _driver_ was wrong: his abstraction of "the SatNav is always right" turned out to be only _mostly_ accurate.
+- The [Internal Model](/thinking/Glossary#internal-model) of the _SatNav_ contained information that was wrong: the track had been marked up as a road, rather than a path.
+- The [Internal Model](/thinking/Glossary#internal-model) of the _driver_ was wrong: his abstraction of "the SatNav is always right" turned out to be only _mostly_ accurate.

-You could argue that both the SatNav and the Driver's _[Internal Model](/thinking/Glossary.md#internal-model)_ had bugs in them. That is, they both suffer the [Feature Implementation Risk](/tags/Implementation-Risk) we saw in the [Feature Risk](/tags/Feature-Risk) section. If a SatNav has too much of this, you'd end up not trusting it, and getting a new one. With your _personal_ [Internal Model](/thinking/Glossary.md#internal-model), you can't buy a new one, but you may learn to _trust your assumptions less_.
+You could argue that both the SatNav and the Driver's _[Internal Model](/thinking/Glossary#internal-model)_ had bugs in them. That is, they both suffer the [Feature Implementation Risk](/tags/Implementation-Risk) we saw in the [Feature Risk](/tags/Feature-Risk) section. If a SatNav has too much of this, you'd end up not trusting it, and getting a new one. With your _personal_ [Internal Model](/thinking/Glossary#internal-model), you can't buy a new one, but you may learn to _trust your assumptions less_.

![Some examples of Feature Fit Risks, as manifested in the Internal Model](/img/generated/risks/map-and-territory/map_and_territory_table_1.svg)

-The diagram above shows how types of [Feature Fit Risk](/tags/Feature-Risk) can manifest in the [Internal Model](/thinking/Glossary.md#internal-model).
+The diagram above shows how types of [Feature Fit Risk](/tags/Feature-Fit-Risk) can manifest in the [Internal Model](/thinking/Glossary#internal-model).

## Audience

-Communication allows us to _share_ information between [Internal Models](/thinking/Glossary.md#internal-model) of a whole audience of people. The [Communication Risk](/tags/Communication-Risk) and [Coordination Risk](/tags/Coordination-Risk) sections covered the difficulties inherent in aligning [Internal Models](/thinking/Glossary.md#internal-model) so that they cooperate.
+Communication allows us to _share_ information between [Internal Models](/thinking/Glossary#internal-model) of a whole audience of people.
The [Communication Risk](/tags/Communication-Risk) and [Coordination Risk](/tags/Coordination-Risk) sections covered the difficulties inherent in aligning [Internal Models](/thinking/Glossary#internal-model) so that they cooperate.

![Relative popularity of "Machine Learning" and "Big Data" as search terms on [Google Trends](https://trends.google.com), 2011-2018 ](/img/google-trends.png)

-But how does [Map and Territory Risk](/tags/Map-And-Territory-Risk) apply across a population of [Internal Models](/thinking/Glossary.md#internal-model)? Can we track the rise-and-fall of _ideas_ like we track stock prices? In effect, this is what [Google Trends](https://trends.google.com) does. In the chart above, we can see the relative popularity of two search terms over time. This is probably as good an indicator as any of the changing popularity of an abstraction within an audience.
+But how does [Map and Territory Risk](/tags/Map-And-Territory-Risk) apply across a population of [Internal Models](/thinking/Glossary#internal-model)? Can we track the rise-and-fall of _ideas_ like we track stock prices? In effect, this is what [Google Trends](https://trends.google.com) does. In the chart above, we can see the relative popularity of two search terms over time. This is probably as good an indicator as any of the changing popularity of an abstraction within an audience.

### Example: Map And Territory Risk Drives The Hype Cycle

@@ -171,7 +171,7 @@ The diagram above shows how Evolution-type [Feature Risks](/tags/Feature-Risk) c

## Humans and Machines

-In the example of the SatNav, we saw how the _quality_ of [Map and Territory Risk](/tags/Map-And-Territory-Risk) is different for _people_ and _machines_. Whereas people _should_ be expected show skepticism for new (unlikely) information our databases accept it unquestioningly. _Forgetting_ is an everyday, usually benign part of our human [Internal Model](/thinking/Glossary.md#internal-model), but for software systems it is a production crisis involving 3am calls and backups.
+In the example of the SatNav, we saw how the _quality_ of [Map and Territory Risk](/tags/Map-And-Territory-Risk) is different for _people_ and _machines_. Whereas people _should_ be expected to show skepticism for new (unlikely) information, our databases accept it unquestioningly. _Forgetting_ is an everyday, usually benign part of our human [Internal Model](/thinking/Glossary#internal-model), but for software systems it is a production crisis involving 3am calls and backups.

For Humans, [Map and Territory Risk](/tags/Map-And-Territory-Risk) is exacerbated by [cognitive biases](https://en.wikipedia.org/wiki/List_of_cognitive_biases):

@@ -215,4 +215,4 @@ As the book points out, while everyone _persists_ in using an inadequate abstrac

Scientific journals are a single example taken from a closely argued book investigating lots of cases of this kind. It's worth taking the time to read a couple of the chapters on this interesting topic. (Like Risk-First it is available to read online).

-As usual, this section forms a grab-bag of examples in a complex topic. But it's time to move on as there is one last stop we have to make on the [Risk Landscape](/thinking/Glossary.md#risk-landscape), and that is to look at [Operational Risk](/tags/Operational-Risk).
\ No newline at end of file
+As usual, this section forms a grab-bag of examples in a complex topic.
But it's time to move on as there is one last stop we have to make on the [Risk Landscape](/thinking/Glossary#risk-landscape), and that is to look at [Operational Risk](/tags/Operational-Risk). \ No newline at end of file diff --git a/docs/risks/Operational-Risk.md b/docs/risks/Operational-Risk.md index 23636c575..8510d1dc6 100644 --- a/docs/risks/Operational-Risk.md +++ b/docs/risks/Operational-Risk.md @@ -26,7 +26,7 @@ There is a lot to this subject, so this section is just a taster: we're going to When building software, it's tempting to take a very narrow view of the dependencies of a system, but [Operational Risks](/tags/Operational-Risk) are often caused by dependencies we _don't_ consider - i.e. the **Operational Context** within which the system is operating. Here are some examples: - - **[Staff Risks](Scarcity-Risk.md#staff-risk)**: + - **[Staff Risks](Scarcity-Risk#staff-risk)**: - Freak weather conditions affecting ability of staff to get to work, interrupting the development and support teams. - Reputational damage caused when staff are rude to the customers. @@ -97,7 +97,7 @@ As we saw in [Map and Territory Risk](/tags/Map-And-Territory-Risk), it's very e ### Scanning The Operational Context -There are plenty of [Hidden Risks](/thinking/Glossary.md#hidden-risk) within the operation's environment. These change all the time in response to economic, legal or political change. In order to manage a risk, you have to uncover it, so part of [Operations Management](#operations-management) is to look for trouble. +There are plenty of [Hidden Risks](/thinking/Glossary#hidden-risk) within the operation's environment. These change all the time in response to economic, legal or political change. In order to manage a risk, you have to uncover it, so part of [Operations Management](#operations-management) is to look for trouble. - **Environmental Scanning** is all about trying to determine which changes in the environment are going to impact your operation. Here we are trying to determine the level of [Dependency Risk](/tags/Dependency-Risk) we face for external dependencies, such as suppliers, customers, markets and regulation. Tools like [PEST](https://en.wikipedia.org/wiki/PEST_analysis) are relevant, as is - **[Penetration Testing](https://en.wikipedia.org/wiki/Penetration_test)**: looking for security weaknesses within the operation. See [OWASP](https://en.wikipedia.org/wiki/OWASP) for examples. @@ -139,7 +139,7 @@ A Risk-First re-framing of this (as shown in the diagram above) might be the bal - The perceived [Scarcity Risks](/tags/Scarcity-Risk) (such as funding, time available, etc) of staying in development (pressure to ship). - The perceived [Trust & Belief Risk](/tags/Trust-And-Belief-Risk), [Feature Risk](/tags/Feature-Risk) and [Operational Risk](/tags/Operational-Risk) of going to production (pressure to improve). -The "should we ship?" decision is therefore a complex one. In [Meeting Reality](/thinking/Meeting-Reality.md), we discussed that it's better to do this "sooner, more frequently, in smaller chunks and with feedback". We can meet [Operational Risk](/tags/Operational-Risk) _on our own terms_ by doing so: +The "should we ship?" decision is therefore a complex one. In [Meeting Reality](/thinking/Meeting-Reality), we discussed that it's better to do this "sooner, more frequently, in smaller chunks and with feedback". We can meet [Operational Risk](/tags/Operational-Risk) _on our own terms_ by doing so: |Meet Reality... 
|Techniques | |----------------------------|----------------------------------------------------------------------| @@ -152,7 +152,7 @@ The "should we ship?" decision is therefore a complex one. In [Meeting Reality] ## The End Of The Road -In a way, [actions](/thinking/Glossary.md#taking-action) like **Design** and **Improvement** bring us right back to where we started from: identifying [Dependency Risks](/tags/Dependency-Risk), [Feature Risks](/tags/Feature-Risk) and [Complexity Risks](/tags/Complexity-Risk) that hinder our operation, and mitigating them through actions like _software development_. +In a way, [actions](/thinking/Glossary#taking-action) like **Design** and **Improvement** bring us right back to where we started from: identifying [Dependency Risks](/tags/Dependency-Risk), [Feature Risks](/tags/Feature-Risk) and [Complexity Risks](/tags/Complexity-Risk) that hinder our operation, and mitigating them through actions like _software development_. -Our safari of risk is finally complete: it's time to reflect on what we've seen in the next section, [Staging and Classifying](Staging-And-Classifying.md). +Our safari of risk is finally complete: it's time to reflect on what we've seen in the next section, [Staging and Classifying](Staging-And-Classifying). \ No newline at end of file diff --git a/docs/risks/Risk-Landscape.md b/docs/risks/Risk-Landscape.md index 5bafd2ecc..ee95eb075 100644 --- a/docs/risks/Risk-Landscape.md +++ b/docs/risks/Risk-Landscape.md @@ -31,9 +31,9 @@ To get there, we need to avoid the pitfalls dotted around the landscape like "Ru Our job as developers is to _navigate_ across this landscape, testing the way as we go, trying to get to a position of _more favourable risk_. -It's tempting to think of the [Risk Landscape](/risks/Risk-Landscape.md) as being like a [Fitness Landscape](https://en.wikipedia.org/wiki/Fitness_landscape). That is, you have a "cost function" which is your height above the landscape, and you try and optimise by moving downhill in a [Gradient Descent](https://en.wikipedia.org/wiki/Gradient_descent) fashion. +It's tempting to think of the [Risk Landscape](/risks/Risk-Landscape) as being like a [Fitness Landscape](https://en.wikipedia.org/wiki/Fitness_landscape). That is, you have a "cost function" which is your height above the landscape, and you try and optimise by moving downhill in a [Gradient Descent](https://en.wikipedia.org/wiki/Gradient_descent) fashion. -However, there's a problem with this: we don't have that cost function. We can only _guess_ at what risks there are. We have to go on our _experience_. For this reason, I prefer to think of the [Risk Landscape](/risks/Risk-Landscape.md) as a terrain which contains various categories of _fauna_ or _obstacles_ which we will find as we explore it. +However, there's a problem with this: we don't have that cost function. We can only _guess_ at what risks there are. We have to go on our _experience_. For this reason, I prefer to think of the [Risk Landscape](/risks/Risk-Landscape) as a terrain which contains various categories of _fauna_ or _obstacles_ which we will find as we explore it. ## Why Should We Categorise The Risks? @@ -61,19 +61,19 @@ Below is a table outlining the different risks we'll see. 
There _is_ an order t |[Deadline Risk](/tags/Deadline-Risk) |The risk of having a date to hit.| |[Software Dependency Risk](/tags/Software-Dependency-Risk)|The risk of depending on a software library, service or function.| |[Process Risk](/tags/Process-Risk) |When you depend on a business process, or human process to give you something you need.| -|[Boundary Risk](/tags/Boundary-Risk) |Risks due to making decisions that limit your choices later on. Sometimes, you go the wrong way on the [Risk Landscape](/risks/Risk-Landscape.md) and it's hard to get back to where you want to be.| -|[Agency Risk](/tags/Agency-Risk) |Risks that staff have their own [Goals](/thinking/Glossary.md#goal), which might not align with those of the project or team.| +|[Boundary Risk](/tags/Boundary-Risk) |Risks due to making decisions that limit your choices later on. Sometimes, you go the wrong way on the [Risk Landscape](/risks/Risk-Landscape) and it's hard to get back to where you want to be.| +|[Agency Risk](/tags/Agency-Risk) |Risks that staff have their own [Goals](/thinking/Glossary#goal), which might not align with those of the project or team.| |[Coordination Risk](/tags/Coordination-Risk) |Risks due to the fact that systems contain multiple agents, which need to work together.| -|[Map And Territory Risk](/tags/Map-And-Territory-Risk) |Risks due to the fact that people don't see the world as it really is. (After all, they're working off different, imperfect [Internal Models](/thinking/Glossary.md#internal-model).)| +|[Map And Territory Risk](/tags/Map-And-Territory-Risk) |Risks due to the fact that people don't see the world as it really is. (After all, they're working off different, imperfect [Internal Models](/thinking/Glossary#internal-model).)| |[Operational Risk](/tags/Operational-Risk) |Software is embedded in a system containing people, buildings, machines and other services. Operational risk considers this wider picture of risk associated with running a software service or business in the real world.| -After the last stop on the tour, in [Staging and Classifying](Staging-And-Classifying.md) we'll have a recap about what we've seen and make some guesses about how things fit together. +After the last stop on the tour, in [Staging and Classifying](Staging-And-Classifying) we'll have a recap about what we've seen and make some guesses about how things fit together. -Also on that page is a [periodic table](Staging-And-Classifying.md#towards-a-periodic-table-of-risks) showing a diagrammatic view of how all these risks fit together. +Also on that page is a [periodic table](Staging-And-Classifying#towards-a-periodic-table-of-risks) showing a diagrammatic view of how all these risks fit together. ## Causation & Correlation -Although we're going to try and categorise the kinds of things we see on this [Risk Landscape](/risks/Risk-Landscape.md), this isn't going to be perfect, because: +Although we're going to try and categorise the kinds of things we see on this [Risk Landscape](/risks/Risk-Landscape), this isn't going to be perfect, because: - One risk can "blend" into another just like sometimes a "field" is also a "car-park", or a building might contain some trees (but isn't a forest). - Ameliorating one risk probably means accepting another (hopefully lesser) risk. 
@@ -97,7 +97,7 @@ In the financial crisis of 2007, these models of risk didn't turn out to be much

- This caused credit defaults (the thing that [Credit Risk](https://en.wikipedia.org/wiki/Credit_risk) measures were meant to guard against) even though the banks _technically_ were solvent.
- Once credit defaults started, this worried investors in the banks, which had massive [Market Risk](https://en.wikipedia.org/wiki/Market_risk) impacts that none of the models foresaw.

-All the [Risks](/thinking/Glossary.md#risk) were [correlated](https://www.investopedia.com/terms/c/correlation.asp). That is, they were affected by the _same underlying events_, or _each other_.
+All the [Risks](/thinking/Glossary#risk) were [correlated](https://www.investopedia.com/terms/c/correlation.asp). That is, they were affected by the _same underlying events_, or _each other_.

![Causation shown on a Risk-First Diagram. More complexity is likely to lead to more Operational Risk](/img/generated/risks/landscape/causation.svg)

diff --git a/docs/risks/Staging-And-Classifying.md b/docs/risks/Staging-And-Classifying.md
index ff22147f7..457b84b9d 100644
--- a/docs/risks/Staging-And-Classifying.md
+++ b/docs/risks/Staging-And-Classifying.md
@@ -20,17 +20,17 @@ But if we are good collectors, then before we finish we should _[Stage](https://

## Towards A "Periodic Table" Of Risks

-As we said [at the start](A-Pattern-Language.md), Risk-First is all about developing _A Pattern Language_. We can use the terms like "[Feature Risk](/tags/Feature-Risk)" or "[Learning Curve Risk](/tags/Learning-Curve-Risk)" to explain phenomena we see on software projects. If we want to [De-Risk](/thinking/De-Risking.md) our work, we need this power of explanation so we can talk about how to go about it.
+As we said [at the start](A-Pattern-Language), Risk-First is all about developing _A Pattern Language_. We can use terms like "[Feature Risk](/tags/Feature-Risk)" or "[Learning Curve Risk](/tags/Learning-Curve-Risk)" to explain phenomena we see on software projects. If we want to [De-Risk](/thinking/De-Risking) our work, we need this power of explanation so we can talk about how to go about it.

![Periodic Table of Risks, Horizontal](/img/generated/staging-and-classifying/periodic-horizontal.svg)

-The diagram above compiles all of the risks we've seen so far on our tour across the [Risk Landscape](/risks/Risk-Landscape.md). Just like a periodic table, there are perhaps others left to discover. _Unlike_ a periodic table, these risks are not completely distinct: they mix like paint and blend into one another.
+The diagram above compiles all of the risks we've seen so far on our tour across the [Risk Landscape](/risks/Risk-Landscape). Just like a periodic table, there are perhaps others left to discover. _Unlike_ a periodic table, these risks are not completely distinct: they mix like paint and blend into one another.

If you've been reading closely, you'll notice that a number of themes come up again and again within the different sections. It's time to look at the _patterns within the patterns_.

## The Power Of Abstractions

-[Abstraction](/thinking/Glossary.md#abstraction) appears as a concept continually: in [Communication Risk](/tags/Communication-Risk), [Complexity Metrics](/risks/Complexity-Risk.md#kolmogorov-complexity), [Map and Territory Risk](/tags/Map-And-Territory-Risk) or how it causes [Boundary Risk](/tags/Boundary-Risk).
We've looked at some complicated examples of abstractions, such as [network protocols](Communication-Risk.md#protocols), [dependencies on technology](/tags/Software-Dependency-Risk) or [Business Processes](Process-Risk.md#the-purpose-of-process).
+[Abstraction](/thinking/Glossary#abstraction) appears as a concept continually: in [Communication Risk](/tags/Communication-Risk), [Complexity Metrics](/risks/Complexity-Risk#kolmogorov-complexity), [Map and Territory Risk](/tags/Map-And-Territory-Risk) or how it causes [Boundary Risk](/tags/Boundary-Risk). We've looked at some complicated examples of abstractions, such as [network protocols](Communication-Risk#protocols), [dependencies on technology](/tags/Software-Dependency-Risk) or [Business Processes](Process-Risk#the-purpose-of-process).

Let's now _generalize_ what is happening with abstraction. To do this, we'll consider the simplest example of abstraction: _naming a pattern_ of behaviour we see in the real world, such as "Binge Watching" or "Remote Working", or naming a category of insects as "Beetles".

@@ -52,7 +52,7 @@ As shown in the above diagram, _using an abstraction you already know_ means:

As shown in the above diagram, _inventing a new abstraction_ means:

- **Mitigating [Feature Risk](/tags/Feature-Risk).** By _giving a name to something_ (or building a new product, or a way of working) you are offering up something that someone else can use. This should mitigate [Feature Risk](/tags/Feature-Risk) in the sense that other people can choose to use it, if it fits their requirements.
-- **Creating a [Protocol](Communication-Risk.md#protocols).** Introducing _new words to a language_ creates [Protocol Risk](/tags/Protocol-Risk) as most people won't know what it means.
+- **Creating a [Protocol](Communication-Risk#protocols).** Introducing _new words to a language_ creates [Protocol Risk](/tags/Protocol-Risk) as most people won't know what it means.
- **Increasing [Complexity Risk](/tags/Complexity-Risk).** Because the more words we have, the more complex the language is.
- **Creating the opportunity for [Boundary Risk](/tags/Boundary-Risk).** By naming something, you _implicitly_ create a boundary, because the world is now divided into "things which _are_ X" and "things which _are not_ X". _Boundary Risk arises from abstractions._

@@ -66,7 +66,7 @@ As shown in the above diagram, _learning a new abstraction_ means:

- **Accepting [Boundary Risks](/tags/Boundary-Risk).** Commitment to one abstraction over another means that you have the opportunity cost of the other abstractions that you could have used.
- **Accepting [Map And Territory Risk](/tags/Map-And-Territory-Risk).** Because the word refers to the _concept_ of the thing, and _not the thing itself_.

-Abstraction is everywhere and seems to be at the heart of what our brains do. But clearly, like [taking any other action](/thinking/Glossary.md#taking-action) there is always trade-off in terms of risk.
+Abstraction is everywhere and seems to be at the heart of what our brains do. But clearly, like [taking any other action](/thinking/Glossary#taking-action), there is always a trade-off in terms of risk.

## Your Feature Risk is Someone Else's Dependency Risk

@@ -82,7 +82,7 @@ As shown in the diagram above, relationships of features/dependencies are the ba

## The Work Continues

-On this journey around the [Risk Landscape](/risks/Risk-Landscape.md) we've collected a (hopefully) good, representative sample of [Risks](/thinking/Glossary.md#risk) and where to find them.
But there are more out there. How many of these have you seen on your projects? What is missing? What is wrong? +On this journey around the [Risk Landscape](/risks/Risk-Landscape) we've collected a (hopefully) good, representative sample of [Risks](/thinking/Glossary#risk) and where to find them. But there are more out there. How many of these have you seen on your projects? What is missing? What is wrong? Please help by reporting back what you find. diff --git a/docs/risks/Start.md b/docs/risks/Start.md index 7371cfeff..7ce1bc04e 100644 --- a/docs/risks/Start.md +++ b/docs/risks/Start.md @@ -15,11 +15,11 @@ tweet: yes # Risks -Much of the content of [Risk-First](https://riskfirst.org) is a collection of [Risks as Patterns](A-Pattern-Language.md). +Much of the content of [Risk-First](https://riskfirst.org) is a collection of [Risks as Patterns](A-Pattern-Language). Here, we're going to take you through the various types of Risk you will face on every software project. -In [Thinking Risk-First](/thinking/One-Size-Fits-No-One.md), we saw how _Lean Software Development_ owed its existence to production-line manufacturing techniques developed at Toyota. And we saw that the _Waterfall_ approach originally came from engineering. If Risk-First is anything, it's about applying the techniques of _Risk Management_ to the discipline of _Software Development_ (there's nothing new under the sun, after all). +In [Thinking Risk-First](/thinking/One-Size-Fits-No-One), we saw how _Lean Software Development_ owed its existence to production-line manufacturing techniques developed at Toyota. And we saw that the _Waterfall_ approach originally came from engineering. If Risk-First is anything, it's about applying the techniques of _Risk Management_ to the discipline of _Software Development_ (there's nothing new under the sun, after all). One key activity of Risk Management we haven't discussed yet is _categorizing_ risks. So, this track of Risk-First is all about developing categories of risks for use in Software Development. 
diff --git a/docs/tags.yml b/docs/tags.yml index bc50b7eda..3b6023350 100644 --- a/docs/tags.yml +++ b/docs/tags.yml @@ -18,6 +18,10 @@ label: "Analysis" permalink: "Analysis" +"Anti-Goal": + label: "Anti-Goal" + permalink: "Anti-Goal" + "Approvals": label: "Approvals" permalink: "Approvals" @@ -201,6 +205,10 @@ "Performance Testing": label: "Performance Testing" permalink: "Performance-Testing" + +"Pressure": + label: "Pressure" + permalink: "Pressure" "Prioritising": label: "Prioritising" @@ -277,11 +285,15 @@ "Schedule Risk": label: "Schedule Risk" permalink: "Schedule-Risk" - + "Scrum": label: "Scrum" permalink: "Scrum" +"Security Risk": + label: "Security Risk" + permalink: "Security-Risk" + "Security Testing": label: "Security Testing" permalink: "Security-Testing" @@ -330,9 +342,9 @@ label: "Waterfall Development" permalink: "Waterfall-Development" -Extreme Programming: +"Extreme Programming": label: "Extreme Programming" - permalink: Extreme-Programming" + permalink: "Extreme-Programming" "Agile": label: "Agile" @@ -452,15 +464,15 @@ Extreme Programming: "Expected Value": label: "Expected Value" - permalink: "Expected Value" + permalink: "Expected-Value" "Expected Return": label: "Expected Return" - permalink: "Expected Return" + permalink: "Expected-Return" "Demand Management": label: "Demand Management" - permalink: "Demand Management" + permalink: "Demand-Management" "Standardisation": label: "Standardisation" diff --git a/docs/thinking/A-Conversation.md b/docs/thinking/A-Conversation.md index 9e5f9f8e8..e83aa7077 100644 --- a/docs/thinking/A-Conversation.md +++ b/docs/thinking/A-Conversation.md @@ -25,7 +25,7 @@ Uniquely as a species, humans are fascinated by story-telling _precisely because As humans, we all bring our own experiences to bear on the best way to solve problems. Sometimes, experience tells us that solving a problem one way will create a new _worse_ problem. -It's key that we share our experiences to improve everyone's [Internal Model](/thinking/Glossary.md#internal-model)s. +It's key that we share our experiences to improve everyone's [Internal Model](/thinking/Glossary#internal-model)s. ## A Risk Conversation @@ -41,9 +41,9 @@ Synergy's release process means that the app-store submission must happen in a f **Eve**: Well, you know Synergy did their review and asked us to upgrade our Web Server to only allow TLS version 1.1 and greater? -**Bob**: Yes, I remember: we discussed it as a team and thought the simplest thing would be to change the security settings on the Web Server, but we all felt it was pretty risky. We decided that in order to flush out [Hidden Risk](/thinking/Glossary.md#hidden-risk), we'd upgrade our entire production site to use it _now_, rather than wait for the app launch. **(1)** +**Bob**: Yes, I remember: we discussed it as a team and thought the simplest thing would be to change the security settings on the Web Server, but we all felt it was pretty risky. We decided that in order to flush out [Hidden Risk](/thinking/Glossary#hidden-risk), we'd upgrade our entire production site to use it _now_, rather than wait for the app launch. **(1)** -**Eve**: Right, and it _did_ flush out [Hidden Risk](/thinking/Glossary.md#hidden-risk): some of our existing software broke on Windows 7, which sadly we still need to support. So, we had to back it out. +**Eve**: Right, and it _did_ flush out [Hidden Risk](/thinking/Glossary#hidden-risk): some of our existing software broke on Windows 7, which sadly we still need to support. So, we had to back it out. 
**Bob**: Ok, well I guess it's good we found out _now_. It would have been a disaster to discover this after the app had gone live on Synergy's app-store. **(2)** @@ -53,9 +53,9 @@ Synergy's release process means that the app-store submission must happen in a f **Eve**: How about we run two web-servers? One for the existing content, and one for our new Synergy app? We'd have to get a new external IP address, handle DNS setup, change the firewalls, and then deploy a new version of the Web Server software on the production boxes... **(3)** -**Bob**: This feels like there'd be a lot of [Attendant Risk](/thinking/Glossary.md#attendant-risk): we're adding [Complexity Risk](/tags/Complexity-Risk) to our estate, and all of this needs to be handled by the Networking Team, so we're picking up a lot of [Process Risk](/tags/Process-Risk). I'm also worried that there are too many steps here, and we're going to discover loads of [Hidden Risks](/thinking/Glossary.md#hidden-risk) as we go. **(4)** +**Bob**: This feels like there'd be a lot of [Attendant Risk](/thinking/Glossary#attendant-risk): we're adding [Complexity Risk](/tags/Complexity-Risk) to our estate, and all of this needs to be handled by the Networking Team, so we're picking up a lot of [Process Risk](/tags/Process-Risk). I'm also worried that there are too many steps here, and we're going to discover loads of [Hidden Risks](/thinking/Glossary#hidden-risk) as we go. **(4)** -**Eve**: Well, you're correct on the first one. But, I've done this before not that long ago for a Chinese project, so I know the process - we shouldn't run into any new [Hidden Risk](/thinking/Glossary.md#hidden-risk). **(4)** +**Eve**: Well, you're correct on the first one. But, I've done this before not that long ago for a Chinese project, so I know the process - we shouldn't run into any new [Hidden Risk](/thinking/Glossary#hidden-risk). **(4)** **Bob**: OK, fair enough. But isn't there something simpler we can do? Maybe some settings in the Web Server? **(4)** @@ -63,7 +63,7 @@ Synergy's release process means that the app-store submission must happen in a f **Bob**: OK, and upgrading to Apache is a _big_ risk, right? We'd have to migrate all of our configuration... **(4)** -**Eve**: Yes, let's not go there. So, _changing_ the settings on Baroque, we have the risk that it's not supported by the software and we're back where we started. Also, if we isolate the Synergy app stuff now, we can mess around with it at any point in future, which is a big win in case there are other [Hidden Risks](/thinking/Glossary.md#hidden-risk) with the security changes that we don't know about yet. **(5)** +**Eve**: Yes, let's not go there. So, _changing_ the settings on Baroque, we have the risk that it's not supported by the software and we're back where we started. Also, if we isolate the Synergy app stuff now, we can mess around with it at any point in future, which is a big win in case there are other [Hidden Risks](/thinking/Glossary#hidden-risk) with the security changes that we don't know about yet. **(5)** **Bob**: OK, I can see that buys us something, but time is really short and we have holidays coming up. @@ -77,14 +77,14 @@ Synergy's release process means that the app-store submission must happen in a f At this point, you might be wondering what all the fuss is about. This stuff is all obvious! It's what we do anyway! Perhaps. Risk management _is_ what we do anyway. Let's go through the conversation and see how this panned out: -1. 
Here, Bob and Eve are [Meeting Reality](Meeting-Reality.md) by trying something risky early on to get feedback.
+1. Here, Bob and Eve are [Meeting Reality](Meeting-Reality) by trying something risky early on to get feedback.
2. They do this because they know software releases are high-risk and there is reputational risk to consider.
-3. They consider [Ignoring](De-Risking.md#ignore) the problem, but then decide to try and [Reduce](De-Risking.md#reduce).
-4. They evaluate various solutions, comparing [Internal Models](Glossary.md#internal-model) of the risks each poses.
-5. They create an [option](De-Risking.md#specific-tactics) for solving the problem in the future.
-6. They [control](De-Risking.md#control) the risk by time-boxing the solution.
-7. They [share](De-Risking.md#share) the risk with another team.
-8. They [monitor](De-Risking.md#monitor) the risk of using the networking team.
+3. They consider [Ignoring](De-Risking#ignore) the problem, but then decide to try and [Reduce](De-Risking#reduce).
+4. They evaluate various solutions, comparing [Internal Models](Glossary#internal-model) of the risks each poses.
+5. They create an [option](De-Risking#specific-tactics) for solving the problem in the future.
+6. They [control](De-Risking#control) the risk by time-boxing the solution.
+7. They [share](De-Risking#share) the risk with another team.
+8. They [monitor](De-Risking#monitor) the risk of using the networking team.

The problem is that although all this _is_ obvious, it appears to have largely escaped codification within the literature, practices and methodologies of software development. Further, while it is obvious, there is a huge hole: successful De-Risking depends heavily on individual experience and talent.

@@ -92,7 +92,7 @@ The problem is that although all this _is_ obvious, it appears to have largely e

This section has hopefully underscored the importance of _talking about risk_ with colleagues. If you're working in a team where this isn't happening then perhaps you can introduce this practice and improve your team's odds of winning.

-If you're working in a larger organisation then the chances are that risk management is already well embedded in the organisation. So in the next section we'll have a quick run-down covering what developers need to know about [Enterprise Risk Management](Enterprise-Risk.md).
+If you're working in a larger organisation then the chances are that risk management is already well embedded in the organisation. So in the next section we'll have a quick run-down covering what developers need to know about [Enterprise Risk Management](Enterprise-Risk).

diff --git a/docs/thinking/A-Simple-Scenario.md b/docs/thinking/A-Simple-Scenario.md
index 5ae802a28..86ff9c685 100644
--- a/docs/thinking/A-Simple-Scenario.md
+++ b/docs/thinking/A-Simple-Scenario.md
@@ -42,53 +42,53 @@ For a moment, forget about software completely and think about _any endeavour at

## Goal In Mind

-Now, in this endeavour, we want to be successful. That is to say, we have a **[Goal](/thinking/Glossary.md#goal)** in mind: we want our friends to go home satisfied after a decent meal and not to feel hungry. As a bonus, we might also want to spend time talking with them before and during the meal. So, now to achieve our [Goal](/thinking/Glossary.md#goal) we *probably* have to do some tasks.
+Now, in this endeavour, we want to be successful.
That is to say, we have a **[Goal](/thinking/Glossary#goal)** in mind: we want our friends to go home satisfied after a decent meal and not to feel hungry. As a bonus, we might also want to spend time talking with them before and during the meal. So, now to achieve our [Goal](/thinking/Glossary#goal) we *probably* have to do some tasks.

-Since our goal only exists _in our head_, we can say it is part of our **[Internal Model](/thinking/Glossary.md#internal-model)** of the world. That is, the model we have of reality. This model extends to _predicting what will happen_.
+Since our goal only exists _in our head_, we can say it is part of our **[Internal Model](/thinking/Glossary#internal-model)** of the world. That is, the model we have of reality. This model extends to _predicting what will happen_.

If we do nothing, our friends will turn up and maybe there's nothing in the house for them to eat. Or maybe, the thing that you're going to cook is going to take hours and they'll have to sit around and wait for you to cook it and they'll leave before it's ready. Maybe you'll be some ingredients short, or maybe you're not confident of the steps to prepare the meal and you're worried about messing it all up.

## Attendant Risks

-These _nagging doubts_ that are going through your head are what I'll call the [Attendant Risks](/thinking/Glossary.md#attendant-risk): they're the ones that will occur to you as you start to think about what will happen.
+These _nagging doubts_ that are going through your head are what I'll call the [Attendant Risks](/thinking/Glossary#attendant-risk): they're the ones that will occur to you as you start to think about what will happen.

![Goal, with the risks you know about](/img/generated/introduction/goal_in_mind.svg)

When we go about preparing for this wonderful evening, we can choose to deal with these risks: shop for the ingredients in advance, prepare parts of the meal and maybe practice the cooking in advance. Or, we can wing it and sometimes we'll get lucky.

-How much effort we expend on these [Attendant Risks](/thinking/Glossary.md#attendant-risk) depends on how big we think they are. For example, if you know there's a 24-hour shop, you'll probably not worry too much about getting the ingredients well in advance (although, the shop _could still be closed_).
+How much effort we expend on these [Attendant Risks](/thinking/Glossary#attendant-risk) depends on how big we think they are. For example, if you know there's a 24-hour shop, you'll probably not worry too much about getting the ingredients well in advance (although, the shop _could still be closed_).

## Hidden Risks

-[Attendant Risks](/thinking/Glossary.md#attendant-risk) are risks you are aware of. You may not be able to exactly _quantify_ them, but you know they exist. But there are also **[Hidden Risks](/thinking/Glossary.md#attendant-risk)** that you _don't_ know about: if you're poaching eggs for dinner, perhaps you didn't know that fresh eggs poach best. Donald Rumsfeld famously called these kinds of risks "Unknown Unknowns":
+[Attendant Risks](/thinking/Glossary#attendant-risk) are risks you are aware of. You may not be able to exactly _quantify_ them, but you know they exist. But there are also **[Hidden Risks](/thinking/Glossary#hidden-risk)** that you _don't_ know about: if you're poaching eggs for dinner, perhaps you didn't know that fresh eggs poach best.
Donald Rumsfeld famously called these kinds of risks "Unknown Unknowns":

> "Reports that say that something hasn't happened are always interesting to me, because as we know, there are known knowns; there are things we know we know. We also know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns—the ones we don't know we don't know. And if one looks throughout the history of our country and other free countries, it is the latter category that tend to be the difficult ones." - [Donald Rumsfeld, _Wikipedia_](https://en.wikipedia.org/wiki/There_are_known_knowns)

![Goal, the risks you know about and the ones you don't](/img/generated/introduction/hidden_risks.svg)

-Different people evaluate risks differently and they'll also _know_ about different risks. What is an [Attendant Risk](/thinking/Glossary.md#attendant-risk) for one person is a [Hidden Risk](/thinking/Glossary.md#attendant-risk) for another.
+Different people evaluate risks differently and they'll also _know_ about different risks. What is an [Attendant Risk](/thinking/Glossary#attendant-risk) for one person is a [Hidden Risk](/thinking/Glossary#hidden-risk) for another.

Which risks we know about depends on our **knowledge** and **experience**, then. And that varies from person to person (or team to team).

## Taking Action and Meeting Reality

-As the dinner party gets closer, we make our preparations and the inadequacies of the [Internal Model](/thinking/Glossary.md#internal-model) become apparent. We learn what we didn't know and the [Hidden Risks](/thinking/Glossary.md#hidden-risk) reveal themselves. Other things we were worried about don't materialise. Things we thought would be minor risks turn out to be greater.
+As the dinner party gets closer, we make our preparations and the inadequacies of the [Internal Model](/thinking/Glossary#internal-model) become apparent. We learn what we didn't know and the [Hidden Risks](/thinking/Glossary#hidden-risk) reveal themselves. Other things we were worried about don't materialise. Things we thought would be minor risks turn out to be greater.

![How Taking Action affects Reality, and also changes your Internal Model](/img/generated/introduction/model_vs_reality.svg)

Our model is forced to [Meet Reality](/tags/Meeting-Reality), and the model changes, forcing us to deal with these risks, as shown in the diagram above.

-In Risk-First, whenever we try to _do something_ about a risk, it is called [Taking Action](/thinking/Glossary.md#taking-action). [Taking Action](/thinking/Glossary.md#taking-action) _changes_ reality, and with it your [Internal Model](/thinking/Glossary.md#internal-model) of the risks you're facing. That's because it's only by interacting with the world that we add knowledge to our [Internal Model](/thinking/Glossary.md#internal-model) about what works and what doesn't. Even something as passive as _checking the shop opening times_ is an action, and it improves on our [Internal Model](/thinking/Glossary.md#internal-model) of the world.
+In Risk-First, whenever we try to _do something_ about a risk, it is called [Taking Action](/thinking/Glossary#taking-action). [Taking Action](/thinking/Glossary#taking-action) _changes_ reality, and with it your [Internal Model](/thinking/Glossary#internal-model) of the risks you're facing. That's because it's only by interacting with the world that we add knowledge to our [Internal Model](/thinking/Glossary#internal-model) about what works and what doesn't.
Even something as passive as _checking the shop opening times_ is an action, and it improves our [Internal Model](/thinking/Glossary#internal-model) of the world.

-If we had a good [Internal Model](/thinking/Glossary.md#internal-model) and [took the right actions](/thinking/Glossary.md#taking-action), we should see positive outcomes. If we failed to manage the risks, or took inappropriate actions, we'll probably see negative outcomes.
+If we had a good [Internal Model](/thinking/Glossary#internal-model) and [took the right actions](/thinking/Glossary#taking-action), we should see positive outcomes. If we failed to manage the risks, or took inappropriate actions, we'll probably see negative outcomes.

## Why The New Terms?

-I know that as a reader it's annoying to have to pick up new terminology. So you'll be pleased to learn that there are just three de novo terms to learn in the whole [Thinking](Start.md) part of Risk First:
+I know that as a reader it's annoying to have to pick up new terminology. So you'll be pleased to learn that there are just three de novo terms to learn in the whole [Thinking](Start) part of Risk-First:

- - [Internal Model](Glossary.md#internal-model): actually a term from financial risk management, which we'll be employing widely. I'll expand on this in more detail in [Meeting Reality](Meeting-Reality.md).
- - [Meeting Reality](Glossary.md#meet-reality), which is the process of improving your [Internal Model](Glossary.md#internal-model). This is a totally new term.
- - [Taking Action](Glossary.md#take-action) which we'll use as a general term to cover a whole range of specific techniques for dealing with risks. We'll expand on this in [Derisking](De-Risking.md).
+ - [Internal Model](Glossary#internal-model): actually a term from financial risk management, which we'll be employing widely. I'll expand on this in more detail in [Meeting Reality](Meeting-Reality).
+ - [Meeting Reality](Glossary#meet-reality), which is the process of improving your [Internal Model](Glossary#internal-model). This is a totally new term.
+ - [Taking Action](Glossary#taking-action) which we'll use as a general term to cover a whole range of specific techniques for dealing with risks. We'll expand on this in [De-Risking](De-Risking).

## On To Software?

@@ -99,4 +99,4 @@ Risk-First tries as far as possible to use pre-existing terminology from the wor

-Clearly, what we really want to get to is talking about software development, but first I want to dig a bit deeper into the visual language used here to show risks, using [Risk-First Diagrams](Risk-First-Diagrams.md).
+Clearly, what we really want to get to is talking about software development, but first I want to dig a bit deeper into the visual language used here to show risks, using [Risk-First Diagrams](Risk-First-Diagrams).
diff --git a/docs/thinking/Anti-Goals.md b/docs/thinking/Anti-Goals.md
index a593d18b4..d039e1eac 100644
--- a/docs/thinking/Anti-Goals.md
+++ b/docs/thinking/Anti-Goals.md
@@ -11,6 +11,7 @@ tags:
- Goal
- Risk Landscape
- Expected Return
+ - Anti-Goal
definitions:
- name: Anti-Goal
description: A particular destination on the Risk Landscape you don't want to arrive at.
@@ -51,7 +52,7 @@ Terry _busted through_ the Anti-Goal, and eventually released VVVVVV to critical

## Visualising Anti-Goals

-Goals and Anti-Goals are both kinds of [Risks](/thinking/Glossary.md#risk).
While Goals are "Upside" risks or opportunities (the outcome is uncertain, but likely to be in your favour), Anti-Goals are "Downside" risks (again, uncertain outcome, likely to go against you): you'll want to try to navigate between these to arrive at the Goal, rather than the Anti-Goal. +Goals and Anti-Goals are both kinds of [Risks](/thinking/Glossary#risk). While Goals are "Upside" risks or opportunities (the outcome is uncertain, but likely to be in your favour), Anti-Goals are "Downside" risks (again, uncertain outcome, likely to go against you): you'll want to try to navigate between these to arrive at the Goal, rather than the Anti-Goal. Here at [Risk-First](https://riskfirst.org), there's lots of talk about navigating the [Risk Landscape](/risks/Risk-Landscape.md), which you can imagine being like the terrain of a golf course (as in the diagram above). diff --git a/docs/thinking/Cadence.md b/docs/thinking/Cadence.md index 2534ef721..883ee8977 100644 --- a/docs/thinking/Cadence.md +++ b/docs/thinking/Cadence.md @@ -48,13 +48,13 @@ In a software development scenario, you should also test your model against real This list is arranged so that at the top, we have the most visceral, most _real_ feedback loop, but at the same time, the slowest. -At the bottom, a good IDE can inform you about errors in your [Internal Model](/thinking/Glossary.md#internal-model) in real time, by way of highlighting compilation errors . So, this is the fastest loop, but it's the most _limited_ reality. +At the bottom, a good IDE can inform you about errors in your [Internal Model](/thinking/Glossary#internal-model) in real time, by way of highlighting compilation errors . So, this is the fastest loop, but it's the most _limited_ reality. Imagine for a second that you had a special time-travelling machine. With it, you could make a change to your software, and get back a report from the future listing out all the issues people had faced using it over its lifetime, instantly. That'd be neat, eh? If you did have this, would there be any point at all in a compiler? Probably not, right? -The whole _reason_ we have tools like compilers is because they give us a short-cut way to get some limited experience of reality _faster_ than would otherwise be possible. Because cadence is really important: the faster we test our ideas, the more quickly we'll find out if they're correct or not and the faster we can back out of the bets that aren't [paying off](Consider-Payoff.md) +The whole _reason_ we have tools like compilers is because they give us a short-cut way to get some limited experience of reality _faster_ than would otherwise be possible. Because cadence is really important: the faster we test our ideas, the more quickly we'll find out if they're correct or not and the faster we can back out of the bets that aren't [paying off](Consider-Payoff) ## Development Cycle Time @@ -80,4 +80,4 @@ Yes, CD will give you faster feedback loops, but even getting things into produc The right answer is to use multiple feedback loops, as shown in the diagram above. -In the next section we'll be [Considering Payoff](Consider-Payoff.md), and figuring out how we can use terminology from _betting_ to make us better software developers. +In the next section we'll be [Considering Payoff](Consider-Payoff), and figuring out how we can use terminology from _betting_ to make us better software developers. 
diff --git a/docs/thinking/Consider-Payoff.md b/docs/thinking/Consider-Payoff.md
index b01df066b..3bc2269fb 100644
--- a/docs/thinking/Consider-Payoff.md
+++ b/docs/thinking/Consider-Payoff.md
@@ -90,13 +90,13 @@ One final note on sizing bets: [The Kelly Criterion](https://en.wikipedia.org/w

In the film [The Martian](https://www.imdb.com/title/tt3659388), NASA scientists are trying to decide the best way to recover a stranded Matt Damon from the surface of Mars, where he'd been lost and presumed dead. In order to get a $500 million supply probe out to Mars in a hurry, Jeff Daniels' character, Teddy, the director of NASA, decides to skip the testing phase and, predictably, the probe explodes during launch. The whole sequence is there to demonstrate the _incompetence_ of Teddy as a risk manager. And while he's putatively on their team, Teddy is the film's antagonist: the other characters are constantly fighting against his poor risk management skills to get the job done.

-While this fictional, it is a great example of going "All In" and risking everything on a short-term technical bet. Yes, the [Payoff](Glossary.md#payoff) would have been great if this had worked, but the stakes were very high and the probability of success was really low. Don't be Teddy.
+While this is fictional, it is a great example of going "All In" and risking everything on a short-term technical bet. Yes, the [Payoff](Glossary#payoff) would have been great if this had worked, but the stakes were very high and the probability of success was really low. Don't be Teddy.

:::

## Back To Software

-As with NASA, the bets we are making in software development aren't directly about money. We want to make bets that reduce the risks to our project's [Health](Health.md), whether that's reducing security risks, increasing sales opportunities, making our software more robust or making it easier to adopt and use. So, the bets we make need to be framed in those terms.
+As with NASA, the bets we are making in software development aren't directly about money. We want to make bets that reduce the risks to our project's [Health](Health), whether that's reducing security risks, increasing sales opportunities, making our software more robust or making it easier to adopt and use. So, the bets we make need to be framed in those terms.

Sometimes, there will be multiple _actions_ you could take on a project and you have to choose the best one:

@@ -118,13 +118,13 @@ The idea makes sense: if you take on extra work that you don't need, _of course

But, there is always the opposite opinion: [You _Are_ Gonna Need It](http://wiki.c2.com/?YouAreGonnaNeedIt). As a simple example, we often add log statements in our code as we write it (so we can trace what happened when things go wrong), though following YAGNI strictly says we shouldn't.

-So which is right? We should conclude that we do the work _if there is a worthwhile [Payoff](/thinking/Glossary.md#payoff)_.
+So which is right? We should conclude that we do the work _if there is a worthwhile [Payoff](/thinking/Glossary#payoff)_.

- Logging statements are _good_, because otherwise, you're increasing the risk that in production, no one will be able to understand [how the software went wrong](/risks/Dependency-Risk#invisibility-risk).
- However, adding them takes time, which might [risk us not hitting our schedule](/tags/Schedule-Risk).
- Also, we have to manage larger log files on our production systems. _Too much logging_ is just noise, and makes it harder to figure out what went wrong.
This increases the risk that our software is [less transparent in how it works](/tags/Complexity-Risk).

-So, it's a trade-off: continue adding logging statements so long as you feel that overall, the activity [pays off](/thinking/Glossary.md#payoff) reducing overall risk.
+So, it's a trade-off: continue adding logging statements so long as you feel that overall, the activity [pays off](/thinking/Glossary#payoff) reducing overall risk.

### Example 2: Over-Engineering

@@ -207,13 +207,13 @@ It's important to reflect on the fact that there are other factors at play here:

-With that caveat aside, it should be clear that the way to escape the over-engineering trap is to think hard about [Expected Value](Glossary.md#expected-value). The above table tries to capture the difference in [Expected Value](Glossary.md#expected-value) between "Doing it Now" versus having the _option to_ "Do it Later".
+With that caveat aside, it should be clear that the way to escape the over-engineering trap is to think hard about [Expected Value](Glossary#expected-value). The above table tries to capture the difference in [Expected Value](Glossary#expected-value) between "Doing it Now" versus having the _option to_ "Do it Later".

There is no hard and fast right answer here. Sometimes, it is correct to strive for 100% coverage or polish the code factorisation. But hopefully thinking about the choice in terms of these two alternatives is helpful.

### Example 3: "Do The Simplest Thing That Could Possibly Work"

-The previous example applied [Expected Value](Glossary.md#expected-value) to avoid over-engineering. Let's now consider an example of where [Expected Value](Glossary.md#expected-value) suggests we do _more_ work.
+The previous example applied [Expected Value](Glossary#expected-value) to avoid over-engineering. Let's now consider an example of where [Expected Value](Glossary#expected-value) suggests we do _more_ work.

Another mantra from Kent Beck (originator of the [Extreme Programming](https://en.wikipedia.org/wiki/Extreme_programming) methodology) is "Do The Simplest Thing That Could Possibly Work", which is closely related to YAGNI and is an excellent razor for avoiding over-engineering.

@@ -221,27 +221,27 @@ At the same time, by adding "Could Possibly", Beck is encouraging us to go beyon

Our risk-centric view of this strategy would be:

-- Every action you take on a project has its own [Attendant Risks](/thinking/Glossary.md#attendant-risk).
-- The bigger or more complex the action, the more [Attendant Risk](/thinking/Glossary.md#attendant-risk) it'll have.
- The reason you're taking action _at all_ is that you're trying to reduce risk elsewhere on the project.
-- Therefore, the best [Expected Value](Glossary.md#expected-value) is likely to be the action with the least [Attendant Risk](/thinking/Glossary.md#attendant-risk).
+- Every action you take on a project has its own [Attendant Risks](/thinking/Glossary#attendant-risk).
+- The bigger or more complex the action, the more [Attendant Risk](/thinking/Glossary#attendant-risk) it'll have.
- The reason you're taking action _at all_ is that you're trying to reduce risk elsewhere on the project.
+- Therefore, the best [Expected Value](Glossary#expected-value) is likely to be the action with the least [Attendant Risk](/thinking/Glossary#attendant-risk).
- So, usually this is going to be the simplest thing.
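To make the expected-value reasoning concrete, here is a minimal sketch in Python. The actions, probabilities and values are entirely hypothetical - the point is only that an action is worth taking when the probability-weighted value of its outcomes, minus its stake, beats the alternatives:

```python
# A hedged sketch: compare candidate actions by expected value.
# All numbers are invented for illustration.

def expected_value(outcomes):
    """Sum of probability * value over the possible outcomes of an action."""
    return sum(p * v for p, v in outcomes)

# Each action: (stake in days of effort, outcomes as (probability, value in days saved)).
actions = {
    "do it now":            (5, [(0.2, 30), (0.8, 0)]),  # big win if the feature is ever needed
    "keep the option open": (1, [(0.2, 25), (0.8, 0)]),  # small stake now, slightly smaller win later
}

for name, (stake, outcomes) in actions.items():
    print(f"{name}: EV = {expected_value(outcomes) - stake:+.1f} days")

# "keep the option open" scores +4.0 days against +1.0 for "do it now",
# which is how "Do it Later" can beat "Do it Now" even with a smaller payoff.
```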
-So, "Do The Simplest Thing That Could Possibly Work" is really a helpful guideline for Navigating the [Risk Landscape](/risks/Risk-Landscape.md), but this analysis shows clearly where it's left wanting: +So, "Do The Simplest Thing That Could Possibly Work" is really a helpful guideline for Navigating the [Risk Landscape](/risks/Risk-Landscape), but this analysis shows clearly where it's left wanting: - - _Don't_ do the simplest thing if there are other things with a better [Expected Value](/thinking/Glossary.md#expected-value) available. + - _Don't_ do the simplest thing if there are other things with a better [Expected Value](/thinking/Glossary#expected-value) available. An example of where this might be the case, think about how you might write a big, complex function (for example, processing interest accrual on a loan). The _simplest thing_ might be to just write a single function and a few unit tests for it. However, a slightly _less simple thing_ that would work might be to decompose the function into multiple steps, each with its own unit tests. Perhaps you might have a step which calculates the number of days where interest is due (working days, avoiding bank holidays), another step that considers repayments, a step that works out different interest rates and so on. ![Different payoff for doing the simplest thing vs something slightly less simple with more effort](/img/generated/introduction/risk_landscape_4_simplest.svg) -Functional decomposition and extra testing might not be the _simplest thing_, but it might reduce risks in other ways - making the code easier to understand, easier to test and easier to modify in the future. So deciding up-front to accept this extra complexity and effort in exchange for the other benefits might seem like a better [Payoff](/thinking/Glossary.md#payoff) than the simplest thing. +Functional decomposition and extra testing might not be the _simplest thing_, but it might reduce risks in other ways - making the code easier to understand, easier to test and easier to modify in the future. So deciding up-front to accept this extra complexity and effort in exchange for the other benefits might seem like a better [Payoff](/thinking/Glossary#payoff) than the simplest thing. ### Example 4: Continue Testing or Release? You're on a project and you're faced with the decision - release now or do more User Acceptance Testing (UAT)? -Obviously, in the ideal world, we want to get to the place on the [Risk Landscape](/thinking/Glossary.md#risk-landscape) where we have a tested, bug-free system in production. But we're not there yet, and we have funding pressure to get the software into the hands of some paying customers. But what if we disappoint the customers and create bad feeling? The table below shows an example: +Obviously, in the ideal world, we want to get to the place on the [Risk Landscape](/thinking/Glossary#risk-landscape) where we have a tested, bug-free system in production. But we're not there yet, and we have funding pressure to get the software into the hands of some paying customers. But what if we disappoint the customers and create bad feeling? 
The table below shows an example: |Risk Managed |Action |Attendant Risk |Payoff | |----------------------|-----------------------------|-----------------------------------------|-------------------| @@ -250,15 +250,15 @@ Obviously, in the ideal world, we want to get to the place on the [Risk Landscap This is (a simplification of) the dilemma of lots of software projects - _test further_, to reduce the risk of users discovering bugs ([Implementation Risk](/tags/Implementation-Risk)) which would cause us reputational damage, or _get the release done_ and reduce our [Funding Risk](/tags/Funding-Risk) by getting paying clients sooner. -Lots of software projects end up in a phase of "release paralysis" - wanting things to be perfect before you show them to customers. But sometimes this places too much emphasis on preserving reputation over getting paying customers. Also, getting real customers is [meeting reality](Glossary.md#meet-reality) and will probably surface new [hidden risks](Glossary.md#hidden-risk) that are missing from the analysis. +Lots of software projects end up in a phase of "release paralysis" - wanting things to be perfect before you show them to customers. But sometimes this places too much emphasis on preserving reputation over getting paying customers. Also, getting real customers is [meeting reality](Glossary#meet-reality) and will probably surface new [hidden risks](Glossary#hidden-risk) that are missing from the analysis. ## Manipulating The Payoff An important take-away here is that you don't have to accept the dilemma as stated. You can change the actions to improve the payoff, and [meet reality more gradually](Meeting-Reality#the-cost-of-meeting-reality): - - Start a closed [beta test](/practices/Glossary-Of-Practices.md#beta-test) with a group of friendly customers - - Use [feature toggles](/practices/Glossary-Of-Practices.md#feature-toggle) to release only some components of the software - - [Dog-food](/practices/Glossary-Of-Practices.md#dog-fooding) the software internally so you can find out whether it's useful in its current state. + - Start a closed [beta test](/practices/Glossary-Of-Practices#beta-test) with a group of friendly customers + - Use [feature toggles](/practices/Glossary-Of-Practices#feature-toggle) to release only some components of the software + - [Dog-food](/practices/Glossary-Of-Practices#dog-fooding) the software internally so you can find out whether it's useful in its current state. A second approach is to improve the payoff of the losing outcomes. Here are some examples: @@ -266,7 +266,7 @@ A second approach is to improve the payoff of the losing outcomes. Here are som - If I take a job on a project using React, then even if the job doesn't work out, I'll have learnt React. - TODO - another example. -**See:** The [Purpose of the Development Team](../bets/Purpose-Development-Team.md) article contains further examples of software bets. +**See:** The [Purpose of the Development Team](../bets/Purpose-Development-Team) article contains further examples of software bets. ## Summing Up @@ -284,4 +284,4 @@ Many Agile frameworks such as [Scrum](../bets/Purpose-Development-Team#case-2-sc Betting generally focuses on the odds of winning. However, there are entire classes of problem (such as short positions) where you need to focus on minimising the risk of losing. -Let's look at that next in [Anti-Goals](Anti-Goals.md). +Let's look at that next in [Anti-Goals](Anti-Goals). 
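To put some (entirely hypothetical) numbers on the bet-sizing note at the top of these changes, here is a minimal sketch of the Kelly Criterion. The scenario figures are invented; the formula itself is from the Wikipedia article linked above:

```python
# A minimal sketch of the Kelly Criterion, using hypothetical numbers.
# f* = p - (1 - p) / b, where p is the probability of winning and
# b is the net odds (profit per unit staked if the bet comes off).

def kelly_fraction(p: float, b: float) -> float:
    """Fraction of your capacity to stake; negative means don't bet at all."""
    return p - (1 - p) / b

# A risky technical bet: 30% chance of a 4x return on the time invested.
print(kelly_fraction(p=0.3, b=4.0))   # 0.125 -> stake about 12.5% of capacity

# Teddy's probe: low probability of success, everything staked at once.
print(kelly_fraction(p=0.1, b=4.0))   # -0.125 -> negative: don't go "All In"
```

The point of the formula is not precision - the inputs are guesses - but that it formalises the intuition above: never bet everything on a single long shot, and size your bets so you can survive being wrong.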
diff --git a/docs/thinking/Crisis-Mode.md b/docs/thinking/Crisis-Mode.md index e744c0a72..ae0fa5a55 100644 --- a/docs/thinking/Crisis-Mode.md +++ b/docs/thinking/Crisis-Mode.md @@ -20,7 +20,7 @@ As software developers, we face crises of different sorts. Perhaps there's a pro ## Testing For Invariances -In this section, we are going to look at how a risk management approach to software development is invariant to a number of different things: the size of the project, the level of pressure and the technology being used. For example, I am an advocate of [Extreme Programming](../methods/Extreme-Programming.md). However, as you scale the size of the project, it breaks down. This is well understood and explains why methodologies like [Scaled Agile](../methods/SAFe.md) arose which attempt to fix this. +In this section, we are going to look at how a risk management approach to software development is invariant to a number of different things: the size of the project, the level of pressure and the technology being used. For example, I am an advocate of [Extreme Programming](../methods/Extreme-Programming). However, as you scale the size of the project, it breaks down. This is well understood and explains why methodologies like [Scaled Agile](../methods/SAFe) arose which attempt to fix this. Why would we want to do this? Einstein's ideas around relativistic gravity were proven because people discovered that the Newtonian model of gravity [failed to predict the orbit of Mercury](https://en.wikipedia.org/wiki/Tests_of_general_relativity#Classical_tests). So it's important to explore and understand the limits of our ideas and practices. @@ -32,7 +32,7 @@ First though, let's talk specifically about "Crisis Management". A lot of liter This is not how Risk-First sees it. -First, we have the notion that Risks are discrete events. Some risks _are_ (like betting on a horse race), but most _aren't._ In the [Dinner Party](A-Simple-Scenario.md), bad preparation is going to mean a _worse_ time for everyone, but how good a time you're having is a spectrum, it doesn't divide neatly into just "good" or "bad" or "crisis" and "non-crisis": there are just _different levels of pressure_, which we'll address below. +First, we have the notion that Risks are discrete events. Some risks _are_ (like betting on a horse race), but most _aren't._ In the [Dinner Party](A-Simple-Scenario), bad preparation is going to mean a _worse_ time for everyone, but how good a time you're having is a spectrum, it doesn't divide neatly into just "good" or "bad" or "crisis" and "non-crisis": there are just _different levels of pressure_, which we'll address below. Second, the opposite of "Risk Management" (or trying to minimise the "Down-side") is either "Upside Risk Management" or "Opportunity Management", (trying to maximise the good things happening), or it's trying to make as many bad things happen as possible. @@ -47,11 +47,11 @@ You would expect that ideally, any methods for managing software delivery should - If there is a production outage during the working week, we don't wait for the next Scrum Sprint to plan and fix it. - Although a 40-hour, Monday-to-Friday work-week _is a great idea_, this goes out of the window if the databases all crash on a Saturday morning. -In these cases, we (hopefully calmly) _evaluate the risks and [Take Action](Glossary.md#taking-action)_. +In these cases, we (hopefully calmly) _evaluate the risks and [Take Action](Glossary#taking-action)_. 
This is **Pressure Invariance**: ideally, your methodology shouldn't need to change given the amount of pressure or importance on the table. -**See:** In [Debugging Bets](../bets/Debugging-Bets.md) I tell the story of a high-pressure situation where I applied a risk-analysis approach in order to try and bring a new problem to ground ahead of a big presentation. +**See:** In [Debugging Bets](../bets/Debugging-Bets) I tell the story of a high-pressure situation where I applied a risk-analysis approach in order to try and bring a new problem to ground ahead of a big presentation. ## Invariance #3: Scale Invariance @@ -69,25 +69,25 @@ In practice, however, we usually find methodologies are tuned for certain scales If the methodology _fails at a particular scale_ this tells you something about the risks that the methodology isn't addressing. One of the things Risk-First explores is trying to place methodologies and practices within a framework to say _when_ they are applicable. -In the previous section on [Health](Health.md) we looked at how risk management was used by the UK government at the scale of _the whole country_. +In the previous section on [Health](Health) we looked at how risk management was used by the UK government at the scale of _the whole country_. ## Invariance #4: Technology Invariance In 2020 the world was plunged into a pandemic. Everything changed very quickly, including the nature of software development. Lots of the practices we'd grown used to (such as XP's small, co-located teams) had to be jettisoned and replaced with Zoom calls and instant messaging apps. This was a very sudden, rapid change in the technology we use to do our jobs, but in a more general sense we need to understand that Agile, XP and Scrum were invented at the turn of the 21st century. The [Lean Manufacturing](https://en.wikipedia.org/wiki/Lean_manufacturing) movement originated post-WW2. -The general ideas they espouse have stood the test of time but where they recommend particular technologies things are looking more shaky. [Pair Programming](/practices/Glossary-Of-Practices.md#pair-programming) where two developers share the same keyboard doesn't work so well anymore. However, it can be made to work over video conferencing and when we all move to augmented reality headsets perhaps there will be another configuration of this. We can now do Pair Programming with our artificial intelligence "co-pilots" - but is that managing the same risks? +The general ideas they espouse have stood the test of time, but where they recommend particular technologies, things are looking more shaky. [Pair Programming](/practices/Glossary-Of-Practices#pair-programming), where two developers share the same keyboard, doesn't work so well anymore. However, it can be made to work over video conferencing, and when we all move to augmented reality headsets perhaps there will be another configuration of this. We can now do Pair Programming with our artificial intelligence "co-pilots" - but is that managing the same risks? -The point I am making here is that while there are [technology tools to support risk management](Track-Risk.md) the idea itself is not wedded to a particular technology, culture or way of working. And, it is as old as the hills. +The point I am making here is that while there are [technology tools to support risk management](Track-Risk), the idea itself is not wedded to a particular technology, culture or way of working. And, it is as old as the hills. > "We've survived 200,000 years as humans.
Don't you think there's a reason why we survived? We're good at risk management." - [Nassim Nicholas Taleb, _author of "The Black Swan"_, in the New Statesman](https://www.newstatesman.com/encounter/2018/03/i-hope-goldman-sachs-bankruptcy-nassim-nicholas-taleb-skin-game) ## Summing Up -Humans have a built-in fight-or-flight mechanism which makes it hard for us to act rationally in times of stress. And as we'll explore in [Agency Risk](../risks/Agency-Risk.md), firms are able to abuse their staff's loyalty or enthusiasm in order to get them to work much longer than is healthy for either them or their projects. +Humans have a built-in fight-or-flight mechanism which makes it hard for us to act rationally in times of stress. And as we'll explore in [Agency Risk](../risks/Agency-Risk), firms are able to abuse their staff's loyalty or enthusiasm in order to get them to work much longer than is healthy for either them or their projects. Risk management, like everything else, can be abused or misunderstood. In this section, we've looked at an important "proof" - the idea that risk management applies irrespective of pressure, scale or technology trends (so far, at least). This is really important as we need to know whether there's a point at which our tools won't apply anymore. -In the next section, we'll start to look at how risk management can fit into working in our organisations, starting with discussing risk in a project team. On to [A Conversation](A-Conversation.md) +In the next section, we'll start to look at how risk management can fit into working in our organisations, starting with discussing risk in a project team. On to [A Conversation](A-Conversation) diff --git a/docs/thinking/De-Risking.md b/docs/thinking/De-Risking.md index b3d07b7d0..bc6990923 100644 --- a/docs/thinking/De-Risking.md +++ b/docs/thinking/De-Risking.md @@ -21,16 +21,16 @@ tweet: yes # Derisking -In this section, we're going to more closely at what, so far, we've called "[Taking Action](Glossary.md#taking-action)" and separate out different ways this can be done. We'll introduce the correct risk management terms and give examples of how to apply these to software development. +In this section, we're going to look more closely at what, so far, we've called "[Taking Action](Glossary#taking-action)" and separate out different ways this can be done. We'll introduce the correct risk management terms and give examples of how to apply these to software development. ## What is Taking Action? -So far in Risk-First, we've talked about [Taking Action](Glossary.md#taking-action) as having two effects: +So far in Risk-First, we've talked about [Taking Action](Glossary#taking-action) as having two effects: - 1. It's the way in which we [Meet Reality](Glossary.md#meet-reality) to learn about the world and uncover [Hidden Risks](Glossary.md#hidden-risk). - 2. It's the way we change our position on the [Risk Landscape](Glossary.md#risk-landscape) via actions with a positive [Payoff](Glossary.md#payoff). + 1. It's the way in which we [Meet Reality](Glossary#meet-reality) to learn about the world and uncover [Hidden Risks](Glossary#hidden-risk). + 2. It's the way we change our position on the [Risk Landscape](Glossary#risk-landscape) via actions with a positive [Payoff](Glossary#payoff). -As we saw in the discussion of [Payoff](Consider-Payoff.md), any time you take action you are accruing [attendant risk](Glossary.md#attendant-risk), and we want to take actions with the most favourable payoff.
So here we are going to look at common ways in which we can lean the payoff in our favour. This is called _derisking_: +As we saw in the discussion of [Payoff](Consider-Payoff), any time you take action you are accruing [attendant risk](Glossary#attendant-risk), and we want to take actions with the most favourable payoff. So here we are going to look at common ways in which we can tilt the payoff in our favour. This is called _derisking_: > "To remove the risk from; to make safe." - [Derisk, _Wiktionary_](https://en.wiktionary.org/wiki/derisk) @@ -49,7 +49,7 @@ The table above lists a set of _generic strategies_ for derisking which we'll lo ## Reduce -**Reducing** or **Mitigating** risk is taking steps towards minimising the **impact** (as we discussed in the [Evaluating Risk](Evaluating-Risk.md) section) of a risk arising. +**Reducing** or **Mitigating** risk is taking steps towards minimising the **impact** (as we discussed in the [Evaluating Risk](Evaluating-Risk) section) of a risk arising. > "To reduce, lessen, or decrease and thereby to make less severe or easier to bear." - [Mitigate, _Wiktionary_](https://en.wiktionary.org/wiki/mitigate) @@ -67,7 +67,7 @@ The table above lists a set of _generic strategies_ for derisking which we'll lo 1. **Take Care With Dependencies**: Choose popular technologies and known reliable components. Whilst hiring people is hard work at the best of times, hiring PL/1 programmers is _really hard_. This tactic is explored in much more depth in [Software Dependency Risk](/tags/Software-Dependency-Risk). -1. **Redundancy**: Avoid single points of failure. For example, Pair Programming is a control espoused by [Extreme Programming](/tags/Extreme-Programming-(XP)) to reduce [Key Person Risk](/tags/Agency-Risk) and [Communication Risk](/tags/Comunication-Risk). See [Dependency Risk](/tags/Dependency-Risk) for more on this. +1. **Redundancy**: Avoid single points of failure. For example, Pair Programming is a control espoused by [Extreme Programming](/tags/Extreme-Programming) to reduce [Key Person Risk](/tags/Agency-Risk) and [Communication Risk](/tags/Communication-Risk). See [Dependency Risk](/tags/Dependency-Risk) for more on this. 1. **Create Options**: Using _feature flags_ allows you to turn off functionality in production, avoiding an all-or-nothing commitment (there is a minimal sketch of this below). Working in branches gives the same optionality while developing. @@ -101,7 +101,7 @@ The table above lists a set of _generic strategies_ for derisking which we'll lo ## Avoid -**Avoiding** risk, means taking a route on the [Risk Landscape](/thinking/Glossary.md#risk-landscape) _around_ the risk. Neither the stakes or the payoff are changed. +**Avoiding** risk means taking a route on the [Risk Landscape](/thinking/Glossary#risk-landscape) _around_ the risk. Neither the stakes nor the payoff is changed. ### General Examples @@ -215,6 +215,6 @@ There is a grey area here, because on the one hand you are [retaining](#retain) Here we've been building towards a vocabulary with which to communicate to our team-mates about which risks are important to us (_reduce_, _exploit_, _share_, _retain_, _control_, _monitor_). This helps us discuss which actions we believe are the right ones and how we should deal with them. -In the next section we will look at the indicators that tell you when to apply these levers. On to [Cadence](Cadence.md). +In the next section we will look at the indicators that tell you when to apply these levers. On to [Cadence](Cadence).
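As an illustration of the "Create Options" tactic above, here is a minimal feature-flag sketch. The flag name, the pricing logic and the use of an environment variable as the configuration source are all hypothetical:

```python
# A minimal sketch of the "Create Options" tactic: a feature flag lets you ship
# risky new code while retaining the option to turn it off in production.
# The flag name, config source and pricing logic here are hypothetical.

import os

def flag_enabled(name: str, default: bool = False) -> bool:
    """Read a feature flag from the environment, e.g. FLAG_NEW_PRICING=true."""
    value = os.environ.get(f"FLAG_{name.upper()}", str(default))
    return value.strip().lower() in ("1", "true", "yes")

def price_order(order_total: float) -> float:
    if flag_enabled("new_pricing"):
        return order_total * 0.9   # the risky new discount logic, revocable at runtime
    return order_total             # the well-trodden existing path

print(price_order(100.0))  # 100.0 by default; 90.0 when FLAG_NEW_PRICING=true
```

The flag itself has a cost - one more thing to monitor and eventually remove - but in exchange it converts an all-or-nothing release into a reversible one, which is exactly the kind of favourable payoff this section is about.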
diff --git a/docs/thinking/Development-Process.md b/docs/thinking/Development-Process.md index 7b008a1f4..566c742e3 100644 --- a/docs/thinking/Development-Process.md +++ b/docs/thinking/Development-Process.md @@ -18,9 +18,9 @@ tweet: yes # Analysing The Development Process -In [A Simple Scenario](A-Simple-Scenario.md) we introduced some terms for talking about risk (such as [Attendant Risk](/thinking/Glossary.md#attendant-risk), [Hidden Risk](/thinking/Glossary.md#attendant-risk) and the [Internal Model](/thinking/Glossary.md#internal-model)). +In [A Simple Scenario](A-Simple-Scenario) we introduced some terms for talking about risk (such as [Attendant Risk](/thinking/Glossary#attendant-risk), [Hidden Risk](/thinking/Glossary#hidden-risk) and the [Internal Model](/thinking/Glossary#internal-model)). -We've also introduced a notation in the form of [Risk-First Diagrams](./Risk-First-Diagrams.md) which allows us to represent the ways in which we can change the risks by [Taking Action](./Glossary.md#taking-action). +We've also introduced a notation in the form of [Risk-First Diagrams](./Risk-First-Diagrams) which allows us to represent the ways in which we can change the risks by [Taking Action](./Glossary#taking-action). Now, we are going to start applying our new terminology to software. In the example below, we'll look at a "toy" process and use it for developing a new feature on a software project and see how our risk model informs it. @@ -70,11 +70,11 @@ We can all see this might end in disaster, but why? Two reasons: 1. You're [Meeting Reality](/tags/Meeting-Reality) all-in-one-go: all of these risks materialize at the same time, and you have to deal with them all at once. -2. Because of this, at the point you put code into the hands of your users, your [Internal Model](/thinking/Glossary.md#internal-model) is at its least-developed. All the [Hidden Risks](/thinking/Glossary.md#hidden-risk) now need to be dealt with at the same time, in production. +2. Because of this, at the point you put code into the hands of your users, your [Internal Model](/thinking/Glossary#internal-model) is at its least-developed. All the [Hidden Risks](/thinking/Glossary#hidden-risk) now need to be dealt with at the same time, in production. ## Applying the Toy Process -Let's look at how our toy process should act to prevent these risks materializing by considering an unhappy path. One where, at the outset, we have lots of [Hidden Risks](/thinking/Glossary.md#hidden-risk). Let's say a particularly vocal user rings up someone in the office and asks for new **Feature X** to be added to the software. It's logged as a new feature request, but: +Let's look at how our toy process should act to prevent these risks materializing by considering an unhappy path. One where, at the outset, we have lots of [Hidden Risks](/thinking/Glossary#hidden-risk). Let's say a particularly vocal user rings up someone in the office and asks for new **Feature X** to be added to the software. It's logged as a new feature request, but: - Unfortunately, this feature once programmed will break an existing **Feature Y**. - Implementing the feature will use an API from a library, which contains bugs that have to be coded around. @@ -86,20 +86,20 @@ Let's look at how our toy process should act to prevent these risks materializin The diagram above shows how this plays out. -This is a slightly contrived example, as you'll see.
But let's follow our feature through the process and see how it meets reality slowly, and the [Hidden Risks](/thinking/Glossary.md#hidden-risk) are discovered: +This is a slightly contrived example, as you'll see. But let's follow our feature through the process and see how it meets reality slowly, and the [Hidden Risks](/thinking/Glossary#hidden-risk) are discovered: ### Specification -The first stage of the journey for the feature is that it meets the Business Analyst (BA). The _purpose_ of the BA is to examine new goals for the project and try to integrate them with _reality as they understand it_. A good BA might take a feature request and vet it against his [Internal Model](/thinking/Glossary.md#internal-model), saying something like: +The first stage of the journey for the feature is that it meets the Business Analyst (BA). The _purpose_ of the BA is to examine new goals for the project and try to integrate them with _reality as they understand it_. A good BA might take a feature request and vet it against their [Internal Model](/thinking/Glossary#internal-model), saying something like: - "This feature doesn't belong on the User screen, it belongs on the New Account screen" - "90% of this functionality is already present in the Document Merge Process" - "We need a control on the form that allows the user to select between Internal and External projects" -In the process of doing this, the BA is turning the simple feature request _idea_ into a more consistent, well-explained _specification_ or _requirement_ which the developer can pick up. But why is this a useful step in our simple methodology? From the perspective of our [Internal Model](/thinking/Glossary.md#internal-model), we can say that the BA is responsible for: +In the process of doing this, the BA is turning the simple feature request _idea_ into a more consistent, well-explained _specification_ or _requirement_ which the developer can pick up. But why is this a useful step in our simple methodology? From the perspective of our [Internal Model](/thinking/Glossary#internal-model), we can say that the BA is responsible for: -- Trying to surface [Hidden Risks](/thinking/Glossary.md#hidden-risk) -- Trying to evaluate [Attendant Risks](/thinking/Glossary.md#attendant-risk) and make them clear to everyone on the project. +- Trying to surface [Hidden Risks](/thinking/Glossary#hidden-risk) +- Trying to evaluate [Attendant Risks](/thinking/Glossary#attendant-risk) and make them clear to everyone on the project. ![BA Specification: exposing Hidden Risks as soon as possible](/img/generated/introduction/development_process_ba.svg) @@ -109,15 +109,15 @@ This process of evolving the feature request into a requirement is the BA's job. ### Code And Unit Test -The next stage for our feature, **Feature X** is that it gets coded and some tests get written. Let's look at how our [Goal](/thinking/Glossary.md#goal) meets a new reality: this time it's the reality of a pre-existing codebase, which has it's own internal logic. +The next stage for our feature, **Feature X**, is that it gets coded and some tests get written. Let's look at how our [Goal](/thinking/Glossary#goal) meets a new reality: this time it's the reality of a pre-existing codebase, which has its own internal logic. -As the developer begins coding the feature in the software, they will start with an [Internal Model](/thinking/Glossary.md#internal-model) of the software, and how the code fits into it.
But, in the process of implementing it, they are likely to learn about the codebase, and their [Internal Model](/thinking/Glossary.md#internal-model) will develop. +As the developer begins coding the feature in the software, they will start with an [Internal Model](/thinking/Glossary#internal-model) of the software, and how the code fits into it. But, in the process of implementing it, they are likely to learn about the codebase, and their [Internal Model](/thinking/Glossary#internal-model) will develop. ![Coding Process: exposing more hidden risks as you code](/img/generated/introduction/development_process_code.svg) -At this point, let's review the visual grammar of the diagram above. Here, we're showing how the balance of risks will change if the developer [Takes Action](/thinking/Glossary.md#taking-action) and writes some code. On the left, we have the current state of the world, on the right is the anticipated state _after_ taking the action. +At this point, let's review the visual grammar of the diagram above. Here, we're showing how the balance of risks will change if the developer [Takes Action](/thinking/Glossary#taking-action) and writes some code. On the left, we have the current state of the world; on the right is the anticipated state _after_ taking the action. -The round-cornered rectangles represent our [Internal Model](/thinking/Glossary.md#internal-model), and these contain our view of [Risk](/thinking/Glossary.md#risk), whether the risks we face right now, or the [Attendant Risks](/thinking/Glossary.md#attendant-risk) expected after taking the action. We're not at the stage where taking this actions is _completing_ the goal. In fact, arguably, we're facing _worse_ risks after taking action than before, since we now have _development difficulties_ to contend with! +The round-cornered rectangles represent our [Internal Model](/thinking/Glossary#internal-model), and these contain our view of [Risk](/thinking/Glossary#risk), whether the risks we face right now, or the [Attendant Risks](/thinking/Glossary#attendant-risk) expected after taking the action. We're not at the stage where taking this action is _completing_ the goal. In fact, arguably, we're facing _worse_ risks after taking action than before, since we now have _development difficulties_ to contend with! But at least, taking the action of "coding and unit testing" is expected to mitigate the risk of "Duplicating Functionality". @@ -131,28 +131,28 @@ So, within this example process, this stage is about meeting a new reality: the ![Integration testing exposes Hidden Risks before you get to production](/img/generated/introduction/development_process_integration.svg) -As shown in the diagram above, at this stage we might discover the [Hidden Risk](/thinking/Glossary.md#hidden-risk) that we'd break **Feature Y** +As shown in the diagram above, at this stage we might discover the [Hidden Risk](/thinking/Glossary#hidden-risk) that we'd break **Feature Y**. ### User Acceptance Test -Next, User Acceptance Testing (UAT) is where our new feature meets another reality: _actual users_. I think you can see how the process works by now. We're just flushing out yet more [Hidden Risks](/thinking/Glossary.md#hidden-risk). +Next, User Acceptance Testing (UAT) is where our new feature meets another reality: _actual users_. I think you can see how the process works by now. We're just flushing out yet more [Hidden Risks](/thinking/Glossary#hidden-risk).
![UAT - putting tame users in front of your software is better than real ones, where the risk is higher ](/img/generated/introduction/development_process_uat.svg) ## Observations Here are a few quick observations about managing risk which you are revealed both by this toy software process and also our previous example of [The Dinner Party](A-Simple-Scenario.md): +Here are a few quick observations about managing risk which are revealed both by this toy software process and also by our previous example of [The Dinner Party](A-Simple-Scenario): - [Taking Action](/thinking/Glossary.md#taking-action) is the _only_ way to create change in the world. - It's also the only way we can _learn_ about the world, adding to our [Internal Model](/thinking/Glossary.md#internal-model). - In this case, we discover a [Hidden Risk](/thinking/Glossary.md#hidden-risk): the user's difficulty in finding the feature. + - [Taking Action](/thinking/Glossary#taking-action) is the _only_ way to create change in the world. + - It's also the only way we can _learn_ about the world, adding to our [Internal Model](/thinking/Glossary#internal-model). + - In this case, we discover a [Hidden Risk](/thinking/Glossary#hidden-risk): the user's difficulty in finding the feature. - In return, we can _expect_ the process of performing the UAT to delay our release (this is an attendant schedule risk). ## Major Themes So, what does this kind of Risk-First analysis tell us about _development processes in general_? Below are four conclusions you can take away from the chapter, but which are all major themes of Risk-First that we'll be developing later: **First**, the people who set up the development process _didn't know_ about these _exact_ risks, but they knew the _shape that the risks take_. The process builds "nets" for the different kinds of [Hidden Risks](/thinking/Glossary.md#hidden-risk) without knowing exactly what they are. In order to build these nets, we have to be able to categorise the types of risk we face. This is something we'll look at in the [Risks](/risks/Start.md) part of Risk-First. +**First**, the people who set up the development process _didn't know_ about these _exact_ risks, but they knew the _shape that the risks take_. The process builds "nets" for the different kinds of [Hidden Risks](/thinking/Glossary#hidden-risk) without knowing exactly what they are. In order to build these nets, we have to be able to categorise the types of risk we face. This is something we'll look at in the [Risks](/risks/Start) part of Risk-First. **Second**, are these really risks, or are they _problems we just didn't know about_? I am using the terms interchangeably, to a certain extent. Even when you know you have a problem, it's still a risk to your deadline until it's solved. So, when does a risk become a problem? Is a problem still just a schedule-risk, or cost-risk? We'll come back to this question soon. @@ -160,4 +160,4 @@ So, what does this kind of Risk-First analysis tell us about _development proces **Fourth**, hopefully you might be able to see from the above that really _all this work is risk management_ and _all work is testing ideas against reality_. In the next section, we're going to look at the concept of [Meeting Reality](Meeting-Reality.md) in a bit more depth. \ No newline at end of file +In the next section, we're going to look at the concept of [Meeting Reality](Meeting-Reality) in a bit more depth.
\ No newline at end of file diff --git a/docs/thinking/Enterprise-Risk.md b/docs/thinking/Enterprise-Risk.md index c0c95ce78..850744c31 100644 --- a/docs/thinking/Enterprise-Risk.md +++ b/docs/thinking/Enterprise-Risk.md @@ -36,7 +36,7 @@ So that being said, here we're going to do a quick tour of the eight components > "The internal environment encompasses the tone of an organization and establishes the basis of how risk is seen and addressed by the persons of an entity, including the risk management philosophy and risk appetite, integrity and ethical values, and the environment in which they operate." - [_Wikipedia_](https://en.wikipedia.org/wiki/Committee_of_Sponsoring_Organizations_of_the_Treadway_Commission#Eight_frame_components) -The first component of the COSO model is the _Internal Environment_ and asks you to consider the approach of the organisation to risk. It's perhaps surprising that _ethical values_ might form a part of this, but clearly, if your organisation is willing to cut some corners and overlook some illegal behaviour this is -in a way- an acceptance of [Legal and Reputational Risks](../risks/Operational-Risk.md). +The first component of the COSO model is the _Internal Environment_, which asks you to consider the approach of the organisation to risk. It's perhaps surprising that _ethical values_ might form a part of this, but clearly, if your organisation is willing to cut some corners and overlook some illegal behaviour, this is, in a way, an acceptance of [Legal and Reputational Risks](../risks/Operational-Risk). A great example of risk appetite is [Meta (née Facebook)](https://www.meta.com) who, from 2004 until 2014, had the motto "Move fast and break things" - a clear statement of a high-risk attitude consistent with a desire to evolve their product as fast as possible. But in 2014, the firm changed tack completely to "Move fast with stable infrastructure" - _signalling an entirely different risk appetite._ @@ -46,9 +46,9 @@ A great example of risk appetite is [Meta (née Facebook)](https://www.meta.com) > "The objectives must exist before management can identify potential events that affect its achievement." - [_Wikipedia_](https://en.wikipedia.org/wiki/Committee_of_Sponsoring_Organizations_of_the_Treadway_Commission#Eight_frame_components) -There are not many organisations that simply allow their staff to turn up and do what they like and in larger firms objectives are usually "cascaded down" from the top of the firm. Conversely, in the small, our [dinner party example](/thinking/A-Simple-Scenario.md) requires that there is a [goal](/thinking/Glossary.md#goal) before we can consider the risks to that goal! +There are not many organisations that simply allow their staff to turn up and do what they like, and in larger firms objectives are usually "cascaded down" from the top of the firm. Conversely, in the small, our [dinner party example](/thinking/A-Simple-Scenario) requires that there is a [goal](/thinking/Glossary#goal) before we can consider the risks to that goal! -In the [Health](/thinking/Health.md) chapter we looked at how _surviving and thriving_ become an objective of the organisation too. (TBD more here EXAMPLE) +In the [Health](/thinking/Health) chapter we looked at how _surviving and thriving_ become an objective of the organisation too. (TBD more here EXAMPLE) **Question:** What are the objectives of your project? Are there ways in which your team can "game" the objectives and introduce new risks? Are the objectives communicated to everyone in the team?
@@ -56,7 +56,7 @@ In the [Health](/thinking/Health.md) chapter we looked at how _surviving and thr > "Internal and external events that affect the achievement of the objectives of an entity must be identified, distinguishing between risks and opportunities. The opportunities are re-channeled into management strategy or goal-setting processes." - [_Wikipedia_](https://en.wikipedia.org/wiki/Committee_of_Sponsoring_Organizations_of_the_Treadway_Commission#Eight_frame_components) -As we covered in the section on [Health](/thinking/Health.md), it is important not just to _react_ to events that occur to you but to look for trouble and try to be proactive / preventive about risks to health. For example, you don't need to wait until your application's hardware goes down or wait until users complain that their transactions aren't getting processed. These are risks you can think about in advance and identify. +As we covered in the section on [Health](/thinking/Health), it is important not just to _react_ to events that occur to you but to look for trouble and try to be proactive / preventive about risks to health. For example, you don't need to wait until your application's hardware goes down or wait until users complain that their transactions aren't getting processed. These are risks you can think about in advance and identify. **Question**: What single points of failure exist on your application, whether people, processes, dependencies or hardware? Have you identified the risks surrounding them? @@ -64,13 +64,13 @@ As we covered in the section on [Health](/thinking/Health.md), it is important n > "The risks are analyzed, considering the probability and impact, as a basis for determining how they should be managed. The risks are inherently and residually assessed." - [_Wikipedia_](https://en.wikipedia.org/wiki/Committee_of_Sponsoring_Organizations_of_the_Treadway_Commission#Eight_frame_components) -Risk assessment is a topic we covered in the [Tracking Risks](/thinking/Track-Risk.md) section. +Risk assessment is a topic we covered in the [Tracking Risks](/thinking/Track-Risk) section. ### 5. Risk response > "Management selects risk responses, avoiding, accepting, reducing or sharing risk, developing a set of actions to align risks with the entity's risk tolerances and risk appetite." - [_Wikipedia_](https://en.wikipedia.org/wiki/Committee_of_Sponsoring_Organizations_of_the_Treadway_Commission#Eight_frame_components) -Deciding how to respond to risk has been covered in depth in [Consider Payoff](/thinking/Consider-Payoff.md) and [Derisking](/thinking/De-Risk.md) so we won't go over this again. +Deciding how to respond to risk has been covered in depth in [Consider Payoff](/thinking/Consider-Payoff) and [Derisking](/thinking/De-Risking) so we won't go over this again. EXAMPLE. @@ -90,4 +90,4 @@ The entire business risk management is monitored and modifications are made as n tbd. -Next article is on [Software Methodology](One-Size-Fits-No-One.md) \ No newline at end of file +Next article is on [Software Methodology](One-Size-Fits-No-One) \ No newline at end of file diff --git a/docs/thinking/Evaluating-Risk.md b/docs/thinking/Evaluating-Risk.md index 16135d568..076fba32f 100644 --- a/docs/thinking/Evaluating-Risk.md +++ b/docs/thinking/Evaluating-Risk.md @@ -92,7 +92,7 @@ Enough with the numbers and the theory: we need a practical framework, rather t - First, there isn't enough scientific evidence for an approach like this.
We can look at collected data about historic IT projects, but techniques and tools advance rapidly. - Second, IT projects have too many confounding factors, such as experience of the teams, technologies used, problem domain, clients etc. That is, the risks faced by IT projects are _too diverse_ and _hard to quantify_ to allow for meaningful comparison from one to the next. -- Third, as soon as you _publish a date_ it changes the expectations of the project (see [Student Syndrome](/risks/Scarcity-Risk.md#student-syndrome)). +- Third, as soon as you _publish a date_ it changes the expectations of the project (see [Student Syndrome](/risks/Scarcity-Risk#student-syndrome)). - Fourth, metrics get [misused](/tags/Map-And-Territory-Risk) and [gamed](/tags/Agency-Risk). ## Discounting In A Crisis @@ -101,6 +101,6 @@ Reality is messy. Dressing it up with numbers doesn't change that and you risk Lots of projects start with good intentions. Carefully evaluating the risks of your actions or inaction is great when the going is good. But then, when the project is hit with delays, everything goes out of the window. -In the next section, on [Crisis Mode](Crisis-Mode.md) we'll see that actually risk management is _still occurring_, but in a subtly different way. +In the next section, on [Crisis Mode](Crisis-Mode), we'll see that actually risk management is _still occurring_, but in a subtly different way. diff --git a/docs/thinking/Health.md b/docs/thinking/Health.md index 5e10306dc..5106584e3 100644 --- a/docs/thinking/Health.md +++ b/docs/thinking/Health.md @@ -31,7 +31,7 @@ I am going to argue here that _risks_ affect the health of a thing, where the th - **A Software Product** is a thing we interact with, built out of code. The health of that software is damaged by the existence of [bugs and missing features](/tags/Feature-Risk). - **A Project**, like making a film or organising a [dinner party](A-Simple-Scenario.md). + - **A Project**, like making a film or organising a [dinner party](A-Simple-Scenario). - **A Commercial Entity**, such as a business, which is exposed to various [Operational Risks](/tags/Operational-Risk) in order to continue to function. Businesses face different health risks than organisms, like key staff leaving, reputation damage or running out of money. @@ -73,7 +73,7 @@ Metrics are difficult though. Choosing the _right_ metrics, knowing their weakn ### Health as Critical Acclaim -Measuring fitness as you go along is not always possible. For a lot of projects, like dinner parties, films or construction projects, the success or failure has to be judged subjectively on completion, and not before. Essentially, the project is a [bet](Glossary.md#bet) on a future outcome. +Measuring fitness as you go along is not always possible. For a lot of projects, like dinner parties, films or construction projects, the success or failure has to be judged subjectively on completion, and not before. Essentially, the project is a [bet](Glossary#bet) on a future outcome. Building a new feature on a software project fits into this category: although you can build tests, do demos and run beta-test programmes, the full picture of the health of what you've built won't emerge until later.
@@ -122,7 +122,7 @@ When an organisation or a project hires an employee they are doing so in order t Sometimes, as discussed in [Agency Risk](/tags/Agency-Risk) these can be in conflict with one another: - - Putting in [a heroic effort](/risks/Agency-Risk.md#the-hero) might save a project but at the expense of your personal health. + - Putting in [a heroic effort](/risks/Agency-Risk#the-hero) might save a project but at the expense of your personal health. - [Lobbying](https://en.wikipedia.org/wiki/Lobbying) is trying to push the political agenda of an organisation at the state level, which might help the health of the organisation at the expense of the state or its citizens. @@ -132,8 +132,8 @@ Sometimes, as discussed in [Agency Risk](/tags/Agency-Risk) these can be in conf If all of these disparate domains at all of these different scales are tracking health risks, it is clear that we should be doing this for software projects too. -The health risks affecting people are well known (by doctors, at least) and we have the list of state-level risks above too. [Risk-First](https://riskfirst.org) is therefore about building a similar catalog for risks affecting the health of software development projects. Risks are in general _not_ unique on software projects - they are the same ones over and over again, such as [Communication Risk](/tags/Comunication-Risk) or [Dependency Risk](/tags/Dependency-Risk). Every project faces these. +The health risks affecting people are well known (by doctors, at least) and we have the list of state-level risks above too. [Risk-First](https://riskfirst.org) is therefore about building a similar catalog for risks affecting the health of software development projects. Risks are in general _not_ unique on software projects - they are the same ones over and over again, such as [Communication Risk](/tags/Communication-Risk) or [Dependency Risk](/tags/Dependency-Risk). Every project faces these. Having shown that risk management is _scale invariant_, we're next going to look at general strategies we can use to manage all of these various health risks. -On to [Derisking](De-Risking.md)... \ No newline at end of file +On to [Derisking](De-Risking)... \ No newline at end of file diff --git a/docs/thinking/Just-Risk.md b/docs/thinking/Just-Risk.md index 3f1af8e68..ff307a099 100644 --- a/docs/thinking/Just-Risk.md +++ b/docs/thinking/Just-Risk.md @@ -55,7 +55,7 @@ This _hints_ at the fact that at some level it's all about risk: ## Every Action Attempts to Manage Risk -The reason you are [taking an action](Glossary.md#taking-action) is to manage a risk. For example: +The reason you are [taking an action](Glossary#taking-action) is to manage a risk. For example: - If you're coding up new features in the software, this is managing [Feature Risk](/tags/Feature-Risk) (which we'll explore in more detail later). - If you're getting a business sign-off for something, this is managing the risk of everyone not agreeing on a course of action (a [Coordination Risk](/tags/Coordination-Risk)). @@ -66,11 +66,11 @@ The reason you are [taking an action](Glossary.md#taking-action) is to manage a - How do you know if the action will get completed? - Will it overrun, or be on time? - Will it lead to yet more actions? -- What [Hidden Risk](/thinking/Glossary.md#hidden-risk) will it uncover? +- What [Hidden Risk](/thinking/Glossary#hidden-risk) will it uncover? -Consider _coding a feature_. 
The whole process of coding is an exercise in learning what we didn't know about the world, uncovering problems and improving our [Internal Model](/thinking/Glossary.md#internal-model). That is, flushing out the [Attendant Risk](/thinking/Glossary.md#attendant-risk) of the [Goal](/thinking/Glossary.md#goal). +Consider _coding a feature_. The whole process of coding is an exercise in learning what we didn't know about the world, uncovering problems and improving our [Internal Model](/thinking/Glossary#internal-model). That is, flushing out the [Attendant Risk](/thinking/Glossary#attendant-risk) of the [Goal](/thinking/Glossary#goal). -And, as we saw in the [Introduction](A-Simple-Scenario.md), even something _mundane_ like the Dinner Party had risks. +And, as we saw in the [Introduction](A-Simple-Scenario), even something _mundane_ like the Dinner Party had risks. ## An Issue is Just A Type of Risk @@ -93,13 +93,13 @@ Let's look at a real-life example. The above image shows a selection of issues ## Goals Are Risks Too -[Earlier](Risk-First-Diagrams.md), we introduced something of a "diagram language" of risk. +[Earlier](Risk-First-Diagrams), we introduced something of a "diagram language" of risk. ![The Risk-First Diagram Language, with _stimulus_ on the left, the action (or _response_) we take in the middle, and the results on the right.](/img/generated/introduction/all_risk_management_language.svg) The above diagram is an idealised example of this, showing how we take action to address the risks and goals on the left and end up with new risks on the right. -[Goals](/thinking/Glossary.md#goal) live inside our [Internal Model](/thinking/Glossary.md#internal-model), just like Risks. Functionally, Goals and Risks are equivalent. For example, the Goal of "Implementing Feature X" is equivalent to mitigating "Risk of Feature X not being present". +[Goals](/thinking/Glossary#goal) live inside our [Internal Model](/thinking/Glossary#internal-model), just like Risks. Functionally, Goals and Risks are equivalent. For example, the Goal of "Implementing Feature X" is equivalent to mitigating "Risk of Feature X not being present". Let's try and back up that assertion with a few more examples: @@ -109,7 +109,7 @@ Let's try and back up that assertion with a few more examples: | Risk of looking technically inferior during the cold war | Feeling of technical superiority | Land a man on the moon | | Risk of the market not requiring your skills | Job security | Retrain | -There is a certain "interplay" between the concepts of risks, actions and goals. On the [Risk Landscape](/thinking/Glossary.md#risk-landscape), goals and risks correspond to starting points and destinations, whilst the action is moving on the risk landscape. +There is a certain "interplay" between the concepts of risks, actions and goals. On the [Risk Landscape](/thinking/Glossary#risk-landscape), goals and risks correspond to starting points and destinations, whilst the action is moving on the risk landscape. | **Starting Point** | **Movement** | **End Point** | |--------------------|--------------|--------------------------------| @@ -123,9 +123,9 @@ But risks, goals and actions are deeply connected. By focusing on "Risk-First", ![Risks, Goals, Opportunities, Anti-goals](/img/generated/introduction/risks_opportunities.svg) -Some literature talks about [Opportunities](Glossary.md#opportunity) as being the opposite of [Risks](Glossary.md#risk). Here, we tend to call these [Upside Risks](Glossary.md#upside-risk). 
Therefore, there is a related discipline of _opportunity management_. +Some literature talks about [Opportunities](Glossary#opportunity) as being the opposite of [Risks](Glossary#risk). Here, we tend to call these [Upside Risks](Glossary#upside-risk). Therefore, there is a related discipline of _opportunity management_. -Here, we're not going to get into this except to say that sometimes it is worth also considering the idea of [Anti-Goals](../misc/Anti-Goals.md): that is, being clear about the things you really want to avoid happening, as shown in the figure above. +Here, we're not going to get into this except to say that sometimes it is worth also considering the idea of [Anti-Goals](Anti-Goals): that is, being clear about the things you really want to avoid happening, as shown in the figure above. ## Summary @@ -133,4 +133,4 @@ A Risk-First diagram represents a starting point (a risk, a goal), some movement However, where this becomes problematic is when trying to decide what work to do: is the expected destination _worth_ the effort of the action? -So next, let's look at how we should [Track Risks](Track-Risk.md) in order to make sure we're not missing anything important. +So next, let's look at how we should [Track Risks](Track-Risk) in order to make sure we're not missing anything important. diff --git a/docs/thinking/Meeting-Reality.md b/docs/thinking/Meeting-Reality.md index 2a266bb7e..1c2ac4af8 100644 --- a/docs/thinking/Meeting-Reality.md +++ b/docs/thinking/Meeting-Reality.md @@ -28,25 +28,25 @@ tweet: yes # Meeting Reality -Of the new terminology we've looked at so far, [Meeting Reality](Glossary.md#meet-reality) might be one of the most baffling. However, it is a crucial concept in risk management. +Of the new terminology we've looked at so far, [Meeting Reality](Glossary#meet-reality) might be one of the most baffling. However, it is a crucial concept in risk management. -Here we look at how exposing your [Internal Model](Glossary.md#meet-reality) to reality is in itself a good risk management technique. +Here we look at how exposing your [Internal Model](Glossary#internal-model) to reality is in itself a good risk management technique. ![Meeting Reality](/img/generated/principles/meet-reality.svg) ## Different Internal Models The world is too complex to understand at a glance. It takes years of growth and development for humans to build a useful [internal model](Glossary.md#internal-model) of reality in our heads. +The world is too complex to understand at a glance. It takes years of growth and development for humans to build a useful [internal model](Glossary#internal-model) of reality in our heads. Within a development team, the model is split amongst people, documents, email, tickets, code... but it is still a model. This "[Internal Model](/thinking/Glossary.md#internal-model)" of reality informs the actions we take in life: we take actions based on our model, hoping to change reality with some positive outcome. +This "[Internal Model](/thinking/Glossary#internal-model)" of reality informs the actions we take in life: we take actions based on our model, hoping to change reality with some positive outcome. ![Taking actions changes reality, but changes your model of the risks too](/img/generated/introduction/model_vs_reality_2.svg) -For example, while [organising a dinner party](A-Simple-Scenario.md) you'll have a model of who you expect to come. You might take actions to ensure there is enough food, that you've got RSVPs and so on.
+For example, while [organising a dinner party](A-Simple-Scenario) you'll have a model of who you expect to come. You might take actions to ensure there is enough food, that you've got RSVPs and so on. -The actions we take have consequences in the real world. Hopefully, we eliminate some known risks but we might expose new [hidden risks](/thinking/Glossary.md#hidden-risk) as we go. There is a _recursive_ nature about this - we're left with an updated Internal Model, and we see new actions we have to take as a result. +The actions we take have consequences in the real world. Hopefully, we eliminate some known risks, but we might expose new [hidden risks](/thinking/Glossary#hidden-risk) as we go. There is a _recursive_ nature about this - we're left with an updated Internal Model, and we see new actions we have to take as a result. ## Navigating the "Risk Landscape" @@ -54,11 +54,11 @@ The diagram above shows _just one possible action_ but really, you'll have choic What's the best way? -I would argue that the best choice of what to do is the one has the greatest [Payoff](Consider-Payoff.md) - the one that mitigates the most existing risk while accruing the least attendant risk to get it done. That is, when you take an action, you are trading off a big risk for a smaller one. +I would argue that the best choice of what to do is the one that has the greatest [Payoff](Consider-Payoff) - the one that mitigates the most existing risk while accruing the least attendant risk to get it done. That is, when you take an action, you are trading off a big risk for a smaller one. ![Navigating The Risk Landscape](/img/generated/introduction/risk_landscape_1.svg) -You can think of [Taking Action](/thinking/Glossary.md#taking-action) as moving your project on a "[Risk Landscape](Glossary.md#risk-landscape)". Ideally, when you take an action, you move from some place with worse risk to somewhere more favourable, as shown in the diagram above. +You can think of [Taking Action](/thinking/Glossary#taking-action) as moving your project on a "[Risk Landscape](Glossary#risk-landscape)". Ideally, when you take an action, you move from some place with worse risk to somewhere more favourable, as shown in the diagram above. Now, that's easier said than done! Sometimes, you can end up somewhere _worse_: the action you took to manage a risk has made things worse. Almost certainly, this will have been due to a hidden risk that you weren't aware of when you embarked on the action, otherwise you'd not have chosen it. @@ -68,17 +68,17 @@ Now, that's easier said than done! Sometimes, you can end up somewhere _worse_: _Automating processes_ (as shown in the diagram above) is often tempting: it _should_ save time, and reduce the amount of boring, repetitive work on a project. But sometimes, it turns into an industry in itself, consumes more effort than it'll ever pay back and needs to be maintained in the future at great expense. -One popular type of automation is [Unit Testing](/practices/Glossary-Of-Practices.md#unit-testing). Writing unit tests adds to the amount of development work, so on its own, it _uses up time from the schedule_. It also creates complexity - you now have more code to manage. However, if you write _just enough_ of the right unit tests, you should be short-cutting the time spent finding issues in the User Acceptance Testing (UAT) stage, so you're hopefully trading off a larger [Schedule Risk](/tags/Schedule-Risk) from UAT and adding a smaller [Schedule Risk](/tags/Schedule-Risk) to Development.
+One popular type of automation is [Unit Testing](/practices/Glossary-Of-Practices#unit-testing). Writing unit tests adds to the amount of development work, so on its own, it _uses up time from the schedule_. It also creates complexity - you now have more code to manage. However, if you write _just enough_ of the right unit tests, you should be short-cutting the time spent finding issues in the User Acceptance Testing (UAT) stage, so you're hopefully trading off a larger [Schedule Risk](/tags/Schedule-Risk) from UAT and adding a smaller [Schedule Risk](/tags/Schedule-Risk) to Development. ### Example: MongoDB On a previous project in a bank we had a requirement to store a modest amount of data and we needed to be able to retrieve it fast. The developer chose to use [MongoDB](https://www.mongodb.com) for this. At the time, others pointed out that other teams in the bank had had lots of difficulty deploying MongoDB internally, due to licensing issues and other factors internal to the bank. -Other options were available, but the developer chose MongoDB because of their _existing familiarity_ with it: therefore, they felt that the [Hidden Risks](/thinking/Glossary.md#hidden-risk) of MongoDB were _lower_ than the other options. +Other options were available, but the developer chose MongoDB because of their _existing familiarity_ with it: therefore, they felt that the [Hidden Risks](/thinking/Glossary#hidden-risk) of MongoDB were _lower_ than the other options. This turned out to be a mistake: the internal bureaucracy eventually proved too great and MongoDB had to be abandoned after much investment of time. -This is not a criticism of MongoDB: it's simply a demonstration that sometimes, the cure is worse than the disease. Successful projects are _always_ trying to _reduce_ [Attendant Risks](/thinking/Glossary.md#attendant-risk). +This is not a criticism of MongoDB: it's simply a demonstration that sometimes, the cure is worse than the disease. Successful projects are _always_ trying to _reduce_ [Attendant Risks](/thinking/Glossary#attendant-risk). ## The Cost Of Meeting Reality @@ -110,7 +110,7 @@ Activities like User Acceptance Testing (UAT) or incremental delivery give us so ## Trade-Offs -Making a move on the [Risk Landscape](Glossary.md#risk-landscape) is about accepting a trade-off. And the examples in this section are all classic software development trade-offs. If you're an experienced software developer, you'll understand that any technology decision (whether it's unit testing, database choices or release processes - the examples we've seen here) means accepting a trade-off. +Making a move on the [Risk Landscape](Glossary#risk-landscape) is about accepting a trade-off. And the examples in this section are all classic software development trade-offs. If you're an experienced software developer, you'll understand that any technology decision (whether it's unit testing, database choices or release processes - the examples we've seen here) means accepting a trade-off. The Risk-First diagram gives us two things. First, it makes this trade off clear: what do I lose? what do I gain? Second, by describing our trade-offs in terms of _risk_, we are also making clear the fact that up front, we're never certain whether the trade-off will be worth it. @@ -118,11 +118,11 @@ The Risk-First diagram gives us two things. 
First, it makes this trade off clea

## Summary

So, here we've looked at Meeting Reality, which basically boils down to taking actions to expose yourself to hidden risks and seeing how it turns out:

-- Each action you take is a step on the [Risk Landscape](/thinking/Glossary.md#risk-landscape), trading off one set of risks for another.
-- Each action exposes new [Hidden Risks](/thinking/Glossary.md#hidden-risk), changing your [Internal Model](/thinking/Glossary.md#internal-model).
-- Ideally, each action should reduce the overall [Attendant Risk](/thinking/Glossary.md#attendant-risk) on the project (that is, puts it in a better place on the [Risk Landscape](/thinking/Glossary.md#risk-landscape).
+- Each action you take is a step on the [Risk Landscape](/thinking/Glossary#risk-landscape), trading off one set of risks for another.
+- Each action exposes new [Hidden Risks](/thinking/Glossary#hidden-risk), changing your [Internal Model](/thinking/Glossary#internal-model).
+- Ideally, each action should reduce the overall [Attendant Risk](/thinking/Glossary#attendant-risk) on the project (that is, put it in a better place on the [Risk Landscape](/thinking/Glossary#risk-landscape)).

-Could it be that _everything_ you do on a software project is risk management? This is an idea explored next in [Just Risk](Just-Risk.md).
+Could it be that _everything_ you do on a software project is risk management? This is an idea explored next in [Just Risk](Just-Risk).

diff --git a/docs/thinking/One-Size-Fits-No-One.md b/docs/thinking/One-Size-Fits-No-One.md
index fce97f050..5e18aa586 100644
--- a/docs/thinking/One-Size-Fits-No-One.md
+++ b/docs/thinking/One-Size-Fits-No-One.md
@@ -24,13 +24,13 @@ tweet: yes

Why are [Software Methodologies](https://en.wikipedia.org/wiki/Software_development_process) all different?

-[Previously](Just-Risk.md), we made the case that any action you take on a software project is to do with managing risk. The last section, [A Conversation](A-Conversation.md) was an example of this happening.
+[Previously](Just-Risk), we made the case that any action you take on a software project is to do with managing risk. The last section, [A Conversation](A-Conversation), was an example of this happening.

Therefore, it stands to reason that software methodologies are all about handling risk too. Since they are prescribing a particular day-to-day process, or set of actions to take, they are also prescribing a particular approach to managing the risks on software projects.

## Methodologies Surface Hidden Risks...

-Back in the [Development Process](Development-Process.md) section we introduced a toy software methodology that a development team might follow when building software. It included steps like _analysis_, _coding_ and _testing_. We looked at how the purpose of each of these actions was to manage risk in the software delivery process. For example, it doesn't matter if a developer doesn't know that he's going to break "Feature Y", because the _Integration Testing_ part of the methodology will expose this [hidden risk](/thinking/Glossary.md#hidden-risk) in the testing stage, rather than in let it surface in production (where it becomes more expensive).
+Back in the [Development Process](Development-Process) section we introduced a toy software methodology that a development team might follow when building software. It included steps like _analysis_, _coding_ and _testing_. We looked at how the purpose of each of these actions was to manage risk in the software delivery process.
For example, it doesn't matter if a developer doesn't know that he's going to break "Feature Y", because the _Integration Testing_ part of the methodology will expose this [hidden risk](/thinking/Glossary#hidden-risk) in the testing stage, rather than letting it surface in production (where it becomes more expensive).

## ... But Replace Judgement

@@ -53,10 +53,10 @@ In this section, we're going to have a brief look at some different software met

Waterfall is a family of methodologies advocating a linear, stepwise approach to the processes involved in delivering a software system. The basic idea behind Waterfall-style methodologies is that the software process is broken into distinct stages, as shown in the diagram above. These usually include:

- [Requirements Capture](/tags/Requirements-Capture)
-- [Specification](tags/Design)
+- [Specification](/tags/Design)
- [Implementation](/tags/Coding)
- [Verification](/tags/User-Acceptance-Testing)
-- [Delivery](/tags/Release) and [Operations](/tags/Support.md)
+- [Delivery](/tags/Release) and [Operations](/tags/Issue-Management)
- [Sign Offs](/tags/Approvals) at each stage

Because Waterfall methodologies are borrowed from _the construction industry_, they manage the risks that you would care about in a construction project. Specifically, minimising the risk of rework, and the risk of costs spiralling during the physical phase of the project. For example, pouring concrete is significantly easier than digging it out again after it sets.

@@ -101,19 +101,19 @@ Here are some high-level differences we see in some other popular methodologies:

- **[DevOps](https://en.wikipedia.org/wiki/DevOps)**. Many software systems struggle at the [boundary](/tags/Boundary-Risk) between "in development" and "in production". DevOps is an acknowledgement of this, and is about more closely aligning the feedback loops between the developers and the production system. It champions activities such as continuous deployment, automated releases and automated monitoring.

-While this is a limited set of examples, you should be able to observe that the [actions](/thinking/Glossary.md#taking-action) promoted by a methodology are contingent on the risks it considers important.
+While this is a limited set of examples, you should be able to observe that the [actions](/thinking/Glossary#taking-action) promoted by a methodology are contingent on the risks it considers important.

## Effectiveness

> "All methodologies are based on fear. You try to set up habits to prevent your fears from becoming reality." - [Extreme Programming Explained, _Kent Beck_](http://amzn.eu/d/1vSqAWa)

-The promise of any methodology is that it will help you manage certain [Hidden Risks](/thinking/Glossary.md#hidden-risk). But this comes at the expense of the _effort_ you put into the practices of the methodology.
+The promise of any methodology is that it will help you manage certain [Hidden Risks](/thinking/Glossary#hidden-risk). But this comes at the expense of the _effort_ you put into the practices of the methodology.

-A methodology offers us a route through the [Risk Landscape](/thinking/Glossary.md#risk-landscape), based on the risks that the designers of the methodology care about. When we use the methodology, it means that we are baking into our behaviour actions to avoid those risks.
+A methodology offers us a route through the [Risk Landscape](/thinking/Glossary#risk-landscape), based on the risks that the designers of the methodology care about.
When we use the methodology, it means that we are baking actions to avoid those risks into our behaviour.

### Methodological Failure

-When we [take action](/thinking/Glossary.md#taking-action) according to a methodology, we expect the [Payoff](/thinking/Glossary.md#payoff), and if this doesn't materialise, then we feel the methodology is failing us. It could just be that it is inappropriate to the _type of project_ we are running. Our [Risk Landscape](/thinking/Glossary.md#risk-landscape) may not be the one the designers of the methodology envisaged. For example:
+When we [take action](/thinking/Glossary#taking-action) according to a methodology, we expect the [Payoff](/thinking/Glossary#payoff), and if this doesn't materialise, then we feel the methodology is failing us. It could just be that it is inappropriate to the _type of project_ we are running. Our [Risk Landscape](/thinking/Glossary#risk-landscape) may not be the one the designers of the methodology envisaged. For example:

- NASA [doesn't follow an agile methodology](https://swehb.nasa.gov/display/7150/SWEREF-278) when launching spacecraft: there's no two-weekly launch that they can iterate over, and the risks of losing a rocket or satellite are simply too great to allow for iteration in production. The risk profile is just all wrong: you need to manage the risk of _losing hardware_ over the risk of _requirements changing_.

@@ -135,7 +135,7 @@ An off-the-shelf methodology is unlikely to fit the risks of any project exactly

![Methodologies, Actions, Risks, Goals](/img/generated/executive-summary/pattern_language.svg)

-As the above diagram shows, different methodologies advocate different practices, and different practices manage different risks. If we want to understand methodologies, or choose practices from one, we really need to understand the _types of risks_ we face on software projects. This is where we [go next](/risks/Start.md).
+As the above diagram shows, different methodologies advocate different practices, and different practices manage different risks. If we want to understand methodologies, or choose practices from one, we really need to understand the _types of risks_ we face on software projects. This is where we [go next](/risks/Start).

-The last part of this track is the [Glossary](Glossary.md), which summarises all the new terms we've covered here.
+The last part of this track is the [Glossary](Glossary), which summarises all the new terms we've covered here.

diff --git a/docs/thinking/Risk-First-Diagrams.md b/docs/thinking/Risk-First-Diagrams.md
index 13eb4eeb0..6d2d11051 100644
--- a/docs/thinking/Risk-First-Diagrams.md
+++ b/docs/thinking/Risk-First-Diagrams.md
@@ -21,11 +21,11 @@ tweet: yes

# Risk-First Diagrams Explained

-Throughout [A Simple Scenario](A-Simple-Scenario.md) we used diagrams to explain the risks we faced and the choices we were making. These are called "Risk-First Diagrams". Here, we're going to look at what is going on in these diagrams so that when we come to apply them to _software development_, they're not totally confusing.
+Throughout [A Simple Scenario](A-Simple-Scenario) we used diagrams to explain the risks we faced and the choices we were making. These are called "Risk-First Diagrams". Here, we're going to look at what is going on in these diagrams so that when we come to apply them to _software development_, they're not totally confusing.
![Goal In Mind and Attendant Risks](/img/generated/introduction/goal_in_mind.svg)

-The diagram above is taken from the [dinner party](A-Simple-Scenario.md) example: we want to host a successful party, but in doing so, we know there are [Attendant Risks](Glossary.md#attendant-risk).
+The diagram above is taken from the [dinner party](A-Simple-Scenario) example: we want to host a successful party, but in doing so, we know there are [Attendant Risks](Glossary#attendant-risk).

![Nothing To Eat](/img/generated/introduction/diagram_example.svg)

@@ -53,7 +53,7 @@ In the middle of a Risk-First diagram we see the actions you could take. In the

![Outcomes](/img/generated/introduction/outcome.svg)

-_Nothing comes for free._ On the right, you can see the consequence or outcome of the actions you've taken: [Attendant Risks](/thinking/Glossary.md#attendant-risk) are the _new_ risks you now have as a result of taking the action.
+_Nothing comes for free._ On the right, you can see the consequence or outcome of the actions you've taken: [Attendant Risks](/thinking/Glossary#attendant-risk) are the _new_ risks you now have as a result of taking the action.

Hosting a dinner party opens you up to attendant risks like "Not Enough to Eat". As a result of that risk, we consider buying lots of snacks. As a result of _that_ risk, we start to consider whether our guests will be impressed with that.

@@ -61,19 +61,19 @@ Hosting a dinner party opens you up to attendant risks like "Not Enough to Eat".

It's worth pointing out that sometimes _the cure is worse than the disease_.

-By [Taking Action](/thinking/Glossary.md#taking-action) you might end up in a worse predicament than you started. For example, cutting your legs off _would definitely cure your in-growing toenail_. We have to use our judgement to decide on the right course of action!
+By [Taking Action](/thinking/Glossary#taking-action) you might end up in a worse predicament than when you started. For example, cutting your legs off _would definitely cure your in-growing toenail_. We have to use our judgement to decide on the right course of action!

### A Balance of Risk

-So Risk-First diagrams represent a [balance of risk](/thinking/Glossary.md#balance-of-risk): whether or not you choose to take the action will depend on your evaluation of this balance. Are the things on the left worse or better than the things on the right?
+So Risk-First diagrams represent a [balance of risk](/thinking/Glossary#balance-of-risk): whether or not you choose to take the action will depend on your evaluation of this balance. Are the things on the left worse or better than the things on the right?

### Cause and Effect

![Stimulus, Response, Outcome](/img/generated/introduction/stimulus-response-outcome.svg)

-You can think about a Risk-First diagram in a sense as a way of visualising _cause and effect_. In _biological terms_ this is called the [Stimulus-Response Model](https://en.wikipedia.org/wiki/Stimulus–response_model), or sometimes, as shown in the diagram above, Stimulus-Response-Outcome. The items on the left of the diagram are the _stimulus_ part: they're the thing that makes us [Take Action](Glossary.md#taking-action) in the world. The middle part (the action) is the response and the right side is the outcome.
+You can think about a Risk-First diagram, in a sense, as a way of visualising _cause and effect_.
In _biological terms_ this is called the [Stimulus-Response Model](https://en.wikipedia.org/wiki/Stimulus–response_model), or sometimes, as shown in the diagram above, Stimulus-Response-Outcome. The items on the left of the diagram are the _stimulus_ part: they're the thing that makes us [Take Action](Glossary#taking-action) in the world. The middle part (the action) is the response and the right side is the outcome. -There are [all kinds of risks](/risks/Risk-Landscape.md) we face in life and we attach different value or _criticality_ to them. Most people will want to take action against the worst risks they face in their lives and maybe put up with some of the lesser ones. Equally, we should also try and achieve our _most critical_ goals and let the lesser ones slide (at least, from a rational standpoint). +There are [all kinds of risks](/risks/Risk-Landscape) we face in life and we attach different value or _criticality_ to them. Most people will want to take action against the worst risks they face in their lives and maybe put up with some of the lesser ones. Equally, we should also try and achieve our _most critical_ goals and let the lesser ones slide (at least, from a rational standpoint). ### Functions @@ -90,27 +90,27 @@ There are a few other bits and pieces that crop up in these diagrams that we sho ### Containers For _Internal Models_ -The risks on the left and right are contained in rounded-boxes. That's because risks live in our [Internal Models](/thinking/Glossary.md#internal-model) - they're not real-world things you can reach out and touch. They're _contained_ in things like brains, spreadsheets, reports and programs. +The risks on the left and right are contained in rounded-boxes. That's because risks live in our [Internal Models](/thinking/Glossary#internal-model) - they're not real-world things you can reach out and touch. They're _contained_ in things like brains, spreadsheets, reports and programs. #### Example: Blaming Others ![Blame Game](/img/generated/introduction/blame.svg) -In the above diagram, you can see how Jim is worried about his job security, probably because he's made a mistake at work. Therefore, in his [Internal Model](/thinking/Glossary.md#internal-model) he has [Funding Risks](/tags/Funding-Risk), i.e. he's worried about money. +In the above diagram, you can see how Jim is worried about his job security, probably because he's made a mistake at work. Therefore, in his [Internal Model](/thinking/Glossary#internal-model) he has [Funding Risks](/tags/Funding-Risk), i.e. he's worried about money. -What does he do? His [Action](/thinking/Glossary.md#taking-action) is to blame Bob. If all goes according to plan, Jim has dealt with his risk and now Bob has the problems instead. +What does he do? His [Action](/thinking/Glossary#taking-action) is to blame Bob. If all goes according to plan, Jim has dealt with his risk and now Bob has the problems instead. ### Mitigated and Hidden Risk ![Mitigated and Hidden](/img/generated/introduction/hidden-mitigated.svg) -The diagram above shows two other marks we use quite commonly: we put a "strike" through a risk to show that it's been dealt with in some way and the "cloud" icon denotes [Hidden Risks](/thinking/Glossary.md#hidden-risk)- those _unknown unknowns_ that we couldn't have predicted in advance. 
+The diagram above shows two other marks we use quite commonly: we put a "strike" through a risk to show that it's been dealt with in some way, and the "cloud" icon denotes [Hidden Risks](/thinking/Glossary#hidden-risk) - those _unknown unknowns_ that we couldn't have predicted in advance.

### Artifacts

![Artifacts](/img/generated/introduction/artifacts.svg)

-Sometimes, we add _artifacts_ to Risk-First diagrams. That is, real-world things such as people, documents, code, servers and so on. This is because as well as changing [Internal Models](/thinking/Glossary.md#internal-model), [Taking Action](/thinking/Glossary.md#taking-action) will produce real results and consume inputs in order to do so. So, it's sometimes helpful to include these on the diagram. Some examples are shown in the diagram above.
+Sometimes, we add _artifacts_ to Risk-First diagrams. That is, real-world things such as people, documents, code, servers and so on. This is because as well as changing [Internal Models](/thinking/Glossary#internal-model), [Taking Action](/thinking/Glossary#taking-action) will produce real results and consume inputs in order to do so. So, it's sometimes helpful to include these on the diagram. Some examples are shown in the diagram above.

### Causation and Correlation

@@ -130,8 +130,8 @@ Let's quickly summarise again what's happening in these diagrams:

## Next

-Risk-First is about understanding risk in software development, so next let's examine the scenario of a new software project. Instead of a single person organising a dinner party, we are likely to have a team, and our [Internal Model](Glossary.md#internal-model) will not just exist in our heads, but in the code we write.
+Risk-First is about understanding risk in software development, so next let's examine the scenario of a new software project. Instead of a single person organising a dinner party, we are likely to have a team, and our [Internal Model](Glossary#internal-model) will not just exist in our heads, but in the code we write.

-On to [Development Process](Development-Process.md)...
+On to [Development Process](Development-Process)...

diff --git a/docs/thinking/Track-Risk.md b/docs/thinking/Track-Risk.md
index 9c5eb5349..3ba50c80c 100644
--- a/docs/thinking/Track-Risk.md
+++ b/docs/thinking/Track-Risk.md
@@ -21,10 +21,10 @@ In this section we're going to look at the importance of keeping track of risks.

## Risk Registers

-Most developers are familiar with recording issues in an issue tracker. As we saw in [Just Risk](Just-Risk.md), _issues are a type of risk_, so it makes sense that issue trackers could be used for recording all project risks. Within risk management, this is actually called a [Risk Register](https://en.wikipedia.org/wiki/Risk_register). Typically, this will include for each risk:
+Most developers are familiar with recording issues in an issue tracker. As we saw in [Just Risk](Just-Risk), _issues are a type of risk_, so it makes sense that issue trackers could be used for recording all project risks. Within risk management, this is actually called a [Risk Register](https://en.wikipedia.org/wiki/Risk_register). Typically, this will include for each risk:

- The **name** of the risk, or other identifier.
- - A **categories** to which the risk belongs (this is the focus of the [Risk Landscape](/risks/Risk-Landscape.md) section in Part 2).
+ - The **categories** to which the risk belongs (this is the focus of the [Risk Landscape](/risks/Risk-Landscape) section in Part 2).
- A **brief description** or name of the risk to make the risk easy to discuss.
- Some estimate for the **Impact**, **Probability** or **Risk Score** of the risk.
- Proposed actions and a log of the progress made to manage the risk.

@@ -33,7 +33,7 @@ If you work in software development and are familiar with [a product backlog](ht

### 1. A Continuum of Formality

-In the [planning-a-dinner-party example](Meeting-Reality.md) the Risk Register happened *entirely in your head*. There is a continuum all the way from "in your head" through "using a spreadsheet" to dedicated Risk Management software.
+In the [planning-a-dinner-party example](Meeting-Reality) the Risk Register happened *entirely in your head*. There is a continuum all the way from "in your head" through "using a spreadsheet" to dedicated Risk Management software.

When you have a team of people trying to coordinate, then it's very important that this stuff is written down in a "single source of truth" somewhere that everyone on the team can add to and view. Having a list of named risks (tasks, whatever) becomes useful when trying to decide what to do next and for dividing up work within the team. It's no good everyone having a different in-head risk register as you'll never find agreement on what things to tackle next.

@@ -55,7 +55,7 @@ Let's look at an example. In a financial context (or a gambling one), we can co

Risk Management in the finance industry _starts_ here and gets more complex. But often (especially on a software project), it's better to skip all this and just estimate a Risk Score. This is because if you think about "impact", it implies a definite, discrete event occurring (or not occurring) and asks you then to consider the probability of that.

-So the second point to take away is - what is exactly happening when we set the priority of items in our backlog? Are we arranging them by **Impact**, **Probability**, **Risk Score** or are we looking also at the [action we would take](Glossary.md#taking-action) and factoring in the [Payoff](Glossary.md#payoff)?
+So the second point to take away is - what exactly is happening when we set the priority of items in our backlog? Are we arranging them by **Impact**, **Probability**, **Risk Score** or are we looking also at the [action we would take](Glossary#taking-action) and factoring in the [Payoff](Glossary#payoff)?

We'll come back to this in a minute.

@@ -67,11 +67,11 @@ I am using **risk** everywhere because later we will talk about specific risks (

Additionally there is pre-existing usage in Banking of terms like [Operational Risk](https://en.wikipedia.org/wiki/Operational_risk) or [Reputational risk](https://www.investopedia.com/terms/r/reputational-risk.asp) which are also not really a-priori measurable.

-Later, we'll dig into [Health](Health.md), which puts this on a better foundation.
+Later, we'll dig into [Health](Health), which puts this on a better foundation.

### 4. An Issue Tracker Is Also A Risk Register

-As covered in [Just Risk](Just-Risk.md), we know that _all work_ is managing risk. So therefore it stands to reason that if you are using an issue tracker then actually you are tracking risks. After all, issues are capturing the risk that:
+As covered in [Just Risk](Just-Risk), we know that _all work_ is managing risk. It therefore stands to reason that if you are using an issue tracker then you are actually tracking risks.
After all, issues are capturing the risk that:

- your customers stop using your product
- someone is harmed by your product

@@ -83,19 +83,19 @@ Much more likely, it will have a field for _priority_, or allow the ordering of

- When someone says "this should be low priority as it's very unlikely to occur" then they're making a statement about **Probability**.
- When someone says "this should be low priority because no one is going to care if we fix it" then they're making a statement about **Impact**.
- - When someone says "this should be high priority as its a quick win" then maybe they're talking about [Payoff](Glossary.md#payoff).
+ - When someone says "this should be high priority as it's a quick win" then maybe they're talking about [Payoff](Glossary#payoff).

## Visualising Risks

![Risk Matrix of Dinner Party Risks](/img/generated/introduction/risk_matrix.svg)

-A risk matrix presents a graphical view on where risks exist. The diagram above is an example, showing the risks from the dinner party in the [A Simple Scenario](A-Simple-Scenario.md) section. The useful thing about this visualisation is it helps focus attention on the risks at the top and to the right - those with the biggest impact and probability.
+A risk matrix presents a graphical view of where risks exist. The diagram above is an example, showing the risks from the dinner party in the [A Simple Scenario](A-Simple-Scenario) section. The useful thing about this visualisation is that it helps focus attention on the risks at the top and to the right - those with the biggest impact and probability.

-Risks at the bottom or left side of the diagram are candidates for being ignored or simply "accepted" (which we'll come to in a [later section](De-Risking#retain)). If you're using something like [Scrum](/practices/Glossary-Of-Practices.md#scrum), then these might be issues that you remove in the process of [backlog refinement](/practices/Glossary-Of-Practices.md#backlog-refinement).
+Risks at the bottom or left side of the diagram are candidates for being ignored or simply "accepted" (which we'll come to in a [later section](De-Risking#retain)). If you're using something like [Scrum](/practices/Glossary-Of-Practices#scrum), then these might be issues that you remove in the process of [backlog refinement](/practices/Glossary-Of-Practices#backlog-refinement).

## Incorporating Payoff

-The diagram above is _helpful_ in deciding what to focus on next, but it doesn't consider [Payoff](/thinking/Glossary.md#payoff). The reason for this is that up until this point, we've been tracking risks but not necessarily figuring out what to do about them. Quite often when I raise an issue on a project I will also include the details of the fix for that issue, or maybe I'll _only_ include the details of the fix.
+The diagram above is _helpful_ in deciding what to focus on next, but it doesn't consider [Payoff](/thinking/Glossary#payoff). The reason for this is that up until this point, we've been tracking risks but not necessarily figuring out what to do about them. Quite often when I raise an issue on a project I will also include the details of the fix for that issue, or maybe I'll _only_ include the details of the fix.

For example, let's say I raise an issue saying that I want a button to sort an access control list by the surnames of the users in the list. What am I really getting at here? This could be a solution to the problem that _I'm wasting time looking for users in a list_.
Alternatively, it could be trying to solve the problem that _I'm struggling to keep the right people on the list_. Or maybe both. The risk of the former is around wasted time (for me) but the latter might be a security risk, and so a higher priority.

@@ -111,20 +111,20 @@ _Really good design_ would be coming up with a course of action that takes care

## Criticism

-One of the criticisms of the [Risk Register](Track-Risk.md#risk-registers) approach is that of [mistaking the map for the territory](/tags/Map-And-Territory-Risk). That is, mistakenly believing that what's on the Risk Register _is all there is_.
+One of the criticisms of the [Risk Register](Track-Risk#risk-registers) approach is that of [mistaking the map for the territory](/tags/Map-And-Territory-Risk). That is, mistakenly believing that what's on the Risk Register _is all there is_.

-In the preceding discussions, I have been careful to point out the existence of [Hidden Risks](/thinking/Glossary.md#hidden-risk) for that very reason. Or, to put another way:
+In the preceding discussions, I have been careful to point out the existence of [Hidden Risks](/thinking/Glossary#hidden-risk) for that very reason. Or, to put it another way:

> "What we don't know is what usually gets us killed" - [Petyr Baelish, _Game of Thrones_](https://medium.com/@TanyaMardi/petyr-baelishs-best-quotes-on-game-of-thrones-1ea92968db5c)

Donald Rumsfeld's famous [Known Knowns](https://en.wikipedia.org/wiki/There_are_known_knowns) is also a helpful conceptualisation:

- - **A _known_ unknown** is an [Attendant Risk](/thinking/Glossary.md#attendant-risk). i.e. something you are aware of, but where the precise degree of threat can't be established.
- - **An _unknown_ unknown** is a [Hidden Risk](/thinking/Glossary.md#hidden-risk). i.e a risk you haven't even thought to exist yet.
+ - **A _known_ unknown** is an [Attendant Risk](/thinking/Glossary#attendant-risk). i.e. something you are aware of, but where the precise degree of threat can't be established.
+ - **An _unknown_ unknown** is a [Hidden Risk](/thinking/Glossary#hidden-risk). i.e. a risk you haven't even thought of yet.

## Out of the Window

In this section, we've looked at a _continuum of formality_ of risk tracking, going from "in your head" (like the dinner party) through "using an issue tracker" and on to looking at visualisations to help understand which are the key risks to focus on. If you are leading a development project, you will need to decide how formal a process you need for tracking risks and this will depend on the nature of the project. Often, this will depend not just on your _own_ requirements but those of the project's stakeholders too, who will likely want to see that you are dealing with risk responsibly.

-In the next section, [Health](Health.md) we'll be looking at the reason _why_ we need to track risks - to make sure that we keep our projects (and ourselves) healthy.
+In the next section, [Health](Health), we'll be looking at the reason _why_ we need to track risks - to make sure that we keep our projects (and ourselves) healthy.
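
Editor's note: the Track-Risk.md changes above describe a risk register's fields and the convention of estimating a **Risk Score** from **Impact** and **Probability**. A minimal sketch of how those fields might translate into code follows - all names here (`RiskEntry`, the field names, the example entries) are hypothetical illustrations, not anything the docs prescribe:

```python
from dataclasses import dataclass, field

@dataclass
class RiskEntry:
    """One row of a hypothetical risk register, mirroring the fields listed above."""
    name: str          # name of the risk, or other identifier
    category: str      # category the risk belongs to (see the Risk Landscape section)
    description: str   # brief description, to make the risk easy to discuss
    impact: float      # estimated cost if the risk materialises (say, days lost)
    probability: float # estimated likelihood of it materialising, 0.0 to 1.0
    actions: list[str] = field(default_factory=list)  # proposed actions / progress log

    @property
    def risk_score(self) -> float:
        """One common convention: score = impact x probability (an expected loss)."""
        return self.impact * self.probability

# Two dinner-party-flavoured entries, for illustration only.
register = [
    RiskEntry("not-enough-food", "Supply Risk", "Guests go hungry",
              impact=8.0, probability=0.3),
    RiskEntry("no-rsvps", "Communication Risk", "Nobody turns up",
              impact=10.0, probability=0.1),
]

# Sorting by score pushes the top-right corner of the risk matrix to the
# front of the list - the risks with the biggest impact and probability.
for r in sorted(register, key=lambda r: r.risk_score, reverse=True):
    print(f"{r.name}: score {r.risk_score:.1f}")
```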
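
The "Incorporating Payoff" discussion can be given the same treatment. One possible reading - again a hedged sketch, since the docs deliberately avoid prescribing a formula - is to treat an action's payoff as the risk it removes minus the attendant risk it adds, and order candidate actions by that:

```python
from dataclasses import dataclass

@dataclass
class Action:
    """A candidate fix, with illustrative before/after risk estimates."""
    name: str
    risk_removed: float  # total risk score this action is expected to eliminate
    risk_added: float    # attendant risk it introduces (effort, new complexity)

    @property
    def payoff(self) -> float:
        # Positive payoff: the action moves us to a better place on the risk landscape.
        return self.risk_removed - self.risk_added

# Hypothetical actions from the access-control-list example above.
actions = [
    Action("add sort button", risk_removed=1.5, risk_added=0.5),
    Action("review list membership", risk_removed=4.0, risk_added=1.0),
]

# "Quick wins" (high payoff) float to the top - cf. the priority bullets above.
for a in sorted(actions, key=lambda a: a.payoff, reverse=True):
    print(f"{a.name}: payoff {a.payoff:+.1f}")
```

The design point this makes explicit is the one the text argues: prioritising purely by impact or probability ignores the cost of the cure, whereas a payoff ordering compares the risk removed against the risk the action itself adds.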