Measuring decision-making performance. Part 2

What constitutes a high-quality decision, and how can we measure it?

A decision is not a standard product whose quality can easily be verified against established norms. The outcomes of an executed decision can only be evaluated after some time has passed. Whether it is a long-term business strategy or a choice of what to eat for lunch, we will know how good our decision was only after the first annual report or when the plate arrives. But is there a way to measure the quality of a decision at the moment we are making it? Yes, there is!

Let’s use an example. Imagine you are going to a meeting where you will seek the Board of Directors’ approval for a decision you have just made. What is important in such meetings?

  • Accuracy — how well a decision answers the stated problem. High accuracy is the ultimate goal of the analyses and evaluations in the decision-making process.
  • Confidence — the level of certainty that the decision is accurate. Often decisions are made on confidence alone. Everyone can recall a situation when the worse choice was selected just because its presenter was convincing. High confidence helps not only to communicate the decision but also to execute it without resistance.

Being confident and accurate with a decision is always a winning strategy.

Confidence-Accuracy Relation

Now, one may ask: aren’t confidence and accuracy the same thing? No, they are not. They can be related, but numerous studies show that the type of relationship varies with the character of the decision. In the judicial system, eyewitness confidence is often treated as equal to accuracy. This holds up when confidence is low — an eyewitness who is unsure about a facial recognition is most probably looking at someone who is not the perpetrator. But the same is assumed when confidence is high — a confident eyewitness is taken to mean the suspect is guilty. And too many times this assumption is wrong.

In research on military decision-making, confidence and accuracy turned out to be inversely related, depending on decision criticality (the seriousness of the outcomes). Simple decisions were characterized by higher confidence and lower accuracy. Serious decisions were, in general, more accurate, but decision-makers were less confident about them. The research concludes:

“Individuals were significantly more confident in low DC [decision criticality] decisions than medium DC decisions, lending support to previous literature that confidence decreases as difficulty increases.”
“Research has also shown that task performance increases when participants find the task more important (…). This could relate to participants applying more attention and effort to decisions with greater consequences for an incorrect decision.”

Accuracy

The decision-making process should be designed to select the most accurate option. One way to achieve this is by applying the following logic:

  1. Define the challenge objective — using categories or factors, and set the values that describe the desired solution.
  2. Generate alternative solutions — a high number of scenarios adds to the analysis effort but also increases the likelihood that the best one will be found.
  3. Evaluate them against the objective — by measuring how each scenario performs in every category and how close it comes to the desired solution.

✓ The level of fulfillment of these requirements defines the accuracy of a particular alternative.

A numerical scale can be assigned to each category and accuracy then expressed as a percentage. A 100% accurate scenario would be one that scores the maximum in all categories.
This ranking logic is the core of the decision matrix, a tool commonly used in decision-making. Even if you are used to working with other methods, it is useful to run your top choices through a decision matrix before the final judgment.

There is rarely a case in which a single option scores highest in all categories. Often an importance factor (or weight) is used to introduce a hierarchy between the categories and facilitate the evaluation, as in the sketch below.
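To make this ranking logic concrete, here is a minimal sketch of a weighted decision matrix in Python. The categories, weights, and scores are hypothetical examples, not data from a real decision; accuracy is computed as the weighted score expressed as a percentage of the maximum achievable score.

```python
# A minimal decision-matrix sketch. Categories, weights, and scores
# below are hypothetical examples, not data from a real decision.

# Each category gets an importance weight (higher = more important).
weights = {"cost": 3, "time_to_market": 2, "strategic_fit": 5}

# Each alternative is scored per category on a 1-5 scale.
alternatives = {
    "scenario_A": {"cost": 4, "time_to_market": 3, "strategic_fit": 5},
    "scenario_B": {"cost": 5, "time_to_market": 4, "strategic_fit": 2},
}

MAX_SCORE = 5  # top of the per-category scale

def accuracy(scores: dict[str, int]) -> float:
    """Weighted score as a percentage of the maximum achievable score."""
    achieved = sum(weights[c] * s for c, s in scores.items())
    maximum = sum(w * MAX_SCORE for w in weights.values())
    return 100 * achieved / maximum

for name, scores in alternatives.items():
    print(f"{name}: {accuracy(scores):.0f}% accurate")
# scenario_A: (3*4 + 2*3 + 5*5) / (10*5) = 43/50 -> 86%
# scenario_B: (3*5 + 2*4 + 5*2) / (10*5) = 33/50 -> 66%
```

A 100% accurate scenario would score the maximum in every category, which is rare in practice; the weights decide which categories dominate the ranking.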

In practice, accuracy depends on: 

  • The level of detail in describing the objective as a set of measurable factors with corresponding importance.
  • Capturing all workable alternatives.
  • The precision and correctness of measuring each scenario’s performance.

To improve accuracy, we should aim to maximize all three of the above components. In complex challenges, breaking down general factors (like cost) into single features (like staff costs, marketing costs, and production costs) allows us to be more precise in defining the objective and measuring scenario performance, as the short sketch below illustrates.
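A general factor such as cost can itself be treated as a small weighted matrix of sub-factors whose scores roll up into a single cost score. The sub-factors, weights, and scores below are again invented for illustration.

```python
# Hypothetical breakdown of the general "cost" factor into sub-factors,
# each with an importance weight and a 1-5 score for one scenario.
cost_breakdown = {
    # sub-factor: (weight, score)
    "staff_costs":      (5, 3),
    "marketing_costs":  (2, 4),
    "production_costs": (3, 5),
}

total_weight = sum(w for w, _ in cost_breakdown.values())
cost_score = sum(w * s for w, s in cost_breakdown.values()) / total_weight
print(f"aggregated cost score: {cost_score:.1f} / 5")  # (15 + 8 + 15) / 10 = 3.8
```

The aggregated score can then feed the “cost” column of the decision matrix above.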

Accuracy vs. Outcome Evaluation

Accuracy measured at the moment a decision is made does not give us 100% certainty about the decision’s outcomes. But neither does evaluating the actual outcomes of an implemented decision after some time. A decision is only as accurate as its assumptions about uncertainty. Outcomes, on the other hand, depend on how the decision was executed and how well the solution was implemented. Even the best decision can be poorly implemented and its outcomes never achieved — in such cases, the decision itself should not be blamed.
Both the accuracy measurement and the evaluation of decision outcomes should be performed. If we practice this in our organizations for every major decision, the two measures together will give us objective feedback about our decision-making process:
▸ if highly accurate decisions give positive outcomes ⟹ everything is OK, and our process is good
▸ if highly accurate decisions give negative outcomes ⟹ our decision-making process is still not good (we ignore some key factors or fail to measure performance properly), or we fail to implement the decisions correctly

Confidence

Imagine there is one scenario with an accuracy level of 80%. That seems high, and we should be eager to implement it and look forward to the results. But we know how the decision was made: there was little historical data, and the sales department was unable to provide reliable predictions. Assumptions came out of nowhere, without proper analysis. Finally, we know there are very urgent investments happening right now, and the finance department may be unwilling to provide funds for anything but a 100% sure thing. With such knowledge, our confidence is on the negative side.

Now imagine you did not know about all these drawbacks and the scenario was presented to you in a corporate newsletter announcing “weeks of analysis that finally led to a new strategy!”. Sound familiar? You would not be that negative; maybe still not positive, but at least neutral about being convinced.

Confidence is a measure of how much logic and certainty stand behind a decision. In other words, we will not trust a recommendation produced from questionable data in an illogical manner.

confidence = logic + certainty

Confidence is subjective for every person, because it depends on:

  • how much we know about the process,
  • what our understanding of logic is,
  • what our risk appetite is (what level of uncertainty is acceptable).

In other words:

confidence = knowledge * logic + risk acceptance * certainty

All these variables will have distinct values for different decision stakeholders, and we should be aware of their confidence levels. Some people only contribute information and analysis, others evaluate and make the decision, and finally there is a group that will be affected by the decision. The more power a person has in the process, the higher their confidence should be for the decision to be accepted. If we are the ones presenting a recommendation to decision-makers, we can try to estimate their confidence level by breaking down its components (a toy sketch follows the list below):

  • knowledge — how much does a person know about the decision-making process? What data, analytical methods, and models were used, and is this person familiar with these techniques?
  • logic — we can never be sure what a particular person or group of people consider logical, but we can look at the history of their behavior. Logic is natural and always present; even if we think a person did something irrational, for them it was 100% logical at that very moment.
  • risk acceptance — the same as with logic. The level of risk a person accepted previously should indicate what their appetite is. If we do not have this information, it is safer to assume minimal risk acceptance.
  • certainty — what methods and data were used to estimate the likelihood of a particular event? Certainty, or likelihood, can usually be presented on a scale or as a number. It is always good to be transparent and honest about the justification.
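Putting the formula and its components together, here is a toy sketch of how one might estimate stakeholder confidence. The [0, 1] scales and the example values are assumptions made for illustration; only the way the components combine follows the formula above.

```python
# Toy model of: confidence = knowledge * logic + risk_acceptance * certainty.
# All components are assumed to be normalized to [0, 1]; the stakeholder
# values below are purely illustrative, not measured data.

def confidence(knowledge: float, logic: float,
               risk_acceptance: float, certainty: float) -> float:
    """Combine the four components; the result lies in [0, 2] in this model."""
    return knowledge * logic + risk_acceptance * certainty

stakeholders = {
    "analyst":        dict(knowledge=0.9, logic=0.8, risk_acceptance=0.6, certainty=0.7),
    "board_member":   dict(knowledge=0.4, logic=0.8, risk_acceptance=0.3, certainty=0.7),
    "affected_staff": dict(knowledge=0.2, logic=0.7, risk_acceptance=0.2, certainty=0.7),
}

for who, components in stakeholders.items():
    print(f"{who}: confidence = {confidence(**components):.2f}")
```

In this toy run the analyst, who knows the process best, ends up far more confident than the affected staff, which matches the intuition that confidence grows with knowledge of the process.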
Confidence has a strong influence on decision acceptance. Every person has a certain confidence threshold above which the likelihood of acceptance grows rapidly, as the sketch below illustrates. However, if the confidence level is extremely high, we may fall into the overconfidence trap: we tend to be very confident about both accurate and inaccurate decisions.
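One way to picture this threshold effect is with a logistic curve, where the likelihood of acceptance rises steeply once confidence passes a personal threshold. The curve shape, threshold, and steepness below are assumptions chosen for illustration, not an empirical model.

```python
import math

def acceptance_likelihood(confidence: float, threshold: float,
                          steepness: float = 10.0) -> float:
    """Hypothetical logistic model: acceptance becomes likely
    once confidence exceeds the personal threshold."""
    return 1 / (1 + math.exp(-steepness * (confidence - threshold)))

# Example: a stakeholder whose confidence threshold is 0.6 (on a [0, 1] scale).
for c in (0.4, 0.6, 0.8):
    print(f"confidence {c:.1f} -> acceptance likelihood {acceptance_likelihood(c, 0.6):.0%}")
# prints roughly 12%, 50%, and 88%: a steep rise around the threshold
```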

Conclusions

Decision quality depends heavily on:

  • available and reliable data
  • proper analytical and evaluation methods and tools used in the decision-making process
  • proper identification and estimation of certainty

All three of these topics are crucial in every business, and there is much to say about each. For now, I just want to mention some basic but important tips:

  • Organize the data your business generates and gather it regularly. It is as simple as that, yet as powerful as nothing else. Yes, you need to spend some time planning and implementing the system and training your staff to use it, but it will pay off very soon.
  • Think about decision-making as a process, as indicated at the end of the Part 1 article. Be aware that there are many techniques and tools to use. Depending on the decision type, some are more useful than others; always using the same set (like Excel data and brainstorming) for very distinct decisions is not good practice. Expand your knowledge about decision-making methods, here or at Harvard Business Review, where you will find a whole section about decision-making. In the future, we will publish more articles focused on analytical and evaluation tools.
  • Take your time to think about uncertainty. It is difficult for many people. Risk awareness is a natural feature of the human brain. However, for thousands of years, until very recently, risk was identified with danger. What is left from those ancient times is our reluctance towards thinking about risk; we do not like it. We associate risk with failure, and we get anxious even thinking about it. This is the reason we tend to procrastinate on tasks like contingency planning or risk analysis in projects and investment plans.
    There are ways to overcome these biases: 
    • think about the uncertain scenario you are working on as a 100% sure thing. Do not use words like possibility or likelihood – assume it has already happened.
    • separate yourself from the scenario. Pretend the consequences are happening to “a company” or “a person”.
      The above will distance you from the uncertainty and its possible negative consequences, and allow you to cut off some biases.
    • other methods include representing uncertainty as a probability distribution. This involves breaking uncertainty down into basic factors: some will be known, the likelihood of others can be estimated from, e.g., historical data, and the unknowns can be randomized. Everything is then put together into a simulation. If you run such a simulation thousands of times, different outcomes will be generated each time because of the random factors, and you will notice that certain outcomes appear more often than others, indicating their frequency of occurrence. Methods such as Monte Carlo simulation, and more advanced ones, are commonly used in risk evaluation and portfolio management in the finance industry (a minimal sketch follows below).
Most epidemiology models are also produced using the Monte Carlo method; see, for example, the projections at covid19.healthdata.org.
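As a minimal illustration of that approach, the sketch below simulates the profit of a hypothetical project by combining a known factor, a factor estimated from (invented) historical data, and a purely random unknown. All distributions and numbers are assumptions for the example, not real figures.

```python
import random

def simulate_profit() -> float:
    """One Monte Carlo trial for a hypothetical project's profit (in kEUR)."""
    fixed_costs = 100.0                       # known factor
    revenue = random.gauss(mu=250, sigma=80)  # estimated from "historical" data
    delay_penalty = random.uniform(0, 50)     # unknown factor, randomized
    return revenue - fixed_costs - delay_penalty

random.seed(42)  # make the example reproducible
trials = [simulate_profit() for _ in range(10_000)]

mean_profit = sum(trials) / len(trials)
loss_probability = sum(t < 0 for t in trials) / len(trials)
print(f"expected profit: {mean_profit:.1f} kEUR")
print(f"probability of a loss: {loss_probability:.1%}")
```

Running many trials turns vague uncertainty into a distribution of outcomes, from which quantities like the expected profit or the probability of a loss can be read off directly.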
