Putting Mental Models to Practice


:star: :star: :star: :star: :star:

Notes

Part 1

  • The first, and probably the single most important, principle is to ‘let reality be the teacher’. That is, if you have expectations of a technique, try it out, and it doesn’t work, then either the technique is bad, the technique is not suited to your specific context, your implementation of the technique is bad, or your expectations are wrong.
  • When it comes to practice, one should pay attention to actual practitioners. This is because their approaches have been tested by reality.
  • Without explanation, my framework is as follows:
    • Use intelligent trial and error in service of solving problems. This means two sub-approaches: first, using the field of instrumental rationality to get more efficient at trial and error. Second, using a meta-skill I call ‘skill extraction’ to extract approaches from practitioners in your field.
    • Concurrently use the two techniques known for building expertise (deliberate practice and perceptual exposure) to build skills in order to get at more difficult problems.
    • Periodically attempt to generalise from what you have learnt during the above steps into explicit mental models.

Part 2: An Introduction to Rationality

  • We may now see that Farnam Street’s list of mental models is really a list of three types of models:
    • Descriptive mental models that come from domains like physics, chemistry, economics, or math, that describe some property of the world.
    • Thinking mental models that have to do with divining the truth (epistemic rationality) — e.g. Bayesian updating, base rate failures, the availability heuristic.
    • Thinking mental models that have to do with decision making (instrumental rationality) — e.g. inversion, ‘tendency to want to do something’, sensitivity to fairness, commitment & consistency bias.

Part 3: Better Trial and Error

  • The search-inference framework states that all of thinking can be modelled as a search for Possibilities, Evaluation Criteria (which Baron calls ‘Goals’), and Evidence. In addition to this process of search, a process of inference also happens: we strengthen or weaken possibilities by weighing the evidence we have found for each one in accordance with a set of evaluation criteria.
  • Imagine that you are a college student trying to decide which courses you will take next term. You are left with one elective to select, having already scheduled the required courses for your major. The question that starts your thinking is “which course should I take?”
    • You search for possibilities — that is, possible course options — by searching internally (from your memory) and externally (from the course catalog website, and from your friends). As you perform this search, you determine the good features of two courses, some bad features of one course, and a set of evaluation criteria, such as the fact that you don’t want a heavy course load for this elective. You also make an inference: you reject the course on Soviet-American relations because the work is too hard.
  • The search-inference framework, then, concerns three objects (a rough code sketch of how they interact follows this list):
    • Possibilities are possible answers to the original question. In this example they are the course options you may take.
    • Evaluation criteria (or ‘goals’, as Baron originally calls them) are the criteria by which you evaluate the possibilities. You have three goals in the example above: you want an interesting course, you want to learn something about modern history, and you want to keep your work load manageable.
    • Evidence consists of any belief or potential belief that helps you determine the extent to which a possibility achieves some goal. In this example, the evidence consists of your friend’s report that the course was interesting and the work load was heavy. At the end of the example, you resolve to find your friend Sam for more evidence about the work load on the second course.
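A minimal sketch of the search-inference loop in code, using the course example. All names, weights, and scores below are my own illustration; nothing here comes from Baron or the essay.

```python
# Toy rendering of the search-inference framework: possibilities are scored by
# weighing the evidence gathered for each one against a set of evaluation criteria.

# Evaluation criteria ('goals') and how much you care about each.
goals = {"interesting": 0.4, "modern_history": 0.3, "light_workload": 0.3}

# Evidence: beliefs about how well each possibility (course) serves each goal,
# on a -1..1 scale. Missing entries mean "no evidence found yet".
evidence = {
    "Soviet-American relations": {"interesting": 0.9, "modern_history": 0.8, "light_workload": -0.9},
    "Modern European history":   {"interesting": 0.6, "modern_history": 0.7},  # still need to ask Sam about the work load
}

def strength(course: str) -> float:
    """Inference step: weigh the evidence for a possibility against the goals."""
    beliefs = evidence[course]
    return sum(weight * beliefs.get(goal, 0.0) for goal, weight in goals.items())

# Rank the possibilities found so far; keep searching for evidence where scores are close.
for course in sorted(evidence, key=strength, reverse=True):
    print(f"{course}: {strength(course):+.2f}")
```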
  • The dominant approach in decision science is expected utility theory, first proposed by Daniel Bernoulli in 1738. It asserts that a person acts rationally when they choose whatever maximises their utility — that is, whichever option brings them the most benefit in pursuit of their goals.
    • The overall expected utility of a given option is the sum, over all possible states of the world, of each state’s probability multiplied by the utility of the option’s outcome in that state (see the formula after this list).
    • Visualise this, for example as a decision tree: each option branches into the possible states of the world, each with a probability and an outcome utility.
    • Von Neumann–Morgenstern rationality axioms: completeness, transitivity, continuity, and independence.
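Written out as a formula (the notation is mine, not the essay’s):

```latex
% Expected utility of option a over mutually exclusive states s_1, ..., s_n,
% where p_i is the probability of state s_i and u(x_{a,i}) is the utility of
% option a's outcome in that state:
EU(a) = \sum_{i=1}^{n} p_i \, u(x_{a,i})

% Toy example: an option with a 0.7 chance of an outcome worth 100 utils and
% a 0.3 chance of an outcome worth 20 utils has
EU(a) = 0.7 \cdot 100 + 0.3 \cdot 20 = 76
```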
  • While expected utility theory is sometimes used for decision analysis — especially in business and in medicine — it is too impractical to recommend as a general decision-making framework. As Baron puts it: “search has negative utility”. The more time you spend analysing a given decision, the more utility the search itself costs you, even as the returns to further analysis diminish.
  • The second problem with using expected utility theory as a personal prescriptive model is that, in the real world, judgments and results actually matter.
  • The first view is that of the field of naturalistic decision making. This world view stems from the premise that we cannot know the full state of the world, that we do not have the mental power to make comprehensive searches or inferences, and that we should build our theories of decision making through empirical research — that is, find out what experts actually do when making decisions in the field, and use that as the starting point for decision making.
  • The second view is the view of Munger, Baron, Tversky, Kahneman, and Stanovich: that of rational decision analysis. This is the world view that we have explored for most of this essay. It assumes that you want to make the best decisions you can, perhaps because they are not reversible.
  • What have we covered in this essay? We’ve covered the basics of trial and error, and the five ways it may fail. We have covered Baron’s search-inference framework of thinking, and used it as an organising framework for mental models of decision making. We have covered the foundations of decision science — or at least, the foundations of decision science as related to instrumental rationality. You now understand the basics of expected utility theory — the normative model that is used as the goal of mental models in decision making.
  • Mind map

Part 4: Expert Decision Making

  • Recognition-primed decision making (henceforth called RPD) is a descriptive model of decision-making: that is, it describes how humans make decisions in real world environments. RPD is one of the thinking models from the field of Naturalistic Decision Making (NDM), which is concerned with how practitioners actually make decisions on the job.
  • Memorize final RPD model
  • What are considered sources of bias in rational choice analysis are considered strengths in the RPD model.
    • Availability/representativeness heuristics
  • I believe that most of us work in domains that have what Kahneman and Klein call “fractionated expertise”. (In the 2009 paper they state that they believe most domains are fractionated). Fractionated expertise means that a practitioner may possess expertise for some portion of skills in the field, but not for others.
  • The most powerful lesson from their joint paper is that, in fields with fractionated expertise, it is incredibly important to recognise where one has expertise and where one does not.
  • Here’s where we tie the two threads together. I think trial and error is how most of us will build expertise in our careers — a direct result of the lack of theory and insight for many practicable areas of interest. Even practitioners in areas with good theory — such as medicine, engineering, or computer programming — must spend a large amount of their time developing expertise through experience and practice.
  • How do you know that you are getting better? For this, I think we should look to what actual practitioners do. In Principles, Ray Dalio suggests that we may use the class of problems we experience in our lives to gauge our progress. That is, while you might not be able to evaluate the results of a trial and error cycle immediately, you may, over time, observe to see if the problems that belong to that class seem to become easier to deal with. If you find that problems in that class no longer pose much of a challenge for you, then you may conclude that your collection of ‘principles’ or approaches are working and that you have improved.
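One way to make this gauge concrete is to keep a rough log of problems by class and how hard each felt, then look at the trend. This is my own sketch, not something prescribed in Principles or in the essay; the data is invented.

```python
from statistics import mean

# Each entry: (problem class, perceived difficulty when it came up, 1 = trivial .. 5 = very hard).
log = [
    ("hiring", 5), ("hiring", 4), ("hiring", 4),
    ("pricing", 3), ("hiring", 3), ("pricing", 3),
    ("hiring", 2), ("pricing", 2), ("hiring", 2),
]

def difficulty_trend(entries, problem_class):
    """Average perceived difficulty in the earlier vs the later half of the log."""
    scores = [d for cls, d in entries if cls == problem_class]
    half = len(scores) // 2
    return mean(scores[:half]), mean(scores[half:])

early, late = difficulty_trend(log, "hiring")
print(f"hiring problems: avg difficulty {early:.1f} earlier vs {late:.1f} recently")
# A falling trend is weak evidence that your principles for that problem class are working.
```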
  • That said, I think that everyone who is interested in decision making should pay attention to the nature of expert intuition. The adoption of intuitive decision-making as part of US military doctrine (in 2003) and the growth of NDM-based training programs for soldiers, nurses and firefighters is telling. The form of decision making that most of us do is recognition-primed decision making, not rational choice selection. We should pay close attention to what we actually use and figure out ways to improve it, instead of improving what we are told to use (but rarely do).

Part 5: Skill Extraction

  • Klein argues that we should adhere to two common-sense principles: first, we must find substitutes for real-world experience for the specific subskills where we can’t practice in the real world. Second, we must get the most out of every experience that we are able to get.
  • His strategy for developing expertise-driven decision making, then, is four-fold:
    • First, identify discrete decision points in one’s field of work. Each of these decision points represents a discrete area of improvement you will now train deliberately for.
    • Second, whenever possible, look for ways to do trial and error in the course of doing the work. For instance, run smaller, cheaper experiments instead of launching the full-scale project you’re thinking of. Look for quick actions that you may use to test aspects of your domain-specific mental models. This is, of course, not always possible. Which leads us to —
    • Third, run simulations where you cannot learn from doing. Klein and his collaborators have developed a technique for running simulations called ‘decision making exercises’, or DMXs. The DMX style of decision training was originally developed for Marine Corps rifle squad leaders and officers in 1996. It is still in use for squad leader training; the version I describe here has been adapted by Klein for corporate decision makers.
    • Fourth, because opportunities for experiences are relatively rare, you should maximise the amount of learning you can get out of each. Klein has specific recommendations for decision-making criticism, though it won’t surprise you to hear that these are very similar to existing recommendations for after-action reviews. We will mention this only in passing.
  • The most experienced executives that played this game, however, had uneasy feelings from the very beginning (around items 5 and 7). These executives saw the contradiction between starting an internal project to use surplus labour while downsizing to reduce the labour supply. They picked up on the implications of the hiring freeze in item 3, and predicted that people were going to be pulled out from the project when new contracts were announced (items 5, 7, and 11). When two of Joe’s colleagues quit (item 14), they surmised that this would further intensify the labour shortage.
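From the description of the Joe scenario above, the mechanics of a DMX seem to be roughly this: scenario items are revealed one at a time, and the participant must commit to an assessment before seeing the next item. The sketch below is a guess at that format, not Klein’s published procedure, and the scenario items are invented.

```python
import textwrap

def run_dmx(items, time_per_item_s=60):
    """Present scenario items one at a time and record the participant's assessments."""
    notes = []
    for number, item in enumerate(items, start=1):
        print(f"\nItem {number}: {textwrap.fill(item, 72)}")
        print(f"(~{time_per_item_s}s.) What do you notice? What do you expect to happen next?")
        notes.append(input("> "))  # the participant commits before the next item is revealed
    return notes

# Invented two-item scenario for illustration.
run_dmx([
    "Head office announces a hiring freeze effective next quarter.",
    "Joe is asked to staff an internal project using 'surplus' engineers.",
])
```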
  • One technique that I’ve found quite useful is NDM’s approach to identifying decision points.
    • Decision Requirements table
    • Give examples
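As an example of the kind of thing that might go into a decision requirements table: the column names below follow the general shape of Klein-style tables (the decision, why it is difficult, common errors, and the cues and strategies experts use), and the row itself is invented for illustration.

```python
# A decision requirements table kept as plain data, one row per recurring
# difficult decision in your own work.
decision_requirements = [
    {
        "decision": "When to escalate a production incident",
        "why_difficult": "Signals are ambiguous, and escalating too often erodes trust",
        "common_errors": "Waiting for certainty; escalating on raw alert volume alone",
        "expert_cues_strategies": "Rate of spread across services; whether a rollback path exists",
    },
]

for row in decision_requirements:
    print(f"- {row['decision']}: watch for {row['expert_cues_strategies'].lower()}")
```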
  • How expert decision makers decide, according to the RPD model:
    • Cues let us recognise patterns.
    • Patterns activate action scripts.
    • Action scripts are assessed through mental simulation.
    • Mental simulation is driven by mental models.
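The four bullets above describe a loop, sketched below in toy form. Every name and data value is mine; the only point being illustrated is the control flow: experts consider options one at a time and run with the first one that survives mental simulation, rather than comparing all options side by side.

```python
def rpd_decide(cues, experience):
    # Cues let us recognise patterns: find a stored pattern whose cue set is present.
    pattern = next((p for cue_set, p in experience["patterns"].items() if cue_set <= cues), None)
    if pattern is None:
        return "gather more information"  # nothing recognised: reassess the situation
    for script in experience["action_scripts"][pattern]:  # patterns activate action scripts
        if mental_simulation_ok(script, cues, experience):
            return script  # the first workable option wins
    return "fall back to deliberate analysis"

def mental_simulation_ok(script, cues, experience):
    # Mental simulation is driven by mental models; here, a crude lookup of
    # known failure conditions for the script.
    return not (experience["mental_models"].get(script, set()) & cues)

# Invented example data for a firefighting-flavoured scenario.
experience = {
    "patterns": {frozenset({"smoke_from_eaves", "high_heat"}): "attic fire"},
    "action_scripts": {"attic fire": ["vent the roof", "interior attack"]},
    "mental_models": {"vent the roof": {"unstable_roof"}},  # condition under which this script fails
}
print(rpd_decide({"smoke_from_eaves", "high_heat", "unstable_roof"}, experience))
# -> 'interior attack': venting fails the simulation because the roof is unstable.
```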
  • Thankfully, Klein and his collaborators have developed a technique for extracting tacit mental models of expertise. Their overall approach is known as Cognitive Task Analysis, and the specific method that is of interest to us as practitioners is known as the ‘critical decision method’, or CDM. This method requires some skill to use, but the simple form as relayed by Klein in Sources of Power is practical enough for us to attempt to apply.
    • The setup for CDM is to use the human instinct for storytelling to elicit mental models from the expert practitioner. Don’t ask how they did it — ask what happened, and then use cognitive probes to tease out their models.
    • Someone is defined to be believable if they have a record of at least three relevant successes, and have a good explanation of their approach when probed.
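A bare-bones outline of what a CDM-style interview might look like: elicit the story first, then walk back over the timeline with probes. The probe wording below is my own generic phrasing, not Klein’s published protocol, and the timeline is invented.

```python
PROBES = [
    "What were you noticing at that point?",
    "What did you expect to happen next?",
    "What would a less experienced person likely have done here?",
    "What would have made you change your mind?",
]

def cdm_interview(timeline):
    """timeline: the key moments of the incident, in the expert's own words."""
    for moment in timeline:
        print(f"\nAt: {moment}")
        for probe in PROBES:
            print(f"  - {probe}")

cdm_interview([
    "noticed the client went quiet after the proposal",
    "decided to call instead of emailing",
])
```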
  • The last part of Klein’s decision training is to engage in decision-making critique.
  • Studying a broad collection of general mental models isn’t the best way to get better at decision making. It is certainly one good way, and it can be a worthwhile pursuit depending on one’s domain. But the approach to decision making it represents is not the full picture available to us, and it isn’t very effective if you are a novice getting started in some fractionated field.
    • Acquiring mental models of expertise represents the other half of good decision making — and finding a balance between the two approaches appears to be the increasingly mainstream prescription of decision science (well, if Klein is to be believed, that is).

Part 6: The Epistemology of Practice