I came across the book How to Measure Anything: Finding the Value of “Intangibles” in Business on Twitter, where someone highly recommended it.

The title is very appealing, so I borrowed the book from our library and started reading. I’ve only finished the first two sections so far, but I can already tell it is a great book.

The book has four sections: Measurement (I), Before You Measure (II), Measurement Methods (III), and Beyond the Basics (IV). Here are some notes on the first two sections.

Section I. Measurement

“Fermi” reduction:

piano tuners in Chicago = (population ÷ people per household) × percentage of households with a tuned piano × tunings per piano per year ÷ tunings one tuner can do per year
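In code, the Fermi decomposition is just chained arithmetic. A sketch with made-up illustrative inputs (none of these numbers come from the book):

```python
# Fermi estimate: piano tuners in Chicago.
# Every input below is a rough, illustrative assumption.
population = 9_000_000             # metro Chicago population
people_per_household = 2           # average household size
pianos_per_household = 0.05        # ~1 in 20 households has a tuned piano
tunings_per_piano_per_year = 1     # each piano tuned about once a year
tunings_per_tuner_per_year = 1000  # ~4 tunings/day over ~250 working days

households = population / people_per_household
tunings_needed = households * pianos_per_household * tunings_per_piano_per_year
tuners = tunings_needed / tunings_per_tuner_per_year
print(round(tuners))  # → 225, i.e. on the order of a few hundred
```

The point is not the exact answer but that a seemingly unknowable quantity decomposes into pieces you can estimate.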

concept of measurement: a result of observations that quantitatively reduce uncertainty

information: uncertainty reduction

  • O: object of measurement

what exactly do you mean?

why do you care?

clarification chain:

why do we care? -> desirable/undesirable results -> detectable in some amount -> measurable

  1. if it matters at all, it’s detectable

  2. if it is detectable, it can be detected as an amount

  3. if it can be detected as a range of possible amounts, it can be measured

  • M: methods

rule of 5: take a random sample of 5; there is a 93.75% chance that the median of the population lies between the smallest and largest values in that sample
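The 93.75% has a simple origin: the population median falls outside the sample range only if all 5 draws land on the same side of it, which happens with probability 2 × (1/2)^5 = 1/16. A quick simulation (my own sketch, not from the book) confirms it:

```python
import random

# Verify the Rule of Five by simulation: draw 5 values from a population
# and check whether the population median lies between sample min and max.
random.seed(0)
population = list(range(100_000))
true_median = 49_999.5

trials = 100_000
hits = 0
for _ in range(trials):
    sample = random.sample(population, 5)
    if min(sample) <= true_median <= max(sample):
        hits += 1

print(hits / trials)  # close to 1 - 2 * (0.5) ** 5 = 0.9375
```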

experiment: learn something by trying it

4 assumptions

  1. your problem is not as unique as you think

  2. you have more data than you think

  3. you need less data than you think

  4. an adequate amount of new data is accessible

Section II. Before you measure

prior to measurement, ask these questions:

  1. what is the decision this measurement is supposed to support?

  2. what is the definition of the thing being measured in terms of observable consequences?

  3. how, exactly, does this thing matter to the decision?

  4. how much do you know about it now?

  5. what is the value of additional information?

e.g. IT security -> number of undesirable events -> frequency? number of people affected? productivity loss? duration? cost of lost labor?

measure risk through modeling: Monte Carlo
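A minimal Monte Carlo sketch of this idea (the ranges below are made-up illustrations): express cost and benefit as 90% confidence intervals, simulate many scenarios, and read off the probability of a loss:

```python
import random

# Monte Carlo risk model: cost and benefit given as 90% confidence
# intervals, treated here (for simplicity) as normal distributions.
random.seed(1)

def normal_from_90ci(lo, hi):
    """Sample a normal whose 90% CI is (lo, hi); 3.29 ≈ CI width in std devs."""
    mean = (lo + hi) / 2
    std = (hi - lo) / 3.29
    return random.gauss(mean, std)

trials = 100_000
losses = 0
for _ in range(trials):
    cost = normal_from_90ci(100_000, 200_000)     # assumed cost range
    benefit = normal_from_90ci(120_000, 280_000)  # assumed benefit range
    if benefit - cost < 0:
        losses += 1

print(f"P(loss) ≈ {losses / trials:.2f}")
```

The output is a risk expressed as a probability, which is exactly what a point estimate of "expected benefit" hides.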

measure the value of information:

all risk in any project investment: the range of uncertainty on the costs and benefits, and the probabilities of events that might affect them

the value of partial uncertainty reduction

expected value of information (EVI) = reduction in expected opportunity loss (EOL) = EOL(before info) - EOL(after info)

EOL = chance of being wrong x cost of being wrong

expected value of perfect information (EVPI) = EOL(before info), since EOL after info is 0 if the info is perfect

expected = probability-weighted average
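The definitions above reduce to one multiplication. A toy worked example, with all numbers assumed purely for illustration:

```python
# Toy EVPI example (all numbers are illustrative assumptions).
# Suppose an investment loses $500k if it fails, and there is a 40%
# chance it fails.
chance_of_being_wrong = 0.4      # probability the investment fails
cost_of_being_wrong = 500_000    # loss if it fails

# Expected opportunity loss before any measurement:
eol_before = chance_of_being_wrong * cost_of_being_wrong

# Perfect information drives EOL to 0, so:
evpi = eol_before - 0
print(evpi)  # → 200000.0: never pay more than this for the measurement
```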

EVI:

  • def: expected value of information (no matter perfect or not)

  • curve: convex curve

  • value of info tends to rise quickly with small reductions in uncertainty, then levels off as we approach perfect certainty

  • if you reduce uncertainty by 50%, EVI is more than 50% of EVPI

ECI:

  • def: expected cost of information, weighted average of all costs

  • curve: concave curve; additional uncertainty reduction becomes more and more expensive as we approach an uncertainty of 0

EVPI: expected value of perfect information (upper bound of EVI)

myth: when you have lots of uncertainty, you need a lot of data to tell you something useful

fact: if you have a lot of uncertainty, you don’t need much data to reduce it significantly; if you already have a lot of certainty, you need a lot of data to reduce it even a little
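One way to see this fact numerically (my own illustration, not from the book): the standard error of a sample mean shrinks as 1/√n, so the first few observations remove far more uncertainty than later ones:

```python
import math

# Remaining uncertainty (std error of the mean) vs. sample size,
# for data with unit standard deviation.
for n in [1, 5, 10, 50, 100, 500]:
    print(n, round(1 / math.sqrt(n), 3))
```

Going from 1 to 5 samples cuts the uncertainty by more than half (1.0 → 0.447), while going from 100 to 500 samples buys a similar relative cut at 100× the data.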

measure the value of information

  1. the early part of a measurement is usually the high-value part

  2. if you aren’t computing the value of a measurement, you are probably measuring the wrong things, the wrong way