Discussion about this post

Slow Loras:

This reminds me of C. Thi Nguyen’s notion that bureaucracy drives us to reductive measures (one of his examples was grades in a school setting), which are compromises that attempt to make the complexity of different circumstances comparable — a numeric grade instead of a deep analysis of a student’s entire portfolio and activity in class. For Nguyen, the need for comparable measures doesn’t just compress information — it changes our values. We stop caring about the rich, holistic picture of student learning and start caring about grades, not (just) because we’re gaming the system, but because the bureaucratic structure makes grades the only thing that can travel across contexts, be aggregated, and drive decisions.

For Nguyen, the proximate cause is the need to communicate across contexts; for Davies, I think, the cause is a similar need to manage across contexts.

Russ51:

To a psychologist interested in cognition, what jumps out about Goodhart's law is the behavioral aspect. Measuring something or making it a target — then labeling it, making it public, or otherwise calling attention to it — changes people's behavior. People will always try to game a system for maximum benefit, in any context; that is rational behavior. Therefore, because of human cognition, announcing a measurement or target will, in many cases, alter people's behavior and change the meaning of the target measurement.

Simple example: if you are targeting a particular measurement in medical lab tests, and you make that the goal while keeping on with unhealthy lifestyle choices, then the target measurement has become a poor representative of health. Its original meaning is changed — changed precisely because the measurement was made a goal, a proxy for the real goal (health), and this invited "gaming the system."

