“it’s just a fancied up bloody standard deviation”. In some sense this is inevitable. Just about everything in decision under uncertainty amounts to trading off some generalized mean return against some measure of risk.
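To make the "fancied up standard deviation" point concrete: under the usual normality assumption, VaR is literally the mean shifted by a fixed multiple of the standard deviation. A minimal sketch, with illustrative numbers that aren't from anywhere in particular:

```python
from statistics import NormalDist

# Under a normality assumption, 99% VaR is mu shifted by a fixed
# multiple of sigma -- a dressed-up standard deviation.
mu, sigma = 0.001, 0.02            # hypothetical daily mean return and std dev
z = NormalDist().inv_cdf(0.99)     # ~2.326, the 99% normal quantile
var_99 = z * sigma - mu            # loss quantile, quoted as a positive number
print(f"99% one-day VaR: {var_99:.2%}")  # ~4.55% of position value
```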
Yeah, to be honest I felt the whole ‘don’t open the black box’ bit was one of the weaker points in your book - the section on accounting, which I thought was one of the best parts, seemed pretty much like you doing exactly that. FWIW, a bit like POSIWID, I think the core principle behind what you are saying is right; it’s just that the framing is arguably a bit off. Rather than starting from the position of drawing the black boxes (i.e. “NEVER OPEN THE BOX!!! HERE BE DEMONS!!!”), I think it makes more sense to simply treat them as another variety-attenuating device you utilise when the throughput information is too complex to justify relative to the variety of outcomes (i.e. “Sometimes you are just going to have to admit you can’t open the box”).
tbh that's probably a fair critique, and I think I understand it considerably better from conversations post-publication. What I maybe should have said is that the box isn't intrinsically black; it's black because it represents the end of your process of analysis. To identify something as a black box is to say that knowing about its internal states isn't a good use of your own capacity, given the other decisions you've made. So "there is nothing to be gained by opening the black box" is really "once you have decided to treat something as a black box, do so", and implies "if your system has become unregulated and needs revision, that's a problem with the system as a whole, not just a malfunction inside one box that's stopped giving the right responses".
I wish Kieran had kept the original abstract to "Fuck Nuance":
> Seriously, fuck it.
"Black box" in most cases [maybe not large neural-like models] might not be a category of things, but rather a category of how you treat things; something is a black box when/if/because you don't want to know about the details (ideally because the marginal return on that knowledge isn't worth it, sometimes because you aren't allowed to). It's a way to use something, not what it is (although POSIWID might make the distinction debatable).
That said, I'm wondering if another reason not to open black boxes in very large organizations is that they make possible management patterns that are organizationally optimal but politically harder (e.g., a project with negative expected return but the right lack of correlation with your overall portfolio would probably be harder to defend in a detailed review, but would get through inside a VaR box; black boxes can be commitment(-through-ignorance) devices). (There's an analogy here with how bank opacity can help bank stability and hence produce positive externalities.)
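To illustrate the negative-expected-return project: if it's sufficiently anti-correlated with the rest of the book, adding it lowers portfolio VaR even while it lowers expected return, so the VaR box waves it through where a line-by-line review probably wouldn't. A toy sketch, all numbers invented for illustration:

```python
import math
from statistics import NormalDist

z = NormalDist().inv_cdf(0.99)     # 99% normal quantile

def var_99(mu, sigma):
    """Normal-assumption 99% VaR, quoted as a positive loss number."""
    return z * sigma - mu

# Hypothetical core portfolio plus a side project with negative expected
# return but strong negative correlation to the core.
mu_p, sigma_p = 0.08, 0.20         # core: mean return, std dev
mu_x, sigma_x = -0.01, 0.15        # project: negative expected return
rho, w = -0.5, 0.2                 # correlation with core, project weight

print(f"VaR without project: {var_99(mu_p, sigma_p):.1%}")   # ~38.5%

mu_b = (1 - w) * mu_p + w * mu_x   # blended expected return (lower)
sigma_b = math.sqrt(((1 - w) * sigma_p) ** 2 + (w * sigma_x) ** 2
                    + 2 * (1 - w) * w * rho * sigma_p * sigma_x)
print(f"VaR with project:    {var_99(mu_b, sigma_b):.1%}")    # ~28.1%
```

Expected return falls, but measured risk falls more; the box says yes without anyone having to defend the project on its merits.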
...and their discontents?
Statistics is an unforgiving discipline ... with meagre and bitter fruit.
Even if you do everything right (and even eminent experts in the discipline regularly make mistakes), and find a correlation, the correct interpretation of that result is "I can't completely and definitively rule out the existence of a causative relationship". You are no better off than before you did the work.
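The confounding problem is easy to demonstrate: a common cause makes two variables correlate strongly and replicably with zero causation between them. A hypothetical simulation:

```python
import random

# A confounder demo: z drives both x and y, so x and y correlate
# strongly even though neither causes the other.
random.seed(0)
n = 10_000
z = [random.gauss(0, 1) for _ in range(n)]
x = [zi + random.gauss(0, 0.5) for zi in z]
y = [zi + random.gauss(0, 0.5) for zi in z]

def corr(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b)) / len(a)
    sa = (sum((ai - ma) ** 2 for ai in a) / len(a)) ** 0.5
    sb = (sum((bi - mb) ** 2 for bi in b) / len(b)) ** 0.5
    return cov / (sa * sb)

print(corr(x, y))  # ~0.8: a real, replicable correlation, zero causation
```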
Not how such a result is usually interpreted.
Statistics is possibly the best example of why we need superhuman intelligence.
There's another (not unconnected) theory that they published that manual to get the regulators onside with using VaR to underpin capital requirements. “Dennis, you created a monster,” one of the architects said afterwards. https://riskyfinance.com/wp-content/uploads/2012/10/Longerstaey-QA.pdf
without wanting to tell too many tales out of school or anticipate the Friday post, I would not regard that as merely a theory.