More off-cuts from the forthcoming “The Unaccountability Machine”… all the ideas discussed below do make an appearance in the book, but not in this form.
But what does “profit maximisation”, the assumed goal of the firm in neoclassical economic modelling, actually mean? It’s not particularly problematic in a simple model – profit is the difference between revenue and expenses. It’s maximised by the marginal cost rule, and the overall level of profit is determined by the competitive structure of the industry. That competitive structure can get plenty complicated; you are going to need maths well beyond undergraduate level to deal with anything other than the two extreme cases of a single monopolist or a “perfectly competitive” industry made up of small firms which compete the profit rate down to zero[1]. But these are well-defined problems where you can be sure that a solution exists, even if you have to throw away a lot of notebook paper to reach it.
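To see what the marginal cost rule actually does, here’s a toy sketch (nothing like this appears in the book, and every number in it is invented for illustration): a monopolist facing a linear demand curve and a quadratic cost curve makes the most profit at exactly the quantity where marginal revenue equals marginal cost.

```python
# Toy illustration of the marginal cost rule; all parameters invented.
# A monopolist faces linear inverse demand p(q) = a - b*q and has
# total cost c(q) = f + c*q**2; profit is revenue minus cost.

a, b = 100.0, 1.0  # assumed demand intercept and slope
f, c = 50.0, 0.5   # assumed fixed cost and cost-curve steepness

def profit(q):
    revenue = (a - b * q) * q
    cost = f + c * q ** 2
    return revenue - cost

# Spreadsheet-style brute force: try every quantity in small steps.
q_star = max((i / 100 for i in range(10001)), key=profit)

# Analytic check: marginal revenue (a - 2*b*q) equals marginal cost
# (2*c*q) at q = a / (2*b + 2*c), which is 100/3 here.
print(q_star, a / (2 * b + 2 * c), round(profit(q_star), 2))
```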
In a non-toy model, though, it becomes a lot more difficult to say what “maximising profits” might even mean. Real companies last for longer than one period, and have to consider the consequences of their decisions over a planning horizon; building a bigger factory might reduce your profits this year, but it’s obviously a necessary condition of expanding your profits next year. This can be dealt with by saying that the quantity which is actually being maximised is the long-term expected sum of total profits.
You can’t just add expected profit numbers together, though. The further out in the future they are, the less valuable they are, just because time is money. If you can get £110 in ten years’ time just by buying a £100 savings bond today, then you’re going to want to apply at least a 10% discount to your profit estimates ten years out.
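Here’s that bond arithmetic spelled out as a minimal sketch (the numbers are the ones from the example above; the code is mine, not anything from the book): discounting at 10% just means dividing by 1.10.

```python
# The savings-bond arithmetic: £100 today buys £110 in ten years, so
# £1 due in ten years is worth 100/110 (about £0.91) today; that is,
# you discount ten-year-out money at 10% by dividing by 1.10.
price_today, payoff_in_10y = 100.0, 110.0
discount_factor_10y = price_today / payoff_in_10y  # ~0.909

# The equivalent annual rate r solves (1 + r)**10 = 110/100.
annual_rate = (payoff_in_10y / price_today) ** (1 / 10) - 1  # ~0.96% a year

# So a £1m profit forecast ten years out is worth at most this today
# ("at most" because riskier profits deserve a bigger discount):
print(round(1_000_000 * discount_factor_10y))  # 909091
```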
And the further out into the future you go in forming your expectations, the less certain you can be. So there’s a temptation to say that you should add an additional discount; you can’t put the profits that might be earned ten years out from a new product launch, the ones in the spreadsheet you put together to make the business case, on the same footing as profits from a stable existing business. Or should you?
An alternative argument might be that although the new product profits are less certain to show up at all, there’s also a chance that they might be double or triple your expectations. And if you’re an investor who has shares in two or three companies in the same industry, you might not care which specific one of them comes to dominate the widget market, only that one of them does. Estimating the discount rate for future cash flows is difficult. And in order to understand the value of what they ought to be maximising, a company would have to make this sort of calculation for everything which might happen over a period of decades. (In an absolutely pure textbook model, the discounted cash flow analysis would stretch out into perpetuity, but if you play around with a spreadsheet or just repeatedly multiply a percentage by itself on a calculator, you quickly see that by the time you get to about thirty years, you’re dealing with numbers that are small enough to be ignored.)
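The thirty-year claim is easy to verify for yourself; here’s the calculator exercise as a short sketch, using an assumed 10% annual discount rate purely for illustration:

```python
# The calculator exercise: what is £1 of profit worth if it arrives
# t years from now? Assumes a 10% annual discount rate, chosen purely
# for illustration.
rate = 0.10

for years in (1, 5, 10, 20, 30, 40):
    factor = 1 / (1 + rate) ** years
    print(f"year {years:>2}: £1 then is worth £{factor:.3f} now")

# By year 30 the factor is about 0.057, and by year 40 about 0.022,
# which is why cutting the "perpetuity" off at around thirty years
# barely changes the answer.
```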
Most firms just don’t do this, though. Which means that economists take the refuge of scoundrels: the appeal to the invisible hand of the market. If you ever thought that some of the analysis of management cybernetics in the last chapter was a bit hand-wavey and unrigorous, do remember this. The standard approach to the same problem used in the science of economics is to do formal modelling as if everyone had either perfect information about the future or unbiased estimates[2], and as if everyone was fully aware of everyone else’s motivations (and therefore of their predictable reactions to one’s own plans).
And when faced with the fact that this is really obviously not true as a statement about the world, they retreat to the contention that the workings of the market ensure that in the long term and on average, everyone will end up behaving as if they had perfect information in equilibrium, because if you keep on being wrong you will go out of business. This raises many deep philosophical and methodological questions, like “how long is the long term?”, “how big does the population of firms need to be for it to be true on average?”, “is there any real reason to believe that this equilibrium will be reached before another shock comes along?” and “if the system spends basically zero time in equilibrium, why would we necessarily think that the maximising solution to your system of equations is exerting any meaningful influence on outcomes?”. But as well as these questions, it also raises an objection which is both simpler and in many ways more fundamental, and it’s the one which both my client and I initially expressed, in the same words, on first having the model explained to us as youngsters at business school.
“Oh, come off it”.
Mindless philistinism has a worse reputation than it deserves.
[1] Not literally zero – there’s a convention that the “costs” of doing business include a baseline minimum level of profit that’s just enough to keep the owner of the firm in the game. There’s also a convention not to ask too many questions about what that level might be or how you might calculate it, because most of the available theories for doing so don’t work very well.
[2] An often under-justified assumption is that unbiased estimates can do most of the work of perfect information, because in a big enough population the errors will cancel out.