Recently I’ve become more sympathetic to the argument that big ambitious innovation is slowing. At first I resisted the notion because frankly it seemed silly. Advocates of this thesis too often conflate statistical growth with progress. Peter Thiel in particular seems to simply be whining that no one has yet built him a jetpack (h/t Scott Smith). I don’t have much sympathy for people who insist that the world should behave the way they want it to…or who think that “progress” must arrive only in the forms they’ve grown to expect. The usual reference point for such conversations – the space program – doesn’t help matters. Sure, the Apollo program was great for nationalistic morale, but the moon landings themselves never amounted to more than an impressive series of boondoggles.
Nevertheless, Neal Stephenson’s version of the hypothesis, titled Innovation Starvation, made me pause and reconsider my position. He starts off on the right foot by humbly acknowledging his own biases:
My lifespan encompasses the era when the United States of America was capable of launching human beings into space. Some of my earliest memories are of sitting on a braided rug before a hulking black-and-white television, watching the early Gemini missions. This summer, at the age of 51—not even old—I watched on a flatscreen as the last Space Shuttle lifted off the pad. I have followed the dwindling of the space program with sadness, even bitterness. Where’s my donut-shaped space station? Where’s my ticket to Mars? Until recently, though, I have kept my feelings to myself. Space exploration has always had its detractors. To complain about its demise is to expose oneself to attack from those who have no sympathy that an affluent, middle-aged white American has not lived to see his boyhood fantasies fulfilled. (emphasis mine)
He then shifts the conversation away from gee-whiz technology and towards tangible changes in the modern organization:
Researchers and engineers have found themselves concentrating on more and more narrowly focused topics as science and technology have become more complex. A large technology company or lab might employ hundreds or thousands of persons, each of whom can address only a thin slice of the overall problem. Communication among them can become a mare’s nest of email threads and Powerpoints. The fondness that many such people have for SF reflects, in part, the usefulness of an over-arching narrative that supplies them and their colleagues with a shared vision. (emphasis mine)
That last sentence rings true for me. It is not that individuals have lost innovative ambition. If anything the individual’s inclination to innovate (however you define that) has probably increased. The crux of the issue is that individual efforts are now less likely to be coordinated around shared purpose. Large organizations like NASA have always had highly specialized engineers, even in the heyday of the Apollo program. The difference today is that those specialized roles are less likely to be directed towards common goals.
The “big hairy audacious goals” of the late 20th century are out, replaced by management philosophies that favor decentralization. Today’s buzzwords - agile development and open innovation - emphasize the distribution of resources across myriad small bets. Likewise, the management press celebrates flat organizational structures and the democratization of the workplace. Valve Software recently enjoyed fifteen minutes of fame for its claim to be the first “spontaneously ordered” corporation.
I don’t mean to scoff. I believe that all of these developments do in fact represent progress. Yet, we should be careful not to ignore the unintended consequences – that all of the trends above contribute to the undermining of shared narrative. Moreover, the lean workplace has – in the search for efficiency – pared away much of the slack that once buffered ambitious projects from short term consequences:
Innovation can’t happen without accepting the risk that it might fail. The vast and radical innovations of the mid-20th century took place in a world that, in retrospect, looks insanely dangerous and unstable. Possible outcomes that the modern mind identifies as serious risks might not have been taken seriously—supposing they were noticed at all—by people habituated to the Depression, the World Wars, and the Cold War, in times when seat belts, antibiotics, and many vaccines did not exist. Competition between the Western democracies and the communist powers obliged the former to push their scientists and engineers to the limits of what they could imagine and supplied a sort of safety net in the event that their initial efforts did not pay off. (emphasis mine)
Is this trend part of the natural evolution of things, or did we make a wrong turn somewhere?
Stagnation or Stability?
Have we lost our capacity for risk taking? Have we become decadent? [You may interpret "we" however you like.] Perhaps we have just reached a moment in our developmental trajectory at which our self-created environment offers more low-risk/low-return opportunities than high-risk/high-return opportunities.
One way to think about it is in terms of Venkat Rao’s notion of hackstability. Venkat proposes that the constructed environment – or The Technium, in the words of Kevin Kelly – has become increasingly interdependent and interconnected, to the point that wholesale change now presents more potential for risk than reward. Supposing that this balance persists, the result is a kind of stable stagnation…or hackstability:
I’ve concluded that we’re reaching a technological complexity threshold where hacking is going to be the main mechanism for the further evolution of civilization. Hacking is part of a future that’s neither the exponentially improving AI future envisioned by Singularity types, nor the entropic collapse envisioned by the Collapsonomics types. It is part of a marginally stable future where the upward lift of diminishing-magnitude technological improvements and hacks just balances the downward pull of entropic gravity, resulting in an indefinite plateau, as the picture above illustrates.
I call this possible future hackstability.
The Nature of Hacking
“Hacking” is an inspired metaphor for describing this state of affairs. Hacking is characterized by the replacement of existing solutions with more efficient solutions aimed at the same ends:
Besides computer hacking, we now have lifehacking (using tricks and short-cuts to improve everyday life), body-hacking (using sensor-driven experimentation to manipulate your body), college-hacking (students who figure out how to get a high GPA without putting in the work) and career-hacking (getting ahead in the workplace without “paying your dues”).
While hacking allows for specific solutions to be revised, reformed or replaced, the needs that those solutions addressed are never fully eliminated. In Venkat’s parlance, the prevailing technology paradigm never becomes disposable.
The distinction between replaceable and disposable is subtle but important. You may have noticed that I defined hacking as the application of novel solutions to conventional ends. You might notice the same thing in the examples quoted above – the outcomes themselves are taken for granted. There is nothing novel about pursuing a higher GPA or improved health metrics.
By only addressing the means of attainment, hacking decontextualizes the ends while reinforcing their presumed importance. A perfect GPA, which once acted as a proxy for context-specific learning outcomes, becomes something sought for its own sake.
Repeated rounds of hacking compound the effect. Cultural conventions are ever more deeply embedded into the foundation of the cultural regime as they become increasingly removed from their original meaning. Eventually this legacy is so distant as to be unquestionable. That is the root of non-disposability.
The nature of the hackstable state is nicely illustrated by Zeno’s Paradox.
Zeno’s Paradox describes Achilles in a footrace with the much slower tortoise. Supposing that Achilles gives the tortoise a head start of, say, 100 meters, he will then have to traverse that initial distance in order to catch the tortoise. But by the time Achilles has covered 100m the tortoise will have moved ahead by perhaps another 10 meters, which Achilles will still need to cover. And once Achilles covers those 10 meters the tortoise will have moved ahead once again.
By breaking a finite distance into an infinite series of discrete steps, Zeno’s Paradox implies that it is actually impossible to traverse any finite distance. Moreover, if traversing a finite distance is impossible then all forward progress is impossible.
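The resolution, of course, is that the infinite series of ever-shrinking gaps sums to a finite distance. A quick sketch makes this concrete (the numbers – a 100 m head start and a tortoise moving at one tenth of Achilles’ speed – are illustrative assumptions, not from Zeno):

```python
# Zeno's race: at each "step" Achilles closes the current gap, while the
# tortoise (moving at 1/10 his speed) opens a new, smaller one.
head_start = 100.0   # meters (illustrative)
speed_ratio = 0.1    # tortoise speed / Achilles speed (illustrative)

total = 0.0
gap = head_start
for _ in range(50):          # 50 terms is far more than enough to converge
    total += gap
    gap *= speed_ratio       # each new gap is 1/10 the previous one

# The geometric series converges to 100 / (1 - 0.1) = 111.11... meters:
# the point at which Achilles actually passes the tortoise.
print(round(total, 2))       # → 111.11
```

The infinitely many steps are real, but they occupy a finite distance (and a finite time) – which is exactly why the paradox is an illusion of bookkeeping rather than a barrier to motion.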
Hacking results in a similar paradox. Hacking produces efficiencies by replacing the conventional methods with more direct routes to specific desired outcomes. As such, hacking is a process of optimizing to local conditions. This proliferation of unique solutions has the unintended consequence of further fracturing the opportunity space into discrete localities. In other words, hacking causes a self-reinforcing proliferation of niches.
As the environment fractures it becomes increasingly difficult to address anything holistically. The (hackstable) paradigm gains inertia and prospects for comprehensive reforms are diminished. While niche-specific innovation continues to offer efficiency gains, true invention is rendered an anachronism.
The result – as in Zeno’s Paradox – is diminishing returns. Despite the illusion of forward progress, the hackstable paradigm approaches an asymptotic limit.
The analogy between Zeno’s paradox and Venkat’s ‘hackstability’ helps reconcile the two sides of the innovation starvation argument. Both camps may be correct in their own way. The current moment in history may contain both a renaissance of democratized innovation and the onset of stagnation. It is possible that while more innovation is occurring, it is simultaneously becoming ever more disjointed…increasing in sheer quantity while the niches addressed get smaller and smaller.
The Signs of Inertia
At this point you may be asking whether there is any evidence in support of this interpretation. I believe there is. To make sense of it we will need to back up a few steps and reconsider some basic economics…
Econ 101 Reconsidered
In a recent Foreign Policy article we find the following:
Japan needs to close its fiscal gap, but has yet to find the political will to do so via tax increases or spending cuts. As Greek or Spanish politicians can testify, fiscal austerity is easy to say, hard to do; raising taxes or cutting spending would only deepen the malaise, just like it has elsewhere. If fiscal policy is off the table, it’s up to central bankers to boost growth. But with Japan’s economy now operating at the zero bound – a situation in which interest rates are extremely low and cannot be expected to go lower — the Bank of Japan has fewer tools to counteract the recession. (emphasis mine)
The existence of the zero bound should be perplexing to anyone who accepts the reasoning exemplified in this passage. In Econ 101 you learn that lowering interest rates has a stimulative effect on the economy…
Low rates mean money is cheaper. Cheap money means that private parties should be willing to take more risk. Businesses should increase borrowing in order to hire more workers and expand operations. Individuals should be more willing to buy homes, take out small business loans, or simply splurge on credit card expenditures. All of this activity should excite animal spirits, promote economic growth and alleviate recession. So the reasoning goes…
But then how is the zero bound possible? Why have near zero interest rates done so little to revive growth in Japan over the past two decades? And why aren’t historically low interest rates doing more to stimulate the US economy now?
The Logic of Liquidity
The reasoning above relies on certain premises – that economic malaise is caused by a lack of liquidity, that the lack of liquidity is derived from transient conditions, and that those transient conditions are exogenous to the system.
The logic of liquidity is evident in monetarist thinking, as expressed in the quantity theory of money:
MV = PQ
M = money supply
V = velocity of money
P = price level
Q = quantity of real output
The logic here should be straightforward even for those who haven’t sat through Econ 101. The equation simply states that the total amount of money in circulation (MV) must equal the nominal value of goods and services in circulation (PQ). If $100 worth of goods are produced and sold, then $100 in money must also change hands.
A liquidity trap occurs when the volume of money in circulation decreases below the threshold required to support real output, usually because the velocity of circulation (V) decreases. Left unaddressed, a decline in liquidity would necessitate a corresponding decrease in either the price level (P) or the amount of real output (Q). Expansionary monetary policy is meant to prevent any carry-over to the right side of the equation by propping up the money supply (M) via low interest rates.
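The bookkeeping behind that policy response can be sketched in a few lines (the figures are illustrative, chosen only to make the arithmetic visible):

```python
# Quantity theory bookkeeping: MV = PQ. If velocity (V) falls and the
# money supply (M) is unchanged, nominal output (PQ) must fall too.
M, V = 1000.0, 2.0          # money supply and velocity (illustrative)
PQ = M * V                  # nominal output: 2000.0

V_crisis = 1.6              # velocity drops in a liquidity crunch
PQ_crisis = M * V_crisis    # 1600.0 — a 20% nominal contraction

# Expansionary policy props up M so the right side need not adjust:
M_needed = PQ / V_crisis
print(M_needed)             # → 1250.0
```

The central bank cannot directly set V, so it compensates on the only lever it holds: expanding M until MV is restored to its pre-crisis level.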
This is the line of reasoning, repeated ad nauseam by financial media, that has conditioned us to associate low interest rates with stimulus and growth. But what if the current malaise is not caused (at least not entirely) by lack of liquidity? What if economic growth is stagnating because – as Stephenson asserts – we have lost the taste for risky innovation?
The Purpose of Interest Rates
What the logic of liquidity traps ignores is that – outside of liquidity crises – interest rates are understood quite differently. The purpose of an interest rate is to bring the supply of loanable funds into equilibrium with the demand for borrowed funds. The interest rate is simply the rate of return paid to a lender (investor) by a borrower. From the borrower’s perspective, the interest rate is the cost (price) of capital.
Often you will see this expressed as a risk free rate plus a risk premium. The risk free rate reflects the rate of growth of the economy over the long run given some baseline level of risk. In other words, it is the opportunity an investor foregoes by investing in a particular opportunity rather than allowing his fortunes to rise with the tide.1 To put it concisely:
interest rate = expected rate of return = opportunity cost
The important takeaway is that persistently low interest rates indicate a lack of opportunity, which in turn indicates economic stagnation, not stimulus and growth.
Opportunity and Return
When private lenders lend money to borrowers – whether they be banks, money managers, or individuals – the interest rates they demand reflect their expected return on investment, which in turn is anchored to their opportunity cost. That opportunity cost is reflected in market interest rates and the returns on various benchmarks – i.e. the foregone investment opportunities.
When opportunities are abundant and returns are high, demand for capital increases and lenders demand their cut of the action in the form of higher rates. When opportunities are scarce and returns are low, lenders are forced to accept lower rates.
Reversing the logic – low interest rates imply limited opportunities for profitable investment. Lenders only accept low rates if higher returns (w/ similar risk) are unavailable. To make it more concrete, investors (lenders) won’t buy mortgage backed securities yielding 4% if corporate bonds (with similar risk profiles) offer 8%.
Low interest rates (expected returns) also imply the diminution of other opportunity costs such as foregone consumption. Anyone willing to accept a 2% return conveys an ambivalence towards current consumption. Such conservative decisions demonstrate an inordinate willingness to delay gratification. We might then infer that current consumption opportunities offer conservative investors minimal utility.
Institutionalizing Low Expectations
When central banks depress interest rates – effectively “lending” money to the market by buying bonds – they send a similar signal regarding available opportunities. The only difference is that central bank operations – due to their scale and scope – bias return expectations throughout the market.
Let’s consider this state of affairs first from the borrower’s perspective. A high interest rate represents a significant hurdle to be overcome by the borrower. A small business owner who borrows at 10% must subsequently earn at least a 10% margin in order to service the loan. You might therefore say that high rates demand a certain level of audacity from borrowers, filtering out those who would pursue middling opportunities.
By contrast, persistently low rates imply a degree of cynicism. As noted in the previous section, expansionary policy employed as a short term stimulus makes borrowing temporarily cheaper, essentially acting as a transient subsidy to borrowers. But when low rates are sustained indefinitely the message conveyed is that even the most conservative ventures deserve to be funded. With interest rates near zero, new ventures need only promise the thinnest of margins in order to justify their cost of capital.
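The filtering effect described above can be made concrete with a toy sketch. The project names and margins here are hypothetical, invented purely to illustrate how the borrowing rate acts as a hurdle:

```python
# Hurdle-rate filter: a project is fundable only if its expected margin
# covers the cost of borrowed capital. Names and margins are hypothetical.
projects = {"audacious": 0.15, "middling": 0.06, "timid": 0.01}

def fundable(projects, borrowing_rate):
    """Return the projects whose expected margin clears the rate."""
    return [name for name, margin in projects.items()
            if margin >= borrowing_rate]

print(fundable(projects, 0.10))   # → ['audacious']
print(fundable(projects, 0.005))  # → ['audacious', 'middling', 'timid']
```

At a 10% rate only the audacious venture clears the bar; near zero, even the timid one does – which is the cynicism the passage above describes.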
From the perspective of the lender, depressed interest rates indicate diminished opportunities. Lenders must either accept reduced returns or stop offering funds to the market. Those lenders who do accept minimal returns will commensurately scale down their risk tolerance. As such, it should not be surprising if the current sustained low interest rate environment does indeed correlate with shortened time horizons and a depression in innovative risk taking.
The Nature of the Zero Bound
All of the above still does not adequately explain our quandary. Why have rates stayed so low? Why haven’t historically low interest rates managed to stimulate economic growth, even allowing for that growth being of the tamer low-risk/low-return variety?
My hypothesis is that as interest rates decrease towards zero their relevance as a market modulator dwindles. When interest rates approach zero the time value of money evaporates. In other words, a zero interest rate is meaningless. It is equivalent to no interest rate whatsoever. Therefore, as an economy approaches the zero bound monetary policy becomes irrelevant.
This can be illustrated quantitatively through asset valuations. The value of any asset is a function of the future cash flows provided by the asset, discounted by the cost of capital (i.e. the relevant interest rate). In the simplest terms – if you own a treasury bond that yields $10/year and the interest rate is 10%/year then your bond is worth $100 ($10/.1 = $100). If the interest rate is 2% then that same bond is worth $500 ($10/.02 = $500).
You probably see where this is going. As the cost of capital decreases asymptotically towards 0%, the valuation increases without bound ($10/.0002 = $50,000). Once the interest rate reaches 0%, the formula divides by zero and asset values become indeterminate.
At some point along that curve interest rates lose all meaning. Assets that yield $10 per year and those that yield $1000 per year both obtain equally indeterminate valuations. In other words, the time value of money ceases to be a meaningful determinant of asset values.
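The perpetuity arithmetic from the preceding paragraphs can be written out directly, using the same $10-per-year bond from the text:

```python
# Perpetuity valuation: value = annual cash flow / discount rate.
# As the rate approaches zero the valuation grows without bound.
def perpetuity_value(cash_flow, rate):
    if rate <= 0:
        # At a 0% rate the formula divides by zero: value is indeterminate.
        raise ZeroDivisionError("value is indeterminate at a 0% rate")
    return cash_flow / rate

for r in (0.10, 0.02, 0.0002):
    print(r, perpetuity_value(10.0, r))
# 0.10   → 100.0
# 0.02   → 500.0
# 0.0002 → 50000.0
```

Note that the blow-up is independent of the cash flow: a $10/year asset and a $1000/year asset both diverge toward the same "infinite" valuation, which is precisely the loss of meaning described above.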
The Other Zero Bound
There is an intriguing parallel to the zero bound. Ronald Coase famously explained how the nature of the firm can be understood through transactions costs. Firms exist to economize on those transactions for which internal managerial organization is less costly than market transaction. For example, a firm will choose vertical integration when the process of negotiating with suppliers is more cumbersome (costly) than internal management of the supply chain.
Most people understand transactions cost theory to predict that decreasing transactions costs result in a decreasing size of the firm. The reality is somewhat more complicated. Firstly, transactions costs in absolute terms are less meaningful than the relationship between internal management costs and external transactions costs. It is when transactions costs in the open market decrease faster than management costs that we should expect firm size to decrease.
Secondly, when transactions costs approach zero everything goes haywire. To be more precise, costs approaching zero in absolute terms is just a special case of the general condition in which internal and external costs are equal – i.e. when:
External Costs – Internal Costs = 0
Rather than continuing to decrease, firm size becomes indeterminate. This is where we find parallels to declining interest rates. At this corollary of the zero bound, transactions costs cease to be a meaningful determinant of firm size. Independent freelancers become functionally equivalent to internal employees. Outsourced supply chains become functionally equivalent to owned and operated manufacturing lines.
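The Coasean make-vs-buy boundary can be sketched in the same spirit as the interest-rate examples. The cost figures are illustrative assumptions:

```python
# Coasean boundary decision: integrate when internal management costs
# are below external transaction costs; outsource when the reverse
# holds; at the zero bound there is no gradient left to act on.
def boundary_decision(external_cost, internal_cost, tol=1e-9):
    gap = external_cost - internal_cost
    if abs(gap) < tol:
        return "indeterminate"   # External − Internal = 0: the zero bound
    return "integrate" if gap > 0 else "outsource"

print(boundary_decision(5.0, 3.0))   # → integrate
print(boundary_decision(1.0, 3.0))   # → outsource
print(boundary_decision(2.0, 2.0))   # → indeterminate
```

When the gap is exactly zero, neither arrangement dominates – the firm’s boundary, like the asset valuation at a 0% rate, loses its determinant.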
The Elusive Search for Progress
Both conditions – the diminution of interest rates and of transactions costs – indicate a particular variety of “flatness”. The zero bound for interest rates represents the elimination of temporal gradients. The zero bound for transactions costs represents the elimination of spatial gradients.
Thomas Friedman introduced the flat world into the popular lexicon by blithely describing its wonder and novelty. But if the world is flat then the project for which Friedman is chief cheerleader must be nearing completion. Moreover, flatness, in its variety of forms, is hardly the unambiguous good we sometimes make it out to be. The emergence of multiple zero bounds suggests that the flat world may in fact be a stagnant world in important respects. In contrast to Friedman’s aspirational vision, Neal Stephenson offers the following anecdote:
Most people who work in corporations or academia have witnessed something like the following: A number of engineers are sitting together in a room, bouncing ideas off each other. Out of the discussion emerges a new concept that seems promising. Then some laptop-wielding person in the corner, having performed a quick Google search, announces that this “new” idea is, in fact, an old one—or at least vaguely similar—and has already been tried. Either it failed, or it succeeded. If it failed, then no manager who wants to keep his or her job will approve spending money trying to revive it. If it succeeded, then it’s patented and entry to the market is presumed to be unattainable, since the first people who thought of it will have “first-mover advantage” and will have created “barriers to entry.” The number of seemingly promising ideas that have been crushed in this way must number in the millions.
The human understanding of progress is inextricably linked with the perception of gradients. Progress is always a matter of breaking down certain barriers and erecting others…of unifying along certain dimensions and differentiating along others. Convergence and divergence…
A world without addressable gradients – a flat world – is a world without meaning. However, I don’t believe that we can go backwards. I don’t believe – as Austrian economists might assert – that raising interest rates, and thereby sweeping away enterprises sustained by low interest rates, will necessarily do the trick. Even if capital markets eventually demand austerity, we will end up back at this same point, just with a slightly stronger foundation underfoot. The herds of (private) global investors – who sustain current interest rates through purchases of government bonds offering negligible yield – are evidence enough that the zero bound reflects a secular trend.
If the current paradigm is indeed losing its legs, the path forward won’t be found by rehashing the same ground. It also won’t be found by looking to the same set of signals that were adapted to the current environment. The world can only get so flat. It will be found in addressing new gradients…in discovering new areas of convergence and divergence.
What is due to converge? What is due to diverge?
1. There is a good reason why the risk free rate in practice is assumed to be the return on government bonds. In the long run the return on government debt must equal the rate of (monetized) growth in the economy. Barring policy changes, GDP growth is the only means of increasing the tax base, and consequently is the only means of growing the future tax revenues which allow the government to repay its debts with interest. In a sense, government bonds play the role of a hypothetical mutual fund indexing the national economy.
photo courtesy of hypergenesb