TEKNOIOT: Academic Articles

1 Sept 2020

Lessons of the ELB - Barokong

I gave a short presentation on monetary policy at the Nobel Symposium run by the Swedish House of Finance. It was an amazing conference, and I'll post a blog review as soon as the slides of the other talks go up. I was offered 15 minutes to summarize what I know about the zero bound, and to comment on presentations by Mike Woodford and Stephanie Schmitt-Grohé; here is what I had to say. There is a pdf version here and slides here. Novelty disclaimer: obviously, this involves a lot of recycling and digesting of older material. But simplifying and digesting is a lot of what we do.

Update: video of the presentation here.

Lessons of the long quiet ELB

(effective lower bound)

We just observed a dramatic monetary experiment. In the US, the short-term interest rate was stuck at zero for 8 years. Reserves rose from $10 billion to $3,000 billion. Yet inflation behaved in this recession and expansion almost exactly as it did in the previous one. The 10-year bond rate continued its gentle downward trend, unperturbed by QE or much of anything else.

Europe's bound is ongoing with the same result.

Source: Stephanie Schmitt-Grohé
Japan had essentially zero interest rates for 23 years. And...

Source: Stephanie Schmitt-Grohé

Inflation stayed quiet and slightly negative the whole time. 23 years of the Friedman rule?

Our governments set off what should have been two monetary atomic bombs. Almost nothing happened. This experiment has deep lessons for monetary economics.

Stability Lessons

We learned that inflation can be stable and quiet--the opposite of volatile--in a long-lasting period of immobile interest rates, and with immense reserves that pay market interest.

The simplest theoretical interpretation is that inflation is stable under passive policy or even an interest rate peg. Alternative stories--it's really unstable but we had 23 years of bad luck--are really strained.

Stability is the central concept in my remarks today, and I emphasize it with the cute picture. If inflation is unstable, a central bank is like a seal balancing a ball on its nose. If inflation is stable, the bank is like Professor Calculus swinging his pendulum. Watching inflation and interest rates in normal times you cannot tell the seal from the Professor. Asking the professor might not help. Tintin fans will remember that the Professor, perhaps like the Fed, thought he was following the pendulum, not the other way around.

But if you hold still the seal's nose, or the professor's hand, you find out which is the case.

We just ran that experiment. The result: Inflation is stable. Many hallowed doctrines fall by the wayside.

Quantity lessons

The optimal quantity of money
We learn that arbitrary quantities of interest-paying reserves do not threaten inflation or deflation. We can live at the Friedman-optimal quantity of money. There is no need to control the quantity of reserves. There is no reason for government debt to be artificially illiquid by maturity or denomination. Governments could offer reserve-like debt to all of us, essentially money market accounts. Too bad for contrary hallowed doctrines.

Interest rate lessons

The lessons for interest rate policy are even deeper.

\begin{align}
x_t &= E_t x_{t+1} - \sigma(i_t - E_t\pi_{t+1} + v^r_t) \label{IS}\\
\pi_t &= E_t\pi_{t+1} + \kappa x_t \label{NK}\\
i_t &= \max\left[ i^\ast + \phi(\pi_t-\pi^\ast),\,0\right] \label{TR}
\end{align}
\begin{equation}
(E_{t+1}-E_t)\,\pi_{t+1} = (E_{t+1}-E_t) \sum_{j=0}^\infty m_{t,t+j}\, s_{t+j}/b_t. \label{FTPL}
\end{equation}

A common structure unites all the views I will discuss: an IS relation linking the output gap to real interest rates; a Phillips curve; a policy rule by which interest rates may react to inflation and output; and the government debt valuation equation, which states that an unexpected inflation or deflation, which changes the value of government bonds, must correspond to a change in the present value of surpluses.

The equations are not at issue. All models contain these equations, including the last one. The issues are: How do we solve, use, and interpret these equations? What is the nature of expectations -- adaptive, rational, or in between? How do we handle multiple equilibria? And what is the nature of fiscal/monetary coordination? Preview: that last one is the key to solving all the puzzles.

Adaptive Expectations / Old-Keynesian

The adaptive expectations view, from Friedman 1968 to much of the policy world today, makes a clear prediction: Inflation is unstable, so a deflation spiral breaks out at the lower bound. I simulate such a model in the graph. There is a negative natural rate shock; once the interest rate hits the bound, deflation spirals away.
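The spiral is easy to reproduce. Here is a minimal sketch of such a simulation -- a simplified old-Keynesian variant with a static IS curve and adaptive expectations ($E_t\pi_{t+1}=\pi_t$), using illustrative rather than calibrated parameter values; the shock `v` stands in for the negative natural-rate disturbance $v^r_t$:

```python
import numpy as np

# Illustrative parameters (not calibrated): IS slope, Phillips slope,
# Taylor coefficient, steady-state rate, inflation target.
sigma, kappa, phi, istar, pistar = 1.0, 0.5, 1.5, 0.04, 0.0

T = 20
v = np.zeros(T); v[:5] = 0.06          # negative natural-rate shock enters as v > 0
pi = np.zeros(T)                        # inflation; adaptive: E_t pi_{t+1} = pi_t
x = np.zeros(T)                         # output gap
i = np.zeros(T)                         # policy rate

for t in range(T - 1):
    i[t] = max(istar + phi * (pi[t] - pistar), 0.0)   # Taylor rule with zero bound
    x[t] = -sigma * (i[t] - pi[t] + v[t])             # static IS, adaptive expectations
    pi[t + 1] = pi[t] + kappa * x[t]                  # accelerationist Phillips curve

# Once i hits zero, lower inflation raises the real rate, output falls,
# and inflation falls further: the deflation spiral.
print(pi[:8])
```

Even after the shock ends, the bound keeps the real rate equal to minus inflation, so the spiral continues on its own.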

The deflation spiral did not happen. This theory is wrong.

Rational Expectations / New-Keynesian I

The New Keynesian tradition uses rational expectations. Now the model is stable. That is a big feather in the new-Keynesian cap.

But the new-Keynesian model only ties down expected inflation. Unexpected inflation can be anything. There are multiple stable equilibria, as indicated by the graph from Stephanie's famous JPE paper. This view predicts that the bound--or any passive policy--should feature sunspot volatility.

For example, Clarida, Galí, and Gertler famously claimed that passive policy in the 1970s led to inflation volatility, and active policy in the 1980s quieted inflation. A generation of researchers worried that Japan's zero bound, and then our own, must result in a resurgence of volatility.

It did not happen. Inflation is also quiet, and thus apparently determinate, at the bound. This theory is wrong--or at least incomplete.

New-Keynesian II Selection by future active policy

Another branch of new-Keynesian thinking selects among the multiple equilibria during the bound by expectations of future active policy.

To illustrate, this graph presents inflation in the simple new Keynesian model. There is a natural rate shock from time 0 to 5, provoking a zero bound during that period. There are multiple stable inflation equilibria.

The lower red equilibrium is a common choice, featuring a deep deflation and recession. To choose it, authors assume that after the bound ends, the central bank returns to active policy, threatening to explode the economy for any but its desired inflation target, zero here. Working back, we choose that one equilibrium during the bound.
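Under the maintained assumptions, this selection logic is mechanical: fix inflation at the date the bound ends and solve the IS and Phillips equations backward. Here is a hedged sketch, with perfect foresight standing in for rational expectations and made-up parameters; each terminal inflation value `pi_T` picks out a different stable equilibrium during the bound:

```python
import numpy as np

sigma, kappa = 1.0, 0.5   # illustrative IS and Phillips slopes
T = 5                     # the bound lasts from t = 0 to t = T
v = 0.03                  # natural-rate disturbance during the bound

def path(pi_T, x_T=0.0):
    """Solve x_t = x_{t+1} - sigma*(i_t - pi_{t+1} + v), pi_t = pi_{t+1} + kappa*x_t
    backward from the end of the bound, with i_t = 0 imposed throughout."""
    pi = np.zeros(T + 1); x = np.zeros(T + 1)
    pi[T], x[T] = pi_T, x_T
    for t in range(T - 1, -1, -1):
        x[t] = x[t + 1] - sigma * (0.0 - pi[t + 1] + v)
        pi[t] = pi[t + 1] + kappa * x[t]
    return pi

# Each terminal promise selects a different equilibrium, and small terminal
# differences grow as one works backward in time.
print(path(0.0))
print(path(0.01))
```

Stability forward means instability backward: the gap between the two paths is largest at the start of the bound, which is the mechanism behind the forward-guidance puzzles.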

Forward guidance

In this view small changes in expectations about future inflation work backwards to large changes at earlier times. Therefore, if the central bank promised inflation somewhat above target at the end of the bound, that promise would work its way back to large stimulus during the bound. Forward guidance offers strong stimulus.

One of Mike's main points today is that a price level target can help to enforce such a commitment. Stephanie's policy of raising rates to raise inflation at the end of the bound can similarly work its way back in time and stimulate during the bound, perhaps avoiding the bound altogether.

Forward guidance puzzles

This selection by future active policy, however, has huge problems. First, promises further in the future have larger effects today! I asked my wife if she would cook dinner if I promised to clean up 5 years from now. It didn't work.

Second, as we make prices less sticky, dynamics happen faster. So, though price stickiness is the only friction, making prices less sticky makes deflation and depression worse. The frictionless limit is negative infinity, though the frictionless limit point is small inflation and no recession. These problems are intrinsic to stability, and thus very robust: stable forward is unstable backward.

New Keynesian Solutions

The new-Keynesian literature is ripping itself apart to fix these paradoxes. Mike, Xavier Gabaix, and others abandon rational expectations. Alas even that step does not fix the problem.

Mike offers a k-step induction. It is complex. I spent over a month trying to reproduce a basic example of his method, and I failed. You have to be a lot smarter or more patient than I am to use it. Moreover, it only reduces the magnitude of the backward explosion; it does not change its fundamental nature.

If we go back to adaptive expectations, as Xavier and others do--after a similar hundred pages of difficult equations--then we're back to stable backward but explosive forward. Stable backward solves the forward guidance puzzle--but the lack of a spiral just told us inflation is stable forward. Also, you have to modify the model to the point that eigenvalues change from less to greater than one. It takes a discrete amount of irrationality to do that.

Fiscal theory of monetary policy

So let me unveil the answer. I call it the Fiscal Theory of Monetary Policy. The model is unchanged, but we solve it differently. We remove the assumption that surpluses "passively" accommodate any price level. Now, we pick equilibria by unexpected inflation, at the left side of the graph.

For example, an unexpected deflation can only happen if the government will raise taxes or cut spending to pay a windfall to bondholders. (Or, if discount rates raise the present value of surpluses, which is important empirically.) For example, if there is no fiscal news, we pick the equilibrium with the big red square at zero.

This is not some wild new theory. It is just a wealth effect of government bonds. We're replaying Pigou vs. Keynes, with much better equations.

The result is a model that is simple, stable, and solves all the puzzles.

Instantly, we know why the downward deflation jump did not happen. The great recession was not accompanied by a deflationary fiscal tightening!

Tying down the left end of the graph stops backward explosions: promises further in the future have less effect today, and there is a smooth frictionless limit. You don't have to pick a particular value. The limits are cured if you just bound the size of fiscal surprises, and thus keep the jump on the left-hand side from growing.

We can maintain rational expectations. This is not a religious commandment. Some irrational expectations are a fine ingredient for matching data and real-world policy; introducing some lags in the Phillips curve, for example. But Mike's and others' effort to repair zero bound puzzles by irrational expectations is not such an epicycle. It asserts that the basic properties of monetary policy depend on people never catching on. It implies that all of economics and all of finance must abandon rational expectations even as rough approximations. Just to solve some murky paradoxes of new Keynesian models at the lower bound? For example, Andrei Shleifer, earlier today, argued for irrational expectations. But even he built on the efficient-market rational expectations model, suggesting deviations from it. He did not require irrational expectations to begin to talk about asset pricing, or require that all of economics adopt his form of irrational expectations.

I did not think the day would come that I would be defending the basic new-Keynesian program -- construct a model of monetary policy that plays by Lucas rules, or at least is a generalization of a model that does so -- and that Mike Woodford would be trying to tear it down. Yet here we are. Promote the fiscal equation from the footnotes and you can save the rest.


Neo-Fisherism

Neo-Fisherism is an unavoidable consequence of stability. If inflation is stable at a peg, then raising the interest rate and keeping it there must lead to higher inflation.
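The argument fits in one line. With a Fisher equation, pegging the rate and assuming inflation converges rather than spirals gives

\begin{align*}
i_t = r + E_t\pi_{t+1}, \qquad i_t = i \ \text{held fixed},\ \ \pi_t \to \pi
\quad\Longrightarrow\quad \pi = i - r .
\end{align*}

If inflation is stable under the peg, expected inflation must eventually rise one-for-one with the rate.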

Conventional wisdom goes the other way. But it is still possible that higher interest rates temporarily lower inflation, accounting for that belief.

The standard new-Keynesian model, as illustrated in Harald and Marty's slides, seems to achieve a temporary negative sign. However, it only does so by marrying a fiscal contraction ("passively," but still there) to the monetary policy shock. It also requires an AR(1) policy disturbance -- beyond the AR(1), there is no connection between the permanence of the shock and the rise or decline of inflation.

Can we produce a negative sign from a pure monetary policy shock -- a rise in interest rates that does not coincide with fiscal tightening?

FTMP, long-term debt, and a negative short-run response

The fiscal theory of monetary policy can deliver that temporary negative effect with long term debt. The graph presents the price level, in a completely frictionless economy consisting only of a Fisher equation and the valuation equation. When nominal interest rates rise, the market value of debt on the left declines. (First line below graph.) If surpluses on the right do not change, the price level on the left must also decline. Then, the Fisherian positive effect kicks in.
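A toy calculation makes the sign concrete. Assume (my simplification, not the paper's calibration) a geometric coupon structure with decay ω, so the nominal price of the bond portfolio is Q = 1/(1 + i − ω), and hold the present value of surpluses fixed:

```python
omega = 0.8          # coupon decay rate: governs the maturity structure
s_pv = 100.0         # present value of real surpluses (held fixed: no fiscal news)
B = 100.0            # face value of outstanding long-term nominal debt

def price_level(i):
    Q = 1.0 / (1.0 + i - omega)      # nominal price of geometric-coupon debt
    # Valuation equation Q*B/P = PV(surpluses) pins down the price level:
    return Q * B / s_pv

P_low, P_high = price_level(0.01), price_level(0.03)
# Raising the nominal rate lowers Q, so with surpluses unchanged the price
# level must fall on impact; thereafter inflation rises to match the higher
# rate -- the long-run Fisherian response.
print(P_low, P_high)
```

With short-term debt only (ω = 0), Q is nearly insensitive to the rate and the temporary negative effect disappears, which is why long-term debt is essential to the mechanism.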

FTMP, long-term debt, sticky prices and a realistic response

If you add sticky prices, then a rise in interest rates results in a smoothed out disinflation. This is a perfectly reasonable--but long-run Fisherian--response function.


In sum, the long-run Fisherian result is an inescapable consequence of stability.

The fiscal theory can give a temporary negative sign, but only if the interest rate rise is unexpected, credibly persistent, and there is long-term debt. Those considerations amplify Stephanie's call for gradual and pre-announced interest rate rises to raise inflation.

The contrast between the US, which followed Stephanie's advice and is now seeing a rise in inflation, and Japan and Europe, which did not, is suggestive.

The negative sign in the standard new-Keynesian model comes by assuming a fiscal contraction coincident with the monetary policy shock.

Beware! These arguments do not mean that high inflation countries like Brazil, Turkey, and Venezuela can simply lower rates to lower inflation. Everything here flows from fiscal foundations, and absent fiscal foundations and commitment to permanently lower rates, inflation is inevitable.


I promised that the ELB was an experiment that would deliver deep implications for monetary policy. Think of the hallowed doctrines that have been overturned in the last 15 minutes.

  • "The New-Keynesian Liquidity Trap," Journal of Monetary Economics 92 (December 2017), 47-63.
  • "Michelson-Morley, Fisher, and Occam: The Radical Implications of Stable Inflation at the Zero Bound," NBER Macroeconomics Annual 2017.
  • "Stepping on a Rake: The Fiscal Theory of Monetary Policy," European Economic Review 101 (January 2018), 354-375.

What I've said today, and the graphs, are in these references. They go on to show you how the fiscal theory of monetary policy provides a simple unified framework for interest rate policy, quantitative easing, and forward guidance, that works even in frictionless models, though price stickiness is useful to produce realistically slow dynamics.

19 Aug 2020

Lottery Winners Don't Get Healthier - Barokong

Alex Tabarrok at Marginal Revolution had a great post last week, Lottery Winners Don't get Healthier (also enjoy the url.)

Wealthier people are healthier and live longer. Why? One popular explanation is summarized in the documentary Unnatural Causes: Is Inequality Making us Sick?

The lives of a CEO, a lab supervisor, a janitor, and an unemployed mother illustrate how class shapes opportunities for good health. Those on the top have the most access to power, resources and opportunity – and thus the best health. Those on the bottom are faced with more stressors – unpaid bills, jobs that don’t pay enough, unsafe living conditions, exposure to environmental hazards, lack of control over work and schedule, worries over children – and the fewest resources available to help them cope.
The net effect is a health-wealth gradient, in which every descending rung of the socioeconomic ladder corresponds to worse health.
If this were true, then increasing the wealth of a poor person would increase their health. That does not appear to be the case. In important new research David Cesarini, Erik Lindqvist, Robert Ostling and Bjorn Wallace look at the health of lottery winners in Sweden (75% of winnings within the range of approximately $20,000 to $800,000) and, importantly, on their children. Most effects on adults are reliably close to zero and in no case can wealth explain a large share of the wealth-health gradient:

In adults, we find no evidence that wealth impacts mortality or health care utilization.... Our estimates allow us to rule out effects on 10-year mortality one sixth as large as the cross-sectional wealth-mortality gradient.
The authors also look at the health effects on the children of lottery winners. There is more uncertainty in the health estimates on children but most estimates cluster around zero and developmental effects on things like IQ can be rejected (“In all eight subsamples, we can rule out wealth effects on GPA smaller than 0.01 standard deviations”). (My emphasis above)

Alex does not emphasize what I think is the most important point of this study. The natural inference is: the same things that make you wealthy make you healthy. The correlation between health and wealth across the population reflects two outcomes of the same underlying causes.

We can speculate about what those causes are. (I haven't read the paper; maybe the authors do.) A natural hypothesis is that a whole set of circumstances and lifestyle choices has both health and wealth effects. These causes can be either "right" or "left" as far as the evidence before us goes. "Right": thrift, hard work, self-discipline, and clean living lead to health and wealth. "Left": good parents, a good neighborhood, and the right social connections lead to health and wealth.

Either way, simply transferring money will not transfer the things that produce money, and produce health.

Perhaps the documentary was right after all: "class shapes opportunities for good health."  But "class" is about more than a bank account.

Also, Alex can be misread as a bit too critical: "If this were true." It is true that health and wealth are correlated. It is not true that more wealth causes better health. The problem is not just "resources available to help them cope."

Why a blog post? This story is a gorgeous example of the one central thing you learn when doing empirical economics: Correlation is not causation. Always look for the reverse possibility, or that the two things correlated are both outcomes of something else, and changing A will not affect B.   We seldom get an example that is so beautifully clear.
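For readers who like to see the mechanics, here is a toy simulation (entirely made-up numbers) of the common-cause story: wealth and health correlate strongly in the cross section, yet an exogenous money shower has zero effect on health by construction:

```python
import numpy as np

# An unobserved common cause z -- habits, upbringing, circumstances --
# drives both wealth and health. Neither causes the other.
rng = np.random.default_rng(0)
n = 100_000
z = rng.normal(size=n)
wealth = z + rng.normal(size=n)
health = z + rng.normal(size=n)

corr = np.corrcoef(wealth, health)[0, 1]   # strong cross-sectional correlation

# "Lottery": hand money to a random half of the sample.
winners = rng.random(n) < 0.5
wealth[winners] += 5.0
# Health is unchanged by construction, so the randomized "treatment effect"
# of wealth on health is approximately zero.
effect = health[winners].mean() - health[~winners].mean()
print(corr, effect)
```

The regression of health on wealth in such data looks like a causal gradient; only the randomized experiment (here, the lottery) reveals that it isn't.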

Update:  Melissa Kearney writes,

"Bill Evans and Craig Garthwaite have an important study [AER] showing that expansions of EITC benefits led to improvements in self-reported health status among affected mothers.
Their paper provides a nice counterpoint to the Swedish lottery study, one that is arguably more relevant to the policy question of whether more income would causally improve the health of low-income individuals in the U.S.

Thanks Melissa for pointing it out. This is interesting, but I'd rather not get into a dissection of studies here -- just who takes advantage of EITC benefits, how instruments and differences do and don't answer these problems. The main point of my post is not to answer once and for all the question -- how much do showers of money improve people's health -- but to point out, with this forceful example for non-economists, the possibility that widely reported correlations -- rich people are healthier -- don't automatically mean that money showers raise health.

Syverson on the productivity slowdown - Barokong

Chad Syverson has an interesting new paper on the sources of the productivity slowdown.

Background to wake you up: Long-term US growth is slowing down. This is a (the!) big important issue in economics (one previous post).  And productivity -- how much each person can produce per hour -- is the only source of long-term growth. We are not vastly better off than our grandparents because we negotiated better wages for hacking at coal with pickaxes.

Why is productivity slowing down? Perhaps we've run out of ideas (Gordon). Perhaps a savings glut and the  zero bound drive secular stagnation lack of demand (Summers). Perhaps the out of control regulatory leviathan is killing growth with a thousand cuts (Cochrane).

Or maybe productivity  isn't declining at all, we're just measuring new products badly (Varian; Silicon Valley). Google maps is free! If so, we are living with undiagnosed but healthy deflation, and real GDP growth is actually doing well.


First, the productivity slowdown has occurred in dozens of countries, and its size is unrelated to measures of the countries’ consumption or production intensities of information and communication technologies ... Second, estimates... of the surplus created by internet-linked digital technologies fall far short of the $2.7 trillion or more of “missing output” resulting from the productivity growth slowdown...Third, if measurement problems were to account for even a modest share of this missing output, the properly measured output and productivity growth rates of industries that produce and service ICTs [internet] would have to have been multiples of their measured growth in the data. Fourth, while measured gross domestic income has been on average higher than measured gross domestic product since 2004—perhaps indicating workers are being paid to make products that are given away for free or at highly discounted prices—this trend actually began before the productivity slowdown and moreover reflects unusually high capital income rather than labor income (i.e., profits are unusually high). In combination, these complementary facets of evidence suggest that the reasonable prima facie case for the mismeasurement hypothesis faces real hurdles when confronted with the data.

An interesting read throughout.

[Except for that last sentence, a near parody of academic caution!]

17 Aug 2020

NYT on zoning - Barokong

Conor Dougherty in The New York Times has a good article on zoning laws,

a growing body of economic literature suggests that anti-growth sentiment... is a major factor in creating a stagnant and less equal American economy.
...Unlike past decades, when people of different socioeconomic backgrounds tended to move to similar areas, today, less-skilled workers often go where jobs are scarcer but housing is cheap, instead of heading to places with the most promising job opportunities, according to research by Daniel Shoag, a professor of public policy at Harvard, and Peter Ganong, also of Harvard.
One reason they’re not migrating to places with better job prospects is that rich cities like San Francisco and Seattle have gotten so expensive that working-class people cannot afford to move there. Even if they could, there would not be much point, since whatever they gained in pay would be swallowed up by rent.
Stop and rejoice. This is, after all, the New York Times, not the Cato Review. One might expect high housing prices to get blamed on developers, greed, or something, and the solution to be government-constructed housing, "affordable" housing mandates, rent controls, low-income housing subsidies (which protect incumbent low-income people, not those who want to move in to get better jobs) and even more restrictions.

No. The Times, the Obama Administration, and California Governor Jerry Brown have figured out that zoning laws are to blame, and that they're making social stratification and inequality worse.

In response, a group of politicians, including Gov. Jerry Brown of California and President Obama, are joining with developers in trying to get cities to streamline many of the local zoning laws that, they say, make homes more expensive and hold too many newcomers at bay.
.. laws aimed at things like “maintaining neighborhood character” or limiting how many unrelated people can live together in the same house contribute to racial segregation and deeper class disparities. They also exacerbate inequality by restricting the housing supply in places where demand is greatest.
“You don’t want rules made entirely for people that have something, at the expense of people who don’t,” said Jason Furman, chairman of the White House Council of Economic Advisers.
This could be a lovely moment in which a bipartisan consensus can get together and fix a real problem.

The article focuses on Boulder Colorado, where

.. the university churns out smart people, the smart people attract employers, and the amenities make everyone want to stay. Twitter is expanding its offices downtown. A few miles away, a big hole full of construction equipment marks a new Google campus that will allow the company to expand its Boulder work force to 1,500 from 400.
Actually, the reason Google and Twitter are in Boulder is that things are much, much worse in Palo Alto! A fate Boulder may soon share:

“We don’t need one more job in Boulder,” Mr. Pomerance said. “We don’t need to grow anymore. Go somewhere else where they need you.”

16 Aug 2020

How to step on a rake - Barokong

How to step on a rake is a little note on how to solve Chris Sims' stepping on a rake paper.

This is mostly of interest if you want to know how to solve continuous-time new-Keynesian (sticky price) models. Chris' model is very interesting, combining fiscal theory, an interest rate rule, habits, and long-term debt, and it produces a temporary decline in inflation after a rise in nominal interest rates.

15 Aug 2020

A Look in the Mirror - Barokong

Tyler Cowen and Alex Tabarrok have written a splendid article, "A Skeptical View of the National Science Foundation’s Role in Economic Research" in the summer Journal of Economic Perspectives. Many of their points apply to research support in general.

The article starts with classic Chicago-style microeconomics: What are the opportunity costs -- money may be helpful here, but what else could you do with it? What are the unexpected offsetting forces -- if the government subsidizes more, who subsidizes less? What is the whole picture -- how much public and private subsidy is there to economics research without the NSF? Too many good economists just say "economic research is a public good, the government should subsidize it."

They go on to ask deeper questions, "Are NSF Grants the Best Method of Government Support for Economic Science?" The NSF largely supports mainstream research by established economists at high-prestige universities. Are there better "public goods," undersupported by other means, for it to support?

Yes. Among others, replication and data. There are few current rewards for replication, and much economics research is not replicable. We live in the age of big data, but it's expensive and hard to access. The NSF has done commendable work here -- and other government agencies including the Census, Bureau of Labor Statistics, Federal Reserve, etc. provide huge public goods by collecting and disseminating good data. Without data we would not exist.  That strikes me as the single most underfunded public good in the economics sphere.

I'm less a fan of their proposal to support "far out" research, naming "post-Keynesians, econo-physicists, or the Austrians." While they cite popular authors and a gadfly's sensational claims for the end of macroeconomics in 2009, in fact macroeconomics has not changed all that much since the crisis and recession, and none of these claims -- nor the wackier approaches -- has in fact borne any fruit. Yes, it's easy to support mediocre incremental research, but a government agency that must appear impartial can too quickly end up subsidizing crank research, of which there is plenty in economics (see my inbox!)

They ask a great question. If the government wants to subsidize economic research, why hand out grants, rather than hire people directly?

I think there are good answers here. Another big subsidy to economics research, which they do not mention, is the legions of government employees already doing it. The Federal Reserve, Treasury, OFR, CEA, SEC, CFTC, HHS, EPA, and hundreds of other agencies employ thousands of PhD economists who spend considerable if not full time on "research," and are expected to write academic journal articles. Make up your own mind about the value of this effort. The success of the research university points, I think, to an important externality between doing research, teaching it, and evaluating it through service to the profession. Also, research coming out of government agencies always seems to find just how wonderful those agencies' policies are. However, replication and data production, or other more easily guided research, seem a good fit.

Also not mentioned is the danger that government subsidized research ends up being politicized, or at least ends up calling for more government.

One of the main methods of NSF support is "summer support." Universities pay academics on a 9-month basis. If you get an NSF grant, it pays for 2 months of "summer support." This is, of course, a fiction. In fact, most universities chop up the "9-month" salary into 12 pieces anyway. And most academics are not about to go work elsewhere in the summer -- it's the only time to really focus on research, and as Alex and Tyler point out, the rewards to publishing are huge. By and large the NSF does not (or did not when I last looked into it) buy off teaching or other duties, the one thing that might free up some marginal research time. Alex and Tyler mention low labor supply elasticities as a reason to be cautious about the effectiveness of support. They don't mention this system, practically guaranteed to be a pure transfer rather than to induce more research.

On the other hand, NSF grants are typically awarded based on a working paper. They already are a "prize" as Tyler and Alex recommend. So perhaps the lump-sum nature of the reward is not such a bad idea, and ends up subsidizing good research rather than more effort.

I stopped applying for NSF grants some time ago. Sometime in the mid-1990s, I was driving through Indiana, and I saw a guy hooking a shiny new boat up to his pickup truck. It occurred to me, my NSF check for that summer was worth about 5 boats. I didn't think I could get out of the car and say with a straight face that he and four neighbors should forego their boats so I could work on unit roots for the summer. I'm not pure either; I still benefit from many government subsidies, not least of which the tax-deductibility of charitable contributions.

A world without cash - Barokong

Max Raskin and David Yermack have a nice WSJ OpEd from last week, "Preparing for a world without cash." The oped summarizes their related paper.

What would a government-backed digital currency look like? A country’s central bank would need to become a deposit-taking institution and hold accounts on behalf of citizens and businesses. All of their debits would be tracked on the central bank’s blockchain, a digital ledger resistant to tampering. The central bank would pay interest electronically by adjusting the balances of depositor accounts.
I'm a big fan of the idea of abundant interest-bearing electronic money, and that the Fed or Treasury should provide abundant amounts of it. (Some links below.) Two big reasons: First, we then get to live Milton Friedman's optimal quantity of money. If money pays interest, you can hold as much as you'd like. It's like running a car with all the oil it needs. Second, it is a key to financial stability. If all "money" is backed by the Treasury or Fed, financial crises and runs end. As Max and David say,

Depositors would no longer have to rely on commercial banks to hold their checking accounts, and the government could get out of the risky deposit-insurance business. Commercial banks that wished to keep making loans would raise long-term capital in the debt and equity markets, ending the mismatch between demand deposits and long-term loans that can cause liquidity problems.
However, there are different ways to accomplish this larger goal. Do we all need to have accounts directly at the Fed, and is a blockchain the best way for the Fed to handle transfers?

The point of the blockchain, as I understand it, is to demonstrate the validity of each "dollar" by keeping a complete encrypted record of its creation and each person who held it along the way.

Its archival blockchain links together all previous transfers of a given unit of currency as a method of authentication. The blockchain is known as a “shared ledger” or “distributed ledger,” because it is available to all members of the network, any one of whom can see all previous transactions into or out of other digital wallets
That, and a limited supply to control its value, was the basic idea of bitcoin. But when we are clearing transactions by transferring rights to accounts at the Fed, the validity of the "dollar" is not in question. It's at the Fed. And, the big advantage relative to bitcoin as I see it, the value of the dollar comes from monetary policy and ultimately the government's demand for "dollars" to be paid in taxes, not from a fixed supply as was the case with gold.
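The "shared ledger" idea in the quote above -- each transfer linked, by hash, to the one before it, so that a unit's entire history authenticates it -- can be illustrated with a toy sketch. This is a deliberately minimal illustration (the class and function names are my own, and real blockchains add consensus, signatures, and much more):

```python
import hashlib
import json

def sha(record: dict) -> str:
    """Deterministic hash of a transfer record."""
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

class ToyLedger:
    """Toy hash-chained ledger: each transfer links to the hash of the
    previous one, so a unit's entire history authenticates it."""

    def __init__(self, issuer: str):
        genesis = {"prev": None, "frm": None, "to": issuer}
        self.chain = [{"rec": genesis, "hash": sha(genesis)}]

    def transfer(self, frm: str, to: str):
        rec = {"prev": self.chain[-1]["hash"], "frm": frm, "to": to}
        self.chain.append({"rec": rec, "hash": sha(rec)})

    def verify(self) -> bool:
        # Recompute every hash and check each link; tampering anywhere breaks the chain.
        for i, entry in enumerate(self.chain):
            if sha(entry["rec"]) != entry["hash"]:
                return False
            if i > 0 and entry["rec"]["prev"] != self.chain[i - 1]["hash"]:
                return False
        return True

ledger = ToyLedger("Fed")
ledger.transfer("Fed", "Alice")
ledger.transfer("Alice", "Bob")
print(ledger.verify())                    # True
ledger.chain[1]["rec"]["to"] = "Mallory"  # tamper with history
print(ledger.verify())                    # False
```

The `verify` loop makes the memory property concrete: the ledger is valid only if every historical transfer survives intact, the opposite of cash, which carries no history at all.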

The blockchain also appears to clear transactions more quickly and to offer some security advantages. The latter are very attractive -- in my personal life I've recently had the questionable pleasure of spending days enjoying 19th-century finance: multi-day clearing times, obtaining notarized signatures and medallion guarantees, and sending pieces of paper around. But the security is not yet ironclad -- the same week's WSJ had a string of articles on the security of Bitcoin following a recent hack.

The biggest stumbling block in my mind is "all members of the network, any one of whom can see all previous transactions into or out of other digital wallets." Per Max and David, this has pluses and minuses:

Tax collection would become much simpler, and tax evasion and money laundering could become prohibitively difficult.
Yet the centralization of banking under this system would also create a Leviathan with the power to monitor and control the personal finances of every citizen in the country. This is one of the chief reasons why many are loath to give up on hard currency. With digital money, the government could view any financial transaction and obtain a flow of information about personal spending that could be used against an individual in a whole host of scenarios.
This really is a big change in how "money" works. Traditional cash has a lovely property, that it has no memory. Its physical properties determine its value in a way independent of its history. It is incredibly efficient, in a Hayek information sense. The economy does not need the memory of every transaction. Blockchains turn this around.

The anonymity of cash makes it enduringly popular -- cash holdings are up, not down, in the digital age. The same week of WSJ reading had articles delving into the continuing popularity of cash and the mechanics of handling it, and the ongoing fury over the planeload of cash delivered by the Obama administration to Iran. It's not hard to figure out why both the Iranians and the Administration needed to send old-fashioned bills on an unmarked plane, not a wire transfer.

Indeed, creating this Leviathan is a danger, to the economy, and to our political freedom. Our government likes to pass aspirational laws that we don't really mean to enforce. Get rid of cash, allow the government to see every transaction and enforce every law regarding payment of anything, and 11 million immigrants suddenly can't work at all and become penniless. Rigorous enforcement of all transactions would not only stop your kids' lemonade stand and babysitting business, it would wipe out most of the employment opportunities for lower-income America. Many businesses would come to a halt.

The natural response is, well, maybe we shouldn't pass laws we don't really mean to enforce. Good luck with that.

More deeply,  "flow of information about personal spending that could be used against an individual in a whole host of scenarios" is truly frightening. I don't think there is a political candidate in the whole country who could not be embarrassed with one purchase at some point in their lives. Consider the brouhaha now over "disclosure" of political contributions -- there is a real fear that disclosure is a way of setting up hit lists for the administration to go after its political enemies. Multiply that by a thousand. Dissenters could easily be silenced if the government can monitor or block every transaction.

The ability to transact with anonymity and privacy has been a central freedom for hundreds of years. It's largely gone already. Losing it entirely and giving the government huge power to enforce any law it passes is not necessarily a good thing.

Max and David opine

creating and respecting privacy firewalls and rethinking legal-tender laws could mitigate the dangers of monopoly and stifled competition in currency markets.
[Subject-free sentences (creating?) are always a sign of trouble!] The dangers are not of monopoly and stifled competition; the dangers lie in the vast loss of privacy implied by the government -- and its leakers and hackers -- knowing all our transactions.

(Here I'm out on a limb on my blockchain knowledge, but I gather that one does have to wipe the slate clean occasionally. Otherwise, the blockchain gets ridiculously long. Imagine each dollar, a hundred years from now, attached to a list of everyone who has ever held it! That wiping out process could do a lot for privacy.)

So, back to basics. It is not at all clear to me in their analysis why the Fed has to manage all the accounts. The Fed, Treasury, and the government in general are very good at defining the units of a currency, and providing an easy standard of value -- cash, coins, liquid government debt, reserves.  That is their natural monopoly. I don't see that the government has a similar natural advantage in providing low-cost transactions services, especially on monitoring fraud in the use of those services. The Fed got hacked by employees of the central bank of Bangladesh.

So I leave with two big questions -- and these are genuine questions, an invitation to more thought.

Is a blockchain really better than accounts at the Fed, and instructions to flip a switch to send money from my account to your account? What is the best way to get low transactions costs and fraud prevention, given that we don't need authentication of the dollar itself and a supply limitation?

Is it really better for the Fed to handle all transactions directly, rather than for the Fed to provide clearing accounts, and "banks" (narrow!) to provide transactions services between people, using reserves as now for netting and clearing? The latter setup allows competition and innovation in transactions services, and a better hope for an information firewall retaining some privacy and anonymity in transactions.

(Note for readers new to the blog: I've written about some of these issues in A new structure for US Federal Debt, Toward a run-free financial system, A blueprint for effective financial reform, and previous blog posts, such as here.)

14 Aug 2020

Regional price data - Barokong

Some big news, to me at least: The Bureau of Economic Analysis is now producing "regional price parities" data that allow you to compare the cost of living in one place in the US to another. The BEA news release is here; coverage from the Tax Foundation here (HT the always interesting Marginal Revolution). In the past, you could see regional inflation -- changes over time -- but you couldn't compare the level of prices in different places.

The states differ widely. It is in fact as if we live in different countries with different currencies. The gap between Hawaii (116.8) and Mississippi (86.7) is bigger than that between paying in dollars and euros (1.18) or yen (1.01, per 100), and almost as big as pounds (1.30).

The variation between city and country, and across cities, is even higher:

In 2014, the metropolitan area with the highest RPP was Urban Honolulu, HI (123.5). Metropolitan areas with RPPs above 120.0 also included San Jose-Sunnyvale-Santa Clara, CA (122.9), New York-Newark-Jersey City, NY-NJ-PA (122.3), Santa Cruz-Watsonville, CA (121.8), San Francisco-Oakland-Hayward, CA (121.3), and Bridgeport-Stamford-Norwalk, CT (120.4). The metropolitan area with the lowest RPP was Beckley, WV (79.7), followed by Rome, GA (80.7), Danville, IL (81.1), Morristown, TN (81.9), and Jonesboro, AR (82.0).
No surprise, much of the variation is due to housing. Breaking it out (look up your town here!):

San Francisco-Oakland-Hayward, CA

All items 121.3

Goods 108.4

Services: Rents 183.9

Services: Other 109.6

San Jose-Sunnyvale-Santa Clara, CA

All items 122.9

Goods 108.2

Services: Rents 200.7

Services: Other 109.3

Beckley, WV

All items 79.7

Goods 92.0

Services: Rents 52.8

Services: Other 92.5

There is still a 20% difference in the cost of goods and other services, but the variation in rents is really big. When you consider that the cost of real estate drives up other costs, its effect may be even larger: if the barbershop pays higher rent, and the barber pays higher rent, you're going to pay more for haircuts. And this is just rents. Since houses have thin rental markets, the true difference may be larger still. Also, rents are often controlled or poorly measured; I don't know how the BLS deals with that.
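To make these comparisons concrete, here is a minimal sketch (the function name is my own; the RPP values are the BEA figures quoted above) that deflates a nominal dollar amount by a metro's price level:

```python
# BEA regional price parities quoted in the post (US average = 100)
RPP = {
    "San Francisco-Oakland-Hayward, CA": 121.3,
    "San Jose-Sunnyvale-Santa Clara, CA": 122.9,
    "Beckley, WV": 79.7,
}

def real_value(nominal_dollars: float, metro: str) -> float:
    """Deflate a nominal amount by the metro's price level."""
    return nominal_dollars * 100.0 / RPP[metro]

# The same $100,000 nominal salary, in national-average dollars:
for metro in ("San Francisco-Oakland-Hayward, CA", "Beckley, WV"):
    print(f"{metro}: {real_value(100_000, metro):,.0f}")
# San Francisco: ~82,440; Beckley: ~125,471 -- a roughly 50% real difference.
```

The same calculation is what the academic-salary puzzle below implicitly ignores: nominal offers are compared as if the deflator were 1 everywhere.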

You can see many uses for even more granular data. But since house price and rent are easy to get, you might get a good approximation by adding granular housing cost data to regional price data.

There are a lot of interesting issues here.

One question it raises is the true picture of inequality. Poor people, especially those who don't work, tend to live in low-rent areas. Relative to local prices, inequality may not be as bad as it seems. (I presume the BLS does something to adjust rents for quality of housing.)

One can also imagine that congresspeople from high price areas will soon ask for higher cost of living adjustments for benefits to their constituents.

This data ought to focus more attention on housing supply restrictions -- the main reason that rents vary so much.

It raises some puzzles too. I notice that the market for academics gives surprisingly little weight to cost of living variations. If you compare offers from a European and US university, nobody expects you to compare "100,000" in each place without converting currency. But nominal academic salaries are quite similar across chasms of cost of living. To some extent universities make it up with absurdly complex and inefficient housing subsidies, but that doesn't make much sense either.  I'm curious to what extent this phenomenon occurs in other markets.

And... who knows? New data always leads to interesting new research. Kudos to the BEA for making this available.

Comments from people who know how this data is constructed, with good parts and pitfalls, are especially welcome.


A colleague who knows a lot about these issues sent some useful information:

...it’s my understanding from conversations with a few people and brief reading on methodology (https://www.bea.gov/regional/pdf/RPP2015.pdf) that they are actually pretty poor measures of local prices. Essentially all of the variation comes from relatively poorly measured housing prices, almost by construction.

That’s because the only local retail price data going into the BEA indices comes from the BLS CPI data, which covers less than 30 cities (and not even on identical products across locations). They’re extrapolating from this small number of cities to all cities in the US by just taking the nearest city with CPI data and re-weighting it with local expenditures shares. So for example, there is no retail pricing data collected for Columbus, but they show up in the BEA metro area price parities. So where are they getting price data from? They just take the prices collected in Cleveland (where BLS collects data) and assume they are the same in Columbus with potentially slightly different weights in the consumption basket. So even if there is wide heterogeneity across cities in prices... this is for the most part not going to get picked up in their local price measures, since they’re imputing prices in most cities using pricing data from other cities. Since most states have either 0 or 1 BLS price collection cities, this means that close to 100% of the within-state variation in their price levels is coming from housing. So to close to a first approximation, these purchasing power indices are really just house price indices since they basically aren’t using data on local prices for anything except housing.

But the housing price data is coming from ACS with various hedonic adjustment. That is notoriously challenging, especially across locations. It’s much easier but still hard to compute house price changes across time using repeat sales indices like core logic, but the housing stock is fundamentally heterogeneous across space which puts huge standard errors on trying to construct the price for an equivalent unit of housing across space, so I take the exact numbers there with a big grain of salt.

So overall I think these indices basically just tell you that housing is more expensive in san francisco and NYC than in oklahoma, but I think their quantitative usefulness is pretty limited. I think to really measure price level differences across locations, scanner data is much more useful since we can measure identical products as well as product availability and varieties. (A weakness is that this can’t capture differences in service prices across space, but it’s hard to adjust for quality there just like for housing, even if we had a census of all service providers prices everywhere in the country). Jessie Handbury and David Weinstein’s 2014 restud paper is the best study I know of trying to take seriously measuring retail price levels across locations using that kind of data. I have no idea how it lines up with the BEA numbers.

From which I take: 1) This is very important. 2) The BEA took a useful stab at it with the numbers they have, but 3) understand the large limitations of those numbers before you use them. 4) Get to work, big-data economists, on using scanner data, twitter feeds, amazon purchases, zillow, and everything else you can get your hands on, to produce 21st-century granular price indices!
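As a toy illustration of the imputation my colleague describes -- a metro without BLS price collection borrows the nearest CPI city's retail prices, re-weighted by local expenditure shares, with only rents measured locally -- here is a sketch with entirely invented numbers and my own function names:

```python
# All numbers below are invented for illustration.
cpi_prices = {  # retail price relatives in BLS collection cities
    "Cleveland": {"goods": 96.0, "services": 94.0},
}
nearest_cpi_city = {"Columbus": "Cleveland"}           # geographic donor mapping
local_rents = {"Columbus": 88.0, "Cleveland": 91.0}    # rents ARE measured locally (ACS)
local_weights = {"Columbus": {"goods": 0.35, "services": 0.40, "rents": 0.25}}

def imputed_rpp(metro: str) -> float:
    """Borrow the donor city's retail prices, re-weight them with local
    expenditure shares, and combine with locally measured rents."""
    donor = cpi_prices[nearest_cpi_city[metro]]
    w = local_weights[metro]
    return (w["goods"] * donor["goods"]
            + w["services"] * donor["services"]
            + w["rents"] * local_rents[metro])

print(round(imputed_rpp("Columbus"), 1))  # 93.2
```

Note that in this sketch, exactly as in the colleague's description, all of the within-state cross-metro variation comes from the rent term, since the retail terms are copied from the donor city.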

Update 2:

Enrico Moretti has already written a very nice paper, Real wage inequality (also here), adjusting inequality measures for local cost of living.

At least 22% of the documented increase in college premium is accounted for by spatial differences in the cost of living.
He creates local price indices. He also takes on the question of whether higher prices in hot cities represent more housing -- better amenities -- or just higher prices, which you have to pay in order to work in high-productivity jobs.

Interview, talk, and slides - Barokong

I did an interview with Cloud Yip at Econreporter, Part I and Part II, on various things macro, money, and fiscal theory of the price level. It's part of an interesting series on macroeconomics. Being a transcript of an interview, it's not as clean as a written essay, but not as incoherent as I usually am when talking.

On the same topics, I will be giving a talk at the European Finance Association on Friday, titled "Michelson-Morley, Occam and Fisher: The radical implications of stable inflation at the zero bound," slides here. (Yes, it's an evolution of earlier talks, and hopefully it will be a paper in the fall.)

And, also on the same topic, you might find useful a set of slides for a 1.5 hour MBA class covering all of monetary economics from Friedman to Sargent-Wallace to Taylor to Woodford to FTPL.  That too should get written down at some point.

The talk incorporates something I just figured out last week, namely how Sims' "stepping on a rake" model produces a temporary decline in inflation after an interest rate rise. Details here. The key is simple fiscal theory of the price level, long-term debt, and a Treasury that stubbornly keeps real surpluses in place even when the Fed devalues long-term debt via inflation.

Here is really simple example.

Contrast a perpetuity with one-period debt, in a frictionless model. Frictionless means constant real rates, so inflation moves one-for-one with interest rates:

$$ \frac{1}{1+i_t} = \beta E_t \frac{P_t}{P_{t+1}} $$

The fiscal theory equation, real value of government debt = present value of surpluses,  says

$$\frac{Q_t B_{t-1}}{P_t} = E_t \sum \beta^j s_{t+j}$$

where Q is the bond price, B is the number of bonds outstanding, and s are real primary surpluses. For one-period debt, Q = 1 always. (If you don't see equations above or picture below, come back to the original here.)

Now, suppose the Fed raises interest rates, unexpectedly, from \(i\) to \(i^\ast\), and (really important) there is no change to fiscal policy \(s\). Inflation \(P_{t+1}/P_t\) must jump immediately up, following the Fisher relation. But the price level \(P_t\) might jump too.

With one-period debt, that can't happen -- B is predetermined and the right side doesn't change, so \(P_t\) can't change. We just ramp up to higher inflation.

But with long-term debt, any change in the bond price Q must be reflected in a jump in the price level P. In the example, the price of the perpetuity (valued inclusive of the current coupon) falls to

$$ Q_t = \sum_{j=0}^\infty \frac{1}{(1+i^\ast)^j} = \frac{1+i^\ast}{i^\ast}$$

so if we were expecting P under the original interest rate i, we now have

$$\frac{P_t}{P} = \frac{1+i^\ast}{1+i} \frac{i}{i^\ast}$$

If the interest rate rises permanently from 5% to 6%, a 20% rise, the price level jumps down by roughly 20%. The sticky-price version smooths this out, giving a temporary disinflation, but then a long-run Fisherian rise in inflation.
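A quick numeric check of the example (a sketch with my own function names, valuing the perpetuity inclusive of the current coupon, matching the \(P_t/P\) formula above):

```python
def perpetuity_price(i: float) -> float:
    """Perpetuity price inclusive of the current coupon:
    Q = sum_{j>=0} (1+i)^(-j) = (1+i)/i."""
    return (1 + i) / i

def price_level_jump(i: float, i_star: float) -> float:
    """With surpluses s unchanged, Q_t * B / P_t is constant, so the
    price level jumps by the ratio of bond prices:
    P_t / P = [(1+i*)/i*] / [(1+i)/i] = (1+i*)/(1+i) * i/i*."""
    return perpetuity_price(i_star) / perpetuity_price(i)

# Permanent, unexpected rise from 5% to 6%:
i, i_star = 0.05, 0.06
jump = price_level_jump(i, i_star)
print(f"P_t/P = {jump:.3f}")
# P_t/P comes out around 0.84: a downward jump of about 16%,
# close to the back-of-envelope 20% in the text.
```

With one-period debt, both bond prices would be 1, the ratio is 1, and \(P_t\) cannot jump, exactly as the text notes.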

Do we believe it? It relies crucially on the Treasury pigheadedly keeping surpluses unchanged when the Fed inflates away the coupons the Treasury must pay on its debt, so all the Fed can do is rearrange the price level over time.

But it tells us this is the important question -- the dynamics of inflation following an interest rate rise depend crucially on how we think fiscal policy adjusts. That's a vastly different focus than most of monetary economics. That we're looking under the wrong couch is big news by itself.

Even if the short-run sign is negative, that is not necessarily an invitation to activist monetary policy that exploits the negative correlation. Sims' model, and this one, are Fisherian in the long run -- higher interest rates eventually mean higher inflation. Like Friedman's example of adjusting the temperature in the shower, rather than fiddle with the knobs it might be better to just set them where you want them and wait.

13 Aug 2020

Asset Pricing Mooc, Resurrected - Barokong

The videos, readings, slides/whiteboards and notes are all now here on my webpage.  If you just want the lecture videos, they are all on Youtube, Part 1 here and Part 2 here.

These materials are also hosted in a somewhat prettier manner on the University of Chicago's Canvas platform. You may or may not have  access to that. It may become open to the public at some point.

I'm working on the quizzes, problems, and exams, and also on finding a new host so you can have problems graded and get a certificate. For now, however, I hope these materials are useful as self-study, and as assignments for in-person classes. I found that sending students to watch the videos and then having a more discussion oriented class worked well.

What happened? Coursera moved to a new platform. The new platform is not backward-compatible, did not support several features I used from the old platform, and some of the new platform features don't work as advertised either. Neither the excellent team at U of C, nor Coursera's staff, could move the class to the new platform. And Coursera would not keep the old platform open. So, months of work are consigned to the dustbin of software "upgrades," at least for now.

Obviously, if you are thinking of doing an online course, I do not recommend that you work with Coursera. And make sure to write strong language about keeping your course working in the contract.

Update: The latest version of the class is here

9 Aug 2020

Volume and Information - Barokong

This is a little essay on the puzzle of volume, disguised as comments on a paper by Fernando Alvarez and Andy Atkeson, presented at the Becker-Friedman Institute Conference in Honor of Robert E. Lucas Jr. (The rest of the conference is really interesting too, but I likely will not have time to blog a summary.)

Like many others, I have been very influenced by Bob, and I owe him a lot personally as well. Bob pretty much handed me the basic idea for "Random walk in GNP" on a silver platter. Bob's review of a report to the OECD, which he might rather forget, inspired the Grumpy Economist many years later. Bob is a straight-arrow icon for how academics should conduct themselves.

On Volume: (also pdf here)

Volume and Information. Comments on “Random Risk Aversion and Liquidity: a Model of Asset Pricing and Trade Volumes” by Fernando Alvarez and Andy Atkeson

John H. Cochrane

October 7 2016

This is a great economics paper in the Bob Lucas tradition: Preferences, technology, equilibrium, predictions, facts, welfare calculations, full stop.

However, it’s not yet a great finance paper. It’s missing the motivation, vision, methodological speculation, calls for future research — in short, all the BS — that Bob tells you to leave out. I’ll follow my comparative advantage, then, to help to fill this yawning gap.

Volume is The Great Unsolved Problem of Financial Economics. In our canonical models — such as Bob’s classic consumption-based model — trading volume is essentially zero.

The reason is beautifully set out in Nancy Stokey and Paul Milgrom’s no-trade theorem, which I call the Groucho Marx theorem: don’t belong to any club that will have you as a member. If someone offers to sell you something, he knows something you don’t.

More deeply, all trading — any deviation of portfolios from the value-weighted market index — is zero sum. Informed traders do not make money from us passive investors, they make money from other traders.

It is not a puzzle that informed traders trade and make money. The deep puzzle is why the uninformed trade, when they could do better by indexing.

Here’s how markets “should” work: You think the new iPhone is great. You try to buy Apple stock, but you run into a wall of indexers. “How about $100?” “Sorry, we only buy and sell the whole index.” “Well, how about $120?” “Are you deaf?” You keep trying until you bid the price up to the efficient-market value, but no shares trade hands.

As Andy Abel put it, financial markets should work like the market for senior economists: Bids fly, prices change, nobody moves.

And, soon, seeing the futility of the whole business, nobody serves on committees any more. Why put time and effort into finding information if you can’t profit from it? If information is expensive to obtain, then nobody bothers, and markets cannot become efficient. (This is the Grossman-Stiglitz theorem on the impossibility of efficient markets.)

I gather quantum mechanics is off by 10 to the 120th power in the mass of empty space, which determines the fate of the universe. Volume is a puzzle of the same order, and importance, at least within our little universe.

Stock exchanges exist to support information trading. The theory of finance predicts that stock exchanges, the central institution it studies, the central source of our data, should not exist. The tiny amounts of trading you can generate for life cycle or other reasons could all easily be handled at a bank. All of the smart students I sent to Wall Street for 20 years went to participate in something that my theory said should not exist.

And it’s an important puzzle. For a long time, I think, finance got by on the presumption that we’ll get the price mostly right with the zero-volume theory, and you microstructure guys can have the last 10 basis points. More recent empirical work makes that guess seem quite wrong. It turns out to be true that prices rise when a lot of people place buy orders, despite the fact that there is a seller for each buyer. There is a strong correlation between the level of prices and trading volume — price booms involve huge turnover, busts are quiet.

At a deeper level, if we need trading to make prices efficient, but we have no idea how that process works, we are in danger that prices are quite far from efficient. Perhaps there is too little trading volume, as the rewards for digging up information are not high enough! (Ken French’s AFA presidential speech artfully asks this question.)

Our policy makers, as well as far too many economists, jump from not understanding something, to that something must be wrong, irrational, exploitative, or reflective of “greed” and needs to be stopped. A large transactions tax could well be imposed soon. Half of Washington and most of Harvard believes there is “too much” finance, meaning trading, not compliance staff, and needs policy interventions to cut trading down. The SEC and CFTC already regulate trading in great detail, and send people to jail for helping to incorporate information in to prices in ways they disapprove of. Without a good model of information trading those judgments are guesses, but equally hard to refute.

How do we get out of this conundrum? Well, so far, by a sequence of ugly patches.

Grossman and Stiglitz added “noise traders.” Why they trade rather than index is just outside the model.

Another strand, for example Viral Acharya and Lasse Pedersen’s liquidity based asset pricing model, uses life cycle motives, what you here would recognize as an overlapping generations model. They imagine that people work a week, retire for a week, and die without descendants. Well, that gets them to trade. But people are not fruit flies either.

Fernando and Andy adopt another common trick — unobservable preference shocks. If trade fundamentally comes from preferences rather than information then we avoid the puzzle of who signs up to lose money.

I don’t think it does a lot of good to call them shocks to risk aversion, and tie them to habit formation, as enamored as I am of that formulation in other contexts. Habit formation induces changes in risk aversion from changes in consumption. That makes risk aversion shocks observable, and hence contractable, which would undo trading.

More deeply, to explain volume in individual securities, you need a shock that makes you more risk averse to Apple and less risk averse to Google. It can be done, but it is less attractive and pretty close to preferences for shares themselves.

Finally, trading is huge, and hugely concentrated. Renaissance seems to have a preference shock every 10 milliseconds. I last rebalanced in 1994.

The key first principle of modern finance, going back to Markowitz, is that preferences attach to money — to the payoffs of portfolios — not to the securities that make up portfolios. A basket of stocks is not a basket of fruits. It’s not the first time that researchers have crossed this bright line. Fama and French do it. But if it is a necessary condition to generate volume, it’s awfully unpalatable. Do we really need to throw out this most basic insight of modern finance?

Another strain of literature supposes people have “dogmatic priors” or suffer from “overconfidence.” (José Scheinkman and Wei Xiong have a very nice paper along these lines, echoing Harrison and Kreps much earlier.) Perhaps. I ask practitioners why they trade and they say “I’m smarter than the average.” Exactly half are mistaken.

At one level this is a plausible path. It takes just a little overconfidence in one’s own signal to undo the no-trade-theorem information story — to introduce a little doubt into the “if he’s offering to sell me something he knows something I don’t” recursion.

On the other hand, understanding that other people are just like us, and therefore inferring motives behind actions, is very deep in psychology and rationality as well. Even chimps, offered to trade a banana for an apple, will check to make sure the banana isn’t rotten.

(Disclaimer: I made the banana story up. I remember seeing a science show on PBS about how chimps and other mammals that pass the dot test have a theory of mind, understand that others are like them and therefore question motives. But I don’t have the reference handy. Update: A friend sends this and this.)

More deeply, if you are forced to trade, a little overconfidence will get it going. But why trade at all? Why not index and make sure you’re not one of the losers? Inferring information from other’s offer to trade is only half of the no-trade theorem. The fact that rational people don’t enter a zero-sum casino in the first place is the other, much more robust, half. That line of thought equates trading with gambling — also a puzzle — or other fundamentally irrational behavior.

But are we really satisfied to state that the existence of exchanges, and the fact that information percolates into prices via a series of trades, are facts only “explainable” by human folly, that would be absent in a more perfect (or perfectly-run) world?

Moreover, that “people are idiots” (what Owen Lamont once humorously called a “technical term of behavioral finance”) might be a trenchant observation on the human condition. But, by being capable of “explaining” everything, it is not a theory of anything, as Bob Lucas uses the word “theory.”

The sheer volume of trading is the puzzle. All these non-information mechanisms -- life-cycle motives, preference shocks, preference shifts, rebalancing among heterogeneous agents (Andy Lo and Jiang Wang) -- generate trading volume. But they do not generate the astronomical magnitude and concentration of volume that we see.

We know what this huge volume of trading is about. It’s about information, not preference shocks. Information seems to need trades to percolate into prices. We just don’t understand why.

Does this matter? How realistic do micro foundations have to be anyway? Actually, for Andy and Fernando’s main purpose, and that of the whole literature I just seemed to make fun of, I don’t think it’s much of a problem at all.

Grossman and Stiglitz, and their followers, want to study information traders, liquidity providers, bid-ask spreads, and other microstructure issues. Noise traders, “overconfidence,” short life spans, or preference shocks just get around the technicalities of the no-trade theorem to focus on the important part of the model, and the phenomena in the data it wants to match. Andy and Fernando want a model that generates the correlations between risk premiums and volume. For that purpose, the ultimate source of volume and why some people don’t index is probably unimportant.

We do this all the time. Bob’s great 1972 paper put people on islands and money in their hands via overlapping generations. People live in suburbs and hold money as a transactions inventory. OLG models miss velocity by a factor of 100 too. (OLG money and life-cycle volume models are closely related.) So what? Economic models are quantitative parables. You get nowhere if you fuss too much about micro foundations of peripheral parts. More precisely, we have experience and intuition that roughly the same results come from different peripheral micro foundations.

If I were trying to come up with a model of trading tomorrow, for example to address the correlation of prices with volume (my “Money as stock” left that hanging, and I’ve always wanted to come back to it), that’s what I’d do too.

At least, for positive purposes. We also have experience that models with different micro foundations can produce much the same positive predictions, but have wildly different welfare implications and policy conclusions. So I would be much more wary of policy conclusions from a model in which trading has nothing to do with information. So, though I love this paper’s answer (transactions taxes are highly damaging), and I tend to like models that produce this result, that is no more honest than most transactions tax thought, which is also an answer eternally in search of a question.

At this point, I should summarize the actual contributions of the paper. It’s really a great paper about risk sharing in incomplete markets, and less about volume. Though the micro foundations are a bit artificial, it very nicely gets at why volume factors seem to generate risk premiums. For that purpose, I agree, just why people trade so much is probably irrelevant. But, having blabbed so much about the big picture, I’ll have to cut short the substance.

How will we really solve the volume puzzle, and the related question of just what “liquidity” means? How does information make its way into markets via trading? With many PhD students in the audience, let me emphasize how deep and important this question is, and offer some wild speculations.

As in all science, new observations drive new theory. We’re learning a lot about how information gets incorporated in prices via trading. For example, Brian Weller and Shrihari Santosh show how pieces of information end up in prices through a string of intermediaries, just as vegetables make their way from farmer to your table — and with just as much objection from bien-pensant economists who have decried “profiteers” and “middlemen” for centuries.

Also, there is a lot of trading after a discrete piece of information hits the market symmetrically, such as a change in the Federal Funds rate. Apparently it takes trading for people to figure out what the information means. I find this observation particularly interesting. It’s not just my signal and your signal.

And new theory demands new technique too, something that we learned from Bob. (Bob once confessed that learning the math behind dynamic programming had been really hard.)

What is this “information” anyway? Models specify a “signal” about liquidating dividends. But 99% of “information” trading is not about that at all. If you ask a high speed trader about signals about liquidating dividends, they will give you a blank stare. 99% of what they do is exactly inferring information from prices — not just the level of the price but its history, the history of quotes, volumes, and other data. This is the mechanism we need to understand.

Behind the no-trade theorem lies a classic view of information — there are 52 cards in the deck, you have three up and two down, I infer probabilities, and so forth. (Ω, F, P.) But when we think about information trading in asset markets, we don’t even know what the card deck is. Perhaps the ambiguity or robust control ideas Lars Hansen and Tom Sargent describe, or the descriptions of decision making under information overload that computer scientists study, will hold the key. For a puzzle this big, and this intractable, I think we will end up needing new models of information itself. And then, hopefully, we will not have to throw out rationality, the implication that trading is all due to human folly, or the basic principles of finance such as preferences for money not securities.
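To see how tame the classical card-deck view of information is, here is a minimal sketch of the kind of inference it involves. The specific hand is my own illustration: 8 cards are visible and 3 of the 4 aces are among them, so 44 unseen cards contain exactly one ace, and we ask the chance that an opponent's two hidden cards include it. Everything is a ratio of counts over a known sample space — precisely what breaks down when "we don't even know what the card deck is."

```python
from fractions import Fraction
from math import comb

# Classical (Omega, F, P) information: a known 52-card deck, some cards
# visible, the rest uniformly likely. Illustrative setup (my own numbers):
# 8 cards visible, 3 of the 4 aces among them, so the 44 unseen cards
# contain exactly 1 ace; the opponent holds 2 of those unseen cards.
unseen, aces_unseen, hidden = 44, 1, 2

# P(at least one ace among the 2 hidden cards)
#   = 1 - C(43, 2) / C(44, 2)
p = 1 - Fraction(comb(unseen - aces_unseen, hidden), comb(unseen, hidden))
print(p)  # 1/22
```

The whole calculation is combinatorics over a fully specified state space; no such enumeration is available for the "information" a high-speed trader extracts from prices, quotes, and volume.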

Well, I think I’ve hit 4 of the 6 Bob Lucas deadly sins — big-picture motivation, comments about whole classes of theories, methodological musings, and wild speculation about future research. I’ll leave the last two — speculations about policy and politics, and the story of how one thought about the paper — for Andy and Fernando!
