Tuesday, June 2, 2015

Who Is On the RUC?

For the last year, I have been working to reconstruct the membership of the RUC, which is probably the most important policy entity in healthcare you've never heard of. The short of it is that the RUC is a private organization with a critical public function: it advises the Centers for Medicare and Medicaid Services on how to set the relative prices for physician reimbursement within Medicare.

For example, it's the RUC's job to decide that, say, one treatment of a heart attack is equivalent in value to two treatments of pneumonia. It has come under extensive criticism -- see here, here, and here -- for basically being an unaccountable shadow government that acts in the interest of the American Medical Association and specialist doctors, rather than the medical community as a whole, patients, or the taxpayer. To be clear, I am repeating, not endorsing, that phrasing of the critique of RUC.

Initially, my intention, working with Judd Cramer, a friend and grad student at Princeton interested in labor economics, was to link changes in the composition of the RUC to changes in Medicare's relative prices, known in health-policy circles as RVUs. But we never finished the project, mostly because I was overwhelmed with work this year -- I took a more-than-full load of classes and also wrote this research paper as independent work on the side.

Then the plan was to publish the list in an article with extensive commentary and discussion. In particular, I was very interested in potential conflicts of interest among RUC members, as prior work by Roy Poses has shown this to be a real problem. Yet, to do that, I really needed a complete and fully accurate membership list. That, as I have learned over the last few months, is basically impossible. The RUC has been overseen by the AMA since 1991. It now has 32 seats, though it has expanded over the years. That works out to 736 person-years to account for, and I could pin down all but 23 of them.

Over the last year, however, various health-policy researchers have found out that I have been working on this project -- and so I have an increasingly long list of people whom I've been telling to wait.

Yet I've decided that it's in the public interest for me just to publish the list already. (It's the document at the top of this post.) I do so with two honest caveats. First, it's incomplete. I'm missing a handful of years for certain seats, as my efforts to track down some person-years failed. Second, there are probably some inaccuracies. I do not think it is riddled with errors, but I would frankly be surprised if I got everything right. That's just the nature of trying to research a body that has made an extraordinary effort to remain cloaked in secrecy. (The type of error that I think is most likely is that I got some of the years wrong. I think all the names are correct; I am pretty sure anyone I claim was on the RUC was in fact on the RUC, for approximately the period I say they were. My guess is that I will be off by a year, say, for 10 percent of the people.)

Here is how I put this list together: dozens of hours of archival research. First, I managed to track down old AMA Board of Trustees reports, which sometimes contained RUC appointments. Second, the medical-specialty newspapers and journals often mention who is currently serving on the RUC on the specialty's behalf. Third, the résumés and websites of ex-RUC doctors often list their full years of service; sometimes these also appear in medical-specialty publications when the doctors retire. Fourth, the AMA recently began publishing the current membership as part of an (admirable, but highly incomplete) effort towards transparency. Fifth, I relied on the efforts that others, including Roy Poses and Brian Klepper, have made to identify RUC members.

I will also try to release some of the related research that I have done on RUC in the coming days. It was past time for me, however, to share this document. Thank you to the many who helped or cheered along this project.

SNAP and Food Security

"SNAP and Food Security: Evidence from Terminations" is the title of my first-ever working paper, which I wrote for my junior-year independent work at Princeton. What I do in the paper is try to measure very carefully the effect of participating in SNAP on households' food security, and the basic idea of how I do that is pretty simple:
[C]onsider two similar groups of households. The first group receives SNAP benefits in both November and December of a given year. The second group receives SNAP benefits in November but not in December. The difference in December food security between the two groups provides an intuitive estimate of the effect of SNAP benefits on food security in December.
With that kind of comparison in mind, here's what I find:
SNAP participation increases the probability of food security by 10 percentage points (22 percent), with gains concentrated in reducing the probability of extreme food insecurity by 8 percentage points (36 percent), an effect that is broadly comparable to that of a change in household income from $10,000 to $20,000.
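
To make the comparison concrete, here is a minimal sketch in Python of that difference estimator, run on made-up data. The variable names and numbers are mine for illustration; they are not from the paper.

    import pandas as pd

    # Hypothetical household data: SNAP receipt in November and December,
    # plus an indicator for food security measured in December.
    df = pd.DataFrame({
        "snap_nov":    [1, 1, 1, 1, 1, 1],
        "snap_dec":    [1, 1, 1, 0, 0, 0],
        "food_secure": [1, 1, 0, 1, 0, 0],
    })

    # Keep households on SNAP in November, then compare December food
    # security between those who stayed on and those who left.
    nov = df[df["snap_nov"] == 1]
    stayers = nov.loc[nov["snap_dec"] == 1, "food_secure"].mean()
    leavers = nov.loc[nov["snap_dec"] == 0, "food_secure"].mean()

    # The difference in means is the intuitive estimate of the effect
    # of SNAP benefits on December food security.
    print(f"Effect estimate: {stayers - leavers:+.2f}")

The real work in the paper, of course, is in arguing that the two groups are comparable; the sketch just shows the arithmetic of the comparison.
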
Naturally, there's a whole lot more in the paper itself.

Monday, May 25, 2015

Is Growth Understated?

Martin Feldstein has a nice op-ed in The Wall Street Journal arguing that the Bureau of Economic Analysis is understating GDP growth because of difficulties in adjusting for quality improvements and new products. It goes well with recent technical reports from Goldman Sachs and the Fed's Board of Governors. And read Paul Krugman for skepticism on whether technological progress is a big deal.

Here is a closely related claim: Free access to Internet utilities like Google and Facebook means that market-based consumption growth understates growth in total consumption, and that GDP growth therefore understates gains in social welfare.

The way to think about this claim is in terms of "household production." My ability to use Google and Facebook doesn't require additional spending, just additional time. Spending time on the Internet rather than buying the newspaper, therefore, is functionally similar to making a sandwich at home from cold cuts in the fridge rather than buying a ready-made one at the deli.

Consider, then, the idea that these free Internet utilities are becoming more important, more powerful, more valuable, or whatever. That's identical to an improvement in my sandwich-making skills. And we would think that, as I become a better sandwich-maker, I will substitute home production for market goods, reducing my consumption of deli sandwiches. In particular, I'll cut back until the next deli sandwich is worth as much to me as the next homemade one.

The implication is that, if the marginal value of time on the Internet is actually rising due to Google, Facebook, and similar utilities, we should be seeing substitution away from the relevant alternative uses of time.

Do we? Yes, from Business Insider:

[Chart from Business Insider: time spent with digital media rising while time spent with TV and print declines.]

Consumers are substituting digital media, much of it free, for media sources they pay for, like TV and print. Maybe, then, we should take the omission of free goods seriously, too, when we consider the divergence of GDP from a fuller, hypothetical measure of social welfare.

Thursday, May 14, 2015

Macro Mysteries and Non-Mysteries

There has been an interesting, if rather theoretical, debate between Roger Farmer, Brad DeLong, Paul Krugman, and John Cochrane. The gist of it is simple enough: Is the current standard toolkit of macroeconomic models enough to explain the 2008 recession and limp recovery?

So that all blog-readers are on the same page, Keynesian macroeconomics has rallied around a certain framework since the 1980s. You start with a very classical model of the economy -- an economy that is always at potential, always has the right prices, and always has efficient allocations of resources -- and add some frictions, usually sticky prices or some sort of borrowing constraint. The result is a model where business cycles happen (and can be very severe) but where, eventually, the economy returns to potential. Krugman largely defends this theoretical tradition or, more precisely, a more primitive version of it.

This is not what Roger Farmer wants. Instead, Farmer wants economists to be thinking about models in which "potential" is not well defined -- that is, where it is very much possible for the economy to find equilibrium at many different levels of production. In short, Farmer wants ideas like multiple equilibria, nonlinearity, and self-fulfilling expectations back on the theoretical agenda. And, on the empirical side, Farmer has been trying to show that these phenomena appear in key economic variables like unemployment and output.

In moderating the debate, DeLong faults Krugman's defense of the standard toolkit and argues that Farmer deserves some credit. The standard toolkit, DeLong contends, doesn't get the size of the recession right:
When I look at the size of the housing bubble that triggered the Lesser Depression from which we are still suffering, it looks at least an order of magnitude too small to be a key cause... To put it bluntly: Paul is wrong because the magnitude of the financial accelerator in this episode cries out for a model of multiple--or a continuous set of--equilibria. And so Roger seems to me to be more-or-less on the right track.
I do not think DeLong is correct when he says that the magnitudes come out wrong. My sense has been that the standard toolkit -- with the financial accelerator and sticky prices -- actually does get it right. It follows that, at the moment, we do not have compelling evidence that the stuff Farmer wants to put into macroeconomic models is needed.

Matteo Iacoviello, for instance, showed back in 2005 that textbook financial-accelerator models match what we see in the data. There's no mystery to be solved about why declines in home prices have such severe, protracted effects on economic growth. More recently, Atif Mian and Amir Sufi have put forward a lot of evidence that the hit to household balance sheets during the 2008 recession explains the decline in employment. For my part, I am doing some work to extend this line of inquiry to Spain's housing bubble, with some initial results showing that the boom and bust in mortgage lending, driven by wholesale finance, fully explains the boom and bust in housing prices.

Simon Gilchrist and Egon Zakrajšek have shown something similar is true in corporate bonds -- a financial market that, when hit with an adverse shock, propagates the shock into corporate investment and employment. Daniel Leigh and an army of economists at the International Monetary Fund have shown that, across the set of developed economies, the drop and sluggish recovery in business investment also lines up with the predictions of the textbook model.

Another approach is to put these financial frictions into a more developed model of the economy's structure, as in some recent work by Marco Del Negro, Marc Giannoni, and Frank Schorfheide. When you hit that model economy with the kind of shocks that preceded the 2008 recession, the downturn that pops out of the model looks quite a lot like the 2008 recession.

I am not trying to say here that the 2008 recession raises no interesting questions. It does. But I think that a review of the empirical research would suggest that "why was the downturn so severe?" and "why has the recovery been so weak?" are not among them. When DeLong and Farmer say that our theoretical framework is insufficient to explain the evidence, I do not know what evidence they have in mind.

Farmer does some informal statistical work to try to show that real output drifts rather than returns to a trend. The problem with this argument is that, when you separate out permanent and transient shocks -- something Farmer doesn't do -- the transient ones look like shocks to demand, the permanent ones to supply. (Cochrane's post has a lot more to say on these statistical issues.)
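
To see what's at stake statistically, here is a minimal sketch in Python of the kind of unit-root test involved. This is my illustration of the drift-versus-trend question, not Farmer's or Cochrane's actual procedure, and it assumes the pandas_datareader and statsmodels packages are available.

    import numpy as np
    import pandas_datareader.data as web
    from statsmodels.tsa.stattools import adfuller

    # Pull log real GDP (FRED series GDPC1) and run an augmented
    # Dickey-Fuller test with a constant and a linear trend. The null
    # hypothesis is a unit root: output drifts permanently after shocks.
    # The alternative is trend stationarity: output returns to trend.
    gdp = web.DataReader("GDPC1", "fred", start="1947-01-01")
    log_gdp = np.log(gdp["GDPC1"].dropna())

    stat, pvalue, *rest = adfuller(log_gdp, regression="ct", autolag="AIC")
    print(f"ADF statistic: {stat:.2f}, p-value: {pvalue:.3f}")

Even in this simple form, the test illustrates the problem: failing to reject a unit root is consistent both with Farmer's story and with a mix of permanent supply shocks and transient demand shocks.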

Farmer might find some stronger evidence for his view that "potential" is a nebulous concept in some fascinating new work by Larry Ball, which compares the revision of estimates of potential output to the actual downturn in output. Where the downturn was worse, Ball shows, the loss of potential has been worse. However, there's some (very different) evidence from the bombings of Japan and Vietnam showing that long-run economic potential is almost indestructible.

Trying to find solid footing on this issue will be a challenge. It's terribly difficult, from the standpoint of research, to show that short-run fluctuations transmit into long-run catastrophes. "Permanent" is hard to distinguish from "long-lasting."

My feeling, then, is that the heat in this debate is pretty misplaced. We have a mountain of evidence showing that financial shocks can generate long-lasting, deep recessions -- and yet, we are only at the beginning when it comes to understanding whether recessions do permanent damage, let alone how much. Why don't we start there?

Sunday, May 10, 2015

Who's The Best Candidate?

Martin O'Malley: If drafted, I will run; but if nominated, I will probably be a disaster.

Without a doubt, one of the best parts of political elections is the activity in prediction markets. Observers of that activity can learn a lot about how politics works.

I'm curious whether prediction markets can answer an important political question in the U.S.: Is someone a good candidate for president? It turns out that they can.

Prediction markets give us probabilities that candidates will win the Republican and Democratic nominations for president and the probability that they will win the general election. If we assume that candidates compete for only one nomination and cannot stage a third-party run if they do not win -- that is, no John Andersons allowed -- then we can easily estimate the probability of them winning the general election, conditional on winning the party nomination.*

When political pundits discuss whether someone is a good candidate for president, I think this conditional probability is exactly what they mean.

Taking these two probabilities from three different prediction markets -- PredictWise, Betfair, and PredictIt -- I am able to estimate this "competitiveness" score for nine top contenders for the Republican and Democratic nominations: Jeb Bush, Marco Rubio, Scott Walker, Rand Paul, and Chris Christie for the Republicans, and Hillary Clinton, Elizabeth Warren, Joe Biden, and Martin O'Malley for the Democrats. (Some technical notes can be found below.**)

Here's what I find: The best Republican candidate is Jeb Bush, who has a 67-percent chance of winning the general election if he wins the nomination. The worst Republican candidate is Scott Walker, who has a 44-percent chance.

Among Democrats, Joe Biden and Hillary Clinton are nearly tied for the top candidate, with 58-percent and 57-percent chances of general-election victory if either secures the nomination. With a 24-percent chance, Martin O'Malley is the worst Democratic candidate.

You can see the full table of results here:

[Table: probability of winning the general election conditional on winning the nomination, by candidate and prediction market.]

It's worth noting here that, at the party level, prediction markets estimate a 58-percent chance of a Democrat winning the presidency and a 42-percent chance of a Republican win. So comparing the candidate's conditional probability with the party's overall probability gives you a sense of how good, say, Jeb Bush is as a candidate relative to the Republican field.

I found the results pretty surprising. They suggest that Rand Paul is a viable general-election candidate, that Elizabeth Warren and Scott Walker are pretty overrated, and that "Bush fatigue" is fake. I was also surprised, in general, by how closely clustered the top candidates were -- one take-away is that the candidate matters less than you might think.

On the other hand, the prediction markets think that the rest of the field is remarkably weak. Another take-away for the parties, then, might be: Nominate one of these candidates, or you will get crushed. This also helps explain why many of the top candidates can have better than 50-50 odds of winning the general election if they win their party's nomination.

What might differentiate, say, Jeb Bush from Scott Walker in this conditional probability? I'll mostly leave that to the pundits. Yet Andy Hall, a young political scientist at Harvard, has recently found compelling evidence that political extremism hurts candidates' chances in general elections.

Another possibility is that these conditional probabilities aren't a perfect measure of competitiveness. If some of these candidates win the nomination, you've got to imagine that they got lucky -- Biden, for instance, trails Clinton in his chance of winning the Democratic nomination -- and so there's a sense in which this conditional probability is premised on "something good" happening to the candidate.

I would also remind readers about the "no-John-Andersons" assumption. If a candidate could stage a viable third-party race -- one might imagine this for Warren or Paul -- then my estimates might be a bit low.

Assessing the viability of presidential candidates is too important to be left to polling and pundits. Prediction markets can shed some light on whether a candidate has a shot in the general election if they win their party's nomination. 

----

* I will step through the math. By the law of total probability:

P(wins election) = P(wins election | wins nomination) * P(wins nomination)
                 + P(wins election | !wins nomination) * P(!wins nomination),

and then, by the assumption that P(wins election | !wins nomination) = 0:

P(wins election) = P(wins election | wins nomination)*P(wins nomination)

and therefore

P(wins election | wins nomination) = P(wins election) / P(wins nomination).

** Two technical notes:

(1) Since prediction markets for both the nomination and the general election do not exist for all candidates, I wasn't able to go further than the top names. Another issue, for some long shots, was that the probabilities are coarsely estimated -- that is, if you have about a 2-percent chance of winning the nomination, whether that 2 percent is really 2.4 percent or 1.6 percent matters, and I do not have that level of precision. So I excluded candidates that prediction markets see as long shots.

(2) Prediction-market prices embed an overround: the implied probabilities sum to more than one, which ensures a profit for the market maker. To correct for this, I re-based the relevant probabilities so that they summed to one.
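
Here is a minimal sketch in Python of both steps, the re-basing and the conditional probability, on made-up numbers rather than actual market quotes:

    # Illustrative market-implied probabilities of winning one party's
    # nomination; they sum to more than one because of the overround.
    raw_nomination = {"A": 0.55, "B": 0.30, "C": 0.20}

    # Re-base so that the probabilities sum to one.
    total = sum(raw_nomination.values())
    nomination = {k: v / total for k, v in raw_nomination.items()}

    # Market-implied probabilities of winning the general election,
    # assumed already re-based for simplicity.
    election = {"A": 0.30, "B": 0.12, "C": 0.05}

    # With no third-party runs allowed,
    # P(wins election | wins nomination) = P(wins election) / P(wins nomination).
    for k in nomination:
        print(f"Candidate {k}: {election[k] / nomination[k]:.0%} if nominated")

On these illustrative numbers, candidate A would have a 57-percent chance of winning the general election conditional on winning the nomination.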

Tuesday, May 5, 2015

Today's Links

1. My good friends Daniel Yu and Jason Kang are up to amazing things. Yu runs Reliefwatch, a tech startup that helps clinics in the developing world avoid shortages of medical supplies. Kang helped to design Highlight, a powdered bleach additive that makes decontamination against infectious disease much easier.

2. The IMF has released a new dataset on capital controls. And here's an amazing resource that explains how to use basically any major survey dataset.

3. Graph: How the market value of tech firms has evolved from 1980 to present.

4. When the State Speaks, What Should It Say? There is a lot that this book (by Corey Brettschneider) can bring to bear on recent debates about whether and how the government should intervene against private discrimination.