Ignacio Flores is a postdoctoral scholar at the Stone Center and a member of The GC Wealth Project team. Earlier this year he published “The Weight of the Rich: Improving Surveys Using Tax Data” in a special issue of The Journal of Economic Inequality that also featured work by several other Stone Center scholars. Flores recently spoke to the Stone Center about his research and the algorithm that he and his coauthors developed during his Ph.D. studies.

You designed a method that captures the right tail of the income and wealth distributions. Why is it so important to measure the very top?

Flores: We are trying to get as close to a complete picture as possible. For a long time, researchers almost exclusively used surveys to study income and wealth distributions. But there are at least three ways in which these surveys are biased, and this affects the way we picture not only the right tail, which is the top of the distribution, but also the bottom or the left tail.

First, not everybody responds to surveys with the same probability. In general, response rates worldwide have been declining. But even if only 10 percent of a sample responded to a survey, there would be no problem as long as those 10 percent were chosen randomly, because they would still be representative. Reality doesn’t work like that: response rates fall in both tails of the distribution, while they are higher and relatively constant in the middle, and that biases measured inequality.
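A toy simulation can make this concrete. Everything below is hypothetical and illustrative, not from the paper: incomes are drawn from a lognormal distribution, the response probability is high in the middle of the distribution and dips in both tails, and the Gini coefficient computed from respondents alone tends to understate the true one.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical population: lognormal incomes (illustrative parameters only).
income = rng.lognormal(mean=10.0, sigma=1.0, size=200_000)

def gini(x):
    """Gini coefficient computed from the sorted-values formula."""
    x = np.sort(x)
    n = x.size
    cum = np.cumsum(x)
    return (n + 1 - 2 * np.sum(cum) / cum[-1]) / n

# Response probability that is high in the middle and falls in both tails.
rank = income.argsort().argsort() / income.size      # fractional rank in [0, 1)
p_respond = 0.8 - 0.5 * (2 * np.abs(rank - 0.5)) ** 2
responded = rng.random(income.size) < p_respond

print(f"true Gini:   {gini(income):.3f}")
print(f"survey Gini: {gini(income[responded]):.3f}")  # tends to understate
```

Because respondents cluster in the middle of the distribution, both extremes are underrepresented, and measured inequality is pulled down.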

The second behavioral bias is that people who do respond sometimes misreport. They are not necessarily lying: some people’s income doesn’t come mainly from a salary or wage, and those incomes can be especially variable. This is true of the self-employed and of capital-income earners. Surveys ask respondents to report their income over a specified reference period, which could be a year, a month, or even a week or two. Short reference periods in particular might not be the best way to capture capital income, because it is often volatile. There is also an opposite bias in the other tail: lower-income people tend to report higher incomes than they actually have, while people at the top tend to report less than they actually have.

And the third bias is a statistical one, called “small-sample bias,” which operates whenever we try to measure what happens to a tiny subgroup of a survey’s target population, which is usually the case when studying the very rich.
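A rough illustration of this small-sample problem, with entirely made-up parameters: the simulation below repeatedly draws modest-sized samples from a heavy-tailed Pareto distribution and estimates the top-1-percent income share. Because a handful of extreme observations dominate the tail, the estimate scatters widely from one sample to the next.

```python
import numpy as np

rng = np.random.default_rng(1)

def top1_share(sample):
    """Share of total income received by the top 1 percent of the sample."""
    cutoff = np.quantile(sample, 0.99)
    return sample[sample >= cutoff].sum() / sample.sum()

# 500 hypothetical "surveys" of 1,000 respondents each, all drawn from the
# same heavy-tailed Pareto distribution (shape parameter chosen for effect).
draws = [top1_share(rng.pareto(1.5, size=1_000) + 1.0) for _ in range(500)]

print(f"top-1% share across surveys: mean={np.mean(draws):.3f}, "
      f"sd={np.std(draws):.3f}")
```

The dispersion across identically drawn samples is what makes survey-based top shares unreliable even when the sampling design is flawless.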

In theory, you can design a perfect survey, but both behavioral and statistical biases can still distort results.

You and your two coauthors, Thomas Blanchet of the Paris School of Economics and Marc Morgan of the University of Geneva, developed a piece of software that allows you to apply this method: the Stata command bfmcorr. Can you explain how this came about?

Flores: If you want to implement everything we do and show in the paper, there is this Stata command, a user-friendly program that applies all of it. The method can be applied to income and wealth data, or to other similar variables. It’s pretty flexible.

Basically, it allows you to adjust the right tail of a distribution. Its most direct use is to adjust household survey data (working directly with the microdata, meaning data at the level of households or persons) using external data from administrative sources, for the three reasons I described. Most of the time, these administrative data are income tax files, which we know do a better job of capturing what’s happening at the top of the distribution. But this external source is not perfect, either: it’s actually pretty bad at capturing what’s happening in the middle of the distribution, especially where a substantial amount of informal income goes unrecorded in the tax data.

So the whole point of the algorithm and the paper is how to combine two different sources of information in a way that makes sense, in a way that is statistically consistent, when none of the sources is 100 percent trustworthy, and in a way that actually goes a bit beyond — that provides many of the different features that a survey should have. That’s where it makes sense to use survey calibration theory.
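To give a feel for the intuition, here is a deliberately simplified, hypothetical sketch — it is not the published bfmcorr algorithm, and all of the data and parameters below are invented. Above a merging point, survey observations are reweighted by the ratio of tax-data to survey frequencies in each income bracket (a crude stand-in for the bias-correction coefficient the method estimates), which pulls the survey’s top income share back toward the tax-implied one.

```python
import numpy as np

rng = np.random.default_rng(2)

# Made-up "truth": a Pareto income distribution. The "survey" keeps only 30%
# of observations above the merging point; the "tax" data cover the top fully.
true_pop = rng.pareto(2.0, size=100_000) + 1.0
merge_point = 5.0
keep = rng.random(true_pop.size) < np.where(true_pop > merge_point, 0.3, 1.0)
survey = true_pop[keep]
tax = true_pop[true_pop > merge_point]

# Reweighting ratio per bracket: tax-implied frequency over survey frequency.
bins = np.quantile(tax, np.linspace(0.0, 1.0, 21))        # 20 tail brackets
survey_top = survey[survey > merge_point]
tax_counts, _ = np.histogram(tax, bins=bins)
survey_counts, _ = np.histogram(survey_top, bins=bins)
ratio = tax_counts / np.maximum(survey_counts, 1)

# Apply the bracket ratio as a new weight for top survey observations.
idx = np.clip(np.digitize(survey_top, bins) - 1, 0, ratio.size - 1)
weights = np.ones(survey.size)
weights[survey > merge_point] = ratio[idx]

def share_above(x, w, cutoff):
    """Weighted share of total income held above the cutoff."""
    mask = x > cutoff
    return (w[mask] * x[mask]).sum() / (w * x).sum()

print(f"true share above cutoff: "
      f"{share_above(true_pop, np.ones(true_pop.size), merge_point):.3f}")
print(f"raw survey share:        "
      f"{share_above(survey, np.ones(survey.size), merge_point):.3f}")
print(f"reweighted survey share: "
      f"{share_above(survey, weights, merge_point):.3f}")
```

The real method also rescales weights elsewhere in the distribution so that population totals and other survey features are preserved, which is where the survey-calibration theory comes in.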

We’re trying to preserve the integrity of the survey after we adjust it. Surveys have been developed over long periods of time, and there’s a lot of theory on the statistics behind them and how they are justified. We are trying to take into account all of those aspects while making use of external — and more accurate — data to adjust them, where available.

How long did it take you to develop this algorithm?

Flores: The idea came during my Ph.D. program. I’d started to work on the World Inequality Database, and at that point the whole database was mostly dedicated to studying top income shares, mainly using administrative data. Everybody was using this big new source, because administrative data tell you more about what’s happening at the top of the distribution, at least more than surveys do.

So we were trying to pull in administrative data everywhere, but those kinds of statistics are not meant to be used by distribution analysts. They’re meant for the state to collect taxes. That’s it. So we were trying to tame the beast, and a lot of the energy went into understanding tax systems. What’s nice is that you could use these data to go further back in time than surveys ever allowed. But that’s only for the top. These data don’t give a whole lot of information about what’s happening in the overall distribution, which is the ultimate goal.

So we knew that we had two different sources that were not necessarily telling us the same thing. They were contradicting each other; we were finding large gaps in developing countries, especially in Latin America. What was actually happening, then? How do you build the true distribution? That was the problem I was trying to solve. I’d just started with the team, and I wanted to make myself useful, instead of just being the guy working on one country, which back then was Chile.

It took me a while. Before I had the first draft of an idea on how to solve this problem, it took me at least two or three months of sketching things, writing on my whiteboard, playing with equations and stuff like that. Actually implementing the solution took far more time, and also meant bringing in Marc and Thomas, who were absolutely fundamental to the success of the project. The first draft of the paper that presents our method took about a year and a half. That was finished in 2018. And from there to publication it took four years.

A lot of people think that research is just getting a great idea, like a light bulb going off, but that’s maybe 10 or 15 percent of the work. Then you actually need to do the thing.

In your paper, you applied the method to France, the UK, Norway, Brazil, and Chile. Did it work as expected?

Flores: We were pretty reassured to find that, empirically, our results were matching what was predicted in theory. I’m talking particularly about the theta coefficient, which is a variable that summarizes all the biases. It had a familiar shape in all cases.

For the purposes of this paper, the countries we looked at were just exercises meant to test the method, so that was not the most conclusive part of the study. But this method has since been implemented in several other studies whose purpose is to measure inequality as accurately as possible. The method has become standard practice at the World Inequality Database, which means it’s not only my coauthors and me who are using it: it’s everybody who uses survey data and produces data for that database, as well as independent researchers. A lot of different people are taking an interest in it, which is pretty nice from a research point of view.
