Norbert Michel and Jerome Famularo
Claims of a relentless housing crisis are prevalent in policy circles, politics, and the media. But as we have described in several blog posts and a working paper, there are good reasons to be skeptical of that view. Proponents often uncritically cite estimates of the housing shortage ranging from one million to twenty million units, and that range alone is enough to question the usefulness of those estimates.
In this post, we further analyze Freddie Mac’s most recent shortage estimate of 3.7 million units in the third quarter of 2024. The analysis suggests that policymakers should avoid relying too heavily on these kinds of estimates, which are, at the very least, highly subjective measures of the state of the housing market.
Freddie’s methodology (which uses some of the same variables as other groups’ methods) first calculates the target (desired) housing stock, defined as target households divided by the target occupancy rate (1 minus the target vacancy rate). Their model then subtracts the existing housing stock from the target housing stock to obtain the “shortage.”
In equation form, the shortage is the difference between the target housing stock k* and the existing housing stock k:
shortage = k* – k
The target housing stock k* is a function of a target number of households hh* and a target vacancy rate v*:
k* = hh* / (1 – v*)
The shortage can therefore be written in terms of hh*, v*, and k by substituting for k*:
shortage = (hh* / (1 – v*)) – k
Thus, for any target number of households (and existing housing stock), decreasing the target vacancy rate shrinks the shortage. Indeed, with a sufficiently low target vacancy rate, the shortage could even become a surplus.
Freddie Mac’s 2024 shortage estimate uses a target of 133.1 million households and a housing stock of 147 million units. To test the sensitivity of Freddie’s estimate to the target vacancy rate, we use these figures in the above equation as follows:
shortage = (133,100,000 / (1 – v*)) – 147,000,000
We first replicate Freddie’s 2024 estimate (a shortage of 3.7 million units) using their target vacancy rate of 11.7 percent. We then vary the target vacancy rate and plot each resulting estimate of the “shortage.” As Figure 1 demonstrates, when the target vacancy rate falls below roughly 9.5 percent, the estimate turns negative, indicating a housing surplus. Thus, a target rate approximately 2 percentage points lower turns the estimate from a shortage into a surplus.
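The calculation above is simple enough to sketch directly. The following snippet, using only the figures quoted in this post (133.1 million target households, 147 million existing units, an 11.7 percent target vacancy rate), replicates the 3.7 million estimate and solves for the break-even target vacancy rate at which the shortage is exactly zero:

```python
# Sensitivity sketch using the figures quoted in the post.
TARGET_HOUSEHOLDS = 133_100_000  # hh*
HOUSING_STOCK = 147_000_000      # k

def shortage(target_vacancy_rate: float) -> float:
    """shortage = hh* / (1 - v*) - k; a negative value indicates a surplus."""
    return TARGET_HOUSEHOLDS / (1 - target_vacancy_rate) - HOUSING_STOCK

# Replicate Freddie Mac's 2024 estimate (about 3.7 million units).
print(f"shortage at v* = 11.7%: {shortage(0.117):,.0f}")

# Break-even target vacancy rate: setting shortage = 0 and solving for v*
# gives v* = 1 - hh* / k.
breakeven = 1 - TARGET_HOUSEHOLDS / HOUSING_STOCK
print(f"break-even v*: {breakeven:.2%}")
```

Any target vacancy rate below the break-even value (about 9.5 percent) flips the reported "shortage" into a surplus, which is the sensitivity Figure 1 illustrates.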
Judging by previous Freddie Mac estimates, a 2-percentage-point difference in the target vacancy rate appears to be well within normal bounds. For its 2018 and 2020 shortage calculations, Freddie chose a target vacancy rate of 13 percent, so its 2024 estimate uses a target vacancy rate 1.3 percentage points lower than previous estimates.
This change in the vacancy rate highlights several issues with the types of “shortage” models that Freddie employs. For instance, if Freddie had used its new target vacancy rate (11.7 percent) for its 2018 and 2020 estimates, those shortages would have been much smaller. As Table 1 shows, the 2020 shortage would have dropped by more than half, from 3,857,471 to 1,721,857. The decrease in the 2018 shortage would have been even more drastic, falling by more than 80 percent, from more than 2.5 million to just 431,597.
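One useful property of the formula is that, holding the other inputs fixed, the existing stock k cancels out of the difference between two shortage figures computed under different target vacancy rates. Taking the 2020 figures in Table 1 at face value, and assuming the only change between the two columns is the target vacancy rate (an assumption on our part), we can back out the target household count implied by that pair of numbers:

```python
# Consistency check on Table 1's 2020 row. If only v* differs between
# the two columns, then
#   s(v1) - s(v2) = hh* * (1/(1-v1) - 1/(1-v2)),
# so the implied hh* can be recovered from the difference. Note that
# hh* for 2020 is our inference here, not a published figure.
s_at_13 = 3_857_471    # 2020 shortage with v* = 13 percent
s_at_117 = 1_721_857   # same inputs recomputed with v* = 11.7 percent

implied_hh = (s_at_13 - s_at_117) / (1 / 0.87 - 1 / 0.883)
print(f"implied 2020 target households: {implied_hh:,.0f}")
```

The implied figure (roughly 126 million households) is in a plausible range for 2020, which suggests the two columns of the table are internally consistent with the formula described above.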
This kind of sensitivity test demonstrates the importance of using the “right” vacancy rate in these types of models. Unfortunately, it is very difficult to choose an objectively correct rate.
For its 2018 and 2020 estimates, Freddie’s target vacancy rate of 13 percent is based on the average observed vacancy rate between the first quarter of 1990 and the second quarter of 2018. (Freddie rounded the 12.5 percent average up to 13 percent.) For the 2024 calculation, Freddie’s revised target of 11.7 percent is the average from the first quarter of 1994 to the fourth quarter of 2003.
Freddie justifies this change by arguing that the housing market has changed since it produced the 2018 and 2020 estimates, yet the new target is based on older data. Regardless, the average vacancy rate over some arbitrarily chosen past period is not necessarily the “optimal” vacancy rate. As Table 1 demonstrates, using the 1985 to 2006 average (often referred to as the Great Moderation period), 11.5 percent, dramatically shrinks each of the shortage estimates. And using the 1965 to 2025 average, 11 percent, switches Freddie’s 2018 estimate to a surplus of almost 700,000 homes.
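To make the point concrete with the one set of inputs quoted in this post, we can apply these alternative historical-average targets to Freddie’s 2024 figures. (Table 1 applies each target to that year’s own inputs, which we do not reproduce here, so this is illustrative only.)

```python
# Illustrative only: alternative historical-average target vacancy
# rates applied to Freddie's 2024 inputs from the post.
TARGET_HOUSEHOLDS = 133_100_000  # hh*
HOUSING_STOCK = 147_000_000      # k

targets = [
    ("Freddie 2024 target (11.7%)", 0.117),
    ("1985-2006 average (11.5%)", 0.115),
    ("1965-2025 average (11.0%)", 0.110),
]
for label, v in targets:
    est = TARGET_HOUSEHOLDS / (1 - v) - HOUSING_STOCK
    print(f"{label}: {est:,.0f}")
```

Even with the 2024 inputs held fixed, swapping in these historical averages shaves the estimated shortage from roughly 3.7 million down toward 2.6 million, a reminder of how much the headline number depends on which past period one averages over.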
This level of sensitivity is troubling given that the actual vacancy rate ranged from about 8 percent to 15 percent between 1965 and 2024, a far wider swing than is needed to “switch” one of these estimates from a shortage to a surplus. (See Figure 2.)
An even bigger problem with these kinds of estimates, though, is that the economically optimal vacancy rate is surely lower than the observed rate. Vacancy rates (the percentage of unoccupied housing units at any given time) are themselves a form of inefficiency. Optimally, few units would sit vacant for any significant amount of time, and buyers and sellers would match rapidly.
Even if not all vacant units are regarded as surplus, there is no objective reason to use the historical average, from any period, as the optimal vacancy rate. The fact that these kinds of models also rely on subjective assessments of the target number of households compounds these problems and makes their usefulness highly questionable.
While vacancy rates and household formation are surely worth studying, policymakers should not base major policy decisions on such simplistic models. They do not capture the intricacies of housing supply and demand, and building national policy on their estimates may lead to harmful consequences.