10 Top Academic Insights to Begin the Year

The annual jamboree of economists, the ASSA meetings, took place in San Antonio, TX, in early January. As always, the meetings had a superb line-up of academics, providing the latest innovative insights into finance, economics, real estate, and many other fields. The MCRE team was well-represented, including three paper presentations on the impact of climate risk on the (Dutch) housing market, tenant satisfaction and the financial performance of commercial real estate, and the effect of subsidies on the adoption of solar panels. Below, each of our attending team members (+2) provides insight into their favorite paper(s).

1. Infrastructure as an Investment (Nils Kok)

It’s no secret that infrastructure is rapidly catching up with real estate as an asset class of choice for pension funds and other institutional investors (see this paper for some stylized facts about infra in the mixed-asset portfolio). But research on infrastructure has been slow to develop. At this year’s ASSA Meetings, there was a session dedicated to infrastructure, which was definitely my highlight of the conference. Some quick takes:

  • Paper 1. Airport privatization is a good thing. Airport privatization is well-established, with almost 20% of all airports around the globe owned by private parties. Importantly, this is an area where private equity and foreign direct investment seem to enhance efficiency and satisfaction: privatization results in more passengers, more nonstop routes, and more amenities in terminals. At the same time, private owners can improve the financial performance of airports after a takeover (as opposed to most other industries where private equity comes in…). The exception to airport privatization is the US, where practically none of the airports are privatized. Would that explain the somewhat depressing experience of US air travel…?
  • Paper 2. Road roughness leads to unequal economic outcomes. In an impressive effort of data collection, Ed Glaeser and colleagues used vertical movement measured through the cell phones of thousands of Uber drivers to measure road roughness throughout the US. It turns out to be a very precise tool that is, of course, negatively correlated with speed and positively correlated with car maintenance costs. If only the government had information like this: 90% of repavement spending in NY is NOT targeted, i.e. roads are being repaved that are in much better shape than many of the roads that actually need work. And, as it goes, those with lower incomes suffer more from road roughness, not just because roads are rougher in lower-income areas, but also because lower-income households typically have to commute longer distances.
  • Paper 3. Power generation is increasingly in the hands of investors, not governments. Private equity and other investors own almost 25% of all power-generating capacity in the US, up from 6% in 2005. To a large extent, that is explained by investments in renewable energy, which are mostly made by “new” market participants rather than the incumbents. Incumbents are shifting slowly to cleaner forms of energy generation, but are also retiring “brown” assets (e.g. coal). Private equity seems to have little appetite for these assets, leading to limited leakage. The bad news: private investors charge higher prices for the electricity they generate (about 0.2 cents/kWh, so fairly trivial), and pick contracts that give them the flexibility to play into peak demand, and changes in demand more generally. Importantly, market regulation plays an important role: in regulated markets, don’t expect new capital to flow in; the incumbents will continue to dominate. That’s a critical lesson for policymakers who want to green their grid.

2. AI, economy, and society (Alex Sun)

The fast progress of generative AI, such as ChatGPT, has shaken many aspects of our society. Despite the overall optimism, people are concerned about the “dark side” of AI: the potential large-scale replacement of the workforce, underlying bias, black-box models that are hard to interpret, and so forth. At ASSA 2024, several sessions were dedicated to the discussion of AI, economy, and society. Here are some quick takes:

  • There has been a hypothesis (or hope?) that AI algorithms, which are trained on large samples of data, may perform better than humans on “common” cases, while humans, with conceptual understanding and logical reasoning, can outperform algorithms on uncommon, or “long tail,” cases. In the paper “Comparative advantage of humans vs AI in the long tail”, Nikhil Agarwal and his team compared the performance of multiple algorithms and human experts on X-ray diagnosis. The findings are intriguing: although human experts do have a slight advantage on “uncommon” cases compared to “classic” supervised image recognition algorithms, a new self-supervised algorithm can close this gap.
  • The paper “Algorithmic recommendations and human discretion” also explores the performance differences between humans and algorithms, in the context of bail decisions. The algorithm uses case and suspect information to give recommendations, but human judges can override them. By comparing the misconduct rates of released suspects, we can assess whether human judges or algorithms made better decisions. Not surprisingly, 90% of the judges in the dataset underperform the algorithm when they override (most of these decisions are no better than random; see the figure below). But the remaining 10% of judges outperform the algorithm on accuracy (their released suspects show less misconduct) and fairness (the accuracy is consistent across different groups of people). Using a survey, the researchers find a striking pattern: these high-performing judges made better use of private information that is not available to the algorithm.
  • The interpretation of these results can be twofold: first, the long-tail hypothesis is (most likely) not true, and there is no hard limit to the learning capability of algorithms. Given a suitable model architecture, algorithms can also learn “conceptually” from a few cases (few-shot or one-shot learning). Second, people should think beyond the idea of replacement, and think about how humans and AI should work together.
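
The judge-versus-algorithm comparison can be sketched with a toy dataset. This is not the paper's data or method; judges, override flags, and misconduct outcomes below are all made up, purely to illustrate how one would split per-judge misconduct rates by whether the judge followed or overrode the algorithm.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 5_000
# Hypothetical data: each row is one release decision by one of 50 judges.
cases = pd.DataFrame({
    "judge": rng.integers(0, 50, n),
    "override": rng.random(n) < 0.3,      # judge deviated from the algorithm
    "misconduct": rng.random(n) < 0.15,   # released suspect later misbehaved
})

# Misconduct rate per judge, split by whether the judge overrode the algorithm.
rates = (cases.groupby(["judge", "override"])["misconduct"]
              .mean().unstack(fill_value=0.0))
rates.columns = ["followed_algo", "overrode_algo"]

# Judges whose overrides do better than their algorithm-concordant decisions
# would be the "high-performance" minority the paper identifies.
high_performers = rates[rates["overrode_algo"] < rates["followed_algo"]]
print(f"{len(high_performers)} of {len(rates)} judges improve when overriding")
```

With real data, one would of course also need to deal with selection (we only observe misconduct for released suspects), which the paper handles much more carefully.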

3. Bottlenecks for Evidence Adoption (Linde Kattenberg)

Before implementing a new policy, interventions are often tested on a smaller scale by means of a randomized controlled trial (RCT). This methodology is seen as the ‘gold standard’ in research, as it is the cleanest way to causally identify whether a certain treatment is effective. One common limitation of RCTs is that the treatment might not always be scalable, either because we are not sure that the results of a small study generalize to a larger population, or because there is no labor capacity to execute the intervention at the same level of quality at a larger scale. An impressive paper presented by Stefano DellaVigna and co-authors studied exactly this question: what happens after an RCT is executed and the results are published? Are the results implemented to improve public policy?

To answer this question, they worked together with the Behavioural Insights Team-North America, an organization aimed at improving policy outcomes at the city level using light interventions, also referred to as nudges. The authors analyze all RCTs implemented by the organization between 2015 and 2019: 73 in total, implemented across 30 US cities. In total, 27% of all tested interventions were implemented post-trial. Surprisingly, statistical significance and effect size are not predictive of subsequent roll-out of the policy: even RCTs with a negative treatment effect are no less likely to be adopted later than those with a positive effect. The authors find a limited role for organizational capacity; for instance, larger cities are more likely to have adopted the policy, as are organizations where the contact person for the RCT is still employed at the moment of measurement.


However, by far the most predictive factor in the adoption of the RCT for city-wide implementation is whether the communication infrastructure used for the study was already in place pre-RCT. In studies where a pre-existing communication channel was modified to include a nudge, the policy was implemented 67% of the time, compared to an adoption rate of 12% for studies where a new communication channel had to be set up. In the end, the economic impact of limited adoption of promising interventions is substantial: actual improvements in targeted outcomes are only one-third of the predicted ones, driven by the limited adoption of interventions that were found to be effective. The results have strong implications for predicting the effectiveness of policy instruments beyond the RCT stage. Additionally, they give researchers an incentive to think about how to nudge or guide policymakers in the post-RCT stage to boost the implementation of successful trials.

4. Development and redevelopment (Minyi Hu)

The behavior of real estate investors and developers is affected by many factors. Under which circumstances do they choose to invest in a project, and how is their behavior affected by exogenous shocks? Below are some prominent papers from ASSA-AREUEA 2024 answering these questions, spanning residential to commercial real estate:

One interesting paper is by Michael Ball et al., titled “Why Delay? Understanding the Construction Lag. Aka the Build Out Rate”, which explores the relationship between local demand shocks and site build-out rates. The authors first provide a model that predicts a positive relationship between local demand shocks and construction duration, because as demand rises developers expect a more rapid flow of sales with less downward pressure on prices. Second, measuring local employment change as the local demand shock, and using a sample of over 110,000 residential developments in England from 1996 to 2015, the authors empirically validate their theoretical model. In the heterogeneity analysis, the authors show that this relationship between demand shocks and construction lag is less pronounced in areas where local planning is more restrictive, that are more built-up, and where competition in the local development sector is lower. Their findings indicate that the slow build-out rate in England is the result of both market and policy failure, and they contribute to a better understanding of real estate cycles and price dynamics.

Another interesting study is the paper by Simon C. Büchler et al., titled “On the Value of Market Signals: Evidence from Commercial Real Estate Redevelopment”. This paper investigates how institutional investors adapt their buy-to-redevelop strategies to tangible, localized market signals, which breaks down into two specific questions: (1) do peers’ previous local investment decisions affect investment strategies in the same area, and (2) are those peer decisions capitalized into the transaction price of the property? First, the authors build a theoretical framework that lays out the fundamental mechanisms behind the exercise and valuation of redevelopment options. Second, in the empirical setting, focusing on the intrinsic option value (the intention to redevelop at the moment of the transaction), the authors define the capital intensity gap of a given property as the ratio between its floor-to-area ratio (FAR) and the average FAR of recently built nearby buildings, and an index of economic activity mismatch as the share of recently built nearby properties with a type of economic activity that differs from that of the given building. Using RCA data from 2000 to 2018, they show that: (1) the capital intensity gap is positively related to the ‘buy-to-redevelop’ likelihood; and (2) a mismatch between the current type of economic activity (residential, retail, industrial, or office) of a given commercial property and the nearby recently built properties is also positively related to the propensity to purchase the property for immediate redevelopment. Finally, the authors show that the capital intensity gap and economic activity mismatch can be capitalized into transaction prices that are up to 30% higher, via the asset’s intrinsic real option value. This paper holds important lessons for the development of cities and urban renewal.
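
The two signals can be illustrated with a small sketch. The data and all column names below are made up, and the orientation of the FAR ratio is our assumption (here the gap is computed as the neighbors' average FAR over the property's own FAR, so that larger values flag an under-built site); the paper's exact definitions may differ.

```python
import pandas as pd

# Hypothetical target properties.
properties = pd.DataFrame({
    "id": [1, 2],
    "far": [2.0, 6.0],            # floor-to-area ratio of the property itself
    "use": ["retail", "office"],
})
# Recently built buildings near each property (assumed pre-matched by location).
neighbors = pd.DataFrame({
    "property_id": [1, 1, 1, 2, 2],
    "far": [6.0, 8.0, 10.0, 5.0, 7.0],
    "use": ["office", "office", "residential", "office", "office"],
})

agg = neighbors.groupby("property_id").agg(mean_far=("far", "mean"))
merged = properties.merge(agg, left_on="id", right_index=True)

# Capital intensity gap: neighbors' average FAR relative to the property's own
# FAR (direction of the ratio is our assumption for illustration).
merged["capital_intensity_gap"] = merged["mean_far"] / merged["far"]

# Economic activity mismatch: share of nearby recent builds with a different use.
mismatch = (neighbors.merge(properties, left_on="property_id", right_on="id",
                            suffixes=("_nb", ""))
            .assign(diff_use=lambda d: d["use_nb"] != d["use"])
            .groupby("property_id")["diff_use"].mean())
merged["activity_mismatch"] = merged["id"].map(mismatch)
print(merged[["id", "capital_intensity_gap", "activity_mismatch"]])
```

In this toy example, property 1 (a low-FAR retail site surrounded by denser offices) scores high on both signals and would be the likelier buy-to-redevelop candidate.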

5. Climate risk and government regulation (Philibert Weenink)

The ASSA conference prominently featured discussions on climate change's impact on real estate markets. Amidst various papers on market dynamics, a noteworthy focus emerged on the economic consequences of government interventions.

A standout study by Abigail Ostriker and Anna Russo explores the effectiveness of zoning regulations in mitigating flood risks in Florida. The study highlights that zoning regulations significantly reduce flood damage (by 60%). However, they do so at considerable cost. One-quarter of the damage decline stems from a reduction in new building supply in zoned areas, which increases prices for the remaining stock by 6%. The remaining three-quarters of the damage reduction results from the elevation of newly developed buildings; while elevating properties limits future damage, it adds to construction costs. The paper underscores that while zoning regulations can address the disparity between the private and societal costs of building in flood-prone areas, they may not be the most efficient approach: although the risk declines align with a first-best tax scenario, zoning regulations entail substantially higher costs. Should regulatory measures be pursued, the authors recommend tailoring them specifically to the areas facing the highest risks.

Another insightful presentation, by Meri Davlasheridze, Xing Miao, and Kayode Atoba, delves into the consequences of governments acquiring properties deemed structurally unprotected from climate hazards. Residential buyouts are a crucial element of managed retreat. Beyond averting losses, buyouts that transform disaster-damaged properties into open spaces also enhance environmental amenity values. While the policy aims to limit physical damage, the paper highlights an intriguing side effect. The authors examine market outcomes following buyouts in Harris County, Texas, after Hurricane Harvey, and discover a 2% appreciation effect on nearby properties. Not only could such appreciation effects stimulate moral hazard (owners might speculate on further price improvements driven by societal costs), but they also offset at least part of the policy goal, as the financial impact of a flood becomes larger for these houses. The reasons behind this appreciation effect, whether it stems from a reduction in residential supply, an enhanced surrounding environment, or a signaling effect that nearby properties survived the hurricane, remain ambiguous. Nevertheless, the research prompts the question of whether buyouts are indeed the most effective means of risk reduction.

6. Refinancing and voting behavior (Dongxiao Niu)

At the heart of many American families lies their most significant investment: their home. But with this investment comes the burden of a mortgage, often their largest liability. In a paper presented at the ASSA-AREUEA mortgage session, Ben McCartney (Purdue University) and co-authors explore a question that resonates with many: is Americans' voting behavior influenced by their financial well-being? Titled "Do richer pocketbooks lead to more voting?", this study is not just a cursory glance at the American electoral landscape but a deep dive into how financial fluctuations, particularly those related to homeownership and mortgages, can affect political participation. The Relief Refinance/Home Affordable Refinance Program (HARP), introduced in March 2009, with its significant impact on the refinancing market, provides a perfect backdrop to study how such housing finance stimulus programs influence voter behavior. Theoretically, the effects are ambiguous. Financial distress might spur individuals to vote, either to seek a change in governance (the "angry voter hypothesis") or as a means to express discontent with the current regime. Conversely, the emotional and financial strains could lead to decreased participation; after all, voting is more than just showing up on election day. It involves a time-consuming process of understanding candidates, ensuring registration details are up to date, and often enduring long queues at polling stations. McCartney's paper stands out for its innovative approach to this question. HARP was designed to assist homeowners with little to no equity in refinancing their mortgages. The findings are intriguing: Democrats who benefited from automatic rate resets on their adjustable-rate mortgages (ARMs) were more likely to vote in the 2012 presidential election than those who didn't benefit. In contrast, the opposite trend was observed among Republicans. This suggests that both parties reacted to the financial relief by rewarding the incumbent, albeit in different ways: Democrats through increased turnout and Republicans by staying home. This observation is critical to understanding how personal financial situations can shape the political landscape. The study concludes that such financial interventions have significant aggregate effects, potentially explaining hundreds of thousands of abstentions in U.S. elections. This calls for a deeper understanding of how financial well-being shapes democratic engagement, highlighting the intricate relationship between personal finance and the mechanics of democracy.

7. The role of Pigouvian subsidies in the fight for energy transition (Alexander Carlo)

A growing number of subsidies are being implemented to stimulate the transition to a more sustainable economy. A recent working paper by researchers from the University of Michigan and the National Bureau of Economic Research examines the impact of the time limit of the US Renewable Energy Production Tax Credit on wind energy generation after the subsidy period ends. The results are shown in Figure 4 below.

The study finds that wind generation decreases by 5-10% when the ten-year subsidy ends. The result is striking, as it shows that time limits distort production even in inelastic industries. The authors argue that allowing firms to choose both output and investment subsidies, like those for low-income housing, healthcare, or research and development, is more efficient than forcing them to choose between the two, as is currently the case with the PTC and ITC for wind and geothermal energy. The authors did an excellent job presenting the paper, which is worth reading!


8. Climate risk: causes, consequences and mitigation strategies (Stefany Burbano)

Climate risk is an expanding research topic, increasingly exploring different aspects: how climate risk shapes agents' decisions, such as business location; how it relates to insurance; the indirect costs associated with floods; and the relationship between land development and long-term environmental costs. Here are the insights from the three papers presented in the session on Climate Risk:

Imperfect flood insurance enforcement and business misallocation (Xudong An, Yongheng Deng and Dayin Zhang):

Firms face a crucial decision regarding their location, especially considering flood risk. Theoretically, the socially optimal allocation of resources implies that firms should move out of high-risk zones. However, insurance protection influences the firm’s location decision: insurance covers against climate risk ex ante, while government aid alleviates damages ex post. Nevertheless, enforcing insurance requirements on businesses is challenging. In this context, what are the economic consequences of imperfect flood insurance enforcement for commercial properties? In the U.S., there is evidence of a 10% increase in business activity in central business districts after they are designated as flood zones. This increase suggests a misallocation of commercial properties, likely correlated with inadequate insurance enforcement and lower rents in these zones after flood events. This scenario is explained as a free-riding problem, where government aid is provided to uninsured commercial properties post-disaster. Consequently, developers capitalize on lower rents after floods, moving businesses to these areas, which leads to a social cost not internalized by the businesses.

Figure. Heterogeneous Effect: Business Movement Across Areas with Different Commercial Property Shares


Estimating the indirect cost of floods: Evidence from high-tide flooding (Seunghoon Lee, Xibo Wan and Siqi Zheng):

Although common measures of flood costs primarily focus on direct and insured damages, this approach overlooks the broader, often more subtle impacts of flooding. High-tide flooding (HTF), a type of small-scale coastal flooding, exemplifies this issue. While not typically destructive, HTF is highly disruptive, affecting daily life. This raises a critical question: how much are we underestimating the true cost of floods by omitting these indirect impacts? Inundation of low-lying coastal areas driven by tidal movement and sea level rise causes significant business and transportation disruptions, adaptation costs, and loss of use value, among others. These events are not destructive, rarely give rise to flood insurance claims, and imply small direct damage. However, the cost in terms of rental rates or mobility is not negligible. For instance, in the U.S., exposure to HTF leads to a 9% decrease in visits to affected areas and a 0.23% reduction in rental rates over 12 months for each additional day of flooding. The evidence indicates that the indirect costs of floods, such as business disruptions and adaptation expenses, are largely ignored.

Figure: Price Effect by Frequency and Distance. This plot shows how the price effect of the HTF varies by (a) the number of HTF incidents and (b) the distance of a zip code to the nearest coastal line.


Land use and flood damages: assessing long run consequences of economic development (Vincent Yao, Lu Han and Congyan Han):

Finally, a pivotal question in this area is: what is the long-run environmental cost of land development? In the U.S., an often overlooked aspect is the public cost associated with subsidized flood insurance for different land uses, which is not internalized in private markets; in other words, a transfer from taxpayers to developers. Notably, many development hotspots are located in flood zones, which is reflected in a high proportion of flood insurance claims from businesses in these areas. The long-run elasticity of flood claims to land development in the U.S. is about 1.6%. This indicates that an increase in development correlates with a rise in flood insurance claims, predominantly in zones previously classified as cropland and treeland. Over a period of 15 years, this has amounted to around 600 million USD in flood damage for a 1% increase in development, a significant social cost of land development that the market does not cover. This reveals the hidden, yet substantial, environmental and economic impacts of land development.


9. Road roughness and drivability

According to the authors of this paper (Currier, Glaeser, and Kreindler), road drivability is not random, and road roughness leads to economic costs via time lost driving more slowly. This indicates a willingness to pay for smooth driving. ‘Connecting’ highways are characterized by the highest measures of drivability, in other words, the least measured bumpiness. Drivability is worst for roads in coastal areas, and within-municipality and town welfare is positively correlated with road surface smoothness.

The authors use Uber vertical acceleration data in combination with cell-phone location data. To be more precise, they use vertical acceleration, tracked by any modern cell phone, as a proxy for road surface bumpiness. The richness of the data enables the authors to correct for driver-specific variation in driving style. Road-segment averages are then computed for the sake of data manageability. The core aim thereof is to ensure comparability with state-gathered data, for which the ‘measurement’ vehicle drives at a pre-specified speed during non-rush-hour times.
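
A heavily simplified version of this pipeline might look as follows. The ping-level data is hypothetical, and the crude per-driver demeaning stands in for the authors' more careful correction for driving style; the RMS-of-acceleration roughness proxy is likewise our assumption for illustration.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 10_000
# Toy ride data: vertical acceleration samples tagged with driver and segment.
pings = pd.DataFrame({
    "driver": rng.integers(0, 200, n),
    "segment": rng.integers(0, 500, n),
    "vert_accel": rng.normal(0.0, 1.0, n),   # m/s^2, high-pass-filtered signal
})

# Crude driver correction: demean each driver's signal so that aggressive or
# cautious driving styles do not masquerade as road quality.
pings["accel_adj"] = (pings["vert_accel"]
                      - pings.groupby("driver")["vert_accel"].transform("mean"))

# Segment-level roughness proxy: RMS of the adjusted vertical acceleration.
roughness = (pings.groupby("segment")["accel_adj"]
                  .apply(lambda a: float(np.sqrt(np.mean(a ** 2)))))
print(roughness.describe())
```

Segment averages like these could then be benchmarked against the state-gathered measurements taken at a pre-specified speed.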

Note, however, that smooth drivability can also encourage speeding. From a policy perspective, it can therefore be rationally argued that there are roads where smooth riding should be discouraged, for example in school zones, (national) parks, and residential areas.

It would be interesting to investigate external validity, both within and beyond the US; the geographical coverage of the data barely brushes the west coast. In European countries, for example, road maintenance is commonly not governed by the municipality, but by an overarching organization responsible for roads in a wider area. Besides, tax collection schemes for road maintenance in the US are very specific. It would be interesting to investigate whether road maintenance quality and efficiency are significantly different in countries where taxes are collected nationally and then redistributed by necessity. This is also motivated by the authors’ finding that there are sharp changes in road smoothness at town borders. Given that the aim of any network is to connect different points on a map, sharp changes in drivability at (county and national) borders are a topic to address and relevant for policymakers.

Figure: Experienced Roughness and Speed on Repaved Road Segments


10. Forecasting and Managing Correlation Risks (Hugo Schyns)

Asset pricing has always been a key topic in finance, but the focus seems to be tilting more and more toward machine learning (ML). Given the extensive research in asset pricing over the last decades, it is not surprising to see the academic community keeping up with the current trend of ML by trying to find better answers to known problems using new techniques.

The Asset Pricing: Machine Learning session was packed with very interesting papers from top authors in the field. The one that definitely caught my eye is “Forecasting and Managing Correlation Risks” by Tim Bollerslev, Sophia Li and Yushan Tang. It addresses the crucial topic of correlations and the challenges associated with forecasting them. Until now, the related literature has mainly focused on the estimation and forecasting of volatility. However, for portfolio optimization purposes, being able to forecast correlations between the assets of interest is key.

The authors propose a novel approach to forecasting realized correlations by applying a regularized regression with variable selection (LASSO), where the inputs of the model are past correlations and factor-driven features. The results show that this approach yields better out-of-sample results than commonly used methods when applied to S&P 500 stocks. Additional empirical applications where correlations play a key role, such as a global minimum variance portfolio and pairs trading, confirm the performance of this novel, yet simple, approach.
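
As a rough sketch of the idea (not the authors' implementation), one can forecast next-period realized correlation from its own lag plus factor-driven features with a LASSO regression. The synthetic correlation series, the five placeholder factor features, and the penalty value below are all assumptions for illustration.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(42)
T = 300
# Toy monthly realized correlation for one stock pair: a smooth cycle plus noise.
past_corr = np.clip(0.4 + 0.3 * np.sin(np.arange(T) / 12)
                    + rng.normal(0, 0.05, T), -1, 1)
factor_feat = rng.normal(0, 1, (T, 5))   # stand-ins for factor-driven features

# Predict next month's correlation from the lagged correlation and factors.
X = np.column_stack([past_corr[:-1], factor_feat[:-1]])
y = past_corr[1:]

split = 250                              # train on the first 250 months
model = Lasso(alpha=0.001).fit(X[:split], y[:split])
pred = model.predict(X[split:])
oos_mse = float(np.mean((pred - y[split:]) ** 2))
print(f"out-of-sample MSE: {oos_mse:.4f}, "
      f"nonzero coefficients: {int(np.sum(model.coef_ != 0))}")
```

The L1 penalty zeroes out uninformative features, which is what keeps the approach interpretable despite the large feature set in the actual application.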

A usual criticism of ML models is their lack of interpretability, but this is not a concern here, as the method used remains a regression method, just a more sophisticated one. Hence, this paper should definitely end up on the reading list of those interested in asset pricing, with a nice ML twist!