Author: (1) David Staines.
H Empirical Robustness
This section has two components. The first justifies the choice of parameters, identifying sources of uncertainty and reasonable grounds for disagreement. The second examines how robust the Phillips curve is to these different settings and undertakes a brief comparison with the existing benchmark. The findings are broadly supportive of the new solution.

H.1 Parameter Selection
This subsection explains the parametrization choices in the paper, with detailed comparisons against econometric evidence and, where possible, priors commonly used in macroeconomics. It has four divisions: one for each of the three main parameters and a fourth briefly discussing the policy rule.
H.1.1 σ = 1
Throughout the calibration we use σ = 1. This is motivated by the balanced growth path refinement proposed in Appendix D.1.2. In the non-stochastic limit, σ = 1 is necessary for the economy to experience a constant growth rate without labour supply permanently increasing or decreasing. This is a standard assumption in the growth literature and can be seen as the theoretical justification for the ubiquitous practice of filtering or de-trending data before business cycle analysis. Indeed, empirical work in many areas of long-run macroeconomics favours an estimate close to unity, which is the value suggested by the UK government economic service (see Groom and Maddison [2019]). Crucially, it simplifies calculations significantly by causing inter-temporal substitution and wealth effects on labour supply to cancel out.
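To see the balanced growth logic, consider the household's intratemporal optimality condition in a generic CRRA specification; this is a minimal sketch, with χ a labour disutility scale parameter, and the notation may differ in detail from the main text:

\[ \frac{W_t}{P_t} = \chi N_t^{\eta} C_t^{\sigma} . \]

On a balanced growth path, the real wage and consumption grow at a common gross rate g, so stationary hours require g = g^σ, which holds only when σ = 1. The same condition shows why substitution and wealth effects cancel: a permanent proportional rise in W/P and C leaves optimal N unchanged.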
The choice is plausible but not unproblematic from an econometric standpoint. Macroeconometric studies typically produce estimates that are large but often imprecise and weakly identified (see Hall [1988], Yogo [2004], Havránek [2015] and Ascari et al. [2021]). They often produce more plausible estimates when the focus is on durable goods, as in studies such as Mankiw [1982] and Ogaki and Reinhart [1998].
Nevertheless, it is microeconometric evidence that seems more reliable here. Crump et al. [2022] estimate σ = 2, using individual expectations data from a consumer survey. Meta-analyses from Havránek [2015] and Havranek et al. [2015] come to a similar answer, although the former favours a higher estimate of between 3 and 4 to reflect publication bias, which causes systematic underreporting of low or negative estimates. His confidence interval rules out σ < 1.25. These estimates only concern those who do not face binding borrowing constraints; it has been recognised since Vissing-Jorgensen [2002] that constrained consumers will not follow the standard Euler equation, and there is widespread evidence of credit constraints.
As a refinement to our set-up, imagine there were a substantial fraction of consumers who were shut out of the financial system. Suppose, for simplicity, they behaved as though they had σ → 0. Kaplan et al. [2014] estimate that the fraction of households living hand-to-mouth could be as high as one-third in the United States and the UK. If I took σ = 2 as the value for the unconstrained, it would imply an average σ of 4/3 ≈ 1.3, which is closer to unity. Indeed, σ = 1 would lie inside the Havránek [2015] confidence bounds, since the lower bound estimate for the unconstrained of 1.2 would correspond to an aggregate value of σ = 0.8.
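The aggregation behind these figures is a simple population average. With a hand-to-mouth share λ = 1/3 behaving as though σ → 0, and σ_u the value for the unconstrained:

\[ \bar{\sigma} = (1-\lambda)\,\sigma_u + \lambda \cdot 0, \qquad \tfrac{2}{3} \times 2 = \tfrac{4}{3} \approx 1.3, \qquad \tfrac{2}{3} \times 1.2 = 0.8 . \]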
There are several recent estimates that imply values of σ above 4, such as Best et al. [2020] and Landais and Spinnewijn [2021], concerning mortgage notches and benefit cut-offs respectively. However, these may be context-specific and appear too large to apply to aggregate consumption. [128] My calibration is within range of the Smets and Wouters [2007] prior, centred at 1.5 with standard deviation 0.37. [129] Overall, setting σ = 1 does not seem an unreasonable simplification of the econometric evidence, although efforts could be made to relax this assumption in future work.
H.1.2 η = 4
This is the problem parameter. In general, microeconometric evidence of low elasticities (and therefore high η) clashes with macroeconometric studies that favor the opposite conclusion. On the one hand, at the intensive margin, which is technically the only margin in operation in this representative agent framework, labor supply is usually found to be almost unresponsive, particularly for primary earners (see Meghir and Phillips [2010] and Keane [2011]). The standard real business cycle model requires η < 1/2 to come close to generating plausible labor supply volatility (see King and Rebelo [1999] and Whalen and Reichling [2017]). It is currently unclear how Keynesian forces would affect this discrepancy.
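Throughout this discussion, η is in effect the inverse of the Frisch elasticity ε of labor supply, a reading implied by the surrounding text; this is why low estimated elasticities correspond to high values of η:

\[ \varepsilon = \frac{1}{\eta}, \qquad \eta < \tfrac{1}{2} \iff \varepsilon > 2 . \]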
When I broaden the evidence base to include the extensive margin, results are frequently more favourable to a synthesis, with η around 2 or 2.5 on average, as argued by Whalen and Reichling [2017] and Elminejad et al. [2021]. This is because it is often costly to adjust hours in full-time employment, which makes participation decisions relatively elastic for second earners and those nearing retirement. Unfortunately, publication bias exerts a severe downward bias on average estimates of η. The issue is once again that it is rational for individual researchers to throw out negative or insignificant elasticity estimates, on theoretical grounds, with adverse consequences for the profession.
Sigurdsson [2021] uses the 1987 tax holiday in Iceland to discover a substantive labor supply response consistent with η around 2. His small positive estimate of the extensive margin reaction is within the standard range. The novel finding is a large response at the intensive margin, particularly concentrated amongst men, who took up second jobs. This is unusual because the previous consensus was that labor supply is less responsive for males than for females. However, it chimes with findings of elastic labor supply in sectors where hours of work have greater flexibility (see, for example, Fehr and Goette [2007], Farber [2015], Giné et al. [2017], Mas and Pallais [2019], Tazhitdinova [2022] and Angrist et al. [2021]).
Nevertheless, a couple of similar studies come up with very high values of η for tax experiments in Switzerland (Martínez et al. [2021]) and Argentina (Tortarolo et al. [2020]). Sigurdsson [2021] argues these estimates can be reconciled by appealing to different labor market structures, which make labor supply systematically less flexible in, for example, Switzerland than in Iceland. As the authors acknowledge, Argentina is known to have a heavily regulated and unionized labor market. [130] The paper focuses on Norway, which has a structure somewhere between Iceland and Switzerland; there the evidence is consistent with η ≈ 6.
Finally, turning to the macroeconometric studies, authors like Elminejad et al. [2021] are surely too critical of macroeconometric methods. These studies are better able to account for general equilibrium effects than microeconometrics based on within-life-cycle comparisons, as set out by Gottlieb et al. [2021]. There is a body of recent work explaining labor market dynamics, with some success, using elasticities implying η between one and two, such as Krusell et al. [2017], Chang et al. [2019], Park [2020] and Kneip et al. [2020]. [131] Hall [2009] explicitly demonstrates how an economy with a high value of η and a shock to labor market frictions can mimic one with a more standard macroeconomic calibration for the parameter.
It is important to recognise that objectives may differ between microeconomics and macroeconomics. The paper has demonstrated that pricing frictions make for a complicated dynamic model; in the interests of parsimony, one wants to simplify the labor market as far as possible. For prediction and forecasting purposes, the main objective may be to select a value of η that closes the model with an accurate Okun's law relationship, even if this is not an "accurate" reflection of microeconomics. After all, this is effectively what frequentist estimation does. The final procedure is to consider a range of estimates between 1 and 6. The lower bound is motivated by the recent macroeconomic studies; the upper bound by the confidence interval of Sigurdsson [2021]. The particular emphasis on η = 4 reflects that it is the lowest value considered justifiable by Elminejad et al. [2021]. Robustness is important; my ad hoc selections are no substitute for rigorous econometric analysis. I predict that the techniques in this paper will help to analyse models with more realistic labor market frictions and heterogeneity.
H.1.3 α = 2/3
This parameter governs the degree of nominal rigidity. Its bounds are the tightest, reflecting the wide availability of large price databases from statistical agencies, as well as the overwhelming evidence of nominal rigidity. Nevertheless, there are still challenges and some uncertainty in mapping a heterogeneous environment into the single parameter of a benchmark model.
In US data, headline prices are highly flexible, changing around once a quarter on average (Bils and Klenow [2004]). Similar results have been reported for Israel (Baharad and Eden [2004]) and Britain (Bunn and Ellis [2012]). However, prices are appreciably less flexible in the Eurozone, with only 15% changing each month (see Alvarez et al. [2006] and Dhyne et al. [2006]). The consensus is that headline figures overstate the flexibility of prices.
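These frequencies map into the Calvo parameter in the usual way: with α the per-quarter probability of not adjusting, the expected duration of a price spell is 1/(1 − α). As an illustrative back-of-the-envelope calculation rather than a formal calibration, treating price changes as independent across months, the Eurozone figure implies

\[ \alpha \approx (1 - 0.15)^3 \approx 0.61, \qquad \frac{1}{1-\alpha}\bigg|_{\alpha = 2/3} = 3 \text{ quarters} . \]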
The first issue is with sales prices. There is a general concern that they might be orthogonal to macroeconomic developments and, as such, ought to be excluded from calibrations of adjustment frequency here. Nakamura and Steinsson [2008] show that between 60% and 86% of sale prices return to the same level after the sale. There are a couple of theoretical rationales. Kehoe and Midrigan [2015] use a menu cost model to show how the transitory nature of sales reduces their impact on the overall flexibility of the price level. Guimaraes and Sheedy [2011] suggest that firms in direct competition have an incentive to stagger sales, smoothing away much of their effect on inflation.
Using scanner data, Eichenbaum et al. [2011] show that "reference prices" (defined as the modal price in a given quarter) are considerably more sticky than headline prices. They then describe price-setting by a simple and accurate rule and demonstrate that the degree of nominal rigidity roughly mimics a menu cost model calibrated to fit the frequency of reference price adjustments, indicating this is probably the right moment to target. Finally, Anderson et al. [2017] report institutional details suggesting retail sales are predominantly fixed in advance, in addition to econometric evidence that they are unresponsive to economic conditions and are used in some instances to hide permanent price increases. On the other hand, Gorodnichenko and Talavera [2017], Anderson et al. [2017], Dixon et al. [2020], Kryvtsov and Vincent [2021] and Carvalho and Kryvtsov [2021] do find some responsiveness to business cycle conditions. [132] I advocate ignoring these effects in the small noise limit. By way of theoretical justification, consider a model, such as Alvarez and Lippi [2020], where firms pay a fixed cost to change their price plan; in the small noise limit these thresholds would never be met.
Moreover, there are several other arguments in favor of dropping sales. Klenow and Malin [2010] show that aggregation to quarterly frequency, as is standard in DSGE modelling, reduces any biases. It also reduces or removes differences between Anglo-Saxon nations and continental Europe, where discount strategies are less common (see, for example, Berardi et al. [2015] and Sudo et al. [2014]). [133] Finally, Nakamura and Steinsson [2008] show that for the US the median frequency matches our calibration of α = 2/3.
The final issue is heterogeneity. The frequency distribution across sectors is heavily right-skewed, so the median is less than the mean. Bils and Klenow [2004] favored calibrating to the median for models with no ex ante heterogeneity. In models with greater heterogeneity, overall nominal rigidity is dominated by the slow-adjusting sectors. This favors calibrating to a longer price-spell duration, if one is interested in reflecting these concerns (see Dixon and Kara [2011]). This motivates the robustness checks with α = 4/5.
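A hypothetical two-sector example illustrates why the slow adjusters dominate. Suppose half of prices change with monthly frequency 0.3 and half with frequency 0.05. The mean frequency of 0.175 suggests a duration of 1/0.175 ≈ 5.7 months, but the mean duration is

\[ \tfrac{1}{2}\left(\frac{1}{0.3}\right) + \tfrac{1}{2}\left(\frac{1}{0.05}\right) \approx 11.7 \text{ months} , \]

so matching aggregate rigidity in a one-sector model pushes the calibration towards higher α.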
The lower bound α = 3/5 is justified by the recent finding that internet prices are more flexible (see Gorodnichenko et al. [2018]). The overall effect is quite small, since the differences seem to be concentrated in the subset of online-only retailers, according to Cavallo and Rigobon [2016]. There is also the issue that these innovations do not fit the ergodic structure of our model and are only slowly being incorporated into official statistics. Finally, the lower bound is consistent with previous DSGE priors which favoured price flexibility. [134]

H.1.4 Policy Rule

As established earlier in the paper, the previously standard formulation of the policy rule does not yield an equilibrium solution. With previous intuition about the policy rule thoroughly overturned, I decided to aim for maximum robustness. Settings for the output reaction between 0 and 2.5 are considered. I suspect a more tightly bounded set of parameters describing interest rate setting will become available as our understanding of optimal monetary policy develops.
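For concreteness, the robustness settings can be read against a generic Taylor-type rule; this is a standard formulation rather than necessarily the exact rule of the main text, with a_y the output reaction varied across the stated range:

\[ i_t = \bar{\imath} + a_{\pi} \pi_t + a_y y_t, \qquad a_y \in [0, 2.5] . \]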