Working Papers

Authors: Robert Allen

Jun 2013

This article responds to Professor Jane Humphries' critique of my assessment of the high wage economy of eighteenth-century Britain and its importance for explaining the Industrial Revolution.  New evidence is presented to show that women and children participated in the high wage economy.  It is also shown that the high wage economy helps explain why the Industrial Revolution happened in the eighteenth century: increases in women's wages around 1700 greatly increased the profitability of using spinning machinery.  The relationship between the high wage economy of the eighteenth century and the inequality and poverty of nineteenth-century Britain is explored.

Reference: Number 115

Authors: David Hendry, Jurgen A. Doornik, Felix Pretis

Jun 2013

Using an extension of general-to-specific modelling, based on the recent developments of impulse-indicator saturation (IIS), we consider selecting significant step indicators from a saturating set to capture location shifts.  The approximate non-centrality of the test is derived for a variety of shifts using a 'split-half' analysis, the simplest specialization of a multiple-block search algorithm.  Monte Carlo simulations confirm the accuracy of the nominal significance levels under the null, and show rejections when location shifts occur, with non-null rejection frequencies that improve on those of the corresponding IIS-based and Chow (1960) tests.
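
For intuition, here is a minimal sketch of the split-half idea in Python: the full set of step indicators is entered in two halves, the significant ones from each half are retained, and the union is re-estimated.  The data-generating process, significance level and implementation details are illustrative only; Autometrics implements a far more sophisticated multi-path block search.

```python
# Minimal sketch of split-half step-indicator saturation (SIS).
# Illustrative only: DGP, 1% retention level and the two-block split
# are my choices, not the paper's full algorithm.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
T = 100
y = 1.0 + rng.normal(0, 1, T)
y[60:] += 3.0                              # location shift at t = 60

# Saturating set: one step indicator 1{t >= j} per observation.
steps = np.tril(np.ones((T, T)))[:, 1:]    # drop the all-ones column

def significant(block):
    """Regress y on an intercept plus a block of step indicators,
    return the column indices significant at 1%."""
    res = sm.OLS(y, sm.add_constant(block)).fit()
    return [j for j in range(block.shape[1]) if res.pvalues[j + 1] < 0.01]

half = steps.shape[1] // 2
keep1 = significant(steps[:, :half])                       # first half
keep2 = [half + j for j in significant(steps[:, half:])]   # second half

retained = steps[:, sorted(set(keep1) | set(keep2))]
final = sm.OLS(y, sm.add_constant(retained)).fit()
print(final.params)                        # intercept + retained step sizes
```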

JEL Codes: C51, C22

Keywords: General-to-specific, step-indicator saturation, test power, location shifts, model selection, Autometrics

Reference: 658

Authors: Javier Fernandez-Macho

Jun 2013

This paper examines a test for the null of cointegration in a multivariate system based on the discrepancy between the OLS estimator of the full set of n cointegrating relationships in the n + k system and the OLS estimator of the corresponding relationships among first differences, without making specific assumptions about the short-run dynamics of the multivariate data generating process.  It is shown that the proposed test statistics are asymptotically distributed as standard chi-square with n + k degrees of freedom and are not affected by the inclusion of deterministic terms or dynamic regressors, thus offering a simple way of testing for cointegration under the null without the need for special tables.  Small-sample critical values for these statistics are tabulated using Monte Carlo simulation, and it is shown that these non-residual-based tests exhibit appropriate size and good power even for quite general error dynamics.  In fact, simulation results suggest that they perform quite reasonably when compared to other tests of the null of cointegration.
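
For concreteness, the test idea can be written schematically as a quadratic form in the discrepancy between the two OLS estimators; the notation below is mine, not the paper's.

```latex
% Schematic only; notation is illustrative, not the paper's.
% \hat{d} stacks the (n+k) discrepancies between the OLS estimates of the
% cointegrating relationships in levels and among first differences.
\[
  \xi \;=\; T\,\hat{d}^{\,\prime}\,\hat{V}^{-1}\,\hat{d}
  \;\xrightarrow{\;d\;}\; \chi^{2}_{\,n+k}
  \qquad \text{under the null of cointegration,}
\]
% where \hat{V} consistently estimates the asymptotic covariance of the
% scaled discrepancy; under the alternative the statistic diverges.
```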

JEL Codes: C22, C12

Keywords: Brownian motion, cointegration, econometric methods, integrated process, multivariate analysis, time series models, unit root

Reference: 657

Authors: Ferdinand Rauch

May 2013

This paper shows that Zipf's Law for cities can emerge as a property of a clustering process.  If initially uniformly distributed people choose their location based on a specific gravity equation, as found in trade studies, they will form cities that follow Zipf's Law in expected value.  This view of cities as spatial agglomerations is supported empirically by the observation that larger cities are surrounded by larger hinterland areas and larger countryside populations.
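
A toy simulation conveys the mechanism: agents start uniformly distributed, repeatedly relocate with probabilities given by a gravity-style weight, and the resulting clusters are checked against the rank-size rule.  The functional form, parameters and grid-based city definition below are my illustrative choices, not the paper's calibration.

```python
# Toy clustering process in the spirit of the abstract.
import numpy as np

rng = np.random.default_rng(1)
N = 5_000
pos = rng.uniform(0, 1, size=(N, 2))                # initial uniform locations

for _ in range(10):                                 # rounds of clustering
    idx = rng.choice(N, size=200, replace=False)    # candidate sites, sampled
    sites = pos[idx]                                # from the current density
    # gravity-style weight of each site for each agent: 1 / distance
    d = np.linalg.norm(pos[:, None, :] - sites[None, :, :], axis=2) + 1e-3
    w = 1.0 / d
    w /= w.sum(axis=1, keepdims=True)
    u = rng.uniform(size=(N, 1))
    choice = (w.cumsum(axis=1) > u).argmax(axis=1)  # sample a destination
    pos = sites[choice] + rng.normal(0, 0.01, size=(N, 2))  # settle nearby

# 'Cities' = occupied cells of a coarse grid; fit log(size) on log(rank).
cells = np.floor(pos * 50).astype(int)
_, counts = np.unique(cells[:, 0] * 1000 + cells[:, 1], return_counts=True)
sizes = np.sort(counts)[::-1]
slope = np.polyfit(np.log(np.arange(1, len(sizes) + 1)), np.log(sizes), 1)[0]
print(f"rank-size slope ~ {slope:.2f} (Zipf's Law corresponds to -1)")
```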

JEL Codes: R12

Keywords: Zipf's Law for cities, distribution of city sizes

Reference: 656

Authors: Margaret Meyer, Bruno Strulovici

May 2013

Given two sets of random variables, how can one determine whether the former variables are more interdependent than the latter? This question is of major importance to economists, for example, in comparing how various policies affect systemic risk or income inequality. Moreover, correlation is ill-suited to this task as it is typically not justified by any economic objective.

Economists' interest in interdependence often stems from complementarities (or substitutabilities) in the environment they analyze. This paper studies interdependence using supermodular objective functions: these functions treat their variables as complements, and their expectation increases as the realizations of the variables become more aligned.

The supermodular ordering has a linear structure, which we exploit to obtain tractable characterizations and methods for comparing multivariate distributions, and which we extend when objective functions are also monotonic or symmetric.  We also provide sufficient conditions for comparing random variables generated by common and idiosyncratic shocks or by heterogeneous lotteries, and illustrate our methods with several applications.
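
For concreteness, the standard definitions underlying the ordering (stated in my notation) are:

```latex
% A function w : R^n -> R is supermodular if, for all x, y,
\[
  w(x \vee y) + w(x \wedge y) \;\ge\; w(x) + w(y),
\]
% where \vee and \wedge denote the componentwise maximum and minimum.
% The supermodular ordering then ranks two random vectors X, Y by
\[
  X \preceq_{SPM} Y
  \quad\Longleftrightarrow\quad
  \mathbb{E}[w(X)] \le \mathbb{E}[w(Y)]
  \ \text{for all supermodular } w,
\]
% so Y exhibits (weakly) greater interdependence than X.
```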

Revised August 2015

JEL Codes: D63, D81, G11, G22

Keywords: Interdependence, Supermodularity, Correlation, Copula, Mixture, Majorization, Tournament

Reference: 655

Authors: Jennifer Castle, David Hendry

May 2013

We consider model selection for non-linear dynamic equations with more candidate variables than observations, based on a general class of non-linear-in-the-variables functions, addressing possible location shifts by impulse-indicator saturation.  After an automatic search delivers a simplified congruent terminal model, an encompassing test can be implemented against an investigator's preferred non-linear function.  When that is non-linear in the parameters, such as a threshold model, the overall approach can only be semi-automatic.  The method is applied to re-analyze an empirical model of real wages in the UK over 1860-2004, updated and extended to 2005-2011 for forecast evaluation.
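
The sketch below illustrates the set-up in Python: a nonlinear-in-variables candidate set combined with impulse-indicator saturation, with the indicator set entered in two blocks because the candidates outnumber the observations.  All names, thresholds and the data-generating process are illustrative; Autometrics' multi-path search and the encompassing tests are not reproduced.

```python
# Sketch: nonlinear-in-variables regressors plus impulse-indicator
# saturation (IIS), indicators entered in two blocks since the full
# candidate set exceeds the sample size. Illustrative only.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
T = 120
x = rng.normal(size=T)
y = 0.5 * x + 0.8 * x**2 + rng.normal(0, 0.5, T)
y[70] += 4.0                                    # outlier for IIS to catch

nonlin = np.column_stack([x, x**2, x**3, np.exp(-np.abs(x))])
impulses = np.eye(T)

def keep_significant(block, alpha=0.001):
    """Indices of block columns significant at alpha, given nonlin terms."""
    X = sm.add_constant(np.column_stack([nonlin, block]))
    p = sm.OLS(y, X).fit().pvalues[1 + nonlin.shape[1]:]
    return np.flatnonzero(p < alpha)

half = T // 2
kept = np.concatenate([keep_significant(impulses[:, :half]),
                       half + keep_significant(impulses[:, half:])])

X = sm.add_constant(np.column_stack([nonlin, impulses[:, kept]]))
res = sm.OLS(y, X).fit()
print("retained impulses at t =", kept,
      "; x, x^2 t-stats:", np.round(res.tvalues[1:3], 1))
```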

JEL Codes: C51, C22

Keywords: Non-linear models, location shifts, model selection, autometrics, impulse-indicator saturation

Reference: 654

Authors: H Peyton Young, H.H. Nax, M.N. Burton-Chellew, S.A. West

Apr 2013

Many interactive environments can be represented as games, but they are so large and complex that individual players are in the dark about others' actions and the payoff structure.  This paper analyzes learning behavior in such 'black box' environments, where players' only source of information is their own history of actions taken and payoffs received.  Specifically, we study voluntary contributions games.  We identify two robust features of the players' learning dynamics: search volatility and trend-following.  These features are clearly present when players have no information about the game, but also when players have full information.  Convergence to Nash equilibrium occurs at about the same rate in both situations.
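
As a purely stylized illustration of the two features (not the authors' estimated model), the following simulation lets players in a linear voluntary contributions game follow payoff trends while occasionally experimenting at random; contributions drift toward the zero-contribution Nash equilibrium.

```python
# Stylized 'black box' learning in a repeated voluntary contributions
# game: players observe only their own contribution and payoff.
# The adjustment rule is illustrative, not the authors' model.
import numpy as np

rng = np.random.default_rng(3)
n, e, m, rounds = 4, 10.0, 0.4, 200      # m < 1: free-riding is dominant
c = rng.uniform(0, e, n)                 # initial contributions
last_dir = rng.choice([-1.0, 1.0], n)
last_pay = np.full(n, -np.inf)

for _ in range(rounds):
    pay = e - c + m * c.sum()            # linear public goods payoff
    improved = pay > last_pay
    # trend-following: keep direction if payoff rose, reverse otherwise
    last_dir = np.where(improved, last_dir, -last_dir)
    step = 0.5 * last_dir
    # search volatility: occasional random experimentation
    jump = rng.random(n) < 0.1
    step = np.where(jump, rng.uniform(-2, 2, n), step)
    last_pay = pay
    c = np.clip(c + step, 0, e)

print("final contributions:", np.round(c, 2), "(Nash: all zero)")
```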

JEL Codes: C70, C73, C91, D83, H41

Keywords: learning, information, public goods games

Reference: 653

Authors: Simon GB Cowan

Apr 2013

The welfare and output effects of monopoly third-degree price discrimination are analyzed when inverse demand functions are parallel.  Welfare is higher with discrimination than with a uniform price when demand functions are derived from the logistic distribution, and from a more general class of distributions.  The sufficient condition in Varian (1985) for a welfare increase holds for these demand functions.  Total output is higher with discrimination for a large set of demand functions including those derived from strictly log-concave distributions with increasing cost pass-through, such as the normal, logistic and extreme value, and standard log-convex demands.
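
A small numerical example (my construction, not the paper's) illustrates the comparison for logistic-derived demands with a parallel shift between two markets:

```python
# Two markets with parallel inverse demands from logistic valuations,
# q_i(p) = 1 - F(p - a_i): welfare under a uniform monopoly price vs
# third-degree price discrimination. Shifts and cost are illustrative.
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.integrate import quad

def q(p, a):                       # demand from logistic valuations
    return 1.0 / (1.0 + np.exp(p - a))

shifts = [0.0, 2.0]                # parallel shift between the markets
cost = 0.0

def profit_uniform(p):
    return -sum((p - cost) * q(p, a) for a in shifts)

def welfare(prices):
    w = 0.0
    for p, a in zip(prices, shifts):
        cs = quad(lambda s: q(s, a), p, 40)[0]     # consumer surplus
        w += cs + (p - cost) * q(p, a)             # + profit
    return w

p_u = minimize_scalar(profit_uniform, bounds=(0, 20), method="bounded").x
p_d = [minimize_scalar(lambda p: -(p - cost) * q(p, a),
                       bounds=(0, 20), method="bounded").x for a in shifts]

print(f"uniform price {p_u:.2f}, discriminatory prices "
      f"{p_d[0]:.2f}/{p_d[1]:.2f}")
print(f"welfare: uniform {welfare([p_u, p_u]):.3f}, "
      f"discrimination {welfare(p_d):.3f}")
```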

JEL Codes: D42, L12, L13

Keywords: Third-degree price discrimination, monopoly, social welfare, output

Reference: 652

Authors: C. Knick Harley

Apr 2013

Modern economic growth first emerged in Britain about the time of the Industrial Revolution, with its cotton textile factories, urban industrialization and export-oriented industrialization.  A period of economic growth, industrial diversification and export orientation preceded the Industrial Revolution.  This export orientation revolved around an Americanization of British trade, for which the slave colonies of the Caribbean were central.  Eric Williams explored the extent to which this export economy based on West Indian slavery contributed to the coming of the Industrial Revolution.  His claim that profits from the slave trade were crucial to the Industrial Revolution has not stood up to critical evaluation.  Nonetheless, modern speculations regarding endogenous growth plausibly postulate that manufacturing, urbanization, and a powerful merchant class all have a favourable impact on growth.  These hypotheses need careful consideration.

What set the British colonial empire apart from its rivals was not the quality of its sugar colonies but the involvement of the temperate colonies on the North American mainland.  Unlike the slave colonies created to exploit staple exports, English emigrants to the northern mainland sought to establish independent settlements.  These colonies lacked staple products, and residents financed imports by exploiting the opportunities the empire provided for shipping and merchandising, and by compensating for the lack of a European market for their timber and temperate agricultural products by exporting to the sugar colonies, which, in turn, concentrated on their export staples.  The British Empire was unique, and its development provided an important, growing, diversified and relatively wealthy market for British manufactured goods that all other empires lacked.  Although the mainland colonies financed their imports of British manufactured goods by integrating into the slave-based British Atlantic, it seems likely that in the absence of opportunities in the slave colonies the mainland colonies would have imported similar amounts of British manufactured goods.

Reference: Number 113

Authors: Alexandre de Corniere, Greg Taylor

Mar 2013

Competition authorities all over the world worry that integration between search engines (mainly Google) and publishers could lead to abuses of dominant position.  In particular, one concern is that of own-content bias, meaning that Google would bias its rankings in favor of the publishers it owns or has an interest in, to the detriment of competitors and users.  In order to investigate this issue, we develop a theoretical framework in which the search engine (i) allocates users across publishers, and (ii) competes with publishers to attract advertisers.  We show that the search engine is biased against publishers that display many ads - even without integration.  Although integration may lead to own-content bias, it can also reduce bias by increasing the value of a marginal consumer to the search engine.  Integration also has a positive effect on users by reducing the nuisance costs due to excessive advertising.  Its net effect is therefore ambiguous in general, and we provide sufficient conditions for it to be desirable or not.

JEL Codes: L1, L4, L86

Keywords: Search engine, integration, advertising

Reference: 651

Authors: Alexandre de Corniere, Romain De Nijs

Mar 2013

An online platform makes a profit by auctioning an advertising slot that appears whenever a consumer visits its website.  Several firms compete in the auction, and consumers differ in their preferences.  Prior to the auction, the platform gathers data which is statistically correlated with consumers' tastes for products.  We study the implications of the platform's decision to allow potential advertisers to access the data about consumers' characteristics before they bid.  On top of the familiar trade-off between rent extraction and efficiency, we identify a new trade-off: the disclosure of information leads to a better matching between firms and consumers, but results in a higher equilibrium price on the product market.  We find that the equilibrium price is an increasing function of the number of firms.  As the number of firms becomes large, it is always profitable for the platform to disclose the information, but this need not be efficient, because of the distortion caused by the higher prices.  When the quality of the match represents vertical shifts in the demand function, we provide conditions under which disclosure is optimal.

JEL Codes: D4

Keywords: Online advertising, privacy, information disclosure, auctions

Reference: 650

Authors: Alexandre de Corniere

Mar 2013

Search engines enable advertisers to target consumers based on the query they have entered.  In a framework with horizontal product differentiation, imperfect product information and in which consumers incur search costs, I study a game in which advertisers have to choose a price and a set of relevant keywords.  The targeting mechanism brings about three kinds of efficiency gains, namely lower search costs, better matching, and more intense product market price-competition.  A monopolistic search engine charges advertisers too high a price, and has incentives to provide a suboptimal matching quality.  Competition among search engines eliminates the latter distortion, but exacerbates the former.

JEL Codes: D43, D83, L13, M37

Keywords: Search engine, targeted advertising, consumer search

Reference: 649

Authors: Marta Troya-Martinez

Mar 2013

This paper uses a vertical relational contract between two firms to explore the implications of trade credit when the ability to repay is not observed by the supplier.  Trade credit limits the supplier's possibilities to punish the cashless downstream firms, and termination may be used in equilibrium.  We find that the supplier always sells too little, despite having enough instruments to fix the double marginalization problem.  The downward distortion in the quantity results from the need to make the contract self-enforcing and/or to tackle the asymmetric information problem.  The distortion remains even as the firms become arbitrarily patient, and a larger discount factor does not necessarily translate into higher welfare.  We show that the optimal contract resembles a simple debt contract: if the fixed repayment is met, the contract continues to the next period.  Otherwise, the manufacturer asks for the highest possible repayment and terminates for a number of periods.  The toughness of the termination policy decreases with the repayment.

JEL Codes: C73, D82, L14

Keywords: Relational contracts, trade credit, imperfect monitoring

Reference: 648

Authors: Tim Willems

Mar 2013

In an environment where voters face an inference problem about the competence of policy makers, this paper shows how subjecting these policy makers to reelection can reduce the degree of policy experimentation to the benefit of the status quo.  This may be a reason why some notable policy experiments were implemented by non-accountable regimes (cf. Chile and China).  Whether experimentation in representative democracies is suboptimally low depends on society's degree of risk aversion relative to that of the decision maker.  If the level of experimentation is suboptimally low, taking decisions by direct democracy, or electing risk-loving politicians, could improve welfare.  Interestingly, risk-lovers also seem to be overrepresented among presidents of various countries.

JEL Codes: D72, D83

Keywords: Policy experimentation, learning, political economy, reform, status quo bias, career concerns

Reference: 647

Authors: S.D.Smith, Martin Forster

Feb 2013

This study estimates the impact of agency on the efficiency of sugar plantations on St. Vincent and the Grenadines during the early 19th century.  Using a panel data set covering the years 1814-1829, a series of stochastic frontier models is estimated to investigate whether estates employing agents were more technically efficient than those managed by the owners themselves.  Multiple imputation methods are used to deal with missing data problems.  There is no evidence, in any of the models estimated, to suggest that estates under agency were less efficient than those directed by their owners.  Estimates from a number of models suggest that agent-operated estates were more efficient.
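
For readers unfamiliar with the method, here is a minimal cross-sectional stochastic frontier sketch in Python, using the Aigner-Lovell-Schmidt half-normal specification on simulated data; the paper's panel structure, agency covariates and multiple imputation are not reproduced.

```python
# Minimal stochastic production frontier (half-normal inefficiency),
# estimated by maximum likelihood on simulated placeholder data.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(4)
n = 500
x = rng.normal(size=n)                              # log input
u = np.abs(rng.normal(0, 0.4, n))                   # inefficiency >= 0
v = rng.normal(0, 0.2, n)                           # noise
y = 1.0 + 0.7 * x + v - u                           # log output

def negloglik(theta):
    b0, b1, lsv, lsu = theta
    sv, su = np.exp(lsv), np.exp(lsu)               # enforce positivity
    sigma = np.hypot(sv, su)
    lam = su / sv
    eps = y - b0 - b1 * x
    # ALS (1977) density of eps = v - u:
    #   f(eps) = (2/sigma) phi(eps/sigma) Phi(-eps*lam/sigma)
    ll = (np.log(2) - np.log(sigma) + norm.logpdf(eps / sigma)
          + norm.logcdf(-eps * lam / sigma))
    return -ll.sum()

res = minimize(negloglik, x0=[0.0, 0.0, np.log(0.3), np.log(0.3)],
               method="Nelder-Mead")
b0, b1, lsv, lsu = res.x
print(f"frontier: {b0:.2f} + {b1:.2f} x;  sigma_v={np.exp(lsv):.2f}, "
      f"sigma_u={np.exp(lsu):.2f}")
```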

Reference: Number 112

Authors: C. Knick Harley

Feb 2013

Modern economic growth

Reference: Number 111

Authors: Debopam Bhattacharya, Pascaline Dupas, Shin Kanaya

Feb 2013

Regular use of effective health products such as insecticide-treated mosquito nets (ITNs) by a household benefits its neighbors by (a) reducing chances of infection and (b) raising awareness about product effectiveness, thereby increasing product use.  Because of their potential social benefits and high purchase price, which cause free-riding and sub-optimal private procurement, such products may be subsidized in developing countries through means-testing.  Owing to the associated spillover effects, cost-benefit analysis of such subsidies requires modelling the behavioral responses of both the subsidized household and its neighbors.  Using experimental data from Kenya where subsidies were randomized, coupled with GPS-based location information, we show how to estimate aggregate ITN use resulting from means-tested subsidies in the presence of such spatial spillovers.  Accounting for spillovers introduces infinite-dimensional estimated regressors corresponding to continuously distributed location coordinates and makes the inference problem novel.  We show that even if individual ITN use unambiguously increases with increasing incidence of subsidy in the neighborhood, ignoring spillovers may over- or under-predict overall ITN use resulting from a specific targeting rule, depending on the resulting aggregate incidence of subsidy.  Applying our method to the Kenyan data, we find that (i) individual ITN use rises with neighborhood subsidy rates, (ii) under means-testing, predicted ITN use is a convex increasing function of subsidy incidence and (iii) ignoring spillovers implies a nearly linear increasing relationship, leading to over-estimation of ITN use at lower and under-estimation at higher subsidy rates.
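
The first construction step implied by the abstract, a neighborhood subsidy rate built from GPS coordinates, can be sketched as follows; the kernel, bandwidth and variable names are my illustrative choices, and the paper's three-step estimator and bootstrap inference are not reproduced.

```python
# Kernel-weighted neighborhood subsidy rate from location coordinates.
# Illustrative sketch on simulated placeholder data.
import numpy as np

rng = np.random.default_rng(5)
n = 1000
coords = rng.uniform(0, 10, size=(n, 2))       # household locations (km)
subsidy = (rng.random(n) < 0.3).astype(float)  # randomized subsidy flag

def neighborhood_rate(coords, subsidy, bandwidth=1.0):
    """Epanechnikov-weighted share of subsidized neighbors (self excluded)."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
    w = np.maximum(1 - (d / bandwidth) ** 2, 0.0)   # Epanechnikov kernel
    np.fill_diagonal(w, 0.0)                        # exclude own household
    return w @ subsidy / np.clip(w.sum(axis=1), 1e-12, None)

rate = neighborhood_rate(coords, subsidy)
print("mean neighborhood subsidy rate:", rate.mean().round(3))
```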

JEL Codes: I18, H23, H42, C21

Keywords: Treatment effect, policy targeting, spillover, externality, overlapping neighborhood, social learning, experimental data, three-step estimation, bootstrap validity, Kenya

Reference: 646

Authors: Kevin Sheppard, Lily Liu, Andrew J. Patton

Feb 2013

We study the accuracy of a wide variety of estimators of asset price variation constructed from high-frequency data (so-called "realized measures"), and compare them with a simple "realized variance" (RV) estimator.  In total, we consider almost 400 different estimators, applied to 11 years of data on 31 different financial assets spanning five asset classes, including equities, equity indices, exchange rates and interest rates.  We apply data-based ranking methods to the realized measures and to forecasts based on these measures.  When 5-minute RV is taken as the benchmark realized measure, we find little evidence that it is outperformed by any of the other measures.  When using inference methods that do not require specifying a benchmark, we find some evidence that more sophisticated realized measures significantly outperform 5-minute RV.  In forecasting applications, we find that a low frequency "truncated" RV outperforms most other realized measures.  Overall, we conclude that it is difficult to significantly beat 5-minute RV.
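
The benchmark in this horse race is easy to state in code: 5-minute realized variance is simply the sum of squared log returns sampled every five minutes.  The sketch below uses simulated ticks in place of real data.

```python
# 5-minute realized variance (RV) from high-frequency prices.
import numpy as np
import pandas as pd

def realized_variance(prices: pd.Series, rule: str = "5min") -> float:
    """RV = sum of squared log returns sampled at the given frequency."""
    p = prices.resample(rule).last().dropna()
    r = np.log(p).diff().dropna()
    return float((r ** 2).sum())

# Simulated one-day price path in place of real tick data.
idx = pd.date_range("2013-02-01 09:30", "2013-02-01 16:00", freq="s")
logp = np.cumsum(np.random.default_rng(6).normal(0, 2e-4, len(idx)))
rv = realized_variance(pd.Series(100 * np.exp(logp), index=idx))
print(f"one-day 5-minute RV: {rv:.6e}")
```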

JEL Codes: C58, C22, C53

Keywords: Realized variance, volatility forecasting, high frequency data

Reference: 645

Authors: Mary Elisabeth Cox

Feb 2013

At the beginning of the First World War, the British imposed a blockade against Germany intending to prevent all imports from entering the country.  Germans began to call the British naval action the Hungerblockade, claiming that it seriously damaged the well-being of those on the home front, namely women and children, through lack of adequate nutrition.  These German claims that Britain used hunger as a weapon of war against civilians have sometimes been dismissed as propaganda.  However, newly discovered anthropometric measurements of German school children made during the war give credence to German contentions that the blockade inflicted severe deprivation on children and other non-combatants.  Further, these data show that the blockade exacerbated existing nutritional inequalities between children of different social classes; working class children suffered the most profound effects of nutritional deprivation during the war.  Once the blockade ended, however, working class children were the quickest to recover, regaining their pre-war standards in weight by 1921.  They surpassed their own pre-war height standards by 1923, and approximated the weight of middle class children by 1924.  This recovery of working class children is likely due to the outpouring of international aid targeted at poor German children.  These data also indicate significant gender inequalities in nutritional status starting at age fourteen, with male adolescents suffering far greater deprivation from 1914 to 1924.

Reference: Number 110

Authors: Neil Shephard

Feb 2013

I discuss models which allow the local level model, which rationalised exponentially weighted moving averages, to have a time-varying signal/noise ratio.  I call this a martingale component model.  This makes the rate of discounting of data local.  I show how to handle such models effectively using an auxiliary particle filter which deploys M Kalman filters run in parallel competing against one another.  Here one thinks of M as being 1,000 or more.  The model is applied to inflation forecasting.  The model generalises to unobserved component models where Gaussian shocks are replaced by martingale difference sequences.
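
A simplified sketch of the 'M Kalman filters run in parallel' idea: each particle carries a candidate signal/noise ratio and its own Kalman recursion, and particle weights track the one-step predictive likelihood.  Here the ratio is held fixed within each particle purely for illustration; in the paper it is time-varying and the filter is a genuine auxiliary particle filter.

```python
# Bank of M Kalman filters for the local level model
#   y_t = mu_t + eps_t,  mu_t = mu_{t-1} + eta_t,  Var(eta)/Var(eps) = q.
# Each particle holds a fixed q; this is a simplification of the
# time-varying-q model and auxiliary particle filter in the paper.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(7)
T, M = 300, 1000
# Simulated data with a mid-sample change in the signal/noise ratio.
q_true = np.where(np.arange(T) < 150, 0.01, 0.5)
mu = np.cumsum(np.sqrt(q_true) * rng.normal(size=T))
y = mu + rng.normal(size=T)

q = np.exp(rng.uniform(np.log(1e-3), np.log(2.0), M))  # particle ratios
a = np.zeros(M)            # filtered mean of mu, one per Kalman filter
P = np.ones(M) * 10.0      # filtered variance
logw = np.zeros(M)

for t in range(T):
    F = P + q + 1.0                        # one-step predictive variance
    v = y[t] - a                           # prediction error
    logw += norm.logpdf(v, scale=np.sqrt(F))
    K = (P + q) / F                        # Kalman gain
    a, P = a + K * v, (P + q) * (1 - K)    # measurement update
    # resample the bank when the weights degenerate
    w = np.exp(logw - logw.max()); w /= w.sum()
    if 1.0 / (w ** 2).sum() < M / 2:
        idx = rng.choice(M, M, p=w)
        q, a, P = q[idx], a[idx], P[idx]
        logw = np.zeros(M)

w = np.exp(logw - logw.max()); w /= w.sum()
print("posterior mean signal/noise ratio at T:", float((w * q).sum()))
```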

JEL Codes: C01, C14, C58, D53, D81

Keywords: Auxiliary particle filter, EM algorithm, EWMA, forecasting, Kalman filter, likelihood, martingale unobserved component model, particle filter, stochastic volatility

Reference: 644

Authors: David Hendry, Felix Pretis

Feb 2013

We demonstrate major flaws in the statistical analysis of Beenstock, Reingewertz and Paldor (2012), discrediting their initial claims as to the different degrees of integrability of CO2 and temperature.

JEL Codes: C1, Q5

Keywords: Econometric modelling, location shifts, data measurements, climate change

Reference: 643

Authors: H Peyton Young, Paul Glasserman

Feb 2013

Interconnections among financial institutions create potential channels for contagion and amplification of shocks to the financial system.  We propose precise definitions of these concepts and analyze their magnitude.  Contagion occurs when a shock to the assets of a single firm causes other firms to default through the network of obligations; amplification occurs when losses among defaulting nodes keep escalating due to their indebtedness to one another.  Contagion is weak if the probability of default through contagion is no greater than the probability of default through independent direct shocks to the defaulting nodes.  We derive a general formula which shows that, for a wide variety of shock distributions, contagion is weak unless the triggering node is large and/or highly leveraged compared to the nodes it topples through contagion.  We also estimate how much the interconnections between nodes increase total losses beyond the level that would be incurred without interconnections.  A distinguishing feature of our approach is that the results do not depend on the specific topology: they hold for any financial network with a given distribution of bank sizes and leverage levels.  We apply the framework to European Banking Authority data and show that both the probability of contagion and the expected increase in losses are small under a wide variety of shock distributions.  Our conclusion is that the direct transmission of shocks through payment obligations does not have a major effect on defaults and losses; other mechanisms such as loss of confidence and declines in credit quality are more likely sources of contagion.
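
A toy default cascade conveys the contagion mechanism (my sketch; the paper's analysis is analytical and holds for any topology): a shock wipes out one node's outside assets, and a node defaults whenever accumulated losses exceed its net worth, passing its shortfall pro rata to its creditors.

```python
# Toy threshold-contagion cascade on a random network of obligations.
# Sizes, sparsity and capital ratios are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(8)
n = 10
L = rng.uniform(0, 2, (n, n)) * (rng.random((n, n)) < 0.3)  # L[i,j]: i owes j
np.fill_diagonal(L, 0.0)
outside_assets = rng.uniform(5, 10, n)
net_worth = 0.1 * (outside_assets + L.sum(axis=0))          # thin capital

loss = np.zeros(n)
loss[0] = outside_assets[0] * 2                # large shock topples node 0
defaulted = np.zeros(n, dtype=bool)

while True:
    newly = (loss > net_worth) & ~defaulted
    if not newly.any():
        break
    defaulted |= newly
    # each new defaulter passes its shortfall to creditors, pro rata,
    # capped at its total interbank liabilities
    for i in np.flatnonzero(newly):
        owed = L[i]
        if owed.sum() > 0:
            shortfall = min(loss[i] - net_worth[i], owed.sum())
            loss += shortfall * owed / owed.sum()

print("defaulted nodes:", np.flatnonzero(defaulted))
```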

JEL Codes: D85, G21

Keywords: Systemic risk, contagion, financial network

Reference: 642

Authors: Neil Shephard

Feb 2013

I discuss models which allow the local level model, which rationalised exponentially weighted moving averages, to have a time-varying signal/noise ratio.  I call this a martingale component model.  This makes the rate of discounting of data local.  I show how to handle such models effectively using an auxiliary particle filter which deploys M Kalman filters run in parallel competing against one another.  Here one thinks of M as being 1,000 or more.  The model is applied to inflation forecasting.  The model generalises to unobserved component models where Gaussian shocks are replaced by martingale difference sequences.

Keywords: Auxiliary particle filter, EM algorithm, EWMA, forecasting, Kalman filter, likelihood, martingale unobserved component model, particle filter, stochastic volatility

Reference: 2013-W01

