
Glossary



L1

The set of Lebesgue-integrable real-valued functions on [0,1].

Source: econterms

L2

A Hilbert space with inner product (x,y) = integral of x(t)y(t) dt.
Equivalently, L2 is the space of real-valued random variables that have variances. This is an infinite-dimensional space.

Source: econterms

Ln

The set of continuous bounded functions with domain R^n.

Source: econterms

labor

"[L]abor economics is primarily concerned with the behavior of employers and employees in response to the general incentives of wages, prices, profits, and nonpecuniary aspects of the employment relationship, such as working conditions."

Source: econterms

labor market outcomes

Shorthand for worker (never employer) variables that are often considered endogenous in a labor market regression. Such variables, which often appear on the right-hand side of such regressions, include wage rates, employment dummies, and employment rates.

Source: econterms

labor productivity

Quantity of output per unit of time spent or per worker employed. Could be measured in, for example, U.S. dollars of output per hour.

Source: econterms

labor theory of value

"Both Ricardo and Marx say that the value of every commodity is (in perfect equilibrium and perfect competition) proportionaly to the quantity of labor contained in the commodity, provided this labor is in accordance with the existing standard of efficiency of production (the 'socially necessary quantity of labor'). Both measure this quantity in hours of work and use the same method in order to reduce different qualities of work to a single standard." And neither accounts well for monopoly or imperfect competition. (Schumpeter, p 23)

Source: econterms

labor-augmenting

One of the ways in which an effectiveness variable could be included in a production function in a Solow model. If effectiveness A is multiplied by labor L but not by capital K, then we say the effectiveness variable is labor-augmenting.
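For instance (a standard illustration, not from the entry itself):

```latex
Y = F(K, AL), \qquad \text{e.g.}\quad Y = K^{\alpha} (AL)^{1-\alpha},
```

so a rise in effectiveness A acts like a proportional rise in the labor input; capital-augmenting effectiveness would instead enter as F(AK, L).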

Source: econterms

LAD

Stands for 'Least absolute deviations' estimation.

LAD estimation can be used to estimate a smooth conditional median function; that is, an estimator for the median of the process given the data. Say the data are stationary {x_t, y_t}. The dependent variable is y and the independent variable is x. The criterion function to be minimized in LAD estimation for each observation t is:
q(x_t, y_t, θ) = |y_t - m(x_t, θ)|

where m() is a guess at the conditional median function.

Under conditions specified in Wooldridge, p 2657, the LAD estimator here is Fisher-consistent for the parameters of the median function.
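As a numerical illustration (a minimal sketch, not from the source: the linear median function m(x, θ) = θ_0 + θ_1 x, the data, and all values are hypothetical), the criterion can be minimized directly:

```python
# LAD sketch: estimate a linear conditional median function
# m(x, theta) = theta0 + theta1 * x by minimizing sum_t |y_t - m(x_t, theta)|.
# Data and true parameter values here are hypothetical.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=200)
y = 1.0 + 2.0 * x + rng.standard_t(df=3, size=200)   # heavy-tailed errors

def lad_criterion(theta):
    # Sum over observations t of |y_t - m(x_t, theta)|.
    return np.sum(np.abs(y - (theta[0] + theta[1] * x)))

res = minimize(lad_criterion, x0=np.zeros(2), method="Nelder-Mead")
print(res.x)   # estimates of (theta0, theta1); should be near (1, 2)
```

Nelder-Mead is used here because the absolute-value criterion is not differentiable at zero, which can trip up gradient-based optimizers.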

Source: econterms

lag operator

Denoted L. Operates on an expression by moving the subscripts on a time series back one period, so: L e_t = e_{t-1}. Why? Because it can make some expressions easier to manipulate. For example, it turns out one can write an MA(2) process (which see) in lag polynomials (which see) like this: e_t = (1 + p_1 L + p_2 L^2) u_t, and then divide both sides by the lag polynomial and get a legal, meaningful, correct expression.

Source: econterms

lag polynomial

A polynomial expression in lag operators (which see). Example: (1 - p_1 L + p_2 L^2), where L^2 = LL, the lag operator L applied twice. These are useful for manipulating time series. For example, one can quickly show an AR(1) is equivalent to an MA(infinity) by dividing both sides by the lag polynomial (1 - pL).
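As a worked version of that last claim (standard time-series algebra, not quoted from the source; ρ here plays the role of the entry's p), for |ρ| < 1:

```latex
(1 - \rho L)\, y_t = u_t
\quad\Longrightarrow\quad
y_t = (1 - \rho L)^{-1} u_t
    = \Big( \sum_{j=0}^{\infty} \rho^j L^j \Big) u_t
    = \sum_{j=0}^{\infty} \rho^j u_{t-j},
```

which exhibits the AR(1) on the left as an MA(infinity) on the right.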

Source: econterms

Lagrangian multiplier

An algebraic term that arises in the context of problems of mathematical optimization subject to constraints, which in economics contexts is sometimes called a shadow price.

A long example: Suppose x represents a quantity of something that an individual might consume, u(x) is the utility (satisfaction) gained by that individual from the consumption of quantity x. We could model the individual's choice of x by supposing that the consumer chooses x to maximize u(x):

x = arg max_x u(x)

Suppose however that the good is not free, so the choice of x must be constrained by the consumer's income. That leads to a constrained optimization problem ............ [Ed.: this entry is unfinished]
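A minimal sketch of where the argument appears headed, assuming a hypothetical linear budget constraint px ≤ m that is not in the original entry:

```latex
\max_x \; u(x) \quad \text{s.t.} \quad px \le m
\qquad\Longrightarrow\qquad
\mathcal{L}(x, \lambda) = u(x) + \lambda (m - px),
\qquad u'(x^*) = \lambda p .
```

The new variable λ is the Lagrangian multiplier: at the optimum it equals the marginal utility of an additional unit of income, which is the sense in which it is a shadow price.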

Source: econterms

LAN

Stands for 'locally asymptotically normal', a characteristic that a family of distributions may have.

Source: econterms

large sample

Usually a synonym for 'asymptotic' rather than a reference to an actual sample magnitude.

Source: econterms

Laspeyres index

A price index following a particular algorithm.

It is calculated from a set ('basket') of fixed quantities of a finite list of goods. We are assumed to know the prices in two different periods. Let the price index be one in the first period, which is then the base period. Then the value of the index in the second period is equal to this ratio: the total price of the basket of goods in period two divided by the total price of exactly the same basket in period one.
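In symbols (a direct restatement of the ratio just described, writing q_i for the fixed basket quantities and p_{1,i}, p_{2,i} for the period-one and period-two prices):

```latex
P_{\text{Laspeyres}} = \frac{\sum_i p_{2,i}\, q_i}{\sum_i p_{1,i}\, q_i},
```

so the index is one in the base period by construction.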

As for any price index, if all prices rise the index rises, and if all prices fall the index falls.

Source: econterms

Law of iterated expectations

Often exemplified by E_t[E_{t+1}(·)] = E_t(·). That is, "one cannot use limited information [at time t] to predict the forecast error one would make if one had superior information [at t+1]." -- Campbell, Lo, and MacKinlay, p 23.

Source: econterms

LBO

Leveraged buy-out. The act of taking a public company private by buying it with the proceeds of bond issues, then using the revenues of the company to pay off the bonds.

Source: econterms

Learning process

Consider a repeated play of a finite game. In each period, every player observes the history of past actions, and forms a belief about the other players' strategies. He then chooses a best response according to his belief about the other players' strategies. We call such a process a learning process.

Source: SFB 504

least squares learning

The kind of learning that an agent in a model exhibits by adapting to past data: the agent runs least squares on the data to estimate a hypothesized parameter and behaves as if that parameter estimate were correct.

Source: econterms

leisure

In some models, individuals spend some time working and the rest is lumped into a category called leisure, the details of which are usually left out.

Source: econterms

lemons model

Describes models like that of Akerlof's 1970 paper, in which the fact that a good is available suggests that it is of low quality. For example, why are used cars for sale? In many cases because they are "lemons," that is, they were problematic to their previous owners.

Source: econterms

Leontief production function

Has the form q = min{x_1, x_2}, where q is a quantity of output and x_1 and x_2 are quantities of inputs or functions of the quantities of inputs.

Source: econterms

leptokurtic

An adjective describing a distribution with high kurtosis. 'High' means the fourth central moment is more than three times the square of the second central moment; such a distribution has greater kurtosis than a normal distribution. This term is used in Bollerslev-Hodrick 1992 to characterize stock price returns.
Lepto- means 'slim' in Greek and refers to the central part of the distribution.

Source: econterms

Lerman ratio

A government benefit to the underemployed will presumably reduce their hours of work. The ratio of the actual increase in income to the benefit is the Lerman ratio, which is ordinarily between zero and one. Moffitt (1992) estimates it in regard to the U.S. AFDC program at about .625.

Source: econterms

Lerner index

A measure of the profitability of a firm that sells a good: (price - marginal cost) / price.

One estimate, from Domowitz, Hubbard, and Petersen (1988) is that the average Lerner index for manufacturing firms in their data was .37.

Source: econterms

leverage ratio

Meaning differs by context. Often: the ratio of debts to total assets. Can also be the ratio of debts (or long-term debts in particular, excluding for example accounts payable) to equity.

Normally used to describe a firm's accounts, but could describe the accounts of some other organization, or an individual, or a collection of organizations.

Source: econterms

Leviathan

The all-powerful kind of state that Hobbes thought "was necessary to solve the problem of social order." -- Cass R. Sunstein, "The Road from Serfdom" The New Republic Oct 20, 1997, p 37.

Source: econterms

Liability of newness

The liability of newness phenomenon describes how an organization's risk of dying changes over its life course. It states that the risk of dying is highest at the point of founding and decreases as the organization grows older. There are basically three reasons why this might be the case (see Stinchcombe, 1965):
New organizations which are acting in new areas ask for new roles to be performed by their members. Learning the new roles takes time and leads to economic inefficiencies.
Trust among the organizational members has yet to be developed, since in most cases the new employees of a firm do not know each other when the organization is founded.
New organizations have not yet built stable portfolios of clients.

These considerations can, at least in some respects, also apply to the new rules of an organization. A new rule also implies new roles that have to be learned, and members have to develop trust towards the new rule. According to this theoretical concept, a new organizational rule should also have its highest risk of being abolished just after its introduction (see Schulz, 1993).

Source: SFB 504

Lifecycle hypothesis

The life-cycle hypothesis presents a well-defined linkage between the consumption plans of an individual and her income and income expectations as she passes from childhood, through the work participating years, into retirement and eventual decease. Early attempts to establish such a linkage were made by Irving Fisher (1930) and again by Harrod (1948) with his notion of hump saving, but a sharply defined hypothesis which carried the argument forward both theoretically and empirically with its range of well-specified tests for cross-section and time series evidence was first advanced by Modigliani & Brumberg (1954). Both their paper and advanced copies of the permanent income theory of Milton Friedman (1957) were circulating in 1953. Both the Modigliani-Brumberg and the Friedman theories are referred to as life-cycle theories.

The main building block of life-cycle models is the saving decision, i.e., the division of income between consumption and saving. The saving decision is driven by preferences between present and future consumption (or the utility derived from consumption). Given the income stream the household receives over time, the sequence of optimal consumption and saving decisions over the entire life can be computed. Note that the standard life-cycle model as presented here is firmly grounded in expected utility theory and assumes rational behavior.

The typical shape of the income profile over the life cycle starts with low income during the early years of the working life, then income increases until a peak is reached before retirement, while pension income during retirement is substantially lower. To make up for the lower income during retirement and to avoid a sharp drop in utility at the point of retirement, individuals will save some fraction of their income during their working life and dissave during retirement. This results in a hump-shaped savings profile over the life cycle, the main prediction of the life-cycle theory.

Unfortunately, this prediction does not hold in actual household behavior. It is fair to say the reasons for this failure of the simple life-cycle model are still not understood. Rodepeter & Winter (1998) provide empirical evidence for Germany and discuss some extensions of the life-cycle model that might help to understand actual savings behavior. An important direction of current research tries to apply elements of behavioral economics to life-cycle savings decisions.

Source: SFB 504

Lifecycle hypothesis: a review of the literature

This review of the literature on life-cycle consumption and saving decisions is adapted from Fisher (1987).

The life-cycle hypothesis presents a well-defined linkage between the consumption plans of an individual and her income and income expectations as she passes from childhood, through the work participating years, into retirement and eventual decease. Early attempts to establish such a linkage were made by Irving Fisher (1930) and again by Harrod (1948) with his notion of hump saving, but a sharply defined hypothesis which carried the argument forward both theoretically and empirically with its range of well-specified tests for cross-section and time series evidence was first advanced by Modigliani & Brumberg (1954). Both their paper and advanced copies of the permanent income theory of Milton Friedman (1957) were circulating in 1953 and led M.R. Fisher (1956) to carry out tests of the theories even preceding publication of Friedman's work. Both the Modigliani-Brumberg and the Friedman theories are referred to as life-cycle theories, and they certainly have many similar implications, but the one more closely related to the life cycle, with its emphasis on age (that of Modigliani and Brumberg), is the one on which the following review concentrates.

The key which rendered the multi-period analysis tractable under subjective certainty was the specification that the life-time utility function be homothetic; this permitted planned consumption for each future period to be written as a function of expected wealth as seen at the planning date, the functional parameters being in no way dependent upon wealth, but upon age and tastes. The authors further sharpened their hypothesis. They specified that an individual would plan to consume the same amount in real discounted terms each year. Throughout, desired bequest and initial assets were set to zero. However, the authors did show that bequests could be accounted for within the homothetic utility function itself if that became necessary.

From the outset, such a sharp hypothesis was desired for empirical testing. For Modigliani at least, a propelling influence had been the debate about the explanatory power of the Keynesian consumption function for forecasting postwar consumption and income. The inadequacies revealed had led already to several refined theories, notably by Duesenberry (1949) and by Modigliani (1949) himself. In the 1940s, cross-section studies had been carefully carried through at the National Bureau of Economic Research (NBER), and empirical results from these studies were promoting theoretical insights. Any new theory had to be consistent with these findings. The tighter specification of the hypothesis enabled the spelling out of the pattern of accumulating savings in the working years to finance the retirement years: hump savings. Assuming that the real income of each member of the population-wide sample remained the same throughout working life, it was shown that the saving-income ratio was independent of the age and income distribution and dependent only on the proportion of retirement years to expected lifetime. This alerted economists to the fact that cross-section results do not directly translate into estimates of the marginal propensity to save of an individual planning function. This insight is of broader significance, not confined to the simple hypothesis. The implications of the hypothesis for time series analysis were disseminated much more slowly, as the companion paper to that on cross-section interpretation was never published, accounts not being freely available until 1963 and the original text itself not until 1980.

Real consumption, including the depreciation of durable goods, is a proportion of expected real wealth, and wealth is the sum of initial assets at the planning date, current income, and expected (discounted) future income. By then assuming that the proportionality factor referred to is identical across individuals, they devised an aggregate relation for each and every age group. Next they proceeded to aggregate across age groups. Here the proportionality factor, depending as it does on age, is not independent of assets, and bias may be introduced. If the strictest set of assumptions used in the cross-section analysis is employed, the authors show that when aggregated real income follows an exponential growth trend, the parameters of the aggregate relation remain constant over time. They are, however, sensitive to the magnitude of the growth rate of real income (a sum of growth rates in productivity and population), the saving-income ratio being larger the greater the rate of income growth.

If income and/or assets at any time move out of line with previous planning expectations, plans can be revised. Suppose income rises, yet income expectations are not revised, the change being viewed as a one-off event. Then the individual marginal propensity to save at that date would rise to finance subsequent consumption at a higher level until death. If income expectations were revised upwards permanently, then the marginal propensity to save would also rise, but to a lesser degree than in the one-off case, as higher consumption can more easily be provided for out of later-period incomes. Allowance for income variability is straightforward in cross section; with time series, expected income, here labor income, may be set equal to a weighted average of aggregated past and expected future income, or subdivided according to whether the reference is to employed or unemployed consumers at any time (Modigliani & Ando (1963)).

Source: SFB 504

LIFT

Acronym for "Let It Function Today", a concept very comparable to rationality (for a repeated discussion see Bogart, 1985):


    Everyone believes it exists, although some pessimistic critics say it exists only theoretically and has no everyday value whatsoever (e.g. Lotterbottel, 1983);
    It has just one entry (the economist view), but still people can get into it coming from very different places or levels. Superficially, these levels all look the same (red), but they really are dependent on context factors (the psychologist view);
    It is at the core of the SonderForschungsBereich. However, there will never be more than four people being able to use it at the same time. The probability is high that this also is the time when the bell rings and the concept breaks down (Hausmeister, 1952);
    People are really into it and they talk a lot about it (Funk & Stoer, 1997). Behavioral observation has proved, however, that in fact nobody gets in (although some people report spiritual experiences of "being in a flight-like state" or "getting closer to the heavenly Geschäftsstelle" or being "lifted up", while others believe in "the key"). Instead people circle around it using the dissatisficing strategy of climbing the stairs of experimental simulation;
    It is supposed to work perfectly, but it could happen that at some point it wouldn't. Therefore it is not worked with preventively. As one result the concept just never works, as a second result people sweat a whole lot;
    There is some speculation about what would happen if it worked at some point, but empirical evidence for these theories is still weak (Autorenkollektiv, 1997).

Source: SFB 504

likelihood function

In maximum likelihood estimation, the likelihood function (often denoted L()) is the joint probability function of the sample, given the probability distributions that are assumed for the errors. That function is constructed by multiplying the pdf of each of the data points together:
L(θ) = L(θ; X) = f(X; θ) = f(X_0; θ) f(X_1; θ) ... f(X_N; θ)
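For concreteness, a minimal sketch (hypothetical iid normal data; the log-parameterization of the scale is one common choice, not prescribed by the source) that forms the log of this product and maximizes it numerically:

```python
# Maximum likelihood for an iid N(mu, sigma^2) sample: the likelihood is the
# product of the pdfs f(X_i; theta), so minimize the negative log-likelihood.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(1)
X = rng.normal(loc=3.0, scale=2.0, size=500)   # hypothetical sample

def neg_log_likelihood(theta):
    mu, log_sigma = theta                      # log-parameterize so sigma > 0
    return -np.sum(norm.logpdf(X, loc=mu, scale=np.exp(log_sigma)))

res = minimize(neg_log_likelihood, x0=np.array([0.0, 0.0]), method="Nelder-Mead")
print(res.x[0], np.exp(res.x[1]))              # estimates near (3.0, 2.0)
```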

Source: econterms

Limdep

A program for the econometric study of limited dependent variables. Limdep's web site is at 'http://www.limdep.com'.

Source: econterms

limited dependent variable

A dependent variable in a model is limited if it is discrete (can take on only a countable number of values) or if it is not always observed because it is truncated or censored.

Source: econterms

LIML

Stands for Limited Information Maximum Likelihood, an estimation approach.

Source: econterms

Lindeberg-Levy Central Limit Theorem

For {w_t} an iid sequence with E[w_t] = μ and var(w_t) = σ^2:
Let W = the average of the T values w_t. Then T^(1/2)(W - μ)/σ converges in distribution to a N(0,1) distribution as T goes to infinity.
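A quick simulation (hypothetical, using uniform draws) illustrating the statement: the standardized sample means are approximately N(0,1) once T is large.

```python
# Lindeberg-Levy CLT check: standardized means of iid Uniform(0,1) draws.
import numpy as np

rng = np.random.default_rng(2)
T, reps = 1_000, 5_000
mu, sigma = 0.5, np.sqrt(1.0 / 12.0)                 # mean and sd of Uniform(0,1)

W = rng.uniform(0, 1, size=(reps, T)).mean(axis=1)   # sample averages
Z = np.sqrt(T) * (W - mu) / sigma                    # standardized means

print(Z.mean(), Z.std())   # should be approximately 0 and 1
```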

Source: econterms

linear algebra

The branch of mathematics concerned with vector spaces and the linear mappings between them: vectors, matrices, linear transformations, and systems of linear equations.

Source: econterms

linear model

An econometric model is linear if it is expressed in an equation in which the parameters enter linearly, whether or not the data require nonlinear transformations to arrive at that equation.

Source: econterms

linear pricing schedule

Say the number of units, or quantity, paid for is denoted q, and the total paid is denoted T(q), following the notation of Tirole. A linear pricing schedule is one that can be characterized by T(q)=pq for some price-per-unit p.

For alternative pricing schedules see nonlinear pricing or affine pricing schedule.

Source: econterms

linear probability models

Econometric models in which the dependent variable is a probability between zero and one. These are easier to estimate than probit or logit models but usually have the problem that some predictions will not be in the range of zero to one.

Source: econterms

Linear separability

The method typically used to combine the attribute weights was adapted from Tversky's (1977) contrast model of similarity. The attribute weights are assumed to be independent and combined by adding (that means they are linearly separable).

Source: SFB 504

link function

Defined in the context of the generalized linear model, which see.

Source: econterms

Lipschitz condition

A function g:R->R satisfies a Lipschitz condition if
|g(t_1) - g(t_2)| <= C|t_1 - t_2|
for some constant C. For a fixed C we could say this is "the Lipschitz condition with constant C."

A function that satisfies the Lipschitz condition for a finite C is said to be Lipschitz continuous, which is a stronger condition than regular continuity; it means that the slope is never so steep as to fall outside the range (-C, C).

Source: econterms

Lipschitz continuous

A function is Lipschitz continuous if it satisfies the Lipschitz condition for a finite constant C. Lipschitz continuity is a stronger condition than regular continuity. It means that the slope is never outside the range (-C, C).

Source: econterms

liquid

A liquid market is one in which it is not difficult or costly to buy or sell.

More formally, Kyle (1985), following Black (1971), describes a liquid market as "one which is almost infinitely tight, which is not infinitely deep, and which is resilient enough so that prices eventually tend to their underlying value."

Source: econterms

liquidity

A property of a good: a good is liquid to the degree it is easily convertible, through trade, into other commodities. Liquidity is not a property of the commodity itself but something established in trading arrangements.

Source: econterms

liquidity constraint

Many households, e.g. young ones, cannot borrow to consume or invest as much as they would want, but are constrained to current income by imperfect capital markets.

Source: econterms

liquidity trap

A Keynesian idea. When expected returns from investments in securities or real plant and equipment are low, investment falls, a recession begins, and cash holdings in banks rise. People and businesses then continue to hold cash because they expect spending and investment to be low. This is a self-fulfilling trap.

See also Keynes effect and Pigou effect.

Source: econterms


Ljung-Box test

Same as portmanteau test.

Source: econterms

locally identified

Linear models are either globally identified or there are an infinite number of observably equivalent ones. But for models that are nonlinear in parameters, "we can only talk about local properties." Thus the idea of locally identified models, which can be distinguished in data from any other 'close by' model. "A sufficient condition for local identification is that" a certain Jacobian matrix is of full column rank.

Source: econterms

locally nonsatiated

An agent's preferences are locally nonsatiated if arbitrarily close to any consumption bundle there is another bundle that is strictly preferred. Preferences that are continuous and strictly increasing in all goods are locally nonsatiated.

Source: econterms

log

In the context of economics, log always means 'natural log', that is, log_e, where e is the natural constant approximately equal to 2.718281828. So x = log y <=> e^x = y.

Source: econterms

log utility

A utility function. Some versions of it are used often in finance.
Here is the simplest version. Define U() as the utility function and w as wealth. Then
U(w) = ln(w)

is the log utility function. Scaled versions U(w) = a ln(w), where a is a positive scalar parameter, also appear.

Source: econterms

log-concave

A function f(w) is said to be log-concave if its natural log, ln(f(w)), is a concave function; that is, assuming f is differentiable, f''(w)f(w) - f'(w)^2 <= 0. Since log is a strictly concave function, any concave function is also log-concave.

A random variable is said to be log-concave if its density function is log-concave. The uniform, normal, beta, exponential, and extreme value distributions have this property. If a pdf f() is log-concave, then so are its cdf F() and 1 - F(). The truncated version of a log-concave function is also log-concave.

In practice the intuitive meaning of the assumption that a distribution is log-concave is that (a) it does not have multiple separate maxima (although it could be flat on top), and (b) the tails of the density function are not "too thick".

An equivalent definition, for vector-valued random variables, is in Heckman and Honore, 1990, p 1127. Random vector X is log-concave iff its density f() satisfies the condition that f(ax_1 + (1-a)x_2) ≥ [f(x_1)]^a [f(x_2)]^(1-a) for all x_1 and x_2 in the support of X and all a satisfying 0 ≤ a ≤ 1.

Source: econterms

log-convex

A random variable is said to be log-convex if its density function is log-convex. Pareto distributions with finite means and variances have this property, and so do gamma densities with a coefficient of variation greater than one. [Ed.: I do not know the intuitive content of the definition.] A log-convex random vector is one whose density f() satisfies the condition that f(ax_1 + (1-a)x_2) ≤ [f(x_1)]^a [f(x_2)]^(1-a) for all x_1 and x_2 in the support of X and all a satisfying 0 ≤ a ≤ 1.

Source: econterms

Logic of conversation

Inferring the pragmatic meaning of a semantic utterance requires going beyond the information given. "In making these inferences, speakers and listeners rely on a set of tacit assumptions that govern the conduct of conversation in everyday life" (Schwarz, 1994, p. 124). According to Grice (1975) these assumptions can be expressed by four maxims which constitute the "co-operative principle". "First, a maxim of quantity demands that contributions are as informative as required, but not more informative than required. Second, a maxim of quality requires participants to provide no information they believe is false or lack adequate evidence for. Third, according to a maxim of relation, contributors need to be relevant for the aims of the ongoing interaction. Finally, a maxim of manner states that contributors should be clear, rather than obscure or ambiguous" (Bless, Strack & Schwarz, 1993, p. 151). These maxims have been demonstrated to have a pronounced impact on how individuals perceive and react to semantically presented social situations and problem scenarios.

Source: SFB 504

logistic distribution

Has the cdf F(x) = 1/(1 + e^(-x)).
This distribution is quicker to calculate than the normal distribution but is very similar. Another advantage over the normal distribution is that it has a closed-form cdf. The pdf is f(x) = e^x (1 + e^x)^(-2) = F(x)F(-x).

Source: econterms

logit model

A univariate binary model. That is, for a dependent variable y_i that can be only one or zero, and a continuous independent variable x_i:
Pr(y_i = 1) = F(x_i'b)
Here b is a parameter to be estimated, and F is the logistic cdf. The probit model is the same but with a different cdf for F.
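A minimal sketch (hypothetical parameter and regressor values) of the probability this model assigns to y_i = 1:

```python
# Logit probability: Pr(y_i = 1) = F(x_i'b), with F the logistic cdf.
import numpy as np

def logistic_cdf(z):
    return 1.0 / (1.0 + np.exp(-z))

b = np.array([0.5, -1.2])      # hypothetical parameter vector
x_i = np.array([1.0, 0.3])     # hypothetical regressors (1.0 is an intercept)
print(logistic_cdf(x_i @ b))   # Pr(y_i = 1) for this observation
```

Replacing logistic_cdf with the standard normal cdf would give the corresponding probit probability.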

Source: econterms

lognormal distribution

Let X be a random variable with a standard normal distribution. Then the variable Y = e^X has a lognormal distribution.
Example: Yearly incomes in the United States are roughly log-normally distributed.

Source: econterms

longitudinal data

A synonym for panel data.

Source: econterms

Lorenz curve

Used to discuss concentration of suppliers (firms) in a market. The horizontal axis is divided into as many pieces as there are suppliers. Often it is given a percentage scale going from 0 to 100. The firms are placed in order of increasing size. On the vertical axis are the market sales in percentage terms, from 0 to 100. The Lorenz curve is a graph of the cumulative sales of all the firms to the left of each point on the horizontal axis.

So (0,0) and (100,100) are the endpoints of the Lorenz curve, and it is weakly convex, and piecewise linear, between them. See also Gini coefficient.
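A small sketch (hypothetical sales figures) of the points such a curve passes through:

```python
# Lorenz curve points: firms in increasing order of size on the horizontal
# axis; the vertical axis is the cumulative percentage of market sales.
import numpy as np

sales = np.array([5.0, 5.0, 10.0, 15.0, 25.0, 40.0])   # hypothetical firm sales
sales = np.sort(sales)                                  # increasing size

firms_pct = np.arange(1, len(sales) + 1) / len(sales) * 100
sales_pct = np.cumsum(sales) / sales.sum() * 100

for f, s in zip(firms_pct, sales_pct):
    print(f"{f:5.1f}% of firms account for {s:5.1f}% of sales")
```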

Source: econterms

loss function

Or, 'criterion function.' A function that is minimized to achieve a desired outcome. Often econometricians minimize the sum of squared errors in making an estimate of a function or a slope; in this case the loss function is the sum of squared errors. One might also think of agents in a model as minimizing some loss function in their actions that are predicated on estimates of things such as future prices.

Source: econterms

lower hemicontinuous

Informal mnemonic: "no appearing points." A correspondence is lower hemicontinuous at a point if every element of its image there can be approached by elements of the images of nearby points, so that no image point suddenly appears exactly at that point.

Source: econterms

LRD

Longitudinal Research Database, at the U.S. Bureau of the Census. Used in the study of labor and productivity. The data is not publicly available without special certification from the Census. The LRD extends back to 1982.

Source: econterms

Lucas critique

A criticism of econometric evaluations of U.S. government policy as they existed in 1973, made by Robert E. Lucas. "Keynesian models consisted of collections of decision rules for consumption, investment in capital, employment, and portfolio balance. In evaluating alternative policy rules for the government,.... those private decision rules were assumed to be fixed.... Lucas criticized such procedures [because optimal] decision rules of private agents are themselves functions of the laws of motion chosen by the government.... policy evaluation procedures should take into account the dependence of private decision rules on the government's ... policy rule." In Cochrane's language: "Lucas argued that policy evaluation must be performed with models specified at the level of preferences ... and technology [like discount factor beta and permanent consumption c* and exogenous interest rate r], which presumably are policy invariant, rather than decision rules which are not." [I believe the canonical example is: what happens if government changes marginal tax rates? Is the response of tax revenues linear in the change, or is there a Laffer curve to the response? Thus stated, this is an empirical question.]

Source: econterms
