[Repost] Writing Fundamentals D-15: Background Reading (Unemployment)
(2010-08-09 18:43:13)
Part 15: Unemployment
Few economic indicators are as important as the unemployment rate. A high unemployment rate, such as during the Great Depression, can precipitate tremendous political and legal change. Low unemployment is one of the surest signs of a healthy economy.
To be classified as unemployed by the U.S. Bureau of Labor Statistics (BLS), a person must be jobless, but must also be willing and able to take a job if one were offered, and must have actively looked for work in the preceding four weeks. The unemployment rate is calculated by dividing the number unemployed by the number in the labor force, where the labor force is the sum of the unemployed and the employed. The BLS calculates the unemployment rate monthly by surveying a random sample of about 50,000 households. The unemployment rate is criticized by some because it excludes "discouraged workers," that is, people who do not have jobs and are not actively seeking them because they believe that a job search would be fruitless.
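The BLS definition above reduces to a simple ratio: the unemployed divided by the labor force, where the labor force is the employed plus the unemployed. A minimal sketch of that calculation, using hypothetical headcounts rather than actual BLS survey figures:

```python
# Unemployment rate as defined by the BLS:
#   rate = unemployed / labor force,  labor force = employed + unemployed.
# The input numbers below are illustrative, not real BLS data.
def unemployment_rate(employed, unemployed):
    labor_force = employed + unemployed
    return 100 * unemployed / labor_force

# e.g., 92 million employed, 8 million unemployed
rate = unemployment_rate(92_000_000, 8_000_000)
print(f"{rate:.1f}%")  # prints "8.0%"
```

Note that "discouraged workers," as described above, appear in neither the numerator nor the denominator, which is why their exclusion lowers the measured rate.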
Table 1 contains estimates of the average unemployment rate by decade beginning with the 1890s. The earliest figures come from Stanley Lebergott, who argues that unemployment in the early 1800s was very low—for example, 1 to 3 percent in the 1810s—largely because the economy was dominated by agriculture, self-employment, and slavery. With the industrialization of the economy, the growth of wage labor, and the frequent occurrence of economic recessions and panics, unemployment became a serious problem after the Civil War. Lebergott guesses that unemployment averaged about 10 percent in the 1870s and 4 percent in the 1880s. Beginning in 1890, the decennial census asked questions about unemployment, and Lebergott links these to other economic indicators to provide annual estimates of unemployment. Christina Romer argues that Lebergott's method overstates swings in the unemployment rate, because it incorrectly assumes that changes in employment mirror changes in annual output. This assumption contradicts a persistent relationship, known as Okun's Law, whereby changes in output are typically 2.5 to 3 times larger than changes in unemployment. Romer's estimates of unemployment (1890–1929) are given in the right-hand column. Her assumptions seem more realistic than Lebergott's, but her estimates are still imprecise in comparison to estimates for later years. Figures after 1930 come from the BLS, but they have been criticized too. Michael Darby maintains that official figures vastly overstate unemployment between 1931 and 1942 because they improperly count millions of workers supported by federal work relief programs as unemployed. He argues that these jobs were substantially full-time and paid competitive wages, so these workers should be counted as government employees. (On the other hand, there was substantial part-time work and work-sharing during the Great Depression, which is not reflected in unemployment figures.)
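The Okun's Law relationship cited above can be sketched numerically. The 2.5-to-3 multiplier comes from the text; the function name, its sign convention, and the sample figures are hypothetical illustrations, not part of any official estimate:

```python
# Okun's Law as stated in the text: percentage changes in output are
# typically 2.5 to 3 times larger than changes in the unemployment rate.
# Given an observed change in output, infer the implied change in
# unemployment under an assumed multiplier (hypothetical numbers).
def implied_unemployment_change(output_change_pct, okun_coefficient=2.5):
    # A fall in output implies a rise in unemployment, hence the sign flip.
    return -output_change_pct / okun_coefficient

# If output falls 5 percent, unemployment rises about 2 points:
print(implied_unemployment_change(-5.0))  # prints 2.0
```

This is the arithmetic behind Romer's critique: a method that maps output swings one-for-one into employment swings will exaggerate movements in the unemployment rate by roughly the Okun multiplier.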
Darby's estimates of unemployment for the 1930s and 1940s are given in the right-hand column.
The estimates in Table 1 show that the Great Depression was truly exceptional and that the second half of the twentieth century saw an upward drift in the unemployment rate, with a reversal at the end. Unemployment peaks were reached between 1894 and 1898, when the rate exceeded 10 percent for five years running. A strong spike occurred in 1921—11.7 percent by Lebergott's series, 8.7 percent according to Romer. In 1933 the official unemployment rate was 25 percent overall and exceeded 37 percent among non-farm employees. The highest postwar rate was 9.7 percent in 1982. The lowest rates have occurred during wartime, with a record low of 1.2 percent during World War II. Overall, the unemployment rate averaged about three percentage points lower than normal during wartime.
Economists distinguish among frictional, seasonal, structural, and cyclical unemployment. Frictional unemployment refers to the normal turnover of workers (and firms) in any dynamic market economy. Seasonal unemployment occurs because production in some sectors varies over the year. Structural unemployment refers to the mismatch between workers and jobs. The mismatch can be spatial—for example, entry-level jobs in the suburbs may go begging because unemployed youths in central cities cannot easily get to them, or workers in the rust belt may be unable to find jobs even while there are vacancies they could fill in sunbelt states. Structural unemployment can also be caused by skill-based mismatches—such as when blue-collar workers losing jobs in declining sectors cannot fill high-tech white-collar job vacancies. Many commentators have worried, especially during the Great Depression era, that technological advances would cause continually increasing structural unemployment rates as machines took over people's jobs. The trends in Table 1 show that these fears were ill-founded, especially in the long run, as rising productivity brought rising incomes and demand for new services. Together, frictional and structural unemployment define a natural rate of unemployment, one to which the economy tends in the long run. The natural rate is notoriously difficult to estimate, but it seems to have risen and then fallen in the last four decades of the twentieth century. Part of this change was probably due to demographic forces. Because younger workers generally have higher unemployment rates, the unemployment rate first climbed as the baby boom generation entered the labor force, and then dropped as boomers aged. Another probable part of this change was the restructuring of the economy, with the move away from heavy industry and increased international competition. Cyclical unemployment arises during recessions.
There is no universally accepted theory of the causes of unemployment. Some economists argue that all unemployment is voluntary, because there are always job openings, even in a recession; instead of taking such jobs, the unemployed rationally choose to wait for better offers. Other economists hold that unemployment arises because wages are too high relative to supply and demand. Why don't wages fall to the point where the supply and demand for labor are equal and unemployment disappears? Wage "stickiness"—the failure of wages to fall when demand for labor falls—increased significantly in the late 1800s and has been attributed to rising bargaining power among workers and to employers' fears that cutting wages during a recession would undermine worker morale, harm productivity, and spawn strikes. Furthermore, after World War I, firms shifted toward longer-term relationships with their employees and found that wage cutting could increase turnover and clashed with internal pay structures. Many firms, then, were unwilling to cut wages during a downturn in product demand and responded instead by laying off workers, shielding the majority of employees from the problem. In addition, some laws, such as the Fair Labor Standards Act, which established a minimum wage beginning in 1938, or the wage codes established temporarily under the National Recovery Administration in 1933, can keep wages above the equilibrium level and cause unemployment.
The duration and incidence of unemployment spells changed considerably between the late nineteenth century and the late twentieth century. Unemployment spells were much briefer in the earlier period, but the odds of any individual becoming unemployed were noticeably higher. Compared with workers in the late 1970s, those in 1910 faced a 37 percent higher monthly rate of entry into the ranks of the unemployed. On the other hand, they also had a 32 percent higher rate of exiting unemployment, so the average spell of unemployment lasted less than four months. Data from the late 1800s suggest an even more rapid pace of workers entering and leaving unemployment, with an average unemployment spell lasting about seventy days, much shorter than the late-1970s average of almost half a year. Evidence suggests that nearly 80 percent of employees laid off in the late 1800s were eventually recalled and rehired by their initial employers, a rate that was about the same in the late twentieth century. In the late 1800s and early 1900s, unemployment was influenced by personal characteristics, but to a much smaller degree than in the post–World War II period, when educated, married, middle-aged, and experienced workers had significantly lower unemployment rates than others. Although unemployment was fairly indiscriminate in the earlier period, workers in industries with a high risk of layoff commanded higher wages—usually high enough to fully compensate them for the greater income risks they faced.
In the late twentieth century, the incidence of unemployment differed little by gender, but greatly by race. The nonwhite unemployment rate was 1.8 times higher than the white rate in 1950 and 1970 and 2.2 times higher in 1990. This gap opened up only after World War II—the nonwhite unemployment rate was slightly lower than the white rate in 1890 and 1930 and only 1.15 times higher in 1940. Another significant change has been the gradual decline in seasonal unemployment. In the late 1800s, employment in agriculture was very seasonal, as was manufacturing employment. In 1900 most industries saw considerable employment drops—often 10 to 15 percent—in the winter and again, to a smaller degree, in the summer. Seasonality faded slowly as America industrialized and as technology circumvented the vagaries of climate.
Until the Great Depression, federal and state governments did very little to explicitly combat or ameliorate the effects of unemployment. During the deep recession of the 1890s, for example, almost all the help to the unemployed came from the traditional sources: private charities and local governments. However, in 1935, as part of the Social Security Act, the federal government established a system of unemployment insurance, administered at the state level. The American system of unemployment insurance differs in important respects from that in other developed countries. The economists who framed this legislation, led by John Commons, believed that employers had enough leeway to substantially reduce seasonal and other layoffs, and they constructed a system that included incentives to avoid layoffs. Unemployment insurance taxes were "experience rated," so that firms with higher layoff rates were taxed at higher rates. Evidence suggests that seasonal differences in employment subsequently fell most in the states where experience rating was strongest. Likewise, seasonality in the construction industry fell by two-thirds between 1929 and the 1947–1963 period, a much faster rate of decline than in Canada, where firms were not penalized for laying off workers.
Unemployment insurance in the United States was designed to reduce unemployment and also to provide workers with extra income so that they could continue spending during a job loss and mount effective job searches, rather than accepting substandard jobs. By the standards of other countries, American unemployment insurance has covered a smaller portion of the work force and has provided benefits that are lower in comparison to average wages. Unemployed workers are normally eligible for benefits for twenty-six weeks, although this can be extended to thirty-nine weeks if unemployment in a state is unusually severe or if Congress votes an extension. In comparison, during the postwar period most countries in Western Europe established maximum benefit durations of a year or more. Many economists argue that the generosity of European unemployment insurance helps explain why unemployment rates there surged past the American rate in the 1980s and became about twice as high in the 1990s.
Another way in which government has combated unemployment is by taking an active role in managing the economy. The Employment Act, adopted in 1946, declared the "responsibility of the Federal Government to use all practicable means … to coordinate and utilize all its plans, functions, and resources for the purpose of creating and maintaining … conditions under which there will be afforded useful employment opportunities … and to promote maximum employment." Congress essentially committed itself to "do something" to prevent depressions, recessions, and other macroeconomic malfunctions. During the Great Depression the intellectual underpinnings for such an activist policy were laid out in the writings of British economist John Maynard Keynes, who called for governments to cut taxes or boost spending at the appropriate time to reduce the negative effects of recessions. By the late 1950s some leading economists argued that there was a consistent, stable relationship between inflation and unemployment—the Phillips Curve—which allowed policymakers to keep unemployment perpetually at a low rate: a 3 percent unemployment rate was attainable if we accepted an inflation rate of 7 percent, according to one set of calculations by two future Nobel laureates. Beginning in the late 1960s, however, it became clear that additional government spending on the Vietnam War and new social programs could not push the unemployment rate much below its long-term trend, and that the additional spending fueled accelerating inflation. The U.S. Full Employment and Balanced Growth Act of 1978 (also known as the Humphrey-Hawkins Act) "required" the federal government to pursue the goal of an overall unemployment rate equal to 4 percent. The goal was achieved only briefly, during 2000. By the 1980s the federal government had largely given up on using taxation and expenditures to steer the economy, and the role of macroeconomic stabilization was left primarily to the Federal Reserve.
The Federal Reserve's principal goal, however, appeared to be controlling inflation, rather than reducing unemployment.