The End of Normal: The Great Crisis and the Future of Growth
Author: James K. Galbraith • Language: English • Paperback – 4 November 2015
The years since the Great Crisis of 2008 have seen slow growth, high unemployment, falling home values, chronic deficits, a deepening disaster in Europe, and a stale argument between two false solutions: "austerity" on one side and "stimulus" on the other. Both sides, and practically all analyses of the crisis so far, take for granted that the economic growth from the early 1950s until 2000 (interrupted only by the troubled 1970s) represented a normal performance. From this perspective, the crisis was an interruption, caused by bad policy or bad people, and full recovery is to be expected if the cause is corrected.
The End of Normal challenges this view. Placing the crisis in perspective, Galbraith argues that the 1970s already ended the age of easy growth. The 1980s and 1990s saw only uneven growth, with rising inequality within and between countries. And the 2000s saw the end even of that, despite frantic efforts to keep growth going with tax cuts, war spending, and financial deregulation. When the crisis finally came, stimulus and automatic stabilization were able to place a floor under economic collapse. But they are not able to bring about a return to high growth and full employment. In The End of Normal, "Galbraith puts his pessimism into an engaging, plausible frame. His contentions deserve the attention of all economists and serious financial minds across the political spectrum" (Publishers Weekly).
Price: 52.96 lei
Old price: 100.00 lei
-47% • New
10.14€ • 10.54$ • 8.41£
Book temporarily unavailable
Specifications
ISBN-10: 1451644930
Pages: 304
Dimensions: 140 x 213 x 20 mm
Weight: 0.2 kg
Publisher: Simon & Schuster
Collection: Simon & Schuster
Excerpt
One
Growth Now and Forever
To begin to understand why the Great Financial Crisis broke over an astonished world, one needs to venture into the mentality of the guardians of expectation—the leadership of the academic economics profession—in the years before the crisis. Most of today’s leading economists received their formation from the late 1960s through the 1980s. But theirs is a mentality that goes back further: to the dawn of the postwar era and the Cold War in the United States, largely as seen from the cockpits of Cambridge, Massachusetts, and Chicago, Illinois. It was then, and from there, that the modern and still-dominant doctrines of American economics emerged.
To put it most briefly, these doctrines introduced the concept of economic growth and succeeded, over several decades, in conditioning most Americans to the belief that growth was not only desirable but also normal, perpetual, and expected. Growth became the solution to most (if not quite all) of the ordinary economic problems, especially poverty and unemployment. We lived in a culture of growth; to question it was, well, countercultural. The role of government was to facilitate and promote growth, and perhaps to moderate the cycles that might, from time to time, be superimposed over the underlying trend. A failure of growth became unimaginable. Occasional downturns would occur—they would now be called recessions—but recessions would be followed by recovery and an eventual return to the long-term trend. That trend was defined as the potential output, the long-term trend at high employment, which thus became the standard.
To see what was new about this, it’s useful to distinguish this period both from the nineteenth-century Victorian mentality described by Karl Marx in Capital or John Maynard Keynes in The Economic Consequences of the Peace, and from the common experience in the first half of the twentieth century.
To the Victorians, the ultimate goal of society was not economic growth as we understand it. It was, rather, investment or capital accumulation. Marx put it in a phrase: “Accumulate, accumulate! That is Moses and the Prophets!” Keynes wrote: “Europe was so organized socially and economically as to secure the maximum accumulation of capital . . . Here, in fact, lay the main justification of the capitalist system. If the rich had spent their new wealth on their own enjoyments, the world would have long ago found such a régime intolerable. But like bees they saved and accumulated” (Keynes 1920, 11).
But accumulate for what? In principle, accumulation was for profits and for power, even for survival. It was what capitalists felt obliged to do by their economic and social positions. The purpose of accumulation was not to serve the larger interest of the national community. It was not to secure a general improvement in living standards. The economists of the nineteenth century did not hold out great hopes for the progress of living standards. The Malthusian trap (population outrunning resources) and the iron law of wages were dominant themes. These held that in the nature of things, wages could not exceed subsistence for very long. And even as resources became increasingly abundant, the Marxian dynamic—the extraction of surplus value by the owners of capital—reinforced the message that workers should expect no sustained gains. Competition between capitalists, including the introduction of machinery, would keep the demand for labor and the value of wages down. Marx again:
“Like every other increase in the productiveness of labour, machinery is intended to cheapen commodities, and, by shortening that portion of the working-day, in which the labourer works for himself, to lengthen the other portion that he gives, without an equivalent, to the capitalist. In short, it is a means for producing surplus-value.” (Marx 1974, vol. 1, ch. 15, 351)
Yet living standards did improve. That they did so—however slowly, as Keynes later noted—was a mystery for economists at the time. The improvement might be attributed to the growth of empires and the opening of new territories to agriculture and mining, hence the importance of colonies in that era. But in the nineteenth century, economics taught that such gains could only be transitory. Fairly soon population growth and the pressure of capitalist competition on wages would drive wages down again. Even a prosperous society would ultimately have low wages, and its working people would be poor. This grim fatalism, at odds though it was with the facts in Europe and America, was the reason that economics was known as the “dismal science.”
Then came the two great wars of the twentieth century, along with the Russian Revolution and the Great Depression. Human and technical capabilities surged, and (thanks to the arrival of the age of oil) resource constraints fell away. But while these transformations were under way, and apart from the brief boom of the 1920s, material conditions of civilian life in most of the industrial countries declined, or were stagnant, or were constrained by the exigencies of wartime. The Great Depression, starting in the mid-1920s in the United Kingdom and after 1929 in the United States, appeared to signal the collapse of the Victorian accumulation regime—and with it, the end of the uneasy truce and symbiotic relationship between labor and capital that had graced the prewar years. Now the system itself was in peril.
For many, the question then became: could the state do the necessary accumulation instead? This was the challenge of communism, which in a parallel universe not far away showed its military power alongside its capacity to inspire the poor and to accelerate industrial development. In some noncommunist countries, democratic institutions became stronger—as they tend to do when governments need soldiers—giving voice to the economic aspirations of the whole population. For social democrats and socialists, planning was the new alternative—a prospect that horrified Friedrich von Hayek, who argued in 1944 that planning and totalitarianism were the same.
By the 1950s, communism ruled almost half the world. In the non-communist part, it could no longer be a question of building things up for a distant, better future. Entire populations felt entitled to a share of the prosperity that was at hand—for instance, to college educations, to automobiles, and to homes. To deny them would have been dangerous. Yet the future also could not be neglected, and (especially given the communist threat) no one in the “free world” thought that the need for new investments and still greater technological progress was over. Therefore it was a matter of consuming and investing in tandem, so as to have both increased personal consumption now and the capacity for still greater consumption later on. This was the new intellectual challenge, and the charm, and the usefulness to Cold Warriors, of the theory of economic growth.
The Golden Years
From 1945 to 1970, the United States enjoyed a growing and generally stable economy and also dominance in world affairs. Forty years later, this period seems brief and distant, but at the time it seemed to Americans the natural culmination of national success. It was the start of a new history, justified by victory in war and sustained in resistance to communism. That there was a communist challenge imparted a certain no-nonsense pragmatism to policy, empowering the Cold War liberals of the Massachusetts Institute of Technology (MIT) and the RAND Corporation while driving the free-market romantics of Chicago (notably Milton Friedman) to the sidelines. Yet few seriously doubted that the challenge could or should be met. The United States was the strongest country, the most advanced, the undamaged victor in world war, the leader of world manufacturing, the home of the great industrial corporation, and the linchpin of a new, permanent, stable architecture of international finance. These were facts, not simply talking points, and it took a brave and even self-marginalizing economist, willing to risk professional isolation in the mold of Paul Baran and Paul Sweezy, to deny them.
Nor were optimism and self-confidence the preserve of elites. Ordinary citizens agreed, and to keep them in fear of communism under the circumstances required major investments in propaganda. Energy was cheap. Food was cheap, with (thanks to price supports) staples such as milk and corn and wheat in great oversupply. Interest rates were low and credit was available to those who qualified, and so housing, though modest by later standards, was cheap enough for whites. Jobs were often unionized, and their wages rose with average productivity gains. Good jobs were not widely open to women, but the men who held them had enough, by the standards of the time, for family life. As wages rose, so did taxes, and the country could and did invest in long-distance roads and suburbs. There were big advances in childhood health, notably against polio but also measles, mumps, rubella, tuberculosis, vitamin deficiencies, bad teeth, and much else besides. In many states, higher education was tuition-free in public universities with good reputations. Though working-class white America was much poorer than today and much more likely to die poor, there had never been a better time to have children. And there never would be again. Over the eighteen years of the baby boom, from 1946 to 1964, the fruits of growth were matched by a rapidly rising population to enjoy them.
It was in this spirit that, in the 1950s, economists invented the theory of economic growth. The theory set out to explain why things were good and how the trajectory might be maintained. Few economists in the depression-ridden and desperate 1930s would have considered wasting time on such questions, but now they seemed critical: What did growth depend on? What were the conditions required for growth to be sustained? How much investment could you have without choking off consumption and demand? How much consumption could you have without starving the future? The economists’ answer would be that, in the long run, economic growth depended on three factors: population growth, technological change, and saving.
It was not a very deep analysis, and its principal authors did not claim that it was. In the version offered by Robert Solow, the rate of population growth was simply assumed. It would be whatever it happened to be—rising as death rates came under control, and then falling again, later on, as fertility rates also declined, thanks to urban living and birth control. Thomas Robert Malthus, the English parson who in 1798 had written that population would always rise, so as to force wages back down to subsistence, was now forgotten. How could his theory possibly be relevant in so rich a world?
Technology was represented as the pure product of science and invention, available more or less freely to all as it emerged. This second great simplification enabled economists to duck the question of where new machinery and techniques came from. In real life, of course, new products and processes bubbled up from places like Los Alamos and Bell Labs and were mostly built into production via capital investment and protected by patents and secrecy. Big government gave us the atom bomb and the nuclear power plant; big business gave us the transistor. Working together, the two gave us jets, integrated circuits, and other wonders, but the textbooks celebrated James Watt and Thomas Edison and other boy geniuses and garage tinkerers, just as they would continue to do in the age of Bill Gates and Steve Jobs, whose products would be just as much the offshoots of the work of government and corporate labs.
With both population and technology flowing from the outside, the growth models were designed to solve for just one variable, and that was the rate of saving (and investment). If saving could be done at the right rate, the broad lesson of the growth model was that good times could go on. There was what the model called a “steady-state expansion path,” and the trick to staying on it was to match personal savings with the stock of capital, the growth of the workforce, and the pace of progress. Too much saving, and an economy would slip back into overcapacity and unemployment. Too little, and capital—and therefore growth—would dry up. But with just the right amount, the economy could grow steadily and indefinitely, with a stable internal distribution of income. The task for policy, therefore, was only to induce the right amount of saving. This was not a simple calculation: economists made their reputations working out what the right value (the “golden rule”) for the saving rate should be. But the problem was not impossibly complex either, and it was only dimly realized (if at all) that its seeming manageability was made possible by assuming away certain difficulties.
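The logic can be made concrete with a minimal sketch in the Solow style, using a Cobb-Douglas form that is an editorial illustration rather than anything in Galbraith's text. Write output per effective worker as y = k^α, and let s be the saving rate, n population growth, g technical progress, and d depreciation. Capital per effective worker then settles at a steady state, and the "golden rule" is the saving rate that maximizes steady-state consumption:

k* = (s / (n + g + d))^(1/(1−α))   (steady-state capital per effective worker)
c* = (k*)^α − (n + g + d)·k*   (steady-state consumption)

Maximizing c* requires the marginal product of capital, α·(k*)^(α−1), to equal n + g + d, which under Cobb-Douglas happens exactly when s = α. Too high a saving rate overshoots this point and too low a rate undershoots it, which is precisely the balancing act described above.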
The idea that unlimited growth and improvement were possible, with each generation destined to live better than the one before, was well suited to a successful and optimistic people. It was also what their leaders wanted them to believe; indeed, it was a sustaining premise of the postwar American vision. Moreover, there was an idea that this growth did not come necessarily at the expense of others; it was the product of the right sort of behavior and not of privilege and power. Tracts such as Walt W. Rostow’s Stages of Economic Growth spread the message worldwide: everyone could eventually go through “take-off” and reach the plateau of high mass consumption.I Capitalism, suitably tamed by social democracy and the welfare state, could deliver everything communism promised, and more. And it could do it without commissars or labor camps.
A curiosity of the models was the many things they left out. The “factors of production” were “labor” and “capital.” Labor was just a measure of time worked, limited only by the size of the labor force and expected to grow exponentially with the human population. Capital (a controversial construct, subject to intense debate in the 1950s) was to be thought of as machinery, made from labor, measured essentially as the amalgam of the past human effort required to build the machines. As every textbook would put it, if Y is output, K is capital, and L is labor, then:
Y = f(K, L)
This simple equation said only that output was a function of two inputs: capital and labor. Note that, in this equation, resources and resource costs did not appear.II
The notion of production, therefore, was one of immaculate conception: an interaction of machinery with human hands but operating on nothing. Economists (Milton Friedman, notably) sometimes expressed this model as one in which the only goods produced were, actually, services—an economy of barbershops and massage parlors, so to speak. How this fiction passed from hand to hand without embarrassment seems, in deep retrospect, a mystery. The fact that in the physical world, one cannot actually produce anything without resources passed substantially unremarked, or covered by the assumption that resources are drawn freely from the environment and then disposed of equally freely when no longer needed. Resources were quite cheap and readily available—and as the theory emerged, the problem of pollution only came slowly into focus. Climate change, though already known to scientists, did not reach economics at all. It would have been one thing to build a theory that acknowledged abundance and then allowed for the possibility that it might not always hold. It was quite another to build up a theory in which resources did not figure.
Even the rudimentary and catch-all classical category “land” and its pecuniary accompaniment, rent, were now dropped. There were no more landlords in the models and no more awkward questions about their role in economic life. This simplification helped make it possible for enlightened economists to favor land reform in other countries, while ignoring the “absentee owners” at home, to whom a previous, cynical generation had called attention. Keynes had ended his The General Theory of Employment, Interest, and Money in 1936 with the thought that rentiers might be “euthanized.” Now they were forgotten; theory focused simply on the division of income between labor and capital, wages and profits.
Government played no explicit role in the theory of growth. It was usually acknowledged as necessary in real life, notably for the provision of “public goods” such as military defense, education, and transport networks. But since the problem of depressions had been cured—supposedly—there was no longer any need for Keynes’s program of deficit-financed expenditure on public works or jobs programs; at least not for the purpose of providing mass employment. Fiscal and monetary policies were available, though, for the purpose of keeping growth “on track”—a concept referred to as “fine-tuning” or “countercyclical stabilization.” Regulation could be invoked as needed to cope with troublesome questions of pollution and monopoly (such as price-fixing by Big Steel), but the purpose of that was to make the system resemble as much as possible the economists’ competitive dream world. Beyond those needs, regulation was accordingly a burden, a drag on efficiency, to be accepted where necessary but minimized.
The models supported the system in two complementary ways. They portrayed a world of steady growth and also of fundamental fairness. Both labor and capital were said to be paid in line with their contributions (at the margin) to total output. This required the special assumption that returns to scale were constant. If you doubled all inputs, you’d get twice the output. While the omnipresent real-world situations of “diminishing returns” (in farming) and “increasing returns” (in industry) lived on and could still be captured in the mathematics, most economists presented them as special cases and, for the most part, more trouble than they were worth. (This author’s teacher, Nicholas Kaldor of the University of Cambridge, was an exception.) As for inequality, while the basic theory posited a stable distribution, Simon Kuznets—who was not a romantic—offered a more realistic but still reassuring analysis based on the history of industrial development in the United States and Great Britain. Inequality would rise in the transition from agriculture to industry, but it would then decline with the rise to power of an industrial working class and middle class and the social democratic welfare state.
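The fairness claim rests on a small piece of algebra worth spelling out; the following is a standard textbook derivation, not a passage from the book. Constant returns to scale means

f(λK, λL) = λ·f(K, L)   for any λ > 0,

and for such a function Euler's theorem gives

Y = (∂f/∂K)·K + (∂f/∂L)·L,

so paying each factor its marginal product exactly exhausts output, with nothing left over and nothing missing. In the Cobb-Douglas case Y = K^α·L^(1−α), capital's share of income is α and labor's is 1−α, fixed by the exponents, which is the source of the stable distribution of income in the growth models.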
That these assumptions became the foundations of a new system of economic thought was truly remarkable, considering that less than twenty years had elapsed since the Great Depression, with its financial chaos, impoverishment, mass unemployment, and the threat of revolution. It seemed a world made new. Both history and the history of economics (known as classical political economy) became largely irrelevant. A certain style of thinking, adorned with algebra, would substitute. Curiosity about those earlier matters was discouraged, and pessimism, which had earlier been the hallmark of the establishment, became a radical trait.
Other issues that had seemed emergent in the 1930s were now left out. One of them was the role of monopoly power. In the new models, all prices were assumed to be set in free competitive markets, so that the inconvenient properties of monopoly, monopsony, oligopoly, and so forth, so much discussed in the 1930s, did not have to be dealt with. Along with Keynes, his disciple Joan Robinson and her work on imperfect competition were shunted to one side. So was the Austrian economist Joseph Schumpeter, an archconservative who had nevertheless pointed out the unbreakable link between technical change and monopoly power. The study of industrial organization—the field within economics that analyzes market power—was drained of its political and policy content, to be colonized by theorists of games.
Another inconvenient fact was even more aggressively ignored: that even in capitalist systems, certain key prices were simply controlled. They were (and are) set by fiat, just as they would have been under “central planning.” This was true first and foremost of industrial wages, which were set largely in collective contracts led by the major industrial unions in autos, steel, rubber, railroads, and other key sectors. It was true of service wages, largely governed by the standards set by the minimum wage. It was true of public wages, set by government. And it was true of construction wages, which largely followed standards set in the public sector. All of these bargains imparted stability to the cost structure, making planning by business much easier than it would have been otherwise.
But not only were wages fixed. So were American oil prices, which were set to a good first approximation in Austin, Texas, by the Texas Railroad Commission, which could impose a quota (as a percent of capacity) on all wells in Texas. These measures ensured against a sudden, price-collapsing glut. This simple and effective system, supported by the depletion allowance in the tax code, gave America a robust oil industry that could and did reinvest at home. It was a strategy of “drain America first”—protecting the US balance of payments and the world monetary system from imported oil—but for the moment, there was no shortage of oil. And since the price of oil was under control, all prices that incorporated oil as a cost had an element of control and stability built in. So oblique and effective was this system of control over resource pricing that it played no acknowledged role in the economics of the time. Apart from a few specialists, economists didn’t discuss it.
The new growth models also had no place for the monetary system—neither domestic nor international. Banks did not appear, nor did messy details of the real world such as bank loans, credit markets, underwriting, or insurance. Monetary and credit institutions were perceived as mere “intermediaries”: a form of market standing between ultimate lenders (the household sector, as the source of saving) and ultimate borrowers (the business sector, the fount of investment). Banks were not important in themselves. Bankers were not important people. The nature of credit—as a contract binding the parties to financial commitments in an uncertain world—was not considered, and economists came to think of financial assets based on credit contracts as simple commodities, as tradable as apples or fish.
The role of law, which had been fundamental to the institutionalist economists of the previous generation, disappeared from view. The assumption was made that developed societies enforced “property rights,” thus giving all producers and all consumers fair, efficient, and costless access to the enforcement of contracts. In such a world, crime would be met with punishment, and mere exposure would be met with catastrophic loss of reputation. Since businesses were assumed to maximize their profits over a long period of time, they would act so as to avoid such a calamity. Probity in conduct would result from market pressures. So argued the subdiscipline of “law and economics,” which rose to great and convenient influence.
Government had no essential role in the credit system, and if it ran unbalanced books, they would only get in the way. There was a single pool of resources, to be divided between consumption and saving. The part that was saved could be taken by government, but only at the cost of reducing what would be available for investment and new capital in the next generation. This tendency was called “crowding out.” It became a standard feature of public finance models and even of the budget forecasts made by the government itself.
The interest rate is a parameter that relates present to future time, and it could not be left out of a growth model. On this topic, elaborate and conflicting theories enthralled and perplexed a generation of students, with notions ranging from the “marginal product of capital,”III to “loanable funds,” to “liquidity preference.” In growth models, the dominant view related interest to the physical productivity of capital, which (since the capital stock cannot be measured in physical terms) meant that the dominant model of interest rates remained a textbook abstraction: something students were taught to believe without ever being able to gauge the performance of the theory against fact.
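In its textbook form (a stylized gloss, not a quotation from the models in question), the productivity theory ties the interest rate directly to the production function:

r = ∂f/∂K,   and with Y = K^α·L^(1−α),   r = α·Y/K.

On this reading, a "mature" economy with abundant capital relative to output should show a low rate of return, and capital should migrate toward poorer, capital-scarce countries where r is higher, the prediction examined in note III.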
Here, for once, the theory made reality seem more complex and difficult than it was. In fact, interest rates were based on another controlled price. The rate of return on overnight bank loans (the federal funds rate) was set by the Federal Open Market Committee, an entity of the Federal Reserve System. Then as now, the FOMC met every six weeks in Washington for this purpose. There was a bit of camouflage, which has since disappeared: both operational secrecy and implementation of the interest rate target by buying and selling government bonds through primary dealers. But the reality was, the core interest rate for the United States was a price fixed by the government. As it is now.IV
Other interest rates, such as how much savers could earn on deposits and how much they could be charged for mortgages and other loans, would depend in various ways on the core interest rate and the market power of banks and other financial institutions, but also on government regulation. Regulations prohibited the payment of interest on checking accounts, and gave savings and loans a small rate advantage over commercial banks. Later these regulations would disappear, and interest rates facing consumers would largely become a cartel-driven markup over the cost of funds.
Similarly, the international monetary system had no role in the theory. This was odd, because the actual system in place in those years was a human creation, built in 1945 largely by economists (in some cases, the close colleagues of the growth theorists) in response to the blatant failures of the world monetary system only a few years before. The new system was administered by two agencies of the United Nations—the IMF and the World Bank—newly created institutions with many jobs for economists. These institutions were headquartered in Washington and dominated largely by the United States, which was now the world’s dominant financial power, thanks to the outcome of World War II. In the global balancing mechanism known as Bretton Woods, the world tied its currencies to the dollar, and the dollar tied itself—for the purpose of official settlement of trade imbalances—to gold at the price of $35 per ounce.V Here was another fixed price in a system where the role of price-fixing had to be overlooked lest people realize that perhaps they did not actually live under the benign sovereignty of the “free market.”
It was all a fool’s garden, and into it the 1960s dropped an apple and a snake.
The apple was called the New Economics, a postwar and post-Keynes reassertion of government’s responsibility to promote full employment. Keynes’s ideas had been tested, to a degree, in the New Deal and in World War II. The Depression had proved that a lack of management was intolerable. The New Deal, in its helter-skelter way, and especially the war had proved that economic management could work, at least under extreme and emergency conditions. Some of this spirit had been embodied in the Employment Act of 1946, but during the Dwight Eisenhower years nothing happened to suggest that the mandate of that act was practical policy. The new American version of Keynesianism did not dominate policy until the election of John F. Kennedy in 1960. At that time, for the first time in peacetime, a president would proclaim that the economy was a managed system. By so doing, he placed the managers in charge and declared that the performance of the economy—defined as the achievement of economic growth—was a permanent function of the state.
Even though the theory of growth, invented by Kennedy’s own advisers, had no special role for government, from that point forward government was to be held responsible for economic performance. Depressions were out of the question. Now the question was control of recessions—a much milder term that connoted a temporary decline in GDP and deviation from steady growth. Tax cuts could be deployed to support growth, as they were in 1962 and 1964, setting the precedent later taken up by the Republicans under President Ronald Reagan. Given the belief that depression, recession, and unemployment could all be overcome, the president had to be engaged, even in charge, for he would be held personally to account. Speaking at Yale University in 1962, Kennedy bit the apple of responsibility:
“What is at stake in our economic decisions today is not some grand warfare of rival ideologies which will sweep the country with passion but the practical management of a modern economy . . . The national interest lies in high employment and steady expansion of output, in stable prices, and a strong dollar. The declaration of such an objective is easy . . . To attain [it], we require not some automatic response but hard thought.”
As it happened, this apple was decorated with a peculiar empirical assertion called the “Phillips curve,” also invented by Kennedy’s own advisers, Paul Samuelson and Robert Solow, in 1960. The Phillips curve appeared to show that there were choices—trade-offs—to be made. You could have a little bit more employment, but only if you were willing to tolerate a little more inflation. The president would have to make that choice. And sometimes outside forces might make it for him.
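A stylized version of the relation (shorthand for the 1960 Samuelson-Solow exercise rather than their exact specification) makes the trade-off explicit:

π = a − b·u,   with a, b > 0,

where π is the inflation rate and u the unemployment rate. Moving along the curve, lower unemployment could be bought only with higher inflation, and lower inflation only with higher unemployment; the president's job, on this view, was to choose the point.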
The snake that came into this garden was, as all agreed, the Vietnam War. Economically, the war itself was not such a big thing. Compared with World War II, it was almost negligible. But Vietnam happened in a different time, as Europe and Japan emerged from reconstruction, and the United States was no longer running chronic surpluses in international trade and no longer quite the dominant manufacturing power. Never again would the country’s judgment and leadership go unquestioned. Vietnam tipped America toward higher inflation and into trade deficits, and its principal economic consequence was to destabilize, undermine, and ultimately unravel the monetary agreement forged at Bretton Woods.
Deficits and inflation meant that dollars were losing purchasing power even as the United States was expecting its trading partners to hold more of them, roping them into complicity in a war that many strongly opposed. So countries impatient with the “exorbitant privilege” they had granted by holding the excess dollars with which they were paid for real goods—notably France under President Charles de Gaulle, but also Britain under Prime Minister Harold Wilson—began to press for repayment in gold, to which they were entitled under the charter of the International Monetary Fund (IMF).
The system could not hold against that pressure. Once the gold stocks were depleted, what would back the dollar? And why should the United States forgo vital national priorities—whether Lyndon Johnson’s Great Society or the fight against communism in Asia—just so that de Gaulle (and Wilson) could have the gold in Fort Knox for $35 an ounce? By the end of the 1960s, close observers could already see that the “steady-state growth model” was a myth. The economic problem had not been solved. The permanent world system of 1945 would not be around for much longer.
I. Part of the appeal of my father’s 1958 book The Affluent Society stemmed from the rebellion it spurred against this emerging consensus.
II. And would not, until Solow modified his model in the 1970s. But even then, the refinement was superficial; resources now entered only as another “factor of production.” The fact that they are nonrenewable played no special role.
III. In the marginal-productivity theory, the interest rate (or rate of profit on capital) was supposed to be an outcome of the model. Capital was paid according to its marginal contribution to output. If interest rates were low, that was the result of a mature economy having exhausted the easy investment opportunities. Capital would therefore flow out to developing countries where the returns were greater. However, by the mid-1950s, economists already knew (or should have known) that, as a logical matter, this explanation could not hold. Since interest and profit could not be derived from the productivity of the capital stock, it was not meaningful to say that industries in rich countries were more “capital intensive” than in poor ones. Indeed, industrial studies suggested the opposite, a point that was called the “Leontief paradox.” The intractability of the concept of an aggregate capital stock would be debated heavily, acknowledged in the middle 1960s, and then ignored.
IV. In Britain and for much of the financial world, the comparable reference rate is the London Interbank Offered Rate (Libor), which, as we have learned, is a rate set by a cartel of global banks—and susceptible to manipulation in their own interest, as we have also learned.
V. In this way, if the United States imported more than it exported, other countries built up reserves in dollars rather than gold—and the economic growth of the United States was therefore not tightly constrained by the limited physical stocks of gold.
Reviews
“Forceful prose and admittedly provocative suggestions.... Students of economics will enjoy the robust, fearless rebuke he delivers to some of the discipline’s giants.... A cleareyed…analysis of the new normal…for the 21st century.”
"Galbraith's study marks another sharp and suggestive installment in the ongoing effort to determine how and why our economic and political leaders have lost their once-confident grasp of sound strategies to promote macroeconomic growth. And to restore a measure of that lost confidence Galbraith lays out a bold intellectual agenda."
Description
From one of the most respected economic thinkers and writers of our time, a brilliant argument about the history and future of economic growth.