The Edgy Optimist: A clear-eyed view from Zachary Karabell

In emerging countries, focus on progress — not market volatility
Tue, 04 Feb 2014 20:43:07 +0000

The start of the year has not been an easy one for financial markets. The Federal Reserve is continuing its policy of trimming its bond purchases by $10 billion a month, and the immediate result has been a sharp pullback in the currencies, and to some degree the equities, of countries such as Indonesia, Turkey, India, South Africa and Argentina. The reason? According to traders, commentators, and even the head of Brazil’s central bank, Fed policy will trigger interest rate rises around the world, stanching the flow of easy money that has purportedly fueled global growth — and leading to struggles everywhere.

That thesis is hardly new. It was widely circulated last summer, when the Fed first hinted that it might begin to wind down the more aggressive measures to stimulate economic activity that it introduced after 2009. In this reading, the boom times of many countries around the world have had nothing to do with improved economic fundamentals, or skilled leadership, or shifting global sands. They were and are simply a derivative of U.S. policies.

This view has wide play, and goes nearly unchallenged. That does not make it correct.

Indeed, it is likely wrong for at least two major reasons: it forgets that financial markets are not perfect proxies for real world economies, and it misses the fundamental transformation in countries around the world that has taken place over the past few decades and is about to accelerate this year.

As I wrote in a column last August, a U.S.-centric view extends well back into the 20th century, and the only wrinkle today is that China has now entered the mix. Lo and behold, China, too, has recently seen some slowing of its growth, largely because of the determination of the Chinese government to shift the mix of its economic growth from state-led infrastructure and exports to domestic consumption. That transition will, inevitably, result in diminishing demand for commodities and raw materials, and that demand had also been a key factor in the strength of other economies, including many of the ones above.

Yes, China’s voracious demand for raw materials did provide a boost to countries as far-ranging as Chile, South Africa, Indonesia and Brazil. And yes, easy money from the Federal Reserve did get deployed to more speculative currencies ranging from Turkey to Argentina. But that speaks only to the narrow — albeit attention-grabbing and important — financial markets. Financial markets are roiling, as they do from time to time, but that doesn’t result in less activity by companies such as Unilever or Starbucks, nor a halt in the emergence of a global middle class in numerous countries.

We shouldn’t overweight the significance of financial markets when evaluating the status of emerging markets; to do so is to espouse a particularly U.S.-centric view of how the world works. More voters, and more Starbucks lattes, are bellwethers for progress, irrespective of how markets are performing.

Even more, what financial markets miss, in addition to the factors above, is that this year will see a massive democratic surge around the world. In fact, in a span of several months more people than ever before will go to the polls. In part, that is a natural consequence of more people living on the planet than at any other time in history, but between now and this fall, massive democracies will elect parliaments, congresses and presidents.

Many of those countries, in fact, are the ones currently experiencing financial market volatility. Turkey, which has seen a run on the lira and then aggressive action by the Turkish central bank to raise interest rates, will have an election in August to determine the fate of the embattled prime minister, Recep Erdogan, and his Islamist party. Indonesia, the world’s largest Muslim democracy with more than 250 million people, will choose a new president and parliament over the summer. In October, more than a hundred million Brazilian voters will decide whether Dilma Rousseff has earned another term in the midst of questionable economic growth and after hosting the World Cup. And in May, India — the world’s largest democracy — will elect a new government and see whether it can at last break the political knot that has constrained its economic potential.

The sheer volume of this democratic activity is unusual, and combined with the inevitably partisan midterm elections in the United States in November, it is a recipe for some volatility. Democracy, as we all know, is messy, noisy, chaotic, often bitter and frequently inconclusive. It is a perfect recipe for the kind of uncertainty that financial market participants frequently say they dislike. As such, it also provides the perfect excuse for sharp swings in sentiment and short-termism as traders attempt to game outcomes and as longer-term investors sit on the side awaiting the verdict of the polls.

Yet, if you had asked Americans or Europeans in the middle of the 20th century and indeed the parents of the hundreds of millions who will vote this year to define their ambitions for the early 21st century, their dreams would have been the world as it is today: one of democratic inclusiveness, increasingly open markets, rising affluence, and more and more freedom of movement, of economic life and of personal life than ever before, all capped by imperfect but radically improved political systems.

The coming months will see no end of obituaries written for the emerging world and warnings of the perils it faces. There is nothing new about these prognostications, and they all miss the glaring realities of democratic transformation and its sometimes tumultuous consequences. They miss as well the degree to which hundreds of millions have seen vastly increased living standards, and are willing to demand some degree of accountability from their governments. That is a recipe for global security, and long-term prosperity, even if far too many declare the opposite.

PHOTOS:  Money changer counts Turkish lira bills at a currency exchange office in Istanbul January 24, 2014. REUTERS/Murad Sezer 

A supporter of India’s main opposition Bharatiya Janata Party (BJP) waves the party’s flag during a rally being addressed by Gujarat’s Chief Minister and Hindu nationalist Narendra Modi, the prime ministerial candidate for BJP, ahead of the 2014 general elections, at Meerut in the northern Indian state of Uttar Pradesh February 2, 2014. REUTERS/Ahmad Masood 


If the world is getting richer, why do so many people feel poor?
Fri, 24 Jan 2014 18:39:45 +0000

In a widely read statement in his annual foundation letter, Bill Gates took an unabashedly optimistic approach to the world this week. Not only did he tout the massive material progress evident everywhere in the world over the past decades, but he also predicted that as more countries accelerate their transformation from rural poverty to urban middle class societies, poverty as we know it will disappear within the next two decades. “By 2035, there will be almost no poor countries left in the world,” Gates wrote. “Almost all countries will be what we now call lower-middle income or richer.”

With an economy of words, Gates makes clear that he understands the issues. Yes, worldwide there is still immense poverty as defined by critically low incomes or GDP per capita, including less than $500 a year in Ethiopia, less than that in the vast and dysfunctional Democratic Republic of Congo, even less in Burundi and who knows what in North Korea. All of those countries, Gates predicts, will be substantially wealthier in twenty years.

This message is in rather stark contrast to the sense in the United States and Europe that we are mired in economic stagnation, and that as inequality grows, more people are unable to meet their basic needs. That message is likely to be a cornerstone of President Obama’s State of the Union address, and it suggests that life is not getting better for many Americans, but rather worse.

So which view more accurately describes the world we live in? While it does depend on what you consider progress, it should be hard to disagree with Gates and the evidence he presents. Yet today, many people do. They believe that their quality of life is deteriorating, and they look around and see the world through that lens.

In purely dollar terms, it is true that the vast middle classes of America and Western Europe have seen their incomes stagnate. As is frequently noted, middle-class incomes in the United States have barely budged since the 1980s. Income data, however, has substantial limitations. While it is used to determine official poverty rates, it says nothing about the relative cost of living. As many goods and essentials have become dramatically less expensive, stagnant incomes can still allow for higher living standards.

In 1950, for instance, the average American family spent almost 30 percent of its income on food. Now it spends barely more than 10 percent. Apparel costs have also dramatically decreased. Add in the free goods of the Internet — ranging from Google, to GPS data that helps you avoid traffic, to shopping online and saving travel costs — and you have a rather different picture of the net effect of stagnant incomes.

Then there is what used to be called the developing world. Gates notes that the only region of the world where there is still chronic poverty en masse is sub-Saharan Africa. That is probably overstating the case, given that India alone has at least 100 million people living in slums without clean water or basic services. Yet over the past decade, more than that number have been able to move out of slums and into legitimate dwellings. Cup half empty, or half full?

Or take a different measure: nutrition. According to the World Health Organization, every part of the world has seen a steady rise in calorie consumption over the past fifty years. Industrialized nations will see an increase from 3065 calories per person per day in the mid-1970s to an estimated (and perhaps excessive) 3440 by 2015. Developing nations will go from 2152 to 2850 in the same time. Only sub-Saharan Africa lags notably, and should increase its consumption from 2079 to 2360 calories during these four decades. East Asia, which was calorically poorer than Sub-Saharan Africa in the 1950s, now consumes over 3000 calories a day per person.

On almost any metric — quality of living, lifespan, health, education, income, basic needs such as food, clothing and shelter — life everywhere has improved. That point is being forcefully made by Gates, as it has by others such as Peter Diamandis, the founder of the X Prize for innovation. Yet, the prevailing sense in the West at least is that things are either getting worse or at best not getting better.

Given the evidence that globally, life is getting better, why do so many people feel like it’s getting worse? What Gates and others do not sufficiently address is that much of the modern world uses growth as the mark of progress. That means an expectation of more, more and more. It is also true that even in the most affluent societies, many people earn lower incomes and feel disconnected from the wealth and lifestyles of their peers.

Humans are wired to be acutely aware of how others near to them are faring, and we all mark ourselves in relative terms. It may be better to live in a trailer with satellite TV and an abundance of cheap, unhealthy food than to live in a shanty in Mumbai or a refugee camp in the Congo, but those groups do not live next to one another. They aren’t effective reference points for actual people. Most of us don’t relate to abstract populations elsewhere in the world. Even with humanitarian crises, we don’t take action because of stories we read, as much as pictures we see and testimony we hear.

In addition, residents of the more affluent parts of the world are aware that their ability to generate more has started to wane, save for a small minority of the very affluent. While many material goods are inexpensive, many needed services such as education and healthcare are only becoming more expensive, and account for a larger proportion of household spending than they did in previous generations. And those services are the keys to positive change. All of the flat screens in the world won’t make people feel that they are thriving if the quality of their medical care deteriorates.

In much of the world outside of Europe and the United States, of course, there is not that sense of malaise. Many societies are indeed exuding confidence — sometimes as in the case of the current government of Turkey, perhaps too much. Yes, countries such as Ethiopia and South Sudan and the Congo — or Venezuela in this hemisphere — offer little in the way of good governance or hope for the future, but those places are increasingly the exception.

What Gates has underscored is that the centuries-old struggle to end poverty and want is coming to an end. That does not mean everyone is happy, or that a vast swath of the human race has what it considers to be enough. It does mean that as basic needs are met on a global scale, we will have to address a new suite of challenges, ranging from how many more calories, clothes, square feet, years and dollars we need individually or collectively to thrive. Once most of the human race has secured the basics, it will not be the end of history. We will continue to ask how to satisfy the next level of needs, what to strive for, how much is enough, and whether everyone indeed has enough. You can hear those questions beginning to form now in the developed world; you can be sure that they will be even louder in the years ahead.


PHOTO: Bill Gates, co-chair of the Bill & Melinda Gates Foundation poses with vaccine against meningitis during a news conference after his address to the 64th World Health Assembly at the United Nations European headquarters in Geneva May 17, 2011. REUTERS/Denis Balibouse

The real future of U.S. manufacturing
Fri, 17 Jan 2014 16:36:33 +0000

Few topics have been more fraught than the fate of U.S. manufacturing. The sharp loss of manufacturing jobs since 2008 has triggered legitimate concern that America’s best days may have passed.

Even as recent leading indicators suggest more economic momentum, job growth remains at best sluggish and manufacturing has seen only marginal gains — having shed more than two million jobs in 2008-2009, and millions more since the peak in the late 1970s. Manufacturing accounted for slightly less than 20 million jobs at the peak in 1979. Now it’s barely 11 million.

The picture is even bleaker considering the population, since the labor force is considerably larger today. This has led to a widespread conviction that the core of the potent U.S. economy is being hollowed out.

So it is not surprising that Washington’s latest highly touted initiative seeks to rejuvenate American manufacturing and restore lost jobs. President Barack Obama unveiled an initiative Wednesday in North Carolina designed to foster high-tech manufacturing for the long term.

With money from the Energy Department, the Raleigh-Durham area — already home to several leading universities that are part of what is called a research hub — will develop an innovation institute to foster high-tech manufacturing, such as semiconductors. The promise is that such manufacturing and its attendant jobs are vital to competing in today’s global economy. Though the administration can fund a number of these without Congress acting, the White House has called on the legislature to pass funding for an additional 45 such centers around the country.

The assertion that the United States, or any nation, requires continued investment in the technologies that will drive future production is indisputable. On that score, at least, the Obama White House is fighting the proverbial good fight.

The contention, however, that these technologies and the factories that harness them for production will be sources of well-paid, solidly middle-class jobs, is flawed. In our political debates, we maintain the comforting fiction that a manufacturing revival can and will go hand-in-hand with a jobs revival. Yet, as Obama’s initiative shows, the two can be — and increasingly are — uncoupled.

The issue is not the hollowing-out of manufacturing as defined by less production. Yes, many less expensive, simpler products are now made more cheaply elsewhere and are unlikely to be made in the United States anytime soon — even with the “on-shoring” of manufacturing. Even as China ceases to be the place of low-cost production, Vietnam, the Philippines and who knows where else (even Mexico) will be more attractive for apparel, furniture, electronics and anything plastic for a long time to come.

The high-end production that these new U.S. innovation hubs seek to promote is indeed in demand around the world, and it is something at which, as yet, China and other low-cost manufacturing centers have not excelled. This is why China imports many billions of dollars’ worth of higher-end equipment – particularly from Japan and Germany. So it is true that the United States could have a competitive advantage, especially given its plethora of research universities and its wealth of highly educated talent suited to just this type of production.

But all this is not the same as a job creator for a workforce of at least 120 million and counting in a nation of more than 320 million people. These high-tech factories might employ hundreds of people in conjunction with industrial robots, using sophisticated software systems for design and production. These factory workers bear little resemblance to the 1950s line workers doing rote tasks. They are more like Silicon Valley engineers or lab technicians. These are high-skill jobs — and not nearly as plentiful as the factory jobs of the past.

That is, of course, no reason to dismiss the importance of cultivating these centers. Promising that they will be job engines, however, is dicey at best, and disingenuous at worst.

Even hundreds of centers of innovation that focus on 3D printing, bespoke semiconductors and technology-laden products will not spell a revival of the manufacturing workforce commensurate with what many hope or expect.

It is instead likely that, even with the reinvigoration of American manufacturing, job creation will be almost non-existent. It is likely as well that output as measured by gross domestic product will rise along with the revival — without producing a job renaissance.

Again, this is not an argument against these endeavors. They will indeed generate income and revenue and enhance productivity in the U.S. They will not, however, solve the conundrum of our structural unemployment challenges.

Over time, of course, as more people develop the skills required for this new wave of manufacturing, it is possible that the economic system overall will generate a next wave of prosperity. Education and innovation, tethered to products, ideas, services and even entertainment, have no clear limit to growth.

In the interim, however, a generation ill-prepared for that change is likely to continue to struggle mightily.

So we should embrace these endeavors, absolutely. But we should do so with a clear sense of what they can do long-term and what they cannot do in the short term. They cannot bring back lost jobs or industries. They also cannot solve the employment challenges for millions who have been displaced over the past few decades.

Obama’s plan can solve those for the next generation — but not for portions of an older generation now adrift. We should not fool ourselves about what can be done.

A lost generation may require years of support before the next is ready to carry the weight of the future. This is only a negative, however, if we pretend that an easy fix is on the horizon.


PHOTO (TOP): Workers assemble Motorola phones at the Flextronics plant that will be building the new Motorola smart phone “MotoX” in Fort Worth, Texas September 10, 2013. REUTERS/Mike Stone

PHOTO (INSERT): An abandoned steel blast furnace in Pittsburgh, Pennsylvania, April 8, 2011. REUTERS/Eric Thayer

What America won in the ‘War on Poverty’
Fri, 10 Jan 2014 21:07:17 +0000

In an unabashed endorsement of government action to alleviate the plight of the poor, President Obama this week commemorated the 50th anniversary of the War on Poverty with his own call for new policies to address the continued struggles of tens of millions of Americans.

In his official statement, Obama said: “In the richest nation on earth, far too many children are still born into poverty, far too few have a fair shot to escape it, and Americans of all races and backgrounds experience wages and incomes that aren’t rising… That does not mean… abandoning the War on Poverty. In fact, if we hadn’t declared ‘unconditional war on poverty in America,’ millions more Americans would be living in poverty today. Instead, it means we must redouble our efforts to make sure our economy works for every working American.”

It would seem hard to argue with such sentiments, yet some have done so. Fox News published a piece saying “despite trillions spent, poverty won.” Many others react by shaking their heads sadly, acknowledging the noble effort and concluding that it was an abject failure. The implication is clear: government spent a mint and did not end poverty, and now Obama is calling for more of the same.

This raises two crucial questions: did the first “war” really fail? And what should we do today?

As for the first, when Lyndon Johnson called for an end to poverty on January 8, 1964, he continued the tradition of the New Deal and decades of American policy designed to provide all Americans with basic standards of living — housing, education, healthcare and jobs. Americans believed that an activist government could achieve those goals, hence the trillions of dollars directed at the War on Poverty.

Those trillions have over time reduced the official “poverty rate” from 19 percent to 15 percent. Many have concluded that such a minor shift wasn’t worth the massive expense. Johnson’s legacy was tarnished by the chaos unleashed by opposition to the Vietnam War and by the morass of the 1970s, and the Reagan revolution of the 1980s was predicated in part on a conviction that the government’s attempt to alleviate the plight of the poor was not only social engineering, but badly done social engineering.

Yet poverty today is of a different order than poverty 50 or 100 years ago. During the Great Depression, millions of Americans were still without electricity or running water. By the 1960s that had changed, but many people still lacked basic healthcare, and the elderly were often at the mercy of their families. Today, there is still widespread poverty as defined by official income statistics, but the conditions of poverty are materially different, as Jordan Weissmann at the Atlantic has shown.

In part, that is because of the safety net we have since created. Many conservatives believe that we were better off in a world where private charity groups and religious organizations provided assistance, rather than government programs such as food stamps, welfare, unemployment benefits, Social Security and disability payments. But while that world did place much greater stock in self-reliance, it also left far more people at a huge disadvantage, struggling for life’s basic necessities. You could — and some do — argue that such a world produced heartier souls more able to cope with life’s vicissitudes. You could also argue — and should — that such a world was harsh and destructive to many in ways that humans for centuries have strived to ameliorate.

Today we have a massive social safety net, thanks to both the New Deal and the substantial expansion of federal and state programs beginning in the 1960s. These programs soon included housing as well. Many have seen more waste than not, and housing programs in particular did not fare well, as the scarred urban landscape of housing projects demonstrates.

But that safety net — much of which is not well-captured in the per capita income statistics that are used to assess the poverty rate — did create a set of expectations about the minimum level of necessities that all Americans deserve. That minimum — consisting of adequate shelter, food, heat and air conditioning, public education, and access to healthcare for the elderly — is a reality today.

The real critique, however, and the area we should focus on in the years ahead, is that because Americans are divided about this safety net, we accomplish two things, neither of which is optimal. We spend trillions on programs designed to provide some level of basic security, and yet these programs remain controversial. Significant opposition to these programs and the constant threat that they could be cut mean that instead of providing security, they create insecurity. And because of that opposition, it becomes almost impossible to discuss how they could be improved, rather than simply maintained or terminated.

The result is something of a worst of all possible worlds: we maintain a vast safety net while pretending that we do not, and many of us act as if safety nets are at best ineffective and at worst immoral. The net result is that as a society, we find ourselves unable to enact needed reforms.

The answer, then, is to recognize that in securing many basic necessities, the War on Poverty succeeded, either in actually ensuring that those necessities exist, or in establishing that having them is a fundamental right. Even the most virulent opponents of social safety net programs accept that right, which would not have been the case well into the 20th century. The programs may not have altered the poverty rate, but in part that’s because we have constantly reset and raised the bar on what we consider to be the most basic resources that every American deserves. Our “enough” today is considerably greater than it was fifty years ago.

The next solutions to the challenges of today’s poverty, therefore, are not better public housing and Medicaid. We do not need the same approach that various administrations have advocated for the past 50 years. We need instead a consensus about what we believe is the next level of basic rights of every citizen — beyond food, clothing and shelter. Many of those — such as self-esteem, the tools to build careers, the ability to navigate a world defined by information rather than manufacturing — are within the ability of government to provide.

State and local governments have been laboratories of new initiatives — from work and training programs, to partnerships between local businesses and community colleges, to food banks. Thankfully, such initiatives at all levels of government require less money than more traditional social services. They also demand more flexibility. Government programs defined not by ideology but by flexibility and the ability to help private and local institutions act — not by giving them grants as the War on Poverty did, but via tax incentives that help run programs — that would be welcome innovation, and the best way to continue the legacy of the War on Poverty. And with the federal government unlikely to spend more in today’s climate, it may also be the only way.

PHOTO: A homeless man begs for money in the Financial District in San Francisco, California March 28, 2012. REUTERS/Robert Galbraith 

The audacity of optimism
Mon, 23 Dec 2013 20:46:11 +0000

Over the past four weeks, we’ve had a run of undeniably good news. A panoply of data has shown that the U.S. economic system appears to be on firm ground. More people have jobs, albeit not necessarily sterling jobs, and the pace of overall activity as measured by GDP is at the highest level in two years, expanding at 4.1 percent annually. On the political front, Congress passed a budget for the first time in more than three years, which suggests a period ahead where Washington tantrums do not threaten to upend whatever delicate equilibrium currently exists.

And yet, an aura of unease still seems to hover over us. In the year or more that I have written this column, I have often emphasized the way in which things may be going at least a bit right. That contrasts with the frequently repeated mantra that we are going dangerously off the rails. Of course, like anyone, I may be right or wrong or somewhere in between. What’s been perplexing about responses to this column, however, isn’t whether the analysis is right or wrong, wise or naïve, but that the very hint of optimism makes a fair number of people extremely angry.

It may be, of course, that my optimism is misplaced. It may be that the United States is actually headed to hell in a proverbial handbasket; that Europe is in a brief lull before its next leg toward dissolution of the Union; that Japan’s easy money spigot unleashed by the new government of Shinzo Abe will end with the same no-exit stagnation of the past 20 years; and the glorious story of emerging economies from Brazil to Mexico to India to China will end not so gloriously. It may also be that whatever appears to be working in the developed world is in truth working only for a small minority — for the wealthy and members of the middle class in privileged urban areas, and for anyone tethered to financial markets and global commerce.

But possibly being wrong doesn’t explain the anger my columns have provoked, in the form of email and online reactions. Weather forecasters and sports experts are routinely wrong about outcomes, and while those missed predictions can trigger some ridicule, they’re not usually a recipe for rage.

True, the online world of comments and commentary skews towards the negative, especially in the realm of economics and politics. People are more likely to express feelings based on disagreement and a sense of outrage than they are to react based on concord. Anger is a hot experience that triggers action; agreement, even strong agreement, tends to be a more passive reaction.

But why does optimism about today’s world generate such strong hostility? Perhaps because it contradicts what many people believe. Positive views on the present are seen as a slap in the face by people who have had negative experiences, who, according to some polls, are the majority of Americans. Surveys suggest that more Americans than ever — 66 percent, according to one poll — believe that the country is headed in the wrong direction. Other polls say much the same thing. Two years ago the numbers were even worse. Americans of the past few years are less positive about the future than they have been at any point since the 1970s.

Interestingly, according to these surveys, blacks and Hispanics in the United States are more positive about the future than whites, perhaps reflecting the degree to which white males have seen their fortunes decline on a relative basis over the past decades, while Hispanics especially have seen significant improvement in incomes and education. That said, it is difficult to know the race and gender breakdown of online reactions to my political and economic analysis.

The problem is that in a country of 300 million people, let alone a world of 7 billion, any statement about an economic or societal trend is likely to differ from the actual experience of a great many people. While there may be upsides to the changing mechanisms of our economic system, there are unequivocally winners and losers and many shades between. Any suggestion that the struggles of one group may be juxtaposed against, though not offset by, the flourishing of another group can seem disrespectful and even indifferent to the challenges faced by many people.

The answer, however, is not to focus relentlessly on what isn’t working. Every society must find some balance between addressing real shortcomings and building on real strengths. The United States in particular oscillates between excessive self-congratulation (“the indispensable nation,” “the freest nation on Earth”) and extreme self-criticism. We can be making a transition from a manufacturing economy to an idea economy that sees millions finding a new way, and millions suffering. We can be educating millions brilliantly while failing to educate millions at all. We can see thriving urban centers even as suburban sprawl melts under too much debt and overpriced homes.

Optimism, as the theoretical physicist David Deutsch so brilliantly describes in The Beginning of Infinity, doesn’t mean certainty about good future outcomes. Optimism is simply the conviction that all human progress to date has been a product of our collective ability to understand how things work and to craft solutions. The belief that the present is merely a prelude to a bad future negates that collective ability. Yes, we may indeed be at the end of the line, but by angrily dismissing optimistic arguments we are likely to fail more rapidly. Why bother striving for constructive change if you firmly reject the possibility? That leaves only one viable alternative: to envision a path forward. That path may not materialize, but striving to find it is a vital component of creating the future we dream about, and not the one that we fear.

PHOTO: British singer Robbie Williams gives a thumbs-up as he arrives on the red carpet for the Bambi 2013 media awards ceremony in Berlin, November 14, 2013. REUTERS/Tobias Schwarz 

Why Washington’s growing irrelevance is good for the country Fri, 13 Dec 2013 19:52:38 +0000 After three years of sclerosis, Congress is poised to at last pass an actual budget. We’ve been so consumed with the dysfunction of the parties on Capitol Hill that this feat appears significant. In fact, it should be routine. Yet in the context of the past few years, it is anything but.

The budget that passed the House still must wend its way through the Senate, and it is not exactly a study in legislative daring. It is, however, an actual budget, passed with substantial support from both parties by a vote of 332 to 94 and negotiated by two leaders, one from each party and each chamber — Representative Paul Ryan (R-Wisconsin) and Senator Patty Murray (D-Washington). The bill is a modest endorsement of the status quo, one that stems both from the automatic and crude 2013 budget cuts known as the sequester and from the chronic inability of either party to compromise over the past three years.

Even though the only real change over current spending is a modest $60 billion increase (meager in relation to the $15 trillion-plus U.S. economy), conservative groups still condemned it as too profligate and liberal groups assailed it as too draconian. Said Ted Cruz, who may be having mild limelight withdrawal, “The new budget deal moves in the wrong direction: it spends more, taxes more, and allows continued funding for Obamacare…I cannot support it.” Paul Krugman argued the contrary — that the bill is too meager, and does nothing to address the problem of structural, chronic unemployment. Writes Krugman: “if you look at what has happened since Republicans took control of the House of Representatives in 2010 — what you see is a triumph of anti-government ideology that has had enormously destructive effects on American workers.”

Partisanship aside, it’s tempting to look at the budget deal in one of two ways: our political system has fallen so low that just doing the minimum amount required to be a functional government is seen as a victory, or the fact that it took three years to pass a budget that essentially makes no changes is proof that the system is broken.

A third perspective is even closer to the mark — and cause for optimism. Namely, that both parties’ willingness to pass a budget that no one much likes is a sign that Washington neither can nor will torpedo the country. It is a sign as well that Washington neither can nor will save the country. That’s a far cry from an activist government doing good, or a small government doing much less. But it should come as a positive sign.

The self-created Washington crises over the past years have created an image of government as the vital actor on the American stage. But these crises have failed to either impede or energize economic activity, and they have led large numbers of Americans to tune politics out. In the New York mayoral race, for instance, voter turnout was 24 percent, which appears to have been a record low. Similar numbers were seen in New Jersey and Virginia. The political class, from Washington to the local level, has managed to alienate voters and make government less relevant, except in its ability to spy on citizens and manufacture crises.

The result of the budget deal is to remove crisis from the agenda in 2014. Yes, the debt limit still has to be raised in February, but it is difficult to see Congress refusing to raise the limit for a budget that it passed. Crisis also feeds a self-reinforcing loop between partisans such as the Tea Party and the media: partisans need crisis to fuel passion and donations to the cause, and the media needs crisis to justify coverage.

As proof, notice how little attention the budget deal received in the media, relative to October’s government shutdown — which dominated the news and the airwaves. The budget deal didn’t even merit a page one story in the New York Times when it passed in Congress.

And for all of this, we should be thankful. In an ideal world, we would be served by a political system of noble legislators attending to the public good with dignity and passion. There are many such individuals throughout government, on the local, state and federal levels. But Washington has become a morass of a system, and expecting and demanding that the locus of societal change emanate from that system is unrealistic and counterproductive. Lowering the volume, shifting the focus away from the goings-on of government, and turning to what is happening outside of that realm, in a world teeming with billions of new entrants to the middle class and hundreds of millions of Americans navigating a changed workplace without daily reference to government, is all for the best.

PHOTO: Senate Budget Committee chairman Senator Patty Murray (D-WA) (R) and House Budget Committee chairman Representative Paul Ryan (R-WI) shake hands after a news conference to introduce The Bipartisan Budget Act of 2013 at the U.S. Capitol in Washington, December 10, 2013. REUTERS/Jonathan Ernst 


The real issues behind the minimum wage debate Fri, 06 Dec 2013 18:03:39 +0000 In his speech at the Center for American Progress this week, President Obama devoted considerable time to an issue suddenly much in discussion: the minimum wage. This is not a new debate. In fact, it neatly echoes the last time Congress raised the minimum wage, in 2007, which echoed the debates before that. Few economic issues are such sweet catnip to ideological camps, and there is precisely zero consensus about whether these minimums have positive, negative or no effect.

Supporters say that a higher minimum wage will give people a better standard of living and boost consumption. Detractors argue that it will lead companies to hire fewer workers and kill job creation. What no one addresses, however, is a twofold reality: regardless of whether the government raises the minimum wage, our society cannot endlessly coast with a system of wage stagnation for the many and soaring prosperity for the few, nor can the government snap its legislative fingers and magically produce income. Someone will pay for these increases; nothing is free.

You wouldn’t know that from the tenor of the debate. In his speech, Obama stated that “it’s well past the time to raise a minimum wage that in real terms right now is below where it was when Harry Truman was in office.” He acknowledged that many resist the idea of mandating a wage above the current $7.25 an hour. “We all know the arguments that have been used against a higher minimum wage. Some say it actually hurts low-wage workers — businesses will be less likely to hire them. But there’s no solid evidence that a higher minimum wage costs jobs, and research shows it raises incomes for low-wage workers and boosts short-term economic growth.”

It was a robust, populist speech, and it triggered an inevitable retaliation on the right. “Mr. Obama wants to raise the minimum wage to please his union backers,” harrumphed a Wall Street Journal commentator. Jennifer Rubin of the Washington Post decried the idea as just more government wealth transfer, and she countered that rather than raising wages, “One way to lessen income inequality would be to stop transferring wealth from young to old.”

There is growing income inequality in the United States, and it has accelerated in the past few decades. Wages for labor have flattened while capital has flourished. As Goldman Sachs chief executive officer Lloyd Blankfein recently remarked, “This country does a great job of creating wealth, but not a great job of distributing it.” And he would know. The top 10 percent of earners in the United States have gone from taking in a third of all income in the 1970s to half today. The top 1 percent alone accounts for roughly 20 percent of the nation’s income.

Meanwhile, the minimum wage of $7.25 an hour is actually much less than it appears, relative to the past. In 1996, the minimum wage was $4.75 an hour. Today’s $7.25 is only a few cents above that, when adjusted for inflation, and both minimum wages were significantly below the equivalent wage in the 1950s, 1960s, and 1970s. Today’s lower minimum wage has contributed to the rise in inequality over the past thirty years.

What’s not clear, however, is whether mandating a higher wage will do anything to change that. Nearly 20 states have minimum wages above the federal rate, which means the federal law has little effect in wide swaths of the country. What’s more, according to the Bureau of Labor Statistics, only about 5 percent of hourly workers are paid at or below the current minimum wage. Increasing it will thus make precious little difference in most people’s lives.

Even an increase to $10, which is what Obama and others have proposed, would leave a family of two that depends on it with less than a living wage. Various programs, ranging from food stamps to the earned income tax credit to Medicaid, exist to close that gap. The proposed increase would only marginally improve the lives of minimum wage earners.

The oft-repeated warning that businesses will respond by hiring fewer workers or reducing wages is also far from settled. Yes, businesses have already begun to cut hours in order to avoid paying workers various benefits, including healthcare, and under a higher minimum wage a significant number of companies would likely trim payrolls in order to maintain profits.

Yet such actions are both short-sighted and inimical to collective prosperity. They are short-sighted because you can’t build a vibrant service- and consumer-oriented society with fewer and fewer people earning enough income to pay for the goods and services they need and want. They are inimical to collective prosperity because a dynamic society depends on a compact, often unwritten, that the proverbial deck will not be so unevenly stacked. That is not just true in a democratic society. In China today, one of the primary issues is the widespread revulsion against the corruption and enrichment of the elite. American companies may be profit engines, but they have a responsibility to the communities in which they operate.

The real problem with the minimum wage debate, however, is that it is a simple argument that masks some uncomfortable realities for all sides of the ideological spectrum. You cannot reduce inequality by the simple working of the unfettered free market, nor can you reduce inequality without money coming from somewhere. You cannot mandate an increase in the minimum wage without a concomitant increase in spending.

Other countries that have less income inequality possess characteristics that Americans of all stripes seem to reject. Germany, Japan and Scandinavian nations all have more government spending; higher costs of goods and services; higher taxes on goods, income, and services; and/or less wealth at the very top.

Americans, however, can agree on none of these trade-offs. Only a small percentage of people are willing to pay more for their goods and services — like organic food or artisanal goods. Americans always resist higher taxes, even when they only impact the rich, and the idea that incomes might be capped raises howls of protest.

In short, the black-and-white nature of the minimum wage debate obscures the fact that money doesn’t come from nothing. An increase in wages would require higher costs somewhere, lower incomes for the rich or larger amounts of debt. Those may be legitimate costs to bear, but we shouldn’t pretend that they aren’t an integral part of the equation. We also shouldn’t pretend that increasing the minimum wage is a good proxy for the debate over these issues.

We’d be better off starting this discussion about inequality and its consequences, the nature of a global capital system that sees capital pooling among the wealthy, and an expanding global middle class that is seeing its income increase even as affluent societies see theirs stagnate. Instead, we are left with the hollow symbolism of a minimum wage that few people actually earn, and which, if increased, will leave us no closer to addressing these issues. As the beginning of a discussion, it is welcome; as the end of one, it is one more distraction.

PHOTO: Protesters calling for higher wages for fast-food workers stand outside a McDonald’s restaurant in Oakland, California December 5, 2013. REUTERS/Noah Berger 

The youth unemployment crisis may not be a crisis Fri, 22 Nov 2013 21:49:02 +0000

“Youth Unemployment is the Next Global Crisis”

“America’s 10 Million Unemployed Youth Spell Danger for Future Economic Growth”

“Relentlessly high youth unemployment is a global time bomb”

There’s no doubting that worldwide, kids are out of work. In the United States alone, the unemployment rate for 15- to 24-year-olds is about 16 percent, nearly twice the national average. In parts of Europe the figures are much worse; in Spain, youth unemployment stands at a whopping 56 percent, representing about 900,000 people.

But do these high numbers represent a global labor market crisis that imperils future growth, as the headlines warn? Maybe not. Maybe instead, they’re evidence of a generation of college graduates determined not to settle, which bodes well for our future.

To understand why, it’s worth a quick detour through history. Until the early 20th century, there was no clear concept of “unemployment.” Neoclassical economics emerged in the late 19th century, at a time when there was an ample supply of labor to feed the relentless maw of industrial production in both Europe and America. Because there was no social safety net, people worked to obtain essentials such as food, clothing and shelter. You had to work to survive, and there was always work to be done and a need for bodies to do it. Many believed that “unemployment” was an option only for vagrants, who were in turn viewed as immoral.

The Great Depression threw those views into question. Millions found themselves unable to find jobs, even when they wanted to work. The Bureau of Labor Statistics began calculating an unemployment rate in the 1930s, and with it came a definition of what qualified as “the workforce” and of what it meant to be unemployed. A key aspect of that definition was not that you were “out of work” but that you were actively looking for a job yet unable to find one. Unemployment thus pointed to a flaw — either temporary and cyclical, or longer-lasting and structural — in the labor market and, by extension, in the economy as a whole.

Today, the high levels of youth unemployment are viewed primarily as a breakdown in the labor market and a sign of a failing system. That’s why so many call it a “crisis.” But if you start to look at the patterns of youth unemployment, a different set of conclusions is possible.

It’s best to start with the unemployment rate among recent college graduates, which attracts the lion’s share of attention. According to a recent Georgetown University study, about 8 percent of recent college graduates are unemployed, and the number is about 10 percent for students majoring in the arts, law, public policy, and most social sciences. The BLS actually says the situation is worse, with the unemployment rate for those under the age of 29 with only a bachelor’s degree above 15 percent for men and around 11 percent for women.

And the true unemployment numbers might be higher still. In assessing unemployment among younger people, the Bureau of Labor Statistics faces greater challenges in reaching cell phone users who don’t have land lines. Moreover, many of these recent grads are working in a succession of short-term jobs, which are difficult to classify in employment surveys.

Take a 25-year-old woman I met recently, who left her job to develop an app, work on a live-stream talk show, and write a book. If by some chance the Bureau of Labor Statistics contacted her, she would say that she doesn’t have a job, and hasn’t been looking. She would simply evaporate from the labor force and not be considered unemployed. But are her decisions a symbol of systemic crisis and failure? No.

Most economists believe that not having a job in your twenties has repercussions that follow you for years to come. A study from the Center for American Progress claimed that “the nearly 1 million young Americans who experienced long-term unemployment during the worst of the recession will lose more than $20 billion in earnings over the next 10 years. This equates to about $22,000 per person.”

Yet we should be wary of these statistics. The BLS has been collecting data on age, unemployment and subsequent incomes for only a few decades, which is not enough time to draw firm conclusions. Even if the figure is accurate, it doesn’t account for how much was recouped through unemployment and other benefits, which would likely lower it considerably.

The larger point is that many college-educated young people are choosing not to take low-paying service-level jobs if they don’t absolutely have to. Because they can live with their parents (and as many as 45 percent of recent grads do) and because they rarely have much in the way of fixed costs such as homes and children, they can hold out for a job that matches their ambitions. They can also retool their skills as they discover that their college degree in marketing and communications may not leave them in the best position to get the type of job that they want.

This type of unemployment is one of choice — rational, legitimate choice — not of systemic failure. It is a challenge to find a meaningful job, but that hasn’t stopped people from trying. A youth cohort determined to create meaningful work should not be seen as lazy, lost or in dire straits. Instead, it could be exactly the cohort that leads the transition of our economy away from the making-stuff economy of the 20th century and toward the ideas economy of the 21st.

The employment picture for young people without a college degree is different: they are being left further behind. According to the BLS, more than 30 percent of recent high school graduates who aren’t in college are unemployed, and the numbers are worse for those who dropped out of high school. Among African-Americans without a college degree, especially those under the age of 20, the unemployment rate approaches 40 percent. African-Americans, especially males, also have higher incarceration rates, and many states and companies impose punitive rules that make employment extremely challenging for anyone with a prison record.

The Hispanic population faces similar, albeit slightly less acute, numbers. But these are not indications of a breakdown of labor markets. They are proof that social policies, and a shift in labor markets toward rewarding different and newer skill sets, are hitting these populations, especially young men without college degrees, extremely hard.

In the United States, youth unemployment is not quite what it seems. It is not a simple sign of how bad the economy is. Youth unemployment is actually a sign of ambition and expectation. Young people aren’t part of a generation of despair, but rather a generation determined not to settle. That may not always be realistic, but it is a vital fuel to propel our society forward.

PHOTO: A woman opens a glass door with a “Now Hiring” sign on it as she enters a Staples store in New York March 3, 2011. REUTERS/Lucas Jackson

Tweeting our way forward Mon, 11 Nov 2013 18:21:26 +0000

Twitter’s initial public offering last week was everything that Facebook’s botched offering a year and a half ago was not: the stock was reasonably priced; management wooed investors; and the company promised neither the moon nor the stars. It was rewarded with a substantial amount of cash raised, a stock that rose more than 70 percent, and a valuation of $25 billion.

Though shares pulled back sharply — and predictably — the day after its IPO, Twitter has now joined the pantheon of leading social media companies. It has yet to make a profit, but unlike the 1990s Internet comets it is routinely compared to, it is making substantial revenue (on pace for just under $600 million this year). That is substantially less than Facebook was making when it went public ($3.7 billion), but more than LinkedIn was generating when it went public in 2011 (estimated at $220 million).

That said, at its IPO Twitter was valued more highly than either Facebook or LinkedIn at the time of their public offerings. In that sense, Twitter’s reception raises a vital question: are these companies doing more than making their founders and investors rich? Are they doing more than satisfying some niche need of their customers? Are they, in short, changing the world the way they claim? Or is that claim just a useful marketing device, one that makes otherwise pedestrian businesses appear far grander and convinces investors to pay more than they would for equivalent businesses in more prosaic industries?

I’ve been wondering about this question for several years, and for now, it remains open. The hype and draw of social media in its many and various forms is undeniable. Whether it is the Twitter IPO, Yahoo’s $1 billion purchase of Tumblr, or the panoply of new companies that pop up in Silicon Valley and NYC’s Silicon Alley, these companies have buzz, and they also generate income. Because so many of them serve as new media companies, occupying the same general space as journalists, they garner attention. Yahoo’s Marissa Mayer gets substantial press, far more than the chief executive officers of companies many times larger. The same can be said for Twitter and, before it, Groupon, Zynga and a host of others whose size was modest compared to many public companies, but whose profile was anything but.

Ask denizens of the Valley what they think, and they’ll say that companies like Twitter command premiums and generate buzz because they are transformative. They are transformative the way that Apple, Google, Microsoft and Oracle were transformative. They change the way consumers and businesses live and function, and they make it possible for people to connect ever more seamlessly to the products, services and people that they wish to and need to. Or so the argument goes.

Many people outside the Valley view these claims with skepticism, hearing an echo of 1990s tech utopianism. And they are right — to a point. The problem with the tech utopianism of the 1990s wasn’t the utopian part; it was the speculative bubble. In that respect, the problem with tech in the 1990s wasn’t the technology. It was instead a toxic mix of pop culture and Wall Street that produced such overweening, tulip-mania valuations.

And in fact, the actual innovations of technology over the past twenty years have made businesses more productive and unlocked the potential of individuals. Oracle has more than lived up to the grandiosity of its vision, and has become the digital highway for vast swaths of the world. Google is, of course, Google, and numerous smaller companies known only to the technorati are making possible everything from Big Data to the next wave of smart phones.

The scale of such potential is barely captured by official productivity statistics. As MIT’s Erik Brynjolfsson and others have demonstrated, the “free goods” of the Internet such as Google, Facebook and now Twitter may be adding hundreds of billions of dollars every year to collective output without that output registering in official GDP numbers. Twitter is still amorphous in its effects, but it clearly allows for a staccato conversation about vital information, acting as a more robust version of the instant messaging that Wall Street traders found so useful in conducting business a decade ago.

In a world where ideas and information are increasingly the coin of the realm, navigating data and manipulating it have tangible value that businesses and individuals will pay for. Of course, they will also seek such services for free if they can, which increases the business challenge for companies serving those needs. Would you pay for Facebook, Google or Twitter? Advertisers will pay for your attention, and for your data, but should you have to? We already pay for cable television and Internet access, so why not for services such as Twitter and Facebook that facilitate how we navigate our lives digitally? That hasn’t been an issue so far, but it is hard to imagine that it will not be.

Given how new these tools are, there is no way to make a definitive case for whether we are wasting time online or whether these new services are a key element in the next great flourishing of business and creativity. How one answers is very much a litmus test for how one feels about the future. If you believe that we are capable of creating transformative technologies that also constitute viable businesses, you’ll likely agree that while the current social media efflorescence may have its share of hype and greed, we are, on balance, living through the next great digital transformation. If instead you believe that we are in a downward trend of decreasing wages, irrelevant technology and a technology sector based on artifice, you’ll likely think that each new Twitter brings us one step closer to a massive correction and contraction.

Twitter itself may be a fad or a transformative force, but it requires some thick ideological blinders to see it as a harbinger of 1990s bubble-dom, or as an example of greed, hubris and sheer silliness. In toto, these services and technologies are connecting people to information, ideas, products and opportunities. That is the very heart of unlocking potential, the digital equivalent of the railroad and the telegraph. It doesn’t come without excess, nor without leaving some initial investors burnt. But it produces a new way for humans to spend their time, money and energy. And if that is a bandwagon, I’m jumping on.

PHOTO: The Twitter Inc. logo is displayed on screens prior to its IPO on the floor of the New York Stock Exchange in New York, November 7, 2013. REUTERS/Lucas Jackson

HealthCare.gov is just the beginning Fri, 01 Nov 2013 20:52:53 +0000

The Obamacare blame game is in full swing, and without other news to fill pages and airtime, it’s likely to continue for some time. Attention is shifting from the myriad problems with the official website, and toward the health plans that are being canceled, even though President Obama promised that they would not be.

But the longer-term story isn’t the rollout and its many severe glitches. No one recalls whether the first batch of Social Security checks was sent on time in the late 1930s. The story that will matter, and linger, is that the Affordable Care Act was the first major law implemented almost entirely online. It is the template for the future, and rather than using its launch as an excuse to renew attacks on the law, we need to learn what we can from it, because, like the law or not, it is part of the next wave of government.

The past two weeks have been filled with various individuals testifying to Congress about the design and implementation of HealthCare.gov, the web portal that allows individuals to access the new health plans and exchanges. The tenor of these hearings, convened by the Republican-controlled House, is that the website’s design exposed the fundamental failings of the law and the incompetence of government. But what has actually been exposed is that the U.S. government has not yet made the transition to a digital age. While the administration could have and should have done far better, the reasons for the failure have less to do with a flawed process than with a system currently ill-designed for this type of legislation.

It’s safe to say that Congress has never before passed a federal law whose primary mode of delivery is a web portal that will be used by tens of millions of people. And not just one portal, but a portal that serves as a gateway to numerous state healthcare exchanges along with the federal exchanges; a portal that must link up newly designed web pages and interfaces with legacy systems stretching from the Internal Revenue Service to the Veterans Administration to the Medicare and Medicaid systems, none of which are easily compatible or speak the same language.

Many in the tech community have tried to analyze what went wrong with the launch. Some think the government should have chosen agile development teams rather than low-bid contractors. The site was also insufficiently tested before launch, yet it went live anyway because of political considerations. And because the site’s code is not public, even savvy tech-heads have been limited in their ability to fully explain the many problems.

What is evident, however, is how inexperienced the federal government is at developing complicated technology systems outside the Defense Department. Testing is certainly a major issue. Whenever a tech company, large or small, releases a new version of software, it does so after months of assiduous testing for bugs and glitches in a beta version. Even then, the more complex the program, the more problems there are. Microsoft has for years had a reputation for releasing programs that are still flawed despite months of running the code through its paces. Some critics have faulted the administration for similar sloppiness, but in truth, the federal government didn’t have the option of doing this kind of beta testing.

Imagine the political blowback if an early version of the site had been tested publicly and then scrutinized by adversaries. They would have used the glitches to build a compelling case for delaying implementation. Public testing only works when there is some consensus on what the outcome should be, which in almost all cases is the actual release of the product. If everyone had agreed that the healthcare law was a good thing and required a first-tier website, the site could have been beta tested extensively to make it better. But when a fair number of people would use the flaws revealed during testing as a way to torpedo the project, optimal testing just can’t be done. Given that, it would have been extraordinary if the site had launched without major issues.

So, how can government deliver in a digital world? The British government recently revamped not just its websites but its approach to creating them, adopting software development methods more reminiscent of Silicon Valley: open sourcing, collaboration and smallish teams. The failures of HealthCare.gov should spark similar changes in the U.S. The problem with a partisan system of highly atomized political parties, however, is that what works best for implementing policies is frequently trumped by partisans who want to prevent that implementation. Right now that means Republicans determined to halt Obamacare, but it will likely mean Democrats adopting similar tactics when it suits them.

Very little of the public debate over the launch of HealthCare.gov, including who was responsible for what, is about what will matter going forward. What will matter is how government adapts its procedures to the needs of digital governance, because governance is going digital no matter what happens with HealthCare.gov. (Just ask the NSA, whose spying program is almost entirely digital.)

And yet the United States has a political system ill-suited to making the best use of these new tools. Adversarial politics and the absence of governing coalitions create too toxic an environment for developing robust technology. But the U.S. also has a surplus of the groups and individuals who created this digital sphere in the first place, and they are highly adept at innovating and creating new systems for both the public and private spheres. We certainly have the capability. Whether we have the will remains to be seen.
