The “success” of workfare when jobs are scarce
By Barbara Kiviat | Thu, 29 Sep 2011 16:51:55 +0000

This year marks the 15th anniversary of the Personal Responsibility and Work Opportunity Reconciliation Act (PRWORA), the Bill Clinton- and Newt Gingrich-led overhaul of cash assistance to poor families with children.* One of the major changes of that law was the addition of work requirements, so that most cash assistance applicants (generally single mothers) couldn’t receive help without heading into the world of market-based work.** When the bill passed, and unemployment was below 5%, there was some concern about what would happen when the economy slowed and jobs weren’t as easy to come by.

We are now finding out. As this graph from the Center on Budget and Policy Priorities shows, as unemployment has skyrocketed, and other social safety net programs like SNAP (a.k.a. food stamps) have seen a surge in participation, Temporary Assistance for Needy Families (TANF) has barely budged.

Some policymakers see this as a sign of “success.” At least that was the word Robert Doar, the commissioner of New York City’s Human Resources Administration, used on Wednesday at this New York University event. Between December 2007 and December 2009, as the number of unemployed people in the state of New York increased by 91%, TANF cases increased by just 4%. Doar is proud of this.

Now, if we’d somehow solved the problem of poor children—the people for whom cash assistance is ultimately intended—then I might agree. But that’s hardly the case. According to the Census Bureau, in 2010, 22.0% of Americans under the age of 18 lived in poverty. In New York City, the figure is 30%.

That works against Doar’s hypothesis that one of the reasons TANF cases haven’t risen is that would-be recipients are getting along by tapping other social welfare programs, such as unemployment insurance. It also illustrates a major misconception about unemployment insurance, which only half of all unemployed workers get, and those coming from low-wage jobs—like the ones cash assistance recipients tend to move into—typically don’t.

A more likely explanation is that eligible people aren’t joining the program. In fact, that’s what the data shows, both in New York and nationally. Before PRWORA, more than 80% of eligible families participated. Today, about 40% do. In many states, benefits have become much stingier, which might help account for the decreased interest—except that the same drop is also seen in states like New York, where the dollar-value of benefits has remained essentially the same since 1996.

What has changed considerably is the process for applying for cash assistance. As policy analyst and TANF expert Bich Ha Pham detailed at the NYU event, an applicant in New York City must now attend 45 days of a 9-to-5 job-search workshop before having an application considered. Aside from the fact that the best way to look for a job is probably not to sit in a room with a bunch of other unemployed people for a month-and-a-half, this structure completely ignores the chaotic reality of being a single mother in financial crisis. (As the Community Service Society has shown, it also ignores the needs of high-school drop-outs, who would probably get a lot more out of a GED program than resume advice.) Indeed, a large proportion of applicants wind up being “non-compliant” during this initial 45-day job search.

The point here is not to bash Doar or his agency.*** The point is to illustrate that we can’t really have a conversation about whether or not linking cash assistance to market-based employment is problematic in a time of high unemployment, because program structure itself is distorting the behavior of would-be cash assistance recipients.

Although, in a way, maybe that is an answer to the question. As sociologist Kathryn Edin and social anthropologist Laura Lein illustrated in their 1997 book Making Ends Meet (and in this shorter paper), the problem never really was that cash assistance recipients didn’t want to work. Indeed, interviews with hundreds of women showed that, depending on the city, between a third and half were working, just not in the formal economy. (Other data show that many cash-assistance recipients face problems that make mainstream employment difficult—more than a quarter have work-limiting physical, mental, or emotional problems, compared with less than 5% of the general population.)

So maybe what we’re learning—should we be able to put aside the overly simplified view of the “deserving” and “undeserving” poor—is that it’s time for another round of welfare reform. But this time what needs to be reformed is how the system goes about understanding the needs and limitations of single working mothers. As Edin and Lein documented, barriers to formal employment include not just balancing work schedules with lone parenting and the added costs of having a job outside of the home (such as day care), but also the realities of low-wage work. Those realities include income volatility, the lack of unemployment insurance should a job be lost, and the lack of benefits that middle-class parents often depend on—such as sick days and the ability to make phone calls from work to check on children.

At the NYU event, even political scientist and PRWORA booster Larry Mead agreed that there is a lot of room for improvement in how work requirements are implemented. Much low-wage work is high-turnover and dead-end. The system, he said, would be much better if it focused not just on job placement, but also on job retention and job progression.

In other words, on reality.


*We typically call this law “welfare reform,” although that’s a bit misleading, since it didn’t address other social welfare programs, such as disability and unemployment insurance, workers’ compensation, Medicare, food stamps, and disaster relief.

**This is a fantastic bit of historical turnabout, since, as Theda Skocpol documents in this book, cash assistance to single mothers originally required women to stay at home to raise their children and not work outside of the home.

***While Doar-bashing isn’t the point, it is tough to avoid, especially when he says things like he finds it “troubling” that an increasing number of food-stamp recipients are working. Troubling, that is, because it indicates people are bilking the system, not because it reflects fundamental breakdowns in the labor market such as the decoupling of productivity gains from wage growth and rampant underemployment.

Cracking down on job-candidate credit checks
Mon, 26 Sep 2011 14:44:01 +0000

Last week, the California legislature sent the governor a bill that would ban most employers from running credit checks on job applicants. If the governor signs the bill into law (which this web site tells us he’s likely to), California will become the biggest get yet for those pushing for such laws around the nation. Is this just what a country full of unemployed people with wrecked credit needs? Or is it, as HR managers have been hollering, a way of hindering them from finding good, upstanding workers?

The back story is as follows. A decade ago, about a third of employers ran credit checks on job applicants; today, some 60% do. HR types (and, of course, the Big Three credit bureaus) argue that credit checks help firms find reliable employees who are unlikely to steal from company coffers. Civil liberties types argue that pre-employment credit checks have a disparate impact on groups that tend to have lower credit scores, like minorities.

The Great Recession is what makes this back-and-forth particularly interesting. Losing a job is one of the fastest ways to wreck your credit. Now, it seems, that same bad credit may hinder you from regaining a steady paycheck and mending your finances. Quite the vicious cycle.

But you’ve also got to feel a little bad for firms. The labor market is full of asymmetric information and while employers often have the upper hand (they know how much other workers get paid, what employees actually contribute to the bottom line, etc.), it can be a very scary thing to go out into the world and pick a person to let into your business.

So who should win the debate? Should firms be banned from using credit checks in the hiring process?

Let’s look at the evidence.

There is a lot of reason to believe that using credit reports to judge candidates will lead to unfair outcomes. Consider, for instance, a case the Department of Labor won against Bank of America, which revealed that by using credit checks in its application process for entry-level jobs, the bank excluded 11.5% of African-American applicants but only 6.6% of white applicants. Who else might reliance on credit reports work to exclude? Well, the major causes of bad credit are things like divorce, large medical bills, and unemployment. So, maybe divorcees, the uninsured, and the currently jobless?
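For the numerically inclined, here is the back-of-the-envelope version of that disparity. A quick sketch in Python, with one assumption flagged up front: the four-fifths rule used below is the EEOC’s general rule of thumb for spotting disparate impact, not a standard the Labor Department case necessarily turned on.

```python
# Back-of-envelope disparate-impact arithmetic using the Bank of America
# figures above. The "four-fifths" (80%) rule is an assumption borrowed
# from EEOC guidance, not something the case itself necessarily relied on.

def selection_rate(excluded_fraction):
    """Fraction of applicants who pass the credit screen."""
    return 1.0 - excluded_fraction

black_pass = selection_rate(0.115)  # 11.5% of African-American applicants excluded
white_pass = selection_rate(0.066)  # 6.6% of white applicants excluded

# The EEOC convention compares selection rates; a ratio below 0.8 is a red flag.
impact_ratio = black_pass / white_pass
print(f"Selection-rate ratio: {impact_ratio:.3f}")

# The same gap looks much starker as a ratio of exclusion rates: the screen
# knocks out Black applicants at nearly twice the rate of white applicants.
print(f"Exclusion-rate ratio: {0.115 / 0.066:.2f}")
```

Notice that the selection-rate ratio comes out around 0.95, above the 0.8 threshold, even though the exclusion-rate ratio is roughly 1.7; which framing you pick matters a great deal to how alarming the numbers look.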

Now, one might argue that while such a situation is unfortunate, it is nonetheless part of a bigger picture. By judging job candidates on debt-to-income ratio, accounts in collection, foreclosures, bankruptcies, and education and medical debt (all things firms report will make them less likely to hire a candidate), employers are helping to ensure that they wind up with good workers.

The only problem is, there isn’t any evidence that credit is an indicator of how reliable a worker will be, or the likelihood that he will embezzle or otherwise steal. As a lobbyist for TransUnion testified in front of Oregon legislators last year: “At this point we don’t have any research to show any statistical correlation between what’s in somebody’s credit report and their job performance or their likelihood to commit fraud.” The state of Oregon has since banned job candidate credit checks.

So have Connecticut, Maryland, and Illinois, joining first-movers Washington and Hawaii. It looks like California will be next. And that’s almost certainly a good thing.

More lessons from paying people to be less poor
By Barbara Kiviat | Thu, 22 Sep 2011 18:40:08 +0000

Back in 2007, New York City began paying members of some 2,400 poor families to do things like get dental check-ups, open savings accounts, hold down jobs, show up for school, and carry health insurance. Cash incentives were meant to get people with complicated, resource-constrained lives to invest in themselves and their children in ways that would ultimately break the inter-generational cycle of poverty.

The effort, which was inspired by “conditional cash transfer” programs abroad, was the first of its kind in the U.S. Now the program is expanding to Memphis, Tenn., as the mayor there announced yesterday.

Conditional cash transfers (CCTs) have a remarkable ability to bring people together—members of both the left and the right hate the idea. Depending on where you stand, CCTs are offensive because 1) policymakers shouldn’t presume to know what people ought to do, or 2) government shouldn’t pay people to do things they should be doing anyway. I find both sides of that debate disingenuous unless they’re paired with an argument to end preferential tax treatment of things like home ownership, retirement savings, and student loans—middle- and upper-class equivalents, except for the fact that they are hidden in the tax code and thus distort perceptions of government spending.

I’m much more interested in knowing how CCTs actually change the lives of the poor. The original New York City experiment—and it was an experiment, with thousands of families in a comparison group—saw mixed results. More visits to the dentist, but no change in middle schoolers going to class. A reduction in the use of expensive financial services like check cashing, but barely a budge in the share of families carrying health insurance.

The Memphis experiment builds on what was learned in New York City. In Tennessee, the structure of payments is simplified, families may turn to staffers for advice on making plans to earn the payments, and money tied to educational outcomes focuses on high schoolers and adults going back for their GED.

But these are changes to program design, not theory. The same two ideas about how CCTs might transform lives are at play. The first notion—that transfers can reduce material hardship and instability—was clearly demonstrated in New York City, where families earned, on average, $3,000 per year. From a report on program results:

Program group members were less likely to be evicted (2.7 percent versus 4.3 percent for control group members), to have utilities shut off (5.6 percent versus 8.7 percent), and to have their phone disconnected (20 percent versus 25 percent). Program group members reported less food insecurity (not having enough food to eat) and were less likely to report having “insufficient food” at the end of the month (15 percent versus 22 percent). They were also less likely to forgo medical care or fill prescription drugs because they did not have enough money (by 3.9 percentage points and 2.1 percentage points, respectively).

The second notion—that increased stability and resources lead to better long-term planning and goal attainment—is still an open question.

Over the summer, MDRC, the evaluation shop in charge of analyzing the New York City experiment, put out a fascinating report based on interviews with 75 families that participated in the program. The report specifically focused on the education component of the program.

While families often spent program money in ways that directly supported education goals—paying for school supplies, extracurricular activities, even a foreign language trip and a home computer—adult and high-school-aged participants alike rarely drew a link between these activities and an ability to reach long-term goals. Make no mistake, the goals were there—and long before CCTs came along. Reading the MDRC report makes quite clear that even parents whose families live paycheck to paycheck want their kids to go to college and get good jobs, and that those kids typically share those aspirations.

So then why didn’t families make a connection between their behavior and their ability to reach those goals?

The MDRC report floats a number of possibilities, but the most compelling one is this: families knew they shouldn’t come to count on the money they were earning through the program. The New York City experiment was designed to last three years, and the families participating knew that. In other words, the program inadvertently replicated some of the very instability it was designed to overcome. The result, from the MDRC report, was that:

[T]he program did not tend to inspire hope that families who were experiencing severe poverty would be able to escape from it. This finding is evident in the way that parents and children described their feelings about the end of the program. The desire to maintain a job in a volatile economy, illness or disability, or a desire to stay home with children made changes in work a difficult prospect for parents. As a result, families did not feel that they were able to replace rewards income with work or with a better-paying job after the program ended, and instead talked about the program as an unusual and lucky period in their lives — one in which they would have extra help in making ends meet and would be able to enjoy some greater comforts. (Several parents, in fact, called the program a “blessing.”)

Does that mean a conditional cash transfer program can’t work in the U.S.? Not at all. But it may mean that we’re not going to prove that it can with short-term experiments.

Everyone into the next shadow banking system
By Barbara Kiviat | Mon, 19 Sep 2011 20:22:17 +0000

Consumer advocates have been worrying for a while now that the rapid rise of reloadable prepaid cards will lead to a two-tier financial system. There will be folks with bank accounts, and then there will be folks with prepaid cards.

Prepaid cards, which consumers (or their employers) load with money for debit-card-like spending, may seem like the perfect solution for the 17 million American adults without a bank account—no carrying around wads of cash, no pesky check-cashing fees—but prepaid cards are hardly little angels. They often come with significant fees, including ones to use ATMs, to put more money on the card, and to close out the account. Plus, they tend not to have handy devices, like account statements, that people with bank accounts rely on.

This has led to calls to more closely regulate the industry.

One of the biggest issues is that, unlike bank accounts, prepaid cards don’t necessarily come with FDIC insurance or the protections of the Electronic Funds Transfer Act. To be clear: many prepaid cards do carry “pass-through” FDIC insurance, and some prepaid cards do fall under Reg E, thus carrying consumer protections like limited liability for lost or stolen cards. But the regulation of prepaid cards is patchwork, much of the compliance is voluntary, and consumers are almost certainly not shopping around based on which cards are covered. In many cases, consumers wouldn’t get to pick which card they use anyway, since their employer or government is the one putting money on the thing.

Typically, when people talk about these facts, the conversation is framed as being about the “unbanked,” the low-income, or those out of the “financial mainstream.”

But maybe it’s time we think a little more broadly about prepaid cards.

Since signing up to guest blog while Felix is on vacation, I’ve started doing things like reading wire stories about what MasterCard executives have to say to investors. This Dow Jones piece provides the following nugget:

MasterCard also is pushing its prepaid card business to spur growth.

Prepaid cards, which traditionally have been offered to low-income or “underbanked” consumers, are also a focus.

Increasingly, interest in prepaid cards “will be driven by banked consumers looking” to divide up and budget their spending, Murphy said.

In other words, prepaid cards are on their way into the financial mainstream.

The MasterCard execs point out a demand-side reason for the expansion: using a series of prepaid cards can be a useful way to set aside different pots of money for different uses. It’s mental accounting come to life. I’ve heard about people doing this—even buying prepaid cards specifically to store them away as savings—but I’m guessing what will have much more of an impact on the size and reach of the prepaid industry is the insane marketing muscle of firms like MasterCard.

Anyway, none of that is necessarily bad. Fifty years ago credit cards were the new kid on the block, and 30 years ago hardly anyone was using an ATM. New financial products can be useful, even life-changing.

But there are real reasons why current mainstream products typically come with substantial consumer protections. If the future is one in which we’re all using prepaid cards, perhaps more than those looking out for the “unbanked” should be paying attention.

It’s not the economists, it’s the economics
Mon, 01 Nov 2010 20:58:20 +0000

There’s been some interesting discussion in response to my earlier post about why we expect too much from economists, although a lot of the comments miss my larger point. What I was trying to say is that economics might not entirely be up to the task of explaining what we generally consider to be economic phenomena because we are overconfident about what the discipline has the ability to account for. You might call this the Freakonomics Fallacy. Whatever it is in the world we are trying to explain—crime, climate change, test scores—economics has the answer.

If this isn’t in fact true, then why would we think it? Again, to underscore a part of my earlier argument that I probably didn’t make forcefully enough: we think economics has all the answers because economics has become our major mode of understanding the social world around us. Charities are social businesses. Policy makers are cost-benefit analyzers. Education is a market.

This has not always been the case. In earlier eras, we were often more likely to understand human behavior and social dynamics through other prisms, such as political science, sociology, psychology, anthropology or biology. Indeed, for better or for worse, many of these fields took a turn being the dominant social science. I’m not saying that one frame gives a more accurate or useful picture of the world than another. Just that each leads to a different way of understanding why things happen the way they do because each comes with its own set of assumptions and simplifications.

Now here is the part I didn’t talk about earlier. Using one frame over another may also lead to changes in perception and behavior. As economist Robert Frank once wrote, “Our beliefs about human nature help shape human nature itself.” I am indebted to Joe Magee for pointing me to this fantastic paper (PDF), which explains how the economic world view might be influencing us to act more in line with its assumptions—such as the primacy of self-interest in how people make decisions. The paper includes a number of great examples, including how the Chicago Board Options Exchange wound up conforming to option-pricing theory and why companies often think layoffs are the path to maximum value. Here’s a more trivial, although particularly salient, illustration that involves people playing the prisoner’s dilemma:

[The] game was called, in one instance, the Wall Street Game and, in the other, the Community Game. This simple priming using different language produced differences in participants’ choice of moves, as well as differences in the moves subjects anticipated from their counterparts. When the game was called the Community Game, “mutual cooperation was the rule. . .and mutual defection was the exception. . . . whereas the opposite was the case in the Wall St. Game” (Liberman et al., 2003: 15). Both participants and those that nominated them did not anticipate the extent to which this simple labeling or naming affected responses, and subjects’ responses to the situation were much more strongly predicted by the name of the situation than by the person’s presumed likelihood and reputation for being cooperative or defecting.

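For readers who haven’t seen the game before, the remarkable part is that the incentives are identical under both names; only the label changes. A minimal sketch, using standard textbook payoff values that I am assuming for illustration (they are not the exact payoffs from Liberman et al.):

```python
# A standard prisoner's dilemma payoff matrix. The values here are the
# usual textbook ones, assumed for illustration; they are not the exact
# payoffs used by Liberman et al. (2003).
# Each entry is (row player's payoff, column player's payoff).
payoffs = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 5),
    ("defect",    "cooperate"): (5, 0),
    ("defect",    "defect"):    (1, 1),
}

# The crux of the priming experiment: both "games" share this exact matrix.
wall_street_game = payoffs
community_game = payoffs
assert wall_street_game == community_game  # identical incentives, different label

# On paper, defection is the dominant strategy: whatever the other player
# does, defecting pays strictly more than cooperating...
for other_move in ("cooperate", "defect"):
    assert payoffs[("defect", other_move)][0] > payoffs[("cooperate", other_move)][0]
# ...yet under the "Community Game" label, mutual cooperation was the rule.
```

In other words, nothing in the formal structure of the game changed between conditions; the label alone moved behavior, which is exactly the point about frames shaping conduct.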

This is not an indictment of the economic world view, nor a way of complaining about how it has won out above all others for all time (it hasn’t). Rather, this is simply a friendly reminder that we have all, to a large extent, adopted this world view as our own—and that has altered both the way we perceive problems, as well as the way we analyze and try to solve them. But this way of understanding the world is, ultimately, only one of many. In certain circumstances it will fail. Economics cannot explain everything that comes our way. But sometimes we’re too enmeshed in economic thinking to see that.

Volcker’s rule on rules
Fri, 29 Oct 2010 17:29:25 +0000

Former Fed chairman Paul Volcker has some advice for financial regulators writing rules to define new limits on banks’ ability to trade for their own accounts: be as vague as possible. At least that’s the message in this WSJ piece by Deborah Solomon (for which, to be upfront, Volcker declined to comment).

At first pass, that sounds a little nuts. If Dodd-Frank means to clamp down on proprietary trading at institutions that receive federal guarantees (like deposit insurance), then why wouldn’t regulations spell out, as specifically as possible, what those banks aren’t allowed to do? Solomon explains:

Mr. Volcker’s concern, according to several people familiar with the matter, is that narrow or prescriptive rules would invite gamesmanship on the part of banks and could allow firms to evade the rule’s intent. Already, some banks and their lobbyists are seeking to sway regulators and encourage them to narrowly define certain types of trading activities, according to government officials.

By being less specific, the logic goes, regulators will better be able to adapt to changing circumstances—and to banks’ tactics. Solomon compares this approach to the one the government already employs in the realm of money laundering. Another good example is insider trading law. It’s never been particularly clear what is, and what isn’t, insider trading. This can lead to bumpy prosecutions, but it does serve the important purpose of preserving flexibility. Leaving a fair amount of case-by-case judgment in the system lets regulators tap what Michael Polanyi called “tacit knowledge,” or the thing Supreme Court Justice Potter Stewart was getting at when he wrote of pornography: “I shall not today attempt further to define the kinds of material I understand to be embraced within that shorthand description; and perhaps I could never succeed in intelligibly doing so. But I know it when I see it.”

In this Bloomberg column, Michael Lewis gives us reason to think this might be a good way to go. He writes:

The banks have no intention of ceasing their prop trading. They are merely disguising the activity, by giving it some other name. A former employee of JPMorgan, for instance, wrote to say that the unit he recently worked for, called the Chief Investment Office, advertised itself largely as a hedging operation but was in fact making massive bets with JPMorgan’s capital. And it would of course continue to do so. JPMorgan didn’t respond to a request for comment.

One conclusion: to have the best chance of ferreting out prop trading, regulators are going to need a lot of leeway in deciding what they go after.

Interestingly, though, Lewis comes to a different conclusion. He doesn’t opt for vague rules, but rather very strict, draconian ones that would change the face of finance more dramatically than most people probably imagine Dodd-Frank doing:

There’s a simple, straightforward way… to construe the Dodd-Frank language, and it would reform Wall Street in a single stroke: to ban any sort of position-taking at the giant publicly owned banks… If that means that Goldman Sachs is no longer allowed to make markets in corporate bonds, so be it. You can be Charles Schwab, and advise investors; or you can be Citadel, and run trading positions. But if you are Citadel you will be privately owned. And if you blow up your firm, you will blow up yourself in the bargain.

If you have little faith in regulators’ ability to keep up with Wall Street innovation* behavior and to use flexible rules to their fullest, then maybe Lewis’s approach is the smarter one. I’m sympathetic to that argument; I’ve voted to chop up overly large and entwined financial institutions before. But I don’t know if at this point that path is politically feasible. Dodd-Frank could have broken up the banks, but it didn’t. And I’m not sure that since the bill passed, the political clout of the be-tough-on-Wall-Street camp has grown.

Yet that camp is, admirably, still fighting. A group of senators, led by Carl Levin, recently wrote a letter to the new Financial Stability Oversight Council, urging regulators to really crack down and not let Dodd-Frank get watered down in the rule-making. It’s a good thing for people to hear, but so is Volcker’s message—that often the toughest rules are the ones that specifically prohibit the least.

*I regret having used this word, so I have gone back and changed it.

Why economists are(n’t) the answer to all our problems
Wed, 27 Oct 2010 13:45:38 +0000

Over at the Curious Capitalist, my former colleague Steve Gandel asks me to react to this NYT article about how economists manage to disagree on such fundamental questions as whether the government should spend more or less money in response to economic malaise. I’ve been perplexed by this sort of thing before. In this post from August, I worried about the influence of ideology, and then decided that maybe the bigger take-away is that we should spend less time listening to economists, who, after all, represent just one possible lens onto the world of human behavior, decision-making and social dynamics:

[T]he economy is as much a product of sociology and policy as it is pure-form economics. Yet we’d not expect a sociologist or a political scientist to be able to write a computer model to accurately capture system-wide decision-making. The conclusion I’ve come to: while economists may have an important perspective on whether it’s time for stimulus or austerity, maybe we should stop looking to them as if they are people who are in the ultimate position to know.

After rereading my post, I started to wonder how economics and its famously flawed assumption of rational behavior came to dominate the discussion. If confidence is such an important part of getting the economy growing again, then why aren’t we taking advice from legions of social psychologists? If multinational corporations are back to profitability but still not adding jobs, then why aren’t we asking the organizational behavior experts for their models?

In search of an answer, I took a cue from Steve: I called Justin. He had all sorts of interesting things to say, like how economists after WWI thought long and hard about why they hadn’t played a larger role in the war effort (ostensibly hoping to do better next time), and how in the 1960s economists moved to get everyone working from the same basic model partly because a unified voice would be more influential. That is to say, economics won out over other social sciences, at least in part, because the discipline got its act together. (Justin may fill in more of the details later, but, if not, you’ve always got his history-packed book to turn to.)

So what do we do now that economics doesn’t, in fact, have all the answers? Well, some of us try to shoe-horn other approaches, like psychology, back into the picture. And some of us denounce academic economics altogether. But most of us just listen to the debate among economists and don’t quite understand how it can be happening because these are the guys and gals who are supposed to know this stuff. We have so completely absorbed the economic world view in so many aspects of our lives—public policy is determined by cost-benefit analysis, doing good in the world has become return on social investment, efficiency has morphed from the best way to reach a goal to the goal itself—that it doesn’t even occur to us that there could be a more illustrative starting point for asking a question or framing a debate.

That’s one idea, anyway. The economists disagree because they don’t have the tools to see the big picture. And most of us can’t see that.

The U.S. Chamber of Commerce is not the same thing as American business
Mon, 25 Oct 2010 21:00:56 +0000

I don’t understand why everyone is so surprised to find out that large corporations are funneling massive amounts of money to the U.S. Chamber of Commerce. Last week’s NYT report has been making the Internet rounds, and while I appreciate the point that the Chamber is much more partisan than its non-profit status would suggest—70 of the Chamber’s 93 midterm campaign ads either support Republican candidates or attack their opponents, despite the Chamber’s promise to the Federal Election Commission that it only talks about issues—there’s also a curious amount of wonderment at big-company donations. Yes, Wall Street firms sent millions of dollars to the Chamber when financial re-regulation was on the table, and the insurance industry got out its checkbook when it was time to talk healthcare reform. Why would anyone be surprised?

The more counterintuitive and telling story, which the Times only flicks at, is how dissatisfied certain businesspeople are growing with the U.S. Chamber. A couple of weeks ago, New Hampshire’s Greater Hudson Chamber of Commerce decided to break ties with the national organization, because, in the words of the Nashua Telegraph:

[I]t felt recent political advertisements by the national chamber in support of specific parties and candidates were in “direct conflict” with the foundation of the Hudson chamber. Jerry Mayotte, executive vice president of the Greater Hudson Chamber of Commerce, said the Hudson group is a nonpartisan organization. He said he can’t remember the last time they chose not to renew their membership.

Last year, the Chamber of Commerce of Eastern Connecticut did the same thing. Tony Sheridan, the group’s president and CEO, recently explained why:

“My issue with the national chamber is their willingness to take a very narrow slice of a piece of complicated legislation – and it’s generally the most negative spin they’re taking, like health care, when we all know that the health-care system is broken – and claim that the sky is falling, instead of using the money to educate people,” Sheridan said.

During financial re-regulation, a number of local and regional chambers, including the South Carolina Small Business Chamber of Commerce and the U.S. Women’s Chamber of Commerce, tried to get out a similar message when it came to the proposed Consumer Financial Protection Agency. In one op-ed, the CEO of the U.S. Women’s Chamber wrote:

The U.S. Women’s Chamber of Commerce disagrees with the U.S. Chamber’s big business scare tactics regarding the benefits of a strong, independent Consumer Federal Protection Agency.  The U.S. Chamber would have small businesses believe that protecting the rights of bank and non-bank lenders to deceive, manipulate and bet against small businesses is good for the economy and good for our future – all evidence to the contrary.

The big take-away: the U.S. Chamber of Commerce is not the same thing as American business. It’s easy for the U.S. Chamber—in fact, it’s easy for any well-funded lobbying group—to say that they speak for an entire population. That’s probably never going to be true. And in the case of the U.S. Chamber, it seems to be less true with each passing day.

The less you know about finance the better
Mon, 25 Oct 2010 11:52:46 +0000

Everywhere you turn these days, some bigwig policymaker is talking about the importance of financial literacy education. Here’s Ben Bernanke doing it. And there’s Tim Geithner and Arne Duncan. Even the President. It’s easy to understand why we feel like we need this, what with all the bad financial decision-making of recent years. The only problem is, there’s a fair amount of evidence that a lot of what we do to teach better financial habits, like courses in high school, doesn’t work. Some research has shown that financial education is more likely to stick if it’s focused on one topic and comes right before a person makes a related decision—learning about mortgages as you’re house shopping, say, or getting a lesson in compounding interest along with your credit card.

But maybe there’s a simpler approach. Maybe we should ignore real-world complexity altogether and just teach people financial rules of thumb.

A presentation at that microfinance conference last week got me going on this train of thought (although I’m by no means the first to ride it). In this experiment, researchers taught one group of small-time entrepreneurs in the Dominican Republic formal accounting, including double-entry bookkeeping, cash and working capital management and investment decision-making. Another group was taught simple rules of thumb, like “keep personal and business accounts separate” and “write everything down.” The results:

People who were offered rule-of-thumb based training showed significant improvements in the way they managed their finances as a result of the training relative to the control group which was not offered training. They were more likely to keep accounting records, calculate monthly revenues and separate their books for the business and the home. Improvements along these dimensions are on the order of a 10% increase. In contrast, we did not find any significant changes for the people in the basic accounting training. It appears that in this context, the rule-of-thumb training is more likely to be implemented by the clients than the basic accounting training.

When I caught up with Greg Fischer to ask what the U.S. consumer-class take-away might be, he was appropriately modest about his findings and hesitated to draw any universal conclusions. I lack such compunction, so let me say that I think this result contains a very important piece of wisdom. People live complicated, busy lives and the learning they are most likely to put to use is that which is simple to remember and implement. In Fischer’s study, some microentrepreneurs received follow-up training at their place of business: an educator stopped by to reinforce concepts and to answer questions. Once this happened, the group that received the formal accounting training applied what they had learned. But unless we want to set up a system in which your high school consumer finance teacher pops back up just in time for your first mortgage, rules of thumb might be the way to go.

And, actually, we already have many of them. We just need to dig them out of the dustbin we tossed them into during the free-money euphoria. For example, don’t spend more than 2 1/2 times your annual salary on a house. And don’t take out more student loan debt than you expect to earn in your first year on the job (assuming you have the option). As Jack Bogle once said: “Your bond position should equal your age. I won’t tell you this is the best investment advice you’ll ever get, but the number of pieces of advice that are worse is infinite.” It’s not terribly complicated to figure out what we need to teach. We just need to jump to it.
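Part of the appeal of these rules is that they are concrete enough to write down as plain arithmetic. Here is a minimal sketch in Python of the three rules above; the function names are my own, and the thresholds simply restate the rules of thumb as given, not any official guidance.

```python
def max_house_price(annual_salary: float) -> float:
    """Rule of thumb: don't spend more than 2.5x your annual salary on a house."""
    return 2.5 * annual_salary

def max_student_debt(expected_first_year_pay: float) -> float:
    """Rule of thumb: don't borrow more than your expected first-year pay."""
    return expected_first_year_pay

def bond_allocation_percent(age: int) -> int:
    """Bogle's rule of thumb: your bond position should equal your age."""
    return age

# A 40-year-old earning $60,000 who expects $45,000 in a first job:
print(max_house_price(60_000))        # 150000.0
print(max_student_debt(45_000))       # 45000
print(bond_allocation_percent(40))    # 40
```

The point isn’t the code, of course; it’s that each rule fits in one line, which is exactly why people can remember and act on it.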

The real revolution in microfinance
Fri, 22 Oct 2010 13:48:22 +0000

People often talk (and write) about how commercialization is changing the nature of microfinance. Yet increasingly it looks like an even more fundamental shift is afoot. Microfinanciers are finally figuring out what their customers want.

The well-worn story of microfinance goes something like this. Lend a poor person in a poor country a little bit of money, and that person can invest in a business—by buying a sewing machine, say, or another cow. Over the long run, that person pulls himself out of poverty with the income generated by his endeavor.

One reason this story involves a loan is that in most countries it’s a whole lot easier to lend money than it is to take deposits. (The latter requires a banking license, which the former doesn’t.) But there’s another reason loan-making is at the center of traditional microfinance: the people who started this work more than 30 years ago assumed that since mainstream banks didn’t lend to poor people, there was a massive, untapped demand for borrowing.

The thing is, no one ever really asked poor people if business loans were the most important financial product they were missing. That’s now starting to change, thanks in part to a recent wave of academic research. As it turns out, poor people lead complicated financial lives and they need money for all sorts of things.

Thursday I was at this conference, where Dean Karlan of Yale talked about research he’s been doing with Jonathan Zinman of Dartmouth. In interviews with microfinance recipients in the Philippines, the pair discovered that some 46% of borrowers used a decent chunk of their business loan to pay down other debt and about 28% spent part of the money on a big household purchase—even though fewer than 4% of people in either category ever admitted this to their bank. (Disclosure: I was at this conference because I am now doing work for the Financial Access Initiative, which co-sponsored the event.)

This sort of finding—which quantifies what many practitioners have long suspected was the case—is having an impact on how microfinanciers go about their business. “We’re an industry built on assumptions, and we’ve gotten to a point where we have to test those,” said Carlos Danel, a co-founder of the Mexican microfinance behemoth Banco Compartamos. “Research is showing us that we actually don’t know a lot about the customers we serve.” That’s why Compartamos is conducting a 4-year study with Karlan and other researchers to find out how customers use microfinance products, and how those products do—or don’t—change their lives.

As Danel put it, microfinance is an industry that was born out of supply—one that came from people thinking about what organizations were capable of doing. Now, he said, the challenge is to figure out what poor people around the world actually need.
