The “success” of workfare when jobs are scarce

Sep 29, 2011 16:51 UTC

This year marks the 15th anniversary of the Personal Responsibility and Work Opportunity Reconciliation Act (PRWORA), the Bill-Clinton- and Newt-Gingrich-led overhaul of cash assistance to poor families with children.* One of the major changes in that law was the addition of work requirements, so that most cash assistance applicants (generally single mothers) couldn’t receive help without heading into the world of market-based work.** When the bill passed, with unemployment below 5%, there was some concern about what would happen when the economy slowed and jobs weren’t as easy to come by.

We are now finding out. As this graph from the Center on Budget and Policy Priorities shows, as unemployment has skyrocketed, and other social safety net programs like SNAP (a.k.a. food stamps) have seen a surge in participation, Temporary Assistance for Needy Families (TANF) has barely budged.

Some policymakers see this as a sign of “success.” At least that was the word Robert Doar, the commissioner of New York City’s Human Resources Administration, used on Wednesday at this New York University event. Between December 2007 and December 2009, as the number of unemployed people in the state of New York increased by 91%, TANF cases increased by just 4%. Doar is proud of this.

Now, if we’d somehow solved the problem of poor children—the people for whom cash assistance is ultimately intended—then I might agree. But that’s hardly the case. According to the Census Bureau, in 2010, 22.0% of Americans under the age of 18 lived in poverty. In New York City, the figure is 30%.

Those figures work against Doar’s hypothesis that one of the reasons TANF cases haven’t risen is that would-be recipients are getting along by tapping other social welfare programs, such as unemployment insurance. That hypothesis also rests on a major misconception about unemployment insurance, which only half of all unemployed workers receive—and which workers coming from low-wage jobs, like the ones cash assistance recipients tend to move into, typically don’t.

A more likely explanation is that eligible people aren’t joining the program. In fact, that’s what the data shows, both in New York and nationally. Before PRWORA, more than 80% of eligible families participated. Today, about 40% do. In many states, benefits have become much stingier, which might help account for the decreased interest—except that the same drop is also seen in states like New York, where the dollar value of benefits has remained essentially the same since 1996.

What has changed considerably is the process for applying for cash assistance. As policy analyst and TANF expert Bich Ha Pham detailed at the NYU event, an applicant in New York City must now attend 45 days of a 9-to-5 job-search workshop before having an application considered. Aside from the fact that the best way to look for a job is probably not to sit in a room with a bunch of other unemployed people for a month and a half, this structure completely ignores the chaotic reality of being a single mother in financial crisis. (As the Community Service Society has shown, it also ignores the needs of high-school dropouts, who would probably get a lot more out of a GED program than resume advice.) Indeed, a large proportion of applicants wind up being “non-compliant” during this initial 45-day job search.

The point here is not to bash Doar or his agency.*** The point is to illustrate that we can’t really have a conversation about whether or not linking cash assistance to market-based employment is problematic in a time of high unemployment, because program structure itself is distorting the behavior of would-be cash assistance recipients.

Although, in a way, maybe that is an answer to the question. As sociologist Kathryn Edin and social anthropologist Laura Lein illustrated in their 1997 book Making Ends Meet (and in this shorter paper), the problem never really was that cash assistance recipients didn’t want to work. Indeed, interviews with hundreds of women showed that, depending on the city, between a third and half were working, just not in the formal economy. (Other data show that many cash-assistance recipients face problems that make mainstream employment difficult—more than a quarter have work-limiting physical, mental, or emotional problems, compared with less than 5% of the general population.)

So maybe what we’re learning—should we be able to put aside the overly simplified view of the “deserving” and “undeserving” poor—is that it’s time for another round of welfare reform. But this time what needs to be reformed is how the system goes about understanding the needs and limitations of single working mothers. As Edin and Lein documented, barriers to formal employment include not just balancing work schedules with lone parenting and the added costs of having a job outside of the home (such as day care), but also the realities of low-wage work. Those realities include income volatility, the lack of unemployment insurance should a job be lost, and the lack of benefits that middle-class parents often depend on—such as sick days and the ability to make phone calls from work to check on children.

At the NYU event, even political scientist and PRWORA booster Larry Mead agreed that there is a lot of room for improvement in how work requirements are implemented. Much low-wage work is high-turnover and dead-end. The system, he said, would be much better if it focused not just on job placement, but also on job retention and job progression.

In other words, on reality.

 

*We typically call this law “welfare reform,” although that’s a bit misleading, since it didn’t address other social welfare programs, such as disability and unemployment insurance, workers’ compensation, Medicare, food stamps, and disaster relief.

**This is a fantastic bit of historical turnabout, since, as Theda Skocpol documents in this book, cash assistance to single mothers originally required women to stay at home to raise their children and not work outside of the home.

***While Doar-bashing isn’t the point, it is tough to avoid, especially when he says things like he finds it “troubling” that an increasing number of food-stamp recipients are working. Troubling, that is, because it indicates people are bilking the system, not because it reflects fundamental breakdowns in the labor market such as the decoupling of productivity gains from wage growth and rampant underemployment.

COMMENT

In some states, such as Colorado, you are expected to enter “workfare” the moment any person living in your residence applies for SNAP (not cash assistance). I learned this lesson the hard way when a member of my household applied for food stamps.

I learned that the county demands that anyone in the household who is not employed 30+ hours a week at a traditional job attend an orientation in which they spend their time working for the county social work office (to earn “your” food stamps) whether or not they personally receive or qualify for benefits.

I went to an orientation with the promise of help finding a traditional job and it was an eye-opening experience. I spent those hours “volunteering” for the county (collating papers) to earn benefits that I never received. I demanded that the person living under my roof rescind the application as I couldn’t afford to comply with the county’s request.

I can’t see how it is legal to force someone to work for the government, for free, especially if he or she does not qualify for the program.

This has only led me to question the wisdom of our government.

How in the world does workfare help the poorest of the poor? People need money to get daycare and transportation. These resources are not supplied.

The cost to taxpayers to monitor food stamp recipients must be insane.

Worse, I can only imagine that replacing government and quasi-governmental employees with unpaid workfare workers is driving down wages for the rest of us.

Maybe it’s time to overhaul welfare entirely or, at least, make it easier for people to start private charities. Private charities are the only way we will be able to help those who truly need it.

Something has got to change.

Posted by TisSheilah

Cracking down on job-candidate credit checks

Sep 26, 2011 14:44 UTC

Last week, the California legislature sent the governor a bill that would ban most employers from running credit checks on job applicants. If the governor signs the bill into law (which this web site tells us he’s likely to), California will become the biggest get yet for those pushing for such laws around the nation. Is this just what a country full of unemployed people with wrecked credit needs? Or is it, as HR managers have been hollering, a way of hindering employers from finding good, upstanding workers?

The back story is as follows. A decade ago, about a third of employers ran credit checks on job applicants; today, some 60% do. HR types (and, of course, the Big Three credit bureaus) argue that credit checks help firms find reliable employees who are unlikely to steal from company coffers. Civil liberties types argue that pre-employment credit checks have a disparate impact on groups that tend to have lower credit scores, like minorities.

The Great Recession is what makes this back-and-forth particularly interesting. Losing a job is one of the fastest ways to wreck your credit. Now, it seems, that same bad credit may hinder you from regaining a steady paycheck and mending your finances. Quite the vicious cycle.

But you’ve also got to feel a little bad for firms. The labor market is full of asymmetric information, and while employers often have the upper hand (they know how much other workers get paid, what employees actually contribute to the bottom line, etc.), it can be a very scary thing to go out into the world and pick a person to let into your business.

So who should win the debate? Should firms be banned from using credit checks in the hiring process?

Let’s look at the evidence.

There is a lot of reason to believe that using credit reports to judge candidates will lead to unfair outcomes. Consider, for instance, a case the Department of Labor won against Bank of America, which revealed that by using credit checks in its application process for entry-level jobs, Bank of America excluded 11.5% of African-American applicants, but only 6.6% of white applicants. Who else might reliance on credit reports work to exclude? Well, the major causes of bad credit are things like divorce, large medical bills, and unemployment. So, maybe divorcees, the uninsured, and the currently jobless?

Now, one might argue that while such a situation is unfortunate, it is nonetheless part of a bigger picture. By judging job candidates on debt-to-income ratio, accounts in collection, foreclosures, bankruptcies, and education and medical debt (all things firms report will make them less likely to hire a candidate), employers are helping to ensure that they wind up with good workers.

The only problem is that there isn’t any evidence that credit history is an indicator of how reliable a worker will be, or of the likelihood that he will embezzle or otherwise steal. As a lobbyist for TransUnion testified in front of Oregon legislators last year: “At this point we don’t have any research to show any statistical correlation between what’s in somebody’s credit report and their job performance or their likelihood to commit fraud.” The state of Oregon has since banned job candidate credit checks.

So have Connecticut, Maryland, and Illinois, joining first-movers Washington and Hawaii. It looks like California will be next. And that’s almost certainly a good thing.


More lessons from paying people to be less poor

Sep 22, 2011 18:40 UTC

By Barbara Kiviat

Back in 2007, New York City began paying members of some 2,400 poor families to do things like get dental check-ups, open savings accounts, hold down jobs, show up for school, and carry health insurance. Cash incentives were meant to get people with complicated, resource-constrained lives to invest in themselves and their children in ways that would ultimately break the inter-generational cycle of poverty.

The effort, which was inspired by “conditional cash transfer” programs abroad, was the first of its kind in the U.S. Now the program is expanding to Memphis, Tenn., as the mayor there announced yesterday.

Conditional cash transfers (CCTs) have a remarkable ability to bring people together—members of both the left and the right hate the idea. Depending on where you stand, CCTs are offensive because 1) policymakers shouldn’t presume to know what people ought to do, or 2) government shouldn’t pay people to do things they should be doing anyway. I find either side of that debate disingenuous unless it is paired with an argument to end the preferential tax treatment of things like home ownership, retirement savings, and student loans—middle- and upper-class equivalents, except for the fact that they are hidden in the tax code and thus distort perceptions of government spending.

I’m much more interested in knowing how CCTs actually change the lives of the poor. The original New York City experiment—and it was an experiment, with thousands of families in a comparison group—saw mixed results. More visits to the dentist, but no change in middle schoolers going to class. A reduction in the use of expensive financial services like check cashing, but barely a budge in the share of families with health insurance.

The Memphis experiment builds on what was learned in New York City. In Tennessee, the structure of payments is simplified, families may turn to staffers for advice on making plans to earn the payments, and money tied to educational outcomes focuses on high schoolers and adults going back for their GED.

But these are changes to program design, not theory. The same two ideas about how CCTs might transform lives are at play. The first notion—that transfers can reduce material hardship and instability—was clearly demonstrated in New York City, where families earned, on average, $3,000 per year. From a report on program results:

Program group members were less likely to be evicted (2.7 percent versus 4.3 percent for control group members), to have utilities shut off (5.6 percent versus 8.7 percent), and to have their phone disconnected (20 percent versus 25 percent). Program group members reported less food insecurity (not having enough food to eat) and were less likely to report having “insufficient food” at the end of the month (15 percent versus 22 percent). They were also less likely to forgo medical care or fill prescription drugs because they did not have enough money (by 3.9 percentage points and 2.1 percentage points, respectively).

The second notion—that increased stability and resources lead to better long-term planning and goal attainment—is still an open question.

Over the summer, MDRC, the evaluation shop in charge of analyzing the New York City experiment, put out a fascinating report based on interviews with 75 families that participated in the program. The report specifically focused on the education component of the program.

While families often spent program money in ways that directly supported education goals—paying for school supplies, extracurricular activities, even a foreign language trip and a home computer—both adult and high-school-aged participants didn’t often draw a link between these activities and an ability to reach long-term goals. Make no mistake, the goals were there—and long before CCTs came along. Reading the MDRC report makes quite clear that even parents whose families live paycheck to paycheck want their kids to go to college and get good jobs, and that those kids typically share those aspirations.

So then why didn’t families make a connection between their behavior and their ability to reach those goals?

The MDRC report floats a number of possibilities, but the most compelling one is this: families knew they shouldn’t come to count on the money they were earning through the program. The New York City experiment was designed to last three years, and the families participating knew that. In other words, the program inadvertently replicated some of the very instability it was designed to overcome. The result, from the MDRC report, was that:

[T]he program did not tend to inspire hope that families who were experiencing severe poverty would be able to escape from it. This finding is evident in the way that parents and children described their feelings about the end of the program. The desire to maintain a job in a volatile economy, illness or disability, or a desire to stay home with children made changes in work a difficult prospect for parents. As a result, families did not feel that they were able to replace rewards income with work or with a better-paying job after the program ended, and instead talked about the program as an unusual and lucky period in their lives — one in which they would have extra help in making ends meet and would be able to enjoy some greater comforts. (Several parents, in fact, called the program a “blessing.”)

Does that mean a conditional cash transfer program can’t work in the U.S.? Not at all. But it may mean that we’re not going to prove that it can with short-term experiments.


Everyone into the next shadow banking system

Sep 19, 2011 20:22 UTC

By Barbara Kiviat

Consumer advocates have been worrying for a while now that the rapid rise of reloadable prepaid cards will lead to a two-tier financial system. There will be folks with bank accounts, and then there will be folks with prepaid cards.

Prepaid cards, which consumers (or their employers) load with money for debit-card-like spending, may seem like the perfect solution for the 17 million American adults without a bank account—no carrying around wads of cash, no pesky check-cashing fees—but prepaid cards are hardly little angels. They often come with significant fees, including ones to use ATMs, to put more money on the card, and to close out the account. Plus, they tend not to have handy features, like account statements, that people with bank accounts rely on.

This has led to calls to more closely regulate the industry.

One of the biggest issues is that, unlike bank accounts, prepaid cards don’t necessarily come with FDIC insurance or the protections of the Electronic Funds Transfer Act. To be clear: many prepaid cards do carry “pass-through” FDIC insurance, and some prepaid cards do fall under Reg E, thus carrying consumer protections like limited liability for lost or stolen cards. But the regulation of prepaid cards is patchwork, much of the compliance is voluntary, and consumers are almost certainly not shopping around based on which cards are covered. In many cases, consumers wouldn’t get to pick which card they use anyway, since their employer or government is the one putting money on the thing.

Typically, when people talk about these facts, the conversation is framed as being about the “unbanked,” the low-income, or those out of the “financial mainstream.”

But maybe it’s time we think a little more broadly about prepaid cards.

Since signing up to guest blog while Felix is on vacation, I’ve started doing things like reading wire stories about what MasterCard executives have to say to investors. This Dow Jones piece provides the following nugget:

MasterCard also is pushing its prepaid card business to spur growth.

Prepaid cards, which traditionally have been offered to low-income or “underbanked” consumers, are also a focus.

Increasingly, interest in prepaid cards “will be driven by banked consumers looking” to divide up and budget their spending, Murphy said.

In other words, prepaid cards are on their way into the financial mainstream.

The MasterCard execs point out a demand-side reason for the expansion: using a series of prepaid cards can be a useful way to set aside different pots of money for different uses. It’s mental accounting come to life. I’ve heard about people doing this—even buying prepaid cards specifically to store them away as savings—but I’m guessing what will have much more of an impact on the size and reach of the prepaid industry is the insane marketing muscle of firms like MasterCard.

Anyway, none of that is necessarily bad. Fifty years ago credit cards were the new kid on the block, and 30 years ago hardly anyone was using an ATM. New financial products can be useful, even life-changing.

But there are real reasons why current mainstream products typically come with substantial consumer protections. If the future is one in which we’re all using prepaid cards, perhaps more than those looking out for the “unbanked” should be paying attention.

