The government’s 119-page civil lawsuit against credit rating agency Standard & Poor’s for allegedly inflating the ratings it gave to residential mortgage-related securities, or RMBS, in the run-up to the crash has removed whatever lingering doubts (there weren’t many!) might have remained about just how problematic the ratings game is. But it also raises a question: Why, in cases of white-collar wrongdoing, is it often the cogs in the wheel that seem to pay the highest price?

Let’s stipulate that there are weird things about this case. To lower its burden of proof, the government is using a 1989 law that is supposed to protect taxpayers from frauds against federally insured financial institutions. The result, as Bloomberg columnist Jonathan Weil has pointed out, is that the government is claiming that some of the very banks — mainly Citigroup — that packaged the securities were also defrauded by the rating agencies.

Plausible? Well, yes, particularly for Citi, where the right hand often doesn’t know what the left hand is doing. And just because the banks fell for their own scam doesn’t mean it wasn’t a scam. But it’s still weird. It’s also weird that the government names some S&P executives but leaves others anonymous. And it’s weird that the government has sued S&P but not Moody’s Investors Service, which at least in outward appearance was equally culpable. (S&P, for its part, has stated that it is “simply false” that it compromised its analytical integrity, and that it has a “record of successfully defending these types of cases, with 41 cases dismissed outright or voluntarily withdrawn.”)

That said, you can’t read the case and feel good about the critical role that the rating agencies continue to play in our markets. At least according to the emails the government released, S&P’s claim that business considerations don’t affect its ratings is just flat-out false. Back in 2004, S&P circulated new criteria for rating structured securities. The agency’s “client value managers,” who were responsible for “managing the commercial relationship with clients,” were to be “consulted for client information and feedback,” which would then be incorporated into the ratings. In a memo, an unnamed executive wrote, “Are you implying that we might actually reject or stifle ‘superior analytics’ for market considerations … does this mean we are to review our proposed criteria changes with investors, issuers and investment bankers?” He never got a response. In January 2005, S&P declined to roll out a new model that would have made it harder to assign triple-A ratings to some CDOs (collateralized debt obligations). An analyst wrote that the model “could’ve been released months ago … if we didn’t have to massage the subprime and Alt A numbers to preserve market share.”

“Our old methodology gave us one single ‘best coin’ that is data driven,” wrote another S&P employee in 2007 as part of a discussion about how best to update models. “But if it turns out to be business unfriendly, we are stuck.” And so on.