Community Indicators for Your Community

Real, lasting community change is built around knowing where you are, where you want to be, and whether your efforts are making a difference. Indicators are a necessary ingredient for sustainable change. And the process of selecting community indicators -- who chooses, how they choose, what they choose -- is as important as the data you select.

This is an archive of thoughts I had about indicators and the community indicators movement. Some of the thinking is outdated, and many of the links may have broken over time.

Saturday, November 3, 2007

Happiness, Gender, and Statistics

Over the past month or so, there's been a series of articles in the traditional media and the blogosphere about a couple of papers suggesting a "happiness gap" between men and women. I won't rehash the arguments, though I will link to some of the more interesting articles at the end of this post. What should interest community indicators practitioners, I suspect, are the potential pitfalls in telling stories with data.

The controversy began with this New York Times article that attempted to tell the stories behind this article (PDF) and this one. It quickly spread to a couple of my favorite blogs, Language Log (for those of you who like linguistics) and the Freakonomics blog (for those who, like me, are easily amused by data).

The core of the issue is that a small change in a trend line led to some strong statements about the relative quality of life of men and women in America. There might be a need to make such statements, and the statements might in and of themselves be true (note how carefully I'm weaseling out of this particular debate in hopes of maintaining some sort of personal happiness!). The problem, however, was that neither the data nor the trend lines appear to support such sweeping generalizations. Here's the lesson learned.
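
For what it's worth, here is one way to sanity-check that kind of claim before telling the story (a minimal sketch with invented numbers, not the papers' actual data; it assumes numpy and scipy): fit the trend line and ask whether the confidence interval around the slope even rules out "no change at all."

```python
# Toy illustration (all numbers invented): does a small shift in a
# trend line support a strong claim? Fit the trend and look at the
# uncertainty around the slope before generalizing.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

years = np.arange(1972, 2007)
# Hypothetical mean happiness scores on a 1-3 survey scale, with a
# tiny downward drift buried in year-to-year sampling noise.
true_trend = 2.2 - 0.001 * (years - years[0])
observed = true_trend + rng.normal(0, 0.02, size=years.size)

fit = stats.linregress(years, observed)

# Approximate 95% confidence interval for the slope.
t_crit = stats.t.ppf(0.975, df=years.size - 2)
lo, hi = fit.slope - t_crit * fit.stderr, fit.slope + t_crit * fit.stderr

print(f"slope = {fit.slope:.4f} per year, 95% CI [{lo:.4f}, {hi:.4f}]")
if lo <= 0 <= hi:
    print("The interval includes zero -- the data don't support a sweeping claim.")
```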

In 1999, we did some work locally on adult literacy. In that report, we used some synthetic estimates of adult functional literacy developed by Stephen Reder using a comparison between the 1990 Census and the National Adult Literacy Survey (NALS) conducted at the same time. Using those two sources, and finding the correlations between the NALS scores and Census responses, he could estimate adult functional literacy rates for every county in the United States. So as part of our project, we duly reported that an estimated 47 out of every 100 adults in our county could not read, write, and do math at a 10th-grade level in 1990. (This was better than the state average, which had 51 out of 100 adults at levels 1 or 2 on a five-point functional literacy scale, meaning they had functional literacy problems.)
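
To give a feel for how that kind of synthetic estimate works, here is a bare-bones sketch of the general technique (the data, predictor names, and coefficients are all made up; this is not Reder's actual model): fit a relationship between measured literacy and demographic characteristics in the survey data, then apply the fitted model to each county's Census profile.

```python
# Bare-bones synthetic estimation (illustrative only; every number
# and variable name here is invented, not Reder's actual model).
import numpy as np

rng = np.random.default_rng(1)

# Step 1: survey data where literacy is directly measured (NALS-like).
# Stand-in predictors per respondent: [1, pct_hs_grad, pct_poverty].
n = 500
X_survey = np.column_stack([
    np.ones(n),
    rng.uniform(0.5, 1.0, n),   # share with a high-school diploma
    rng.uniform(0.0, 0.3, n),   # share below the poverty line
])
low_literacy = (0.9 - 0.6 * X_survey[:, 1] + 0.8 * X_survey[:, 2]
                + rng.normal(0, 0.05, n))

# Step 2: fit the survey relationship (ordinary least squares).
beta, *_ = np.linalg.lstsq(X_survey, low_literacy, rcond=None)

# Step 3: apply the fitted model to Census profiles of counties
# where literacy was never measured directly.
counties = np.array([
    [1.0, 0.82, 0.12],   # hypothetical county A
    [1.0, 0.74, 0.18],   # hypothetical county B
])
estimates = counties @ beta
for name, est in zip("AB", estimates):
    print(f"County {name}: estimated {est:.0%} of adults at levels 1-2")
```

The estimates are only as good as the assumption that the survey relationship holds in every county, which is part of why a single synthetic number deserves careful framing.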

You'd have thought the sky fell in. For the next six years, our "horrific adult illiteracy" was the talk of the town, and that $%^&$#! 47 percent number was used to mean anything the speaker wanted it to. We did have a problem in preparing our workforce for higher literacy skills. We just didn't have the kind of sky-is-falling problem that started showing up in the media over and over again.

The moral I learned from this story is to be careful when telling stories with data. You want to draw community attention to a problem, but you don't want to overstate or exaggerate it. That's when the data start to lose their truthfulness and become less helpful in making community decisions or setting priorities.

That was our story with literacy. Here's a new story on happiness and gender, and I suspect the same lesson applies. I think you'll enjoy reading these -- the first two are the technical articles, and the rest are reactions and storytelling.

Mark Liberman provides a timeline of articles here that I thought you might appreciate.
