Community Indicators for Your Community

Real, lasting community change is built around knowing where you are, where you want to be, and whether your efforts are making a difference. Indicators are a necessary ingredient for sustainable change. And the process of selecting community indicators -- who chooses, how they choose, what they choose -- is as important as the data you select.

This is an archive of thoughts I had about indicators and the community indicators movement. Some of the thinking is outdated, and many of the links may have broken over time.

Saturday, September 20, 2008

Comparables, Context, and Community Indicators

As I write this, I'm at cruising altitude looking down as the morning sun highlights land-use development patterns that look remarkably similar to the ones I saw yesterday from another airplane heading toward a completely different part of the country. (I'm spending much too much of my time in airplanes these days.) So when I look down from above the clouds and can't tell if I'm above Tallahassee or Thomasville, my mind naturally turns to ... community indicators.

More specifically, I start thinking about some comments yesterday about comparability and scalability of indicators, especially as they relate to the development of the State of the USA project. The Community Indicators Consortium was hosting a conversation between SUSA and community indicators practitioners, held at the Urban Institute offices in D.C. (special thanks to Tom Kingsley and Kathy Pettit of the National Neighborhood Indicators Partnership for providing meeting space and refreshments). The discussion flowed easily, as Charlotte Kahn of the Boston Indicators Project facilitated exploration into civic engagement strategies, technology and tools, and indicator development. I deeply appreciate the opportunity to participate in such a thoughtful discussion.

One challenge communities face as they design their indicator efforts is that of comparability. If the local unemployment rate is 5.5 percent, someone in one community asked me, is that good or bad? That's not an easy question to answer. I think we've talked about this before, but I'll recap the issues around comparability and context.

One option for communities to answer “is that good or bad?” is to develop a peer comparison set: a set of roughly similar communities, at least in demographics or other key points, against which to compare one's own indicators. This poses several challenges, including selecting which communities could act as reasonable peer comparisons, finding indicators that are measured the same way throughout the peer community set, and avoiding significant confounding factors that make the comparisons irrelevant or problematic. For example, if a peer community just faced the closing or transfer of a large employer, that spike in unemployment may not be a fair comparison. At the end of the day, however, using peer comparisons doesn't answer “is it good or bad?” It can only answer “is it better or worse than someone else?” The danger is the TGFM effect (Thank Goodness for Mississippi): there's always somebody doing worse, and so your indicator may be met with complacency, depending on the choice of comparables. In fact, things might be getting worse in your community, but as long as things are getting relatively more awful somewhere else, you can feel pretty good about doing poorly. Not an ideal recipe for change.
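To make the peer-comparison idea concrete, here is a minimal sketch of what such a readout might compute. All community names and unemployment figures are invented for illustration:

```python
# Hypothetical peer-comparison sketch: rank one community's
# unemployment rate against a hand-picked peer set.
# All names and figures below are illustrative, not real data.

peer_unemployment = {
    "Our Community": 5.5,
    "Peer A": 4.8,
    "Peer B": 6.1,
    "Peer C": 5.9,
    "Peer D": 5.2,
}

def rank_among_peers(rates, name):
    """Return (rank, total), where rank 1 = lowest unemployment."""
    ordered = sorted(rates, key=rates.get)
    return ordered.index(name) + 1, len(ordered)

rank, total = rank_among_peers(peer_unemployment, "Our Community")
print(f"Our Community ranks {rank} of {total} peers")  # ranks 3 of 5
```

Note what this computation can and cannot tell you: it answers "better or worse than someone else," never "good or bad," which is exactly the TGFM trap described above.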

The other side is also true: real progress in the local community can be overshadowed by more progress somewhere else, turning celebrations into discouragement. The benefit of comparables, however, is to avoid local complacency about incremental changes when the rest of the world is making rapid progress on an issue. Ken Jones, of the Green Mountain Institute, points out that celebrating small steps toward reducing lead poisoning in children would be a travesty, since most communities have made huge leaps in lead reduction and childhood exposure. Only by examining the context in which these numbers are changing can you get an accurate read on what your progress means.

A second option for communities to answer “is this good or bad?” is to compare themselves against themselves – and against a community vision. This is the decision Jacksonville made with its indicators project 23 years ago. So the question of the unemployment rate becomes “is it better than it was last year?” and “how far away are we from our goal of full employment?” The advantage of this approach is that communities look inward, measuring themselves against their own standards of where they want to be, and can galvanize action or celebrate success accordingly. The disadvantage is that this approach can miss significant national confounding factors out of the control of the local community, or the community might not reach far enough or fast enough for improvement when the rest of the country is moving rapidly toward success.

A third option, and one that Jacksonville is moving toward, is that of using peer communities and national measures as context, not comparison. This means not asking “are we doing better than Nashville?” but instead “what does our trend line look like in the context of the shared movement of peer communities?” The question is not “where do we rank among these cities?” but instead asks what is happening across the country in relationship to the indicator being measured, and how we can better understand local efforts and progress.

This approach allows us to consider the national decrease in teen birth rates or violent crime and see if our local efforts at teen pregnancy prevention are making progress beyond what would have been expected if we were just following national patterns. It allows us to see the local murder rate in context – although it is lower than it was a decade ago, something is happening in our community that makes it more violent than it should be if all else were equal. These are much harder questions to ask, and they require more work to put together the information that provides useful context. But it may be getting easier, if a few new national projects provide the tools for selecting peer comparables as I hope they will.
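The context-not-comparison idea reduces to a simple calculation: given the national trend, what local rate would you expect if your community had simply tracked it, and how far did you beat (or miss) that expectation? A sketch, with invented teen-birth-rate figures:

```python
# Hypothetical sketch: use the national trend as context, not a ranking.
# If teen birth rates fell nationally, how much of the local decline is
# beyond what following the national pattern would predict?
# All figures are illustrative, not real data.

local_start, local_end = 48.0, 40.0        # local rate per 1,000: then, now
national_start, national_end = 50.0, 45.0  # national rate over same period

local_change_pct = (local_end - local_start) / local_start * 100
national_change_pct = (national_end - national_start) / national_start * 100

# Expected local rate if the community had simply tracked the national trend
expected_local = local_start * (national_end / national_start)
excess_progress = expected_local - local_end

print(f"Local change: {local_change_pct:.1f}%")        # -16.7%
print(f"National change: {national_change_pct:.1f}%")  # -10.0%
print(f"Decline beyond national pattern: {excess_progress:.1f} per 1,000")
```

With these numbers, the community's rate is 3.2 per 1,000 lower than the national trend alone would predict, which is the part of the decline local efforts can plausibly claim. The same arithmetic run in reverse flags the murder-rate situation described above: a rate that fell locally but fell less than the national pattern predicts.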

Imagine, if you will, being able to select your own comparable communities from an extensive national dataset, based on the sociodemographic variables you select. That would be useful, yes? Now imagine being able to select comparable communities based on a wider set of indicators than just socioeconomic status or demographics. Imagine putting together different peer sets depending on which variables/indicators are under consideration. Now imagine adding a time dimension to the equation, so that you could look at communities that five years ago were where your community is today on an issue, and then see which communities improved significantly. How much more useful is that to see who's actually solved the problem your community faces today? How exciting would it be to then examine their public policy approaches, their human service programs, or their community initiatives to discover what they did that made a difference, and then to replicate that success in your community?
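The peer-finding-with-a-time-dimension idea sketched in that paragraph can be shown in miniature. This is not any existing tool, just a toy illustration with invented data: find the communities that five years ago looked most like yours does today (on demographics and on the indicator), then rank them by how much they have improved since:

```python
# Hypothetical sketch of the peer-finding idea described above:
# given a national dataset, find communities that five years ago
# looked like yours does today, then see which improved most since.
# All communities and numbers are invented for illustration.

import math

# (community, median_income_5yr_ago, pct_urban_5yr_ago,
#  indicator_5yr_ago, indicator_now) -- lower indicator is better here
dataset = [
    ("City A", 42000, 0.80, 9.0, 5.5),
    ("City B", 61000, 0.95, 6.0, 5.8),
    ("City C", 40000, 0.75, 8.8, 8.5),
    ("City D", 45000, 0.82, 9.2, 6.0),
]

us_today = (43000, 0.78, 9.1)  # our income, pct_urban, indicator today

def distance(row, target):
    """Euclidean distance on crudely rescaled variables."""
    income, urban, indicator = row[1] / 1000, row[2] * 10, row[3]
    t_income, t_urban, t_ind = target[0] / 1000, target[1] * 10, target[2]
    return math.sqrt((income - t_income) ** 2
                     + (urban - t_urban) ** 2
                     + (indicator - t_ind) ** 2)

# Keep the communities that five years ago were closest to where we are now...
peers = sorted(dataset, key=lambda row: distance(row, us_today))[:3]
# ...then rank those peers by how much they have improved since.
improvers = sorted(peers, key=lambda row: row[3] - row[4], reverse=True)
best = improvers[0][0]
print(f"Most instructive peer: {best}")  # Most instructive peer: City A
```

A real implementation would standardize the variables properly (z-scores rather than the ad hoc rescaling above) and let the user choose which variables define "peer" for each indicator, which is precisely the flexibility the paragraph above imagines.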

This is, I think, one of the fascinating new possibilities that scalable, searchable, comparable national indicator data sets will be providing in the next few years. I think it will greatly enhance the capacity of local communities to understand their own indicators in a larger context and to identify successful practices for community change. And it is for community improvement that we measure all this stuff, isn't it? We want things to get better, and we know that they get better faster if we make decisions based on accurate information.

You may think my head's in the clouds. Right now, as I look out the airplane window, you would be right. But that won't keep these tools from being developed, and soon. More to come later ... right now, my tray table needs to be placed in an upright and locked position.

Signing off somewhere above a patchwork America.

