Community Indicators for Your Community

Real, lasting community change is built around knowing where you are, where you want to be, and whether your efforts are making a difference. Indicators are a necessary ingredient for sustainable change. And the process of selecting community indicators -- who chooses, how they choose, what they choose -- is as important as the data you select.

This is an archive of thoughts I had about indicators and the community indicators movement. Some of the thinking is outdated, and many of the links may have broken over time.

Friday, May 22, 2009

Notes on the ABTA Conference

I mentioned earlier that I would be speaking at the ABTA Conference. I took a few notes during the sessions that I'd like to share with you.

The good news is, all the presentations are now online. The videos of each session are said to be coming soon. This means you can skip my comments and go straight to the source, if you'd like.

Still here? All right then. I strongly recommend watching the videos when they are available -- this was a fantastic discussion, and I applaud the organizers for bringing all the pieces together. My notes follow the break:

Our first speaker was Congressman Bill Posey. Until the video of his presentation is available, you can watch his short (one-minute) summary of ABTA instead:

Deborah Carstens, director of ABTA, spoke next. She introduced the ABTA Data Reports website, which has information on government expenditures and performance benchmarks for all 50 states and many of the counties within them. She estimated that they have 230,000 tables on the website, and it's still growing (the whole institute is less than a year old). Take a look.

We were next treated to a panel on Policy and Government Accountability, chaired by Gary VanLandingham, Director of the Office of Program Policy Analysis and Government Accountability for the Florida Legislature. Gary suggested that talking about using performance data in budgets is not a new idea -- in 1918-19 folks were talking about the same thing -- but that adoption has been slow. Currently, 39 states collect data, and 22 legislatures say they use the data in developing budgets. The problems are twofold: there's a cultural divide between rational data-gathering folks and politicians, and there's a communication problem in sharing that data in useful and meaningful ways. (Gary's PowerPoint is available here.)

Judy Zelio, recently retired from the National Conference of State Legislatures, spoke next. She has just published some work on this issue with the IBM Business of Government website. Her PowerPoint presentation is also available. Her point was that accountability means a lot of different things to a lot of different people. If you have cooperation between the executive and legislative branches, you can move accountability forward much more quickly than if there's discord. In late April, NCSL reported that states are facing a combined $300 billion shortfall, evidence of a crisis in state revenues, and that FY2010 funding looks to be down 18 percent with no expected improvements in FY2011.

Jean Vandal presented without a powerpoint, so you'll have to wait for the video to be put online to get a fuller sense of her remarks. She's a Special Assistant with the Louisiana State University system with a great deal of experience in state government. She's serving on the steering committee of the Comparative Performance Measurement Project, a cooperative effort of the Urban Institute and NCSL. She said that Louisiana's move to performance-based budgeting was "uniquely designed" for Louisiana. She listed three keys to their effort:

1. This is hard work. It takes staff and resources, with an emphasis on good policy wonks. They used agency strategic plans to select the performance measures, which meant that the agencies couldn't attack or disavow the data -- they had already identified the measures as important and were supposedly using the data. The first two years of the project they agreed not to use the performance measures to issue findings against an agency to allow for auditing of processes and improving data collection.

2. The second key is that this work requires a champion. Someone needs to be pushing forward on the issue. It will take time and so needs someone committed to the effort for the long haul.

3. The third key is that the effort must be sustained. While it is hard to start, it is even harder to keep going. Bureaucrats will try to outlast politicians. We need to have bureaucrats see the effort as positive -- as a way to tell their story and get credit for what they are accomplishing -- not just punitive. The annual budget cycle makes it tough to have broad outcome measures. Some issues cross agency boundaries and involve shared responsibilities. It's hard to assign some of these (like decreasing poverty) as solely one agency's responsibility.

Even with these three keys in place, prepare for the unexpected. Katrina and Rita destroyed trendlines and targets. Unit costs and comparability are tough, as is capturing all the costs when efforts cross agency lines. It takes concentrated effort and care to make sure the information is presented in a fashion that will inform the public.

The panelists answered questions -- some highlights from their responses follow:

Budget decisions are about values. Performance measures will inform decisions, but never make the decisions -- we aren't the ones who are elected. Good performance measurement systems help the elected officials make the difficult political choices.

Unit costs are only part of a broader picture. They don't say how effectively you're performing, just how cheaply. We need both the costs and the outcomes/results.

The performance management system has to be part of a long-term effort. You have to link results to costs, or else we're left with a race to the bottom. At the same time, we can't define success so narrowly that we guarantee failure.

Florida Performs is a good example of being general enough to give the broad picture without too high a level of detail. We need systems that can paint the big picture and then allow for drilling down to finer levels of detail if necessary. Too much detail up front only muddies the waters.

That got us to 10:30. The rest of my notes will have to follow in another post.
