Yesterday I led a lunchtime conversation via webinar on the question, Are your community indicators making a difference? The webinar was sponsored by the Community Indicators Consortium and was a members-only event, and I know several dozen of you were disappointed in not being able to attend. I thought I'd summarize my notes for both the attendees and those who missed the event, and continue the conversation. (Plus you ought to join CIC to not miss out on their next webinar!)
In the webinar, I spoke from the experience of an organization that is currently working on its 25th annual community indicators report. We've seen a generation of community leaders step into leadership roles that have always had our indicator reports there to guide them. Along the way, we've learned a little bit (through many trials and lots of errors!) about how to tell whether your indicators are being effective.
I tried to organize my remarks this way:
One topic: Measuring the effectiveness of your indicators project
Two key questions: Who is your intended audience and what are your intended results?
Three meta issues: Design, Timing, and Source
Five areas to measure results
(I know there's no four. Feel free to chime in with what I missed.)
Let's jump to the two key questions: intended audience and intended results. Defining your audience is not easy work, but it is critical for the rest of the discussion. Are you producing your indicators for elected officials? For public officials (the non-elected ones behave differently than those who need to campaign for their positions)? For community activists? For statisticians and data professionals? For chambers of commerce and business groups? For United Ways, community foundations, or other funders of non-profits? For grantwriters? For non-profit organizations and service providers? For everyday citizens? For students? For the media?
You may want everyone to use your indicators. I know the world would be a better place if everyone read and internalized every report I produce. But who is/are your primary audience?
And what do you want them to do with the indicators? Possible intended results include:
- Inform/ educate/ raise awareness
- Build shared priorities
- Shape decision-making
- Influence budget allocations
- Define public policy
- Inspire action
- Demand accountability
- Measure performance/outcomes
And there are more possibilities. Before we can deal with the big question -- are your indicators making a difference? -- we have to be able to answer these two key questions: who do you want to do what with your data?
In my organization in Jacksonville, the question of indicator effectiveness is driven by a Model of Community Improvement. It's our "theory of change" that explains why we do indicators and what we hope to accomplish with them. I'll include the model below:
Briefly, we suggest that change begins when we identify what change we want -- we create a vision for the future, based on our shared values as a community. (I know we like to think data are objective, but every indicator we include in our reports is a value judgement, as is every indicator we leave out. Every desired direction in a trend line is a value judgement. Go ahead and begin by articulating the values, instead of assuming implicit agreement on them.)
In order to know where we are as a community in relationship to that vision, we develop indicators. These community indicators then help us determine where we are falling short, what our priorities for action are, and inform the research, planning, and strategizing processes. The indicators themselves don't tell us what to do -- they are descriptive, not prescriptive. They do tell us where we need to do something, and we suggest that indicators be accompanied by planning processes to determine what to do about the indicators that fall short of our desired expectations.
Plans require action, which is the next step in the model. If we can act ourselves, we do so; if we need to convince others to act, then advocacy is required to get the desired actions.
Actions have consequences; the outcomes or results of those actions then need to be assessed to see if they achieved the desired results. Here is where our indicators come into play again -- are we closer to where we want to be? Based on the indicators, we can determine if we need to reshape our vision, adjust what we're measuring, or go back to the drawing board and develop new plans.
Indicators play two critical roles in our model for community change -- they identify priorities for action, and they assess the results of that action. In order to measure the effectiveness of our indicators, then, we measure how well they serve both of those functions.
This isn't the only possible theory of change, of course. Yours might be quite different. But determining indicator effectiveness has to include some thinking about the model you're using in applying those indicators. Why are you measuring indicators? What difference do you want your community indicators to make?
That moves us from our two key questions to our three meta issues: design, timing, and source.
By design, I mean simply presenting the information so that your intended audience can use it to achieve the intended results. We don't think about design that way, I'm afraid. We look at what looks cool, what our peers are accomplishing, and what we like to see. We want to present our data in the most impactful way possible -- but many times, we're thinking about what is most impactful to us. And we tend to be different than our targeted audiences.
Elected officials, for example, tend to want the information presented clearly on one printed page in their hand when they need it. Researchers want more detail. Grantwriters need different kinds of data break-outs. Regular citizens need something that's not so intimidating and doesn't make them feel like they're back in math class. Your design has to meet the needs of your audience in a way that allows and encourages them to use the information to achieve the desired outcomes. (On the webinar, I shared a quick succession of indicator reports, both print and web-based, to show the wide variety out there. If you've been reading this blog, you've seen those examples and many more. Not every report needs to look alike -- but to work, they have to meet the intended audience where they are!)
By timing, I mean three things: time of year, update frequency, and data relevancy. The report needs to coincide with the decision cycles it hopes to influence, and the information in it needs to be current enough to influence action. For example, one of our intended audiences is our local United Way's resource allocation team. They need the information in our report to inform their decisions in allocating money to different programs. The report needs to be available before they meet, but not too far before they meet, because the information in the report needs to be as current as possible. They make decisions on an annual basis, so to institutionalize the indicators in that decision-making cycle, the indicators need to be updated annually. If your indicators are out of sync with your intended audience, they won't be used to achieve your intended results -- they become an interesting curiosity, not a decision necessity.
By source, I'm talking about who you are as an organization. When you publish your indicator report, is it seen as trusted information from a trusted place? Take a moment for some painful introspection. In general, data from advocacy organizations are not trusted by people without a shared belief in the cause. If your mission is to tell people to put children first, and you issue a report with indicators that say children should come first, your organization's values will cloud the usefulness of that data. Your indicators will not be used by people who don't already believe children should come first.
How open and transparent is your indicator selection process? Who determines which indicators are chosen? Does the community know why you're measuring what you do? How open and transparent is your data review process?
Sometimes we have to choose our role in the community. It is remarkably difficult to be the trusted neutral source for information AND the community advocate for a single position. It almost never works to try to be both.
Once we have dealt with these issues, we can look at how we measure ourselves and the effectiveness of our indicators. There are at least five different areas in which we can look at effectiveness:
- Explicit use of indicators in information sharing. By this I mean the number of times your indicators are used by other people (media, public officials, other organizations, your intended audience) in talking about the issue. For example, we have been able to track not just the media coverage of our report releases, but the way the indicators have been used over the course of a year to talk about issues, to justify positions, or to advocate for a cause. If the intended result is to raise awareness, you can track how the indicators are being used for that purpose and how often your report is cited, linked to, or quoted.
- Explicit use of indicators in decision-making. We find our indicators cited in whereas clauses and in public debates over key decisions. Sometimes we are asked to present the data to a decision-making body. Sometimes the indicators are cited in justifying decisions. Sometimes people will come to us and thank us for having the indicators available, which helped them prevail in a political decision or in receiving a grant. If your intended result is to influence decision-making, track these uses. We also survey our intended audience and ask them how they have used the indicators in their decision-making.
- Institutionalization of indicators in decision-making. This is where decision-making processes are built with the data report in mind. This is an important outcome we work towards. It can include policy and budget decision-making, but it can embrace many other things. Our local Leadership Jacksonville program builds the curricula for its four leadership programs with our indicators in mind -- all participants receive a copy of the report, and they are encouraged to use the indicators to better understand the community. Think about who you want to use the indicators, and in what fashion, and then help them design their processes with the indicators as a fundamental/necessary piece of that process. Remember the issue of timing!
- Cross-disciplinary/cross-institutional priority-setting and collaboration around identified issues. Your indicators can help set the community agenda. What priorities have you identified? Who has embraced those priorities? More importantly, who has stepped out of their silo or comfort zone to step up to a shared community priority identified by your indicators? In our case, we pay attention when the indicators are used by our Chamber or Mayor to tackle an issue that's not traditionally their focus or responsibility, or when multiple groups join together in a common cause identified by the indicators. That's a desired result, and we note that activity.
- Improvements in the indicators themselves. You measure indicators that you want to improve. They're important, or else you wouldn't measure them. Our model for community improvement demands that we pay attention to what the indicators are telling us -- are we moving closer to the desired goals? If none of your indicators are getting better over an extended period of time, then your report isn't being effective in motivating change.
That's a summary of what we talked about in the webinar. I'm interested in your comments and suggestions to continue the conversation.