Community Indicators for Your Community

Real, lasting community change is built around knowing where you are, where you want to be, and whether your efforts are making a difference. Indicators are a necessary ingredient for sustainable change. And the process of selecting community indicators -- who chooses, how they choose, what they choose -- is as important as the data you select.

This is an archive of thoughts I had about indicators and the community indicators movement. Some of the thinking is outdated, and many of the links may have broken over time.

Friday, June 26, 2009

Are your Community Indicators making a difference?

Yesterday I led a lunchtime conversation via webinar on the question, Are your community indicators making a difference? The webinar was sponsored by the Community Indicators Consortium and was a members-only event, and I know several dozen of you were disappointed in not being able to attend. I thought I'd summarize my notes for both the attendees and those who missed the event, and continue the conversation. (Plus you ought to join CIC to not miss out on their next webinar!)


For the webinar, I spoke from the experience of an organization that is currently working on its 25th annual community indicators report. We've seen a generation of community leaders step into leadership roles with our indicator reports already there to guide them. Along the way, we've learned a little bit (through many trials and lots of errors!) about how to tell if your indicators are effective.

I tried to organize my remarks this way:

One topic: Measuring the effectiveness of your indicators project
Two key questions: Who is your intended audience and what are your intended results?
Three meta issues: Design, Timing, and Source
Five areas to measure results

(I know there's no four. Feel free to chime in with what I missed.)

Let's jump to the two key questions: intended audience and intended results. Defining your audience is not easy work, but it is critical for the rest of the discussion. Are you producing your indicators for elected officials? For public officials (the non-elected ones behave differently than those who need to campaign for their positions)? For community activists? For statisticians and data professionals? For chambers of commerce and business groups? For United Ways, community foundations, or other funders of non-profits? For grantwriters? For non-profit organizations and service providers? For everyday citizens? For students? For the media?

You may want everyone to use your indicators. I know the world would be a better place if everyone read and internalized every report I produce. But who is/are your primary audience?

And what do you want them to do with the indicators? Possible intended results include:
  • Inform/ educate/ raise awareness
  • Build shared priorities
  • Shape decision-making
  • Influence budget allocations
  • Define public policy
  • Inspire action
  • Demand accountability
  • Measure performance/outcomes
And there are more possibilities. Before we can deal with the big question -- are your indicators making a difference -- we have to be able to answer these two key questions: who do you want to do what with your data?

In my organization in Jacksonville, the question of indicator effectiveness is driven by a Model of Community Improvement. It's our "theory of change" that explains why we do indicators and what we hope to accomplish with them. I'll describe the model below:

Briefly, we suggest that change begins when we identify what change we want -- we create a vision for the future, based on our shared values in a community. (I know we like to think data are objective, but every indicator we include in our reports is a value judgement, as is every indicator we don't include. Every desired direction in a trend line is a value judgement. Go ahead and begin by articulating the values, instead of assuming implicit agreement on them.)

In order to know where we are as a community in relationship to that vision, we develop indicators. These community indicators then help us determine where we are falling short and what our priorities for action are, and they inform the research, planning, and strategizing processes. The indicators themselves don't tell us what to do -- they are descriptive, not prescriptive. They do tell us where we need to do something, and we suggest that indicators be accompanied by planning processes to determine what to do about the indicators that fall short of our desired expectations.

Plans require action, which is the next step in the model. If we can act ourselves, we do so; if we need to convince others to act, then advocacy is required to get the desired actions.

Actions have consequences; the outcomes or results of those actions then need to be assessed to see if they achieved the desired results. Here is where our indicators come into play again -- are we closer to where we want to be? Based on the indicators, we can determine if we need to reshape our vision, adjust what we're measuring, or go back to the drawing board and develop new plans.

Indicators play two critical roles in our model for community change -- they identify priorities for action, and they assess the results of that action. In order to measure the effectiveness of our indicators, then, we measure how well they serve both of those functions.

This isn't the only possible theory of change, of course. Yours might be quite different. But determining indicator effectiveness has to include some thinking about the model you're using in applying those indicators. Why are you measuring indicators? What difference do you want your community indicators to make?

That moves us from our two key questions to our three meta issues: design, timing, and source.

By design, I mean simply presenting the information so that your intended audience can use it to achieve the intended results. Too often, I'm afraid, we don't think about design that way. We look at what looks cool, what our peers are accomplishing, and what we like to see. We want to present our data in the most impactful way possible -- but many times, we're thinking about what is most impactful to us. And we tend to be different from our targeted audiences.

Elected officials, for example, tend to want the information presented clearly on one printed page in their hand when they need it. Researchers want more detail. Grantwriters need different kinds of data break-outs. Regular citizens need something that's not so intimidating and doesn't make them feel like they're back in math class. Your design has to meet the needs of your audience in a way that allows and encourages them to use the information to achieve the desired outcomes. (On the webinar, I shared a quick succession of indicator reports, both print and web-based, to show the wide variety out there. If you've been reading this blog, you've seen those examples and many more. Not every report needs to look alike -- but to work, they have to meet the intended audience where they are!)

By timing, I mean three things: time of year, update frequency, and data relevancy. The report needs to coincide with the decision cycles it hopes to influence, and the information in it needs to be current enough to influence action. For example, one of our intended audiences is our local United Way's resource allocation team. They need the information in our report to inform their decisions in allocating money to different programs. The report needs to be available before they meet, but not too far before they meet, because the information in the report needs to be as current as possible. They make decisions on an annual basis, so to institutionalize the indicators in the decision-making cycle, they need to be updated annually. If your indicators are out of sync with your intended audience, they won't be used to achieve your intended results -- they become an interesting curiosity, not a decision necessity.

By source, I'm talking about who you are as an organization. When you publish your indicator report, is it seen as trusted information from a trusted place? Take a moment for some painful introspection. In general, data from advocacy organizations are not trusted by people without a shared belief in the cause. If your mission is to tell people to put children first, and you issue a report with indicators in it that say children should come first, your organization's values will cloud the usefulness of that data. Your indicators will not be used by people who don't already believe children should come first.

How open and transparent is your indicator selection process? Who determines which indicators are chosen? Does the community know why you're measuring what you do? How open and transparent is your data review process?

Sometimes we have to choose our role in the community. It is remarkably difficult to be the trusted neutral source for information AND the community advocate for a single position. It almost never works to try to be both.

Once we have dealt with these issues, we can look at how we measure ourselves and the effectiveness of our indicators. There are at least five different areas in which we can look at effectiveness:
  • Explicit use of indicators in information sharing. By this I mean the number of times your indicators are used by other people (media, public officials, other organizations, your intended audience) in talking about the issue. For example, we have been able to track not just the media coverage of our report releases, but the way the indicators have been used over the course of a year to talk about issues, to justify positions, or to advocate for a cause. If the intended result is to raise awareness, you can track how the indicators are being used for that purpose and how often your report is cited, linked to, or quoted.
  • Explicit use of indicators in decision-making. We find our indicators cited in whereas clauses and in public debates when key decisions are made. Sometimes we are asked to present the data to a decision-making body. Sometimes the indicators are cited in justifying decisions. Sometimes people will come to us and thank us for having the indicators available, which helped them prevail in a political decision or in receiving a grant. If your intended result is to influence decision-making, track these. We also survey our intended audience and ask them how they have used the indicators in their decision-making.
  • Institutionalization of indicators in decision-making. This is where the processes for making decisions are built with the data report in mind. This is an important outcome we work towards. It can include policy and budget decision-making, but it can embrace many other things. Our local Leadership Jacksonville program builds the curricula for its four leadership programs with our indicators in mind -- all participants receive a copy of the report, and they are encouraged to use the indicators to better understand the community. Think about who you want to use the indicators, and in what fashion, and then help them design their processes with the indicators as a fundamental/necessary piece of that process. Remember the issue of timing!
  • Cross-disciplinary/cross-institutional priority-setting and collaboration around identified issues. Your indicators can help set the community agenda. What priorities have you identified? Who has embraced those priorities? More importantly, who has stepped out of their silo or comfort zone to step up to a shared community priority identified by your indicators? In our case, we pay attention when the indicators are used by our Chamber or Mayor to tackle an issue that's not traditionally their focus or responsibility, or when multiple groups join together in a common cause identified by the indicators. That's a desired result, and we note that activity.
  • Improvements in the indicators themselves. You measure indicators that you want to improve. They're important, or else you wouldn't measure them. Our model for community improvement demands that we pay attention to what the indicators are telling us -- are we moving closer to the desired goals? If none of your indicators are getting better over an extended period of time, then your report isn't being effective in motivating change.
That's a summary of what we talked about in the webinar. I'm interested in your comments and suggestions to continue the conversation.

3 comments:

  1. Caroline Majors, cmajors@mhsmarchitects.com, June 29, 2009 at 9:57 AM

    Thank you for posting this material! I found the webinar very helpful.

    You spoke briefly about community indicators and advocacy work -- that data from advocacy organizations are generally not trusted. I am currently assisting an advocacy group in their process of developing a set of community indicators. Do you have any recommendations as to how we can, as advocates, report more compelling, trustworthy data?

  2. Our greatest success in raising the importance of an issue is to place it in context with other issues. In other words, we were able to galvanize the community to action around infant mortality because we had previously established that infant mortality was a sentinel indicator for a series of system failures, and that measuring how well those systems were working was a critical piece of understanding the overall quality of life in our community, as important as understanding the economic vitality or environmental sustainability of the city.

    The key wasn't pointing out that infant mortality rates were high. They were, and had been for quite some time. Others had been pointing out the problem. But they had been pointing out the issue from a community health perspective, or from a child advocacy perspective. And so the recommendations for dealing with infant mortality, which involved increasing attention (and funding) for health and/or children's programming, might have been seen as self-serving. This certainly was a critically important issue to the advocates -- but why was it a critically important issue for everyone else?

    Using a tortuous analogy, you may have had a college professor (I know I had several!) who was convinced that their course was the most important thing in your life (and assigned homework, papers, and tests accordingly!). The professor seemed either oblivious to or uninterested in the other demands on your time and attention, and sometimes failed to understand how their priorities fit in with a number of other priorities in your life. As a result, that professor tends to be resented more than respected. Their material may be important, even critical -- after all, you signed up for the course and bought in at least that much to the idea that this was important to you! -- but by failing to recognize the bigger picture, they lost the ability to gain people's enthusiasm. (Side note: I changed majors after dealing with one of these professors. In that case, their unreasonable disrespect of my multiple priorities created an active dislike for a subject that I had previously been pursuing vigorously. That field's loss, community indicators' gain, I suppose.)

    So what to do if you are an advocacy organization wanting to do community indicators? First, a hard self-assessment has to happen -- why are we doing indicators? If it is just to push a particular agenda forward, then we aren't doing indicators, we're researching advertising strategies. And statistics may be the wrong way to go -- see this note on Making It Stick: http://communityindicators.blogspot.com/2007/05/data-storytelling-and-making-it-stick.html

    If we are doing something broader-based, we might want to find a partner or two to help with the source credibility question. The Children's Commission reporting that "Children Are A Priority" -- not news, not interesting. The Children's Commission and the Chamber reporting that "Children Are A Priority" -- much more interesting.

    If finding a partner or outsourcing the indicators to a neutral source isn't possible, then you may need to think about what the indicators might be better suited for. Instead of a persuasive tool, they may be better used as a benchmark of progress -- performance measures in discussing how well the advocacy agenda is moving forward. That's a different function, which suggests a different theory of change. I'd go back to the model in which you're implementing the indicator set and see what function you hope to accomplish, and then see if the source issue affects that function.

    Does that help?

  3. Caroline Majors, cmajors@mhsmarchitects.com, June 29, 2009 at 2:13 PM

    Very much so! Thanks for your input.
