Measuring the Performance of the Regional Data Center

by Bob Gradeck

December 14, 2016

Our goal at the Western Pennsylvania Regional Data Center is to make community information easier to find and use. While operating the open data portal is at the core of our work, we are also involved in many different activities in our role as a data intermediary. To better understand how we’re doing, we have developed a series of performance measures, which we describe in this blog post. We have also developed a performance management dashboard to share this information with you. To our knowledge, we are the only open data program to publicly release its usage and performance statistics. We also hope you have a few minutes to provide us with your feedback and share your data stories through our data user survey.

We finalized our core set of performance indicators soon after the open data portal launched last October, and have been developing and refining our data collection methodology and reporting framework since then. We collect data about our performance every day, and our staff review these measures at least once each month. This information helps us learn more about our users, uncover new opportunities for programs and initiatives, and adjust course as necessary.

Our thinking has been influenced by the guidebook “Monitoring Impact: Performance Management for Local Data Intermediaries” by Jake Cowan and Tom Kingsley. The document was written for members of the National Neighborhood Indicators Partnership (NNIP), a community of practice for neighborhood data intermediaries in over 30 cities. The NNIP program is managed by the Urban Institute, and we’ve been a proud member of the network since 2008.

In the guide, Cowan and Kingsley state that data intermediaries like the Regional Data Center have impact through their influence. Data intermediaries provide data, interpretation, services, events, and activities with the goal of positively influencing the behavior of other actors. We count it as a “win” when local actors have developed the ability to find and use information and make informed decisions on programs and policies. Examples of impacts that have been influenced by data include reductions in the number of blighted properties in a neighborhood, improved safety for school students, and reduced injuries and fatalities at dangerous intersections.

The guide identifies four stages in the process of achieving influence through the use of information. In the first stage, information reaches the target audience. In the second stage, users interact with this information. This information causes users to adopt a new or changed mindset in stage three, and in the final stage, users take action as a result of the information.

To structure our performance management process, we developed a template in collaboration with Jake Cowan and staff at the Providence Plan, our NNIP partner in Rhode Island. Built from the guide, the template helped us identify indicators relevant to our project activities and develop a plan for collecting them. The template also allowed us to link our performance management framework with the four stages of reaching influence, as presented below.

  • Stage One: Information reaches intended users: By tracking our website and program statistics, capturing media mentions using Google Alerts, gathering social media usage statistics, and collecting feedback from data users, we are able to determine how we are connecting with our target audience.

We track the following data to determine if information is reaching users:

    • Number of users and sessions on the open data portal
    • Number of outreach meetings, training sessions, presentations, guest lectures, and events, along with attendance at each
    • Media stories about the Regional Data Center, and news stories that incorporate open data from the Regional Data Center
    • Number of high school and university classes that use open data
    • Number of researchers that use open data
    • Number of reports and funding proposals that include open data
    • Social media interactions, measured by Twitter followers and posts
  • Stage Two: Users interact with the information: We rely on website usage statistics, activity measures, and stories provided by our users to assess the degree to which people interact with data and use it in their work.

To understand how users interact with information, we track:

    • Length of time visitors stay on the open data portal, and the average number of pages viewed
    • Dataset downloads, API usage*, and pageviews in total and by dataset/resource
    • Number of comments made on the website
    • Number of data requests made by data users
    • Number of processes and tools incorporating open data
    • Number of organizations that share or use data with residents
    • Technical assistance requests

* We’re still working to distinguish our internal API calls from those made by external users. Data on the dashboard shows total API calls. Most API calls in our statistics are generated by our automated publishing processes.
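One way to approach this separation, sketched below under assumptions, is to filter the portal’s web server access log by client IP address. The `INTERNAL_IPS` set, the log format, and the `/api/` path convention are all hypothetical stand-ins for our actual server configuration, not the dashboard’s real implementation.

```python
import re

# Hypothetical addresses used by automated publishing jobs; the real
# values would come from the portal's server configuration.
INTERNAL_IPS = {"10.0.0.5", "10.0.0.6"}

# Minimal pattern for a common-log-format line: client IP, then the request path.
LOG_LINE = re.compile(r'^(\S+) \S+ \S+ \[[^\]]+\] "(?:GET|POST) (\S+)')

def count_api_calls(log_lines):
    """Split API calls into internal and external by client IP address."""
    internal = external = 0
    for line in log_lines:
        match = LOG_LINE.match(line)
        if not match:
            continue
        ip, path = match.groups()
        if "/api/" not in path:
            continue  # only count API requests, not regular page views
        if ip in INTERNAL_IPS:
            internal += 1
        else:
            external += 1
    return internal, external

sample = [
    '10.0.0.5 - - [14/Dec/2016:10:00:00 -0500] "GET /api/3/action/package_list HTTP/1.1" 200 512',
    '203.0.113.9 - - [14/Dec/2016:10:00:01 -0500] "GET /api/3/action/package_show?id=foo HTTP/1.1" 200 2048',
    '203.0.113.9 - - [14/Dec/2016:10:00:02 -0500] "GET /dataset/foo HTTP/1.1" 200 4096',
]
print(count_api_calls(sample))  # → (1, 1): one internal call, one external call
```

A filter like this undercounts external automation that runs from inside the same network, which is part of why the published numbers currently show totals only.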

  • Stage Three: Users adopt a new or changed mindset: We ask users to share stories of how data has provided them and other members of the community with a new or heightened understanding of important issues. We collect this information in person, online, and now through our data user survey. We also hope to learn about how people are using different tools that provide them with community information as part of this process.
  • Stage Four: Users take action as a result of the information: Whether shared in person, online, through our user survey, or in case studies, we rely on users to tell us their stories of how data has been used to guide actions within their organization or community. We hope to learn how data has been used to:
    • Guide and manage key initiatives
    • Advocate for a particular policy or action
    • Save money and time
    • Obtain funding for community improvement efforts
    • Enable collaboration across departments and organizations

We also track, or are developing systems to track, a number of program output measures related to our activities. This helps us to better understand how we are performing against several internal goals and benchmarks. These measures are essential in tracking project management activity and interpreting results of the performance management data. Our output measures include:

    • Number of blog posts published
    • Number of newsletters published
    • Number of data publishers
    • Number of datasets shared on the open data portal
    • Number of datasets with automated publishing
    • Percent of datasets published on-time*
    • Number of datasets with complete metadata*
    • Number of datasets published with a data license*
    • Number of privacy breaches
    • Number of service outages
    • Number of funding proposals

* Measures marked by an asterisk are still under development.
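To illustrate how a measure like on-time publishing could work once developed, here is a minimal sketch. It assumes each dataset’s metadata records an expected update interval and a last-refresh date; the dataset names, intervals, and dates below are invented examples, not our portal’s actual metadata.

```python
from datetime import date

# Hypothetical records: each dataset's expected update interval in days and
# the date it was last refreshed. Real values would come from portal metadata.
datasets = [
    {"name": "311-requests", "update_days": 30,  "last_updated": date(2016, 12, 1)},
    {"name": "assessments",  "update_days": 365, "last_updated": date(2016, 3, 15)},
    {"name": "crash-data",   "update_days": 90,  "last_updated": date(2016, 8, 1)},
]

def percent_on_time(datasets, today):
    """Share of datasets refreshed within their expected update interval."""
    on_time = sum(
        1 for d in datasets
        if (today - d["last_updated"]).days <= d["update_days"]
    )
    return 100.0 * on_time / len(datasets)

print(percent_on_time(datasets, date(2016, 12, 14)))  # 2 of 3 on time
```

Defining the measure this way makes “on time” depend only on two metadata fields, which keeps the calculation auditable as publishers and datasets are added.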

We invite you to see how we’re achieving our goal of making community information easier to find and use by viewing progress on the “beta” version of our performance management dashboard.

Data users can also share their data story with us through our new data user survey.

We have also published the code for our performance management dashboard on GitHub.