Survey data is one of the most valuable, yet invisible, assets of any non-profit. As the communities we serve evolve, we have to be flexible and responsive to feedback to ensure we’re continuing to create positive literacy experiences for families, students, and teachers. Getting self-reported data from families, teachers, Site Leaders, and other seasonal staff adds an extra layer of depth to our data reporting and informs programming decisions with each session. As we continue to grow, it would be hard to keep refining our procedures if we relied only on open-ended feedback.

Surveys help us easily gather and report feedback from a large group of people.

We start the process by creating the survey questions, which are chosen based on what would best inform programming decisions. Surveys cover the important aspects of the program, such as Family Workshops and general program feedback. Population-specific surveys, like the teacher survey, also touch on family and employee engagement.

We send out surveys at three points: before programming, immediately after programming, and 6 months after programming. These surveys overlap substantially, but each also includes questions specific to its point in the cycle. For instance, information from the pre-programming surveys informs programming in real time.

Once all surveys are cleared for distribution, we can send them off! The survey distribution schedule aligns with the programming calendars so that everyone gets surveys at the time that makes the most sense. Because programs tend to start at different times in different cities and districts, we keep careful track of distribution. Automation using Salesforce’s scheduling features helps us keep up with the demand and ensures surveys land in inboxes at appropriate times (so no one gets an email at 3am!).
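As an illustration of that scheduling logic, here is a minimal Python sketch (not our actual Salesforce configuration; the function name, the 9am–5pm send window, and the seven-day lead time are all assumptions for the example) of one way to nudge a send time into a recipient’s local business hours:

```python
from datetime import datetime, time, timedelta
from zoneinfo import ZoneInfo

# Hypothetical send window: weekday mornings through late afternoon, local time.
SEND_START = time(9, 0)
SEND_END = time(17, 0)

def next_send_time(program_start: datetime, tz_name: str, days_before: int = 7) -> datetime:
    """Schedule a survey send `days_before` days ahead of a program's start,
    shifted into local business hours so no one gets an email at 3am."""
    tz = ZoneInfo(tz_name)
    target = (program_start - timedelta(days=days_before)).astimezone(tz)

    if target.time() < SEND_START:
        # Too early in the day: wait until the window opens.
        target = target.replace(hour=SEND_START.hour, minute=0, second=0, microsecond=0)
    elif target.time() > SEND_END:
        # Too late in the day: roll over to the next morning.
        target = (target + timedelta(days=1)).replace(
            hour=SEND_START.hour, minute=0, second=0, microsecond=0
        )

    while target.weekday() >= 5:  # skip Saturday (5) and Sunday (6)
        target += timedelta(days=1)
    return target

# Example: a program starting September 18 in Chicago gets its pre-survey
# about a week earlier, at 9am local time on a weekday.
send_at = next_send_time(datetime(2023, 9, 18, tzinfo=ZoneInfo("UTC")), "America/Chicago")
```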

It’s hard to encourage everyone to fill out surveys, especially when there seems to be a survey for nearly everything.

To counter potential survey fatigue, we offer our hard-working teachers and families a gift card raffle in exchange for survey participation. This is a standard best practice that seems to keep people engaged!

About a week after initial distribution, and after a couple of reminders, it’s time to clean up and analyze the collected data. Systems are imperfect, and strange errors and duplicates are expected, so it’s helpful to have a human comb through the responses to make sure the data is accurate. During this phase, we also compile and code qualitative feedback from open-response survey questions.
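As a rough sketch of what that clean-up can look like in practice (the file name, the respondent_id and submitted_at columns, and the q_ question prefix are illustrative assumptions, not our actual export format), a few lines of pandas catch the most common issues before a human review:

```python
import pandas as pd

# Illustrative column names; the real survey export will differ.
responses = pd.read_csv("family_survey_responses.csv")

# Drop rows that are exact duplicates (e.g., double form submissions).
responses = responses.drop_duplicates()

# When the same respondent submitted more than once, keep only their latest response.
responses["submitted_at"] = pd.to_datetime(responses["submitted_at"])
responses = (
    responses.sort_values("submitted_at")
    .drop_duplicates(subset="respondent_id", keep="last")
)

# Flag rows with no answers at all so a human can review them before removal.
answer_cols = [c for c in responses.columns if c.startswith("q_")]
responses["is_blank"] = responses[answer_cols].isna().all(axis=1)
```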

Once the data is clean, we break down each question domain and summarize results at the district and national levels to get a sense of how well Springboard programming went and what kind of development each population experienced as a result. The results are published in an easy-to-read document for internal staff that highlights key metrics and significant findings.
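For the breakdown itself, a minimal pandas sketch along these lines (the tidy layout, file name, and 1–5 score column are assumptions for illustration, not our actual schema) can produce both district-level and national roll-ups from the same cleaned data:

```python
import pandas as pd

# Hypothetical tidy layout: one row per respondent per question,
# with a numeric score (e.g., a 1-5 Likert response) and a district label.
scores = pd.read_csv("clean_survey_scores.csv")

# District-level summary: mean score and response count per question domain.
district_summary = (
    scores.groupby(["district", "question_domain"])["score"]
    .agg(mean_score="mean", responses="count")
    .reset_index()
)

# National summary: the same roll-up across all districts.
national_summary = (
    scores.groupby("question_domain")["score"]
    .agg(mean_score="mean", responses="count")
    .reset_index()
)
```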

This data summary eventually makes its way into the hands of the program team and feeds into a larger improvement effort for the next programming session.

We value all the feedback we receive and try to take as much of it into consideration for the future.

This process is long but helps Springboard offer the best programs we can possibly offer!

Post contributed by Jada Gossett as part of the Impact blog series. Jada was a co-op from Drexel University, working at Springboard for 6 months as Research Coordinator.