
Introduction

Cambridge City Council sought to improve its public engagement around planning, particularly with open-ended feedback. As part of its Design Code consultation for northern Cambridge neighbourhoods—essentially guidelines shaping future development to ensure new construction aligns with character and community priorities—the Council wanted residents to validate proposed design principles. To do this well, they needed a way of efficiently processing qualitative input (free-text responses) without losing detail. They adopted Go Vocal’s AI assistant (a feature of its broader citizen engagement platform) to assist in analysing survey responses.

Challenges

1. Time-intensive qualitative analysis: Analysing free-text responses manually is slow and labour-intensive. For example, reviewing nearly 100 responses to key survey questions could take a full day.
2. Reluctance to use free-text questions: Because of the burden of analysis, there had been a tendency to avoid including many open-ended/free-text questions in past surveys. This limits the richness of feedback that can inform planning.
3. Ensuring accuracy and inclusivity: When summarising qualitative feedback, the risk is that some themes or important nuances are missed. It was important that any tool used preserves major points from residents, not just the obvious ones. The Council needed confidence that automation would not degrade quality or transparency.

Solutions

1. Go Vocal’s AI assistant: The assistant analyses free-text survey responses, summarises key trends, and extracts insights rapidly. The Council used it in the second phase of its Design Code consultation when refining principles and asking for feedback.
2. Integration into existing workflow: Rather than replacing existing processes entirely, Go Vocal complemented them: the team still exported responses but leveraged the AI assistant to do much of the summarisation and trend-identification, freeing up staff to focus on interpretation, stakeholder communication, and verifying results.
3. Validation and transparency built in: The Council compared the AI-generated summaries with the original responses to ensure nothing significant was missed, confirming that the tool’s accuracy was acceptable. This built trust among staff and stakeholders.
4. User-friendly adoption: Staff spent time exploring the tool’s features, then rolled it out with the team; its intuitive design meant the learning curve was manageable and uptake was smooth.

Results

1. 50% time savings: The Council halved the time typically spent analysing free-text responses, saving an estimated 10 hours during the analysis phase of the consultation.
2. Capacity to include more free-text questions: Because analysis was more efficient, the Council was more willing to ask free-text questions in future consultations, which enriches the depth and quality of community input.
3. Preservation of important insights: The AI assistant did not miss significant points; the Council validated that the summaries captured what respondents were saying, improving confidence in both the quality and inclusivity of engagement.
4. Improved reporting & transparency: The AI-generated summaries made report creation easier, faster, and more transparent.
5. Positive user experience & ease of adoption: The tool was user-friendly; staff became comfortable with its features quickly and were receptive to using it throughout the project.

“Given its proven effectiveness, we’re confident that it will continue to play a crucial role in our consultation processes.”

Esther Pickard
Digital and Web Product Manager

An innovation-led social enterprise

The directory is brought to you by the Digital Task Force for Planning, a not-for-profit organisation. Our ambition is to promote digital integration and advancement in Spatial Planning to tackle the grand challenges in the 21st Century.