Transforming learning through AI: Reflections from our March hackathon
A summary of the insights and lessons from our second education hackathon, which we hosted in partnership with the Department for Education (DfE) and the Department for Science, Innovation and Technology (DSIT).
In March, we hosted our second AI Education Content Store hackathon in partnership with the Department for Education (DfE) and the Department for Science, Innovation and Technology (DSIT).
We welcomed two Cabinet Ministers and their teams, as well as a host of firms, to our offices to trial the use of the AI Education Content Store.
The Content Store will act as a repository of curriculum guidance, lesson plans and anonymised pupil work, which can then be used by AI companies to train their tools to generate safe, accurate, high-quality content.
Building on our first Content Store Hackathon (in which users worked hands-on with the first iteration of the Store), we worked with not just EdTech developers, but also teachers and representatives from educational organisations.
During this hackathon, attendees worked with a further iteration of the Content Store, including:
A wider variety of educational content (including materials from teachers and pupils shared with the programme team)
An updated explorer app for users to browse and discover relevant content
An updated API for integration of Content Store data into EdTech solutions under development
We designed the hackathon to test whether having a wider range of data available led to the creation of better AI tools for education settings. To do this, teams developed prototype apps which used the Content Store, focusing on assessment and feedback use cases.
Building with educators, not just for them
Similar to the January hackathon, EdTech developers (including winners of the DfE’s AI Innovation Fund) were placed into teams with AI and education experts from Faculty, the ImpactEd group and AI in Education. We assigned to each team a teacher (or a multi-academy trust (MAT) representative) who could provide a unique pedagogical perspective.
Each team was then given a problem statement to address such as:
Generating practice SAT questions to be used as part of lesson plenaries.
Evaluating GCSE English Language written work in bulk and providing a class-wide view of attainment trends.
Providing targeted feedback on primary school work, surfacing any key guidance used to inform the feedback’s creation.
Before building anything, attendees discussed the potential users of their prototypes and explored the Content Store to identify relevant and useful content. Teams then examined how generative AI models currently tackle similar challenges, establishing a baseline for a ‘pre-Content Store’ world.
After developing their solutions, teams tested their prototypes by gathering feedback from teachers in other groups. They then delivered presentations showcasing their demos, sharing the challenges they faced and the key lessons they learned.
Findings: notable successes
Applying lessons learned from the previous hackathon, we included teachers and educational experts within the hackathon teams, and this proved to be a transformative addition.
Their expertise offered invaluable insights, enabling teams to access information about the school system and incorporate pedagogical perspectives into their tool designs. Teachers played a crucial role during the ‘Problem Review/Discovery’ phase and offered additional guidance during the preparation of final presentations.
Additionally, the new ‘User Testing’ session was well-received, with teachers rotating between groups as testers. This approach provided participants with diverse perspectives while maintaining an engaging and dynamic atmosphere throughout the event.
The clearer definition of each session’s purpose and the key questions to be answered was also noted as a marked improvement. This clarity enabled attendees to approach prototype development with greater focus and direction.
Finally, technical teams reported a smooth experience with the API. They encountered virtually no bugs, and noted significant improvements in the documentation. In particular, the translation of the formative assessment-related problem statement into tag filters was reported to be straightforward and efficient.
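To make the tag-filter workflow concrete, here is a minimal sketch of how a team might query the Store for content relevant to a problem statement. The base URL, endpoint path, parameter names and tag values are all illustrative assumptions, not the Content Store’s documented API.

```python
import requests

# Illustrative sketch only: the base URL, endpoint and parameter names
# below are assumptions, not the Content Store's documented API.
BASE_URL = "https://content-store.example.education.gov.uk/api/v1"

def search_by_tags(tags, key_stage=None):
    """Return Content Store items matching all of the given tags."""
    params = {"tags": ",".join(tags)}
    if key_stage:
        params["key_stage"] = key_stage  # hypothetical filter
    response = requests.get(f"{BASE_URL}/content", params=params, timeout=30)
    response.raise_for_status()
    return response.json().get("results", [])

# e.g. narrowing a formative assessment problem statement to tagged content
for item in search_by_tags(["formative-assessment", "feedback"], key_stage="KS2"):
    print(item["title"], item["tags"])
```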
Findings: lessons to take forward
While the hackathon was marked by numerous successes, it also highlighted some key areas for improvement to help refine and elevate future events.
While participants were encouraged, through reminders and resources, to compare outputs from base LLMs with those of their Content Store-backed solutions, the focus often shifted towards developing prototypes rather than conducting thorough comparisons. Incorporating the Content Store versus non-Content Store comparison directly into the problem statements from the outset could help ensure it remains a focal point throughout the review process.
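For illustration, here is a minimal sketch of what such a comparison might look like, assuming an OpenAI-compatible client; the model name and the placeholder context string are assumptions, and a real prototype would retrieve the context via the Store’s API.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set; model choice is illustrative

def answer(prompt: str, context: str = "") -> str:
    """Generate a response, optionally grounded in Content Store material."""
    messages = []
    if context:
        messages.append({
            "role": "system",
            "content": f"Base your response on this curriculum guidance:\n{context}",
        })
    messages.append({"role": "user", "content": prompt})
    reply = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    return reply.choices[0].message.content

prompt = "Write two practice SATs arithmetic questions for a Year 6 plenary."
baseline = answer(prompt)  # the 'pre-Content Store' world
# In a real prototype, the context would be retrieved from the Store:
grounded = answer(prompt, context="[extracts retrieved from the Content Store]")
print("Baseline:\n", baseline, "\n\nContent Store-backed:\n", grounded)
```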
Additionally, several participants had limited exposure to the updated Content Store prior to the event, which made it less intuitive for them. Although the search functionality had improved since the last hackathon, some users still found it difficult to locate relevant data, because the search currently covers only file names and not the content within the files. We need to engage with users to improve our understanding of how they would prefer to search for data. Providing pre-hackathon access to the Store, with a few days for participants to familiarise themselves, could significantly improve the user experience.
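The gap between filename-only search and full-content search is easy to demonstrate. Below is a minimal sketch, assuming a local folder of plain-text materials; it is not how the Store itself is implemented.

```python
from pathlib import Path

def filename_search(folder: str, term: str) -> list[Path]:
    """Match the search term against file names only."""
    return [p for p in Path(folder).rglob("*.txt") if term.lower() in p.name.lower()]

def content_search(folder: str, term: str) -> list[Path]:
    """Match the search term against the text inside each file."""
    return [
        p for p in Path(folder).rglob("*.txt")
        if term.lower() in p.read_text(encoding="utf-8", errors="ignore").lower()
    ]

# A lesson plan saved as "y6_maths_plenary.txt" that discusses fractions is
# found by content_search("materials", "fractions"), but filename_search
# misses it because "fractions" never appears in the file name.
```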
The Store’s users also requested a more visual representation of the content organisation within the Store, such as a mindmap-style view, to enhance navigation and accessibility.
While the presence of teachers and educational experts during the hackathon proved instrumental, it became evident that developers still need a deeper understanding of the education system. To help bridge this knowledge gap, incorporating a wiki-style guide within the Store could be an effective solution, directing users to relevant content for specific use cases while providing comprehensive background information.
What’s next?
As we constantly refine the Store and the ways in which users can access it, we’ll also continue to do four things:
Build on the existing collection of educational content
Progress development in the open
Engage further with providers of educational content, including teachers and MAT representatives
Continue to engage with EdTech developers who will use the Store in the future
Our goal is to ensure the AI Education Content Store evolves as a valuable, collaborative resource – shaped by the needs of educators, developers, and content providers.