AI can improve the social research that informs policy development, opening up new data sources at a larger scale and enabling new types of analysis.

Artificial intelligence (AI) can help researchers break through age-old barriers to conducting research, such as low response rates and the time-consuming nature of data processing. AI offers a refined toolbox not only for gathering data but also for interpreting it, at a volume, precision and depth not previously possible. From using natural language processing (NLP) to analyse social media sentiment to applying machine learning (ML) to forecast social trends, AI is opening new doors for social analytics.

This blog will help demystify how AI can improve the quality of social research through data collection and analysis.

Capturing data trails

Think about your journey to work. You use a travel app to check for delays or road closures – this search is logged, anonymised and aggregated. You stop for a coffee – information about your purchase is logged, anonymised and aggregated. You check social media and make a post about your weekend plans – all this information is public and can be shared and collected. You get to your desk and scroll through a news website – with your clicks, preferences and saved items all logged and stored. 

As we go through our daily lives, we leave fragments and shadows of information about ourselves. It’s neither ethical, necessary, practical, nor technically feasible to combine these data points at an individual level. However, when aggregated and processed to preserve our privacy, these fragments of ourselves and our lives could inform any manner of public policy decisions, from where to build infrastructure to healthcare interventions and public service design and delivery.

This becomes even more valuable at a time when conducting traditional forms of research into our lives is becoming harder. Response rates for qualitative and quantitative surveys are dropping – particularly post-pandemic – and it’s harder to find participants (when was the last time you picked up the phone to an unknown number?). An increasingly fractured and polarised world makes reaching marginalised or transient populations even more difficult – resulting in incomplete or biased research.

Challenges addressed through responsible AI

But processing the vast number of fragments of information we leave about ourselves is hugely challenging: the datasets are potentially enormous and difficult to process and analyse. Data science techniques and ML algorithms can help – as long as they are used safely and responsibly.

Where no one person could identify trends across thousands of journeys, clustering algorithms can group commuter travel into distinct journey types.
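As a sketch of how this might look in practice, the snippet below clusters synthetic journeys with scikit-learn’s KMeans. The features (departure hour, duration, distance), the synthetic data and the choice of three clusters are illustrative assumptions, not a prescribed pipeline.

```python
# A minimal clustering sketch on synthetic journey data; feature names,
# distributions and k=3 are illustrative assumptions.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)

# Synthetic aggregated journeys: [departure_hour, duration_min, distance_km]
journeys = np.vstack([
    rng.normal([8.0, 45, 20], [0.5, 10, 5], size=(500, 3)),   # morning commutes
    rng.normal([8.5, 15, 3],  [0.5, 5, 1],  size=(500, 3)),   # short local trips
    rng.normal([17.5, 40, 20], [0.5, 10, 5], size=(500, 3)),  # evening returns
])

# Scale features so no single unit dominates the distance metric.
scaler = StandardScaler()
scaled = scaler.fit_transform(journeys)

# Group journeys into k clusters; each centre describes one journey "type".
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(scaled)
for i, centre in enumerate(scaler.inverse_transform(kmeans.cluster_centers_)):
    print(f"Type {i}: departs ~{centre[0]:.1f}h, ~{centre[1]:.0f} min, ~{centre[2]:.0f} km")
```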

No one has the time, energy or headspace to trawl multiple social media platforms to identify narratives or track how messages around particular events spread. But once the collected data has been pseudonymised (processed so that personal data can no longer be attributed to a specific person), a variety of NLP models and network analysis techniques can identify and map multiple, overlapping narratives and how they spread online.
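As a minimal sketch of the pseudonymisation step under stated assumptions, the snippet below replaces author handles with keyed hashes before any analysis; the field names, example posts and key handling are illustrative, not a prescribed scheme.

```python
# A minimal pseudonymisation sketch: handles become keyed hashes so records
# can still be linked within the study without storing the real identity.
import hashlib
import hmac

# Illustrative assumption: the key is held separately from the dataset, so
# hashes cannot be reproduced from known handles (unlike a plain hash).
PSEUDONYM_KEY = b"stored-separately-from-the-dataset"

def pseudonymise(handle: str) -> str:
    return hmac.new(PSEUDONYM_KEY, handle.encode(), hashlib.sha256).hexdigest()[:16]

posts = [
    {"author": "@alice_example", "text": "Great turnout at the town hall tonight"},
    {"author": "@bob_example", "text": "Roadworks on the A40 again..."},
    {"author": "@alice_example", "text": "Following up on last night's meeting"},
]

# The same author maps to the same pseudonym, so narrative spread can still
# be traced across posts.
for post in posts:
    post["author"] = pseudonymise(post["author"])
print(posts)
```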

For more sensitive datasets containing personal information, differential privacy techniques can make health and other public administration data available for research and to inform policy decisions.
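As an illustration of one such technique, the sketch below applies the Laplace mechanism to a simple count query; the query, the count and the epsilon values are illustrative assumptions, not drawn from any real dataset.

```python
# A minimal sketch of the Laplace mechanism for a differentially private
# count query; the example query and epsilon values are assumptions.
import numpy as np

rng = np.random.default_rng(0)

def dp_count(true_count: int, epsilon: float) -> float:
    # A counting query changes by at most 1 when one person's record is
    # added or removed (sensitivity = 1), so Laplace noise with scale
    # 1/epsilon gives epsilon-differential privacy.
    return true_count + rng.laplace(loc=0.0, scale=1.0 / epsilon)

# e.g. "how many patients in this area received a given intervention?"
true_count = 1284
for epsilon in (0.1, 1.0):
    print(f"epsilon={epsilon}: released count ~ {dp_count(true_count, epsilon):.0f}")
```

Smaller epsilon values add more noise and give stronger privacy; the trade-off between privacy and accuracy is a research design decision in its own right.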

Even for research collected through traditional methods, large language models (LLMs) can summarise interview transcripts, freeing up analysts’ time for the interpretive work that only an experienced researcher can provide.
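As one hedged example of what this can look like, the sketch below uses the OpenAI Python client to summarise a transcript. The model name, prompt wording and redaction expectations are assumptions rather than a recommended setup; any LLM provider could stand in here.

```python
# A minimal sketch of LLM-assisted transcript summarisation, using the
# OpenAI Python client as one illustrative option; the model choice and
# prompt are assumptions, not a prescribed configuration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def summarise_transcript(transcript: str) -> str:
    # Assumption: transcripts are pseudonymised/redacted before being sent
    # to any external service, in line with the study's data-handling rules.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system", "content": (
                "You summarise research interview transcripts. Report key "
                "themes and notable quotes; do not speculate beyond the text."
            )},
            {"role": "user", "content": transcript},
        ],
    )
    return response.choices[0].message.content

# The analyst then reviews the summary against the source transcript,
# keeping a human in the loop for the interpretive work.
```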

What this means for the future

What does this mean for traditional social research? It means that now, more than ever, we need well-trained, experienced and thoughtful researchers and analysts.

Garbage in, garbage out: if the information fed into our models is poor, the outputs will be poor. Uninformative interview transcripts from low-quality research will produce unsatisfactory LLM summaries, and NLP algorithms can’t identify narratives in inadequately sampled data.

With so much potential analysis across so many data sources, experienced and thoughtful analysts are essential. They will need to develop the right research design, identify relevant data and select the ML/data science technique best suited to answering the specific research questions.

There will always need to be a human in the loop with the skills to carefully interrogate model outputs and conduct the subsequent interpretative analysis, accounting for any biases or limitations in the model outputs.

This combination of technical and research expertise, coupled with experience analysing complex and vast data sources and an understanding of how to build safe and responsible ML/data science research programmes, is at the heart of our partnership with Verian. Verian is one of the UK’s largest social research companies, and our partnership is rooted in our deep expertise and shared values: improving public services through the safe and responsible adoption of AI within social research.

Using AI to understand the society we live in 

The fragments of data collected about us can unlock insights to shape public policy and public services. 

As traditional methodologies face challenges with accuracy, efficiency and participation, the use of AI within social research can provide richer insights at a far larger scale. The fusion of AI and social research can act as a catalyst for understanding the society we live in. 

