
Empathy Driven User Research vs. LLMs

  • user-research
  • empathy
  • ai
  • critical-engineering
Photo of Apollo 16 astronauts during a training session
The essence of empathy driven user research: humans letting their mirror neurons do what they do best.

I enjoyed reading the article Mozilla.ai did what? When silliness goes dangerous by tante, which describes how Mozilla AI conducted user research using Large Language Models (LLMs).

Why LLMs, you ask? Because it’s just another day for you and me in tech paradise, with some people still drinking the Kool-(AI)d. This little incident got me thinking about the bigger picture, and also reflecting on an alternative, which is my preferred way of doing user research.

Let’s take a step back for a moment. In recent months and years, we have been conditioned, nudged and persuaded to solve almost every problem with the help of those funny little creatures called LLMs. Who’s been persuading us, you ask? Big software companies for example, but also overvalued startups. Both have invested huge sums of their and other people’s money in building AI services based on LLMs. Now it’s time for them to find the right nails for their shiny new hammers and sell as many of them as they can.

Back to the Mozilla AI team, which also tried to persuade us (or maybe themselves) of the usefulness of LLMs. It conducted its user research on the usefulness of AI with the help of AI. You cannot get more meta than that! The team conducted a large number of interviews and tried to extract the essence out of the data with its LLMs.

After each conversation, we wrote up summary notes. In total, these notes for the 35 conversations amounted to 18,481 words (approximately 24,600 tokens), almost the length of a novella. To avoid confirmation bias and subjective interpretation, we decided to leverage language models for a more objective analysis of the data. By providing the models with the complete set of notes, we aimed to uncover patterns and trends without our pre-existing notions and biases.

»Uncovering patterns and trends without pre-existing notions and biases« sounds great, doesn’t it? Here are some of the insights and conclusions their model was able to extract:

Mozilla AI

Here is the model’s response:

  1. Early stage startups prioritize accuracy, speed, and cost when it comes to LLMs.
  2. Startups prefer staying in a single ecosystem, such as OpenAI, due to time costs associated with switching models.
  3. There is a general consensus that certain frameworks are not a good option for building agents because of their complex architecture and difficulty debugging.
  4. The main LLM use-cases are Q&A on documents, code documentation, fact checking within a document using other document sources, and summarization and extracting information.
  5. Companies have different priorities when it comes to LLM solutions - some focus on maximizing accuracy in the easiest way possible, while others emphasize reusability and using small models for specific tasks.
  6. Others prioritize cost control over purely seeking high-quality outputs when it comes to model management.
  7. The main challenges faced by companies developing LLM solutions include aligning business needs with model outcomes, creating proper test cases, infrastructure and computational power for experimentation, prompt engineering overhead, privacy and security concerns, and the need for standardized metrics and a data warehouse…

Not particularly surprising. Seeing these “superficial” results made me wonder: is this technical effort really worth it? Do you really need to conduct 35 interviews and waste a lot of energy to reach this level of insight?

If you’re not worried about the energy consumption then at least do yourself a favour and estimate the cost-effectiveness of the AI-driven approach. Because the usage of AI is becoming an ever-increasing cost factor:

Sarah Wells in Generative AI’s Energy Problem Today Is Foundational on spectrum.ieee.org

De Vries argues that developers should also think critically about what products really need AI integration. For example, de Vries’s paper estimates that it would cost Google US $100 billion in server costs alone if the search engine were to incorporate AI inference into every single one of its Web searches.
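If you want to run that cost estimate yourself, a back-of-envelope sketch is enough. The input token count (roughly 24,600) comes from Mozilla AI’s own write-up above; the per-token prices, the assumed output length and the number of experimental runs below are hypothetical placeholders, not any provider’s actual rates:

```python
# Back-of-envelope cost estimate for an LLM-driven analysis.
# Figure from the article: 18,481 words of notes ~ 24,600 input tokens.
# The prices below are HYPOTHETICAL placeholders -- check your
# provider's current price sheet before relying on any of this.

INPUT_TOKENS = 24_600          # from the article
OUTPUT_TOKENS = 1_000          # assumed length of the model's summary

PRICE_PER_1K_INPUT = 0.01      # USD per 1,000 input tokens (placeholder)
PRICE_PER_1K_OUTPUT = 0.03     # USD per 1,000 output tokens (placeholder)

def llm_cost(input_tokens: int, output_tokens: int,
             price_in: float = PRICE_PER_1K_INPUT,
             price_out: float = PRICE_PER_1K_OUTPUT) -> float:
    """Estimated cost in USD for a single analysis run."""
    return (input_tokens / 1000) * price_in + (output_tokens / 1000) * price_out

cost_per_run = llm_cost(INPUT_TOKENS, OUTPUT_TOKENS)
# Prompt iteration multiplies the bill: assume 50 experimental runs.
print(f"per run: ${cost_per_run:.2f}, 50 runs: ${50 * cost_per_run:.2f}")
```

A single run looks cheap, but nobody gets the prompt right on the first try, and every iteration re-sends the whole corpus. That multiplier, plus the price rises discussed below, is what makes the approach hard to justify.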

Me

Remember those large investments needed to build them? Somebody’s gotta pay for that. Welcome to another day for you and me in tech paradise…

Given these circumstances, I guess we can all agree that one thing is certain: the prices of those AI services will rise!

So collecting huge amounts of research data and trying to use expensive and biased LLMs to extract unbiased insights just doesn’t seem like a good use of time, energy and money to me.

I would argue that some people, while blinded by the lights of AI hype in Silicon Valley, tend to forget about the existing and very effective tools for gaining insights into complex challenges and problems. Furthermore, the idea of involving LLMs in user research is based on the assumption that this phase is all about collecting data, processing it and finding the small, breakthrough needles of insight in this huge haystack of research data. You couldn’t be more wrong!

Enough of the Frowning! What Are the Alternatives?

Instead, there is one particular tool that LLMs can never imitate: empathy. In my experience, it is the most effective tool in user research and in designing and building a product. That’s why empathy-driven qualitative user research is my personal favourite way to get to the bottom of things.

Me
The core of this methodology is to simply rely on our humble superpowers: our mirror neurons. Let them do what they do best.

Being curious, getting to know people and understanding what makes them tick. Putting yourself in their shoes and getting hands-on with them, having fun with them. Understanding where they’re facing problems, getting stuck, or encountering challenges.

So in the particular case described above I think it is possible to reach similar – hopefully better – conclusions by doing this:

  1. Let 2-3 experts conduct qualitative user interviews with fewer than 8 organizations. Experienced user research teams will spot emerging patterns and extract the main insights after the first few interviews.
  2. Use these insights to address the large part (80%) of your target group’s problems.
  3. Do not fall into the trap of extending your research until you’ve uncovered the last 20% of insights as well. You will learn much more about those 20% while finding solutions to address the 80%.

Follow this path and you’ll make the world a better place. Anything on top of that – e.g. more interviews, more experts, more AI… – will only pile up huge amounts of research data without providing additional, groundbreaking insights.

If I Could Make a Wish

I’d like empathy-based user research to be taught in schools as a basic tool for solving challenges. That way more people can be involved in the research phase of a project, not fewer. Because user research is fun, but it is also fundamental for coming up with solutions.