Knowing where to start with user research can be daunting. I’ve conducted user research myself in the past but that was in a previous role. It’s something I’d like to reengage with. Fortunately, I was prompted by yet another talk at the Agile Content Conf in February to think about it in more depth and in particular the obstacles to getting started again.

Why user research?

One obstacle to starting user research is wondering what the value of it is.

Lily Dart put it succinctly when she said “We aren’t a user of our service”. You can see Lily’s talk on user research at the Agile Content Conf for yourself, though I’m going to pick up a number of her points in this post. User research is useful for helping you and, perhaps more crucially, your colleagues understand what your users’ needs are.

Resistant colleagues might argue that they are a user of the service they oversee, that they frequently interact with it in the same way that end-users do. We’ll come back to those resistant colleagues later. Even if they’re right though, it is all too easy to overlook what it is like to be a new user, unfamiliar with the site or service in question.

As Lily says, as an experienced user, you “know things that you forget that you know”.

Frustration – working in the dark

Lily Dart may be a freelance service designer these days, offering, developing, and hosting workshops on user design and research, but she started her career as a web developer. As a developer, she felt the work she was doing was designed to satisfy managers and clients rather than end-users. She became frustrated with not knowing if the products she was working on were any good. This is something I – and, I dare say, colleagues in my team – can relate to.

So, bit by bit, she began experimenting with user research. The more she tried it and the more she read, the better she got at it. (I’ve included Lily’s reading list towards the end of this post).

Lily’s advice, when it comes to user research, is to just start doing it. You have to accept that you won’t be very good at it at the beginning – but that you can only get better through practice and experience. Luckily, using an Agile approach makes not being very good okay.

Different approaches

There’s a definite contrast between the traditional waterfall approach to projects and an Agile approach. That also affects user research.

1. Waterfall approach

The traditional waterfall approach to managing a project encourages collecting as much information as possible before one can even begin. This seems to make sense on the face of it; understand the problem fully, then get to work on the solution.

There are a couple of assumptions in this, though. One is that you can recognise the problems up front. The other is that you can capture them accurately enough either to go away and develop a product yourself or to communicate those requirements to someone else.

It’s possible to find that once you’ve spent time gathering as much information as possible and started developing a product, the requirements change. Or perhaps the development of the product itself will bring up new and different problems which weren’t recorded originally.

2. Agile approach

With an Agile or iterative process, however, we need just enough information to start. This way, as Lily says, we can “try something out” or “try multiple things out and see which one works best”. The implication for user research is that we’re not looking for a holistic view of everything a user might need but just the context we need to answer the question.

What I took away from Lily’s talk is that the key to integrating user research into an Agile approach is to keep the project focused on the question you’re trying to answer – and to keep asking it – rather than fix on one particular answer up front.

Too much detail is hard to pass on, and easily lost or forgotten. Asking smaller questions also means the work fits into Agile sprints (or simply shorter timeframes).

Scope creep

Lily’s thoughts on this resonated with my own experience of using a waterfall style approach in a project several years ago.

My own experience

We produced “scoping documents” upfront for several tools. The documents determined what the tools we would build would do and what the expected features were. This sounds like a sensible approach as it sets limits on what is deemed achievable within the project; it sets expectations which stop the project from growing and trying to solve all sorts of other problems – what is known as “scope creep”.

The problem with the approach was that it arrived at solutions too early. By determining the solutions we were going to develop and specifying them upfront, we lost sight of the problems we were trying to solve.

Limiting ambitions

Although user research informed the initial understanding of the problems – I had helped put the brief together, based on my interactions with colleagues and with students – it was arguably aimed too firmly at identifying exactly what we needed to invest over a year of our time in.

Testing at the end showed success with some of the work we had done but some of our work was found not to solve the problems we thought we’d identified. Instead, it raised new questions and problems.

After the project, we concluded that we had been too ambitious. That may have been true to some extent. We gathered a lot of information about problems and tied ourselves to developing very particular solutions to tackle them all. But we neglected to recognise upfront that we might encounter new ways of seeing those original problems along the way.

Avoid assumptions

If we start with the premise that familiarity with a product or service does not help us see it from a user’s perspective but can in fact be a hindrance, the benefits of conducting user research and revisiting requirements through an Agile approach should be clear. Whatever tools we use to get our users’ views – interviews, workshops, focus groups – the key, Lily says, is to have a broad conversation – don’t start with the conclusion. Begin with open questions: “What was your best experience of this thing?”; “What was your worst experience of this thing?”.

Don’t assume that the user has specific problems and that you know what they are. Your assumptions might be right; you might be aware that a need existed, but allowing the user to provide context will reveal the details and nuances needed to understand the problem more fully.

Broad vs. focused?

If you’ve seen Lily’s talk and you’re like me, you may be wondering if there is a contradiction at play. After I saw it live, I rewatched the video a couple of times trying to work out how to get started.

The apparent contradiction comes about because Lily starts by saying we need to have a broad conversation and not ask specific questions but in the next segment she says we need to focus on the question we’re trying to answer. There is some subtle reasoning at work here though that I think can be easy to miss. It’s because there are two types of question:

  1. The research question we ask ourselves.
  2. The broad questions we ask our users.

When Lily talks about having that broad conversation, she says quite quickly that we should make sure “we’re asking around the research question we’re trying to solve”. So we should focus on a specific problem or topic and have that as our project focus internally, but when talking to end-users we need to keep the questions broad. Lily gives the example question “what are our users confused about around this topic [say, holiday law]?”, which could be taken as the internal question. She then gives examples of the questions that might be posed to users, based on this:

  • “What is your understanding of holiday law?”
  • “What are the problems you face around holidays?”

As she states earlier in the talk, unless we’re starting from a completely blank slate, we should not be aiming for a holistic view and trying to capture all the users’ problems. But we should keep the questions we ask our users broad enough to avoid biasing their answers towards the internal question we do want to address.

Bias and colleagues

Potential bias is a problem with any kind of research and can be difficult to mitigate, even if it’s easy to acknowledge. While maintaining the focus on the problem of interest, Lily suggests involving more perspectives in the research.

There are two benefits to this:

  1. It reduces the potential for personal bias; do the research on your own and you inevitably bring your own bias to it.
  2. It gets colleagues on board with the approach as a whole; bringing colleagues in makes the process transparent and its usefulness obvious.

When doing user research, Lily sits with the team working on the product and does the analysis of findings with them, collaboratively. Her view is that simply providing colleagues with the data will lead to different interpretations, and simply providing your own interpretation can lead to the research being ignored altogether. Doing the analysis as a group is an important way to engage everyone with the research and to establish an emotional connection to it.

Pushback and resistance

In talking about colleagues and emotional connection, we should recognise that there may be emotional resistance to conducting such research. I’ve encountered this myself. Not everyone starts with a shared understanding of Agile or user research. It can be frightening for colleagues who don’t know where they fit into an unfamiliar framework, and this can lead to pushback.

By involving users and soliciting their opinion, your colleagues who are the subject-matter experts may have fears of losing status, their own clearly defined role as expert, or even their job. We don’t just need to be empathetic to users, but also to these colleagues.

When faced with a reluctance to engage, it’s important to take the time to understand where it’s coming from. Lily gives the example of a colleague who makes the common complaint that the user research is pointless because the sample size is too small. Rather than simply producing a study that might show them why they’re wrong, the empathetic response is to ask the colleague what they’re concerned the risk might be. Find out what their concerns are, Lily says, and it will help the project in lots of ways. You can even lead them towards identifying ways to mitigate the risks they identify.

You can even interpret the pushback as an offer of help.

Show the thing

Some of this inevitably comes back to the value of conducting the research. It comes down to “showing the thing”, to use Lily’s phrase; prove the research’s value by doing it and making something with it. Even a small amount of user research can help with this.

Lily described a central UK government team who were handed a project from an MP in the Cabinet Office but didn’t have any described user needs. The team were keen to do a good job but unsure of where to start, so they approached Lily at her agency. They weren’t familiar with Agile, and their subject-matter experts were reluctant to engage, so Lily recommended a user-focused approach. They started slowly, did some prototyping, got some feedback, and began a conversation. In the end, the team were united by watching someone try out something informed by the research. After that, people started to request more research and eventually asked for training on conducting user research themselves. The victory for Lily seemed to come when the subject-matter expert took responsibility for a feedback form embedded on the website in question.

We are starting slowly in this direction in our team. One of our Marketing Business Partners and our Market Insight & Research Manager recently conducted some user research into prospective postgraduate students. Among their findings was that prospective postgraduate students look for research opportunities on Department sites where their interests lie, because they’re looking for potential supervisors. So instead of creating a detailed set of central webpages on postgraduate research (or redesigning the ones that we have), we convinced various people that we should create a structured, templated page on each Department site. It’s been a small step but it’s led us towards convincing our colleagues in other areas that a user-focused approach works.


The main points I took away from Lily’s talk are:

  • Just do it. Practice is the only way to get good.
  • Ask yourself a specific question. Use the research question you are trying to resolve as the focus for the research.
  • Ask your users a broad question around that question. Try not to lead users to your point of view; let them state the problems in their own terms.
  • Involve as many colleagues as you can manage. Get them on board with the process, empathise with their concerns, and in doing so lessen your own bias.
  • Show the thing. Get to the point where you can show the value of what you’ve done through a prototype or tangible example of work informed by your research.

Further reading

Lily recommended some good books and a blog worth looking at on user research in an Agile context.

  • Just Enough Research by Erika Hall
  • Lean Analytics by Alistair Croll and Benjamin Yoskovitz
  • Rocket Surgery Made Easy by Steve Krug
  • User Research blog


This post concludes my notes on the talks from the 2016 Agile Content Conf I attended in February. I’ve enjoyed writing them – they’ve given me a chance to revisit my notes and reflect on what was said, applying it to my own experience or situation where possible.

I have more notes on the exercises I took part in (which are not available via video) and on the experience of the conference as a whole which I hope to turn into posts soon.