Can UX Researchers Remain Viable Without Integrating AI Into Their Practice?

UXR automation isn't new, but it may start clogging the pipeline

· ux, uxresearch, AI, tools

What’s a UX researcher to do when faced with an environment in which many of their day-to-day tasks seem, at least on the surface, susceptible to automation?

In some ways, this isn’t really a question about AI. It’s more a question about our daily workload, and whether it fundamentally contributes to a deeper understanding of human-computer interaction and UX design.

For example, when we are looking through session transcripts and trying to pull out common threads across multiple participants, the semantic analysis may seem easy to speed along with some automated help: finding common phrases, analyzing completion times, and tracing how participants wend their way through structured tasks whose recordings are already efficiently marked up. A “fully instrumented” usability test, in particular, already lends itself to automated analysis.
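To make that concrete, here’s a minimal sketch of the “common phrases” step in Python. It assumes plain-text transcripts, one file per participant, in a hypothetical transcripts/ directory; the n-gram size and the two-participant threshold are illustrative choices, not any particular tool’s behavior:

```python
# Minimal sketch: surface phrases that recur across participants' transcripts.
# Assumes one plain-text file per participant in ./transcripts/ (hypothetical).
from collections import Counter
from pathlib import Path
import re

def ngrams(text: str, n: int = 3):
    """Yield lowercase word n-grams from a transcript."""
    words = re.findall(r"[a-z']+", text.lower())
    for i in range(len(words) - n + 1):
        yield " ".join(words[i : i + n])

def common_phrases(transcript_dir: str, n: int = 3, min_participants: int = 2):
    """Return phrases used by at least `min_participants` different people."""
    phrase_participants = Counter()
    for path in Path(transcript_dir).glob("*.txt"):
        # Count each phrase once per participant, not once per utterance.
        phrase_participants.update(set(ngrams(path.read_text(), n)))
    return [
        (phrase, count)
        for phrase, count in phrase_participants.most_common()
        if count >= min_participants
    ]

if __name__ == "__main__":
    for phrase, count in common_phrases("transcripts")[:20]:
        print(f"{count} participants: {phrase}")
```

Even a toy like this surfaces repeated wording across sessions quickly - and it also shows why the output still needs a researcher’s eye, since frequency alone says nothing about meaning or context.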

For a UX researcher to remain viable in this market, she’s going to have to make use of whatever tools make her job more efficient. But this doesn’t necessarily mean diving headfirst into using a large language model to analyze her research sessions.

In fact, that could be quite counterproductive. Many of the current models show strong biases, lack contextual awareness, and are prone to producing summaries of submitted text that are semantically logical but contextually incoherent.

As a result, you may end up spending more time “debugging” any AI-generated analysis than you would spend just performing it on your own.

Having said that, there are areas where automation tools can make a team of one far more efficient, especially for individual, independent practitioners. Participant recruitment and compensation, screening, scheduling, panel management, transcription, and generation of “highlight reels” are all areas of UX research for which automation tools exist and perform quite reliably. AI might make them more efficient still, but that has to be balanced against the quality check a researcher will have to perform on the results.

Knowing the difference between a tool that will enhance and expand your capacity and one that will inhibit it isn’t easy. But the UX researchers who master that discernment will certainly go further than those who don’t.

The mundane tasks of UX Research don't just produce artifacts and reports - they produce better Researchers.

The biggest challenge, in my opinion, will be ensuring that junior-level UX Researchers are still trained appropriately. All that transcription, note-taking, highlight-reel making... it doesn't just produce artifacts; it produces researchers who have developed the expertise to discern signal from noise.

Absent that practice of combing through reams of data to extract meaning, UX Researchers will have to find another way to build up expertise in analyzing and observing human behavior. Some of this will come in the form of the aforementioned QA of what AI summarizers produce - but that can only spot obvious false signals. A more robust, error-proof analysis requires experience and the judgement that comes with it.

And judgement - the ability to make the correct decision quickly - is more important than raw speed in most UX research. Tools - AI-driven or otherwise - that produce even a minuscule number of false findings are worse than useless; they're anti-productive. And the false confidence of LLMs (which often double down when challenged on their false assertions) means that error-ridden UX research could easily flood the market... and it might be years or decades before the harms can be undone.
