Synthetic research in UX/CX
If, like me, you work in UX/CX and spend your time designing interfaces and interactions, you will have started exploring the space of synthetic research and, more specifically, Synthetic Personas (SPs).
This was going to be a short article about SPs, but I realised it’s probably easier to break it into three parts (I seem to have started doing series a bit more these days): 1) what are SPs and how do you make them? 2) how and where do you use SPs? and 3) pros and cons. We’ll see how we get on.
But first: what are SPs, and how do you make them?
Synthetic Personas are small, stand-alone iterations of LLMs: custom iterations built to pursue a specific direction of interaction. You direct/prompt the custom iteration to follow specific rules and respond in specific ways, and you provide enough additional information about the required specificity of the response that the custom iteration can react meaningfully. I know, it’s a brain twister just to explain this, let alone to start making them.
So let’s take an example using something widely available: if you use ChatGPT, you have the option to create a “custom GPT” or to use openly shared “custom GPTs” created by other users. A custom GPT is a “smaller”, directed version of the LLM, prompted to focus its responses on a specific set of data and to react in a certain way. GPTs range from ones people use for dating or self-help to (more pertinent for this article) GPTs that use existing frameworks for heuristic analysis to give you design guidance (like this Accessibility GPT or this UX Audit GPT). Custom GPTs are prompted to focus on specific knowledge, shape their answers using specific lingo and behave according to different parameters.
This is the most basic form of “specialised” custom LLM, and a step removed from a real Synthetic Persona.
A Synthetic Persona needs to combine two things: the knowledge of a custom GPT and the perceived behaviours, needs, motivations and blockers of a user persona as defined in “non-synthetic research”. What do I mean? Well, you can, say, train a custom GPT to give you usability inputs, BUT it will do so from the perspective of a UX specialist (please try it with the UX Audit GPT I linked above). What you need is for it to understand that it must focus its responses on design/UX inputs AND provide those outputs from the perspective of a [insert relevant persona here]. To build Synthetic Personas you need to train the LLM both to know what the UX rules are AND to refer only to those which would make sense to a specific segment and/or persona.
So, how does a custom GPT become a Synthetic Persona? Well, first you need AN ACTUAL PERSONA. I cannot stress enough how unlikely it is that you will be able to make an SP with only basic demographic and psychographic data.

One of the critical misunderstandings in UX/CX is the idea that you can derive an understanding of digital interactions from customer segmentations. This may be possible when your product is a tech/digital product, but it is quite unlikely when your product is, say, a food type. Why? Because customer segmentations always have a “bias”, and when I say “bias” I don’t mean they’re biased, but that segments need to have a “direction” in order to be useful.

Let me give you an example: if you run a customer segmentation exercise for a food product, you will need to focus your segments on consumption or on purchase (or something else, but let’s stick with these two for the time being). Consumption segments focus on how people choose their food and how they consume it; purchase segments focus on how people shop, how they spend money and where they go for what. A consumption segment speaks to what drives people to make choices for THE product; a purchase segment, to what drives people when they transact for the product. So, as an example, Jane is a “dairy conscious optimiser”, meaning she prefers organic dairy and she will shift her basket choices based on the best available price. These two things speak to what Jane eats and how she spends money on food. Those two things are not the same :).
Now back to synthetic personas for CX/UX. Typically, when we create customer segments, we won’t focus on how people interact with digital channels because… well, they are seen as channels, not intrinsic to the product itself (hence my example above, where I say customer segments may be relevant if your product is tech). This is why digital design usually works with Personas, which bring channel-related behaviours into the conversation. Taking the example of Jane, the conscious optimiser: she can be a “digitally-savvy conscious optimiser”, which adds the dimension “she prefers to purchase online”, or an “easily overwhelmed, digitally-savvy conscious optimiser”, which means she struggles with information overload and cannot choose when confronted with too many options (a good UX designer will then tell you that filters and labels are essential for Jane to navigate interfaces well). Are you still with me?
If so, think about SPs for CX/UX like this: they are customer iterations whose knowledge includes UX cues plus information about a specific customer segment’s digital behaviours and needs.
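To make that combination concrete, here is a minimal sketch of how the two ingredients (UX knowledge plus a segment’s digital behaviours) could be assembled into the instructions for a custom iteration. The persona fields, the prompt wording and the `build_system_prompt` helper are all illustrative assumptions on my part, not a real schema or a feature of any particular platform.

```python
# Hypothetical sketch: assembling a Synthetic Persona system prompt
# from (a) UX knowledge sources and (b) a segment's observed behaviours.
# All field names and wording are invented for illustration.

PERSONA = {
    "name": "Jane",
    "segment": "easily overwhelmed, digitally-savvy conscious optimiser",
    "behaviours": [
        "prefers to purchase online",
        "struggles with information overload",
        "abandons flows when confronted with too many options",
    ],
    "ux_knowledge": [
        "Nielsen's 10 usability heuristics",
        "WCAG accessibility guidelines",
    ],
}

def build_system_prompt(persona: dict) -> str:
    """Combine UX knowledge with persona behaviours into one
    set of instructions for a custom LLM iteration."""
    behaviours = "\n".join(f"- {b}" for b in persona["behaviours"])
    knowledge = ", ".join(persona["ux_knowledge"])
    return (
        f"You are {persona['name']}, an {persona['segment']}.\n"
        f"You are aware of UX best practice ({knowledge}), but you must "
        f"only raise issues this persona would plausibly notice.\n"
        f"Observed behaviours:\n{behaviours}\n"
        f"Always answer in the first person, from the persona's perspective."
    )

print(build_system_prompt(PERSONA))
```

The key design point is in the middle instruction: the iteration knows the UX rules, but is told to surface only those a member of this segment would actually feel, which is exactly what separates an SP from a plain UX Audit GPT.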
Okay, so now that we understand the basics of what needs to go into an SP, can you make one? In theory, yes. Most LLMs out there offer custom iteration capability, and there is a vast amount of information on UX/CX best practice. The larger platforms already host custom UX-audit iterations, and it takes next to zero time to ask any of the larger engines to upskill themselves on the latest in UX by reading Nielsen Norman articles. But you do need actual behavioural data from an actual persona, and this is where SPs may fall down, because creating personas is not easy. So, if you have access to pre-existing information about how customers of product/service x interact with product/service x’s interfaces, you may be able to build an SP. BUT…
I tend to agree with the good folk at Nielsen Norman that it’s hard to assume you will be able to mimic how a human will behave when confronted with an interaction or a screen, and it’s even harder if you want certainty. Traditional personas relied on a lot of creative assumptions anyway. LLMs are not reasoning machines, so when we think they genuinely react to something, I’d say we’re making a leap of faith, because they don’t. What they do is draw on a vast amount of pre-existing reactions to produce the most predictable reaction to whatever they’re being prompted with. The good thing about using SPs (more on that later) is that you can ask them to learn, and they do so very quickly; so, in theory, you can pair an actual user with an SP and scale outputs very quickly.
What’s the foolproof way to create an SP without actual behavioural data? You could use web analytics to help it identify what is not working now, and thus create a benchmark for what COULD meet expectations. Or use past results from user testing. Or you could, as above, pair the SP with a human user for a period of time. But all of that is troubleshooting; we’re still in fixing territory, so the question will arise: “what about creative user inputs?”
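As a tiny illustration of the web-analytics route, here is a hedged sketch of turning funnel counts into a “what’s not working now” note that an SP could be primed with. The funnel steps, the numbers and the `worst_dropoff` helper are invented for illustration; real analytics exports would of course be richer and messier.

```python
# Hypothetical sketch: deriving a "current pain point" from simple
# funnel counts, to seed an SP's context. All data is invented.

def worst_dropoff(funnel: list) -> tuple:
    """Given [(step_name, visitor_count), ...] in funnel order,
    return the step with the largest relative drop-off and its rate."""
    worst_step, worst_rate = "", 0.0
    for (_, prev_n), (name, n) in zip(funnel, funnel[1:]):
        rate = 1 - n / prev_n  # share of visitors lost at this step
        if rate > worst_rate:
            worst_step, worst_rate = name, rate
    return worst_step, worst_rate

funnel = [
    ("landing", 1000),
    ("product page", 620),
    ("basket", 410),
    ("checkout", 130),
]

step, rate = worst_dropoff(funnel)
# This string could then go into the SP's instructions as a benchmark
# of what is not working now:
note = f"Users currently abandon most at '{step}' ({rate:.0%} drop-off)."
print(note)
```

Feeding the SP a benchmark like this doesn’t give it creative user inputs, but it does anchor its reactions to observed behaviour rather than to generic best practice.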
I think we’re still figuring SPs out, but what’s exciting about the whole thing is how easily you can TRY. Try to make them, try to use them. Document your learnings. It helps you, and it helps the UX/CX community to, hopefully, make synthetic research better.
More on how to use them next week.
