Synthetic responses are survey answers generated by artificial intelligence to mimic those of human respondents. In B2B research, these responses can be created by first defining a persona based on characteristics such as job title, industry, age, gender, location, management level, company size and a long list of other firmographic attributes. Generative AI is then prompted with these personas to produce survey responses. The application of synthetic responses in consumer research has been met with a mix of intrigue and skepticism, becoming a notable topic of industry discussion in recent months. By examining the nuances of two approaches to synthetic user and response creation, we hope to shed light on the current state of their viability in B2B research.

A Three-Way Approach

The team here at Emporia Research conducted a comparative analysis of three distinct groups of survey responses, each defined by separate sourcing criteria. To maintain consistency and allow for a purely quantitative analysis, the survey consisted only of closed-ended questions. It gathered both demographic information about the audience and more personal sentiments, with questions on career satisfaction, individual challenges and decision-making preferences. The study targeted an audience of IT decision-makers: respondents employed full- or part-time who had at least some input into which vendors or contractors to work with in the IT/Tech industry.

The three sets of survey responses were sourced from the following groups of users:

  1. Real respondents: this set consisted of LinkedIn-verified respondents who were recruited using Emporia Research’s in-house sourcing engine
  2. Synthetic users based on AI-generated personas: this set consisted of synthetic responses created by having Generative AI build a diverse set of IT decision-maker personas
  3. Synthetic users based on targeted LinkedIn profile data: this set consisted of synthetic responses formulated by prompting Generative AI with the LinkedIn information from the profiles of the real respondents (a rough sketch of this kind of prompting follows the list)
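To make the two synthetic approaches more concrete, here is a minimal sketch of how a persona can be turned into a closed-ended survey answer with a general-purpose LLM API. The persona fields, prompt wording, model choice and use of the OpenAI client are illustrative assumptions for this sketch; they are not Emporia Research’s actual pipeline.

```python
# Illustrative sketch only: persona fields, prompt wording and model choice are
# assumptions, not Emporia Research's production pipeline.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Approach 2 would generate a persona like this with an LLM; approach 3 would
# instead populate these fields from a real respondent's LinkedIn profile data.
persona = {
    "job_title": "Director of IT",
    "industry": "Financial services",
    "company_size": "500-1,000 employees",
    "location": "Chicago, IL",
    "decision_role": "Has final say on IT vendor and contractor selection",
}

question = "How satisfied are you in your current role?"
options = ["Strongly unsatisfied", "Somewhat unsatisfied", "Neutral",
           "Somewhat satisfied", "Strongly satisfied"]

prompt = (
    "You are answering a B2B market research survey as the following persona:\n"
    + "\n".join(f"- {k.replace('_', ' ').title()}: {v}" for k, v in persona.items())
    + f"\n\nQuestion: {question}\nChoose exactly one option: {', '.join(options)}.\n"
    "Reply with the option text only."
)

response = client.chat.completions.create(
    model="gpt-4o",      # assumed model; any chat-capable model would do
    temperature=1.0,     # higher temperature to encourage some answer variance
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content.strip())
```

In practice this kind of prompt would be run once per persona and per question, with the resulting answers collected alongside the real respondents’ data for comparison.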

By analyzing the three sets of responses, we aimed to gain insight into the concept of "synthetic users" and their potential and viability in B2B market research. The findings of this study contribute to a better understanding of the potential applications and reliability of each methodology in data collection. We were particularly interested in comparing the purely synthetic responses with the LinkedIn profile-informed synthetic responses.

The following comparisons prove especially revealing and highlight where synthetic responses fall short (today). The Emporia Research team found similar themes throughout the rest of the survey responses.

  - When asked about satisfaction in their current role, 47% of real respondents answered “Somewhat satisfied”, whereas 69% of the persona-based synthetic users answered “Strongly satisfied”.
  - When asked about satisfaction with their salary, real respondents’ answers ranged anywhere from “Somewhat unsatisfied” to “Strongly satisfied”, whereas 98% of the persona-based synthetic users answered “Somewhat satisfied”.
  - On satisfaction with work-life balance, only 45% of real respondents chose “Somewhat satisfied”, whereas 100% of the persona-based synthetic users chose that same answer.

For questions relating to satisfaction levels, both sets of synthetic users showed a stronger positive bias than real respondents. Synthetic users also lack the nuanced personalization found in responses from real participants, as shown by the contrast between the varied viewpoints of real respondents and the “herd mentality” of the synthetic users.

On one numeric-scale question, real respondents’ answers ranged anywhere from 3 to 10, whereas the synthetic users’ answers ranged only from 6 to 10.
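This lack of variance can be quantified. As a rough illustration (and only an illustration: the inputs below are placeholders, not the study’s actual response data or analysis code), one might summarize how concentrated each group’s answers are on a single option:

```python
# Illustrative sketch: one way to quantify the spread ("herd mentality") of
# closed-ended answers per group. The example inputs are placeholders, not the
# study's actual response data.
from collections import Counter
from math import log2

def answer_spread(answers: list[str]) -> dict:
    """Summarize how concentrated a group's answers are on a single option."""
    counts = Counter(answers)
    total = len(answers)
    shares = [c / total for c in counts.values()]
    return {
        "n": total,
        "distinct_options": len(counts),
        "top_option_share": max(shares),                    # 1.0 = everyone picked the same answer
        "entropy_bits": -sum(p * log2(p) for p in shares),  # 0.0 = no variance at all
    }

# Placeholder inputs purely to show the shape of the comparison.
real = ["Somewhat satisfied", "Strongly satisfied", "Neutral", "Somewhat unsatisfied"]
synthetic = ["Somewhat satisfied", "Somewhat satisfied", "Somewhat satisfied", "Strongly satisfied"]

print("real     ", answer_spread(real))
print("synthetic", answer_spread(synthetic))
```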

What can we conclude, and what other synthetic approaches might there be in data collection?

This study aimed to assess the viability of synthetic users as a possible next step in the evolution of research methodologies. The findings revealed that responses created using Generative AI exhibited a positive bias, particularly on satisfaction questions, and suffered from a clear lack of variance. In short, the synthetic responses fell short of the quality of insights attainable through Emporia's sourced data. Based on this preliminary study, substituting real responses with ones created by generative AI does not yet seem viable.

However, we are hopeful about the future of synthetic responses and will continue exploring their potential. And while we don’t recommend relying on synthetic responses alone (today), we believe that generative AI can enrich and expand real responses, paving the way for deeper and broader analysis of an audience.

Please join our mailing list to be the first to read further explorations into synthetic use cases in B2B market research.
