It takes a neural network just two hours to copy a human personality, say researchers at Google and Stanford.
To create the AI replicas, the researchers asked about 1,000 people to answer questions about their life values, personal experiences, and attitudes toward social issues.
Here's what you need to know:
- Neural networks can copy personality and mimic human behavior with 85% accuracy;
- The AI needs only a couple of hours of conversation to get everything out of you: psychology, knowledge, cultural background, level of intelligence;
- There are many uses for such "copying," including dark ones: social experimentation and manipulation;
The most disturbing part: you probably already have a digital copy if you interact closely with an AI and share details of your life with it.


Stanford University researchers paid 1,052 people $60 each to read the first two lines of The Great Gatsby to an app. That done, an AI that resembled a 2D sprite from an SNES-era Final Fantasy game asked participants to tell the story of their lives. The researchers took those interviews and worked them into an AI that, they say, replicates the participants' behavior with 85% accuracy.
The study, titled "Generative Agent Simulations of 1,000 People," is a joint venture between Stanford and scientists working for Google's DeepMind AI research lab. The pitch is that creating AI agents based on random people could help policymakers and businesspeople better understand the public. Why use focus groups or poll the public when you can talk to them once, spin up an LLM based on that conversation, and have their thoughts and opinions forever? Or at least as close an approximation of those thoughts and feelings as the LLM is capable of recreating.
"This work provides a foundation for new tools that can help study individual and collective behavior, " the paper summarized.
"How can, for example, a diverse set of people respond to new public health policies and announcements, react to product launches, or respond to major shocks? ' The paper continued. "When simulated individuals are combined into collectives, these simulations could help pilot interventions, develop complex theories capturing nuanced causal and contextual interactions, and expand our understanding of structures such as institutions and networks in fields such as economics, sociology, organizations, and political science. "
All of these possibilities rest on a two-hour interview fed into an LLM that then answers questions mostly the way its real-life counterpart would.
Much of the process was automated. The researchers contracted Bovitz, a market research firm, to gather participants. The goal was to get a broad sample of the U.S. population, as broad as possible when limited to 1,000 people. To complete the survey, users signed up for an account in a custom-designed interface, made a 2D sprite avatar, and began talking to an AI interviewer.
The interview questions and style are a modified version of those used by the American Voices Project, a joint Stanford-Princeton effort that interviewed people across the country.
Each interview began with participants reading the first two lines of The Great Gatsby ("In my younger and more vulnerable years my father gave me some advice that I've been turning over in my mind ever since. 'Whenever you feel like criticizing any one,' he told me, 'just remember that all the people in this world haven't had the advantages that you've had.'") as a way of calibrating the audio.
According to the paper, "The interview interface displays a 2-D sprite avatar representing the interviewer agent in the center, with the participant's avatar shown at the bottom, walking toward a goal post to indicate progress. When the AI interviewer agent spoke, it was signaled by a pulsing animation of the center circle with the interviewer's avatar."
The two-hour interviews produced transcripts averaging 6,491 words in length. The questions covered race, gender, politics, income, social media use, job stress, and the makeup of participants' families. The researchers published the interview script and the questions the AI asked.
These transcripts, less than 10,000 words each, were then fed into another LLM, which the researchers used to spin up generative agents meant to replicate the participants. The researchers then put both the participants and the AI clones through more questions and economic games to see how they would compare. "Whenever an agent is queried, the entire interview transcript is injected into the model prompt, instructing the model to imitate the person based on their interview data," the paper explains.
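To make that concrete, here's a minimal sketch of what transcript injection can look like, assuming an OpenAI-style chat API; the client, model name, and prompt wording are illustrative guesses, not the researchers' actual code:

```python
# Minimal sketch: build a "generative agent" by stuffing the full interview
# transcript into the system prompt of a chat model. The prompt wording is
# hypothetical; assumes the openai package and an OPENAI_API_KEY env var.
from openai import OpenAI

client = OpenAI()

def ask_agent(transcript: str, question: str) -> str:
    """Ask a question of an agent built from one participant's interview."""
    system_prompt = (
        "You are imitating a real person based on the interview transcript "
        "below. Answer every question the way that person would, staying "
        "consistent with their values, experiences, and way of speaking.\n\n"
        "--- INTERVIEW TRANSCRIPT ---\n" + transcript
    )
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; the paper used its own model setup
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content
```

Note that nothing is fine-tuned here: the agent is just a frozen model plus one very long prompt, which is why a sub-10,000-word transcript is enough.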
This part of the process was as close to controlled as possible. The researchers used the General Social Survey (GSS) and the Big Five Inventory (BFI) to test how well the LLMs matched their inspirations. They then ran both participants and LLMs through five economic games to see how their behavior compared.
The results were mixed. The AI agents answered about 85% of the GSS questions the same way as their real-world counterparts. They hit 80% on the BFI. The numbers dropped, however, once the agents started playing economic games. The researchers offered real-life participants cash rewards to play games such as the Prisoner's Dilemma and the Dictator Game.
In the Prisoner's Dilemma, participants can choose to cooperate and both succeed, or betray their partner for a chance at a bigger win. In the Dictator Game, participants choose how to allocate resources to other participants. Real-life subjects earned money on top of the original $60 by playing these games.
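For readers who haven't run into these games, here's a toy Prisoner's Dilemma payoff table in Python; the payout values are invented for illustration and are not the stakes used in the study:

```python
# Illustrative Prisoner's Dilemma payoffs (made-up values, not the study's).
PAYOFFS = {
    # (player_choice, partner_choice): (player_payout, partner_payout)
    ("cooperate", "cooperate"): (3, 3),  # both work together, both do well
    ("cooperate", "defect"):    (0, 5),  # player is betrayed
    ("defect",    "cooperate"): (5, 0),  # player betrays for the big win
    ("defect",    "defect"):    (1, 1),  # mutual betrayal, both lose out
}

def play_round(player: str, partner: str) -> tuple[int, int]:
    """Return (player, partner) payouts for one round."""
    return PAYOFFS[(player, partner)]

print(play_round("defect", "cooperate"))  # (5, 0)
```

The point of using games rather than surveys is that they put money on the line, so they probe what people do rather than what they say.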
Faced with these economic games, the AI clones didn't replicate their real-world counterparts as well. "On average, generative agents achieve a normalized correlation of 0.66," the paper reports, or about 66%.
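As I read the paper, "normalized" means an agent's raw agreement is divided by how consistently the human reproduced their own answers when re-interviewed two weeks later, since no copy can be expected to beat that ceiling. Here's a toy sketch of that calculation, with invented answers:

```python
# Toy sketch of normalized accuracy: agent accuracy divided by the human's
# own two-week test-retest consistency. All answer data below is invented.
def raw_accuracy(answers_a: list[int], answers_b: list[int]) -> float:
    """Fraction of questions answered identically."""
    matches = sum(a == b for a, b in zip(answers_a, answers_b))
    return matches / len(answers_a)

participant_t1 = [1, 3, 2, 2, 5, 1, 4, 2, 3, 1]  # original interview answers
participant_t2 = [1, 3, 2, 1, 5, 1, 4, 2, 3, 2]  # same person, two weeks later
agent          = [1, 3, 2, 2, 5, 2, 4, 2, 3, 2]  # the generative agent

agent_acc = raw_accuracy(agent, participant_t1)                 # 0.80
human_ceiling = raw_accuracy(participant_t2, participant_t1)    # 0.80
print(f"normalized accuracy: {agent_acc / human_ceiling:.2f}")  # 1.00
```

On that reading, the headline 85% doesn't mean the agents are 85% of a perfect oracle; it means they predict a person roughly as well as that person predicts themselves two weeks apart.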
The whole paper is worth reading if you're interested in how academics are thinking about AI agents and the public. It didn't take the researchers long to boil a person's personality down into an LLM that behaved similarly. Given time and energy, they can probably bring the two even closer together.
This worries me. Not because I don't want to see the ineffable human spirit reduced to a spreadsheet, but because I know this kind of technology will be used for ill. We've already seen dumber LLMs trained on public recordings tricking grandmothers into giving out banking info to an AI relative after a quick phone call. What happens when these machines have a script? What happens when they have access to purpose-built personas based on social media activity and other publicly available information?
What happens when a corporation or politician decides that the public wants and needs something based not on their spoken will, but on an approximation of it?