Using Reddit’s popular ChangeMyView community as a source of baseline data, OpenAI had previously found that 2022’s ChatGPT-3.5 was significantly less persuasive than random humans, ranking in just the 38th percentile on this measure. But that performance jumped to the 77th percentile with September’s release of the o1-mini reasoning model and up to percentiles in the high 80s for the full-fledged o1 model.
So are you smarter than a Redditor?
If you don’t read the article, this sounds worse than it is. I think this is the important part:
This is the buried lede that's really concerning, I think.
Their goal is to create AI agents that are indistinguishable from humans and capable of convincing people to hold certain positions.
Sometime in the future, all online discourse may be just a giant AI-fueled tool sold to the highest bidders to manufacture consent.
A very large portion of people, possibly more than half, change their views to fit in with everyone else. So an army of bots pretending to hold a view could sway a significant portion of the population just through repetition and exposure, by creating the assumption that most other people think that way. The bots don't even need to be convincing at all; they just need to project an overwhelming appearance of conformity.