From auto-generated email responses to term papers written by ChatGPT, artificial intelligence now churns out text that both saves time and opens the door to fraud.
A new paper in the journal PNAS asks: Can humans tell the difference?
Research shows that the ability to distinguish AI text from human writing varies by context. In this case, the researchers wanted to test self-presentation.
When 4,600 participants read job applications, online dating profiles and Airbnb host profiles, their ability to tell real from fake was no better than a coin toss.
But they often agreed with one another's assessments, likely because they shared the same mistaken assumptions: that human authors were more likely to use first-person pronouns, focus on past events or mention family.
The fear now is that future AI systems could exploit those assumptions to produce text that sounds more human than actual human writing.