Latest AI model poses ‘medium risk’ for political persuasion via text: OpenAI

By IANS Updated: Aug 09, 2024 6:47 pm
Photo: IANS

SAN FRANCISCO — Sam Altman-run OpenAI has admitted that its new artificial intelligence (AI) model poses a “medium risk” when it comes to influencing political opinions via generated text.

The company evaluated the persuasiveness of GPT-4o’s text and voice modalities. GPT-4o was launched publicly in May this year.

“Based on pre-registered thresholds, the voice modality was classified as low risk, while the text modality marginally crossed into medium risk,” the company revealed in a research paper.

For the text modality, the AI company evaluated how persuasive GPT-4o-generated articles and chatbot conversations were in shifting participant opinions on select political topics.

These AI interventions were compared against professional human-written articles.

“The AI interventions were not more persuasive than human-written content in aggregate, but they exceeded the human interventions in three instances out of twelve,” said OpenAI.

An OpenAI survey found that AI audio clips produced 78 per cent of the human audio clips’ effect size on opinion shift, while AI conversations produced 65 per cent of the human conversations’ effect size on opinion shift.

“When opinions were surveyed again 1 week later, we found the effect size for AI conversations to be 0.8 per cent while for AI audio clips, the effect size was -0.72 per cent,” it added.
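The percentages above compare the size of the opinion shift produced by the AI material with the shift produced by the matched human material. The article does not spell out which statistic OpenAI used, so the sketch below is only an illustration: it assumes a standardized effect size (Cohen’s d) and uses made-up opinion-shift scores to show how a figure like “78 per cent of the human clips’ effect size” could be arrived at.

```python
import math
import statistics

def cohens_d(treatment, control):
    """Standardized mean difference (Cohen's d) between two groups of scores."""
    pooled_sd = math.sqrt(
        (statistics.variance(treatment) + statistics.variance(control)) / 2
    )
    return (statistics.mean(treatment) - statistics.mean(control)) / pooled_sd

# Hypothetical opinion-shift scores (post-survey minus pre-survey), not real data.
control_shift = [0.1, 0.0, 0.2, -0.1, 0.1]     # participants shown no clip
human_clip_shift = [0.9, 1.1, 0.7, 1.0, 0.8]   # human-made audio clip
ai_clip_shift = [0.7, 0.8, 0.6, 0.9, 0.7]      # AI-generated audio clip

d_human = cohens_d(human_clip_shift, control_shift)
d_ai = cohens_d(ai_clip_shift, control_shift)

# "AI clips were X per cent of the human clips' effect size" is this ratio.
print(f"AI effect size as share of human effect size: {d_ai / d_human:.0%}")
```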

The company said it thoroughly evaluates new models for potential risks and builds in appropriate safeguards before deploying them in ChatGPT or the API.

“Building on the safety evaluations and mitigations we developed for GPT-4 and GPT-4V, we’ve focused additional efforts on GPT-4o’s audio capabilities, which present novel risks, while also evaluating its text and vision capabilities,” said the AI company.

Some of the risks evaluated include speaker identification, unauthorised voice generation, the potential generation of copyrighted content, ungrounded inference, and disallowed content.

“Based on these evaluations, we’ve implemented safeguards at both the model and system levels to mitigate these risks,” OpenAI said.

The findings indicated that GPT-4o’s voice modality doesn’t meaningfully increase “Preparedness risks”.
