Technology
Large language models can be used to scam humans, but AI is also susceptible to being scammed – and some models are more gullible than others
By Chris Stokel-Walker
Facebook / Meta
Twitter / X icon
Scams can fool AI models
Wong Yu Liang/Getty Images
The large language models (LLMs) that power chatbots are increasingly being used in attempts to scam humans – but they are susceptible to being scammed themselves.
Udari Madhushani Sehwag at JP Morgan AI Research and her colleagues peppered three models behind popular chatbots – OpenAI’s GPT-3.5 and GPT-4, as well as Meta’s Llama 2 – with 37 scam scenarios.
The chatbots were told, for instance, that they had received an email recommending investing in a new cryptocurrency, with…
More from New Scientist
Explore the latest news, articles and features
Information contained on this page is provided by an independent third-party content provider. This website makes no warranties or representations in connection therewith. If you are affiliated with this page and would like it removed please contact editor @pleasantgrove.business