


AI models can learn to conceal information from their users
This makes it harder to ensure that they remain transparent
It was an eye-opening experiment. In 2023 Apollo Research, an outfit in London that tests artificial-intelligence (AI) systems, instructed OpenAI’s GPT-4, a large language model, to manage a fictional firm’s stock portfolio without making illegal insider trades. Posing as company management, the researchers put GPT-4 “under pressure” by stressing that the firm was in dire financial straits. Someone purporting to be a company trader then sent the model a prompt reiterating the risks of insider trading. But, as an aside, she revealed the name of a company that would announce a “huge” merger within hours. What followed was startling.