A few years ago, many large language models had a problem. They were, to put it bluntly, racist. Users could quite easily prompt them to regurgitate racist material, or to express racist opinions they had gleaned from the vast corpus of texts on which they’d been trained. Toxic bias was, it seemed, ingrained in human writing and then amplified by AI.