This article is great. You should go and read it. Afterwards, if you’re scratching your head about the overall point, this might be helpful:
tl;dr Language models like GPT-3 incorporate some model of the world. That’s why they can generate plausible-sounding text. Future language models will be larger, more powerful, and have more complete models of the world. So we will be able to ask the language model questions, like ‘what will happen if we do X?’. By compiling the answers to many such questions, we can figure out the best thing to do.
That is, assuming you have some utility function you can maximize.
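The decision loop described above can be sketched in a few lines. Everything here is a toy stand-in: `predict_outcome` is a hypothetical stub for querying a future language model, and `utility` is an invented scoring function, not anything from the article.

```python
def predict_outcome(action: str) -> str:
    # Hypothetical stand-in for asking a language model
    # "what will happen if we do X?"
    outcomes = {
        "raise prices": "revenue up, some customers leave",
        "lower prices": "revenue down, more customers",
        "do nothing": "no change",
    }
    return outcomes[action]

def utility(outcome: str) -> float:
    # Toy utility function: reward revenue growth and customer
    # gains, penalize customer loss.
    score = 0.0
    if "revenue up" in outcome:
        score += 2.0
    if "customers leave" in outcome:
        score -= 0.5
    if "more customers" in outcome:
        score += 1.0
    return score

def best_action(actions):
    # Ask the "model" about each candidate action, then pick the
    # one whose predicted outcome maximizes utility.
    return max(actions, key=lambda a: utility(predict_outcome(a)))

print(best_action(["raise prices", "lower prices", "do nothing"]))
```

The point of the sketch is just the shape of the loop: query the model per action, score each predicted outcome with your utility function, take the argmax.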