There's a general problem with AI systems called the "Alignment Problem" - look it up.
TLDR - AI systems won't necessarily do what we want them to, because it's hard to design a reward signal that actually makes them optimize for the goals we intend.
Beyond TLDR -
We've already observed this issue in simple AI systems (reinforcement-learning agents playing video games routinely find ways to exploit their reward functions), as well as in Bing Chat (built on GPT) and even in self-driving systems. A toy version of this failure is sketched below.
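To make the TLDR concrete, here's a minimal sketch of reward hacking (the environment, names, and numbers are all invented for illustration, not taken from any of the systems above): a plain tabular Q-learning agent on a five-state "racetrack" where the designer wants it to reach the finish line, but the reward function also pays out for a respawning point pickup along the way. The agent learns to farm the pickup forever instead of finishing - the same failure OpenAI documented with a boat-racing agent in CoastRunners.

import random

# Toy "racetrack": states 0..4 on a line. State 4 is the finish line
# (the designer's true goal). State 1 holds a point pickup that pays
# out every time the agent steps onto it - the proxy reward.
FINISH, PICKUP = 4, 1
ACTIONS = (-1, +1)  # step left, step right

def step(state, action):
    nxt = min(max(state + action, 0), FINISH)
    if nxt == FINISH:
        return nxt, 2.0, True    # modest reward for actually finishing
    if nxt == PICKUP:
        return nxt, 1.0, False   # proxy reward: respawning pickup
    return nxt, 0.0, False

# Plain tabular Q-learning on the proxy reward.
Q = {(s, a): 0.0 for s in range(FINISH + 1) for a in range(2)}
alpha, gamma, eps = 0.1, 0.95, 0.1
for _ in range(5000):
    s = 0
    for _ in range(50):  # cap episode length
        a = random.randrange(2) if random.random() < eps else max((0, 1), key=lambda a: Q[(s, a)])
        s2, r, done = step(s, ACTIONS[a])
        target = r if done else r + gamma * max(Q[(s2, 0)], Q[(s2, 1)])
        Q[(s, a)] += alpha * (target - Q[(s, a)])
        s = s2
        if done:
            break

greedy = [max((0, 1), key=lambda a: Q[(s, a)]) for s in range(FINISH + 1)]
print("greedy action per state (0=left, 1=right):", greedy)
# Typical result: the agent shuttles back and forth around state 1,
# farming pickups, and never heads for the finish line. It is doing
# exactly what it was told - maximizing the reward we wrote down,
# not the goal we had in mind.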
Also, nobody completely understands how these LLMs work under the hood - not even the researchers building them. Without that understanding, we can't verify whether a model is actually aligned, which is a big part of why we're hardly making any progress on solving alignment.
Once these systems are powerful enough, the impact of alignment failures will be severe. The world is simply not spending enough resources on solving alignment even in simple AI systems, let alone in the gargantuan LLMs we have today and the larger ones we will build.