It did not just allow hateful remarks; it created them. Out of whole cloth.
https://www.theringer.com/2025/07/09/tech/ai-x-grok-elon-musk-linda-yaccarino-hitler
"This week, though, the bot lost its nonexistent mind to a completely new degree, promoting nakedly antisemitic conspiracy theories, praising Hitler, and ranting about white genocide in a surreally exaggerated tone of Very Online glee. "
There are some truly shocking details in that article and in the other articles it links to.
Of course it isn't Musk's fault, so Yaccarino must take the fall. But this was always her fate. Taking a job as Musk's figurehead boss, the "CEO in name only" to his "Chief Technology Officer (ha ha, who are we kidding?)", was always doomed to fail. This is precisely the kind of scenario she was kept around for.
All of this is so very, very disgusting, but the most important paragraph in the article is this:
"I’m going to assume, for the sake of my own sanity, that you think all this is bad. But maybe you don’t think it’s that bad? Maybe there’s a small part of you that’s like, “OK, whatever, some knobs got turned and some ugly things got said, but the process of technological advancement always includes setbacks. They’ll learn from this, and it won’t happen again.” I’d ask you to consider, though, the possibility that this latest Grok incident is in fact a reason to be very, very frightened of AI chatbots generally. I’d ask you to consider the possibility that it will happen again, that it will go on happening for as long as this technology is in the hands of oligarchs like Musk, and that the really dangerous thing is that it won’t always be this overt. It will go on happening, only once the technology is properly tuned, we won’t be able to see it happening, and that will be much more damaging."