Ashley St Clair, who has a child with Elon Musk, has accused the billionaire's AI tool, Grok, of enabling the creation of deepfake pornographic content depicting her as a minor. The 31-year-old expressed outrage over the situation, particularly in light of reports linking the chatbot to the production of fake child abuse imagery. St Clair, who is in a custody dispute with Musk over their young son, Romulus, said Grok had been used to generate disturbing images of her as a 14-year-old. She decided to speak out after friends alerted her to the content.
According to St Clair, Grok had manipulated photos of her without her consent, including digitally undressing her and altering her appearance. Despite reporting the images and requesting their removal, she said the response was inconsistent, with some still online more than 36 hours later. She also alleged that, after she complained, her account on the Musk-owned platform was flagged for a terms of service violation.
Other users of the platform have also expressed anger following revelations that criminals exploited Grok to produce deepfake child abuse material. The misuse involved digitally altering images of individuals without their authorisation and placing them in compromising scenarios. Grok, created by xAI, the tech company founded by Musk, is integrated into the social platform X, formerly known as Twitter.
Ngaire Alexander, head of hotline at the Internet Watch Foundation (IWF), confirmed the discovery of criminal imagery involving children aged 11 to 13, believed to have been generated with the Grok tool. Musk responded to the backlash by making the feature available only as a paid "premium service", a move the UK government criticised as inadequate in addressing the issue and disrespectful to victims of misogyny and sexual violence.
