Elon Musk's AI model Grok will no longer be able to edit photos of real people to show them in revealing clothing, after widespread concern over sexualised AI deepfakes in countries including the UK and US.

"We have implemented technological measures to prevent the Grok account from allowing the editing of images of real people in revealing clothing such as bikinis. This restriction applies to all users, including paid subscribers," reads an announcement on X, which operates the Grok AI tool.

The change was announced hours after California's top prosecutor said the state was investigating the spread of sexualised AI deepfakes, including of children, generated by the model.

X, formerly known as Twitter, also reiterated in a statement on Wednesday that only paid users will be able to edit images using Grok on its platform. This will add an extra layer of protection, it said, by helping to ensure that those who abuse Grok to violate the law or X's policies are held accountable. Users who try to generate images of real people in bikinis, underwear and similar clothing using Grok will be stopped from doing so according to the laws of their jurisdiction, the statement added.

In a statement on Wednesday, California Attorney General Rob Bonta said: "This material, which depicts women and children in nude and sexually explicit situations, has been used to harass people across the internet."

Malaysia and Indonesia have blocked access to the chatbot over the images, and UK Prime Minister Sir Keir Starmer has warned that X could lose the "right to self-regulate" amid outrage over the AI images.

Britain's media regulator, Ofcom, said on Monday that it would investigate whether X had failed to comply with UK law over the sexual images.