AI Industry Leaders to Combat Image-Based Sexual Abuse

The U.S. government has received a set of voluntary commitments from AI industry leaders aimed at curbing image-based sexual abuse, including non-consensual intimate images (NCII) and child sexual abuse material (CSAM). Major players such as Adobe, Anthropic, Microsoft, and OpenAI have agreed to improve how they source data for AI training and development, so that harmful content is identified and removed before it can be exploited. The commitments address the risks artificial intelligence poses in generating explicit and abusive imagery: the companies will responsibly source datasets, strengthen their development processes, and implement safeguards to prevent their models from being used to create sexual abuse images.

Responsible Data Sourcing and Development Processes

Adobe, Microsoft, and other firms have pledged to source their datasets more responsibly, ensuring they do not contain material that could be used to generate AI-driven sexual abuse imagery. This includes stronger safeguards around how data is collected, filtered, and incorporated into AI training models. A central part of the commitment is preventing AI-generated abuse content through enhanced feedback loops and rigorous stress testing: the firms will put their models through reviews designed to detect and block outputs that could contribute to sexual abuse. The companies have also committed to removing explicit images from AI training datasets where necessary.

Tackling NCII and CSAM Through AI Model Oversight

NCII and CSAM have become more prevalent as AI technologies have grown more advanced. AI-generated content, including digitally manipulated images, has created serious challenges for companies working to prevent the spread of abusive material. The White House announcement emphasized that this form of abuse has severe consequences for individuals, and that the AI industry has a responsibility to respond. By implementing these protective measures, companies like Anthropic and OpenAI are focusing on reducing the creation and spread of harmful content. Microsoft has recently collaborated with organizations like StopNCII.org to help detect and prevent the distribution of non-consensual images on its platforms, while Adobe has committed to advancing its development processes to protect against AI-generated sexual abuse content. This follows on from last year’s voluntary agreements, in which leading AI companies promised to prioritize safety, security, and trust in AI development.

The Role of AI Companies in Preventing Harm

AI-generated image-based abuse is a serious and growing challenge. The commitments require companies to actively prevent their AI models from producing or spreading such content. Similar actions have been taken by large organizations in other industries: Cash App and Square have worked to block payments linked to companies promoting image-based sexual abuse, and Google has updated its search platform to reduce the visibility of non-consensual intimate images. Meta and GitHub have also implemented new strategies to limit the spread of these abusive images. Meta recently removed over 63,000 Instagram accounts involved in sextortion scams and expanded its efforts to block image-based abuse on its platforms, while GitHub has updated its policies to prohibit software tools that facilitate the creation of non-consensual or manipulated imagery. These actions indicate a concerted effort to combat the spread of abusive AI-generated images.

The commitments are a positive step towards addressing image-based sexual abuse, but the firms involved acknowledge that more work must follow. As AI technology advances, so too must the measures that companies put in place. Collaboration between the technology sector, civil society, and researchers will be essential in continuing to fight the misuse of AI. A working group including these stakeholders will explore further ways to prevent and mitigate the harms caused by AI-generated image-based abuse.

Photo credits: M-Production, AdobeStock


Posted by

Mark Wilson

Mark Wilson is a news reporter specializing in information technology and cybersecurity. Mark has contributed to leading publications and spoken at international forums, with a focus on cybersecurity threats and the importance of data privacy. Mark is a computer science graduate.