Meta Joins Thorn and Industry Partners in New Generative AI Principles
At Meta, we’ve spent over a decade working to keep people safe online. In that time, we’ve developed numerous tools and features to help prevent and combat potential harm – and as predators have adapted to try to evade our protections, we’ve continued to adapt too.
We’re excited about the opportunities that generative AI technology can bring, but we also want to make sure that innovation and safety go hand in hand. That’s why we take steps to build our generative AI features and models responsibly. For example, we conduct extensive red teaming exercises with experts in areas like child exploitation and address any vulnerabilities we find.
Now Meta is joining Thorn, All Tech is Human and other leading tech companies in an effort to prevent the misuse of gen AI tools to perpetrate child exploitation. Alongside our industry partners, Meta commits to the below Safety by Design principles from Thorn and All Tech is Human, to be applied as appropriate, and will provide updates on our progress. These principles will inform how we develop gen AI technology at Meta to help ensure we mitigate potential risks from the start.
DEVELOP: Develop, build and train generative AI models that proactively address child safety risks.
- Responsibly source our training datasets, and safeguard them from child sexual abuse material (CSAM) and child sexual exploitation material (CSEM): This is essential to helping prevent generative models from producing AI-generated (AIG) CSAM and CSEM. The presence of CSAM and CSEM in training datasets for generative models is one avenue by which these models can reproduce this type of abusive content. For some models, their compositional generalization capabilities further allow them to combine concepts (e.g. adult sexual content and non-sexual depictions of children) to then produce AIG-CSAM. We are committed to avoiding or mitigating training data with a known risk of containing CSAM and CSEM. We are committed to detecting and removing CSAM and CSEM from our training data, and reporting any confirmed CSAM to the relevant authorities. We are committed to addressing the risk of creating AIG-CSAM that is posed by having depictions of children alongside adult sexual content in our video, image and audio generation training datasets.
- Incorporate feedback loops and iterative stress-testing strategies in our development process: Continuous learning and testing to understand a model’s capabilities to produce abusive content is key in effectively combating the adversarial misuse of these models downstream. If we don’t stress test our models for these capabilities, bad actors will do so regardless. We are committed to conducting structured, scalable and consistent stress testing of our models throughout the development process for their capability to produce AIG-CSAM and CSEM within the bounds of law, and integrating these findings back into model training and development to improve safety assurance for our generative AI products and systems.
- Employ content provenance with adversarial misuse in mind: Bad actors use generative AI to create AIG-CSAM. This content is photorealistic, and can be produced at scale. Victim identification is already a needle-in-a-haystack problem for law enforcement: sifting through huge amounts of content to find the child in active harm’s way. The expanding prevalence of AIG-CSAM is growing that haystack even further. Content provenance solutions that can be used to reliably discern whether content is AI-generated will be crucial to responding effectively to AIG-CSAM. We are committed to developing state-of-the-art media provenance or detection solutions for our tools that generate images and videos. We are committed to deploying solutions to address adversarial misuse, such as considering incorporating watermarking or other techniques that embed signals imperceptibly in the content as part of the image and video generation process, as technically feasible.
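The dataset-safeguarding principle above relies in practice on matching training content against hash lists of known abusive material maintained by child-safety organizations (production systems use perceptual hashes such as PhotoDNA, which are not publicly available). As a hypothetical sketch of the pipeline shape only, the function below uses exact SHA-256 matching against a blocklist, and routes flagged items for review and reporting rather than silently dropping them; all names here are illustrative, not Meta's actual tooling:

```python
import hashlib


def sha256_hex(data: bytes) -> str:
    """Hex digest of a raw content blob."""
    return hashlib.sha256(data).hexdigest()


def filter_dataset(samples, blocklist):
    """Split samples into (kept, flagged) using a blocklist of known hashes.

    `samples` is an iterable of raw byte blobs; `blocklist` is a set of hex
    digests from an authoritative hash list (hypothetical here). Flagged items
    are returned separately so they can be reported, not just discarded.
    """
    kept, flagged = [], []
    for blob in samples:
        (flagged if sha256_hex(blob) in blocklist else kept).append(blob)
    return kept, flagged
```

Returning the flagged items (rather than a filtered list alone) matters because the commitment includes reporting confirmed CSAM to authorities, not only excluding it from training.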
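The "structured, scalable and consistent stress testing" commitment can be pictured as a simple harness: run a fixed suite of adversarial prompts against a model, classify each response, and collect the failures so they can feed back into training. The `model_fn` callable, the refusal classifier, and all names below are hypothetical placeholders, not Meta's actual evaluation stack:

```python
def stress_test(model_fn, prompts, is_refusal):
    """Run a suite of adversarial prompts and tally refusals vs. compliances.

    `model_fn` maps a prompt string to a model response; `is_refusal` is a
    classifier deciding whether the response declined the request. Prompts the
    model complied with are collected as failures for follow-up mitigation.
    """
    results = {"refused": 0, "complied": 0, "failures": []}
    for prompt in prompts:
        response = model_fn(prompt)
        if is_refusal(response):
            results["refused"] += 1
        else:
            results["complied"] += 1
            results["failures"].append(prompt)
    return results
```

Keeping the harness deterministic over a fixed prompt suite is what makes results comparable across model versions, which is the point of running it "throughout the development process".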
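The watermarking idea in the provenance principle can be illustrated with a deliberately naive sketch: hiding a bit string in the least significant bits of an image's pixel values, which changes each pixel by at most 1. Production provenance systems use robust, tamper-resistant watermarks and signed metadata rather than fragile LSB encoding; the functions and the toy payload below are purely illustrative:

```python
import numpy as np


def embed_watermark(pixels: np.ndarray, bits: list[int]) -> np.ndarray:
    """Write `bits` into the least significant bit of the first pixel values."""
    if len(bits) > pixels.size:
        raise ValueError("payload larger than image")
    out = pixels.copy().ravel()
    for i, b in enumerate(bits):
        out[i] = (out[i] & 0xFE) | b  # clear LSB, then set it to the payload bit
    return out.reshape(pixels.shape)


def extract_watermark(pixels: np.ndarray, n_bits: int) -> list[int]:
    """Read the payload back from the least significant bits."""
    return [int(v & 1) for v in pixels.ravel()[:n_bits]]
```

The fragility of this scheme (any re-encoding destroys it) is exactly why the principle speaks of solutions designed "with adversarial misuse in mind" rather than simple LSB tricks.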
DEPLOY: Release and distribute generative AI models after they have been trained and evaluated for child safety, providing protections throughout the process.
- Safeguard our generative AI products and services from abusive content and conduct: Our generative AI products and services empower our users to create and explore new horizons. These same users deserve to have that space of creation be free from fraud and abuse. We are committed to combating and responding to abusive content (CSAM, AIG-CSAM and CSEM) throughout our generative AI systems, and incorporating prevention efforts. Our users’ voices are key, and we are committed to incorporating user reporting or feedback options to empower these users to build freely on our platforms.
- Responsibly host models: As our models continue to achieve new capabilities and creative heights, a wide variety of deployment mechanisms presents both opportunity and risk. Safety by design must encompass not just how our model is trained, but how our model is hosted. We are committed to responsible hosting of our first-party generative models, assessing them (e.g. via red teaming or phased deployment) for their potential to generate AIG-CSAM and CSEM, and implementing mitigations before hosting. We are also committed to responsibly hosting third-party models in a way that minimizes the hosting of models that generate AIG-CSAM. We will ensure we have clear rules and policies around the prohibition of models that generate child safety violative content.
- Encourage developer ownership in safety by design: Developer creativity is the lifeblood of progress. This progress must come paired with a culture of ownership and responsibility. We encourage developer ownership in safety by design. We will endeavor to provide information about our models, including a child safety section detailing steps taken to avoid the downstream misuse of the model to further sexual harms against children. We are committed to supporting the developer ecosystem in their efforts to address child safety risks.
MAINTAIN: Maintain model and platform safety by continuing to actively understand and respond to child safety risks.
- Prevent our services from scaling access to harmful tools: Bad actors have built models specifically to produce AIG-CSAM, in some cases targeting specific children to produce AIG-CSAM depicting their likeness. They also have built services that are used to “nudify” content of children, creating new AIG-CSAM. This is a severe violation of children’s rights. We are committed to removing from our platforms and search results these models and services. [This principle only applies to search engines and public-facing third party model providers.]
- Invest in research and future technology solutions: Child sexual abuse online is an ever-evolving threat, as bad actors adopt new technologies in their efforts. Effectively combating the misuse of generative AI to further child sexual abuse will require continued research to stay up to date with new harm vectors and threats. For example, new technology to protect user content from AI manipulation will be important to protecting children from online sexual abuse and exploitation. We are committed to investing in relevant research and technology development to address the use of generative AI for online child sexual abuse and exploitation. We will continuously seek to understand how our platforms, products and models are potentially being abused by bad actors. We are committed to maintaining the quality of our mitigations to meet and overcome the new avenues of misuse that may materialize.
- Fight CSAM, AIG-CSAM and CSEM on our platforms: We are committed to fighting CSAM online and preventing our platforms from being used to create, store, solicit or distribute this material. As new threat vectors emerge, we are committed to meeting this moment. We are committed to detecting and removing child safety violative content on our platforms. We are committed to disallowing and combating CSAM, AIG-CSAM and CSEM on our platforms, and combating fraudulent uses of generative AI to sexually harm children.
The post Meta Joins Thorn and Industry Partners in New Generative AI Principles appeared first on Meta.
Introducing Our Open Mixed Reality Ecosystem
Today, we’re taking a major step toward our vision for a more open computing platform for the metaverse. We’re opening up the operating system that powers our Meta Quest devices to third-party hardware makers, giving developers a larger ecosystem to build for and ultimately creating more choice for consumers. This platform is the product of a decade of investment into the underlying technologies that enable mixed reality, and opening it up means a lot more people will benefit from that investment. We’re working with leading global technology companies to create a new ecosystem of mixed reality devices, and we’re making it even easier for developers to build mixed reality apps.
We believe a more open ecosystem is the best way to bring the power of mixed reality to as many people as possible. With more devices, this new ecosystem will offer more choice to consumers and businesses around the world. Developers will have a much larger range of hardware that can run their apps, and more device makers will expand their market to a wider range of users, much like we’ve seen with PCs and smartphones.
Introducing Meta Horizon OS
This new hardware ecosystem will run on Meta Horizon OS, the mixed reality operating system that powers our Meta Quest headsets. Meta Horizon OS combines the core technologies that power today’s mixed reality experiences with a suite of features that put social presence at the center of the platform.
Meta Horizon OS is the result of a decade of work at Meta to build a next-generation computing platform. To pioneer standalone headsets, we developed technologies like inside-out tracking and self-tracked controllers. To allow for more natural interaction systems and social presence, we pioneered hand, eye, face and body tracking. And for mixed reality, we built a full stack of technologies for blending the digital and physical worlds, including high-resolution Passthrough, Scene Understanding and Spatial Anchors. This long-term investment, which began on the mobile-first foundations of the Android Open Source Project, has produced a full mixed reality operating system used by millions of people.
Developers and creators can take advantage of all these technologies to create mixed reality experiences. And they can reach their audiences and grow their businesses through the content discovery and monetization platforms built into Meta Horizon OS, including the Meta Quest Store, which we’ll rename the Meta Horizon Store.
The social layer of Meta Horizon OS means people’s identities, avatars, social graphs and friend groups can move with them across virtual spaces, and developers can integrate these social features into their apps. And because this social layer is made to bridge multiple platforms, people can spend time together in virtual spaces that exist across mixed reality, mobile and desktop devices. Meta Horizon OS devices will also use the same mobile companion app that Meta Quest owners use today, which we’ll rename to the Meta Horizon app.
A New Generation Of Hardware
Leading global technology companies are already working to bring this new ecosystem to life with new devices built on Meta Horizon OS:
- ASUS’s Republic of Gamers will use its expertise as a leader in gaming solutions to develop an all-new performance gaming headset.
- Lenovo will draw on its experience co-designing Oculus Rift S, as well as deep expertise in engineering leading devices like the ThinkPad laptop series, to develop mixed reality devices for productivity, learning, and entertainment.
- Xbox and Meta teamed up last year to bring Xbox Cloud Gaming (Beta) to Meta Quest, letting people play Xbox games on a large 2D virtual screen in mixed reality. Now, we’re working together again to create a limited-edition Meta Quest, inspired by Xbox.
All of these devices will benefit from Meta’s long-term partnership with Qualcomm Technologies, Inc., which builds the Snapdragon® processors that are tightly integrated with our software and hardware stacks.
A More Open App Ecosystem
As we begin opening Meta Horizon OS to more device makers, we’re also expanding the ways app developers can reach their audiences. We’re beginning the process of removing the barriers between the Meta Horizon Store and App Lab, which lets any developer who meets basic technical and content requirements release software on the platform. App Lab titles will soon be featured in a dedicated section of the store on all our devices, making them easy for larger audiences to discover.
We’re also developing a new spatial app framework that helps mobile developers create mixed reality experiences. Developers will be able to use the tools they’re already familiar with to bring their mobile apps to Meta Horizon OS or to create entirely new mixed reality apps.
Consumers and developers alike will benefit from an ecosystem where multiple hardware makers build on a common platform. We look forward to continuing on this journey to bring mixed reality to more people.
The post Introducing Our Open Mixed Reality Ecosystem appeared first on Meta.