Meta, the parent company of Facebook, has dissolved its Responsible AI (RAI) division. Established in 2018, the RAI team oversaw the safety of Meta's AI projects; its remaining members will now be distributed among other AI-related teams within the company.
Jon Carvill, a Meta communications manager, said that despite the disbanding, Meta will continue to prioritize and invest in safe and responsible AI development. He added that the affected employees will keep supporting the company's cross-functional work on responsible AI development and use.
This move by Meta aligns with a broader trend among tech giants facing increased scrutiny over their approach to AI ethics. In March 2023, Microsoft underwent a similar restructuring, letting go of its entire ethics and society team dedicated to aligning AI principles with product design.
The dissolution has raised concerns among critics who fear a decline in Meta's commitment to AI ethics. AI ethics researchers have cautioned that the responsible AI team played a vital role in addressing the ethical implications of Meta's technology.
Meta has faced prior criticism over its handling of AI ethics, including accusations of bias in its targeted advertising algorithms and concerns about misinformation spreading on its platforms. The decision to disband the responsible AI team adds to a series of moves that raise questions about the company's commitment to the responsible use of AI technology.
While Meta defends the decision and asserts an ongoing commitment to AI ethics, its past missteps in this area fuel skepticism about the company's dedication to responsible AI practices.