In a recent move, Meta, the social media giant formerly known as Facebook, has reportedly disbanded its Responsible AI division, the team tasked with overseeing the safe development of artificial intelligence (AI) across the company. According to sources, members of the division have transitioned to roles within the generative AI product division, which was established in February of this year. The generative AI team develops products that use AI to generate language and images resembling human-made content. The shift follows a broader trend in the tech industry, where companies are investing heavily in machine learning to stay competitive in the rapidly evolving AI landscape.
The restructuring aligns with Meta’s broader initiative, dubbed the “year of efficiency” by CEO Mark Zuckerberg, which has manifested as a series of layoffs, team mergers, and internal reorganizations. The move underscores Meta’s emphasis on adaptability and efficiency amid the ongoing AI boom.
The decision to disband the Responsible AI division comes at a time when AI safety has become a top priority for industry leaders. With regulators and officials closely scrutinizing the potential risks of the technology, companies are taking proactive steps to address those concerns. In July, Anthropic, Google, Microsoft, and OpenAI jointly formed an industry group dedicated to establishing safety standards as AI continues to advance.
Despite the restructuring, Meta says it remains committed to responsible AI development and use. Team members from the disbanded division have been reassigned within the company and will continue to support responsible AI initiatives. Meta also recently unveiled two generative AI models, Emu Video and Emu Edit, signaling its intent to keep pushing the boundaries of AI capabilities while maintaining safety and responsibility in development and deployment.