“AI Firm Anthropic Shifts Safety Focus Amid Competitive Pressures”

Anthropic, the AI firm behind the Claude chatbot, was founded with a strong emphasis on safe technology, but it now appears to be adjusting its safety priorities to stay competitive. The company announced a revision to its responsible-scaling policy, a set of voluntary guidelines intended to prevent the release of AI systems capable of causing serious harm, such as enabling large-scale cyberattacks.

The updated guidelines state that Anthropic will still demand a “strong argument that catastrophic risk is contained” during AI development, but the company now says it will not halt progress “until and unless we no longer believe we have a significant lead” over competitors. According to the company, the change reflects a broader move in the U.S. away from AI safety concerns and toward AI’s economic potential.

Anthropic’s change in safety guidelines coincides with the Pentagon’s threat to terminate contracts unless the company permits its technology to be used for all legal military purposes, though Anthropic maintains that the revision is unrelated to this pressure. Anthropic positioned itself as a safety-centric company from its founding in 2021 by former OpenAI employees who left over safety concerns, and CEO Dario Amodei has consistently stressed the importance of safety, as he articulated in a December interview with Fortune.

While the company’s blog post framed the updated safety practices as a matter of transparency and accountability, Heidy Khlaaf, chief AI scientist at the AI Now Institute, criticized Anthropic for not adequately addressing the harms already posed by current AI technology, such as the misuse of the Claude chatbot in fraud schemes and cyberattacks.

Khlaaf suggests that Anthropic is shedding its safety-oriented facade to better align with business interests amid fierce competition with rivals such as OpenAI and Google. The U.S. government’s strongly pro-AI stance, including threats to withhold funding from states that impede AI development, further complicates the regulatory landscape for companies like Anthropic in both the U.S. and Canada.

Despite the pressure from the Pentagon, Anthropic says it will not allow its technology to be used in certain military applications, such as autonomous weapons systems or mass surveillance. Its ongoing dialogue with the government underscores the company’s stated commitment to responsible AI use despite external pressures: Amodei has indicated that while Anthropic aims to continue working with the Department of Defense, it will not compromise its principles on how its technology is used.
