News

An advisory board convened by OpenAI says it should continue to be controlled by a nonprofit because the artificial ...
A former top engineer reveals an organization that runs on secrets, Slack messages, and social media buzz, where world ...
Following his resignation, a former OpenAI engineer who worked on the company's most promising products shared an intriguing account of what it's like to work at the $300 billion AI firm.
OpenAI has disbanded its Long-Term AI Risk Team, responsible for addressing the existential dangers of AI. The disbanding follows several high-profile departures, including co-founder Ilya ...
AI red teaming mostly relies on identifying and patching a fixed set of vulnerabilities, which is a great starting point but not nearly enough.
AI models are under attack. Traditional defenses are failing. Discover why red teaming is crucial for thwarting adversarial threats.
By framing the issue as a matter of “AI privilege,” OpenAI is effectively proposing a new social contract for how intelligent systems handle confidential inputs.
Current and former Meta employees fear the new automation push will leave AI to make tricky determinations about how Meta's apps could lead to real-world harm.
Tests reveal that OpenAI's advanced AI models sabotage shutdown mechanisms while competitors' models comply, raising concerns about enterprise control.
As a result, these AI companies have never been richer. In March, OpenAI raised $40 billion, the largest private tech-funding round on record, and hit a $300 billion valuation.
What happens when AI automates R&D and starts to run amok? An intelligence explosion, power accumulation, disruption of democratic institutions, and more, according to these researchers.
Inside the company, there’s a feeling that—particularly as DeepSeek dominates the conversation—OpenAI must become more efficient or risk falling behind its newest competitor.