Latest from OpenAI, Anthropic, Google, Meta & more
YouTube is handing politicians and journalists a tool to identify AI-generated content just as deepfakes threaten to upend the 2024 election cycle, a move that exposes the company's awkward position as both a platform for misinformation and a would-be guardian against it. The expanded access marks a significant escalation in the AI detection arms race, though experts warn the technology remains imperfect and that bad actors are already developing countermeasures. What YouTube won't say is whether this move represents genuine responsibility or a strategic bet that controlling the narrative around synthetic media might help the company escape the regulatory scrutiny it's faced over election interference.
When detection capabilities lag behind model capabilities, organizations create a structural gap that attackers are increasingly prepared to exploit.
by Tanveer Mustafa — Understanding Code Understanding, Program Synthesis, Bug Detection, and Code Completion
The release of the digital ecosystem coincides with the final phase of a National Science Foundation program to develop AI models for hidden-threat and contraband detection using RaySecur's proprietary ...
OpenAI shares updates on its mental health safety work, including parental controls, trusted contacts, improved distress detection, and recent litigation developments.
Not long ago, spotting an AI-generated image felt almost easy. The internet circulated a familiar checklist: count the fingers, look ...