Latest from OpenAI, Anthropic, Google, Meta & more
306 stories
Medical professionals in the region are raising alarm bells over a new artificial intelligence chatbot marketed as a health advisor, warning that the tool could dangerously mislead patients into delaying or forgoing legitimate medical care. The skepticism from local doctors and health officials underscores a growing tension between AI companies racing to capture the health-tech market and the medical establishment's concerns about liability and patient safety. With such tools already proliferating across consumer platforms, the question is no longer whether AI will play a role in healthcare, but whether regulators can establish guardrails fast enough to prevent harm.
2 sources
So when exactly is smut considered porn? OpenAI's delayed "adult mode" for ChatGPT is expected to support saucy text conversations at launch,...
AI chatbots have been linked to suicides for years. Now one lawyer says they are showing up in mass casualty cases too, and the technology is moving faster than the safeguards.
Software demos and Pentagon records detail how chatbots like Anthropic’s Claude could help the Pentagon analyze intelligence and suggest next steps.
by triggerAll — Discover how OpenAI’s latest model outperformed all competitors.
We compared DeepSeek vs ChatGPT on performance, coding, cost, and features. See which AI model wins in 2026 across every major category.
A survey of 1,000 women in the UK aged 20 to 50 found that 53 per cent said they would use a free AI tool for medical advice, even while acknowledging that such tools can have an estimated 20 per...
OpenAI has delayed the launch of an “adult mode” for its ChatGPT after the company’s advisors warned of the risks — including the possibility of creating a “sexy suicide coach,” according to a report.
Learn how to use Spotify, Canva, Figma, Expedia, and other apps directly in ChatGPT.
The US military might use generative AI systems to rank lists of targets and make recommendations—which would be vetted by humans—about which to strike first, according to a Defense Department...