What's new on Article Factory and the latest from the generative AI world

Senate Passes DEFIANCE Act to Combat Nonconsensual Deepfakes Involving AI Tools

The U.S. Senate approved the Disrupt Explicit Forged Images and Non‑Consensual Edits (DEFIANCE) Act by unanimous consent. The legislation allows victims of nonconsensual, sexually explicit deepfakes to sue the creators and hosts of the content. The measure comes as AI‑driven tools like X's Grok enable users to generate explicit images from simple prompts, raising concerns about child exploitation and privacy. While the act does not block the technology itself, it aims to make the creation and distribution of illegal deepfakes financially risky for perpetrators. The bill follows earlier deepfake‑related measures and still needs House approval before it can become law. Read more →

Indonesia Temporarily Blocks xAI’s Grok Over Non‑Consensual Sexual Deepfakes

Indonesia's communications and digital affairs minister, Meutya Hafid, announced a temporary block on xAI's chatbot Grok after the AI generated sexualized deepfake images of real women and minors. The ministry called the practice a serious violation of human rights and summoned X officials for discussion. Other governments and regulators, including India, the European Commission, and the United Kingdom, have also taken steps to curb or investigate Grok's content. xAI issued an apology and limited its image‑generation feature to paying X subscribers, while Elon Musk defended the company against accusations of censorship. Read more →

xAI’s Grok AI Image Editor Sparks Deepfake Controversy on X

The launch of an AI image‑editing feature on xAI's Grok has triggered a backlash after the tool was used to create a flood of non‑consensual sexualized deepfakes involving women and children. Screenshots show the model complying with requests to dress women in lingerie, spread their legs, and put children in bikinis. UK Prime Minister Keir Starmer called the material "disgusting" and urged X to remove it. In response, X has added only a minor restriction: generating images by tagging Grok now requires a paid subscription, though the editor itself remains freely accessible. Read more →

Instagram's Head Warns of Authenticity Crisis as AI Blurs Reality

The head of Instagram cautions that the platform faces a growing risk of losing trust as AI-generated media becomes indistinguishable from real photos and videos. Deepfakes and advanced generative tools are making authenticity a scarce commodity, prompting creators to lean into raw, imperfect content as a signal of truth. Instagram must evolve quickly to identify AI‑generated material, provide credibility signals, and support creators who maintain genuine, transparent voices. The shift challenges traditional polished aesthetics and forces the platform to rethink how it surfaces and ranks content. Read more →

AI-Generated Art Faces Growing Backlash Amid Calls for Clear Distinction

Generative AI tools have surged, producing images and videos that rival human creations. Artists, copyright holders, and major studios have launched lawsuits and public critiques, labeling AI outputs as plagiarism and low‑quality "slop." Tech firms defend their products as democratizing creation, while regulators and communities grapple with deepfake concerns and the environmental impact of data centers. The industry sees a clash between rapid technological advances and a growing demand for clearer labeling and ethical safeguards. Looking ahead, stakeholders anticipate continued legal battles and a push for responsible AI deployment. Read more →

AI Risks for Children Prompt Urgent Calls for Regulation

Experts warn that artificial intelligence tools such as chatbots, deepfake apps, and other AI‑driven features are increasingly embedded in children's daily lives and present serious safety concerns. Issues include emotionally manipulative chatbots, the creation of non‑consensual sexualized images, and the potential to encourage self‑harm. Researchers and advocates argue that current safeguards are insufficient and call for stronger industry regulation, independent oversight, and practical steps for parents and schools to protect young users. Read more →

AI Image Generators Used to Create Non-Consensual Bikini Deepfakes

Users of popular AI image generators are sharing instructions for altering photos of clothed women so they appear in bikinis, often without the subjects' consent. Discussions on Reddit have highlighted ways to bypass guardrails on models such as Google Gemini and OpenAI's ChatGPT. Both companies have policies forbidding sexualized or non‑consensual imagery, yet the tools continue to be subverted. Legal experts, including a director at the Electronic Frontier Foundation, warn that these practices represent a core risk of generative AI and emphasize the need for accountability and stronger safeguards. Read more →

New York Enacts Law Requiring AI Disclosure in Advertisements

New York Governor Kathy Hochul signed two bills that require advertisers to disclose any AI‑generated synthetic performers used in ads and set rules for using a person's name, image, or likeness after death. The legislation, known as Assembly Bill A8887B (S.8420‑A) and S.8391, aims to increase transparency for consumers and protect artists' rights, echoing concerns raised during the SAG‑AFTRA strike over digital replicas and deepfakes. Read more →

Google’s Gemini Adds Limited AI Image Detection, Highlights Gaps in Deepfake Verification

Google has introduced an image‑verification feature in its Gemini app that checks for a SynthID watermark to determine whether an image was generated by Google's own AI tools. The tool works well for Google‑created content but offers only vague assessments for images from other generators. Testing shows inconsistent results across Gemini's browser version, other Google models such as Gemini 3 and Gemini 2.5 Flash, and rival chatbots such as ChatGPT and Claude. The rollout underscores the need for broader, universal detection methods, a goal being pursued by initiatives like the Coalition for Content Provenance and Authenticity (C2PA). Read more →

Google's Nano Banana Pro AI Image Model Brings New Capabilities and New Concerns

Google has introduced Nano Banana Pro, an AI image generator built on its Gemini 3 model that can ground its outputs in Google Search data. The tool produces ultra‑realistic images, handles complex text rendering, and can create polished infographics. Reviewers note its impressive visual quality and its ability to generate legible text within images, a long‑standing challenge for generative AI. At the same time, the model's power raises alarm over potential misuse, including realistic deepfakes and the spread of misinformation, highlighting ongoing gaps in guardrails and policy enforcement. Read more →

OpenAI’s Sora App Floods the Web with Low‑Quality AI‑Generated Videos

OpenAI’s newly launched Sora video platform is being populated with a flood of AI‑generated clips that mix nostalgic imagery, celebrity deepfakes and formulaic jokes. Critics argue the content is shallow, repetitive and often offensive, serving more as a showcase for the technology than as genuine entertainment. The platform’s ease of use encourages users to create viral‑style videos without artistic depth, raising questions about the future direction of generative AI and its impact on culture. Read more →

Google Unveils Gemini AI Image Detector, Limited to Its Own Content

Google announced that its Gemini model can now identify images created with AI, using the SynthID detector to read invisible watermarks embedded in Google‑generated media. The tool, which is moving out of private beta, can confirm whether an image was produced by Google's own AI but cannot verify content from other providers. Google also highlighted its Nano Banana Pro editor, which adds features like legible text generation and 4K upscaling. While the new detector aims to curb the spread of deepfakes and AI‑generated slop, its scope remains narrow, and the company says it plans to expand detection to video and audio in the future. Read more →
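
For readers who want hands‑on experience with SynthID, Google has open‑sourced the text variant of the watermark through Hugging Face's transformers library. The sketch below shows generation‑side watermarking, assuming transformers 4.46 or later; the model and key values are arbitrary demo placeholders, and detecting the watermark afterward requires a separately trained detector, which is out of scope here.

```python
# Minimal sketch: SynthID-style watermarking for generated text, the variant
# Google has open-sourced in Hugging Face transformers (assumes >= 4.46).
# Note: the image detector described above is a Gemini app feature, not this API.
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    SynthIDTextWatermarkingConfig,
)

tokenizer = AutoTokenizer.from_pretrained("gpt2")   # any causal LM works
model = AutoModelForCausalLM.from_pretrained("gpt2")

watermarking_config = SynthIDTextWatermarkingConfig(
    keys=[654, 400, 836, 123, 340, 443, 597, 160],  # arbitrary demo keys
    ngram_len=5,
)

inputs = tokenizer("AI provenance matters because", return_tensors="pt")
out = model.generate(
    **inputs,
    do_sample=True,                 # the watermark is applied during sampling
    max_new_tokens=40,
    watermarking_config=watermarking_config,
)
print(tokenizer.batch_decode(out, skip_special_tokens=True)[0])
```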

AI-Generated Content Overwhelms Social Media, Raising Authenticity and Trust Concerns

Social platforms such as Facebook, Instagram and TikTok are increasingly saturated with low‑quality AI‑generated media, often called “AI slop,” and deepfake videos of public figures. Generative tools like OpenAI’s Sora, Google’s Veo and Midjourney let anyone create realistic videos from simple text prompts, blurring the line between real and fabricated content. Users report reduced trust and emotional connection to AI‑created posts, while platforms struggle to label and moderate such material. Experts warn that without stronger regulation, the flood of artificial content could further erode authenticity and exacerbate misinformation on social media. Read more →

OpenAI's Sora AI Video Generator Expands to Android in Multiple Markets

OpenAI has released its AI‑powered video creation app Sora for Android users in the United States, Canada, Japan, South Korea, Taiwan, Thailand, and Vietnam. The Android version mirrors the iOS app's features, including the "Cameos" tool that lets users generate videos featuring their own likeness and a TikTok‑style feed for sharing. The rollout follows Sora's rapid rise on iOS, where it topped the charts and logged over a million downloads in its first week. OpenAI is positioning Sora against competitors such as Meta's Vibes, TikTok, and Instagram while addressing criticism over deepfake content and copyrighted material. Read more →

AI Slop: The Flood of Low‑Effort Machine‑Generated Content

AI slop describes a wave of cheap, mass‑produced content created by generative AI tools without editorial oversight. The term captures how these low‑effort articles, videos, images and audio fill feeds, push credible sources down in search results, and erode trust online. Content farms exploit the speed and low cost of AI to generate clicks and ad revenue, while platforms reward quantity over quality. Industry responses include labeling, watermarking and metadata standards such as C2PA, but adoption is uneven. Experts warn that the relentless churn of AI slop threatens both information quality and the health of digital culture. Read more →
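
To make the C2PA piece concrete: Content Credentials are embedded in JPEG files as JUMBF boxes carried in APP11 marker segments, with the manifest store labeled "c2pa". The minimal sketch below, written against that file layout, only detects whether such a container is present; it performs no signature validation, which requires a full verifier such as the open‑source c2patool.

```python
# Minimal sketch: detect whether a JPEG carries an embedded C2PA manifest.
# Presence check only; this does not validate signatures or content hashes.
import struct
import sys

def has_c2pa_manifest(path: str) -> bool:
    with open(path, "rb") as f:
        data = f.read()
    if data[:2] != b"\xff\xd8":        # not a JPEG (missing SOI marker)
        return False
    i = 2
    while i + 4 <= len(data):
        if data[i] != 0xFF:            # lost marker sync; stop scanning
            break
        marker = data[i + 1]
        if marker in (0xD9, 0xDA):     # EOI or start-of-scan: metadata is over
            break
        (length,) = struct.unpack(">H", data[i + 2:i + 4])
        segment = data[i + 4:i + 2 + length]
        # APP11 (0xEB) segments carry JUMBF; C2PA manifest stores are labeled "c2pa"
        if marker == 0xEB and b"c2pa" in segment:
            return True
        i += 2 + length
    return False

if __name__ == "__main__":
    print(has_c2pa_manifest(sys.argv[1]))
```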

OpenAI pledges stronger safeguards for celebrity likenesses in Sora after actor and estate pushback

OpenAI announced new guardrails for its AI video‑generation tool Sora after actors, talent unions and the estate of Martin Luther King Jr. raised concerns about unauthorized deepfake videos. The company agreed that public figures must opt in before their likenesses can be used, and that representatives can request removal. The move follows complaints from Bryan Cranston, SAG‑AFTRA, and talent agencies, as well as public outcry over disrespectful depictions of Dr. King. OpenAI says the updated policies aim to give individuals greater control over how their images and voices are employed in AI‑generated content. Read more →

Sora Adds User Controls for AI-Generated Video Appearances

OpenAI's Sora app, described as a "TikTok for deepfakes," now lets users limit how AI‑generated versions of themselves appear in videos. The update introduces preferences that can block cameo appearances in political content, restrict specific language, or prevent certain visual contexts. OpenAI says the changes are part of broader weekend updates aimed at stabilizing the platform and addressing safety concerns. While the new tools give creators more say over their digital likenesses, critics note that similar AI safeguards have been bypassed in the past and that Sora's watermark offers only weak protection. OpenAI pledges further refinements. Read more →

Qualcomm Pushes C2PA Authentication in Snapdragon Chips to Combat AI‑Generated Media

Qualcomm is advancing digital content authenticity by integrating the C2PA standard into its Snapdragon mobile processors. At the Snapdragon Summit, the company unveiled the Snapdragon 8 Elite Gen 5 and announced a partnership with Truepic to embed watermarking that reveals AI involvement in photos and video. While Qualcomm provides the software package, adoption rests with phone manufacturers, and at least one unnamed maker is already working on integration. The move aims to give users a reliable way to verify that captured media is genuine in an era of deepfakes and AI‑enhanced images. Read more →
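
As a rough illustration of what capture‑time authentication involves, the sketch below shows the underlying cryptographic pattern: hash the captured bytes, sign the digest with a device‑held key, and let a verifier confirm later that the media is unchanged. This is a conceptual sketch using the Python cryptography package, not Qualcomm's implementation; real C2PA credentials wrap such signatures in COSE structures inside a signed manifest, with certificate chains identifying the signer.

```python
# Conceptual sketch of capture-time signing (not Qualcomm's actual stack):
# the capture pipeline signs a hash of the image bytes with a device key,
# and a verifier later confirms the media was not altered after capture.
import hashlib

from cryptography.hazmat.primitives.asymmetric import ed25519

device_key = ed25519.Ed25519PrivateKey.generate()  # stand-in for a hardware-backed key
public_key = device_key.public_key()               # shared with verifiers

def sign_capture(image_bytes: bytes) -> bytes:
    digest = hashlib.sha256(image_bytes).digest()  # fingerprint of the captured media
    return device_key.sign(digest)                 # signature travels with the file

def verify_capture(image_bytes: bytes, signature: bytes) -> bool:
    digest = hashlib.sha256(image_bytes).digest()
    try:
        public_key.verify(signature, digest)
        return True
    except Exception:
        return False                               # any post-capture edit breaks the check

photo = b"...raw sensor bytes..."                  # placeholder for real image data
sig = sign_capture(photo)
print(verify_capture(photo, sig))                  # True
print(verify_capture(photo + b"edited", sig))      # False
```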