What's new on Article Factory and the latest from the generative AI world

Sen. Elizabeth Warren Questions Google Gemini's Built-In Checkout Over User Privacy

Sen. Elizabeth Warren (D-MA) has written to Google CEO Sundar Pichai asking for details on the new checkout feature in the Gemini AI chatbot. She warns that the integration could let Google and retailers exploit sensitive user data or push consumers toward higher‑priced items. Warren seeks clarification on what data will be shared with retailers, how pricing might be affected, and whether users will be told when product suggestions are driven by upselling or advertising motives. Google has until mid‑February to respond. Read more →

CISA Acting Director Uploads Sensitive Government Docs to ChatGPT

The acting head of the Cybersecurity and Infrastructure Security Agency (CISA) uploaded internal government documents marked “for official use only” to the public ChatGPT platform, triggering automated security warnings. The acting director, Madhu Gottumukkala, had previously been granted an exception to a department-wide ban on using the tool. Homeland Security officials are assessing the potential security impact, while a CISA spokesperson described the usage as short‑term and limited. The incident raises concerns about how unclassified but sensitive data is handled on public AI services. Read more →

AI‑Powered Browsers Spark New Governance Challenges

AI‑first browsers embed generative tools such as summarization, rewriting and real‑time suggestions directly into the web‑page experience. While they boost productivity, they also blur the line between approved enterprise software and shadow AI, making it harder for organizations to see when employees invoke AI and what data is processed. This hidden usage creates version drift, skips formal review steps, and shifts interpretation away from source documents, leading to gaps in audit trails, retention, compliance and operational consistency. Experts recommend new controls to keep AI‑generated content traceable and governed within existing workflows. Read more →

Anthropic Launches Claude Cowork Feature for macOS Users

Anthropic introduced Cowork, a new capability for its Claude AI that lets subscribers grant the chatbot access to a macOS folder. Users can chat with Claude to organize files, rename items, and generate spreadsheets or documents from the folder's contents. The feature, currently limited to subscribers on the $100‑per‑month Claude Max plan, also links to connectors for app integration and works with the Claude Chrome extension. Anthropic cautions that Cowork is a research preview, recommends using it only on non‑sensitive data, and notes built‑in defenses against prompt‑injection attacks. Read more →

OpenAI Introduces ChatGPT Health Tab for Medical Queries

OpenAI announced a new ChatGPT Health tab designed to handle medical questions in a dedicated, private space. The feature separates health chat history, offers encryption and multifactor authentication, and promises that health conversations will not be used to train the model. Users can link wellness apps such as Apple Health and MyFitnessPal. While the service is not intended for diagnosis or treatment, experts warn that the lack of HIPAA coverage could leave health data with inadequate protections. OpenAI says the tab is currently in beta and invites users to join a waitlist. Read more →

OpenAI’s ChatGPT Health Raises Trust and Privacy Concerns

OpenAI introduced ChatGPT Health, an AI‑driven virtual clinic that can read electronic medical records and fitness data to offer personalized health advice. While the service promises clearer explanations of medical jargon and quicker insight into test results, experts and users voice strong concerns about data privacy, the lack of HIPAA coverage, and the risk of AI hallucinations. Trust, transparency, and regulatory safeguards are cited as essential before widespread adoption can be considered safe. Read more →

OpenAI Introduces ChatGPT Health, a Dedicated AI Tool for Medical Conversations

OpenAI has launched ChatGPT Health, a new section within the ChatGPT app designed specifically for health‑related queries. Users can securely link medical records and wellness apps such as Apple Health, MyFitnessPal, and Peloton, allowing the AI to tailor responses to personal data. The tool is positioned as a support system rather than a diagnostic service, emphasizing that it should not replace professional medical care. OpenAI highlights extensive physician collaboration, layered security measures, and the ability for users to control data access and deletion. Access is currently limited to a waitlist, with broader rollout planned in the coming weeks. Read more →

OpenAI Launches ChatGPT Health, a Dedicated AI Health Portal

OpenAI has introduced ChatGPT Health, a separate space within its AI chatbot that lets users link medical records and wellness apps for more personalized health‑related answers. The company says the feature includes extra privacy safeguards and that conversations in this area will not be used to train its foundation models. Still in testing, the service has regional limits on which health apps can connect. OpenAI stresses that ChatGPT Health is not meant for diagnosis or treatment and warns that AI chatbots are not qualified to give medical advice, citing risks of inaccurate information and privacy concerns. Read more →

Google Launches Private AI Compute to Blend Cloud Power with On‑Device Privacy

Google is unveiling a new cloud‑based platform called Private AI Compute that lets users access more advanced artificial‑intelligence features while keeping their data private. The service mirrors Apple’s Private Cloud Compute: sensitive information remains visible only to the user, even from Google, while heavy computational tasks move to a hardened, sealed cloud environment. Early implementations will appear on Pixel 10 phones, enhancing tools such as Magic Cue and expanding language support for Recorder transcriptions. Google says the approach will enable richer, more personalized AI experiences without compromising privacy. Read more →
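The privacy model behind confidential-computing services of this kind rests on the client verifying the remote environment before releasing data. The Python sketch below is a loose conceptual illustration, not Google's actual protocol: it compares a reported code measurement against a trusted value and refuses to send data on mismatch. The measurement scheme and all names are invented for this example.

```python
import hashlib

# Hypothetical "trusted measurement" of an approved server build; real
# systems verify a hardware-signed attestation instead of a bare hash.
TRUSTED_MEASUREMENT = hashlib.sha256(b"approved-enclave-build").hexdigest()

def verify_attestation(reported_measurement: str) -> bool:
    # Compare the reported code measurement against the allow-listed value.
    return reported_measurement == TRUSTED_MEASUREMENT

def send_if_trusted(data: bytes, reported_measurement: str) -> str:
    # Release user data only after the remote environment checks out.
    if not verify_attestation(reported_measurement):
        raise RuntimeError("attestation failed; data not sent")
    return f"sent {len(data)} bytes to attested environment"

print(send_if_trusted(b"user audio", TRUSTED_MEASUREMENT))
```

The key design point is that the refusal happens on the client side, so even the service operator cannot obtain the data from an unverified environment.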

Consumers Embrace Generative AI Yet Remain Wary of Privacy and Trust Issues

A recent Deloitte survey shows that while a majority of U.S. consumers are actively using or experimenting with generative AI, they also express strong concerns about privacy, data security, and the trustworthiness of tech companies. More than half of respondents pay for AI services, yet many still verify AI‑generated information and are reluctant to share personal data. The findings highlight a paradox: rapid adoption of AI alongside growing skepticism about its impact and the motives of the firms behind it. Read more →

Underground Bunkers Repurposed as Ultra‑Secure Data Centers

Former Cold War shelters and abandoned mines are being transformed into high‑security data centers. Companies such as Cyberfort operate these subterranean facilities, offering protection against both cyber and physical threats. The hardened concrete walls, blast‑proof doors and strict access controls promise data survivability even in extreme scenarios. While the physical security is emphasized, the facilities also address regulatory concerns like data sovereignty and environmental impact by sourcing renewable energy and using closed‑loop cooling. The trend reflects growing anxieties over data loss and the need for resilient infrastructure. Read more →

Consumers Embrace Generative AI Yet Remain Wary of Trust and Privacy Risks

A recent Deloitte survey of U.S. consumers shows that while more than half are experimenting with or regularly using generative AI, a majority express concerns about rapid innovation, data privacy, and the accuracy of AI outputs. Around 40% of respondents pay for AI services, and many access the technology through mobile apps and websites. Trust remains fragile—privacy worries have risen, and users are reluctant to share sensitive personal data. Consumers indicate they are more likely to spend money with companies they trust, highlighting a tension between growing adoption and lingering skepticism. Read more →

When ChatGPT Isn’t the Right Tool: Key Limitations and Risks

ChatGPT excels at answering questions and drafting text, but it falls short in critical areas such as diagnosing health issues, providing mental‑health support, handling emergency safety decisions, offering personalized financial advice, and processing confidential or regulated data. It also cannot replace legal professionals, nor should it be used for cheating in education, real‑time monitoring, gambling, or creating art that is passed off as original. Understanding these constraints helps users avoid costly mistakes and rely on qualified experts when needed. Read more →

YouTube Faces Backlash Over AI-Driven Age Verification

YouTube's new AI-powered age verification system has sparked a wave of criticism from creators and users who worry about privacy, data security, and the role of corporations in policing children's viewing habits. Prominent YouTuber Gerfdas Gaming launched a petition demanding transparency and a reevaluation of the policy, arguing that the AI scans every video a user watches and stores sensitive information that could be vulnerable in a breach. While YouTube has not responded, the petition has drawn hundreds of supporters, highlighting broader concerns about digital freedom and regulatory pressure on online platforms. Read more →

Cohere Launches North AI Agent Platform for Secure Enterprise Deployment

Cohere unveiled North, an AI agent platform designed to run on private infrastructure and keep enterprise data behind firewalls. The platform can operate on on‑premises hardware, hybrid clouds, VPCs, or air‑gapped environments using as few as two GPUs. North incorporates granular access controls, autonomy policies, continuous red‑team testing, and third‑party security audits, and it complies with GDPR, SOC 2, and ISO 27001. Built on Cohere’s Command and Compass technologies, the platform offers chat, search, and document‑creation capabilities while providing citation trails for auditability. Early pilots include major firms such as RBC, Dell and LG. Read more →
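Granular access controls of the sort described here generally reduce to a policy lookup before an agent touches a resource. The sketch below is a simplified, hypothetical illustration: the roles, resources, and policy shape are invented for this example and are not Cohere's API.

```python
# Invented role -> resource -> allowed-actions policy table.
POLICY = {
    "analyst": {"documents": {"read"}, "search": {"query"}},
    "admin": {"documents": {"read", "write"}, "search": {"query"}},
}

def is_allowed(role: str, resource: str, action: str) -> bool:
    """Consult the policy table before an agent acts on a resource."""
    return action in POLICY.get(role, {}).get(resource, set())

print(is_allowed("analyst", "documents", "write"))  # denied for analysts
```

Denying by default when a role or resource is missing from the table is the usual choice for enterprise deployments, since unknown combinations then fail closed.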