What's new on Article Factory and the latest from the generative AI world

OpenClaw’s Skill Marketplace Becomes Malware Delivery Platform

OpenClaw, the AI assistant that lets users manage tasks through messaging apps, is facing serious security concerns after researchers uncovered malware hidden in user‑submitted skill add‑ons on its ClawHub marketplace. Within a short period, dozens of malicious skills and hundreds of malicious add‑ons were identified, many posing as cryptocurrency tools while stealing sensitive credentials. The creator, Peter Steinberger, has introduced new publishing safeguards, but user‑submitted code remains a notable attack surface for anyone granting the assistant deep device access. Read more →
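The underlying pattern is worth spelling out. Below is a minimal sketch of the kind of static pre‑publication check a marketplace could run over submitted skills; ClawHub's actual review pipeline is not public, so the file layout, the patterns, and the `scan_skill` helper are all illustrative assumptions.

```python
import re
import sys
from pathlib import Path

# Hypothetical indicators of credential theft in a submitted skill.
# A real review pipeline would go much further (AST analysis,
# sandboxed execution, publisher reputation), but the idea is the same.
SUSPICIOUS_PATTERNS = [
    re.compile(r"os\.environ"),                   # bulk reads of env secrets
    re.compile(r"\.aws/credentials"),             # cloud credential files
    re.compile(r"wallet\.dat|keystore", re.I),    # cryptocurrency wallets
    re.compile(r"requests\.post\(\s*['\"]http"),  # exfiltration to a raw URL
]

def scan_skill(skill_dir: str) -> list[str]:
    """Return human-readable findings for every Python file in a skill."""
    findings = []
    for path in Path(skill_dir).rglob("*.py"):
        text = path.read_text(errors="ignore")
        for pattern in SUSPICIOUS_PATTERNS:
            if pattern.search(text):
                findings.append(f"{path}: matches {pattern.pattern!r}")
    return findings

if __name__ == "__main__":
    for finding in scan_skill(sys.argv[1]):
        print("FLAG:", finding)
```

Pattern matching like this is trivial to evade, which is why the article's caution stands: publishing safeguards shrink the attack surface but do not eliminate it.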

Moltbook AI Social Network Exposes Human Credentials via Vibe‑Coded Flaw

Moltbook, a social platform designed for AI agents, suffered a major security breach that exposed millions of authentication tokens, tens of thousands of email addresses, and private messages. The vulnerability stemmed from the site’s “vibe‑coded” forum architecture, which allowed unauthenticated users to read and edit content. Cybersecurity firm Wiz identified the issue and worked with Moltbook to remediate it, highlighting the risks of relying on AI‑generated code without proper oversight. Read more →
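Moltbook's codebase is not public, so the following is only a sketch of the bug class the article describes: a write endpoint that never checks who is calling. Flask is used purely for illustration, and the routes, token table, and handler names are hypothetical.

```python
from flask import Flask, abort, jsonify, request

app = Flask(__name__)

# Toy in-memory state; both the data and the token table are illustrative.
POSTS = {1: {"author": "agent-42", "body": "hello"}}
TOKENS = {"secret-token-abc": "agent-42"}  # bearer token -> account

# Vulnerable pattern: the handler never asks who is calling, so any
# unauthenticated client can rewrite any post.
@app.route("/v1/posts/<int:post_id>", methods=["PUT"])
def edit_post_vulnerable(post_id):
    POSTS[post_id]["body"] = request.json["body"]
    return jsonify(POSTS[post_id])

# Fixed pattern: resolve the bearer token to an account, then verify
# ownership before allowing the write.
@app.route("/v2/posts/<int:post_id>", methods=["PUT"])
def edit_post_fixed(post_id):
    token = request.headers.get("Authorization", "").removeprefix("Bearer ").strip()
    account = TOKENS.get(token)
    if account is None:
        abort(401)  # no valid session at all
    if POSTS[post_id]["author"] != account:
        abort(403)  # authenticated, but not the owner
    POSTS[post_id]["body"] = request.json["body"]
    return jsonify(POSTS[post_id])
```

The fix is two lines of checking, which is exactly why reviewers flag "vibe‑coded" backends: an LLM will happily generate the first handler, and nothing breaks in a demo until someone hostile shows up.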

OpenClaw AI Assistant Survives Trademark Dispute, Scams and Security Scrutiny

OpenClaw, formerly known as Clawdbot and Moltbot, is an open‑source AI assistant that integrates directly into messaging apps to automate tasks, remember conversations, and send proactive reminders. After a rapid rise in popularity, the project faced a trademark challenge from Anthropic, a wave of crypto‑related scams, and several security concerns tied to exposed deployments. Despite these setbacks, the developer has rebranded the tool as OpenClaw, addressed many of the vulnerabilities, and continues to attract interest from developers and early adopters who see it as a glimpse of what a truly personal AI assistant could become. Read more →

OpenClaw Rebrands and Expands Its AI Assistant Ecosystem

OpenClaw, formerly known as Clawdbot and briefly as Moltbot, has settled on a new name after a trademark dispute. The open‑source AI assistant project has attracted a large GitHub following and spawned a community‑run social network where AI agents interact. While the platform’s growth has drawn attention from prominent AI researchers, its maintainers stress that security remains a top priority and that the tool is currently suited for technically experienced users. Sponsorship tiers have been introduced to support ongoing development. Read more →

Moltbot’s Rise: Open‑Source AI Assistant Survives Trademark Scramble, Crypto Scams, and Bot Hijacks

An open‑source AI assistant originally called Clawdbot went viral, faced a trademark warning from Anthropic, endured social‑media handle squatting, a crypto‑scam impersonation, and a quirky mascot redesign, then rebranded as Moltbot. Created by Austrian developer Peter Steinberger, the tool integrates into everyday messaging apps, remembers past conversations, sends proactive reminders, and automates tasks across platforms. Despite the chaos, the project kept growing, attracting thousands of GitHub stars and praise from AI researchers and investors, while remaining a community‑driven, experimental alternative to commercial assistants. Read more →

cURL Ends Bug Bounty Program Amid Flood of Low‑Quality AI Reports

The maintainer of cURL, one of the most widely used networking tools, has announced the end of its bug bounty program. The decision follows an overwhelming influx of low‑quality, often AI‑generated vulnerability reports that strained the small team of volunteers. Daniel Stenberg, the project's founder, said the open‑source project's limited resources could not sustain the volume of submissions; the program will conclude at the end of the month. Read more →

AI Security Startup Depthfirst Secures $40 Million Series A Funding

Depthfirst, an AI‑focused cybersecurity startup, announced a $40 million Series A round led by Accel Partners with participation from SV Angel, Mantis VC, and Alt Capital. Founded in October 2024, the company offers its General Security Intelligence platform, an AI‑native suite that scans codebases, protects against credential exposures, and monitors threats to open‑source and third‑party components. The new capital will fund expanded research, engineering, product development, and sales teams. Co‑founder and CEO Qasim Mithani emphasized the need for defenses that keep pace with AI‑driven attacks, while the leadership team brings experience from Databricks, Amazon, Square, and Google DeepMind. Read more →

AI‑Generated ‘Vibe Coding’ Raises Security Concerns Amid Efficiency Gains

Vibe coding, the practice of using large language models to write software from prompts, offers faster development and broader accessibility, but it also introduces serious security risks. Studies show a significant portion of AI‑generated code contains exploitable flaws, and attackers can poison code libraries to spread vulnerabilities further. Experts stress that human oversight, strict code reviews, private sandboxed models, and Zero‑Trust access controls are essential to mitigate these threats while still benefiting from the efficiency of AI‑assisted development. Read more →
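As one concrete way to operationalize the "strict code reviews" recommendation, a CI job can refuse to merge changes that fail a static security scan. The sketch below wraps Bandit, a real open‑source Python security linter; the base branch name and severity policy are assumptions, not a setup the article prescribes.

```python
import subprocess
import sys

def changed_python_files(base: str = "origin/main") -> list[str]:
    """List Python files changed relative to the base branch (name assumed)."""
    out = subprocess.run(
        ["git", "diff", "--name-only", base, "--", "*.py"],
        capture_output=True, text=True, check=True,
    )
    return [line for line in out.stdout.splitlines() if line]

def main() -> int:
    files = changed_python_files()
    if not files:
        print("No Python changes to scan.")
        return 0
    # Bandit exits non-zero when it finds issues at or above the requested
    # severity, which is exactly the behavior a merge gate wants.
    result = subprocess.run(["bandit", "--severity-level", "medium", *files])
    if result.returncode != 0:
        print("Security findings detected: route to a human reviewer.")
    return result.returncode

if __name__ == "__main__":
    sys.exit(main())
```

A gate like this does not replace human review; it just guarantees the obvious machine‑detectable flaws never reach a reviewer's desk unflagged.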

AI-Generated ‘Vibe Coding’ Raises New Software Supply‑Chain Security Risks

Developers are increasingly turning to AI‑generated code, dubbed “vibe coding,” to accelerate software creation. While the approach mirrors the efficiency of open‑source reuse, experts warn it introduces opaque code, potential vulnerabilities, and weakened accountability. Security firms note that AI models often draw on outdated or insecure codebases, making it hard to trace origins or audit outputs. A recent survey found that a third of security leaders report more than 60% of their code now originates from AI, yet fewer than one in five have an approved set of tools for such development. The emerging risk landscape calls for new safeguards and clearer governance. Read more →

Senior Developers Turn into AI Code Babysitters Amid Vibe Coding Surge

Developers are increasingly using AI‑generated code, known as vibe coding, to speed up projects. Senior engineers, however, find themselves spending significant time correcting the AI's output, which can include hallucinated packages, deleted information, and security risks. Interviews with developers like Carla Rover and Feridoon Malekzadeh reveal frustrations, costly rewrites, and a new "innovation tax" of extra review work. Companies such as Fastly and NinjaOne acknowledge the productivity boost but stress mandatory human oversight and security scanning to keep AI‑generated code safe for production. Read more →
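The "hallucinated packages" complaint is concrete enough to guard against mechanically: before installing anything, verify that every declared dependency actually exists on the registry. The sketch below checks a plain requirements.txt against PyPI's public JSON API; the parsing is simplified, and the script is an illustration, not a tool mentioned in the article.

```python
import json
import re
import sys
import urllib.error
import urllib.request

def exists_on_pypi(name: str) -> bool:
    """Ask PyPI's public JSON API about the package; a 404 means unknown."""
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            json.load(resp)  # parse to confirm a real project page came back
        return True
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return False
        raise

def main(requirements_path: str = "requirements.txt") -> int:
    missing = []
    for raw in open(requirements_path):
        line = raw.split("#")[0].strip()  # drop comments and whitespace
        if not line:
            continue
        # Keep only the distribution name: "requests>=2.31" -> "requests".
        name = re.split(r"[\s;\[<>=~!]", line, maxsplit=1)[0]
        if name and not exists_on_pypi(name):
            missing.append(name)
    for name in missing:
        print(f"Possibly hallucinated dependency: {name!r} is not on PyPI")
    return 1 if missing else 0

if __name__ == "__main__":
    sys.exit(main(*sys.argv[1:]))
```

Existence on PyPI is necessary but not sufficient: attackers do register plausible hallucinated names ("slopsquatting"), so a check like this catches only outright fabrications; pinned hashes and human vetting of any new dependency remain part of the review work the article describes.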