What is new on Article Factory and latest in generative AI world

Study Shows Large Language Models Can Be Backdoored with Few Malicious Samples

Study Shows Large Language Models Can Be Backdoored with Few Malicious Samples
Researchers found that large language models can acquire backdoor behaviors after exposure to only a handful of malicious documents. Experiments with GPT-3.5-turbo and other models demonstrated high attack success rates when as few as 50 to 90 malicious examples were present, regardless of overall dataset size. The study also highlighted that simple safety‑training with a few hundred clean examples can significantly weaken or eliminate the backdoor. Limitations include testing only models up to 13 billion parameters and focusing on simple triggers, while real‑world models are larger and training pipelines more guarded. The findings call for stronger data‑poisoning defenses. Leia mais →

Nvidia Denies Backdoor and Kill‑Switch Claims, Warns of Disaster

Nvidia Denies Backdoor and Kill‑Switch Claims, Warns of Disaster
Nvidia has refuted accusations from the Chinese government that its chips contain location‑tracking features and remote shutdown capabilities. In a blog post, Chief Security Officer David Reber Jr. emphasized that Nvidia’s products have no backdoors or kill switches and warned that mandating such mechanisms would be an “open invitation for disaster.” The company also urged policymakers to reject proposals requiring built‑in controls, while U.S. lawmakers consider legislation aimed at location verification for advanced chips. Nvidia’s stance highlights a clash between security concerns and government demands for hardware oversight. Leia mais →