A joint study by MIT, Northeastern University, and Meta reveals that large language models can rely heavily on sentence structure, sometimes giving the answer associated with a familiar question pattern even when the prompt's words are nonsensical. By testing prompts that preserve grammatical patterns but replace key terms with meaningless ones, the researchers showed that models often match syntax to learned responses, exposing a weakness in semantic understanding. The findings help explain why certain prompt-injection techniques succeed and suggest avenues for improving model robustness. The team plans to present the work at an upcoming AI conference.
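To illustrate the kind of probe described above (a sketch, not the paper's actual protocol), the snippet below builds a syntax-preserving nonsense variant of a prompt by swapping its content words for made-up tokens of the same part of speech; `query_model` is a hypothetical stand-in for whatever inference client is used.

```python
import random

# Hypothetical stand-in for a real inference call (e.g. a request to a
# hosted model); plug in your own client here.
def query_model(prompt: str) -> str:
    raise NotImplementedError("replace with a real model client")

# Nonsense substitutes grouped by part of speech, so the grammatical
# frame of the prompt survives while its meaning is destroyed.
NONSENSE = {
    "NOUN": ["florp", "quibbet", "drandle"],
    "VERB": ["glorbs", "twizzles", "framps"],
    "ADJ":  ["vropish", "clindy", "morvous"],
}

def scramble(template: str) -> str:
    """Fill a POS-tagged template with random nonsense words.

    The template marks content-word slots like {NOUN} or {VERB};
    function words and word order stay fixed, so syntax is intact.
    """
    out = template
    for pos, words in NONSENSE.items():
        while "{" + pos + "}" in out:
            out = out.replace("{" + pos + "}", random.choice(words), 1)
    return out

# An original prompt and a structure-matched nonsense version of it.
original = "The capital of France is"
template = "The {NOUN} of {NOUN} is"  # same frame, content words swapped

nonsense_prompt = scramble(template)
print(nonsense_prompt)  # e.g. "The florp of quibbet is"

# If the model completes both prompts the same way (e.g. "Paris"), it
# is matching the syntactic pattern rather than the meaning:
# answer_orig = query_model(original)
# answer_nonsense = query_model(nonsense_prompt)
```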