ChatGPT Stumped by Modified Optical Illusion Image
Background
In a Reddit thread, a user posted a screenshot of the classic Ebbinghaus illusion, an image that normally tricks the eye into seeing two identical circles as different sizes. The user had deliberately altered the image so that one of the orange circles was obviously smaller than the other, turning the optical trick into a genuine size difference.
The Test
The altered image was presented to ChatGPT with a simple question about which circle was larger. Rather than analyzing the pixel data directly, the model reportedly matched the posted picture against versions of the illusion it could locate on the web. Because the majority of indexed images show the circles as equal, the AI concluded that these circles were the same size as well.
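For comparison, the pixel-level check the model skipped is straightforward to sketch. The Python snippet below (using OpenCV) isolates orange regions by color and measures the radius of each detected circle. The filename, HSV thresholds, and the assumption that the two largest orange blobs are the target circles are all illustrative, not a reconstruction of anything ChatGPT runs internally.

import cv2
import numpy as np

img = cv2.imread("illusion.png")                     # hypothetical input file
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# Rough HSV range for orange; a real screenshot would need tuning.
mask = cv2.inRange(hsv, np.array([5, 100, 100]), np.array([25, 255, 255]))

contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

# Assume the two largest orange blobs are the circles being compared.
for c in sorted(contours, key=cv2.contourArea, reverse=True)[:2]:
    (x, y), r = cv2.minEnclosingCircle(c)            # fit a circle to the blob
    print(f"circle at ({x:.0f}, {y:.0f}) has radius {r:.1f} px")

Comparing the two printed radii would immediately reveal the deliberate size difference, which is exactly the direct measurement that a web-matching shortcut bypasses.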
ChatGPT’s Response
ChatGPT answered with confidence, stating that neither orange circle was bigger and that they were exactly the same size. The user then engaged the model in an extended dialogue, pointing out the discrepancy and urging it to reconsider. Over roughly fifteen minutes of back-and-forth, the chatbot never changed its stance, insisting that the circles matched.
Implications
This interaction underscores several limitations of the current model. First, reliance on external image matches can produce inaccurate assessments when the input deviates from common examples. Second, the model showed strong resistance to corrective feedback, persisting in an erroneous conclusion even after the user highlighted the visual evidence. Finally, the episode raises broader concerns about the suitability of such tools for tasks that require genuine visual reasoning, reminding users that AI outputs often need independent verification.
Broader Context
Observers have noted that while ChatGPT excels at many language‑based tasks, its performance on visual queries remains constrained by its architecture. The incident fuels ongoing debate about the readiness of AI chatbots for real‑world applications that blend language and image understanding. Until models can reliably interpret visual data without over‑relying on pre‑existing internet matches, users are advised to treat AI‑generated conclusions as provisional and subject to human validation.