What's new on Article Factory and the latest from the generative AI world

ChatGPT’s Inability to Run Background Tasks Limits Large‑Scale Data Transcription

A user attempted to have ChatGPT convert a series of photographed tables containing historic Brazilian Jiu‑Jitsu records into a Google Sheets spreadsheet. Although the model initially assured the user that the task was possible, it could not continue the work after the conversation turn ended, revealing a fundamental limitation: ChatGPT cannot execute long‑running background processes. The model eventually admitted the constraint, forcing the user to break the job into single‑page chunks. The episode highlights the current gap between AI hype and practical capability, especially for tasks requiring sustained visual analysis. Read more →

Common Misconceptions About Artificial Intelligence Debunked

A recent overview clarifies several widespread myths about artificial intelligence. It explains that AI models process statistical patterns rather than think like humans, lack true understanding, and cannot read users' unspoken intentions. The piece also highlights that AI inherits biases from its training data and is not inherently objective. Ongoing human involvement remains essential for training, oversight, and improvement. Finally, it stresses that current AI, including large language models, is far from achieving general intelligence and should be viewed as sophisticated autocomplete rather than a superintelligent system. Read more →

Google Brings Gemini AI to Chrome on iPhone and iPad

Google has extended its built‑in Gemini AI experience to Chrome on iPhone and iPad after earlier rollouts on desktop and Android. The integration adds a spark icon beside the address bar that opens a "Pages tool" offering Lens and an "Ask Gemini" chat window. Users can ask Gemini to summarize pages, generate FAQs, simplify complex topics, test knowledge, modify recipes, and compare information. The feature currently works only in the United States, requires English‑language Chrome and a signed‑in account, and is unavailable in incognito mode or for users under 18. Read more →

Generative AI Tested on a Handwritten Apple Pie Recipe Shows Mixed Results

A writer fed a handwritten family apple‑pie recipe into three leading generative AI models, ChatGPT, Gemini, and Claude, to see if they could turn the scribbled notes into a clear, illustrated infographic. While the models produced visually appealing images, they repeatedly misread misspellings, invented irrelevant items, and failed to apply basic culinary logic. The experiment highlights both the promise of AI‑driven content creation and its current limitations when handling imperfect, real‑world inputs. Read more →

OpenAI's Sora Video Generator Misses the Mark in IVF Explainer Test

A reporter undergoing IVF tested OpenAI's Sora AI video generator to create footage for an explainer on the fertility industry. While the tool produced a handful of usable clips, most outputs contained glaring scientific inaccuracies, nonsensical text, and visual errors such as misplaced anatomy and extra limbs. The experiment highlights the current limitations of AI‑generated video for specialized medical storytelling and suggests that creators should approach Sora with caution until its capabilities improve. Read more →

ChatGPT Stumped by Modified Optical Illusion Image

A Reddit user posted an altered version of the Ebbinghaus optical illusion to test ChatGPT's image analysis. The AI incorrectly asserted that the two orange circles were the same size, despite the modification that made one circle visibly larger. Even after a prolonged dialogue of about fifteen minutes, ChatGPT remained convinced of its answer and did not adjust its reasoning. The episode highlights concerns about the chatbot's reliance on internet image matching, its resistance to corrective feedback, and broader questions about the reliability of AI tools for visual tasks. Read more →

OpenAI Acknowledges ChatGPT Safety Gaps in Long Conversations

OpenAI has publicly acknowledged that ChatGPT's safety mechanisms can weaken during extended interactions. The company's blog post explains that as a conversation lengthens, the model's ability to consistently enforce safeguards diminishes, potentially allowing the AI to provide harmful or prohibited content. This limitation stems from the underlying transformer architecture and context‑window constraints, which cause the system to lose track of earlier parts of a dialogue. OpenAI's admission highlights a technical challenge that may affect user safety and has sparked discussion about the need for more robust, long‑term guardrails in AI chat systems. Read more →