This simple riddle broke DeepSeek. Are you smart enough to solve it?
DeepSeek may be the newest AI tool making waves online, but is it any better than its predecessors at logical reasoning? Some would say no. A recent example shows how AI tools like DeepSeek struggle with logical reasoning, failing to solve a rephrased riddle.

An X user recently tried to trick DeepSeek by rephrasing a popular riddle as a very simple problem, one that most humans could solve within seconds. Surprisingly, DeepSeek not only failed to answer the question, which required only a modicum of logical reasoning, but also took more than two minutes to come up with a response that made little sense.

The riddle

Here is the riddle that some say 'broke' DeepSeek: "A woman and her cousin are in a car accident, and sadly the woman is killed. The cousin is rushed to surgery. After taking one look at the cousin, the doctor exclaims: 'I can't operate on this boy! He's my cousin!' How is this possible?"

When this exact riddle was posed to DeepSeek, the AI tool took 141 seconds to respond. "The answer lies in the doctor's relationship to the boy. The doctor is the boy's mother, but here's the twist: the doctor is the woman who died. Wait, no, the woman died in the accident. Let's clarify…" the chatbot responded, clearly confused.

It continued in this vein, revealing its 'thought process' but never arriving at the correct answer. "Wait, another angle: the surgeon is the man's father. But again, father is not cousin. Wait, perhaps the man is a woman. No, the riddle says 'a man'," DeepSeek replied.

The answer

After reading the question, you may have surmised that it is not actually a riddle, although it is presented like one. After all, a person is allowed to have more than one cousin. Many X users reached the same conclusion and were surprised by the AI's failure to solve such a simple problem.

In fact, the 'riddle' here derives from an actual riddle that seeks to highlight gender roles, which is perhaps why it confused the AI chatbot. Here is the real riddle that inspired the question posed to DeepSeek: "A father and his son are in a car accident. The father dies at the scene and the son is rushed to the hospital. At the hospital the surgeon looks at the boy and says, 'I can't operate on this boy, he is my son.' How can this be?"

In this scenario, one may automatically assume that the surgeon is male, revealing inherent sexism. The answer is that the surgeon is the boy's mother. In the question posed to DeepSeek, however, there are no such stereotypes at play.

DeepSeek was not the only chatbot that stumbled. As other screenshots shared on X show, several AI tools failed to answer the question correctly. "This thread is a good example of how AI is still at the point of pure regurgitation, it moves stuff around but it is not even synthesizing existing thoughts let alone generating new ones, the pattern of this 'riddle' is so overwhelming it is incapable of actually reading it," wrote one X user. "This is just highlighting how LLMs work and that they're parrots, not magic learning boxes," another noted.