‘Go to Ladakh’: ChatGPT shares hilarious suggestion for man’s summer plans in Delhi
An Instagram user asked ChatGPT about fun things he could do in Delhi during the summer, hoping the AI bot would suggest something worth visiting. However, the chatbot suggested that the man go on a trip to Ladakh! Yes, you read that right.

ChatGPT suggested the man go to Ladakh after he asked about fun things to do in Delhi during the summer. (Instagram/@thatguywithbeard)

The Instagram user Akshansh asked ChatGPT, “What are some fun things to do in Delhi in summer?”

In response, the chatbot first shared a few lines about Delhi: “Delhi, the capital city of India, offers a rich blend of history, culture, and modern attractions. If you’re interested in history, shopping, food, or nightlife, Delhi has something for everyone.”

It then listed the things Akshansh could do in Delhi during summer: “Some fun things you can do in Delhi: 1. Go to Ladakh.”

Watch the video below:

In May 2023, ChatGPT made a glaring mistake, and the Internet couldn’t ignore it. A screenshot shared online featured a prompt on ‘pet shop recording concerns’, which appeared fine at first glance. However, there was a typo, and the user noticed it and expressed curiosity about its meaning.

The ChatGPT user then typed another prompt: “I’ve not seen the word infrishing before, what does that mean?” The AI chatbot clarified that the correct term was ‘infringing’ and even provided its definition.

The error made by the AI bot elicited hilarious responses from people. “Busted! ChatGPT is just humans,” said an individual. Another wrote, “Its training data must be riddled with typos. I wonder if that is where it is from.” To this, the original poster replied, “In its own words: As an AI developed by OpenAI, I don’t make typographical errors in the same way a human typing on a keyboard might.
However, I sometimes generate incorrect or unexpected outputs due to the way I was trained. For instance, if a word or phrase is used infrequently or inconsistently in the data I was trained on, I might use it incorrectly. Alternatively, sometimes words can get mixed up or combined during the generation process. In this case, ‘infrishing’ seems to have been a mix-up of ‘infringing’ and potentially some other word or phrase. I apologise for any confusion caused.”