
‘Can’t replace your doctor with it’: Symbiosis Artificial Intelligence Institute head Dr Shruti Patil on safe use of AI in 2026

Since the launch of ChatGPT in late November 2022, Artificial Intelligence (AI) tools in the form of generative AI have changed how people surf the web. AI tools have made it much easier to perform many tasks, while at the same time raising concerns about data privacy and accuracy.

In an interview with The Indian Express, Dr Shruti Patil, Director of Symbiosis Artificial Intelligence Institute, spoke about how people can make safe use of AI in 2026.
Q: In what ways can people safely incorporate AI into their lives?
Dr Shruti Patil: The important thing is to understand what people can use AI for. For example, if you want to know what is going on globally, or a particular event is happening and you want to read news about it, you can use AI.
If there are manual tasks that you are doing repeatedly, they can slowly be automated with the help of AI. For example, say you want to travel somewhere and create an itinerary within a particular budget. People spend two or three days researching the location, directions, sightseeing, and temperatures. All of this can be done in just two minutes using ChatGPT. So these kinds of small tasks, where some decision-making is required and we do it based on some research, can be automated.
Content generation or application generation is also an area where AI can be used very well. For example, if you want to design an invitation for an event, you do not need to go to a designer. Simply using AI tools like Gemini or NotebookLM, you can design those invitations and quickly share them.
Q: How should people protect their privacy when using AI tools?
Patil: It is important for people to understand what kind of information can be given to AI and what should be held back. Sensitive information about yourself or anyone else should definitely not be disclosed. This includes anything that can reveal a person’s identity, financial details, or passwords. We are all using generalised large language model products, for example ChatGPT, which are built on millions of parameters learned from data all over the world. So we should definitely avoid giving this kind of information.
If you are making a financial decision about investments, you can ask AI about current stock trends, but you also have to do your own homework and not blindly trust it. When you visit a doctor and want to better understand what they said, you can use AI tools for an explanation. But you cannot replace your doctor with AI.
Q: AI is also prone to hallucination (when AI tools produce plausible-sounding but false or inaccurate information). How can people be safe from this?
Patil: Generally, for single-page documents, AI tools work well. If a PDF has hundreds of pages, then AI hallucinates. So AI can be used for some personal work, but for office work, free versions of AI models should not be used. Paid versions of AI tools can be used there.
Even on single-page PDFs, AI sometimes hallucinates, so it depends on the criticality of the data. These tools are still learning, and the tasks are becoming more complex as we use them more and more.
Q: So would you say it is important to cross-check AI results even if you are doing a small but important task?
Patil: Yes, of course. Currently, we don’t have tools that give exact and perfect results. Sometimes they give correct answers, sometimes they do not. That consistency of outcome is missing.
Q: Women are targeted online using generative AI tools, with men editing themselves onto women’s photos or videos. What is the responsibility of users as well as AI companies in this regard?
Patil: More than users, it is important for a country to come up with an AI policy, which should be enforced on every AI service-providing company. Certain rules have to be devised at the product level so that these things are not allowed.
Even now, for example, if you ask ChatGPT ‘when will I die?’, it will not answer. ChatGPT also recognises when a user is getting emotionally attached to it, because a lot of teenagers and even elders are getting attached and treating it like a digital person.
In India, we have to come up with a very strong AI policy, especially about user data privacy, because these AI tools are trained on user data. The government has to put up guardrails, specifying which kind of data is allowed to be shared and which kind of data is simply banned.
(Aler Augustine is an intern with The Indian Express)
