Bangalore, Feb 10 — As artificial intelligence becomes an integral part of everyday life, children and young people are often among the earliest and most enthusiastic adopters. From learning tools and chatbots to voice assistants and creative platforms, AI is reshaping how people study, work, and interact online.
Safer Internet Day 2026, marked on 10 February, focuses on helping users make informed, safe, and responsible choices when using AI. This year’s theme, “Smart tech, safe choices – Exploring the safe and responsible use of AI,” reflects the growing need to balance innovation with awareness, trust, and digital responsibility.
India has emerged as a global leader in AI adoption, significantly outpacing global averages across both consumer and enterprise use. According to a Microsoft study, around 65% of Indians report having used AI, more than double the global average of 31%. Adoption is particularly strong among millennials aged 25–44, where usage stands at 84%. AI-powered tools are now widely used for answering queries, translation, student assistance, content creation, and improving workplace productivity.
While this rapid adoption highlights India’s digital ambition, it also underscores the importance of building awareness around how AI systems work, how data is used, and how individuals can protect themselves online. Safer Internet Day plays an important role in encouraging practical, everyday digital hygiene that helps users benefit from AI without compromising safety or privacy.
Barriers to cybercrime are dropping, while specialisation is rising
On the other side of the equation, the same AI technologies driving progress are also reshaping cybercrime. Attackers no longer need deep technical expertise to operate at scale. AI-supported toolchains now enable individuals or small groups to perform tasks that once required coordinated teams, lowering the barrier to entry for cybercriminal activity.
At the same time, cybercrime ecosystems are becoming increasingly specialised. Different actors now focus on specific roles such as discovery, access brokering, lateral movement, monetisation, and deception. Despite advances in tools and automation, human vulnerability remains the primary attack surface. Trust, urgency, and authority continue to be the levers attackers exploit most effectively, with AI amplifying these techniques through more convincing phishing, faster reconnaissance, and rapid iteration of attack methods.
According to FortiGuard Labs, cybercrime is entering a fourth industrial phase defined by automation, integration, and specialisation. Credential dumps are evolving into curated, “intelligent” lists enriched with contextual data and behavioural insights. Dark web marketplaces increasingly resemble legitimate e-commerce platforms, complete with customer support, reputation systems, and escrow services enhanced by AI. Fraud, money laundering, and other illicit activities are also becoming more interconnected, creating resilient criminal ecosystems that are harder to disrupt.
Best practices for the safe use of AI
As AI becomes embedded in everyday tools, particularly chatbots and generative AI platforms, it is important to remember that many systems learn from the data users provide. Adopting a few essential safety practices can help individuals protect their personal information and digital identities:
- Avoid sharing sensitive information: Never provide passwords, banking details, home addresses, or other personal identifiers to AI tools.
- Secure your digital identity: Use strong, unique passwords for different accounts and review privacy settings regularly.
- Be cautious with images: Think carefully before uploading personal or family photos to AI platforms, as you may lose control over how those images are stored or reused.
- Question what you see: Apply critical thinking when consuming online content and remain alert to AI-generated scams or misinformation.
- Verify important information: AI systems can generate confident but incorrect responses. Always cross-check health, financial, or legal information with trusted experts or official sources.
- Be transparent: Clearly label AI-generated content when sharing it publicly to maintain audience trust.
- Use trusted platforms: Stick to AI tools recommended by reputable organisations, educators, or parents.
- Review privacy controls: Understand what data an AI platform collects and how it is used.
- Enable two-factor authentication: Add an extra layer of protection to online accounts to reduce the risk of unauthorised access.
“Safer Internet Day 2026 reminds us that no single company or individual can make the internet safe on their own. By working together, with educators strengthening digital literacy, parents fostering open communication, technology players building secure technology, and government agencies working closely with law enforcement, we can ensure the internet remains a space for growth, creativity, and connectivity for everyone.”
- Vishak Raman, Vice President of Sales for India, SAARC, SEA & ANZ at Fortinet.