    Guides

    Follow These 8 Tips from Security Experts to Stay Safe When Using AI Chatbots

    By Awais · December 18, 2025 · 6 min read

    Chatting with AI is still a relatively new phenomenon. 

    Though turning to chatbots for recipe ideas, travel planning and quick answers is harmless (for the most part), there are many issues to be wary of when it comes to AI safety. 

    We often share highly personal information online, but the same confidentiality protections — those you enjoy with human lawyers, therapists and doctors — don’t apply to AI chatbots. Many users employ ChatGPT as a virtual life coach, sharing personal and professional details and problems through the app or program. There’s also a cognitive risk associated with using a large language model, as more studies begin to examine how reliance on chatbots affects memory retention, creativity and writing fluency.

    Here’s a guide to being cautious with chatbots. We’ll walk you through why it’s important to avoid handing over sensitive data, how to navigate mental health concerns and what you can do to prevent long-term cognitive atrophy due to not exercising certain parts of your brain.


    1. Treat AI chatbots as public environments 

    Remember that AI chatbots are “public environments,” not private conversations, says Matthew Stern, a cyber investigator and CEO at CNC Intelligence. 

    “If we keep that in mind, we will be less likely to share sensitive data that may become visible to others,” Stern says.

    Because shared chatbot conversations have in some cases been indexed by search engines, Stern says you should assume anything you type could end up publicly searchable.

    Avoid sharing any personally identifiable information, such as your full name, address, financial details, business data and medical results. The more you share, the more personalized your results will be. Sure, that might sound like a good thing on the surface. 

    But handing over sensitive data to a tech company should give you pause. Even if those details don’t become publicly searchable, you never know what information data brokers will be buying and selling about you.
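
    If you still want a chatbot’s help with text that contains identifiers, one low-effort safeguard is scrubbing obvious patterns before you paste. Here is a minimal Python sketch; the patterns are illustrative placeholders, and real PII detection needs far more than a few regexes:

```python
import re

# Illustrative patterns only -- extend for your own data (names, addresses, etc.).
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\b(?:\+?1[-. ]?)?\(?\d{3}\)?[-. ]?\d{3}[-. ]?\d{4}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def scrub(text: str) -> str:
    """Replace common identifiers with placeholders before pasting into a chatbot."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label.upper()}]", text)
    return text

print(scrub("Reach me at jane.doe@example.com or 555-123-4567."))
# Reach me at [EMAIL] or [PHONE].
```

    Anything the regexes miss (names, home addresses, account numbers) still has to be removed by hand before you hit send.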

    2. Don’t overshare your mental state

    Chatbots can be useful assistants, but they aren’t your friends, says Elie Berreby, the head of SEO and AI Search at Adorama. He suggests “guarding your secrets” and never discussing your mental state, fears or health concerns. Such data can be used to identify hidden patterns and subconscious intentions, creating a vulnerability profile. 

    “Do not overshare. They already know more about you than you could imagine,” says Berreby. 

    Also, keep in mind that the companies behind AI chatbots ultimately need to monetize them.

    “Soon, this personalization will be used to show you ultra-targeted ads,” he says. “This data is priceless for advertisers, but it creates a surveillance profile deeper than anything we’ve seen until now.”

    3. Don’t ‘bring your whole self’ to the chatbot 

    AI chatbots exist within attention economies, where your engagement is the product, says intercultural strategist Annalisa Nash Fernandez.

    “If chatbots ultimately monetize through data collection and user retention, memory features become engagement tools disguised as personalization, because attention is upstream of everything, including your privacy,” she says.

    Disable memory features to reduce what the systems retain about you. For ChatGPT, navigate to Settings > Personalization > turn off Memory and Record Mode.

    [Screenshot: Disable memory features on ChatGPT. ChatGPT/Screenshot by CNET]

    Use secondary email addresses, so that chatbots don’t have this type of identifier for you — emails are “the connective tissue linking disparate data points,” Fernandez says.

    Opt out of training, so the chatbot won’t train on your inputs. In ChatGPT, click your profile name, then go to Settings > Improve the model for everyone and turn it off.

    [Screenshot: Opt out of training on ChatGPT. ChatGPT/Screenshot by CNET]

    Berreby also advises you to “fragment your data” by switching between different AI chatbots to avoid giving one single entity a complete picture of your life. 

    4. Export your data 

    Whichever AI chatbot you’re using, regularly export your data to see what information it has stored about you. 

    In ChatGPT, go to Settings > Data Controls > Export Data. It’ll email you a link with a ZIP file of text and photos. 

    [Screenshot: Regularly export your data from ChatGPT to see what information the chatbot is storing about you. ChatGPT/Screenshot by CNET]

    5. Fact-check everything 

    Always err on the side of caution with AI-generated content. Expect errors and approach information with doubt. AI chatbots are designed to be helpful; they’re the ultimate people pleasers. That doesn’t mean the information is true or accurate.

    Cognitive bias is also an issue with chatbots. If you use one as a thought partner, it will mirror back what you put in, essentially becoming an echo chamber.

    Always check a chatbot’s sources and ask where it obtained its information. AI hallucinations also occur, where chatbots fabricate information, either by relying on unreliable online sources or by drawing incorrect conclusions.

    6. Watch out for sneaky scammers

    AI chatbots are capable of maintaining multiturn conversations, says Ron Kerbs, CEO of Kidas, a company that protects against scams and online threats. These back-and-forth interactions could be mimicked by bad actors on dodgy websites posing as helpful customer service chatbots. 

    “While large platforms like ChatGPT are generally secure, the risk lies in users unintentionally sharing access credentials through phishing links or fake login pages, often distributed via email, SMS or cloned websites,” Kerbs says. “Once credentials are compromised, a scammer could misuse the account, especially if it’s linked to saved payment methods.”

    Kerbs says you must enable two-factor authentication, monitor account access and avoid logging in through third-party links. That might be less convenient, but it’s a small price to pay. 
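
    To make “avoid logging in through third-party links” concrete, you can check a link’s host against a short allowlist before clicking it. A minimal Python sketch; the trusted domains below are illustrative, so substitute the services you actually use:

```python
from urllib.parse import urlparse

# Illustrative allowlist -- replace with the services you actually use.
TRUSTED = {"chatgpt.com", "openai.com"}

def looks_trusted(url: str) -> bool:
    """True only when the link's host is a trusted domain or a subdomain of one."""
    host = (urlparse(url).hostname or "").lower()
    return any(host == d or host.endswith("." + d) for d in TRUSTED)

print(looks_trusted("https://auth.openai.com/login"))    # True
print(looks_trusted("https://openai.com.login-now.io"))  # False: lookalike host
```

    The second URL shows the classic phishing trick: the real domain appears at the start of the host, but the browser actually connects to login-now.io.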

    While there’s no antivirus equivalent for AI chatbots yet, some tools offer scam detection as a layer of everyday protection, especially when embedded within messaging platforms and service providers. 

    Kerbs says it’s essential not only to scan your hard drive for viruses, but also to monitor your interactions via SMS, email and voice calls for potential scams. Deepfake protection can also analyze audio and video to detect if the person you’re speaking to is an AI clone. 

    7. Confide in people, not AI 

    This tip isn’t tactical, but it’s important: While you might see no harm in speaking to ChatGPT, Claude or Gemini about a problem you’re having, it’s a slippery slope to using a chatbot as a diary.

    Instead, call up a good friend or plan a catch-up to share what you’re going through with someone who cares about you — not a predictive AI model that’s been trained by strangers.

    8. Practice (and protect) critical thinking 

    Don’t outsource your thinking to AI. An ongoing MIT study (not yet peer-reviewed) offered a preliminary look at whether large language models can be detrimental to cognition, finding “weaker neural connectivity” in the brains of participants who used ChatGPT.

    Use AI for low-level tasks, but keep the creating, thinking and strategizing out of the algorithms. 

    (Disclosure: Ziff Davis, CNET’s parent company, in April filed a lawsuit against ChatGPT maker OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)
