MAKING simple mistakes when talking to artificial intelligence chatbots can leave you dangerously exposed.
That's the official advice from security experts who say you need to be very careful when using apps like OpenAI's ChatGPT.
The same rules apply to any other chatbot too, including popular options like Google Gemini and Microsoft Copilot.
These apps can help you get things done much faster and can answer almost any question you throw at them.
But experts say there are several mistakes that you must not make – with Keeper Security warning users to follow one "crucial" rule.
That specific rule is to never enter any sensitive information into ChatGPT or any other general chatbot.
"Because ChatGPT acknowledges that there are occasions when it will share your private information with third parties, it is best never to enter sensitive data into ChatGPT," said Keeper Security's Ashley D'Andrea.
"Everything that goes into ChatGPT is saved and stored within its database, and if ChatGPT suffers a data breach, the private information you share with ChatGPT could end up in the wrong hands.
"This is why it is crucial not to upload sensitive documents like legal PDFs or financial records into ChatGPT."
Of course that's not the only thing you need to be careful about when using AI chatbots.
For instance, it's important to make sure that you're actually talking to the chatbot that you think you are.
Like regular apps, chatbots can also be cloned by crooks who set up fake versions designed to trick you.
Make sure you're using the official ChatGPT website or app and don't be fooled by online spin-offs promising extra features – especially if they're demanding sensitive info or a fee.
You should also avoid using ChatGPT to create passwords for your accounts.
Ashley explained: "As mentioned before, the content produced by ChatGPT stays in ChatGPT’s database.
"So, if it creates passwords for you and ChatGPT’s data is breached, a cybercriminal would have access to the passwords that it has generated.
What is ChatGPT?
ChatGPT is a new artificial intelligence tool
ChatGPT, which was launched in November 2022, was created by OpenAI, a San Francisco-based AI research firm.
It’s part of a new generation of AI systems.
ChatGPT is a language model that can produce text.
It can converse, generate readable text on demand and produce images and video based on what has been learned from a vast database of digital books, online writings and other media.
ChatGPT essentially works like a written dialogue between the AI system and the person asking it questions.
GPT stands for Generative Pre-trained Transformer and describes the type of model that can create AI-generated content.
If you prompt it – for example, asking it to “write a short poem about flowers” – it will create a chunk of text based on that request.
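For the more technically minded, the same kind of prompt can be sent to ChatGPT's underlying models through OpenAI's API. Here's a minimal sketch, assuming the official openai Python package is installed and an API key is stored in the OPENAI_API_KEY environment variable (the model name is just an illustrative example):

```python
from openai import OpenAI

# The client reads your key from the OPENAI_API_KEY environment variable by default
client = OpenAI()

# Send a single prompt, just like typing it into the ChatGPT app
response = client.chat.completions.create(
    model="gpt-4o-mini",  # example model name; use whichever model you have access to
    messages=[{"role": "user", "content": "Write a short poem about flowers"}],
)

# The generated text comes back in the first choice's message
print(response.choices[0].message.content)
```

The ChatGPT app essentially wraps this same request-and-response loop in a chat interface.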
ChatGPT can also hold conversations and even learn from things you’ve said.
It can handle very complicated prompts and is even being used by businesses to help with work.
But note that it might not always tell you the truth.
“ChatGPT is incredibly limited, but good enough at some things to create a misleading impression of greatness,” OpenAI CEO Sam Altman said in 2022.
"Another important reason you should not use ChatGPT to create passwords is that it may generate the same passwords for multiple users."
That means if another user's password is leaked, the same log-in could be tried on your account – allowing crooks to break in.
It's important to use totally unique log-ins generated by purpose-built password managers, like Apple's iCloud Keychain on your iPhone.
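For the curious, here's a minimal sketch of what a password generator does under the hood, using Python's built-in secrets module – the key point being that the password is created locally on your own machine, so it never sits in any chatbot's database (the length and character set here are illustrative choices):

```python
import secrets
import string

def generate_password(length: int = 20) -> str:
    """Build a random password from letters, digits and punctuation
    using a cryptographically secure random source."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())  # unique on every run, generated entirely on your device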
Another rule to follow is not to believe everything that an AI chatbot tells you.
AI ROMANCE SCAMS – BEWARE!
Watch out for criminals using AI chatbots to hoodwink you...
The U.S. Sun recently revealed the dangers of AI romance scam bots – here's what you need to know:
AI chatbots are being used to scam people looking for romance online. These chatbots are designed to mimic human conversation and can be difficult to spot.
However, there are some warning signs that can help you identify them.
For example, if the chatbot responds too quickly and with generic answers, it's likely not a real person.
Another clue is if the chatbot tries to move the conversation off the dating platform and onto a different app or website.
Additionally, if the chatbot asks for personal information or money, it's definitely a scam.
It's important to stay vigilant and use caution when interacting with strangers online, especially when it comes to matters of the heart.
If something seems too good to be true, it probably is.
Be skeptical of anyone who seems too perfect or too eager to move the relationship forward.
By being aware of these warning signs, you can protect yourself from falling victim to AI chatbot scams.
Artificial intelligence can have "hallucinations", sharing false information with you and claiming that it's accurate.
It's important to fact-check what an AI tells you, especially if it's related to something very serious like health or politics.
And if you're asking for an opinion, it might be worth speaking to multiple chatbots to get a sense of how they differ.