How ChatGPT poses privacy risks for businesses

Daily News Egypt

ChatGPT is a popular neural-network-based language model that many people use for a wide range of tasks. Using it for business purposes, however, can expose sensitive corporate data to several kinds of threats. Here are four privacy risks that businesses should be aware of when using ChatGPT:

  • Data breach on the provider’s side: ChatGPT is operated by a large tech company, but even major providers can be hacked or leak data accidentally. In one incident, a bug briefly allowed ChatGPT users to see titles from other users’ chat histories.
  • Data exposure through chatbots: Conversations with ChatGPT may be used to train future models. This means that any data entered into a chat, such as phone numbers, passwords, or trade secrets, may be memorized by the model and later surface in responses to other users. This is called “unintended memorization”, and it poses a serious privacy risk.
  • Malicious clients: ChatGPT is blocked by the authorities in some regions, and users there may turn to unofficial alternatives, such as third-party programs, websites, or messenger bots, that can contain malware or spyware. These malicious clients may steal or damage the user’s data.
  • Account hacking: ChatGPT accounts may be compromised by attackers who use phishing or credential-stuffing techniques. These attackers may gain access to the user’s data, such as chat histories, contacts, or files, and use them for malicious purposes.

To summarize, data leakage is a major privacy concern for both users and businesses when using chatbots. ChatGPT and other chatbot providers have different privacy policies that explain how they collect, store, process, and use data. According to Kaspersky’s analysis of popular chatbots, B2B offerings have higher security and privacy standards than B2C ones, since they handle more confidential information. B2B solutions typically do not save chat histories or send data to the provider’s servers, and some even run locally within the customer’s network.

Kaspersky’s security and privacy expert, Anna Larkina, advises businesses to educate their employees about the risks of using chatbots and to establish clear rules for their usage. She says, “Employees need to understand what data is confidential or personal, or constitutes a trade secret, and why it must not be fed to a chatbot. On the other hand, the company must spell out clear rules for using such services, if they are allowed at all.”
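As a hypothetical illustration of how such rules might be supported in practice (this sketch is not part of Kaspersky’s guidance; the pattern set, placeholder format, and function name are assumptions), a simple pre-submission filter could redact obvious sensitive strings, such as email addresses, phone numbers, and API keys, before a prompt is ever sent to an external chatbot. Real deployments would rely on proper data-loss-prevention tooling rather than a few regular expressions.

```python
import re

# Illustrative, crude patterns for data that should never reach an
# external chatbot. Production DLP tools use far richer detection.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "phone": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    "api_key": re.compile(r"(?i)\b(?:api[_-]?key|secret)\s*[:=]\s*\S+"),
}


def redact(prompt: str) -> str:
    """Replace matches of each sensitive pattern with a placeholder."""
    for name, pattern in SENSITIVE_PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED {name.upper()}]", prompt)
    return prompt


if __name__ == "__main__":
    raw = "Reach Jane at jane.doe@example.com or +1 555 123 4567, api_key=abc123"
    print(redact(raw))
    # Prints: Reach Jane at [REDACTED EMAIL] or [REDACTED PHONE], [REDACTED API_KEY]
```

A filter like this only reduces accidental exposure; the organisational rules Larkina describes still decide which services may be used at all.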

To get the benefits of chatbots while staying safe, Kaspersky experts also recommend:

  • Use Strong, Unique Passwords: Create a complex password for each of your accounts, and avoid easily guessable information such as birthdays or names (one way to generate such passwords is sketched after this list).
  • Beware of Phishing: Be cautious of unsolicited emails, messages, or calls asking for personal information. Verify the sender’s identity before sharing any sensitive data.
  • Educate Your Employees: Keep staff informed about the latest online threats and best practices for staying safe online.
  • Keep Software Updated: Regularly update your operating system, apps, and antivirus programs. These updates often contain security patches. 
  • Limit Information Sharing: Be cautious about sharing corporate or personal information on social media or public forums. Only provide it when necessary.
  • Verify URLs and Websites: Double-check the URL of websites you visit, especially before entering login credentials or making purchases.
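
As a minimal sketch of the first recommendation (the function name and password length are illustrative, not part of Kaspersky’s advice), a unique random password can be generated per account with Python’s standard secrets module instead of being reused or chosen by hand:

```python
import secrets
import string

# Character pool for generated passwords: letters, digits, and punctuation.
ALPHABET = string.ascii_letters + string.digits + string.punctuation


def generate_password(length: int = 20) -> str:
    """Return a cryptographically random password of the given length."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))


if __name__ == "__main__":
    # One distinct password per account, ideally stored in a password manager.
    for account in ("email", "chatbot", "vpn"):
        print(account, generate_password())
```

Storing the results in a password manager, rather than reusing them, is what makes this approach resistant to the credential-stuffing attacks described above.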