ChatGPT and cyber risks
Q&A with Zair Kamal
Director, Client Development & Cyber Specialist
We’re now seeing breakthroughs in language models such as ChatGPT, driven by the availability of the computing power needed to train and run them. People are naturally fascinated by how machine learning and AI can benefit their lives and businesses. However, with the evolution of new technology comes risk.
In this article, Zair Kamal, Director, Client Development and Cyber Specialist, HSB Canada, provides cyber security insights.
What is ChatGPT?
ChatGPT is a state-of-the-art language generation model developed by OpenAI, capable of understanding and generating human-like text based on a given prompt.
ChatGPT can give a complete, articulate answer to almost any question typed into it. Looking for the world’s best chocolate cake recipe? Type the question into ChatGPT and you may never need to search Google.
But, how could a ChatGPT-type AI model cause harm?
Language models such as ChatGPT can be used in the following ways by bad actors:
1. Compromising sensitive data
Language models process, and may store, large amounts of data from the queries users enter. If employees upload sensitive or confidential information into the model, that data could be hacked, leaked or accidentally exposed.
2. Re-writing code to develop malware
Language models may be able to rewrite software code on demand. Applied to malware, existing code could be modified just enough that an antivirus program no longer recognizes it as a virus.
3. Preparing phishing emails
Language models may take over the task of preparing well-written, convincing phishing emails, and do so at scale.
4. More efficient information-gathering
Normally, a cybercriminal would manually search a target company’s website and social networks. Now, criminals could use language models to automate these searches, giving them faster access to information about potential targets.
How can businesses protect themselves against increasingly sophisticated attacks?
It takes not just one security measure, but a combination of different lines of defense. It’s important to create several barriers to make it difficult for attackers to penetrate the system or cause damage.
Examples include:
Classify sensitive data
- Identify and classify your data into different sensitivity levels.
- Clearly define what type of data can be shared with ChatGPT and what should remain confidential.
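One way to enforce a "what can be shared" policy in practice is to screen prompts before they leave the organization. The sketch below is a minimal, illustrative example, assuming a simple regex-based redactor; the patterns and labels are assumptions for demonstration, and a real deployment would use a data classification tool tuned to the organization's own sensitivity levels.

```python
import re

# Illustrative patterns only -- not an exhaustive or production-grade list.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    # 13-16 digits, optionally separated by spaces or dashes (payment cards)
    "CREDIT_CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    # Canadian Social Insurance Number format: 3-3-3 digits
    "SIN": re.compile(r"\b\d{3}[ -]?\d{3}[ -]?\d{3}\b"),
}

def redact(text: str) -> str:
    """Replace likely-sensitive substrings with placeholder tags
    before the text is sent to an external AI service."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Summarize: contact jane.doe@example.com, card 4111 1111 1111 1111."
print(redact(prompt))
```

A gateway like this cannot catch every kind of confidential content (for example, free-text trade secrets), which is why the classification policy and user training steps still matter.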
User training and awareness
- Educate your team on the importance of data security when using ChatGPT, and on why sensitive information should never be shared with it.
- Teach them how to recognize and report suspicious activity.
Control access to ChatGPT
- Ensure that only authorized personnel can access and use ChatGPT or related systems.
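Access control can be as simple as a deny-by-default check at an internal gateway. The sketch below is a hypothetical illustration; the group names and the idea of routing AI traffic through a gateway are assumptions, not a description of any specific product.

```python
# Hypothetical allowlist for an internal AI gateway.
# Group names are illustrative assumptions.
AUTHORIZED_GROUPS = {"marketing-approved", "it-security"}

def may_use_chatgpt(user_groups: set) -> bool:
    """Grant access only if the user belongs to at least one group
    explicitly approved for AI tool usage (deny by default)."""
    return not AUTHORIZED_GROUPS.isdisjoint(user_groups)

print(may_use_chatgpt({"it-security", "staff"}))  # member of an approved group
print(may_use_chatgpt({"finance"}))               # no approved group: denied
```

The key design choice is deny-by-default: anyone outside an explicitly approved group is blocked, so new staff or systems are never granted AI access by accident.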
Incident response
- Develop a well-defined incident response plan in case of a data breach or misuse.
- This should include communication strategies, investigation procedures, and mitigation steps.