
The latest generation of AI: Hacker tool or automated defence system?

ChatGPT created hype around AI at the end of 2022. Experts, however, have been fascinated by artificial intelligence (AI) and its subfields, such as machine learning and natural language processing, for decades.

One highlight from a series of projects: in 2014, the US Department of Defense’s Defense Advanced Research Projects Agency (DARPA) launched the Cyber Grand Challenge. In this first all-machine cyber hacking tournament, seven networked autonomous systems hacked one another and wrote new programmes to close security vulnerabilities. Attacks were followed by automated defence, partly using AI, which was one way of bringing more security into an increasingly interconnected world built in part on defective software. In the current debate, however, concerned voices predominate. How dangerous is the latest generation of AI? Are more cyber attacks now taking place? In this interview, Dr. Siegfried Rasthofer, Senior Cyber Security Expert at Munich Re, shares his insights.

Dr. Rasthofer, language models, and therefore also AI, have suddenly become the focus of reporting. Why?

Generally speaking, the concepts for such language models have already existed for several years. But it’s only now that we’re seeing the breakthrough, as we now have the computing power needed to implement these models. What’s fascinating and new about ChatGPT, for example, is that this language model can give a complete and eloquent answer to a question in seconds. With more complex topics and questions, the answers can also be unsatisfactory, of course. But anyone looking for a recipe, for example, no longer has to click through the numerous results of a search engine but can ask the chatbot directly.

The mechanics behind this are easy to explain. The abbreviation GPT stands for “generative pre-trained transformer”. This means the chatbot draws on the vast body of information on which it was previously trained, processes that existing knowledge and adapts it. Ultimately, it’s all about statistical probability: based on the text so far and the patterns it has learnt, the chatbot tries to predict the next matching word or text sequence, using statistics derived from its training dataset.
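
To make the idea of next-word prediction concrete, here is a minimal sketch in Python. It is a deliberately crude word-count model, not how GPT works internally (real models use neural networks trained on vastly more data), and the mini-corpus is purely hypothetical:

```python
from collections import Counter, defaultdict

# Hypothetical mini-corpus; a toy stand-in for a training dataset.
corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Count which word follows which in the training text.
successors = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    successors[current][following] += 1

def predict_next(word: str) -> str:
    """Return the statistically most likely next word seen in training."""
    candidates = successors.get(word)
    if not candidates:
        return "<unknown>"
    return candidates.most_common(1)[0][0]

print(predict_next("the"))  # -> 'cat' ('cat' and 'mat' each follow twice; ties fall to the first seen)
```

The sketch also hints at why such models “hallucinate”: the prediction is only as good as the statistics in the training data, and a plausible-looking next word is not necessarily a correct one.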

There are now fears that these language models can cause harm. Are they useful for cyber attacks, and will there be more or new types of attacks as a result?

It’s a great shame that the focus always ends up being on the negative, because I could equally well use the technology to protect myself. What you have to bear in mind is that ChatGPT “hallucinates”: it predicts the next matching word, and sometimes that prediction is simply wrong. Because of the statistical nature of their approach, these models can make mistakes or give inaccurate information, especially when faced with unusual or contextually complex questions. And if a model could genuinely create something completely new, we would already have artificial general intelligence (AGI), which would enable a computer programme to understand or learn any intellectual task that a person can.

As regards language models like ChatGPT and the current situation, I don’t think we can expect serious new attack techniques. But we will probably see more automated attacks, which can also be carried out by people with less technical know-how. What you have to bear in mind, however, is that attacks always consist of several stages. Even if the initial compromise succeeds, that doesn’t automatically mean the full encryption of a company’s systems.

But could language models support the individual attack stages in a variety of ways?

Yes, I think so. And the speed within the individual stages will probably also increase as a result. Language model support can start off quite innocuously. Reconnaissance, for example, denotes the information-gathering stage prior to an attack, in which various publicly available sources, such as the company website or social networks, are used. The previously manual search for the target company’s employees, for example, is something criminals could get a language model to do. In the initial access stage, hackers could use spear-phishing emails to obtain access data, and a language model could take over the task of preparing a well-written email.

And it could also help develop malware, the key word being “obfuscation”. With this confusion tactic, programme code is deliberately and heavily altered, for example so that an antivirus programme no longer recognises a virus. Another thing I can imagine language models doing in the future is finding previously unknown software vulnerabilities and automatically writing exploit code to take advantage of them. This exploit code would then be used to attack others running the same vulnerable version of the analysed software.
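
To see why obfuscation defeats simple defences, consider this minimal, harmless sketch of signature-based scanning. The “signature” and “payload” are hypothetical stand-ins, not real malware indicators; the point is only that trivially re-encoding the same content makes a naive string match fail:

```python
import base64

# Hypothetical signature a naive antivirus scanner might look for.
SIGNATURE = "EVIL_MARKER"

plain_payload = "header EVIL_MARKER footer"
# The same content, trivially obfuscated by base64 encoding.
obfuscated_payload = base64.b64encode(plain_payload.encode()).decode()

def naive_scanner(data: str) -> bool:
    """Flag the input if it contains the known signature string."""
    return SIGNATURE in data

print(naive_scanner(plain_payload))       # True  - signature found
print(naive_scanner(obfuscated_payload))  # False - identical content slips past
```

Real obfuscation is far more sophisticated, but the principle is the same: a purely signature-based defence can be sidestepped by changing the code’s appearance without changing its behaviour.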

But as I already said earlier, a software manufacturer could equally well ask about a vulnerability and thus use a language model for protection.

You’ve already touched on this: how can companies protect themselves against increasingly sophisticated attacks?

We all know that protection doesn’t end with an antivirus programme. To be adequately prepared against the constantly growing threat of cyber attacks, a multi-level security concept is needed. This means that companies shouldn’t rely on just one security measure but on a combination of different protective mechanisms, similar to a line of defence with several barriers that make it difficult for attackers to penetrate the system or cause damage.

Take deepfakes, for example: deceptively genuine-looking manipulated images, audio recordings or videos. Here too, quality is continually improving, because computing power is increasing and AI performance is developing in leaps and bounds. Deepfakes are often used in business email compromise (BEC) scams, in which criminals use fake business emails to trigger financial transactions. Once the fake email has been sent, there may be a call between the victim and the criminal to confirm the transaction, with modern AI techniques being used to imitate the boss’s voice. Attacks of this kind are currently at the top end of the scale, but they illustrate the need for different protective mechanisms, which must be in place and must work. These include the principle of dual control, or a stored call-back number that is used solely for verification and approval.
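
As a concrete illustration of the dual-control principle mentioned above, here is a minimal sketch. All names, amounts and the threshold are hypothetical, not a description of any real approval system: a large payment is only released once two different employees have approved it.

```python
from dataclasses import dataclass, field

APPROVAL_LIMIT = 10_000  # hypothetical threshold in euros

@dataclass
class PaymentRequest:
    amount: float
    beneficiary: str
    approvals: set = field(default_factory=set)

    def approve(self, employee_id: str) -> None:
        self.approvals.add(employee_id)

    def is_released(self) -> bool:
        # Dual control: large payments need two *different* approvers.
        if self.amount < APPROVAL_LIMIT:
            return len(self.approvals) >= 1
        return len(self.approvals) >= 2

payment = PaymentRequest(amount=50_000, beneficiary="ACME Ltd")
payment.approve("alice")
print(payment.is_released())  # False - a second, distinct approver is required
payment.approve("alice")      # a repeat approval by the same person doesn't count
print(payment.is_released())  # False - the set deduplicates the approver
payment.approve("bob")
print(payment.is_released())  # True  - two different employees approved
```

The design point is that no single compromised or deceived employee, however convincing the deepfaked voice on the phone, can release the transaction alone.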

A multi-level security concept costs money, of course. And it’s an investment that, for some companies, doesn’t bring any immediately visible benefit. But in my view it’s a necessary step, as the risk of attacks will increase, and attacks can unfold more quickly where language models are used. Unfortunately, this requires not only investment but also IT specialists, who are scarce. This is an additional challenge, especially for medium-sized enterprises. Here, it’s worth considering engaging an external IT service provider.

Munich Re itself writes on the company website that the increasing use of AI in commerce is creating new classes of risk and customer requirements which have to be identified, analysed, understood and assessed. What touchpoints does Munich Re have with artificial intelligence?

In my research work in the Claims Department, we develop our own AI programmes which, in the area of threat intelligence, for example, help us to better understand cyber losses. These applications are part of our widespread use of AI in various areas. Our clients also benefit from our expertise with the aiSure and aiSelf insurance products.

Many thanks for the interview, Dr. Rasthofer.

Experts
Siegfried Rasthofer
Senior Cyber Security Expert
