Global Cyber Conference 2023
We are part of the Global Cyber Conference 2023 on September 14 and 15, 2023, in Zurich. Please feel welcome to contact us to start a new discussion or continue an existing one. Below, you will find more about Prof. Dr. Guido Salvaneschi’s keynote and our expert focus panel discussion.
Global Cybersecurity Threat Landscape in the Age of AI
Keynote by Prof. Dr. Guido Salvaneschi, September 14, 2023, 9:50 – 10:00 at Gallery.
How AI Is Changing the Security of Software Systems
Expert focus panel discussion by Prof. Dr. Guido Salvaneschi, MSc Daniel Sokolowski, and MSc David Spielmann, September 15, 2023, 11:00 – 11:30 at Ballroom.
Below we provide a structured reading list for the discussion. Click on a topic to unfold a list of references with short summaries; click on an entry’s title to open the article. For each article, we tag its outlet type and content type, and we highlight the best ones with a recommended read label. All online articles are referenced in the version displayed on September 15, 2023.
AI has long been used in cybersecurity, but LLMs have recently brought dramatic change.
- research insight D.E. Denning. An Intrusion-Detection Model. TSE. February 1987
- The paper describes an AI-based expert system that scans real-time audit records to detect abnormal activities and potential security breaches, adaptable to various systems.
- research insight R.P. Lippmann et al. Evaluating Intrusion Detection Systems: The 1998 DARPA Off-line Intrusion Detection Evaluation. DISCEX. 2000
- The paper evaluates a test bed for AI-based intrusion detection systems, finding that while they can moderately detect known attacks, they are far less effective at identifying new, previously unseen attacks, indicating a need for more adaptive approaches.
- white paper overview Kaspersky. Machine Learning for Malware Detection. 2021
- An overview of AI techniques for malware detection.
- research broad insight D. Arp et al. Dos and Don'ts of Machine Learning in Computer Security. USENIX Security. November 2021
- The paper critically examines the use of machine learning in computer security, identifying common pitfalls in design, implementation, and evaluation that can undermine system performance. Through a study of 30 papers and empirical analysis, it confirms these issues are widespread and offers actionable recommendations for improvement and directions for future research.
- recommended blog insight OpenAI. Introducing ChatGPT. November 2022
- OpenAI's release article for ChatGPT.
- recommended blog insight D. Johnson. How ChatGPT is changing the way cybersecurity practitioners look at the potential of AI. December 2022
- The article discusses the cybersecurity community's complex reactions to ChatGPT's capabilities and vulnerabilities. While some security professionals find its machine learning capabilities useful for both defensive and offensive tasks, concerns arise about its dual-use nature and how easily its ethical safeguards can be bypassed.
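The anomaly-detection idea behind Denning's model above can be illustrated with a minimal statistical sketch (ours, not the paper's actual model): build a profile of normal activity from past audit records and flag observations that deviate strongly from it.

```python
import statistics

def build_profile(history):
    """Build a simple statistical profile (mean, stddev) from past audit counts."""
    return statistics.mean(history), statistics.pstdev(history)

def is_anomalous(value, profile, k=3.0):
    """Flag an observation deviating more than k standard deviations from the profile."""
    mean, stdev = profile
    if stdev == 0:
        return value != mean
    return abs(value - mean) > k * stdev

# Historical count of failed logins per hour for one user (illustrative data).
history = [1, 0, 2, 1, 1, 0, 2, 1]
profile = build_profile(history)

print(is_anomalous(1, profile))   # typical activity
print(is_anomalous(25, profile))  # burst of failed logins
```

Real systems, including the expert system Denning describes, track many such metrics per subject and combine them with rules; the single-metric threshold here only conveys the core idea.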
AI can be used on both sides to improve attacks as well as defense.
- blog overview insight A. Joshi, D. Kerr. AI on offense: Can ChatGPT be used for cyberattacks?. May 2023
- The article explores the potential for generative AI models like GPT-4 to assist in cyberattacks, concluding that while these models can aid attackers in certain steps, they lack the autonomy and broad capabilities to execute sophisticated attacks end-to-end. It also highlights that AI can empower both defenders and offenders in cybersecurity, emphasizing the human responsibility to use such technologies wisely.
Concerns about improved phishing attacks were raised early in the media.
- news insight A. Hern, D. Milmo. AI Chatbots Making It Harder to Spot Phishing Emails, say experts. The Guardian. March 2023
- Experts warn that AI chatbots like ChatGPT are making it harder to detect phishing emails by eliminating common giveaways like poor grammar and spelling. Data indicates a rise in bot-written phishing emails that are linguistically complex and less likely to be caught by spam filters, raising concerns about the technology's role in cybercrime.
- news insight E. Sayegh. Almost Human: The Threat Of AI-Powered Phishing Attacks. Forbes. April 2023
- AI is increasingly being used by cybercriminals to create highly convincing phishing attacks, making it easier to deceive individuals through emails, SMS, and even deep-faked voice calls. The rise of AI-powered attacks is leading to an "arms race" between hackers and cybersecurity professionals, necessitating advanced AI-based security solutions and increased public awareness.
- news insight B. Violino. A.I. is Helping Hackers Make Better Phishing Emails. CNBC. June 2023
- AI is making it easier for cybercriminals to craft convincing phishing emails, posing new challenges for cybersecurity experts. To counter AI-assisted attacks, experts recommend organizations deploy AI-based defensive tools and update employee training on recognizing AI-enabled phishing campaigns.
- recommended blog insight Microsoft. How AI is changing phishing scams. July 2023
- AI technology like ChatGPT is making phishing attacks increasingly sophisticated, enabling scammers to produce more convincing and targeted emails and even voice cloning for deceptive calls. While this poses new security challenges, the advancement in AI also offers the potential for improved defense mechanisms, including real-time threat identification and predictive cybersecurity measures.
Generally, AI can assist in coding malware and may democratize attacks.
- blog insight R. Morrison. Here’s how OpenAI’s ChatGPT can be used to launch cyberattacks. Tech Monitor. December 2022
- ChatGPT's ease in generating code and mimicking human language raises concerns about its potential misuse for sophisticated cyberattacks.
- recommended blog overview M. Hill. 5 Ways Threat Actors Can Use ChatGPT to Enhance Attacks. CSO Online. April 2023
- Further ideas of how cybercriminals can leverage ChatGPT for improved attacks.
- news insight P. Muncaster. Experts Warn ChatGPT Could Democratize Cybercrime. Infosecurity Magazine. December 2022
- Security experts warn that the capabilities of AI chatbots like ChatGPT can lower the barrier to entry for cyber-criminals by helping them craft more effective attacks and even develop malware. While the AI is programmed to avoid directly generating harmful content, it can still inadvertently assist in malicious activities, highlighting the need for more robust preventative measures against abuse.
- blog insight Analytics Insight. Cybercriminals are Using ChatGPT to Create Hacking Tools and Code. January 2023
- Security researchers, including Israeli firm Check Point, have found that both experienced and novice hackers are increasingly using ChatGPT to develop hacking tools, code for malware, and phishing emails, raising concerns about the chatbot's impact on cybersecurity. The report suggests that while ChatGPT can be used for beneficial purposes, its accessibility and capabilities also make it easier for cybercriminals with minimal programming experience to create dangerous tools, requiring urgent attention from the cybersecurity community.
- recommended blog insight A. Mulgrew. I built a Zero Day virus with undetectable exfiltration using only ChatGPT prompts. Forcepoint. April 2023
- The blog presents an alarming experiment where the author claims to have successfully exploited the capabilities of ChatGPT to assist in the development of advanced malware, using techniques like steganography to evade detection. The author argues that the ease with which this was achieved exposes vulnerabilities in both AI safeguarding measures and current cybersecurity defenses, highlighting the potential risks of leveraging AI technologies like ChatGPT for malicious purposes.
- blog overview A. Daniel. Five Ways Cybercriminals are Making Use of ChatGPT. IT Brief Australia. June 2023
- Ways in which ChatGPT has already been used for attacks.
On the other side, AI can help defenders, e.g., in penetration testing.
- tool example PentestGPT
- An example of novel AI-powered defense techniques: a penetration testing tool leveraging LLMs (GPT-4 is recommended). It tries to solve the problem of losing required context across penetration testing sessions.
- recommended research insight A. Happe, J. Cito. Getting pwn'd by AI: Penetration Testing with Large Language Models. arXiv. August 2023
- This paper investigates the use of large language models like GPT-3.5 in aiding penetration testers with both high-level task planning and low-level vulnerability hunting. The study includes a closed feedback loop in which the AI model suggests and executes attack vectors in a vulnerable virtual machine, and it concludes with a discussion of the promising initial results and ethical considerations.
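The closed feedback loop that such LLM-assisted pentesting tools use can be sketched roughly as follows. This is an illustrative toy, not the authors' implementation: the LLM call is stubbed out, the example only runs the harmless `id` command, and any real use belongs strictly in a sanctioned lab environment.

```python
import subprocess

def suggest_next_command(goal, transcript):
    """Placeholder for an LLM call (e.g., GPT-3.5/GPT-4): given the goal and the
    session transcript so far, return the next shell command to try. Stubbed here."""
    if "$ id" not in transcript:
        return "id"  # a real model would propose context-dependent commands
    return None  # goal considered reached in this toy example

def pentest_loop(goal, max_steps=5):
    """Closed feedback loop: the model suggests a command, we execute it in the
    (authorized) target environment, and feed the output back for the next step."""
    transcript = ""
    for _ in range(max_steps):
        command = suggest_next_command(goal, transcript)
        if command is None:
            break
        result = subprocess.run(command, shell=True, capture_output=True, text=True)
        transcript += f"$ {command}\n{result.stdout}{result.stderr}"
    return transcript

print(pentest_loop("enumerate current user privileges"))
```

Feeding each command's output back into the prompt is what preserves session context, the problem PentestGPT and the paper above both target.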
Generally, AI can assist in (code) security analysis and improvement.
- news insight B. Nolan. OpenAI's ChatGPT can write impressive code. Here are the prompts you should use for the best results, experts say. Business Insider. August 2023
- OpenAI's ChatGPT can generate functional code, intriguing tech leaders and programmers. To improve code quality, experts suggest using clear, specific prompts and assigning ChatGPT a role like "world-class programmer."
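The prompting advice above can be illustrated with a small sketch. The message format follows the common chat-completion convention (a system message assigning a role, then a specific user request); the helper function and its parameters are ours, for illustration only.

```python
def build_coding_prompt(task, role="world-class programmer", constraints=()):
    """Assemble a chat-style prompt: a system message assigning a role, plus a
    clear, specific user request, as the article's experts suggest."""
    user_request = task
    if constraints:
        user_request += "\nRequirements:\n" + "\n".join(f"- {c}" for c in constraints)
    return [
        {"role": "system", "content": f"You are a {role}."},
        {"role": "user", "content": user_request},
    ]

messages = build_coding_prompt(
    "Write a Python function that validates email addresses.",
    constraints=["include type hints", "add unit tests"],
)
for message in messages:
    print(f"{message['role']}: {message['content']}")
```

The resulting list can be passed to any chat-completion API; the point is that an explicit role plus concrete requirements tends to yield better code than a bare one-line request.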
Potentially, AI raises the bar for both attackers and defenders.
- blog insight D. Merian. ChatGPT CVE Analysis for Red and Blue Team. Medium. March 2023
- The article discusses how ChatGPT can be used by both Red Teams and Blue Teams for CVE (Common Vulnerabilities and Exposures) analysis, helping the Red Team exploit vulnerabilities and the Blue Team understand and defend against them. Using a test scenario of CVE-2017-7494, a remote code execution vulnerability in Samba, the article confirms that ChatGPT can effectively analyze code for vulnerabilities and suggests potential attack sequences.
To ensure defenders keep up, we urgently need practice and training.
- recommended survey broad insight Harvard Business Review Analytic Services. Artificial Intelligence. November 2018
- The adoption of Artificial Intelligence (AI) and Machine Learning (ML) in businesses is rapidly increasing, with larger organizations more likely to already use these technologies. They offer a wide range of benefits, including predictive analytics, anomaly detection, and improved customer experience, attracting attention from senior executives; however, challenges such as a skills gap and the technical complexity of AI/ML remain.
- recommended blog insight K. Renaud, M. Warkentin, G. Westerman. From ChatGPT to HackGPT: Meeting the Cybersecurity Threat of Generative AI. MIT Sloan Management Review. April 2023
- The article warns that the rise of generative AI tools like ChatGPT poses new and sophisticated cybersecurity challenges for businesses, moving from mass attacks to highly personalized and intelligent threats that can bypass traditional security measures. To defend against these evolving threats, companies need to adopt real-time adaptive strategies that include employing generative AI for defense, enhancing anomaly detection, and shifting from rule-based employee training to knowledge-based preparedness.
- course example D.R. Pothula. Advanced Ethical Hacking: Mastery AI & ChatGPT. Udemy. July 2023
- An eight-module educational program exploring the intersection of AI and ethical hacking, focusing on advanced techniques and the role of ChatGPT, along with the risks and countermeasures involved.
Current AI has many limitations and challenges, including security and privacy.
- blog insight W. Knight. A New Attack Impacts Major AI Chatbots—and No One Knows How to Stop It. Wired. August 2023
- Researchers at CMU found that the security guardrails of current LLMs can easily be bypassed by appending small, simple additions to the prompt. Zico Kolter: "We just don't know how to make them secure."
- recommended blog insight R. Lemos. Employees Are Feeding Sensitive Biz Data to ChatGPT, Raising Security Fears. Dark Reading. March 2023
- The article highlights the growing cybersecurity concerns as employees increasingly input sensitive business and personal information into large language models like ChatGPT, risking data leakage and legal complications for companies. It cites instances of misuse, discusses the potential for "training data extraction attacks," and suggests that employee education and corporate policies are key to mitigating these risks.
- research insight B. Xu et al. Are We Ready to Embrace Generative AI for Software Q&A?. ASE NIER. August 2023
- A study comparing the quality of human-written and ChatGPT-generated answers on Stack Overflow, concluding that while both types are semantically similar, human answers outperform ChatGPT's by 10% overall. Stack Overflow had previously banned ChatGPT, citing the AI's low-quality responses as the reason.
Lastly, AI regulation is still at an early stage.
- news overview C. Kang. In U.S., Regulating A.I. Is in Its ‘Early Days’. The New York Times. July 2023
- Despite increased attention and activity by U.S. lawmakers and the White House regarding artificial intelligence (A.I.) regulations, concrete rules for the technology are still far from being established. In contrast to Europe, where A.I. legislation is set to be enacted, the U.S. is in the early stages of a likely long and difficult process, facing a lack of consensus among stakeholders and lawmakers on how to handle the rapidly evolving technology.
- recommended white paper broad insight Holistic AI. The State of AI Regulations in 2023. January 2023
- Detailed summary of the ongoing AI-related regulation activities in the US, EU, UK, and China.
- white paper guide European Union Agency for Cybersecurity (ENISA). Multilayer Framework for Good Cybersecurity Practices for AI. June 2023
- Guidelines for national authorities and AI stakeholders to ensure the security of AI-based systems. The proposed FAICP framework defines three layers of cybersecurity practices and discusses open issues in each of them.
- white paper guide World Economic Forum. Adopting AI Responsibly: Guidelines for Procurement of AI Solutions by the Private Sector. June 2023
- Guidelines for the ethical adoption of AI technology in the industry.
- recommended blog insight OpenAI. Introducing ChatGPT Enterprise. August 2023
- OpenAI's release article for ChatGPT Enterprise, addressing some Enterprise issues regarding trust, risk, and security. It announces further developments in this direction.