LLM Security Guard for Code
Arya Kavian, Mohammad Mehdi Pourhashem Kallehbasti, Sajjad Kazemi, Ehsan Firouzi, Mohammad Ghafari
Many developers rely on Large Language Models (LLMs) to facilitate software development. Nevertheless, these models have
exhibited limited capabilities in the security domain. We introduce LLMGuard, an open-source framework that offers enhanced code security through the synergy
between static code security analyzers and LLMs. In particular, it equips practitioners with code solutions that are more secure than the code initially generated
by LLMs. It also regularly benchmarks LLMs, providing valuable insights into the evolving security properties of these models.
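The analyzer–LLM feedback loop this abstract describes can be sketched roughly as follows. This is a minimal illustration, not the framework's actual API: `generate_code`, `run_analyzer`, and the retry bound are hypothetical stand-ins, and the "analyzer" here only flags one hard-coded weak-hash pattern.

```python
def run_analyzer(code: str) -> list[str]:
    """Stand-in for a static security analyzer: flags one known-bad pattern."""
    findings = []
    if "md5" in code:
        findings.append("weak hash algorithm (MD5)")
    return findings

def generate_code(prompt: str, feedback: list[str]) -> str:
    """Stand-in for an LLM call: emits a fix once feedback mentions MD5."""
    if any("MD5" in f for f in feedback):
        return "import hashlib\nh = hashlib.sha256(data).hexdigest()"
    return "import hashlib\nh = hashlib.md5(data).hexdigest()"

def secure_generate(prompt: str, max_rounds: int = 3) -> str:
    """Regenerate code, feeding analyzer findings back, until no findings remain."""
    feedback: list[str] = []
    code = ""
    for _ in range(max_rounds):
        code = generate_code(prompt, feedback)
        feedback = run_analyzer(code)
        if not feedback:
            break
    return code

print(secure_generate("hash the user's data"))
```

The point of the loop is that the analyzer's findings become part of the next generation prompt, so the model's second attempt is constrained by concrete security evidence rather than the original request alone.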
Security Risk Assessment on Cloud: A Systematic Mapping Study
Giusy Annunziata, Alexandra Sheykina, Gemma Catolino, Fabio Palomba, Andrea De Lucia, Filomena Ferrucci
Cloud computing has become integral to modern organizational operations, offering efficiency and agility. However, security challenges such as data
loss and downtime necessitate tailored compliance solutions. Risk assessment is crucial for identifying and mitigating cloud-related threats, yet a
standardized approach remains elusive. Our study aims to fill this gap with a systematic mapping study of the prevailing methodologies. Through a meticulous
analysis of 21 scholarly papers, we explore various aspects of security risk assessment for the cloud. The results provide valuable insights into delivery
models, standards, and validation practices, contributing to a comprehensive understanding of cloud risk assessment.
Semgrep*: Improving the Limited Performance of Static Application Security Testing (SAST) Tools
Gareth Bennett, Tracy Hall, Emily Winter, Steve Counsell
Vulnerabilities in code should be detected and patched quickly to reduce the time in which they can be exploited. There are many automated approaches
to assist developers in detecting vulnerabilities, most notably Static Application Security Testing (SAST) tools. However, no single tool detects all
vulnerabilities and so relying on any one tool may leave vulnerabilities dormant in code. In this study, we use a manually curated dataset to evaluate
four SAST tools on production code with known vulnerabilities. Our results show that the vulnerability detection rates of individual tools range from 11.2%
to 26.5%, but combining these four tools can detect 38.8% of vulnerabilities. We investigate why SAST tools are unable to detect 61.2% of vulnerabilities and
identify vulnerable code patterns missing from tool rule sets. Based on our findings, we create new rules for Semgrep, a popular configurable SAST tool. Our
newly configured Semgrep detects 44.7% of vulnerabilities, more than the four tools combined and a 181% improvement over Semgrep’s original detection rate.
Toward a Search-Based Approach to Support the Design of Security Tests for Malicious Network Traffic
Davide La Gamba, Gerardo Iuliano, Gilberto Recupito, Giammaria Giordano, Filomena Ferrucci, Dario Di Nucci, Fabio Palomba
IoT devices generate and exchange large amounts of data daily, creating significant security and privacy challenges. Security testing, particularly
using Machine Learning (ML), helps identify and classify potentially malicious network traffic. Previous research has shown how ML can aid in designing
security tests for IoT attacks. This work-in-progress paper introduces a search-based approach that uses Genetic Algorithms (GAs) to evolve detection rules for
intrusion attacks. We build on existing GA methods for intrusion detection and compare them with leading ML models. We propose 17 detection rules and
demonstrate that, while GAs do not fully replace ML, they perform well when ample attack examples are available and make deterministic test cases easier
for security testers to implement and use.
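As a rough illustration of how a GA can evolve detection rules, the toy sketch below evolves three feature thresholds over a handful of synthetic traffic records. The features, records, rule shape, and GA parameters are all invented for illustration; they are not the paper's 17 rules or its fitness function.

```python
import random

# Synthetic traffic records: (packet_rate, payload_size, distinct_ports), label (1 = attack).
# Invented data, for illustration only.
DATA = [
    ((900, 40, 120), 1), ((850, 60, 90), 1), ((50, 500, 3), 0),
    ((30, 800, 2), 0), ((950, 30, 150), 1), ((20, 400, 1), 0),
    ((880, 45, 110), 1), ((60, 600, 4), 0),
]

def classify(rule, record):
    """A rule is a triple of thresholds; flag traffic exceeding at least two."""
    hits = sum(feature >= threshold for feature, threshold in zip(record, rule))
    return 1 if hits >= 2 else 0

def fitness(rule):
    """Fraction of records the rule labels correctly."""
    return sum(classify(rule, rec) == label for rec, label in DATA) / len(DATA)

def evolve(pop_size=30, generations=40, seed=0):
    rng = random.Random(seed)
    pop = [tuple(rng.randint(0, 1000) for _ in range(3)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]          # truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randint(1, 2)               # one-point crossover
            child = list(a[:cut] + b[cut:])
            if rng.random() < 0.3:                # mutate one threshold
                i = rng.randrange(3)
                child[i] = max(0, child[i] + rng.randint(-100, 100))
            children.append(tuple(child))
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
print(best, fitness(best))
```

Unlike an ML classifier, the evolved rule is a plain threshold triple that a security tester can read, audit, and drop into a deterministic test case, which is the usability point the abstract makes.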