Mapping LLM API attack surface

Mapping LLM API attack surface

Exploiting LLM APIs with excessive agency

Introduction

Organizations are quickly integrating Large Language Models (LLMs) to enhance their online customer experience. However, this exposes them to web LLM attacks, which exploit the model's access to data, APIs, or user information that an attacker cannot directly access.

In general, attacking an LLM integration is like exploiting a server-side request forgery (SSRF) vulnerability. In both situations, an attacker uses a server-side system to attack another part that can't be accessed directly.

Detecting LLM vulnerabilities

Our recommended method for detecting LLM vulnerabilities is:

  1. Identify the LLM's inputs, including both direct inputs (like a prompt) and indirect inputs (such as training data).

  2. Determine what data and APIs the LLM can access.

  3. Examine this new attack surface for vulnerabilities.

Mapping LLM API attack surface

Tasks

  • To solve the lab, use the LLM to delete the user carlos.

Required Knowledge

To solve this lab, you need to know:

  • How LLM APIs function.

  • How to map the LLM API attack surface.

Steps

  1. Access the web and get live chat.

  2. Ask normal prompt in chatbox. Lol :)

  3. Try to ask what APIs it has access to

  4. We got idea about LLM is using SQL database. Try to inject SQLI payload now.

    Try to inject payload.

SELECT * FROM users

5. We can see carlos, username. password and email. Now let’s delete carlos.

DELETE FROM users WHERE username='carlos'

Congratulations, you solved the lab!

#llmhacking #chatbot #hacking #webhacking #happylearning