DECEIVE  by splunk

LLM-powered honeypot for emulating realistic systems

Created 1 year ago
260 stars

Top 97.7% on SourcePulse

GitHubView on GitHub
Project Summary

DECEIVE is a high-interaction SSH honeypot that leverages Large Language Models (LLMs) to simulate realistic system environments and user interactions with minimal manual configuration. It targets security researchers and analysts seeking to study attacker behavior without exposing actual systems, offering automated generation of system prompts, user data, and responses.

How It Works

DECEIVE simulates a Linux server via SSH. Its core innovation lies in using an LLM, configured via a system prompt, to dynamically generate responses to attacker commands. This approach eliminates the need for manual seeding of realistic data and applications, allowing the LLM to create a believable environment on the fly. It logs all interactions, including user inputs, LLM outputs, and provides a post-session summary with a classification of benign, suspicious, or malicious activity.

Quick Start & Requirements

  • Install dependencies: pip3 install -r requirements.txt
  • Generate SSH host key: ssh-keygen -t rsa -b 4096 -f SSH/ssh_host_key
  • Configure LLM backend and user accounts in SSH/config.ini.
  • Define the simulated system in SSH/prompt.txt.
  • Run the honeypot: export OPENAI_API_KEY="<your_key>"; cd SSH; python3 ./ssh_server.py
  • Prerequisites: Python 3, SSH client, and an LLM API key (e.g., OpenAI).

Highlighted Details

  • Simulates a Linux server via SSH.
  • LLM-driven simulation reduces manual setup effort.
  • Logs user inputs, LLM responses, and session summaries with threat classification.
  • Supports custom system prompts for diverse simulation scenarios.

Maintenance & Community

Contributions are welcome via pull requests and issues. The project is hosted on GitHub.

Licensing & Compatibility

Licensed under the MIT License, permitting commercial use and integration with closed-source projects.

Limitations & Caveats

DECEIVE is explicitly stated as a proof-of-concept and not production-quality. It is primarily developed on macOS 15 but should function on other UNIX-like systems, including Linux and WSL on Windows.

Health Check
Last Commit

3 months ago

Responsiveness

Inactive

Pull Requests (30d)
0
Issues (30d)
0
Star History
2 stars in the last 30 days

Explore Similar Projects

Starred by Chip Huyen Chip Huyen(Author of "AI Engineering", "Designing Machine Learning Systems"), Michele Castata Michele Castata(President of Replit), and
3 more.

rebuff by protectai

0.4%
1k
SDK for LLM prompt injection detection
Created 2 years ago
Updated 1 year ago
Starred by Chip Huyen Chip Huyen(Author of "AI Engineering", "Designing Machine Learning Systems"), Carol Willing Carol Willing(Core Contributor to CPython, Jupyter), and
3 more.

llm-security by greshake

0.1%
2k
Research paper on indirect prompt injection attacks targeting app-integrated LLMs
Created 2 years ago
Updated 2 months ago
Feedback? Help us improve.