beelzebub by mariocandela

Honeypot framework for cyber attack detection and analysis

Created 3 years ago
1,532 stars

Top 27.1% on SourcePulse

View on GitHub
Project Summary

Beelzebub is a low-code honeypot framework for detecting and analyzing cyber attacks, using AI-driven behavioral mimicry to imitate real systems. It targets security professionals and researchers who want a secure, high-interaction honeypot with simplified deployment and real-time attack monitoring via a Telegram bot.

How It Works

Beelzebub employs a modular architecture in which each honeypot service is defined by its own YAML file, covering the SSH, HTTP, and TCP protocols. Its key innovation is LLM integration, which lets high-interaction honeypots generate dynamic, human-like responses; both OpenAI and local Ollama instances are supported, enabling sophisticated threat emulation.
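
A concrete service definition makes this tangible. The sketch below follows the one-file-per-service pattern the project describes, for an SSH honeypot with LLM-backed responses; the field names are illustrative and should be verified against the files in /configurations/services/ and the beelzebub-example repo.

    # configurations/services/ssh-llm.yaml (illustrative sketch)
    apiVersion: "v1"
    protocol: "ssh"
    address: ":2222"
    description: "SSH interactive honeypot with LLM-backed responses"
    commands:
      - regex: "^(.+)$"              # forward every command line to the LLM plugin
        plugin: "LLMHoneypot"
    serverVersion: "OpenSSH"
    passwordRegex: "^(root|password|123456)$"   # credentials the honeypot accepts
    deadlineTimeoutSeconds: 60
    plugin:
      llmModel: "gpt-4o"             # assumption: any supported OpenAI model name
      openAISecretKey: "sk-..."      # supply your own key; keep it out of version control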

Quick Start & Requirements

  • Docker Compose: docker-compose build && docker-compose up -d (a fuller sequence is sketched after this list)
  • Go Compiler: go mod download && go build && ./beelzebub
  • Kubernetes: helm install beelzebub ./beelzebub-chart
  • Prerequisites: Go compiler (for non-Docker builds), Docker, Helm (for Kubernetes).
  • LLM Integration: Requires an OpenAI API key or a running Ollama instance.
  • Configuration: Services are defined in the /configurations/services/ directory.
  • Docs: mariocandela/beelzebub-example
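
For the Docker Compose path, an end-to-end sequence might look like the following. The repository URL is the project's own; the environment variable name for the OpenAI key is an assumption, so check docker-compose.yml for the name it actually reads.

    # clone and enter the repository
    git clone https://github.com/mariocandela/beelzebub.git
    cd beelzebub

    # optional: export an OpenAI key for LLM-powered honeypots
    # (variable name is illustrative; verify against docker-compose.yml)
    export OPEN_AI_SECRET_KEY="sk-..."

    # build the images and start the honeypot in the background
    docker-compose build && docker-compose up -d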

Highlighted Details

  • Supports SSH, HTTP, and TCP honeypots.
  • Integrates with Ollama and OpenAI for LLM-powered honeypots (an Ollama plugin sketch follows this list).
  • Includes Prometheus integration for metrics and RabbitMQ for event handling.
  • Offers Docker and Kubernetes deployment options.
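
For the Ollama option mentioned above, the plugin section of a service file would point at a local instance instead of OpenAI. Again a hedged sketch: llmProvider and host are illustrative field names, and http://localhost:11434 is Ollama's default API address.

    plugin:
      llmProvider: "ollama"                      # illustrative field name
      llmModel: "llama3"                         # any model pulled into the local instance
      host: "http://localhost:11434/api/chat"    # Ollama's default endpoint (assumption)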

Maintenance & Community

  • Active development with a roadmap towards a PaaS platform.
  • Welcomes contributions; details are in the Contributor Guide.
  • Telegram channel for real-time attack updates.

Licensing & Compatibility

  • MIT License. Permissive for commercial use and integration with closed-source projects.

Limitations & Caveats

  • LLM integration requires careful prompt engineering and API key management.
  • The framework's security relies on proper configuration and isolation of honeypot environments.

Health Check

  • Last Commit: 2 days ago
  • Responsiveness: 1 day
  • Pull Requests (30d): 10
  • Issues (30d): 3
  • Star History: 152 stars in the last 30 days

Explore Similar Projects

Starred by Chip Huyen (author of "AI Engineering" and "Designing Machine Learning Systems"), Carol Willing (core contributor to CPython and Jupyter), and 3 more.

llm-security by greshake

Research paper on indirect prompt injection attacks targeting app-integrated LLMs

2k stars
Top 0.1% on SourcePulse
Created 2 years ago
Updated 2 months ago