Introduction
Welcome to the first part of our guide on enhancing Wazuh with Ollama!
In this post, we’ll dive into integrating Wazuh – a powerful open-source security platform – with Ollama, a tool for running large language models (LLMs) locally.
This combination will help improve threat detection, automate security workflows, and simplify incident response with the magic of artificial intelligence. Ready to take your cybersecurity to the next level? Let’s break down the benefits and walk through the setup process step by step.
Why Integrate Ollama with Wazuh? Top Benefits Unveiled
Combining Ollama with Wazuh offers numerous advantages, bringing together machine learning and security monitoring for smarter, faster protection. Here’s why this integration is a game-changer:
1. Smart Threat Intelligence and Anomaly Detection
- Threat Prioritization: Machine learning helps evaluate threats based on severity, filtering out false positives.
- Automated Incident Summaries: Get concise and actionable incident summaries within seconds.
2. Optimizing Security Operations
- Automated Response Playbooks: Ollama generates playbooks for Wazuh’s active response system based on detected threats.
- Enhanced SIEM and SOAR Integration: Integrating with SIEM and SOAR systems becomes more efficient with AI-powered data.
3. Simplified Compliance and Reporting
- Policy Recommendations: AI suggests optimal Wazuh settings to better align with compliance requirements.
Setting Up Ollama
What Is Ollama?
Ollama is an open-source tool that simplifies running large language models (LLMs) right on your machine.
It’s perfect for enhancing Wazuh with machine learning and AI without the complexity.
Ollama abstracts away the complexities of model management, quantization, and GPU acceleration. It provides a unified interface for downloading, configuring, and serving models through a local REST API. This is particularly important for security operations where sending sensitive log data to external cloud-based AI services may violate data residency requirements or organizational security policies. By running models locally with Ollama, all security event data remains within your infrastructure perimeter.
Installing Ollama: Your Options
You can install Ollama either locally or via Docker – the choice is yours.
Option 1: Local Installation (Without Docker)
- Go to the Ollama website.
- Download the binary for your operating system (Windows, macOS, or Linux).
- Run the model with the command:

```bash
ollama run llama3.2
```
Option 2: Installation via Docker
To run Ollama in a container, use this command:

```bash
docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
```
Bonus: Docker Compose for Easy Reuse
Here’s a neat Docker Compose file for future use:

```yaml
services:
  ollama:
    image: ollama/ollama:latest
    ports:
      - "11434:11434"
    volumes:
      - ./ollama:/root/.ollama
    restart: always
    environment:
      - OLLAMA_KEEP_ALIVE=24h
      - OLLAMA_HOST=0.0.0.0
```

Run it with:

```bash
docker-compose up -d
```
Available Ollama Models
Ollama supports several LLMs. Here’s a list of popular models:
| Model | Parameters | Size | Command |
|---|---|---|---|
| Llama 3.2 | 3B | 2.0GB | ollama run llama3.2 |
| Llama 3.1 | 8B | 4.7GB | ollama run llama3.1 |
| Gemma 2 | 9B | 5.5GB | ollama run gemma2 |
| Mistral | 7B | 4.1GB | ollama run mistral |
To download a model without immediately running it, use:

```bash
ollama pull llama3.2
```
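After pulling models, you can confirm what is installed with `ollama list`, or programmatically via the REST API's `/api/tags` endpoint. Here is a minimal Python sketch of the latter; it assumes Ollama is serving on its default port 11434, and the response shape follows the documented `/api/tags` format (a `models` array with `name` and `size` fields):

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434"  # default Ollama port; adjust if needed

def list_local_models(base_url: str = OLLAMA_URL) -> list[dict]:
    """Fetch the locally installed models from Ollama's /api/tags endpoint."""
    with urllib.request.urlopen(f"{base_url}/api/tags") as resp:
        return json.load(resp).get("models", [])

def summarize_models(models: list[dict]) -> list[str]:
    """Render 'name (size in GB)' lines from a /api/tags payload."""
    return [f"{m['name']} ({m['size'] / 1e9:.1f}GB)" for m in models]

if __name__ == "__main__":
    for line in summarize_models(list_local_models()):
        print(line)
```

This is handy for sanity-checking a fresh deployment before wiring anything into Wazuh.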
Model Selection Guidance for Security Workloads
Choosing the right model for Wazuh integration depends on your specific use case and available hardware resources. Here are key considerations:
- Llama 3.2 (3B): Best suited for environments with limited GPU memory. It handles basic alert summarization and classification tasks effectively while maintaining low resource consumption. Recommended for initial testing and proof-of-concept deployments.
- Llama 3.1 (8B): Provides a strong balance between performance and resource usage. This model delivers more nuanced analysis of security events and can handle complex queries about attack patterns and remediation steps. Recommended for production deployments on systems with at least 8GB of available RAM.
- Mistral (7B): Known for strong reasoning capabilities, making it well-suited for security event correlation and root cause analysis tasks. It performs particularly well when generating detailed incident response recommendations.
For Wazuh-specific workloads, consider using the fine-tuned wazuh-llama-3.1-8B-v1 model, which has been specifically trained on Wazuh security log data and can provide domain-specific analysis that general-purpose models cannot match.
Build Your Own Ollama Model with a Modelfile
To create an AI expert for Wazuh, follow these steps:

- Create a `Modelfile`:

```
FROM llama3.2
PARAMETER temperature 1
SYSTEM "You are an AI assistant and Wazuh expert. Respond only as a Wazuh professional."
```

- Create and run the model:

```bash
ollama create wazuh-expert -f ./Modelfile
ollama run wazuh-expert
```
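Besides the interactive `ollama run` prompt, you can query the new `wazuh-expert` model over the REST API. The sketch below uses only the Python standard library and sets `"stream": false` so the API returns a single JSON object rather than streamed NDJSON chunks; the host and port are the Ollama defaults:

```python
import json
import urllib.request

def build_chat_payload(model: str, question: str) -> dict:
    """Build a non-streaming chat request body for Ollama's /api/chat endpoint."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": question}],
        "stream": False,  # one JSON response instead of NDJSON chunks
    }

def ask(model: str, question: str, base_url: str = "http://localhost:11434") -> str:
    """Send a question to a local Ollama model and return the reply text."""
    req = urllib.request.Request(
        f"{base_url}/api/chat",
        data=json.dumps(build_chat_payload(model, question)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        # Non-streaming /api/chat responses carry the reply in message.content
        return json.load(resp)["message"]["content"]

if __name__ == "__main__":
    print(ask("wazuh-expert", "How do I enable vulnerability detection in Wazuh?"))
```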
Using the Ollama REST API
You can interact with Ollama through its API. For example, to send a query:

```bash
curl http://localhost:11434/api/chat -d '{
  "model": "llama3.2",
  "messages": [{ "role": "user", "content": "What is Wazuh?" }]
}'
```
Full API documentation is available on GitHub.
The REST API is the foundation for the Wazuh-Ollama integration. In subsequent parts of this series, we will build a custom integration script that sends Wazuh alert data to this API endpoint and processes the model’s response to enrich security events with AI-generated analysis.
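To give a flavor of where this is heading, here is a small, hypothetical sketch of the two pure pieces such a script needs: turning a Wazuh alert into a prompt and pulling the assistant text out of a non-streaming `/api/chat` response. The alert field names (`rule.level`, `rule.description`, `agent.name`) follow Wazuh's standard alert JSON; everything else is illustrative, not the final integration from later parts:

```python
def alert_to_prompt(alert: dict) -> str:
    """Turn a JSON-decoded Wazuh alert into a prompt for the model.

    Field names (rule.level, rule.description, agent.name) follow
    Wazuh's standard alert schema; the wording is illustrative.
    """
    rule = alert.get("rule", {})
    agent = alert.get("agent", {})
    return (
        "Analyze this Wazuh alert and suggest a response.\n"
        f"Agent: {agent.get('name', 'unknown')}\n"
        f"Rule level: {rule.get('level', '?')}\n"
        f"Description: {rule.get('description', 'n/a')}"
    )

def extract_reply(api_response: dict) -> str:
    """Pull the assistant text out of a non-streaming /api/chat response."""
    return api_response.get("message", {}).get("content", "")
```

Chaining these around an HTTP POST to `/api/chat` is essentially the whole enrichment loop we will flesh out later in the series.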
What’s Next?
We’ve set up Ollama! In Part 2, we’ll integrate it with Wazuh and continue the setup process.
Related Reading
- Applying RAG for Wazuh Documentation: Part 1 - Learn how to enhance Wazuh documentation with RAG
- How to Set Up a Custom Integration between Wazuh and MARK - Integrate Wazuh with MARK platform
- Boosting Container Image Security Using Wazuh and Trivy - Container security with Wazuh
Stay tuned for more!
Series Navigation:
- Part 1: Introduction to Integration (you are here)
- Part 2: Deploying Wazuh
- Part 3: Creating Integration
- Part 4: Configuration & Implementation
- Part 5: Local Ollama in Wazuh Dashboard