Introduction
Welcome to the first part of our guide on enhancing Wazuh with Ollama!
In this post, we’ll dive into integrating Wazuh — a powerful open-source security platform — with Ollama, a tool for running large language models (LLMs) locally.
This combination will help improve threat detection, automate security workflows, and simplify incident response with the magic of artificial intelligence. Ready to take your cybersecurity to the next level? Let’s break down the benefits and walk through the setup process step by step.
Why Integrate Ollama with Wazuh? Top Benefits Unveiled
Combining Ollama with Wazuh offers numerous advantages, bringing together machine learning and security monitoring for smarter, faster protection. Here’s why this integration is a game-changer:
1. Smart Threat Intelligence and Anomaly Detection
- Threat Prioritization: Machine learning helps evaluate threats based on severity, filtering out false positives.
- Automated Incident Summaries: Get concise and actionable incident summaries within seconds.
2. Optimizing Security Operations
- Automated Response Playbooks: Ollama generates playbooks for Wazuh’s active response system based on detected threats.
- Enhanced SIEM and SOAR Integration: AI-enriched alert data makes it easier to feed Wazuh findings into SIEM and SOAR pipelines.
3. Simplified Compliance and Reporting
- Policy Recommendations: AI suggests optimal Wazuh settings to better align with compliance requirements.
Setting Up Ollama
What Is Ollama?
Ollama is an open-source tool that simplifies running large language models (LLMs) right on your machine.
It’s perfect for enhancing Wazuh with machine learning and AI without the complexity.
Installing Ollama: Your Options
You can install Ollama either locally or via Docker — the choice is yours.
Option 1: Local Installation (Without Docker)
- Go to the Ollama website.
- Download the binary for your operating system (Windows, macOS, or Linux).
- Run the model with the command:
ollama run llama3.2
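Once the binary is installed, an optional sanity check before pulling any models is to print the installed version:
ollama --version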
Option 2: Installation via Docker
To run Ollama in a container, use this command:
docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
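Once the container is running, you can pull and chat with a model directly inside it, using the container name from the command above:
docker exec -it ollama ollama run llama3.2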
Bonus: Docker Compose for Easy Reuse
Here’s a neat Docker Compose file for future use:
services:
  ollama:
    image: ollama/ollama:latest
    ports:
      - "11434:11434"
    volumes:
      - ./ollama:/root/.ollama
    restart: always
    environment:
      - OLLAMA_KEEP_ALIVE=24h
      - OLLAMA_HOST=0.0.0.0
Run it with:
docker-compose up -d
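Whichever way you started Ollama, you can confirm the API is reachable on port 11434 by listing the models it already has available:
curl http://localhost:11434/api/tags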
Available Ollama Models
Ollama supports several LLMs. Here’s a list of popular models:
| Model | Parameters | Size | Command |
| --- | --- | --- | --- |
| Llama 3.2 | 3B | 2.0GB | ollama run llama3.2 |
| Llama 3.1 | 8B | 4.7GB | ollama run llama3.1 |
| Gemma 2 | 9B | 5.5GB | ollama run gemma2 |
| Mistral | 7B | 4.1GB | ollama run mistral |
To download a model, use the command:
ollama pull llama3.2
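You can check which models have already been downloaded with:
ollama list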
Build Your Own Ollama Model with a Modelfile
To create an AI expert for Wazuh, follow these steps:
- Create a Modelfile:
FROM llama3.2
PARAMETER temperature 1
SYSTEM "You are an AI assistant and Wazuh expert. Respond only as a Wazuh professional."
- Create and run the model:
ollama create wazuh-expert -f ./Modelfile
ollama run wazuh-expert
Using the Ollama REST API
You can interact with Ollama through its API. For example, to send a query:
curl http://localhost:11434/api/chat -d '{
  "model": "llama3.2",
  "messages": [{ "role": "user", "content": "What is Wazuh?" }]
}'
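By default, /api/chat streams the reply as a sequence of JSON chunks. If you prefer a single JSON response, you can disable streaming; this example also targets the wazuh-expert model created earlier:
curl http://localhost:11434/api/chat -d '{
  "model": "wazuh-expert",
  "messages": [{ "role": "user", "content": "How does Wazuh active response work?" }],
  "stream": false
}'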
Full API documentation is available on GitHub.
What’s Next?
We’ve set up Ollama! In Part 2, we’ll integrate it with Wazuh and continue the setup process.
Stay tuned for more!
See also
- Enhancing Wazuh with Ollama: Boosting Cybersecurity (Part 2)
- Enhancing Wazuh with Ollama: Boosting Cybersecurity (Part 3)
- Enhancing Wazuh with Ollama: Boosting Cybersecurity (Part 4)
- Applying RAG for Wazuh Documentation: A Step-by-Step Guide (Part 1)
- Applying RAG for Working with Wazuh Documentation: A Step-by-Step Guide (Part 2)