Enhancing Wazuh with Ollama: A Cybersecurity Boost (Part 1)

Introduction

Welcome to the first part of our guide on enhancing Wazuh with Ollama!

In this post, we’ll dive into integrating Wazuh — a powerful open-source security platform — with Ollama, a tool for running large language models (LLMs) locally.

This combination will help improve threat detection, automate security workflows, and simplify incident response with the magic of artificial intelligence. Ready to take your cybersecurity to the next level? Let’s break down the benefits and walk through the setup process step by step.

Why Integrate Ollama with Wazuh? Top Benefits Unveiled

Combining Ollama with Wazuh offers numerous advantages, bringing together machine learning and security monitoring for smarter, faster protection. Here’s why this integration is a game-changer:

1. Smart Threat Intelligence and Anomaly Detection

  • Threat Prioritization: Machine learning helps evaluate threats based on severity, filtering out false positives.
  • Automated Incident Summaries: Get concise and actionable incident summaries within seconds.

2. Optimizing Security Operations

  • Automated Response Playbooks: Ollama generates playbooks for Wazuh’s active response system based on detected threats.
  • Enhanced SIEM and SOAR Integration: AI-generated summaries and enrichment make it easier to feed Wazuh alerts into SIEM and SOAR workflows.

3. Simplified Compliance and Reporting

  • Policy Recommendations: AI suggests optimal Wazuh settings to better align with compliance requirements.

Setting Up Ollama

What Is Ollama?

Ollama is an open-source tool that simplifies running large language models (LLMs) right on your machine.

It’s a great fit for adding machine learning and AI to Wazuh without the complexity of a cloud-hosted service, and your security data never leaves your own infrastructure.

Installing Ollama: Your Options

You can install Ollama either locally or via Docker — the choice is yours.

Option 1: Local Installation (Without Docker)

  1. Go to the Ollama website.
  2. Download the binary for your operating system (Windows, macOS, or Linux).
  3. Run the model with the command:
    ollama run llama3.2
    

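Optionally, before pulling a large model, you can confirm the installation works. A quick sanity check might look like this (exact output varies by version):

ollama --version
ollama list                                 # models downloaded so far (empty on a fresh install)
curl http://localhost:11434/api/version     # the server listens on port 11434 by default
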
Option 2: Installation via Docker

To run Ollama in a container, use this command:

docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
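
Once the container is running, you can pull and chat with a model inside it. Assuming the container is named ollama as above, a typical command is:

docker exec -it ollama ollama run llama3.2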

Bonus: Docker Compose for Easy Reuse

Here’s a neat Docker Compose file for future use:

services:
  ollama:
    image: ollama/ollama:latest
    ports:
      - "11434:11434"
    volumes:
      - ./ollama:/root/.ollama
    restart: always
    environment:
      - OLLAMA_KEEP_ALIVE=24h
      - OLLAMA_HOST=0.0.0.0

Run it with:

docker-compose up -d
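
After the stack is up, you can manage models through the same service. For example, one way to pre-download a model and confirm the API is reachable from the host (service name as in the compose file above):

docker-compose exec ollama ollama pull llama3.2
curl http://localhost:11434/api/version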

Available Ollama Models

Ollama supports several LLMs. Here’s a list of popular models:

Model      Parameters  Size    Command
Llama 3.2  3B          2.0 GB  ollama run llama3.2
Llama 3.1  8B          4.7 GB  ollama run llama3.1
Gemma 2    9B          5.5 GB  ollama run gemma2
Mistral    7B          4.1 GB  ollama run mistral

To download a model, use the command:

ollama pull llama3.2
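
Models occupy several gigabytes each, so if you need to reclaim disk space, ollama rm deletes a downloaded model (the model name here is just an example):

ollama rm llama3.1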

Building Your Own Ollama Model with a Modelfile

To create an AI expert for Wazuh, follow these steps:

  1. Create a Modelfile:
    FROM llama3.2
    PARAMETER temperature 1
    SYSTEM "You are an AI assistant and Wazuh expert. Respond only as a Wazuh professional."
    
  2. Create and run the model:
    ollama create wazuh-expert -f ./Modelfile
    ollama run wazuh-expert
    
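To quickly check that the system prompt took effect, you can also pass a one-shot prompt straight on the command line (the question is just an example):

ollama run wazuh-expert "How do I raise the alert level for repeated SSH authentication failures?"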

Using the Ollama REST API

You can interact with Ollama through its API. For example, to send a query:

curl http://localhost:11434/api/chat -d '{
  "model": "llama3.2",
  "messages": [{ "role": "user", "content": "What is Wazuh?" }]
}'
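
By default, /api/chat streams the reply as a series of JSON objects. If you want a single response you can parse in a script, a minimal variation (assuming jq is installed) looks like this:

curl -s http://localhost:11434/api/chat -d '{
  "model": "llama3.2",
  "stream": false,
  "messages": [{ "role": "user", "content": "What is Wazuh?" }]
}' | jq -r '.message.content'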

Full API documentation is available on GitHub.

What’s Next?

We’ve set up Ollama! In Part 2, we’ll integrate it with Wazuh and continue the setup process.

Stay tuned for more!

