Two LLM Security Assistants for Wazuh and AWS Analysis

When Your SOC Analyst Can’t Keep Up (Or Just Needs a Break)

Let’s be honest: analyzing thousands of security events every day isn’t the most exciting job.

Especially when half the alerts are false positives, and the other half require deep context you've already forgotten by your third cup of coffee.

That’s exactly why I created two specialized AI assistants that actually understand what’s happening in your logs. And no, this isn’t another ChatGPT with a “you are a security expert” prompt. These are fine-tuned models based on LLaMA 3.1 8B, trained on real (well, almost real) attack scenarios and security events.

Model #1: Wazuh Security Assistant — The Wazuh Expert That Never Sleeps

Where to Find:

What Can This Model Do?

Imagine a colleague who:

  • Has seen more SSH brute-forces than you’ve opened Jira tickets all year
  • Knows all Wazuh severity levels (0-15) and can explain why level 12 means “drop everything and fix it right now”
  • Doesn’t confuse legitimate admin activity with lateral movement
  • Gives specific recommendations, not just “check the logs” (thanks, Captain Obvious)
  • Maps everything to MITRE ATT&CK because compliance requires it

Technical Specifications:

  • Base: meta-llama/Llama-3.1-8B-Instruct
  • Architecture: 8B parameters (fine-tuned with LoRA)
  • Languages: English, Russian, Spanish
  • Size: 4.7GB (Ollama version)
  • Context: 128K tokens
  • Optimization: Unsloth (2x faster)
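The 4.7GB figure lines up with simple quantization arithmetic. A rough back-of-the-envelope sketch (the helper below is my own illustration, not part of either model's tooling, and it ignores KV cache and runtime overhead):

```python
def approx_weight_size_gb(params_billions: float, bits_per_param: float) -> float:
    """Rough size of model weights in GB, ignoring runtime overhead."""
    return params_billions * bits_per_param / 8  # 8 bits per byte

# 8B parameters: fp16 vs a ~4.7-bit average (Q4-style formats mix precisions,
# so the bits-per-param here is back-solved from the published download size)
print(approx_weight_size_gb(8, 16))   # ~16 GB unquantized fp16
print(approx_weight_size_gb(8, 4.7))  # ~4.7 GB, matching the Ollama build
```

So the quantized download is roughly a quarter of the full-precision weights, which is why it fits on a laptop.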

Usage Examples

Via HuggingFace Transformers:

from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("pyToshka/wazuh-llama-3.1-8b-assistant")
tokenizer = AutoTokenizer.from_pretrained("pyToshka/wazuh-llama-3.1-8b-assistant")

prompt = """Analyze this Wazuh alert:

Multiple failed SSH login attempts from IP 45.142.120.10
Rule Level: 10
Event: sshd authentication failure

Provide:
1. Threat level
2. Attack type
3. Recommended actions
"""

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=512)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)

Via Ollama (Locally, Because Paranoia):

# Installation
ollama pull OpenNix/wazuh-llama-3.1-8B-v1

# Interactive mode
ollama run OpenNix/wazuh-llama-3.1-8B-v1

# Via API
curl http://localhost:11434/api/generate -d '{
  "model": "OpenNix/wazuh-llama-3.1-8B-v1",
  "prompt": "Analyze this Wazuh alert: Multiple failed SSH login attempts from IP 45.142.120.10",
  "stream": false
}'
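If you'd rather make that API call from Python than curl, here's a minimal stdlib sketch (the payload format matches the curl example above; `query_ollama` and `build_payload` are my own names, not part of any SDK):

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"

def build_payload(model: str, prompt: str) -> bytes:
    """Same JSON body as the curl example, with streaming disabled."""
    return json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()

def query_ollama(model: str, prompt: str) -> str:
    """POST to a local Ollama server and return the generated text."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=build_payload(model, prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        # Non-streaming responses put the full generation in the "response" field
        return json.loads(resp.read())["response"]

# query_ollama("OpenNix/wazuh-llama-3.1-8B-v1",
#              "Analyze this Wazuh alert: Multiple failed SSH login attempts from IP 45.142.120.10")
```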

What You’ll Get in Response:

Response:
1. Threat level: High
2. Attack type: Password attack
3. Recommended actions: "Verify user credentials and password", "Enable two-factor authentication", "Update system to latest version", "Check for security updates", etc.
4. Additional information: "IP address 45.142.120.10" is the IP address from which failed login attempts originated. "sshd authentication failure" is the rule description, indicating the attack is targeting the SSH server.

Analysis:
This alert indicates a password attack, as the rule has a threat level of 10, suggesting a high level of danger. Therefore, recommended actions include verifying user credentials and password, enabling two-factor authentication, and updating the system to the latest version. The additional information that IP address 45.142.120.10 is where the failed login attempts originated suggests the attack came from this IP address.

Reasoning Skills:
To respond to this alert, analytical and reasoning skills are required. Specifically, you need to:

* Identify the attack type and its objective.
* Assess the threat level and take appropriate protective measures.
* Verify user credentials and password.
* Enable two-factor authentication.
* Update the system to the latest version.
* Check for security updates.

Thus, this alert requires analytical and reasoning skills to identify the attack type, assess the threat level, and take protective measures. Recommended actions include verifying user credentials and password, enabling two-factor authentication...

When to Use?

  • Need quick analysis of Wazuh alerts
  • Want to automate event triage
  • Require MITRE ATT&CK mapping
  • Need specific recommendations, not generic phrases

Model #2: AWS Security Analyst — For Those Who’ve Accepted the Cloud

Where to Find:

Why Is Cloud Security Such a Pain?

Because AWS generates logs faster than you can read them. CloudTrail, GuardDuty, Security Hub, VPC Flow Logs, WAF, Macie, Inspector, Config… and that’s just the beginning of the list.

This model is trained on 20+ AWS security sources and understands what each event means in the context of your infrastructure.

Technical Specifications:

  • Base: meta-llama/Llama-3.1-8B-Instruct
  • Trained on: 16,448 AWS security event samples
  • AWS Sources: 20+ services (CloudTrail, GuardDuty, Security Hub, etc.)
  • MITRE ATT&CK: 135 cloud techniques, 14 tactics
  • Compliance: 195 items (CIS, PCI-DSS, HIPAA, GDPR, FedRAMP, NIST)
  • Attack Scenarios: 20 multi-step attack patterns
  • Size: 4.3GB (Ollama version, Q4_0 quantization)

What Can It Do?

The model analyzes events from:

Service        | What It Catches
CloudTrail     | 74 API call types (including those root makes at 3 AM)
GuardDuty      | 86 finding types (from crypto mining to data exfiltration)
Security Hub   | Compliance violations (CIS, PCI-DSS, HIPAA)
VPC Flow Logs  | 5 attack patterns (port scanning, DDoS, etc.)
WAF            | Web application attacks (SQLi, XSS, and other “classics”)
Macie          | Data sensitivity findings (when an S3 bucket decides to go public)
Inspector      | Vulnerability findings (zero-days and not-so-zero)
Config         | Configuration drift (someone disabled encryption, surprise!)
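Whichever service an event comes from, the model only ever sees text, so a thin normalizer that flattens an event into the prompt shape keeps a multi-source pipeline uniform. A sketch (the field names mirror this post's examples, not any official schema):

```python
def event_to_prompt(event: dict) -> str:
    """Render an AWS security event as an analysis prompt for the model."""
    fields = "\n".join(f"{key}: {value}" for key, value in event.items())
    return (
        "Analyze this AWS security event:\n\n"
        + fields
        + "\n\nProvide:\n"
          "1. Threat assessment\n"
          "2. MITRE ATT&CK techniques\n"
          "3. Compliance impact\n"
          "4. Recommended actions\n"
    )

prompt = event_to_prompt({
    "Event Source": "GuardDuty",
    "Finding Type": "UnauthorizedAccess:EC2/SSHBruteForce",
    "Severity": 8.0,
})
```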

Usage Examples

Via HuggingFace:

from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("pyToshka/aws-security-analyst")
tokenizer = AutoTokenizer.from_pretrained("pyToshka/aws-security-analyst")

prompt = """Analyze this AWS security event:

Event Source: GuardDuty
Finding Type: UnauthorizedAccess:EC2/SSHBruteForce
Severity: 8.0
Resource: EC2 instance i-1234567890abcdef0
Source IP: 45.142.120.10

Provide:
1. Threat assessment
2. MITRE ATT&CK techniques
3. Compliance impact
4. Recommended actions
"""

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

Via Ollama:

# Installation
ollama pull OpenNix/aws-security-assistant

# Analyze CloudTrail event
ollama run OpenNix/aws-security-assistant \
  "Analyze: Root user attached AdministratorAccess policy at 3 AM"

# Via API
curl http://localhost:11434/api/generate -d '{
  "model": "OpenNix/aws-security-assistant",
  "prompt": "Analyze GuardDuty finding: Cryptocurrency mining detected on EC2",
  "stream": false
}'

Use Cases (When This Model Will Save Your Day)

  1. GuardDuty Findings Interpretation
    • Got a finding but don’t know how serious it is?
    • Model explains severity, MITRE techniques, and what to do
  2. CloudTrail Event Investigation
    • Someone created an IAM user with admin rights at 2 AM?
    • Model tells you why that’s suspicious (or not)
  3. Compliance Violations Detection
    • CIS benchmark failed? PCI-DSS violation?
    • Model explains which requirement is violated and how to fix it
  4. Multi-Source Event Correlation
    • CloudTrail + GuardDuty + VPC Flow Logs = ???
    • Model pieces together the puzzle and shows the full attack picture
  5. Incident Response Planning
    • Don’t know where to start?
    • Model provides a step-by-step response plan
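For the correlation case, the simplest workable approach is to concatenate the related events into one context block before asking for the full picture. A sketch, with invented sample events:

```python
def correlation_prompt(events: list[dict]) -> str:
    """Stitch events from multiple AWS sources into one correlation request."""
    chunks = []
    for i, ev in enumerate(events, 1):
        body = "\n".join(f"  {k}: {v}" for k, v in ev.items())
        chunks.append(f"Event {i} ({ev.get('source', 'unknown')}):\n{body}")
    return (
        "Correlate these AWS security events and describe the attack chain:\n\n"
        + "\n\n".join(chunks)
    )

prompt = correlation_prompt([
    {"source": "CloudTrail", "event": "ConsoleLogin", "ip": "185.220.101.45"},
    {"source": "GuardDuty", "finding": "UnauthorizedAccess:EC2/SSHBruteForce"},
    {"source": "VPC Flow Logs", "pattern": "port scan from internal host"},
])
```

With a 128K context window you can afford to be generous with the evidence you include.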

Comparison Table: Which One to Choose?

Parameter        | Wazuh Assistant         | AWS Security Analyst
Specialization   | Wazuh security events   | AWS cloud security
Trained on       | Wazuh rules & alerts    | 20+ AWS services
Training samples | ~2000                   | 16,448
MITRE ATT&CK     | General                 | Cloud-specific (135 techniques)
Compliance       | Basic                   | Advanced (CIS, PCI-DSS, HIPAA, GDPR, FedRAMP, NIST)
Languages        | EN, RU, ES              | EN, RU, ES
Size (Ollama)    | 4.7GB                   | 4.3GB
Context window   | 128K                    | 128K
Best for         | On-prem/hybrid security | Cloud-native AWS

Installation and Setup: Three Ways

Method 1: HuggingFace Transformers (For Control Enthusiasts)

# For Wazuh
from transformers import AutoModelForCausalLM, AutoTokenizer

wazuh_model = AutoModelForCausalLM.from_pretrained("pyToshka/wazuh-llama-3.1-8b-assistant")
wazuh_tokenizer = AutoTokenizer.from_pretrained("pyToshka/wazuh-llama-3.1-8b-assistant")

# For AWS
aws_model = AutoModelForCausalLM.from_pretrained("pyToshka/aws-security-analyst")
aws_tokenizer = AutoTokenizer.from_pretrained("pyToshka/aws-security-analyst")

Method 2: Ollama (For Simplicity Lovers)

# Install Ollama
curl -fsSL https://ollama.com/install.sh | sh

# Wazuh Assistant
ollama pull OpenNix/wazuh-llama-3.1-8B-v1
ollama run OpenNix/wazuh-llama-3.1-8B-v1

# AWS Security Analyst
ollama pull OpenNix/aws-security-assistant
ollama run OpenNix/aws-security-assistant

Method 3: Docker + Ollama (For Production)

# docker-compose.yml
services:
  ollama:
    image: ollama/ollama:latest
    ports:
      - "11434:11434"
    volumes:
      - ./ollama:/root/.ollama
    restart: always
    environment:
      - OLLAMA_KEEP_ALIVE=24h
      - OLLAMA_HOST=0.0.0.0

# Start the stack
docker-compose up -d

# Pull models
docker exec -it ollama bash
ollama pull OpenNix/wazuh-llama-3.1-8B-v1
ollama pull OpenNix/aws-security-assistant

Wazuh Integration (Because You Can!)

Remember my series on integrating Wazuh with Ollama?

These models integrate easily into your Wazuh cluster:

#!/var/ossec/framework/python/bin/python3
import sys

from ollama import Client

def analyze_wazuh_alert(alert):
    client = Client(host='http://localhost:11434')

    response = client.chat(
        model='OpenNix/wazuh-llama-3.1-8B-v1',
        messages=[{
            'role': 'user',
            'content': f'Analyze this Wazuh alert: {alert}'
        }]
    )

    return response['message']['content']

if __name__ == "__main__":
    alert = sys.argv[1]
    analysis = analyze_wazuh_alert(alert)
    send_to_wazuh(analysis)  # placeholder: your integration's helper for writing results back
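To have Wazuh actually invoke a script like this, you register it as a custom integration in ossec.conf. A sketch of the block (the integration name and level threshold here are illustrative; check the Wazuh custom-integrations docs for the exact contract, including where the script must live and how the alert file path is passed in):

```xml
<!-- inside <ossec_config>: run the analyzer for alerts of level 10 and up -->
<integration>
  <name>custom-ollama-analyzer</name>
  <level>10</level>
  <alert_format>json</alert_format>
</integration>
```

Gating on alert level keeps the model from chewing through every low-severity event in the cluster.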

Real-World Use Cases (Examples from Life)

Scenario 1: SSH Brute-force Detection

Input (Wazuh Assistant):

Multiple failed SSH login attempts from 103.85.24.45
Rule: 5710 | Level: 10 | sshd: authentication failure

Output:

Threat Level: 12/15 - High Severity
Attack Type: SSH Brute-force (MITRE T1110.001)
Risk: Active credential compromise attempt
Actions:
1. Block IP immediately via firewall/fail2ban
2. Check for successful logins from same /24 subnet
3. Enable 2FA if not configured
4. Review SSH key-based auth configuration
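Action #2 above (checking for successful logins from the same /24) is easy to script with the standard library; here's a sketch of just the subnet membership check, with the log parsing left as an exercise:

```python
import ipaddress

def same_slash24(attacker_ip: str, login_ip: str) -> bool:
    """True if login_ip falls inside the attacker's /24 subnet."""
    subnet = ipaddress.ip_network(f"{attacker_ip}/24", strict=False)
    return ipaddress.ip_address(login_ip) in subnet

print(same_slash24("103.85.24.45", "103.85.24.199"))  # True
print(same_slash24("103.85.24.45", "103.85.25.1"))    # False
```

Attackers rotating through a rented subnet is common enough that the /24 check catches things a single-IP block would miss.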

Scenario 2: AWS Privilege Escalation

Input (AWS Security Analyst):

CloudTrail Event: AttachUserPolicy
User: dev-user
Policy: AdministratorAccess
Time: 02:47 AM UTC
Source IP: 185.220.101.45 (Tor exit node)

Output:

Threat Assessment: CRITICAL
MITRE ATT&CK: T1098.001 - Account Manipulation: Additional Cloud Credentials
Compliance Impact:
- CIS AWS Foundations: 1.16 (IAM policies)
- PCI-DSS: 7.1, 7.2 (Access control)

Attack Scenario: Privilege Escalation via Policy Attachment
1. Compromised dev-user account (low privileges)
2. Attached AdministratorAccess (privilege escalation)
3. Source IP is Tor node (attacker hiding identity)

Immediate Actions:
1. REVOKE AdministratorAccess from dev-user NOW
2. Rotate dev-user credentials immediately
3. Review CloudTrail for other actions from this IP
4. Check for data exfiltration (S3 GetObject, etc.)
5. Enable MFA delete on critical S3 buckets
6. Implement SCPs to prevent policy attachment by non-admin users
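Action #6 (an SCP preventing policy attachment by non-admins) could look roughly like this; the admin-role ARN pattern is a placeholder for your own break-glass role, so treat this as a starting point, not a drop-in policy:

```json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "DenyPolicyAttachmentExceptAdmins",
    "Effect": "Deny",
    "Action": ["iam:AttachUserPolicy", "iam:AttachRolePolicy"],
    "Resource": "*",
    "Condition": {
      "StringNotLike": {"aws:PrincipalArn": "arn:aws:iam::*:role/AdminRole"}
    }
  }]
}
```

With that in place, the 2:47 AM AttachUserPolicy call from a Tor exit node fails with an access-denied error instead of handing out AdministratorAccess.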

Limitations (Being Honest About Shortcomings)

Both Models:

  • Don’t replace humans (I tried, but no) — these are assistants, not autopilots
  • Require verification — always check recommendations before applying
  • Context window is limited — 128K is a lot, but not infinite
  • May hallucinate — like any LLM, sometimes makes up details

Wazuh Assistant:

  • Doesn’t know anything about custom Wazuh rules (only official ones)

AWS Security Analyst:

  • Won’t work with Azure or GCP
  • Synthetic data may not cover all cases, but I tried my best

FAQ (Frequently Asked Questions)

Q: Is this free?
A: Yes, both models are free to use under the Llama 3.1 Community License (openly available weights, though not open-source in the strict OSI sense)

Q: Can I use in commercial projects?
A: Yes, but read the LLaMA 3.1 license — there are nuances

Q: Which model is better?
A: Depends on your use case. Wazuh for on-prem/hybrid, AWS for cloud

Q: Does it work offline?
A: Yes! Especially via Ollama — everything local, no API calls

Q: Can I fine-tune further on my data?
A: Of course! These are LoRA models, easily retrained

Q: What about privacy?
A: All inference is local (if via Ollama), no data goes anywhere

Q: Multi-GPU support?
A: HuggingFace version — yes, Ollama version — not yet

Q: Production latency?
A: 1-3s on GPU, 10-30s on CPU (depends on hardware and response length)


Models:

Wazuh Security Assistant:

AWS Security Analyst:


See also