ClearNebel Agent Builder

Released: July 29, 2025
Author: samuelgiger
Repository: https://github.com/ClearNebel/Agent-Builder

A tool for running and fine-tuning local LLMs, creating datasets, and routing requests to public LLMs. It equips local LLMs with tool-calling capabilities.

ProjectClearNebelBanner.png

Currently, this project is a proof of concept addressing issues I have observed with the integration of various LLM models.
DISCLAIMER: This is not intended to be deployed directly internet-facing!
The use of LLM models is growing, and with it the associated data security concerns, as publicly available LLM providers slowly run out of training data. As users and businesses integrate LLMs into their workflows and daily operations, internal data may find its way to LLM providers, where, linked to private accounts, it could be used for training new models. Instead of trying to prevent access and creating internal rules (which are mostly followed, but sometimes willingly or unwillingly ignored by users), this project aims to give users the best possible option to run locally hosted, open-weight models for their own use and to fine-tune them. There should also be multiple LLMs that can be fine-tuned for specific tasks, equipped with tools/functions to gather data from approved sources.

Components

The setup consists of 3 components:
  1. Worker: This component can be deployed on multiple hosts, depending on hardware and redundancy requirements.
  2. Redis: A queue for currently running tasks, also used to store responses from the LLMs.
  3. Web: A web interface for users to interact with the system, access the models, and create datasets.

ProjectClearNebelComponents.png

Worker

The Worker is responsible for running the LLMs, executing tools, and enforcing the execution policies for the public LLMs. It fetches requests from the Redis queue and, after execution, stores the response back in Redis.
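The fetch/execute/store cycle can be sketched as follows. This is a minimal illustration, not the actual worker.py: the queue names, the payload shape, and the in-memory FakeRedis stand-in (used here so the sketch runs without a server) are all assumptions. Real code would use a redis-py client with equivalent blpop/set calls.

```python
import json

class FakeRedis:
    """In-memory stand-in for a Redis client (real code would use redis-py)."""
    def __init__(self):
        self.lists, self.kv = {}, {}
    def rpush(self, key, value):
        self.lists.setdefault(key, []).append(value)
    def blpop(self, key, timeout=0):
        items = self.lists.get(key)
        return (key, items.pop(0)) if items else None
    def set(self, key, value):
        self.kv[key] = value

def run_llm(prompt):
    # Placeholder for the actual model call.
    return f"echo: {prompt}"

def work_once(client, queue="clearnebel:requests"):
    """Fetch one request from the queue, run it, and store the response."""
    item = client.blpop(queue, timeout=1)
    if item is None:
        return False
    request = json.loads(item[1])
    response = run_llm(request["prompt"])
    # Store the result under a per-request key the web side can poll.
    client.set(f"clearnebel:response:{request['id']}", json.dumps({"text": response}))
    return True

r = FakeRedis()
r.rpush("clearnebel:requests", json.dumps({"id": "42", "prompt": "hello"}))
work_once(r)
print(r.kv["clearnebel:response:42"])
```

In the real deployment, the worker would run this loop continuously and block on the queue instead of returning when it is empty.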

Redis

Redis is used to store requests from the web interface, responses from the LLMs, and information about currently running tasks. It is a key-value store that can be accessed by multiple hosts.

Web

The Web Interface provides a chat interface to validate and correct the models' responses, which in turn generates datasets. It also provides an API so that a separate UI component can interact with the LLM models.

Architecture

After the brief component overview, this section describes the architecture in more detail. Project Clear Nebel contains a web frontend for interacting with the configuration and the LLMs. SQLite is used as the database to store interactions. Each request is queued in Redis. The Worker fetches requests from Redis, checks each request, and, if allowed, sends it to a locally hosted or public LLM.
ProjectClearNebelArchitecture.png

Web Interface

The Web Interface provides access to the following functions over HTTP/HTTPS:
  • API: Enables other applications to make requests to local or public LLMs with safeguards in place.
  • Chat: Enables users to chat with enabled LLMs and allows them to vote on the responses for further fine-tuning.
  • Management:
    • User Management: Allows the creation of users, setting rate limits, and controlling access to specific models. Also enables safeguards for public LLMs.
    • Model Configuration: Displays model settings such as the master prompt, base model, and available tools.
    • Feedback Curation: Allows reevaluation of downvoted responses and the setting of new routes or updated responses. This data is used to further fine-tune local models.
    • Dataset Creator: Enables the creation of custom datasets to fine-tune models based on a "User Prompt" and an "Ideal Response" from the LLM.
    • Analytics: Displays model usage statistics and user request counts.
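The API mentioned above can be sketched from the client side with the standard library. The endpoint path, the bearer-token header, and the payload fields are assumptions for illustration; consult the actual API for the real contract. The request is only constructed here, not sent.

```python
import json
import urllib.request

API_URL = "https://clearnebel.example/api/chat/"   # hypothetical endpoint
API_TOKEN = "your-api-token"                       # issued by the web interface

payload = {"prompt": "Hello", "provider": "local"}
req = urllib.request.Request(
    API_URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {API_TOKEN}",    # assumed auth scheme
    },
    method="POST",
)
# urllib.request.urlopen(req) would send it; omitted here.
print(req.get_method(), req.full_url)
```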

Redis

The Redis instance is the middle component between the web interface and the worker. It stores requests from the web interface and allows fetching the responses from local LLMs.
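As an illustration, a queued request and its stored response might look like the following. The field names are assumptions for the sketch; the actual keys used by the project may differ.

```python
import json

# Hypothetical shape of a request the web interface pushes onto the queue.
request = {
    "id": "a1b2c3",
    "user": "alice",
    "provider": "local",          # or "openai" / "google" if permitted
    "agent": "new_agent_name",
    "prompt": "Summarise today's alerts",
    "params": {"temperature": 0.7, "top_p": 0.9},
}

# Hypothetical shape of the response the worker writes back.
response = {"id": "a1b2c3", "text": "...", "model": "local/new_agent_name"}

print(json.dumps(request, indent=2))
```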

Worker

The Worker enables interaction with fine-tuned models and tool calls. It can run on different hardware or span multiple instances of locally loaded models that respond to custom requests. It can also call publicly available LLMs if configured.
  • Personally Identifiable Information (PII): The Worker first checks whether any PII is present; if it is, it forces the request to be handled by a local LLM.
  • Public LLM: If the public LLM is allowed, it is called, and its response is stored in the Redis instance.
  • Local LLM:
    • Router LLM: The Router LLM decides which locally fine-tuned model should respond to the request.
    • Local Agent Call: After the Router LLM responds, the local agent is called, which can execute a locally available tool.
    • Tool Call: Each local agent can be configured with tools that allow it to gather more information based on the request. After executing the tool, the LLM is recalled with the tool's results.
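The decision flow above can be sketched as plain Python. The PII patterns, agent names, and tool wiring are illustrative assumptions, not the project's actual configuration.

```python
import re

# Assumed PII patterns; the real patterns live in the project's config.
PII_PATTERNS = [
    re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),   # email address
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),         # US SSN-style number
]

def contains_pii(prompt: str) -> bool:
    return any(p.search(prompt) for p in PII_PATTERNS)

def route(prompt: str, public_allowed: bool) -> str:
    # 1. PII check: force local handling if anything matches.
    if contains_pii(prompt):
        return "local"
    # 2. Otherwise a public LLM may be used, if the user is allowed one.
    return "public" if public_allowed else "local"

def handle_locally(prompt: str) -> str:
    # 3. The Router LLM picks the agent (stubbed here), the agent may call
    #    a tool, then the agent is re-invoked with the tool's result.
    agent = "support_agent" if "ticket" in prompt else "general_agent"  # stub router
    tool_result = "tool-data"                                           # stub tool call
    return f"{agent} answered using {tool_result}"

print(route("mail me at a@b.com", public_allowed=True))  # prints local
```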


Installation

Prerequisites

  • Python 3.11+ and pip.
  • A running Redis server on localhost:6379.
  • An NVIDIA GPU with CUDA installed for hardware acceleration.
  • Tested on Debian 12 (Bookworm)

Installation Steps (Native Python)

  1. Create and update the ./agent/.env file:
    # Copy the .env-example to .env file:
    cp .env-example .env
    SECOVERVIEW_ENDPOINT=https://SECOVERVIEW_IP
    SECOVERVIEW_USERNAME=username
    SECOVERVIEW_PASSWORD=password
    SECOVERVIEW_PASSWORD_UPDATE_CYCLE=167
    NUM_WORKERS=1 # Set the number of possible worker threads here; default is 1
    
    OPENAI_API_KEY="your-openai-api-key"
    GOOGLE_API_KEY="your-google-api-key"
    
    REDIS_HOST=localhost # Set Redis host here; default is localhost
    REDIS_PORT=6379
    REDIS_PASSWORD=your_strong_redis_password_here
    REDIS_DB=0
  2. Create and update the ./web/.env file:
    # Copy the .env-example to .env file:
    cp .env-example .env
    DEBUG=False
    SECRET_KEY=KEY_VALUE # Create a secret key that will be used to encrypt passwords and generate the API token. This should be at least 24 characters long with lowercase letters, numbers, and symbols.
    
    REDIS_HOST=localhost
    REDIS_PORT=6379
    REDIS_PASSWORD=your_strong_redis_password_here
    REDIS_DB=0
  3. Create Virtual Environment & Install Dependencies:
    # Create and activate the virtual environment
    python -m venv .venv
    source .venv/bin/activate
    
    # Install all required packages from both applications
    pip install -r requirements.txt
  4. Hugging Face Authentication:
    huggingface-cli login
  5. Initial Application Setup:
    cd web
    python manage.py makemigrations
    python manage.py migrate
    python manage.py createsuperuser
    python manage.py collectstatic
    
    cd ../agent
    python -m rag.build_index
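Step 2 above asks for a SECRET_KEY of at least 24 characters with lowercase letters, numbers, and symbols. One stdlib way to generate a suitable value (the exact character set is a reasonable choice, not mandated by the project):

```python
import secrets
import string

# Lowercase letters, digits, and a few symbols, as the README suggests.
ALPHABET = string.ascii_lowercase + string.digits + "!@#$%^&*(-_=+)"

def make_secret_key(length: int = 50) -> str:
    """Generate a random key using a cryptographically secure source."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

print(make_secret_key())
```

Paste the output into the SECRET_KEY= line of ./web/.env.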

Installation Steps (Nginx & Worker Service)

  1. Install and configure Nginx:
    sudo apt install nginx
  2. Create and update the ./agent/.env file:
    # Copy the .env-example to .env file:
    cp .env-example .env
    SECOVERVIEW_ENDPOINT=https://SECOVERVIEW_IP
    SECOVERVIEW_USERNAME=username
    SECOVERVIEW_PASSWORD=password
    SECOVERVIEW_PASSWORD_UPDATE_CYCLE=167
    NUM_WORKERS=1 # Set the number of possible worker threads here; default is 1
    
    OPENAI_API_KEY="your-openai-api-key"
    GOOGLE_API_KEY="your-google-api-key"
    
    REDIS_HOST=localhost # Set Redis host here; default is localhost
    REDIS_PORT=6379
    REDIS_PASSWORD=your_strong_redis_password_here
    REDIS_DB=0
  3. Create and update the ./web/.env file:
    # Copy the .env-example to .env file:
    cp .env-example .env
    DEBUG=False
    SECRET_KEY=KEY_VALUE # Create a secret key that will be used to encrypt passwords and generate the API token. This should be at least 24 characters long with lowercase letters, numbers, and symbols.
    
    REDIS_HOST=localhost
    REDIS_PORT=6379
    REDIS_PASSWORD=your_strong_redis_password_here
    REDIS_DB=0
  4. Create Virtual Environment & Install Dependencies:
    # Create and activate the virtual environment
    python -m venv .venv
    source .venv/bin/activate
    
    # Install all required packages from both applications
    pip install -r requirements.txt
  5. Hugging Face Authentication:
    huggingface-cli login
  6. Initial Application Setup:
    cd web
    python manage.py makemigrations
    python manage.py migrate
    python manage.py createsuperuser
    python manage.py collectstatic
    
    cd ../agent
    python -m rag.build_index
  7. Add a user to run the web frontend and worker:
    sudo useradd -m -s /bin/bash clearnebel
  8. Create the service file for the Web Interface /etc/systemd/system/clearnebel-web.service
    [Unit]
    Description=ClearNebel instance to serve Django Web project
    After=network.target
    
    [Service]
    User=clearnebel
    Group=www-data
    WorkingDirectory=/path/to/app/web
    ExecStart=/path/to/app/web/.venv/bin/gunicorn --workers 3 --bind unix:/path/to/app/web/web.sock web.wsgi:application --timeout 3600
    
    [Install]
    WantedBy=multi-user.target
  9. Create the service file for the Worker /etc/systemd/system/clearnebel-worker.service
    [Unit]
    Description=ClearNebel Worker Service
    After=network.target
    
    [Service]
    User=clearnebel
    WorkingDirectory=/path/to/app/agent
    ExecStart=/path/to/app/web/.venv/bin/python /path/to/app/agent/worker.py
    Restart=always
    Environment=PYTHONUNBUFFERED=1
    
    [Install]
    WantedBy=multi-user.target
  10. Change ownership of the project folder to the service user:
    sudo chown -R clearnebel:www-data /path/to/app
  11. Enable and start the services:
    sudo systemctl daemon-reload
    
    sudo systemctl enable clearnebel-worker.service
    sudo systemctl start clearnebel-worker.service
    
    sudo systemctl enable clearnebel-web.service
    sudo systemctl start clearnebel-web.service
  12. Create a self-signed SSL certificate for HTTPS:
    # Create directory for SSL certificate
    sudo mkdir -p /etc/nginx/ssl
    
    # Create SSL certificate
    sudo openssl req -x509 -nodes -days 3650 \
      -newkey rsa:2048 \
      -keyout /etc/nginx/ssl/selfsigned.key \
      -out /etc/nginx/ssl/selfsigned.crt \
      -subj "/C=CH/ST=Zurich/L=Zurich/O=ClearNebel/OU=ClearNebel/CN=ClearNebel"
  13. Create the Nginx config under /etc/nginx/sites-available/clearnebel:
    server {
        listen 80;
        server_name _;
        return 301 https://$host$request_uri;
    }
    
    server {
        listen 443 ssl;
        server_name _;
    
        ssl_certificate /etc/nginx/ssl/selfsigned.crt;
        ssl_certificate_key /etc/nginx/ssl/selfsigned.key;
    
        ssl_protocols TLSv1.2 TLSv1.3;
        ssl_ciphers HIGH:!aNULL:!MD5;
    
        location / {
            include proxy_params;
            proxy_pass http://unix:/path/to/app/web/web.sock;
            proxy_read_timeout 3600s;
            proxy_connect_timeout 60s;
            proxy_send_timeout 3600s;
        }
    
        location /static/ {
            alias /path/to/app/web/staticfiles/;
        }
    }
    Enable the created config and remove the default one:
    sudo ln -s /etc/nginx/sites-available/clearnebel /etc/nginx/sites-enabled
    sudo rm -f /etc/nginx/sites-enabled/default
    After enabling the config, test it and reload the Nginx service:
    sudo nginx -t && sudo systemctl reload nginx
You can now access the application by opening your browser at https://your-ip/.

Running the Application (Native Python)

To run the full application on one host, you will need two separate terminals.

Terminal 1: Start the AI Worker Pool

cd /path/to/your_project/agent
python worker.py

Terminal 2: Start the Django Web Server

cd /path/to/your_project/web
python manage.py runserver
You can now access the application by opening your browser at http://127.0.0.1:8000/.

User and Admin Guide (Web Interface)

  • Login: A login page appears when you first open the application.
  • Chat Interface: The main interface is at /chat/.
  • Expert Mode: Use the gear icon (⚙️) to open advanced settings.
  • Feedback: Users can rate responses via thumbs up/down buttons.
  • Admin Panel: Admins can access:
    • User Management: Add, delete, and update user accounts and roles.
    • Agent Configuration
    • Feedback Curation

Backend Administration CLI (manage_agent.py)

All backend management tasks are handled by this tool. Run these commands from the agent/ directory.

System Configuration

# View the current config.yaml
python manage_agent.py config show

# Set a new base model for the system (requires re-training adapters)
python manage_agent.py config set-base-model "google/gemma-3-4b-it"

Agent Management

# Workflow: Create agent scaffolding first...
python manage_agent.py agents create "new_agent_name"

# ...then generate its prompt with AI assistance.
python manage_agent.py agents create-prompt "new_agent_name"

# Other commands
python manage_agent.py agents list
python manage_agent.py agents delete "agent_to_delete"

Training Management

# Initial Supervised Fine-Tuning (SFT)
python manage_agent.py train run new_agent_name
python manage_agent.py train run router

# Direct Preference Optimization (DPO) after collecting and exporting feedback
python manage_agent.py train dpo new_agent_name

User Guide

Chat

After accessing and logging in to the web app, the chat view will be shown. ClearNebelChat.png
The Chat view contains the following options:
  • Chat Interface: Enter text in the input field and hit the "Send" button on the right or press "Ctrl+Enter".
  • Expert Mode: Use the gear icon (⚙️) to open advanced settings.
    • Model Selection: Select the model with which the message should be executed (Local Agent System, OpenAI, Google).
    • Enabled Local Agent: List of all the locally defined agents.
    • Temperature and Top P: Temperature and top_p parameters to control the randomness of responses.
  • Feedback: Users can rate responses via thumbs up/down buttons.
  • Admin Panel: Admins can access:
    • User Management: Add, delete, and update user accounts.
    • Agent Configuration: View Local LLM Agent Configuration.
    • Feedback Curation: Correct downvoted feedback to fine-tune the model further.
    • Analytics: View all LLM calls by user or LLM.
    • Dataset Creator: Add new entries to fine-tune the local models.

User Management

The User Management allows you to create users, set limits on how often public LLMs can be called, and reroute public LLM calls to local LLMs if certain conditions are met. ClearNebelUserManagement.png
The User Management allows control over the following features:
  • Select LLM Provider: Allows you to set the available LLM providers for the user. Also set a daily rate limit to prevent high-cost generation by one user.
  • Local Sub-Agent Permissions: Allows you to set the enabled Local Agents per user.
  • Safety & Compliance:
    • Force to Local System on PII Detection: If enabled, any prompt to a public LLM containing a pattern defined in the config will be rerouted to the local LLM agents.
    • Block Dangerous Content: If enabled, any prompt to, or response from, an LLM provider containing a blocked word will be blocked.
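A minimal sketch of the "Block Dangerous Content" check, assuming a simple word blocklist (the project's actual matching rules may be more sophisticated):

```python
# Assumed blocklist; the real list comes from the project's configuration.
BLOCKED_WORDS = {"exploit-kit", "credential-dump"}

def is_blocked(text: str) -> bool:
    """Return True if any blocked word occurs in the prompt or response."""
    lowered = text.lower()
    return any(word in lowered for word in BLOCKED_WORDS)

print(is_blocked("please build an exploit-kit"))  # prints True
```

The same predicate would run on both the outgoing prompt and the incoming response, since the feature blocks in both directions.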

Model Configuration

The Model Configuration displays the configuration of a local LLM agent: its function, master prompt, and base model, all extracted from config.yaml or the master prompt file. ClearNebelModelConfigruation.png
The Model Configuration will display the following features:
  • Base Model: The model which was fine-tuned for the use case.
  • Prompt File: Displays the path to the prompt file.
  • Assigned Tools: Displays all the tools available for the LLM to get further data.
  • Prompt Content: Displays the master prompt for the specific agent.

Feedback Curation

The Feedback Curation helps create a fine-tuning (DPO) dataset for the model, in which the expected model output can be specified. Downvoted responses are listed here. ClearNebelFeedbackCuration.png
The Feedback Curation has the following features:
  • Conversation Context: The full chat history, to help judge what the ideal response should look like.
  • Correct Route: If the wrong local agent was selected by the router, the router can be retrained on this dataset.
  • Corrected Response: If the model provided the wrong response, a correct one can be entered here.
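A curated entry typically becomes a preference triple. The field names below follow the common DPO convention (prompt / chosen / rejected); the project's actual export format is an assumption here.

```python
import json

# Hypothetical exported DPO record built from one curated feedback item.
record = {
    "prompt": "The user prompt that was downvoted.",
    "chosen": "The corrected response entered by the curator.",
    "rejected": "The original downvoted model response.",
}

print(json.dumps(record))
```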

Dataset Creator

The Dataset Creator helps create a supervised fine-tuning (SFT) dataset for the model, which establishes a baseline for the model's responses. ClearNebelDatasetCreator.png
The Dataset Creator has the following features:
  • Add New Example:
    • User Prompt: Expected user prompt.
    • Ideal Agent Response: Best response of the agent for the user prompt.
  • Export to File: Exports the dataset to the dataset directory to fine-tune the model.
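Exported SFT examples are commonly stored one JSON object per line (JSON Lines); the exact field names used by the exporter are assumed here for illustration.

```python
import json

# Hypothetical exported examples: one user prompt paired with its ideal response.
examples = [
    {"prompt": "What does the worker do?", "response": "It runs the LLMs and executes tools."},
    {"prompt": "Where are requests queued?", "response": "In Redis."},
]

# JSON Lines: one example per line, as many fine-tuning pipelines expect.
jsonl = "\n".join(json.dumps(e) for e in examples)
print(jsonl)
```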
