Home NovaAstrax 360 Fastino Labs Open-Sources GLiGuard: A 300M Parameter Safety Moderation Model That Matches...

    Fastino Labs Open-Sources GLiGuard: A 300M Parameter Safety Moderation Model That Matches or Exceeds Accuracy of Models 23–90x Its Size

    2
    0


    As LLM-powered applications move into production — and as AI agents take on more consequential tasks like browsing the web, writing and executing code, and interacting with external services — safety moderation has quietly become one of the most operationally expensive parts of the stack.

    Most developers who’ve deployed a production LLM system know the problem: you need to evaluate every user prompt before it reaches the model, and every model response before it reaches the user. That means your guardrail model runs on every single request, at every turn of a conversation. The guardrail latency compounds. The cost compounds. And the current generation of open-source guardrail models — LlamaGuard4 (12B), WildGuard (7B), ShieldGemma (27B), NemoGuard (8B) — are all decoder-only models with billions of parameters, built for flexibility but not for speed.

    Fastino Labs released GLiGuard, a 300 million parameter open-source safety moderation model designed to address this specific problem. GLiGuard evaluates multiple safety dimensions in a single pass, and across nine safety benchmarks, its accuracy matches or exceeds models that are 23 to 90 times its size while running up to 16 times faster.

    Why Decoder LLMs May Not Be the Right Tool for Safety Moderation

    To understand what makes GLiGuard different, it helps to understand why existing guardrail models are slow. Most major guardrail models are built on decoder-only transformer architectures, they generate their safety verdicts autoregressively, one token at a time — the same way a large language model generates a response to a chat message.

    This design made sense when safety requirements were fluid. Decoder models can interpret natural language task descriptions and adapt to new safety policies without retraining. But autoregressive generation is inherently sequential, which makes it slow and computationally expensive.

    There’s a compounding problem on top of that. Most guardrail models need to assess inputs across multiple safety dimensions: what type of harm is present, whether the user prompt is attempting to bypass safety training, whether the model’s response is itself unsafe, and so on. Because decoder models generate output sequentially, these assessments are typically produced one after another, and latency compounds as more criteria are evaluated.

    In other words, the architecture that makes decoder models flexible is also the architecture that makes them the wrong tool for what is fundamentally a classification problem.

    What GLiGuard Actually Does

    GLiGuard is a small encoder-based model that reframes safety moderation as a text classification problem rather than a text generation problem. Encoder models process the entire input at once and output a single classification label for a set of fixed labels, whereas decoder models generate their output one token at a time, left to right.

    The key architectural insight is in how GLiGuard handles multiple tasks simultaneously. Instead of generating tokens, GLiGuard encodes both the input text and task definitions (labels) together. These are then fed to the model, which scores every label simultaneously in a single forward pass and returns the highest-scoring label for each task. Because all tasks and their candidate labels are part of the input itself, evaluating additional safety dimensions doesn’t add latency; it simply means including more labels in the input.

    GLiGuard runs four moderation tasks concurrently in one forward pass:

    1. Safety classification (safe / unsafe) — applied to both user prompts before generation and model responses after generation.
    2. Jailbreak strategy detection across 11 strategies, including prompt injection, roleplay bypass, instruction override, and social engineering. If any jailbreak strategy is detected, the prompt is automatically flagged as unsafe.
    3. Harm category detection across 14 categories — violence, sexual content, hate speech, PII exposure, misinformation, child safety, copyright violation, and others. A single input can trigger multiple categories at once.
    4. Refusal detection (compliance / refusal), tracked separately to help measure over-refusal (when a model refuses safe requests) and detect false compliance (when a model appears to comply but doesn’t). If a refusal is detected, the response is automatically marked as safe.

    Training Data and Fine-Tuning

    GLiGuard was trained on a mixture of human-annotated and synthetically generated training data. For prompt safety, response safety, and refusal detection, the team used WildGuardTrain, a dataset of 87,000 human-annotated examples. For harm category and jailbreak strategy detection, labels for the unsafe samples were generated using GPT-4.1.

    During early training, the model struggled to distinguish between similar harm categories like toxic speech and violence, so the team used Pioneer to generate supplemental synthetic data with edge cases targeting these fine-grained distinctions.

    On the architecture side, GLiGuard was trained via full fine-tuning of the GLiNER2-base-v1 checkpoint for 20 epochs using the AdamW optimizer. GLiNER2 is Fastino’s own architecture for multi-task text classification — a natural starting point for a model designed to score multiple label sets in one pass.

    Benchmark Results: Accuracy and Speed

    The research team evaluated GLiGuard across nine established safety benchmarks. These benchmarks cover both prompt and response classification, testing whether a model can identify harmful content, withstand adversarial attacks, distinguish between different types of harm, and avoid over-flagging safe content. Results use macro-averaged F1, a standard metric that balances precision and recall.

    On accuracy:

    • GLiGuard scores 87.7 average F1 on prompt classification, within 1.7 points of the best model (PolyGuard-Qwen at 89.4).
    • It achieves the second-highest average F1 on response classification (82.7), behind only Qwen3Guard-8B (84.1).
    • It outperforms LlamaGuard4-12B, ShieldGemma-27B, and NemoGuard-8B despite being 23–90× smaller.

    On throughput and latency, benchmarked on a single NVIDIA A100 GPU:

    • GLiGuard achieves up to 16.2× higher throughput (133 vs. 8.2 samples/s at batch size 4).
    • GLiGuard achieves up to 16.6× lower latency: 26 ms vs. 426 ms at sequence length 64.

    These are not marginal improvements. At 26 ms per request versus 426 ms, the difference is meaningful in any real-time user-facing application, and the compounding effect across a multi-turn conversation makes the gap even larger in practice.

    Marktechpost’s Visual Explainer

    #gliguard-guide *,#gliguard-guide *::before,#gliguard-guide *::after{box-sizing:border-box!important;margin:0!important;padding:0!important}
    #gliguard-guide hr,#gliguard-guide p:empty,#gliguard-guide del,#gliguard-guide s{display:none!important}
    #gliguard-guide{background:#080808!important;color:#f0f0f0!important;font-family:’DM Sans’,sans-serif!important;border-radius:16px!important;overflow:hidden!important;max-width:860px!important;margin:0 auto!important;border:1px solid #1e1e1e!important;box-shadow:0 24px 80px rgba(0,0,0,0.6)!important;position:relative!important}
    #gliguard-guide .gg-header{background:#080808!important;border-bottom:1px solid #1a1a1a!important;padding:18px 28px!important;display:flex!important;align-items:center!important;justify-content:space-between!important}
    #gliguard-guide .gg-brand{display:flex!important;align-items:center!important;gap:10px!important}
    #gliguard-guide .gg-logo-dot{width:10px!important;height:10px!important;border-radius:50%!important;background:#00C896!important;display:inline-block!important;box-shadow:0 0 8px #00C89666!important}
    #gliguard-guide .gg-brand-name{font-size:12px!important;font-weight:600!important;letter-spacing:0.12em!important;text-transform:uppercase!important;color:#888!important}
    #gliguard-guide .gg-slide-counter{font-size:12px!important;color:#555!important;font-variant-numeric:tabular-nums!important;font-weight:500!important}
    #gliguard-guide .gg-slides-wrap{position:relative!important;overflow:hidden!important}
    #gliguard-guide .gg-slides{display:flex!important;transition:transform 0.45s cubic-bezier(0.65,0,0.35,1)!important}
    #gliguard-guide .gg-slide{min-width:100%!important;padding:40px 44px 44px!important;display:flex!important;flex-direction:column!important;gap:0!important;min-height:360px!important}
    #gliguard-guide .gg-slide-num{font-size:11px!important;font-weight:700!important;letter-spacing:0.18em!important;text-transform:uppercase!important;color:#00C896!important;margin-bottom:14px!important}
    #gliguard-guide .gg-slide-title{font-size:26px!important;font-weight:700!important;line-height:1.25!important;color:#ffffff!important;margin-bottom:18px!important;letter-spacing:-0.02em!important}
    #gliguard-guide .gg-slide-title span{color:#00C896!important}
    #gliguard-guide .gg-slide-body{font-size:15px!important;line-height:1.7!important;color:#aaa!important;max-width:680px!important}
    #gliguard-guide .gg-slide-body strong{color:#e8e8e8!important;font-weight:600!important}
    #gliguard-guide .gg-divider{height:1px!important;background:linear-gradient(90deg,#00C89633,#00C89600)!important;margin:24px 0!important;width:100%!important}
    #gliguard-guide .gg-stat-row{display:flex!important;gap:16px!important;flex-wrap:wrap!important;margin-top:4px!important}
    #gliguard-guide .gg-stat{background:#0f0f0f!important;border:1px solid #1e1e1e!important;border-radius:10px!important;padding:16px 20px!important;flex:1!important;min-width:140px!important}
    #gliguard-guide .gg-stat-val{font-size:28px!important;font-weight:800!important;color:#00C896!important;letter-spacing:-0.03em!important;line-height:1!important;margin-bottom:6px!important}
    #gliguard-guide .gg-stat-label{font-size:12px!important;color:#666!important;line-height:1.4!important}
    #gliguard-guide .gg-tasks{display:flex!important;flex-direction:column!important;gap:10px!important;margin-top:4px!important}
    #gliguard-guide .gg-task{display:flex!important;align-items:flex-start!important;gap:14px!important;background:#0f0f0f!important;border:1px solid #1e1e1e!important;border-radius:10px!important;padding:14px 16px!important}
    #gliguard-guide .gg-task-num{font-size:11px!important;font-weight:800!important;color:#00C896!important;min-width:20px!important;padding-top:2px!important}
    #gliguard-guide .gg-task-content{}
    #gliguard-guide .gg-task-title{font-size:13px!important;font-weight:700!important;color:#e0e0e0!important;margin-bottom:3px!important}
    #gliguard-guide .gg-task-desc{font-size:12.5px!important;color:#777!important;line-height:1.5!important}
    #gliguard-guide .gg-compare{display:flex!important;gap:14px!important;margin-top:4px!important}
    #gliguard-guide .gg-compare-col{flex:1!important;background:#0f0f0f!important;border:1px solid #1e1e1e!important;border-radius:10px!important;padding:18px 20px!important}
    #gliguard-guide .gg-compare-col.gg-col-good{border-color:#00C89640!important;background:#00C8960a!important}
    #gliguard-guide .gg-col-head{font-size:11px!important;font-weight:700!important;letter-spacing:0.1em!important;text-transform:uppercase!important;margin-bottom:12px!important;color:#555!important}
    #gliguard-guide .gg-col-head.gg-good{color:#00C896!important}
    #gliguard-guide .gg-col-item{font-size:12.5px!important;color:#888!important;padding:6px 0!important;border-bottom:1px solid #1a1a1a!important;line-height:1.5!important}
    #gliguard-guide .gg-col-item:last-child{border-bottom:none!important}
    #gliguard-guide .gg-col-item strong{color:#ccc!important}
    #gliguard-guide .gg-bench-row{display:flex!important;align-items:center!important;gap:12px!important;padding:10px 0!important;border-bottom:1px solid #151515!important}
    #gliguard-guide .gg-bench-row:last-child{border-bottom:none!important}
    #gliguard-guide .gg-bench-name{font-size:12.5px!important;color:#888!important;min-width:160px!important}
    #gliguard-guide .gg-bench-name strong{color:#e0e0e0!important}
    #gliguard-guide .gg-bench-bar-wrap{flex:1!important;background:#111!important;border-radius:4px!important;height:6px!important;overflow:hidden!important}
    #gliguard-guide .gg-bench-bar{height:100%!important;border-radius:4px!important;background:#333!important}
    #gliguard-guide .gg-bench-bar.gg-bar-hi{background:#00C896!important}
    #gliguard-guide .gg-bench-score{font-size:12px!important;font-weight:700!important;color:#888!important;min-width:44px!important;text-align:right!important}
    #gliguard-guide .gg-bench-score.gg-score-hi{color:#00C896!important}
    #gliguard-guide .gg-tag-row{display:flex!important;flex-wrap:wrap!important;gap:8px!important;margin-top:4px!important}
    #gliguard-guide .gg-tag{font-size:11.5px!important;background:#111!important;border:1px solid #222!important;border-radius:6px!important;padding:5px 11px!important;color:#888!important}
    #gliguard-guide .gg-tag.gg-tag-hi{border-color:#00C89640!important;color:#00C896!important;background:#00C8960d!important}
    #gliguard-guide .gg-link-row{display:flex!important;gap:10px!important;flex-wrap:wrap!important;margin-top:4px!important}
    #gliguard-guide .gg-link{display:inline-flex!important;align-items:center!important;gap:6px!important;font-size:12.5px!important;font-weight:600!important;color:#00C896!important;text-decoration:none!important;background:#00C8960d!important;border:1px solid #00C89640!important;border-radius:7px!important;padding:8px 16px!important;transition:background 0.2s!important}
    #gliguard-guide .gg-link:hover{background:#00C89620!important}
    #gliguard-guide .gg-nav{background:#080808!important;border-top:1px solid #1a1a1a!important;padding:16px 28px!important;display:flex!important;align-items:center!important;justify-content:space-between!important}
    #gliguard-guide .gg-dots{display:flex!important;gap:6px!important;align-items:center!important}
    #gliguard-guide .gg-dot{width:6px!important;height:6px!important;border-radius:50%!important;background:#222!important;cursor:pointer!important;transition:all 0.2s!important}
    #gliguard-guide .gg-dot.gg-active{background:#00C896!important;width:20px!important;border-radius:3px!important}
    #gliguard-guide .gg-btn-row{display:flex!important;gap:8px!important}
    #gliguard-guide .gg-btn{background:#111!important;border:1px solid #222!important;color:#888!important;border-radius:8px!important;padding:8px 18px!important;font-size:13px!important;font-weight:600!important;cursor:pointer!important;transition:all 0.2s!important;font-family:inherit!important}
    #gliguard-guide .gg-btn:hover{background:#1a1a1a!important;color:#e0e0e0!important}
    #gliguard-guide .gg-btn:disabled{opacity:0.3!important;cursor:default!important}
    #gliguard-guide .gg-btn-next{background:#00C896!important;border-color:#00C896!important;color:#000!important}
    #gliguard-guide .gg-btn-next:hover{background:#00e0aa!important;border-color:#00e0aa!important}
    #gliguard-guide .gg-progress{height:2px!important;background:#00C896!important;transition:width 0.4s!important;position:absolute!important;top:0!important;left:0!important}

    @media(max-width:640px){
    #gliguard-guide .gg-slide{padding:28px 22px 32px!important;min-height:auto!important}
    #gliguard-guide .gg-slide-title{font-size:20px!important}
    #gliguard-guide .gg-stat-row{flex-direction:column!important}
    #gliguard-guide .gg-compare{flex-direction:column!important}
    #gliguard-guide .gg-bench-name{min-width:120px!important;font-size:11.5px!important}
    #gliguard-guide .gg-header,.gg-nav{padding:14px 18px!important}
    #gliguard-guide .gg-btn{padding:7px 13px!important;font-size:12px!important}
    }


    GLiGuard — Fastino Labs

    1 / 6

    01 — Overview
    What is GLiGuard?
    GLiGuard is an open-source 300M parameter safety moderation model released by Fastino Labs on May 12, 2026. It is designed to act as a guardrail layer between users and LLMs — screening every user prompt before it reaches the model and every model response before it reaches the user.
    300M
    Parameters — runs on a single GPU

    16x
    Faster throughput vs. SOTA decoder guardrails

    4
    Safety tasks evaluated in a single forward pass

    Apache 2.0
    Hugging Face
    Pioneer Inference
    Encoder Architecture

    02 — The Problem
    Why Existing Guardrails Are Slow
    Most production guardrail models — LlamaGuard4, WildGuard, ShieldGemma, NemoGuard — are built on decoder-only transformer architectures. They generate safety verdicts autoregressively, one token at a time, the same way a large language model generates a chat response.
    Decoder Guard Models
    Generate verdicts token by token
    Sequential output — latency compounds per task
    7B — 27B parameters required
    Expensive to run at real-time scale
    Separate passes per safety dimension

    GLiGuard (Encoder)
    Processes entire input at once
    All tasks evaluated in one forward pass
    300M parameters
    Single GPU deployment
    More dimensions = no added latency

    03 — Architecture
    Single Pass. Multiple Tasks.
    GLiGuard reframes safety moderation as a text classification problem, not a text generation problem. It encodes the input text and all task definitions (labels) together, then scores every label simultaneously in one forward pass. Adding more safety dimensions does not increase latency — it simply means more labels in the input.
    Base model: Fine-tuned from the GLiNER2-base-v1 checkpoint using full fine-tuning for 20 epochs with the AdamW optimizer. Training data: 87,000 human-annotated examples from WildGuardTrain, plus synthetic edge-case data generated via GPT-4.1 and Pioneer for fine-grained harm category distinctions.

    04 — Capabilities
    4 Moderation Tasks in One Pass
    01
    Safety Classification — safe / unsafe
    Applied to both user prompts before generation and model responses after generation.

    02
    Jailbreak Strategy Detection — 11 strategies
    Detects prompt injection, roleplay bypass, instruction override, social engineering, and others. Any detected strategy auto-flags the prompt as unsafe.

    03
    Harm Category Detection — 14 categories
    Violence, sexual content, hate speech, PII exposure, misinformation, child safety, copyright violation, and others. A single input can trigger multiple categories.

    04
    Refusal Detection — compliance / refusal
    Tracks over-refusal (refusing safe requests) and false compliance. A detected refusal auto-marks the response as safe.

    05 — Benchmarks
    Accuracy vs. Much Larger Models
    Evaluated across 9 safety benchmarks using macro-averaged F1. Speed benchmarked on a single NVIDIA A100 GPU.
    Prompt Classification — Avg. F1
    GLiGuard (0.3B)
    87.7

    PolyGuard-Qwen (7B)
    89.4

    LlamaGuard4 (12B)

    ShieldGemma (27B)

    26ms
    Latency at seq. length 64 (vs. 426ms for ShieldGemma-27B)

    133
    Samples/sec throughput at batch size 4

    06 — Get Started
    Deploy GLiGuard Today
    At 300M parameters, GLiGuard runs on a single GPU and can be fine-tuned for domain-specific use cases without heavy infrastructure. Weights are available on Hugging Face under the Apache 2.0 license. Managed inference is available on Pioneer.
    Model ID

    fastino/gliguard-LLMGuardrails-300M

    Prompt Safety
    Response Safety
    Jailbreak Detection
    Harm Classification
    Refusal Detection
    Single GPU

    Designed & Created by MarktechPost.com

    (function(){
    var total=6,cur=0;
    var slides=document.getElementById(‘gg-slides’);
    var counter=document.getElementById(‘gg-counter’);
    var progress=document.getElementById(‘gg-progress’);
    var dotsEl=document.getElementById(‘gg-dots’);
    var prevBtn=document.getElementById(‘gg-prev’);
    var nextBtn=document.getElementById(‘gg-next’);
    for(var i=0;i<total;i++){
    var d=document.createElement('div');
    d.className='gg-dot'+(i===0?' gg-active':'');
    d.setAttribute('data-i',i);
    d.onclick=(function(idx){return function(){goTo(idx)};})(i);
    dotsEl.appendChild(d);
    }
    function goTo(n){
    cur=n;
    slides.style.transform='translateX(-'+cur*100+'%)';
    counter.textContent=(cur+1)+' / '+total;
    progress.style.width=((cur+1)/total*100)+'%';
    prevBtn.disabled=cur===0;
    nextBtn.disabled=cur===total-1;
    nextBtn.textContent=cur===total-1?'Done ✓':'Next \u2192';
    var dots=dotsEl.querySelectorAll('.gg-dot');
    for(var i=0;i=0&&n<total)goTo(n);
    };
    })();

    Key Takeaways

    • GLiGuard is a 300M parameter encoder-based safety moderation model that handles four tasks — safety classification, jailbreak detection, harm categorization, and refusal detection — in a single forward pass.
    • Unlike decoder-only guardrail models that generate verdicts autoregressively, GLiGuard reframes safety moderation as a text classification problem, eliminating the sequential latency bottleneck.
    • Benchmarked on a single NVIDIA A100 GPU, GLiGuard achieves up to 16.2× higher throughput and 16.6× lower latency (26 ms vs. 426 ms) compared to current SOTA models like ShieldGemma-27B.
    • Across nine safety benchmarks, GLiGuard scores 87.7 average F1 on prompt classification and 82.7 on response classification — outperforming LlamaGuard4-12B, ShieldGemma-27B, and NemoGuard-8B despite being 23–90× smaller.
    • Model weights are available under Apache 2.0 on Hugging Face (fastino/gliguard-LLMGuardrails-300M), making it deployable on a single GPU without heavy infrastructure.

    Check out the Paper, Model Weights on HF, GitHub Repo and Technical details. Also, feel free to follow us on Twitter and don’t forget to join our 150k+ ML SubReddit and Subscribe to our Newsletter. Wait! are you on telegram? now you can join us on telegram as well.

    Need to partner with us for promoting your GitHub Repo OR Hugging Face Page OR Product Release OR Webinar etc.? Connect with us

    The post Fastino Labs Open-Sources GLiGuard: A 300M Parameter Safety Moderation Model That Matches or Exceeds Accuracy of Models 23–90x Its Size appeared first on MarkTechPost.

    LEAVE A REPLY

    Please enter your comment!
    Please enter your name here