hb.dev

Real-time drone detection for critical infrastructure

12 March 2026 · Defence sector client · Defence / Critical infrastructure

[Figure: Drone detection system interface showing radar view with active tracks, drone classifications with confidence scores, and bird dismissals]

Commercial drones have created a new class of security risk for critical infrastructure operators. Existing radar-based systems flag everything that moves, generating high false-positive rates and alert fatigue. The problem is not detection - it is classification: is this a threat, or a seabird?

The challenge

Our client operates a network of sensitive sites across Northern Europe. Their existing rule-based detection system had a false-positive rate exceeding 60%. Operators routinely dismissed genuine alerts. In this context, a missed detection carries serious consequences.

They needed a system that could:

  • Detect and continuously track airborne objects across wide-area sensor coverage
  • Distinguish drones from birds, weather phenomena, and insects with high confidence
  • Integrate with their existing radar and optical sensor infrastructure without hardware replacement
  • Run entirely on-premise - strict data sovereignty requirements ruled out any cloud dependency

Our approach

We embedded with the client's security and engineering teams over a year-long engagement, working from sensor integration through to a production operator interface.

  • Sensor Capture: Radar · EO · Thermal
  • Frame Processing: Fusion · Kalman tracking
  • Dataset Curation: CVAT · FiftyOne
  • Model Training: PyTorch · MLflow
  • Evaluation: Held-out · Edge cases
  • Inference API: TensorRT · FastAPI

Sensor fusion and data pipeline

The system ingests feeds from multiple sensor types - radar returns, electro-optical cameras, and thermal/infrared imaging - and fuses them into a unified tracking state. We built a real-time streaming pipeline using Apache Kafka to handle multi-sensor synchronisation with deterministic latency. Pydantic models enforce schema validation at every stage of the pipeline, and MinIO provides object storage for raw sensor captures and evidence clips.
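The schema-validation layer can be illustrated with a small sketch. The model below is hypothetical (the field names, sensor types, and units are assumptions, not the client's actual schema), but it shows the pattern: a Pydantic model rejects malformed sensor messages before they enter the pipeline.

```python
from datetime import datetime, timezone
from enum import Enum

from pydantic import BaseModel, ValidationError


class SensorType(str, Enum):
    RADAR = "radar"
    EO = "eo"
    THERMAL = "thermal"


class SensorFrame(BaseModel):
    """Illustrative envelope for one sensor observation (fields are assumptions)."""
    sensor_id: str
    sensor_type: SensorType
    timestamp: datetime
    azimuth_deg: float
    elevation_deg: float
    range_m: float


# A well-formed message passes validation and is parsed into typed fields.
frame = SensorFrame(
    sensor_id="radar-07",
    sensor_type="radar",
    timestamp=datetime.now(timezone.utc),
    azimuth_deg=142.5,
    elevation_deg=3.2,
    range_m=1840.0,
)

# A malformed message is rejected at the pipeline boundary, not downstream.
try:
    SensorFrame(
        sensor_id="radar-07",
        sensor_type="submarine",  # not a valid SensorType
        timestamp=datetime.now(timezone.utc),
        azimuth_deg=0.0,
        elevation_deg=0.0,
        range_m=0.0,
    )
except ValidationError:
    rejected = True
```

In a Kafka pipeline, validation like this runs in each consumer so that a misbehaving sensor feed fails loudly at ingestion rather than corrupting downstream tracking state.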

Multi-object tracking

Each detected object is assigned a persistent track across frames using a Kalman filter-based tracker. Tracks accumulate trajectory, velocity, and behavioural features over time, giving the classifier richer signal than single-frame detection alone. The system handles simultaneous tracking of multiple objects, including through occlusion and sensor handoff.
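A minimal sketch of the per-track state estimation, assuming a constant-velocity motion model over 2-D position (the matrices and noise values here are illustrative, not the production tracker's):

```python
import numpy as np

dt = 0.1  # frame interval in seconds (assumed)

# State is [x, y, vx, vy]; we observe position only.
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], dtype=float)   # constant-velocity transition
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)    # measurement model
Q = np.eye(4) * 0.01                          # process noise (illustrative)
R = np.eye(2) * 0.5                           # measurement noise (illustrative)


def predict(x, P):
    """Propagate track state one frame forward."""
    return F @ x, F @ P @ F.T + Q


def update(x, P, z):
    """Fold in one position measurement z = [x, y]."""
    y = z - H @ x                          # innovation
    S = H @ P @ H.T + R                    # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x = x + K @ y
    P = (np.eye(4) - K @ H) @ P
    return x, P


# One track: initialise on first detection, then predict/update per frame.
x = np.array([0.0, 0.0, 0.0, 0.0])
P = np.eye(4)
for z in [np.array([1.0, 0.5]), np.array([2.1, 1.0]), np.array([3.0, 1.6])]:
    x, P = predict(x, P)
    x, P = update(x, P, z)
```

The velocity components of the state vector are exactly the kinematic features (speed profile, heading changes) that the classifier consumes downstream.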

AI classification

A custom computer vision pipeline classifies each tracked object using both visual features and kinematic signature: flight pattern, speed profile, altitude envelope, and manoeuvre characteristics. A Faster R-CNN detector identifies and classifies airborne objects in each frame. ByteTrack assigns persistent track IDs across frames, maintaining identity through occlusion and sensor handoff. As tracks accumulate detections over time, classification confidence increases, allowing the system to distinguish drones from birds not from a single frame, but from consistent detection across a trajectory. Training used a proprietary dataset of real sensor captures, augmented with synthetic flight trajectories to cover rare edge cases and underrepresented drone models.
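The track-level confidence idea can be sketched in a few lines. This is a simplified stand-in for the production logic, assuming per-frame drone probabilities are fused via summed log-odds (a naive-Bayes-style aggregation): individually ambiguous frames become a confident call when they agree across a trajectory.

```python
import math


def logit(p: float) -> float:
    """Convert a probability to log-odds."""
    return math.log(p / (1 - p))


def sigmoid(x: float) -> float:
    """Convert log-odds back to a probability."""
    return 1 / (1 + math.exp(-x))


def track_confidence(frame_scores: list[float]) -> float:
    """Fuse per-frame P(drone) scores for one track by summing log-odds."""
    return sigmoid(sum(logit(p) for p in frame_scores))


single = track_confidence([0.6])                    # one ambiguous frame
multi = track_confidence([0.6, 0.65, 0.7, 0.6])     # same object over a trajectory
```

With one frame at 0.6 the system stays undecided; four mildly drone-like frames in a row push the fused confidence well above any single frame, which is why the classifier waits for trajectory evidence before alerting.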

A model-assisted annotation loop accelerates labelling: the production model pre-annotates new frames, human reviewers correct and approve them in CVAT, and corrected labels feed directly back into the next training cycle. FiftyOne provides dataset management and visualisation throughout; MLflow tracks experiments and model lineage.
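The triage step of that loop can be sketched as follows. The threshold and record shape are assumptions for illustration: predictions the model is reasonably sure about go to CVAT pre-filled for correction, while low-confidence frames are queued for labelling from scratch.

```python
# Illustrative triage for model-assisted annotation. The 0.5 threshold and
# the prediction record format are assumptions, not the production values.
REVIEW_THRESHOLD = 0.5


def triage(predictions: list[dict]) -> tuple[list[dict], list[dict]]:
    """Split model predictions into pre-annotated (review in CVAT) and
    unlabelled (annotate from scratch) queues."""
    pre_annotated, unlabelled = [], []
    for pred in predictions:
        if pred["score"] >= REVIEW_THRESHOLD:
            pre_annotated.append(pred)   # human corrects/approves in CVAT
        else:
            unlabelled.append(pred)      # too uncertain to pre-fill
    return pre_annotated, unlabelled


preds = [
    {"frame": "f001.jpg", "label": "drone", "score": 0.91},
    {"frame": "f002.jpg", "label": "bird", "score": 0.34},
]
pre, raw = triage(preds)
```

Approved labels from both queues then feed the next training cycle, so annotation throughput improves as the model does.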

On-premise inference

All inference runs locally on NVIDIA hardware with TensorRT-optimised model exports. The pipeline is containerised with Docker and orchestrated with Kubernetes, supporting deployment across both rack-mounted GPU servers and NVIDIA Jetson edge units at remote sites. End-to-end latency from sensor input to operator alert is under 400ms.

Operator interface

A live dashboard presents active tracks on a map with classification labels, confidence scores, and alert prioritisation. Operators can drill into individual tracks, review evidence clips, and log decisions. These operator decisions feed back into the training pipeline for continuous model improvement.

Results

The system was evaluated over a 90-day production deployment across operational sites.

  • <1% false positive rate, down from 42%
  • <400ms end-to-end latency, sensor to classified alert
  • 99.1% drone detection rate on held-out test data
  • 14 drone models covered, including rare variants
Full air-gap deployment with zero external data dependencies. Integrated with existing sensor infrastructure - no hardware replacement required.

Client feedback

"We had a model that worked in the lab. hb.dev turned it into something our operators actually trust in the field."

"False positives used to be the main complaint from site operators. That conversation has stopped entirely."

Working on a similar challenge?

We build AI systems for defence and critical infrastructure clients across Northern Europe. Let's talk about what's possible for your environment.

Let's talk