hb.dev

Real-time drone detection for critical infrastructure

1 September 2025 · Defence sector client · Defence / Critical infrastructure

[Interface screenshot: radar view with active tracks, drone classifications with confidence scores, and bird dismissals]

Commercial drones have created a new class of security risk for critical infrastructure operators. Existing radar-based systems flag everything that moves, generating high false-positive rates and alert fatigue. The problem is not detection - it is classification: is this a threat, or a seabird?

The challenge

Our client operates a network of sensitive sites across Northern Europe. Their existing rule-based detection system had a false-positive rate exceeding 60%. Operators routinely dismissed genuine alerts. In this context, a missed detection carries serious consequences.

They needed a system that could:

  • Detect and continuously track airborne objects across wide-area sensor coverage
  • Distinguish drones from birds, weather phenomena, and insects with high confidence
  • Integrate with their existing radar and optical sensor infrastructure without hardware replacement
  • Run entirely on-premise - strict data sovereignty requirements ruled out any cloud dependency

Our approach

We embedded with the client's security and engineering teams over a year-long engagement, working from sensor integration through to a production operator interface.

  • Sensor Capture: Radar · EO · Thermal
  • Frame Processing: Fusion · Kalman tracking
  • Dataset Curation: CVAT · FiftyOne
  • Model Training: PyTorch · MLflow
  • Evaluation: Held-out · Edge cases
  • Inference API: TensorRT · FastAPI

Sensor fusion and data pipeline

The system ingests feeds from multiple sensor types - radar returns, electro-optical cameras, and thermal/infrared imaging - and fuses them into a unified tracking state. We built a real-time streaming pipeline using Apache Kafka to handle multi-sensor synchronisation with deterministic latency. Pydantic models enforce schema validation at every stage of the pipeline, and MinIO provides object storage for raw sensor captures and evidence clips.
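As a minimal sketch of the schema-validation idea, a Pydantic model can reject malformed sensor messages before they enter the pipeline. The field names and constraints here are illustrative assumptions, not the client's actual schema:

```python
from pydantic import BaseModel, Field, ValidationError

class RadarReturn(BaseModel):
    """Illustrative schema for one radar return (field names are assumed)."""
    sensor_id: str
    timestamp_ns: int                                # capture time, nanoseconds
    range_m: float = Field(..., ge=0)                # range must be non-negative
    azimuth_deg: float = Field(..., ge=0, lt=360)    # bearing in [0, 360)

# A valid message parses cleanly; an out-of-range value raises ValidationError.
msg = RadarReturn(sensor_id="radar-01", timestamp_ns=1_725_148_800_000,
                  range_m=1200.0, azimuth_deg=45.0)
```

Validating at every stage boundary means a corrupted feed fails loudly at ingest rather than silently degrading tracks downstream.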

Multi-object tracking

Each detected object is assigned a persistent track across frames using a Kalman filter-based tracker. Tracks accumulate trajectory, velocity, and behavioural features over time, giving the classifier richer signal than single-frame detection alone. The system handles simultaneous tracking of multiple objects, including through occlusion and sensor handoff.
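The core of such a tracker can be sketched as a constant-velocity Kalman filter per track. This is a generic textbook formulation, not the client's implementation; the noise parameters are placeholder values:

```python
import numpy as np

class ConstantVelocityKF:
    """Minimal 2-D constant-velocity Kalman filter for a single track."""
    def __init__(self, x0, y0, dt=0.1, q=1.0, r=0.5):
        self.x = np.array([x0, y0, 0.0, 0.0])           # state: [x, y, vx, vy]
        self.P = np.eye(4) * 10.0                       # state covariance
        self.F = np.array([[1, 0, dt, 0],
                           [0, 1, 0, dt],
                           [0, 0, 1,  0],
                           [0, 0, 0,  1]], dtype=float)  # state transition
        self.H = np.array([[1, 0, 0, 0],
                           [0, 1, 0, 0]], dtype=float)   # observe position only
        self.Q = np.eye(4) * q                           # process noise
        self.R = np.eye(2) * r                           # measurement noise

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]                                # predicted position

    def update(self, zx, zy):
        y = np.array([zx, zy]) - self.H @ self.x         # innovation
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)         # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x
```

Because the state carries velocity, the tracker can coast through brief occlusions by running `predict` without `update`, reassociating when the object reappears.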

AI classification

A custom computer vision pipeline classifies each tracked object using both visual features and kinematic signature: flight pattern, speed profile, altitude envelope, and manoeuvre characteristics. The model architecture uses a Faster R-CNN detector with a DINOv2 backbone for visual feature extraction, combined with trajectory-derived features fed through a lightweight classification head. Training used a proprietary dataset of real sensor captures augmented with synthetic flight trajectories to cover rare edge cases and underrepresented drone models.
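A sketch of the trajectory-derived side of that signal: simple kinematic features computed over a track's recent positions. The feature set and function name here are illustrative, not the production feature definition:

```python
import math

def kinematic_features(track, dt=0.1):
    """track: list of (x, y, z) positions sampled every dt seconds.
    Returns trajectory-derived features for a downstream classifier head."""
    speeds, headings = [], []
    for (x0, y0, z0), (x1, y1, z1) in zip(track, track[1:]):
        dx, dy, dz = x1 - x0, y1 - y0, z1 - z0
        speeds.append(math.sqrt(dx * dx + dy * dy + dz * dz) / dt)
        headings.append(math.atan2(dy, dx))
    turn_rates = [abs(h1 - h0) / dt for h0, h1 in zip(headings, headings[1:])]
    alts = [z for _, _, z in track]
    return {
        "mean_speed": sum(speeds) / len(speeds),
        "max_speed": max(speeds),
        "altitude_min": min(alts),                      # altitude envelope
        "altitude_max": max(alts),
        "mean_turn_rate": sum(turn_rates) / len(turn_rates) if turn_rates else 0.0,
    }
```

Features like these are discriminative precisely where single-frame vision is weakest: a gull and a small quadcopter can look alike in one thermal frame, but their speed profiles and turn behaviour differ markedly over a few seconds of track history.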

A model-assisted annotation loop accelerates labelling: the production model pre-annotates new frames, human reviewers correct and approve them in CVAT, and corrected labels feed directly back into the next training cycle. FiftyOne provides dataset management and visualisation throughout; MLflow tracks experiments and model lineage.
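The routing step in such a loop can be as simple as splitting pre-annotations by confidence. The thresholds and function name below are hypothetical, chosen only to illustrate the triage:

```python
def split_for_review(detections, auto_propose=0.92, discard_below=0.2):
    """Route model pre-annotations before CVAT review (thresholds illustrative):
    high-confidence boxes become proposed labels, mid-confidence go to human
    review, very low-confidence are discarded as noise."""
    proposed, review, discarded = [], [], []
    for det in detections:
        if det["score"] >= auto_propose:
            proposed.append(det)
        elif det["score"] >= discard_below:
            review.append(det)
        else:
            discarded.append(det)
    return proposed, review, discarded
```

The payoff is that reviewer time concentrates on the ambiguous middle band, which is exactly where corrected labels teach the next model the most.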

On-premise inference

All inference runs locally on NVIDIA hardware with TensorRT-optimised model exports. The pipeline is containerised with Docker and orchestrated with Kubernetes, supporting deployment across both rack-mounted GPU servers and NVIDIA Jetson edge units at remote sites. End-to-end latency from sensor input to operator alert is under 400ms.

Operator interface

A live dashboard presents active tracks on a map with classification labels, confidence scores, and alert prioritisation. Operators can drill into individual tracks, review evidence clips, and log decisions. These operator decisions feed back into the training pipeline for continuous model improvement.

Results

The system was evaluated over a 90-day production deployment across operational sites.

  • <4% false positive rate, down from 62%
  • <400ms end-to-end latency, sensor to classified alert
  • 99.1% drone detection rate on held-out test data
  • 14 drone models covered, including rare variants

Full air-gap deployment with zero external data dependencies. Integrated with existing sensor infrastructure - no hardware replacement required.

Working on a similar challenge?

We build AI systems for defence and critical infrastructure clients across Northern Europe. Let's talk about what's possible for your environment.

Get in touch