Dual Azure Kinect Laser Detection System for Tactical Training Simulation

Abstract

This paper proposes a real-time system utilizing dual Azure Kinect DK cameras (Microsoft, 2023) to detect green laser pointers and transmit 3D vectors to a Unity simulation environment. To address the high cost and spatial constraints of existing infrared sensor-based tactical training systems, we present a low-cost alternative using computer vision algorithms. The system employs a dual-camera architecture in which Camera 0 detects laser points using a weighted score of Frame Difference and Green Excess cues, while Camera 1 tracks hand and muzzle positions using Frame Difference and Depth Masking to eliminate interference. Detected 2D coordinates are converted into 3D vectors through screen homography or stereo calibration and transmitted to Unity via a UDP JSON protocol. In Unity, a thread-safe Producer-Consumer pattern handles data reception, enabling Physics.Raycast-based target collision detection and LineRenderer visualization. The current production pipeline operates in a single-user mode using frame-difference detection, while the advanced modules (IR+HSV dual filtering, SORT-based tracking, and MediaPipe-based hand tracking) are validated at the component level on synthetic data and are reserved for an upcoming integrated MVP. This paper therefore presents a system-level engineering study: it establishes the dual-camera architecture and the calibration / coordinate-transformation pipeline that any later integration will reuse, rather than claiming a fully validated end-to-end product.
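The weighted combination of Frame Difference and Green Excess cues described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the weights (`w_diff`, `w_green`), the specific green-excess formula, and the use of a global score maximum are all assumptions, since the abstract states only that the two cues are combined as a weighted score.

```python
import numpy as np

def laser_score(prev_frame, frame, w_diff=0.6, w_green=0.4):
    """Per-pixel weighted score combining frame difference and green excess.

    Frames are H x W x 3 uint8 arrays in RGB order. The weights and the
    green-excess formula are illustrative assumptions, not values from
    the paper.
    """
    f = frame.astype(np.float32)
    p = prev_frame.astype(np.float32)
    # Frame difference: temporal change highlights a newly appearing laser dot.
    diff = np.abs(f - p).mean(axis=2)
    # Green excess: how much the G channel dominates the R and B channels.
    green = np.clip(f[..., 1] - 0.5 * (f[..., 0] + f[..., 2]), 0, None)
    score = w_diff * diff + w_green * green
    # Candidate laser pixel = global maximum of the combined score map.
    y, x = np.unravel_index(np.argmax(score), score.shape)
    return (x, y), score

# Usage: a bright green dot appears between two otherwise black frames.
prev = np.zeros((4, 4, 3), np.uint8)
cur = prev.copy()
cur[2, 3] = (10, 255, 10)
(px, py), _ = laser_score(prev, cur)  # → detects the dot at column 3, row 2
```

In a real pipeline the score map would typically be thresholded and cleaned (e.g. morphological filtering) before picking a peak, to reject sensor noise.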
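The UDP JSON transport between the vision process and Unity can be sketched as below. The field names (`"cam"`, `"origin"`, `"dir"`) are illustrative assumptions; the abstract specifies only that 3D vectors are sent to Unity over a UDP JSON protocol.

```python
import json
import socket

def send_laser_vector(sock, addr, origin, direction, cam_id=0):
    """Encode a 3D ray as a JSON datagram and send it over UDP.

    The packet schema here is an assumption for illustration; only the
    UDP + JSON transport itself is stated in the paper.
    """
    packet = json.dumps({
        "cam": cam_id,
        "origin": list(origin),    # ray origin in world space
        "dir": list(direction),    # normalized ray direction
    }).encode("utf-8")
    sock.sendto(packet, addr)
    return packet

# Usage: loopback round trip standing in for the Unity-side receiver.
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 0))
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_laser_vector(tx, rx.getsockname(), (0.0, 1.5, 0.0), (0.0, 0.0, 1.0))
data, _ = rx.recvfrom(1024)
msg = json.loads(data)  # Unity would dequeue this on the main thread
```

On the Unity side, the abstract's Producer-Consumer pattern would have a background thread receiving such datagrams into a thread-safe queue, with the main thread consuming them to drive Physics.Raycast and LineRenderer updates.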