Video Motion Analysis (VMA)
DIViSS can filter out moving objects that are inherent to a particular scene without flagging them as a possible security breach, by placing a “virtual fence” around such disturbances. Rain, falling leaves, and even moving cars will not trigger detection events, provided that they are intrinsic to the monitored environment, while objects and entities foreign to the scene are detected automatically. This allows robust, 24/7 outdoor system operation.
The decision as to which objects should be filtered out can either be programmed by the end user (i.e., the operator instructs the system which objects to ignore, based on characteristics such as size, shape, and velocity) or self-learned. The VMA feature allows DIViSS to learn the scene, either when prompted by the operator or independently; the system briefly switches to a “learning mode” and then resumes normal operation.
The VMA also provides an exact segmentation of each object, the number of moving objects, and the direction of movement.
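The rule-based side of this filtering can be illustrated with a short sketch. The class, function, and threshold names below are illustrative assumptions, not DIViSS parameters; the real system also supports shape-based rules and self-learned models, which are omitted here.

```python
from dataclasses import dataclass

@dataclass
class Track:
    """Minimal description of one moving object produced by segmentation."""
    area: float      # object size in pixels
    speed: float     # pixels per frame
    in_fence: bool   # inside an operator-drawn "virtual fence"?

def is_alarm(t, min_area=50, max_speed=20):
    """Rule-based filter in the spirit of the VMA description: ignore
    objects inside a virtual fence, and objects whose size or velocity
    matches operator-programmed clutter rules (thresholds are illustrative)."""
    if t.in_fence:
        return False          # e.g., a road with constant traffic
    if t.area < min_area:
        return False          # rain drops, falling leaves
    if t.speed > max_speed:
        return False          # fast-moving clutter
    return True

tracks = [
    Track(area=10, speed=3, in_fence=False),   # falling leaf: ignored
    Track(area=400, speed=8, in_fence=True),   # car on fenced road: ignored
    Track(area=300, speed=2, in_fence=False),  # foreign object: alarm
]
alarms = [is_alarm(t) for t in tracks]
```

Only the last track raises an alarm; the first two match the programmed "expected disturbance" rules.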
Static Detection
DIViSS can detect and flag an object that was added to (e.g., an object left unattended) or removed from (i.e., an object missing) the scene. This operation is performed automatically, in real time, includes determination of rotation and scaling factors, and requires no prior knowledge of the object or image.
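A minimal sketch of the underlying idea is reference-frame differencing: pixels that become brighter than an empty-scene reference suggest an added object, darker pixels a removed one. This toy version assumes a static grayscale camera and a fixed threshold; the function name and threshold are hypothetical, and the real system additionally handles rotation and scaling, which is not shown.

```python
import numpy as np

def static_change(reference, current, thresh=30):
    """Compare the current frame against an empty-scene reference frame.
    Returns boolean masks of pixels suggesting added / removed objects."""
    diff = current.astype(int) - reference.astype(int)
    added = diff > thresh      # scene became brighter than reference
    removed = diff < -thresh   # scene became darker than reference
    return added, removed

# Toy 8-bit grayscale frames: a bright object appears in one region.
ref = np.full((8, 8), 100, dtype=np.uint8)
cur = ref.copy()
cur[2:4, 2:4] = 200            # object left in the scene
added, removed = static_change(ref, cur)
```

Here `added` marks exactly the 2x2 region where the object appeared, and `removed` is empty.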
CFAR Target Detection
Target detection is a fundamental capability for all intelligent video surveillance applications, but the false alarms inherent to this process reduce the value of target surveillance. AUG Signals’ cutting-edge multi-Constant False Alarm Rate (CFAR) detection technology varies the detection threshold as a function of the sensed environment, significantly reducing the false alarm rate.
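The adaptive-threshold principle can be sketched with a classical cell-averaging CFAR detector: each cell is compared against a threshold scaled from the local noise estimate of its neighbors, so the threshold rises in noisy regions and falls in quiet ones. This is a generic textbook CFAR sketch, not AUG Signals’ multi-CFAR implementation; the parameter values are illustrative.

```python
import numpy as np

def ca_cfar(signal, guard=2, train=8, scale=3.0):
    """Cell-averaging CFAR: flag cells exceeding a threshold derived from
    the mean of surrounding training cells. Guard cells next to the cell
    under test are excluded so the target does not inflate the estimate."""
    n = len(signal)
    detections = np.zeros(n, dtype=bool)
    for i in range(n):
        lo = max(0, i - guard - train)
        hi = min(n, i + guard + train + 1)
        # training cells on both sides, skipping the guard band
        idx = list(range(lo, i - guard)) + list(range(i + guard + 1, hi))
        if not idx:
            continue
        noise = np.mean(signal[idx])
        detections[i] = signal[i] > scale * noise
    return detections

# One weak target in a quiet region, one strong target in a noisy region:
# a single fixed threshold would miss one or false-alarm on the other.
rng = np.random.default_rng(0)
sig = np.concatenate([0.5 * rng.random(50), 2.0 * rng.random(50)])
sig[25] = 4.0    # target in the quiet half
sig[75] = 12.0   # target in the noisy half
hits = ca_cfar(sig)
```

Both targets are flagged because the threshold adapts to the local background level on each side of the clutter boundary.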
Sequential frame registration – camera stabilization
Sequential images from individual sensors are registered by a global motion estimation method that achieves accurate image registration and provides the viewer with a smooth spatiotemporal evolution of events. Correct alignment of sequential images serves two purposes:
- Stabilizes the camera(s), eliminating sequential frame distortion due to camera vibration in outdoor environments or camera pan-tilt-zoom operation
- Provides aligned frame sequence for multi-frame deblurring and super-resolution enhancement using information fusion
Sequential image registration is performed with subpixel accuracy. The registration operations are performed on grayscale versions of the original frames to reduce computational complexity (allowing real-time implementation) while maintaining nearly the same accuracy.
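One standard way to estimate the global shift between two grayscale frames is phase correlation, the core of many stabilization pipelines. The sketch below recovers integer pixel shifts only; subpixel accuracy, as described above, would additionally require interpolating around the correlation peak. This is a generic method sketch, not AUG Signals’ registration algorithm.

```python
import numpy as np

def phase_correlation(a, b):
    """Estimate the integer (dy, dx) translation of frame `a` relative to
    frame `b` via the normalized cross-power spectrum; the peak of its
    inverse FFT marks the global shift."""
    A = np.fft.fft2(a)
    B = np.fft.fft2(b)
    cross = A * np.conj(B)
    cross /= np.abs(cross) + 1e-12     # keep phase only
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap peak coordinates to signed shifts
    if dy > a.shape[0] // 2:
        dy -= a.shape[0]
    if dx > a.shape[1] // 2:
        dx -= a.shape[1]
    return int(dy), int(dx)

rng = np.random.default_rng(1)
frame = rng.random((64, 64))
jittered = np.roll(frame, (3, -5), axis=(0, 1))  # simulated camera vibration
shift = phase_correlation(jittered, frame)        # (3, -5)
```

Once the shift is known, the jittered frame can be shifted back, yielding the aligned sequence needed for multi-frame fusion.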
Fusion and enhancement – multi-frame fusion
Other commercially available products cannot accurately register the complex motions of multiple objects and are therefore limited to single-frame sharpening and de-noising. AUG Signals’ unique multi-frame registration and fusion techniques, in contrast, enable multi-frame image enhancement (super-resolution capacity).
Super-resolution increases the spatial resolution of video images on a frame-by-frame basis to meet end-user needs (e.g., assisting online investigations and increasing screening capability on request). Starting from correctly aligned sequential frames, AUG Signals’ super-resolution algorithm obtains a super-resolution image by performing multi-frame fusion. The algorithm can increase spatial resolution well beyond the sensor’s native resolution, facilitating the recognition and identification of fine details of an object (e.g., fine labels or other features smaller than the video sensor resolution).