ProHawk Vision Restoration Plugin
Overview
With the massive growth in data and the need for instantaneous image recognition and response, powerful data analytics and the ability to run AI inference models have become essential in this wave of AI-centric applications across smart traffic management, retail, worker safety, and security. For these applications to be widely adopted, AI models need to run at optimal levels, and operators need to make fast and accurate call-to-action decisions.
Approach
ProHawk’s patented, industry-leading computer vision AI restoration algorithms resolve the broadest range of real-world environmental problems caused by low light, weather, particulates, and poor lighting. ProHawk Vision is powered by the parallel processing capabilities of NVIDIA GPU-accelerated technologies to restore poor-quality video to an unobstructed, daytime safety level in as little as 3 milliseconds, a latency undetectable to the human eye.
The ProHawk Vision Restoration Plugin brings these powerful computer vision AI algorithms into the NVIDIA Metropolis application framework to dramatically clarify and enrich the quality of images and live video. The plugin automatically interprets every frame, determines the real-world condition affecting it, and applies the appropriate control over the ProHawk Vision restoration algorithms to invert the degradation. The objective mathematical model applied to each pixel is based on dependent quantitative measurements, allowing the plugin to remove the real-world effects of the video sensor's environment.
The ProHawk Vision Plugin leverages patented ProHawk Vision algorithms, wrapped in an NVIDIA DeepStream plugin that integrates seamlessly into the NVIDIA Metropolis video processing pipeline. In the pipeline data flow, the ProHawk Vision restoration algorithms are inserted after de-muxing and prior to the primary inference detector. NVIDIA Metropolis partners and customers can take advantage of ProHawk Vision's pre-processing to achieve more accurate vision AI confidence, recognition, detection, and tracking without the need to retrain the system.
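The sketch below illustrates this placement using DeepStream's Python (GStreamer) bindings. The element name prohawkvr and the property/config file names are illustrative assumptions only; the actual plugin element name and its configuration are defined by ProHawk's DeepStream package.

```python
# Minimal DeepStream pipeline sketch: restoration element placed after the
# stream muxer/decoder and before the primary inference detector (nvinfer).
# NOTE: "prohawkvr" is a hypothetical element name used for illustration.
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

pipeline = Gst.parse_launch(
    "filesrc location=sample_1080p.h264 ! h264parse ! nvv4l2decoder ! "
    "m.sink_0 nvstreammux name=m batch-size=1 width=1920 height=1080 ! "
    "prohawkvr ! "                                  # hypothetical ProHawk restoration element
    "nvinfer config-file-path=pgie_config.txt ! "   # primary inference detector
    "nvvideoconvert ! nvdsosd ! nveglglessink"
)

pipeline.set_state(Gst.State.PLAYING)
bus = pipeline.get_bus()
# Block until end-of-stream or an error, then shut the pipeline down.
bus.timed_pop_filtered(Gst.CLOCK_TIME_NONE, Gst.MessageType.EOS | Gst.MessageType.ERROR)
pipeline.set_state(Gst.State.NULL)
```

Because the restoration element sits upstream of the detector, downstream inference and tracking elements consume the restored frames with no changes to their models or configuration.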
Embedded, Workstation, and Data Center Ready
The ProHawk Vision Restoration Plugin has been Metropolis Validated in the jointly run Dell Technologies and NVIDIA Metropolis lab, passing all test areas across five NVIDIA data center GPU platforms: T4, A2, A30, ATOS T4, and AWS T4. Each platform was validated for performance and resource usage across four test series covering different video modes: 720p15, 720p30, 1080p15, and 1080p30. The ProHawk Vision Restoration Plugin was measured to require 500 CUDA cores per 1080p30 stream.
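As a rough capacity check, that figure can be divided into each GPU's CUDA core count. The sketch below uses NVIDIA's published core counts for the T4, A2, and A30 and is only a back-of-the-envelope estimate; real capacity also depends on decoder throughput, memory bandwidth, and the rest of the pipeline.

```python
# Back-of-the-envelope stream capacity from the measured ~500 CUDA cores
# per 1080p30 stream. Core counts are NVIDIA's published specifications;
# actual capacity also depends on NVDEC throughput, memory bandwidth,
# and the other elements in the DeepStream pipeline.
CORES_PER_1080P30_STREAM = 500

GPU_CUDA_CORES = {"T4": 2560, "A2": 1280, "A30": 3584}

for gpu, cores in GPU_CUDA_CORES.items():
    streams = cores // CORES_PER_1080P30_STREAM
    print(f"{gpu}: ~{streams} concurrent 1080p30 streams")
```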
The ProHawk Vision Restoration Plugin has been integrated to operate on the NVIDIA Jetson edge AI platform, including the Jetson TX2, Jetson AGX Xavier, and six Jetson Orin modules: Jetson AGX Orin 64GB, Jetson AGX Orin 32GB, Jetson Orin NX 16GB, Jetson Orin NX 8GB, Jetson Orin Nano 8GB, and Jetson Orin Nano 4GB. Compatibility has also been verified with the NVIDIA RTX workstation-class GPU product line.
Safety & Security Values
- Visibility in any challenging environment
- Raise accuracy of monitoring for video analytics and vision AI systems
- Dramatically increase confidence of detection
- Improve object recognition for operators, video analytics (VA), AI, and computer vision (CV) systems
- Restore visual details and quality to uniquely identify people, places, or things
Features and Benefits
- Description: Programmatic parameters quickly restore imagery obstructed by fog, rain, snow, dirt, sand, smoke, backlight, low light, sun glare, headlights, and tinted windows
- Benefit: Improve threat detection and perimeter security, streamline security processes with improved service response times and accuracy, and lower the TCO of the video infrastructure, including illumination, while deferring upgrades.
Edge Improvement
- Description: Edge Sharpening Algorithm Improves Outlines and Reduces Non-Uniform Imagery Noise
- Benefit: Imagery Fine Details Enable Unique Identification of People, Places, or Things
Live Video/Low Latency
- Description: Industry Leading Low Latency, Compact High-Performance Algorithms Enable Embeddable Live Video Improvement
- Benefit: Dramatically Clarify Live Video with No Video Lag or Frame Skipping, Enabling Decisive Decisions
Sensor Coverage
- Description: Eliminate the Struggle to Differentiate Body Heat from Ambient Surroundings in Humid Climates
- Benefit: Increase Range and Accuracy of Thermal Sensors by 300% and Infrared Cameras by 500%
Expose Fine Details
- Description: Patented Detail Restoration Algorithm Reveals Intricate Details, Even in Good Quality Video
- Benefit: Accurately Identify Objects, Weapons, Vehicles, License Plates, Faces, People, Animals, and Problems