Video super-resolution reconstruction consists of generating high-resolution frames by processing low-resolution ones. This process enhances video quality, allowing the visualisation of fine details. Moreover, it can be considered a primary step in a video processing pipeline for further applications, such as object detection, classification and tracking from uncrewed aerial vehicles (UAVs). For this reason, the super-resolution process should be performed quickly and accurately. Implementing a real-time video super-resolution method through parallel programming contributes to the efficiency of this pipeline. This work proposes two parallel super-resolution approaches for videos taken from UAVs: one for multi-core CPUs and another for a GPU architecture. The method is based on sparse representation and Wavelet transforms. First, it performs an edge correction in the Wavelet domain; it then employs dictionaries previously trained with k-Singular Value Decomposition (k-SVD) to reconstruct the Wavelet subbands of the frames; finally, the high-resolution frames are computed with the Inverse Discrete Wavelet Transform (IDWT). The performance of the method was measured with the Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index Measure (SSIM) and Edge Preservation Index (EPI). The implementations were tested on a workstation with a Ryzen multi-core processor and a CUDA-enabled GPU, and they were compared with the non-parallel method in terms of algorithmic complexity and computing time.
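To make the per-frame pipeline concrete, the following is a minimal sketch of the Wavelet-domain reconstruction described above, assuming PyWavelets, NumPy and OpenCV. The sparse-coding step is abstracted behind a hypothetical `reconstruct_subband` stand-in, since the dictionaries trained offline with k-SVD are not part of this sketch.

```python
import cv2
import numpy as np
import pywt


def reconstruct_subband(subband: np.ndarray) -> np.ndarray:
    # Hypothetical stand-in: in the method described above, each detail
    # subband would be reconstructed patch-by-patch from sparse codes over
    # dictionaries previously trained with k-SVD.
    return subband


def super_resolve_frame(lr_frame: np.ndarray, scale: int = 2) -> np.ndarray:
    """Upscale one grayscale frame via DWT subband estimation + IDWT."""
    # Bicubic interpolation provides an initial high-resolution estimate.
    hr_estimate = cv2.resize(lr_frame, None, fx=scale, fy=scale,
                             interpolation=cv2.INTER_CUBIC)
    # Single-level 2-D DWT: approximation (LL) and detail (LH, HL, HH) subbands.
    ll, (lh, hl, hh) = pywt.dwt2(hr_estimate.astype(np.float64), 'haar')
    # Reconstruct the detail subbands (placeholder for the sparse-coding step).
    lh, hl, hh = (reconstruct_subband(sb) for sb in (lh, hl, hh))
    # The IDWT of the corrected subbands yields the high-resolution frame.
    hr_frame = pywt.idwt2((ll, (lh, hl, hh)), 'haar')
    return np.clip(hr_frame, 0, 255).astype(np.uint8)
```

In the parallel versions, a loop over frames (or over subband patches) like this one is the natural unit of work to distribute across CPU cores or CUDA threads.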
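Of the three metrics, the Edge Preservation Index is the least standardised; a hedged sketch under one common definition follows: the correlation coefficient between the Laplacian-filtered reference and reconstructed frames. The exact variant used in the evaluation may differ.

```python
import cv2
import numpy as np


def epi(reference: np.ndarray, reconstructed: np.ndarray) -> float:
    # High-pass (Laplacian) filtering isolates the edge content of each frame.
    ref_edges = cv2.Laplacian(reference.astype(np.float64), cv2.CV_64F)
    rec_edges = cv2.Laplacian(reconstructed.astype(np.float64), cv2.CV_64F)
    ref_edges -= ref_edges.mean()
    rec_edges -= rec_edges.mean()
    # Normalised cross-correlation of the edge maps; 1.0 = perfect preservation.
    num = (ref_edges * rec_edges).sum()
    den = np.sqrt((ref_edges ** 2).sum() * (rec_edges ** 2).sum())
    return float(num / den)
```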