Video stabilization, now widely adopted in imaging devices, has become a bottleneck for source camera identification (SCI) based on photo-response non-uniformity (PRNU) noise. The affine transformations applied in stabilized video, including scaling, translation, and rotation, distort the PRNU pattern to varying degrees across frames, degrading the accuracy and prolonging the matching time of SCI methods. To address these issues, we propose a stabilized-video source identification method that combines the undecimated dual-tree complex wavelet transform (UDTCWT) with the Bayesian adaptive direct search (BADS) algorithm. First, decoded frames are obtained by skipping the loop filter in the video codec, which preserves more PRNU. Then, a UDTCWT-based extraction algorithm filters more reliable PRNU from the decoded frames. Finally, the distorted PRNU is recovered by optimizing an affine transformation model with BADS. Experiments on public datasets show that the proposed method outperforms state-of-the-art methods in both matching speed and accuracy.
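The PRNU pipeline the abstract describes (residual extraction, fingerprint estimation, correlation-based matching) can be sketched as follows. This is a minimal illustration, not the paper's method: the UDTCWT-based denoiser is replaced here by a simple Gaussian filter as a stand-in, and all function names are hypothetical.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def extract_noise_residual(frame, sigma=1.0):
    """Noise residual W = I - denoise(I).

    A Gaussian filter stands in for the UDTCWT-based denoiser used in
    the paper (assumption for illustration only).
    """
    frame = frame.astype(np.float64)
    return frame - gaussian_filter(frame, sigma)

def estimate_fingerprint(frames):
    """Standard maximum-likelihood-style PRNU estimate:
    K = sum_i(W_i * I_i) / sum_i(I_i^2)."""
    num = np.zeros_like(frames[0], dtype=np.float64)
    den = np.zeros_like(frames[0], dtype=np.float64)
    for f in frames:
        f = f.astype(np.float64)
        num += extract_noise_residual(f) * f
        den += f ** 2
    return num / (den + 1e-8)

def ncc(a, b):
    """Normalized cross-correlation, a common SCI matching statistic."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() /
                 (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
```

In a stabilized-video setting, each query frame's residual would additionally be warped by a candidate affine transform before computing `ncc`, with the transform parameters found by an optimizer (BADS in the paper) that maximizes the correlation.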