Reversible information hiding can embed secret or sensitive information in the redundant information of a carrier image and completely restore the original image at the receiving end. Currently, the difference-histogram algorithm is among the most attractive approaches to reversible information hiding. However, it cannot balance embedding capacity and security well. To further improve both, this paper proposes a large-capacity reversible information hiding algorithm based on multiple difference histograms and Gray code. First, the original image is divided into blocks of equal size. The blocked image is then scrambled with Gray code to improve the system's security. Thereafter, a difference histogram is established for each block, and the zero value to the right of the peak is selected as the embedding position. Finally, the secret information is embedded. Experimental results show that the proposed algorithm significantly improves the embedding capacity of the carrier image while ensuring the security of both the carrier image and the secret information.
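The embedding step follows the familiar peak/zero histogram-shifting idea. Below is a minimal sketch, assuming pairwise pixel differences within a single raster-scanned block; the multi-block partitioning, Gray-code scrambling, and overflow handling described in the abstract are omitted, and all parameter choices are illustrative rather than the authors' exact procedure.

```python
import numpy as np

def embed_pairwise_diff_hist(pixels, bits):
    """Illustrative sketch only: peak/zero histogram shifting on pairwise
    pixel differences. Block partitioning, Gray-code scrambling, and
    overflow/underflow handling from the paper are not modeled here."""
    p = np.asarray(pixels, dtype=np.int32).copy()
    even, odd = p[0::2], p[1::2]
    n = min(len(even), len(odd))
    d = even[:n] - odd[:n]                        # one difference per pixel pair

    vals, counts = np.unique(d, return_counts=True)
    peak = int(vals[np.argmax(counts)])           # peak of the difference histogram
    zero = peak + 1                               # first empty bin to its right
    while zero in vals:
        zero += 1

    bit_iter = iter(bits)
    d_new = d.copy()
    d_new[(d > peak) & (d < zero)] += 1           # shift to open the bin at peak+1
    for i in np.flatnonzero(d == peak):           # embed one bit per peak difference
        d_new[i] = peak + next(bit_iter, 0)

    p[1::2][:n] = even[:n] - d_new                # only the odd-indexed pixels change
    return p, peak, zero                          # (peak, zero) are needed for extraction
```

The receiver recomputes the pairwise differences from the stego pixels, reads a 1 wherever the difference equals peak+1 and a 0 wherever it equals peak, shifts the intermediate bins back by one, and thereby restores the original pixels exactly.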
As the trustworthiness of multimedia data is challenged by editing tools, image forgery localization aims to identify regions in an image that have been modified. Although existing techniques provide reasonably good localization results, they must be retrained as new editing techniques emerge and depend heavily on ground-truth tampering localization maps. In this paper, we propose an attention-based fusion network that combines the RGB image and the noise residual and yields excellent results. The noise residual is commonly regarded as a camera-model fingerprint, and forgeries can be detected as deviations from its expected regular pattern. The model consists of three parts: feature extraction, attentional feature fusion, and feature output. The feature extraction module extracts RGB image features and noise residuals separately; the attentional feature fusion module combines them to suppress high-frequency components and to supplement and enhance model-related artifacts. Finally, the output module generates a one-channel image serving as the camera-model fingerprint. To avoid dependence on tampering localization maps, the model is trained on pairs of image patches coming from the same or different camera sensors by means of a Siamese network. Experimental results on several datasets show that the proposed technique successfully identifies modified regions, improves the quality of camera-model fingerprints, and achieves significantly better performance than existing techniques.
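As a rough illustration of the two-branch design, the PyTorch sketch below pairs an RGB branch with a noise-residual branch, fuses them with a simple channel-attention gate, and produces a one-channel fingerprint map; the channel counts, the specific attention form, and the contrastive pair loss are assumptions for illustration, not the authors' exact architecture or training objective.

```python
import torch
import torch.nn as nn

class TwoBranchFusionNet(nn.Module):
    """Hedged sketch of a two-branch RGB / noise-residual network with a
    simple attention-based fusion producing a one-channel fingerprint map."""
    def __init__(self, ch=32):
        super().__init__()
        def branch(in_ch):
            return nn.Sequential(
                nn.Conv2d(in_ch, ch, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            )
        self.rgb_branch = branch(3)          # features from the RGB image
        self.noise_branch = branch(1)        # features from the noise residual
        # channel-attention gate over the concatenated features (assumed form)
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(2 * ch, 2 * ch, 1), nn.Sigmoid(),
        )
        self.head = nn.Conv2d(2 * ch, 1, 1)  # one-channel fingerprint output

    def forward(self, rgb, noise):
        f = torch.cat([self.rgb_branch(rgb), self.noise_branch(noise)], dim=1)
        return self.head(f * self.attn(f))

def pair_loss(fp_a, fp_b, same_camera, margin=1.0):
    """Hypothetical Siamese-style pair loss: fingerprints of patches from the
    same camera are pulled together, those from different cameras pushed apart."""
    dist = (fp_a - fp_b).flatten(1).pow(2).mean(dim=1)
    return torch.where(same_camera, dist, (margin - dist).clamp(min=0)).mean()
```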
Makeup transfer aims to extract a specific makeup style from one face and transfer it to another, which can be widely used in portrait beautification and cosmetics marketing. Existing methods can transfer the entire facial makeup, but the quality of the result suffers when the two images are mismatched. In this paper, we propose a facial makeup transfer network based on the Laplacian pyramid, which better preserves the facial structure of the source image and achieves high-quality transfer results. The model consists of three parts: makeup feature extraction, facial structure feature extraction, and makeup fusion. The makeup feature extraction part extracts the facial makeup from the reference image. The facial structure feature extraction part extracts the facial structure from the source image; to avoid losing facial detail during this step, we adopt a Laplacian-pyramid-based method. The makeup fusion part then fuses the facial makeup with the facial structure features. Extensive experiments on the MT dataset show that this method transfers makeup successfully without changing the original facial structure and achieves state-of-the-art performance across various makeup styles.
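To illustrate the Laplacian-pyramid step used for preserving facial detail, here is a minimal OpenCV sketch of the decomposition and its exact reconstruction; the number of levels and the use of cv2.pyrDown/cv2.pyrUp are assumptions for illustration and are not taken from the paper.

```python
import cv2
import numpy as np

def laplacian_pyramid(img, levels=4):
    """Build a Laplacian pyramid: band-pass detail at each level plus a
    low-frequency residual (illustrative sketch, level count assumed)."""
    img = img.astype(np.float32)
    gaussians = [img]
    for _ in range(levels):
        gaussians.append(cv2.pyrDown(gaussians[-1]))
    laplacians = []
    for i in range(levels):
        up = cv2.pyrUp(gaussians[i + 1],
                       dstsize=(gaussians[i].shape[1], gaussians[i].shape[0]))
        laplacians.append(gaussians[i] - up)   # high-frequency detail at level i
    laplacians.append(gaussians[-1])           # low-frequency residual
    return laplacians

def reconstruct(laplacians):
    """Invert the decomposition: upsample and add the detail back level by level."""
    img = laplacians[-1]
    for lap in reversed(laplacians[:-1]):
        img = cv2.pyrUp(img, dstsize=(lap.shape[1], lap.shape[0])) + lap
    return img
```

Because the decomposition is exactly invertible, the high-frequency levels can carry the source face's structural detail through the network while the makeup is blended into the coarser levels.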