Physical distortions (such as torn-off regions and scratches) are commonly seen in historical documents. Their presence disturbs downstream processes such as optical character recognition (OCR) and layout analysis, which reduces the effectiveness of automatic document information retrieval. A proper characterization of such physical noise is an important step in the development of historical document denoising methods. In this paper, we tackle noise characterization with Bayesian labeling, where noise and text pixels are characterized in terms of likelihood densities. In particular, we employ two different significance measures, formulated using pointwise and cone-of-influence (COI) approximations of local Lipschitz regularity in the wavelet domain. We evaluate the effectiveness of the proposed noise characterization using a binary noise-versus-text classification model, and show that a naive binary classifier using average point ratio (APR) or average cone ratio (ACR) distribution densities classifies noise and text pixels effectively, with encouraging overall success rates. This encourages future work on Bayesian frameworks for the recognition of physical distortions in historical documents.
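To make the labeling step concrete, here is a minimal sketch of a naive Bayesian noise-versus-text classifier driven by class-conditional likelihood densities. The histogram-based density estimation, the feature range, and the synthetic training data are illustrative assumptions standing in for the APR/ACR measures described above, not the paper's exact procedure.

```python
# Minimal sketch of naive Bayesian noise-vs-text pixel labeling.
# The scalar feature stands in for APR or ACR values; densities are
# estimated as normalized histograms (an assumption for illustration).
import numpy as np

def fit_likelihood(feature_values: np.ndarray, bins: np.ndarray) -> np.ndarray:
    """Estimate a class-conditional density as a normalized histogram."""
    hist, _ = np.histogram(feature_values, bins=bins, density=True)
    return hist + 1e-12  # small floor to avoid zero likelihoods

def classify(features, bins, p_noise, p_text, prior_noise=0.5):
    """Label each pixel 1 (noise) or 0 (text) by the log-posterior ratio."""
    idx = np.clip(np.digitize(features, bins) - 1, 0, len(p_noise) - 1)
    log_ratio = (np.log(p_noise[idx]) + np.log(prior_noise)
                 - np.log(p_text[idx]) - np.log(1.0 - prior_noise))
    return (log_ratio > 0).astype(np.uint8)

# Usage with synthetic training data in place of real APR/ACR samples:
bins = np.linspace(0.0, 1.0, 65)
p_noise = fit_likelihood(np.random.beta(2, 5, 10_000), bins)  # stand-in noise values
p_text = fit_likelihood(np.random.beta(5, 2, 10_000), bins)   # stand-in text values
labels = classify(np.random.rand(512 * 512), bins, p_noise, p_text)
```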
This paper is an introduction to the analysis of multispectral recordings of paintings. First, we give an overview of the advantages of multispectral image analysis over more traditional techniques: the bands residing in the visible domain provide an accurate measurement of the color information, which can be used not only for analysis but also for conservation and archival purposes (i.e., preserving the artistic patrimony in a digital library). Second, inspection of the multispectral imagery by art experts and art conservators has shown that combining the information present in the spectral bands residing inside and outside the visible domain can lead to a richer analysis of paintings. In the remainder of the paper, practical applications of multispectral analysis are demonstrated, where we consider the acquisition of thirteen different high-resolution spectral bands: nine reside in the visible domain, one in the near ultraviolet and three in the infrared. The paper illustrates the promising future of multispectral analysis as a non-invasive tool for acquiring data that cannot be obtained by visual inspection alone and that is highly relevant to art preservation, authentication and restoration. The demonstrated applications include the detection of restored areas and the detection of aging cracks.
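As a sketch of how bands inside and outside the visible domain might be combined in practice, the following builds an infrared/visible false-color composite, a common way of making retouched areas stand out. The band indices, array layout and contrast stretch are illustrative assumptions, not the acquisition setup used in the paper.

```python
# Minimal false-color composite: map (IR, red, green) bands onto the
# (R, G, B) display channels of a co-registered multispectral stack.
import numpy as np

def false_color_composite(bands: np.ndarray, ir: int, red: int, green: int) -> np.ndarray:
    """`bands` is a (num_bands, height, width) stack of registered images."""
    composite = np.stack([bands[ir], bands[red], bands[green]], axis=-1).astype(np.float64)
    # Stretch each channel independently to [0, 1] for display.
    lo = composite.min(axis=(0, 1), keepdims=True)
    hi = composite.max(axis=(0, 1), keepdims=True)
    return (composite - lo) / np.maximum(hi - lo, 1e-12)

# Usage with a thirteen-band stack as described above (indices hypothetical):
stack = np.random.rand(13, 256, 256)   # placeholder for registered recordings
img = false_color_composite(stack, ir=10, red=6, green=4)
```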
In this paper, the effect of binary labelling of bins in lattice quantization index modulation techniques is studied. The problem can be solved in one dimension using Gray codes, but it is not straightforward in higher dimensions, where each bin has neighbours in multiple directions. We show the impact of different labellings on the overall performance of two-dimensional lattice quantization index modulation watermarking systems, and we present heuristic labelling solutions for these systems. Our analysis includes (1) robustness tests against JPEG and JPEG 2000 compression and (2) transmission over an AWGN channel.
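For intuition about the one-dimensional case, here is a minimal sketch of scalar QIM with Gray-coded bin labels: a perturbation that pushes a sample into an adjacent bin flips at most one message bit. The step size, symbol alphabet and search window are illustrative assumptions; the paper's actual subject, labelling two-dimensional lattices, admits no such canonical solution.

```python
# Toy 1-D QIM with Gray-labelled quantization bins.
import numpy as np

def gray(i):
    """Reflected binary Gray code of bin index i (works elementwise)."""
    return i ^ (i >> 1)

def qim_embed(x: float, symbol: int, delta: float, n_labels: int) -> float:
    """Quantize x to the centre of the nearest bin whose Gray label is `symbol`."""
    base = int(np.floor(x / delta))
    # Search a window of neighbouring bins for those carrying the right label.
    candidates = np.arange(base - n_labels, base + n_labels + 1)
    valid = candidates[gray(candidates % n_labels) == symbol]
    centers = (valid + 0.5) * delta
    return float(centers[np.argmin(np.abs(centers - x))])

def qim_extract(y: float, delta: float, n_labels: int) -> int:
    """Read the Gray label of the bin containing y."""
    return int(gray(int(np.floor(y / delta)) % n_labels))

# Usage: embed 2-bit symbols (n_labels = 4) with step delta = 1.0.
y = qim_embed(3.7, symbol=2, delta=1.0, n_labels=4)
assert qim_extract(y, delta=1.0, n_labels=4) == 2
```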
The work described here fits in the context of a larger project on the objective and relevant characterization of
paintings and painting canvas through the analysis of multimodal digital images. We captured, amongst others,
X-ray images of different canvas types, characterized by a variety of textures and weave patterns (fine and rougher
texture; single thread and multiple threads per weave), including raw canvas as well as canvas processed with
different primers.
In this paper, we study how to characterize the canvas by extracting global features such as the average thread width, the average distance between successive threads (i.e., the inverse of the thread density) and the spatial distribution of primers. These features are then used to construct a generic model of the canvas structure. Second, we investigate whether we can identify different pieces of canvas coming from the same bolt, an important element for dating, authentication and identification of restorations. Both the global characteristics mentioned earlier and some local properties (such as deviations from the average pattern model) are used to compare the "fingerprint" of different pieces of cloth coming from the same or different bolts.
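As an illustration of one such global feature, the sketch below estimates the weave frequency (and hence the average thread spacing) of an X-ray image from the dominant peak of a one-dimensional averaged Fourier spectrum. The averaged-profile approach and the low-frequency cutoff are illustrative assumptions, not necessarily the method used in the project.

```python
# Estimate thread density along one axis of a canvas X-ray image.
import numpy as np

def thread_density(xray: np.ndarray, axis: int = 0) -> float:
    """Return the weave frequency (threads per pixel) along one axis."""
    # Average intensity profiles perpendicular to the threads, remove the mean.
    profile = xray.mean(axis=1 - axis)
    profile = profile - profile.mean()
    spectrum = np.abs(np.fft.rfft(profile))
    freqs = np.fft.rfftfreq(profile.size)
    # Ignore the lowest frequencies (illumination gradients, priming layer).
    spectrum[freqs < 0.01] = 0.0
    return float(freqs[np.argmax(spectrum)])

# Usage on a synthetic canvas with ~12 threads per 100 pixels:
rows = np.sin(2 * np.pi * 0.12 * np.arange(512))
canvas = np.tile(rows[:, None], (1, 512)) + 0.1 * np.random.randn(512, 512)
density = thread_density(canvas, axis=0)   # ~0.12 threads/pixel
spacing = 1.0 / density                    # average thread spacing in pixels
```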
KEYWORDS: Digital watermarking, Scalable video coding, Multimedia, Spatial resolution, Fourier transforms, Video compression, Video, Image registration, Quantization, Signal to noise ratio
The rapid dissemination of media technologies has led to an increase in unauthorized copying and distribution
of digital media. Digital watermarking, i.e. embedding information in the multimedia signal in a robust and
imperceptible manner, can tackle this problem. Recently, there has been a huge growth in the number of
different terminals and connections that can be used to consume multimedia. To tackle the resulting distribution
challenges, scalable coding is often employed. Scalable coding allows the adaptation of a single bit-stream to
varying terminal and transmission characteristics. As a result of this evolution, watermarking techniques that
are robust against scalable compression become essential in order to control illegal copying. In this paper, a
watermarking technique resilient against scalable video compression using the state-of-the-art H.264/SVC codec
is therefore proposed and evaluated.
In this paper we concentrate on robust image watermarking (i.e., capable of resisting common signal processing operations and intentional attacks aiming to destroy the watermark) based on image features. Kutter et al.7 argued that well-chosen image features survive admissible image distortions and hence can benefit the watermarking process. These image features are used as location references for the region in which the watermark is embedded. To realize the latter, we make use of previous work16 in which a ring-shaped region, centered around an image feature, is determined for watermark embedding. We propose to select a specific sequence of image features according to strict criteria, ensuring that the chosen features lie far apart and that the ring-shaped embedding regions do not overlap. Nevertheless, such a setup remains prone to insertion, deletion and substitution errors. Therefore, we apply a two-step coding scheme similar to the one employed by Coumou and Sharma4 for speech watermarking. Our contribution lies in extending Coumou and Sharma's one-dimensional scheme to the two-dimensional setup associated with our watermarking technique. The two-step coding scheme concatenates an outer Reed-Solomon error-correction code with an inner, blind, synchronization mechanism.
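To make the concatenation concrete, here is a minimal sketch of the two-step idea under simplifying assumptions: the codeword is split into blocks, each notionally carried by one embedding region, and the outer Reed-Solomon code absorbs regions that disappear (deletions, treated as erasures) or are corrupted (substitutions). The explicit block-index tagging is a stand-in for the paper's blind synchronization mechanism, which is not reproduced here; the third-party reedsolo package (version >= 1.0 assumed) supplies the outer code.

```python
# Outer Reed-Solomon code over indexed payload blocks.
# Requires: pip install reedsolo
from reedsolo import RSCodec

BLOCK = 4                      # payload bytes per embedding region (assumed)
rsc = RSCodec(16)              # outer code: corrects 8 errors or 16 erasures

def tx(message: bytes) -> list:
    """Outer-encode, then split the codeword into indexed blocks."""
    cw = bytes(rsc.encode(message))
    return [(i // BLOCK, cw[i:i + BLOCK]) for i in range(0, len(cw), BLOCK)]

def rx(blocks: list, cw_len: int) -> bytes:
    """Re-assemble by block index; missing blocks become RS erasures."""
    cw, seen = bytearray(cw_len), set()
    for idx, payload in blocks:
        cw[idx * BLOCK:idx * BLOCK + len(payload)] = payload
        seen.add(idx)
    erased = [i for i in range(cw_len) if i // BLOCK not in seen]
    # reedsolo >= 1.0 returns (decoded, decoded+ecc, errata positions).
    return bytes(rsc.decode(bytes(cw), erase_pos=erased)[0])

# Usage: drop one block (deletion) and corrupt another (substitution).
msg = b"payload"
sent = tx(msg)
cw_len = len(bytes(rsc.encode(msg)))
received = [b for b in sent if b[0] != 2]     # block 2 deleted
received[1] = (1, b"\x00" * BLOCK)            # block 1 corrupted
assert rx(received, cw_len) == msg
```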
The Joint Photographic Experts Group (JPEG) committee is a joint working group of the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC). The word "Joint" in JPEG, however, does not refer to the joint efforts of ISO and IEC, but to the fact that the JPEG activities are the result of an additional collaboration with the International Telecommunication Union (ITU). Inspired by technology and market evolutions, i.e. the advent of wavelet technology and the need for additional functionality such as scalability, the JPEG committee launched a new standardization process in 1997 that resulted in a new standard in 2000: JPEG 2000. JPEG 2000 is a collection of standard parts which together shape the complete toolset. Currently, the JPEG 2000 standard is composed of 13 parts. In this paper, we review these parts and additionally address recent standardization initiatives within the JPEG committee, such as JPSearch, JPEG-XR and AIC.
In the past decade the use of digital data has increased significantly. The advantages of digital data include easy editing; fast, cheap and cross-platform distribution; and compact storage. The most crucial disadvantage is the ease of unauthorized copying and the resulting copyright issues, through which authors and license holders can suffer considerable financial losses. Many inexpensive methods are readily available for editing digital data and, unlike for analog information, reproduction in the digital case is simple and lossless. Hence, there is great interest in developing technology that helps to protect the integrity of a digital work and the copyrights of its owners. Watermarking, the embedding of a signal (known as the watermark) into the original digital data, is one method that has been proposed for the protection of digital media elements such as audio, video and images. In this article, we examine watermarking schemes for still images based on selective quantization of the coefficients of a wavelet-transformed image, i.e. sparse quantization-index modulation (QIM) watermarking. Different grouping schemes for the wavelet coefficients are evaluated and experimentally verified for robustness against several attacks. Wavelet tree-based grouping schemes yield slightly improved performance over block-based grouping schemes. Additionally, the impact of deploying error-correction codes on the most promising configurations is examined. The use of BCH codes (Bose, Ray-Chaudhuri, Hocquenghem) results in improved robustness as long as the correcting capacity of the codes is not exceeded (cliff effect).
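As a concrete reference point, here is a minimal sketch of QIM embedding in a wavelet subband: each selected coefficient is quantized onto one of two interleaved lattices according to its message bit, and extraction picks the nearer lattice. The Haar wavelet, subband choice, step size and coefficient selection are illustrative assumptions rather than the grouping schemes evaluated in the article. Requires numpy and PyWavelets.

```python
# Minimal wavelet-domain QIM embed/extract sketch.
import numpy as np
import pywt

DELTA = 8.0  # quantization step: larger = more robust, more distortion

def qim(coeffs: np.ndarray, bits: np.ndarray) -> np.ndarray:
    """Quantize each coefficient onto the lattice selected by its bit."""
    dither = (bits * 0.5 - 0.25) * DELTA          # two interleaved lattices
    return np.round((coeffs - dither) / DELTA) * DELTA + dither

def embed(image: np.ndarray, bits: np.ndarray) -> np.ndarray:
    cA, (cH, cV, cD) = pywt.dwt2(image.astype(float), "haar")
    flat = cH.ravel()
    flat[:bits.size] = qim(flat[:bits.size], bits)  # mark the first coefficients
    return pywt.idwt2((cA, (flat.reshape(cH.shape), cV, cD)), "haar")

def extract(image: np.ndarray, n_bits: int) -> np.ndarray:
    _, (cH, _, _) = pywt.dwt2(image.astype(float), "haar")
    flat = cH.ravel()[:n_bits]
    # Decide per coefficient which of the two lattices lies closer.
    d0 = np.abs(flat - qim(flat, np.zeros(n_bits, dtype=int)))
    d1 = np.abs(flat - qim(flat, np.ones(n_bits, dtype=int)))
    return (d1 < d0).astype(int)

# Usage: embed 64 bits into a random test image and read them back.
rng = np.random.default_rng(7)
img = rng.uniform(0, 255, (128, 128))
bits = rng.integers(0, 2, 64)
marked = embed(img, bits)
assert np.array_equal(extract(marked, 64), bits)
```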