Open access peer-reviewed chapter

Novel Methods for Forensic Multimedia Data Analysis: Part II

Written By

Petra Perner

Reviewed: 14 April 2020 Published: 02 June 2020

DOI: 10.5772/intechopen.92548

From the Edited Volume

Digital Forensic Science

Edited by B Suresh Kumar Shetty and Pavanchand Shetty H


Abstract

We propose a new concept and new processing techniques for multimedia data from varied sources. We address speech, video and images, handwriting and text documents. The methods and techniques will form a toolkit that can be used for different cases in a court of law in order to extract information from different multimedia sources. In addition to the data mentioned above, social media (e.g., Facebook and Twitter) provide multimedia data in new formats. Such data allow the investigator to identify and compare objects, events or persons based on data properties, including biometric features or more symbolic features that point to coincidences and anomalies. We continue our work from Part I of novel methods for forensic multimedia data analysis with a description of related work and a proposal of the methods and techniques we are developing beyond the state of the art for handwriting, multimedia feature extraction, novelty detection, legal aspects and cloud computing. Then, we describe the tasks that should be solved by the system for the different multimedia data. We describe the expected results of our proposed toolkit. Finally, we summarize the objective of our work.

Keywords

  • multimedia forensic data analysis
  • standardization of forensic data analysis
  • video and image enhancement
  • video analysis
  • image analysis
  • speech analysis
  • case-based reasoning
  • multimedia feature extraction
  • handwriting
  • Twitter data analysis
  • novelty detection
  • legal aspects

1. Introduction

The objective of this work is to provide novel methods and techniques for the analysis of forensic multimedia data. These methods and techniques should form a novel toolkit for automatic forensic multimedia data analysis. The data modalities considered in the proposed work are images and videos, text, handwriting, speech and audio signals, social media data, log data and genetic data. The integration of methods for all these different data modalities in one toolkit should allow the cross-analysis of these data and the detection of events by interlinking between them. The proposed methods will focus on standard forensic tasks, e.g., identification of events, persons or groups, and device recognition. Together with the end users and the police forces, new standard tasks will be worked out during the project and will give new input to the standardization of forensic data analysis.

The proposed novel methods and techniques will consider all aspects of multimedia data analysis such as device identification and trustworthiness of the data, signal enhancement, preprocessing, feature extraction, signal and data analysis and interpretation.

This chapter is a continuation of Part I of novel methods for forensic multimedia data analysis [1]. The main aspects of multimedia forensic data analysis, the background and the system architecture are described in Part I. Related work and the proposed methods and techniques that go beyond the state of the art for video, images, speech, multimedia feature extraction, text and Twitter data analysis are also described in Part I. Here we continue the description of related work and of the proposed methods and techniques that go beyond the state of the art for handwriting, novelty detection, legal aspects and cloud computing in Section 2. The tasks the system has to solve based on the different multimedia data, the case-based reasoning and the legal aspects are explained in Section 3. In Section 4, we describe what kind of results we expect from our system and our work. Finally, we give conclusions in Section 5.


2. Related work to be continued from Part I

2.1 Handwriting recognition

2.1.1 State of the art

Criminologists and forensic document examiners have been investigating clues to identify handwriting for decades [2]. However, identification by human experts has the drawbacks of nonobjective measurements and nonreproducible decisions, besides the cost of expert training. Recent attempts at computer-supported handwriting identification aim to address this task using computer vision and pattern recognition techniques. The Forensic Information System for Handwriting (FISH) developed by German law enforcement was followed by more recent systems, including WANDA and CEDAR-FOX [3, 4, 5].

As an initial step toward identification, the documents must be processed to handle noise and to enhance the image [6]. Then similarity measures have to be defined that identify all the writings of one person as the same and differentiate them from the writings of others [7]. Writer identification systems make use of the individuality of handwriting and try to capture its characteristics at a macro level through the style of handwriting and at a micro level through the shapes of the characters.

The literature is mostly dominated by character-based systems. HCLUS prototype matching techniques [3], dynamic time warping-based techniques [8, 9] and structural features [10] have recently been proposed for matching allographs (prototypical character shapes). The disadvantage of character-based systems is the requirement for character segmentation and character classification (whether the character is y or g, etc.) prior to finding the similarities and dissimilarities of a specific character. However, both tasks are challenging and error-prone for unconstrained handwriting. The shape of the characters may change with the size of the writing or with pen size and type. The characters can be written differently in a different context, i.e., in different words, and that requires consideration of the preceding and following characters. Most importantly, it is very difficult to apply such systems to large datasets.

2.1.2 Beyond the state of the art

When the documents in forensic applications are considered, the preprocessing step gains importance due to the variety of artifacts and degradations that are generated intentionally to hide evidence or that result from environmental conditions. The enhancement methods must be applied carefully so as not to remove important cues while reducing the noise. The methods that will be developed will be adaptive and will learn from large amounts of data, incorporating expert feedback when necessary.

Inspired by cognitive studies that have observed the human tendency to read whole words at a time [11], recent studies in the word retrieval literature have proposed word-spotting techniques as an alternative to character-based systems [12, 13, 14, 15, 16]. As a novel direction in forensic handwriting identification, and to be able to work at large scale, we will follow the word-spotting direction and describe and match words rather than characters. Generic image features will be used to describe the word images as a whole. This will enable us to capture the writing habits and styles of individuals as well as character variations in different contexts.

Going beyond writer identification and verification, the recognition of printed and handwritten text from documents (such as letters and notes) and of text in photographs taken in the environment (such as the labels of shops, buildings or streets and advertisements on boards) will be another issue considered. While commercially available optical character recognition systems are very successful on printed documents, recognition of handwritten text continues to be a challenge [17, 18]. More importantly, the recognition of words in unconstrained settings or "in the wild" is still an interesting problem [19]. Besides the scene and object detection and recognition methods that will be developed, recognition of text in images, ranging from license plates to shop labels and even text on clothes, will provide important cues for the identification of places.

Both identification and recognition will be approached in a way similar to generic object recognition, and word images will be described using advanced image descriptors to be able to recognize and identify text in multi-author, multi-language cases. Besides the SIFT [20]-based and k-AS [21]-based features that we considered in our previous studies [22, 23, 24], SURF [25], FREAK [26] and Shape Context [27]-based features will be adopted for word description. Together with the statistical analysis of the occurrence of some features, the spatial layout of the extracted features will also be considered.
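
As an illustration, the following is a minimal sketch of holistic word matching with SIFT descriptors and a ratio test, assuming the inputs are already-cropped grayscale word images; the file names, ratio threshold and normalization are illustrative assumptions, not the project's final similarity measure.

```python
import cv2

sift = cv2.SIFT_create()
matcher = cv2.BFMatcher(cv2.NORM_L2)

def word_similarity(word_img_a, word_img_b, ratio=0.75):
    """Score two word images by the fraction of distinctive SIFT matches (0 = dissimilar)."""
    _, desc_a = sift.detectAndCompute(word_img_a, None)
    _, desc_b = sift.detectAndCompute(word_img_b, None)
    if desc_a is None or desc_b is None:
        return 0.0
    matches = matcher.knnMatch(desc_a, desc_b, k=2)
    good = [p[0] for p in matches if len(p) == 2 and p[0].distance < ratio * p[1].distance]
    return len(good) / max(len(desc_a), len(desc_b))

# hypothetical pre-segmented word crops from two documents
img_a = cv2.imread("word_a.png", cv2.IMREAD_GRAYSCALE)
img_b = cv2.imread("word_b.png", cv2.IMREAD_GRAYSCALE)
print(word_similarity(img_a, img_b))
```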

Large volumes of data will be exploited through data mining techniques in order to learn the within-class similarities and between-class differences. Novel similarity measures will be developed. The main goal will be to provide the experts with a sufficient amount of data for validation. The main challenge will be not to miss any important data while reducing the huge volume. Therefore, the similarity methods to be developed should be both robust and fast. We will benefit from the expertise of partners in different areas to design new similarity measures for handwriting matching.

2.2 Novelty detection

2.2.1 State of the art

The aim of novelty detection is to recognize inputs that cannot be properly represented by the information provided by previous inputs (i.e., a nominal distribution). Recognizing that an input differs in some respect from previous inputs is a very important capability of a learning system. In classification problems, novelty detection is particularly useful when a relevant class is under-represented in the data, so that a classifier cannot be trained to reliably recognize that class, or when hierarchical classifiers trained on different concept information disagree on the output.

The goal of novelty detection is twofold: to be as accurate as possible in detecting inputs that do deviate from the nominal distribution (true positives) and to predict how many normal inputs will be erroneously flagged as positives (false positives). Novelty detection is also known as one-class classification [28] or learning from only positive (or only negative) examples. The standard approach has been to assume that novelties are outliers with respect to the nominal distribution and to build a novelty detector by estimating a level set of the nominal density. This approach allows fixing a threshold for the acceptance of new data while keeping a degree of control over the number of false alarms raised. Within this framework, novelty detection can be interpreted as a binary classification problem. Several approaches have been used to tackle this problem: statistical methods, neural networks and support vector methods (see [29, 30, 31, 32] for good reviews of these techniques). Bayesian methods have been used to provide a nonparametric estimation of the probability distribution [33], and case-based reasoning (CBR), based on Bayesian decision theory, has also been utilized. A common drawback of all these approaches is the assumption that novelties are uniformly distributed on the support of the nominal distribution, which is not true in most cases, especially when the feature space dimension is high.
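
As a minimal illustration of the level-set view, the sketch below trains a one-class SVM on nominal feature vectors and flags deviating inputs; the synthetic data and the nu parameter (which bounds the fraction of false alarms on the training set) are assumptions for demonstration only.

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
nominal = rng.normal(loc=0.0, scale=1.0, size=(500, 8))      # "normal" training inputs
detector = OneClassSVM(kernel="rbf", gamma="scale", nu=0.05)  # nu caps the false-alarm rate
detector.fit(nominal)

new_inputs = np.vstack([rng.normal(0, 1, (5, 8)),             # likely nominal
                        rng.normal(6, 1, (5, 8))])            # likely novel
labels = detector.predict(new_inputs)                         # +1 = nominal, -1 = novelty
print(labels)
```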

Novelty detection has already been applied successfully to single modalities of forensic multimedia data. For instance, the detection and classification of abnormal events in a surveillance video has been studied in [34, 35, 36]. In [33], novelty detection is applied to online document clustering, and in [37], novelty detection is applied to image sequences.

A new and promising approach to novelty detection in audiovisual data has been proposed in [38]. In this approach, novelty detection is not the negative output of multiple classifiers but the disagreement of several hierarchical classifiers trained on different but hierarchically related concepts. Here, the novelty is represented not by a fully new item but by relevant changes from previously seen items. In forensic multimedia data, this situation is very common when only one modality is affected.

According to [33], several factors make the novelty detection problem very challenging:

  • The definition of a normal region that encompasses every possible normal behavior is very difficult. In addition, the boundary between normal and anomalous behavior is often not precise. Thus, an anomalous observation that lies close to the boundary can actually be normal and vice versa.

  • When anomalies are the result of malicious actions, the malicious adversaries often adapt themselves to make the anomalous observations appear normal, thereby making the task of defining normal behavior more difficult.

  • In many domains, normal behavior keeps evolving, and a current notion of normal behavior might not be sufficiently representative in the future.

  • The exact notion of anomaly differs between application domains. For example, the rate of change relevant for novelty may be different in each application. Thus, applying a technique developed in one domain to another is not straightforward.

  • The availability of labeled data for training/validation of models used by anomaly detection techniques is usually a major problem.

  • Often, the data contain noise that tends to be similar to the actual anomalies and hence is difficult to distinguish and remove.

2.2.2 Beyond the state of the art

We will consider novelty detection as a CBR problem [34]. The CBR-based novelty detection will consist of successively adapting or evolving the previously obtained solutions, taking the data properties, the user’s needs and any other prior knowledge into account. We will use a combination of statistical and similarity-based methods as the solution to the problems underlying the CBR methodology. Our proposed scheme differs from existing methodologies for novelty detection [29, 30, 39, 40, 41] since it can perform novelty detection and handling simultaneously and also considers the incremental nature of the data.

2.3 Legal aspects

2.3.1 State of the art

The normative compliance of the methodologies and tools proposed by the project will be assessed by reference to the European and national legal framework on data protection and privacy [42].

Data protection is a fundamental right in Europe, enshrined in Article 8 of the Charter of Fundamental Rights of the European Union as well as in Article 16(1) of the Treaty on the Functioning of the European Union (TFEU).

The central legislative instrument for the protection of personal data in Europe is the EU’s 1995 Directive. In order to achieve a complete revision of the entire data protection framework, more consistent with changes in the single market and with the stronger need to ensure the security of European citizens, the European Commission launched public consultations on data protection beginning in 2009 and engaged in an intensive dialog with stakeholders. On 4 November 2010, the Commission published a communication on a comprehensive approach to personal data protection in the European Union that sets out the main themes of the reform. “After assessing the impacts of different policy options, the European Commission is now proposing a strong and consistent legislative framework across Union policies, enhancing individuals’ rights, the Single Market dimension of data protection and cutting red tape for businesses”. The Commission proposes that the new framework should consist of:

  • A Regulation (replacing Directive 95/46/EC) setting out a general EU framework for data protection

  • A Directive (replacing Framework Decision 2008/977/JHA) setting out rules on the protection of personal data processed for the purposes of prevention, detection, investigation or prosecution of criminal offenses and related judicial activities

By proposing a specific Directive regulating the use of personal data for criminal investigations, the EU legislator acknowledges the importance of regulating, by means of a dedicated legislative instrument, the specific theme of personal data processing in criminal investigations, a theme that was left aside by Directive 95/46 and only partially regulated by the 2008 Framework Decision [43, 44]. The latter left to national legislation all decisions about the legitimacy of processing personal data for the purpose of crime detection, thus preventing a coordinated European intervention against criminality.

The new Directive intends to make a distinction between the fundamental (but not “absolute”) right to data protection and its social profile in the light of achieving global security. The EU legislator states that “The right to the protection of personal data, enshrined in Article 8 of the Charter of Fundamental Rights and in Article 16(1) TFEU, requires the same level of data protection throughout the Union” and that “According to the principle of subsidiarity (Article 5(3) TEU), action at Union level shall be taken only if and in so far as the objectives envisaged cannot be achieved sufficiently by Member States, but can rather, by reason of the scale or effects of the proposed action, be better achieved by the Union”.

The need to respect the sovereignty principle explains why the EU chose the instrument of a Directive, which leaves more flexibility to Member States, instead of adopting a Regulation, which would have a stronger impact on national legislation on privacy protection [45].

The purpose of the new rules is highlighted in Article 1: ‘Subject matter and objectives. 1. This Directive lays down the rules relating to the protection of individuals with regard to the processing of personal data by competent authorities for the purposes of the prevention, investigation, detection or prosecution of criminal offenses or the execution of criminal penalties. 2. In accordance with this Directive, Member States shall: (a) protect the fundamental rights and freedoms of natural persons and in particular their right to the protection of personal data; and (b) ensure that the exchange of personal data by competent authorities within the Union is neither restricted nor prohibited for reasons connected with the protection of individuals with regard to the processing of personal data.’

2.3.2 Beyond the state of the art

The main innovations concern the following:

  • The introduction of definitions of ‘personal data breach,’ ‘genetic data,’ ‘biometric data’ and ‘competent authorities’ (based on Article 87 TFEU and Article 2(h) of Framework Decision 2008/977/JHA) and of a ‘child,’ based on the UN Convention on the Rights of the Child

  • The distinction between different categories of data subjects (Article 5): ‘Member States shall provide that, as far as possible, the controller makes a clear distinction between personal data of different categories of data subjects, such as:

    1. persons with regard to whom there are serious grounds for believing that they have committed or are about to commit a criminal offense;

    2. persons convicted of a criminal offense;

    3. victims of a criminal offense, or persons with regard to whom certain facts give reasons for believing that he or she could be the victim of a criminal offense;

    4. third-parties to the criminal offense, such as persons who might be called on to testify in investigations in connection with criminal offenses or subsequent criminal proceedings, or a person who can provide information on criminal offenses, or a contact or associate to one of the persons mentioned in (a) and (b); and

    5. persons who do not fall within any of the categories referred to above.’

2.4 Cloud computing

2.4.1 State of the art

Many of the tasks, such as video analysis, text mining or case-based reasoning, are very compute- and data-intensive. Processing large amounts of multimedia data for police investigation purposes must take scalability and performance requirements into account in order to be usable in a professional context. Cloud computing [46] has emerged as a good model for providing compute and storage resources on demand. To benefit from these available resources, applications that are deployed in the cloud need to be designed to scale and to be able to benefit from parallel processing of data with approaches such as map-reduce [47].
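
A minimal map-reduce-style sketch of this pattern is shown below, assuming the forensic corpus is a list of text documents; the same split-map-shuffle-reduce structure is what scales out across a cluster.

```python
from collections import Counter
from functools import reduce
from multiprocessing import Pool

def map_count(document: str) -> Counter:
    """Map step: emit term counts for one document."""
    return Counter(document.lower().split())

def reduce_counts(a: Counter, b: Counter) -> Counter:
    """Reduce step: merge partial term counts."""
    a.update(b)
    return a

if __name__ == "__main__":
    corpus = ["suspect seen near station",
              "station camera footage reviewed",
              "suspect identified"]                      # illustrative documents
    with Pool(processes=2) as pool:
        partials = pool.map(map_count, corpus)           # map phase in parallel
    totals = reduce(reduce_counts, partials, Counter())  # reduce phase
    print(totals.most_common(3))
```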

2.4.2 Beyond the state of the art

In this proposal, we will advance the state of the art by designing algorithms for video analysis, text mining and case-based reasoning that are scalable and can benefit from parallel processing for the compute- and data-intensive tasks. This will allow the system to deal with varying workloads for large organizations and provide analysis response times [48] in line with police investigation requirements.


3. Tasks to be solved

The functional and requirements analysis of the methods will be worked out, as well as the standardization task. This work should ensure that the developed tools meet the requirements of the end users and the police forces, and it is therefore considered a key work package.

For each of the considered multimedia resources, the special needs related to the automated processing of the specific data are considered. Therefore, five tasks are related to the treatment of images and videos, text, handwriting and audio and speech signals. Feature extraction methods for all multimedia sources are considered in one work package. This strategy guarantees the synergy that can be obtained in quality improvement for processing when using information from different media types. The reasoning unit is considered in a single work package. All developed methods will be linked to the CBR system. They provide a case description for each new case to the CBR system.

Novelty detection has aspects that go beyond CBR. Therefore, the development of the novelty detection unit is assigned to a separate task.

The legal aspects are worked out in a dedicated task. It should run over the whole project with the aim of ensuring that each RTD task meets the legal requirements and of identifying new aspects that arise when developing novel techniques for the analysis of forensic data.

All the proposed methods should be integrated into a single reasoning system. After this has been done, the final evaluation of the system will be done. The final system will be demonstrated to police forces and end users.

The proposed methodology of the work is shown in Figure 1.

Figure 1.

Methodology.

3.1 Video and image enhancement, filtering and assessment

The goal of this package is to improve the quality of an image or video sequence in order to ease classification or recognition tasks and to detect traces of tampering in images without relying on pre-extracted or pre-embedded protective information. Fragile watermarking schemes to protect legal evidence audio and speech data will also be developed.

In many image modalities, such as synthetic aperture radar (SAR) images, passive millimeter wave (PMMW) images, commercial photography and images and videos acquired for security purposes, the captured images and video represent a degraded version of the original scene. The observed degraded image is usually the result of convolving the original image (to be estimated) with a known or unknown blurring function and the addition of noise. The removal of noise and blur from the observation in order to obtain a good estimate of the original image is the goal of deconvolution and blind deconvolution techniques. In this task, we will develop and utilize Bayesian deconvolution and blind deconvolution techniques to improve the quality of the observed images. The improved images will either be used by humans for identification or be the input to classification and recognition methods.
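
As a small illustration of non-blind deconvolution, the sketch below applies Wiener deconvolution with scikit-image under the assumption that the blur kernel (PSF) is known; blind deconvolution would additionally have to estimate the kernel. The kernel size, noise level and balance parameter are illustrative choices.

```python
import numpy as np
from scipy.signal import convolve2d
from skimage import data, restoration

image = data.camera().astype(float) / 255.0
psf = np.ones((5, 5)) / 25.0                                  # assumed 5x5 uniform blur kernel
blurred = convolve2d(image, psf, mode="same")                 # simulate the degraded observation
blurred += 0.01 * np.random.standard_normal(blurred.shape)    # additive noise

# balance trades noise suppression against detail preservation
restored = restoration.wiener(blurred, psf, balance=0.1)
```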

3.1.1 Image and video super resolution

We use the term super resolution (SR) to describe the process of obtaining a high-resolution image or a sequence of high-resolution images from a set of low-resolution (LR) observations. In this task, we will utilize and develop motion-based spatial super resolution techniques as well as motion-free super resolution techniques. In motion-based SR, the observed LR images are under-sampled, and they are acquired either by multiple sensors imaging a single scene or by a single sensor imaging a scene over a period of time. In motion-free SR, images are upsampled by learning the relationship between corresponding low-resolution and high-resolution image patches in a database and combining this learned model with the observed LR image. The improved images will either be used by humans for identification or be the input to classification and recognition methods.
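
A minimal sketch of the motion-free, example-based idea follows: upsampled low-resolution patches are replaced by their nearest high-resolution training patches. The patch size, step, single training image and lack of patch blending are illustrative simplifications.

```python
import numpy as np
from skimage import data, transform, util
from sklearn.neighbors import NearestNeighbors

hr_train = data.camera().astype(float) / 255.0
lr_train = transform.resize(hr_train, (hr_train.shape[0] // 2, hr_train.shape[1] // 2))
lr_up = transform.resize(lr_train, hr_train.shape)            # coarse upsampled version

patch = 5
lr_patches = util.view_as_windows(lr_up, (patch, patch), step=3).reshape(-1, patch * patch)
hr_patches = util.view_as_windows(hr_train, (patch, patch), step=3).reshape(-1, patch * patch)
index = NearestNeighbors(n_neighbors=1).fit(lr_patches)       # learned LR -> HR correspondence

def super_resolve(lr_image):
    """Replace each patch of the upsampled LR image by its nearest HR training patch."""
    up = transform.resize(lr_image, (lr_image.shape[0] * 2, lr_image.shape[1] * 2))
    out = np.zeros_like(up)
    coords = [(i, j)
              for i in range(0, up.shape[0] - patch + 1, patch)
              for j in range(0, up.shape[1] - patch + 1, patch)]
    queries = np.array([up[i:i + patch, j:j + patch].ravel() for i, j in coords])
    _, idx = index.kneighbors(queries)                         # one batched lookup
    for (i, j), k in zip(coords, idx[:, 0]):
        out[i:i + patch, j:j + patch] = hr_patches[k].reshape(patch, patch)
    return out

sr_estimate = super_resolve(lr_train)                          # 2x upscaling of the LR image
```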

3.1.2 Blind methods for detecting image forgery

In order to trust the information extracted from images and videos, it is necessary to make sure that the image or video has been recorded by a camera and that no artifact has been added or object removed. The trustworthiness of images and videos clearly has an essential role in many security areas, including forensic investigation, criminal investigation, surveillance systems and intelligence services. In this task, we will utilize and develop blind methods for detecting image forgery, that is, methods that use only the image content itself to perform the forgery detection task. The methods will try to identify various traces of tampering and detect them separately. The final decision about the forgery will be made by fusing the results of the separate detectors.
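
One classical blind cue is copy-move (region duplication) detection. The sketch below matches ORB keypoints against other keypoints within the same image and reports strongly self-similar pairs at a distance, which hint at pasted regions; the thresholds and the input path are assumptions, and a full detector would cluster and verify these candidates.

```python
import cv2
import numpy as np

def copy_move_candidates(path, min_offset=20, max_ratio=0.75):
    """Return pairs of image locations whose local appearance is suspiciously identical."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    orb = cv2.ORB_create(nfeatures=5000)
    kps, desc = orb.detectAndCompute(img, None)
    if desc is None or len(kps) < 3:
        return []
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    matches = matcher.knnMatch(desc, desc, k=3)        # first hit is the keypoint itself
    suspicious = []
    for m in matches:
        if len(m) < 3:
            continue
        best, second = m[1], m[2]                      # skip the trivial self-match m[0]
        if best.distance < max_ratio * second.distance:
            p1 = np.array(kps[best.queryIdx].pt)
            p2 = np.array(kps[best.trainIdx].pt)
            if np.linalg.norm(p1 - p2) > min_offset:   # ignore near-identical locations
                suspicious.append((tuple(p1), tuple(p2)))
    return suspicious
```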

3.2 Case-based reasoning

In this work package, we will develop novel case-based reasoning methods for a case-based reasoning system that can keep complex multimedia cases, based on their different multimedia features and specific event features, in a case base so that they can be easily retrieved and applied to new situations. Meta-learning methods based on CBR over the proposed multimedia processing chain will be developed in this WP in order to achieve the best processing results. The case-based reasoning system will consist of novel probabilistic and similarity-based methods. It will provide a wide range of novel similarity measures for the different feature types and representations for identification and similarity determination. Methods for the hierarchical organization of the case base will allow very fast retrieval of similar cases. A special taxonomy for similarity determination and measures will be worked out and implemented in the CBR system. It will provide explanation capabilities for similarity, which will help a forensic data analyst identify the right reasoning method for a particular problem. This aspect goes along with the training and education aspect of forensic data analysis. Part of this will be self-contained in the chosen methods and realized by the system.

We will also develop learning methods to include new data into the existing cases and to summarize new and old cases into more general cases applicable to a wider range of tasks for further legal purposes. The lifetime aspect of such a CBR system will be addressed by special case base maintenance methods and by the modularity of the system architecture.
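
A minimal sketch of similarity-based case retrieval is given below, assuming each case has already been reduced to a numeric feature vector; the case base, feature weights and similarity definition are illustrative placeholders, not the measures to be developed in this work package.

```python
import numpy as np

case_base = {
    "case_017": np.array([0.9, 0.1, 0.4, 0.7]),
    "case_052": np.array([0.2, 0.8, 0.5, 0.1]),
    "case_103": np.array([0.85, 0.15, 0.35, 0.6]),
}
weights = np.array([0.4, 0.1, 0.2, 0.3])          # learned or expert-assigned feature weights

def similarity(a, b):
    """Weighted similarity in [0, 1]: 1 minus the weighted, normalized Manhattan distance."""
    return 1.0 - np.sum(weights * np.abs(a - b)) / np.sum(weights)

def retrieve(query, k=2):
    """Return the k most similar stored cases for a new case description."""
    ranked = sorted(case_base.items(), key=lambda kv: similarity(query, kv[1]), reverse=True)
    return ranked[:k]

print(retrieve(np.array([0.88, 0.2, 0.4, 0.65])))
```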

  • Development of the system architecture

    The main architecture of the CBR system will be developed, taking into account the different multimedia data types and data representations. The interface to the preprocessing units and the feature extraction units will be defined. The initial case description that can represent the different multimedia data types and data representations will be developed.

  • Development and implementation of the case base for the different multimedia sources

    The case base that is the heart of a CBR system will be developed and implemented. For the different multimedia-representation, the right database will be chosen as well as the right data structure. The interfaces and the data structure will be defined.

  • Development and implementation of similarity measures for the different feature types and representation

    An overview about different similarity measures for the different media type-representations will be developed. The pros and cons of the similarity measures will be worked out, and novel similarity measures for the respective data types will be developed. Aggregation of similarities of different types will be studied and evaluated.

  • Development and implementation of a taxonomy over the similarity measures

    As an outcome of these tasks, a taxonomy over similarities will be developed. This taxonomy will be represented by a hierarchical concept, and the user will be guided through this taxonomy for his special needs. A conversational strategy will be developed that helps the user to figure out what his needs are.

  • Development and implementation of an indexing structure

    The indexing structure over the different data types will be developed and implemented.

  • Development and implementation of meta-learning methods over image and video processing chain

    The preprocessing and feature extraction methods developed will be evaluated, and it will be decided where meta-learning has to be applied. The architecture and CBR methods for meta-learning will be defined. The meta-learning architecture that fits the CBR system will be developed. The meta-learning algorithm will be implemented and evaluated.

  • Development and implementation of a generalization mechanism over cases, case-classes and higher-order classes

    The methods for meta-learning over case, case-classes and higher-order classes will be defined for the different multimedia data sources. The interaction between these different multimedia-representations will be studied, and methods that can improve the performance will be developed.

  • Development of a learning mechanism over similarities

    A recent learning mechanism will be studied, and based on that a new learning mechanism will be developed for feature weighting of the different data types and deciding about the correct similarity measure. These strategies will be implemented into the CBR system and evaluated.

  • Development and implementation of a life cycle and maintenance function for the case-based reasoning system

    The life cycle of a multimedia CBR system will be studied based on the experience the partners have with different data sources. A life cycle and maintenance function that can take this into account will be developed and implemented.

3.3 Multimedia feature extraction

This work package will investigate, define and evaluate feature extraction methods to detect, describe and relate the multimedia data content relevant to forensic activities. The activities will concentrate on different biometric parameters that characterize individuals in terms of appearance, behavior, voice and handwriting, so as to enable the process of detection and recognition.

Features pertaining to face characteristics, distinctive individual marks, morphometric measurements of the body and gait analysis will be the subject of investigation for videos and images. Similarly, the exploration will concern distinctive features from audio signals for speaker identification and recognition as well as text analysis to extract information pertinent to forensic investigations. Particular emphasis will be put on texts whose language shows characteristics deviating from the standard written form: this will be the case for transcriptions produced by speech recognizers as well as for the language of social media.

Aiming at recognition in the wild, the focus will be on the definition and verification of features that enable detection and recognition in unconstrained conditions and environments. This means that feature invariance to changing conditions and robustness to noise will be two fundamental issues to be tackled. A systematic approach will be used for feature organization. This means that, starting from existing metadata standards for image, video, audio as well as textual data, a formal ontological model will be defined to organize and categorize all the features collected from the pertinent literature as well as the newly defined ones. The resulting feature ontology will standardize feature definition and computation, catalog features and model the multimedia data analysis domain. Such an ontology will be integrated with a library of algorithms for the computation of the features considered, resulting in a toolbox for feature extraction.

  • Feature extraction from images and videos

    This task will focus on the definition and extraction of features to be used in detection, recognition, authentication and tracking of individuals, event analysis and anomaly/novelty detection.

    The work will concentrate on biometric and general appearance features. The field of biologically inspired features will be investigated as well, to verify whether methods that try to mimic the human visual capabilities are able to ensure a better performance. This will be particularly addressed to tackle the difficulties of forensic scenarios where it is not possible to make restrictive assumptions about ambient illumination, subject pose, sensor resolution and compression. In particular, for face recognition in the wild, three-dimensional features will also be explored.

    The approach based on bags of features will be studied to carry out a first ‘skimming off the top’ in large repositories in order to find only relevant objects pertinent to the case at hand. Local descriptors will then be defined to handle intensity, rotation, scale and affine variations. Improvements based on the inclusion of spatial information will be investigated.

  • Feature extraction from audio streams

    This task will focus on the definition and extraction of features for audio streams for the purpose of (i) identification/authentication of individuals and (ii) audio event detection.

    The following two main lines of research will be followed: (i) audio diarization, where the audio stream will first be segmented into speech, nonspeech audio (incl. music) and background noise. Then the speech portions of the audio stream will be segmented by speaker turn and identity. Finally, the speaker identity will be verified or identified (depending on the scenario). (ii) Computation of the saliency of the audio stream using low-level features to identify surprising audio events. The audio events will then be classified into semantic (ontological) categories that will be used in Task 3.4.

    The audio feature extraction package will include generic short-time envelope features (e.g., mel-frequency cepstral coefficients (MFCC), perceptual linear prediction coefficients (PLP) and spectral envelope coefficients (SMAC)) as well as time-domain features; a minimal extraction sketch follows this list. In addition, for speaker identification/verification, modulation spectrum and micro-modulation (AM-FM) features will be employed.

  • Feature extraction from text

    This task will focus on the definition and extraction of features from running texts carrying relevant content for forensic activities. This goal will be pursued by combining stochastic machine learning algorithms with advanced natural language processing techniques in line with the state of the art in the computational linguistics field. Two main lines of research can be envisaged for this task, aimed, respectively, at (a) extracting ontological knowledge from texts to be used in the framework of Task 3.4 for building the feature ontology and (b) recognizing and semantically classifying relevant information in running texts, to be used as features describing individual documents. We will refer to these lines of research as ‘ontology learning’ and ‘feature extraction’, respectively. The planned work will mainly consist in the customization and integration of pre-existing software components available from the consortium partners contributing to this task, which will be specialized to meet the specific needs of the texts in question. In particular, the main customizations will concern the typology of information to be extracted (also including relational information) as well as more challenging research topics such as the automatic analysis of texts representative of so-called noncanonical languages or the development of sophisticated technologies devoted to false witness detection. The final result of this task will include automatically extracted ontological knowledge (to be used as input to Task 3.4) as well as tools for feature extraction from texts, to be possibly included in the toolbox for feature extraction as far as texts are concerned.

  • Feature integration and ontology development

    All the features developed and collected in the other tasks will be precisely defined and formalized following a coherent feature definition model. This will enable the development of an ontological model to accommodate the different classes of features and give them an easily sharable and reusable standard organization. The final aim is to (i) standardize and homogenize the feature terminology, (ii) collect and disseminate structured classes of features and (iii) support the choice and computation of features according to a method-oriented strategy. Moreover, a library of algorithms will be supplied and linked to such an ontology, resulting in a toolbox for feature extraction.
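
As referenced in the audio task above, the following is a minimal sketch of short-time envelope feature (MFCC) extraction with librosa; the input file name is hypothetical, and the resulting frame-level vectors would feed speaker identification/verification models.

```python
import librosa
import numpy as np

signal, sr = librosa.load("interview.wav", sr=16000)        # hypothetical mono recording
mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=13)     # 13 coefficients per frame
delta = librosa.feature.delta(mfcc)                         # temporal derivatives
features = np.vstack([mfcc, delta])                         # (26, n_frames) feature matrix
print(features.shape)
```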

3.4 Text mining

The overall goal of this task is the design and development of methods and techniques for supporting human experts in the analysis of social media textual data. In particular, novel methods will be developed to (a) monitor Twitter in real time and identify potential threats, including individuals and communities of users who are planning illegal activities, and (b) build a dynamic model on Twitter text to forecast upcoming significant events and the emotions of the crowd associated with these events. Different approaches to the analysis of this type of text will be pursued, including natural language processing (NLP) techniques, whose results will eventually be compared and, possibly, combined. The languages dealt with will include Dutch, Italian, Hungarian, English and Bulgarian.

  • Linguistic analysis of Twitter data

    In this phase the tweets will be linguistically analyzed: the text will be segmented into sentences, tokenized, morphologically analyzed and lemmatized. Linguistic preprocessing is needed to extract information from text; however, the linguistic analysis of small-sized texts, such as tweets, is nowadays a challenging task. It is a widely acknowledged fact that NLP systems, typically trained on newswire texts, suffer a drop in accuracy when tested on these kinds of texts. In tweets, punctuation and capitalization are often inconsistent, slang and technical jargon are widely used, and noncanonical syntactic structures frequently occur. This task is aimed at devising and testing domain adaptation methods to allow the NLP tools to achieve reliable results on these types of texts.

  • Keyword extraction

    In the first phase, keywords for early warning indicators of certain selected types of crimes will be developed. Traditional fully automated keyword extraction techniques have been shown to perform poorly on small-sized texts. We will therefore consider semi-automated keyword extraction methods. In particular, we will start from a domain-specific set of keywords for football hooliganism provided by experienced police officers. This collection will unavoidably be incomplete; however, generic keywords in this set can help us zoom in on a subset of the entire dataset. Starting from this manually built set of keywords, we will develop methods to extract other relevant keywords (a minimal sketch follows this list). This will be done by exploring different strategies, also based on NLP techniques. The approaches that generate more reliable results will be used for continuously extending the set of keywords. Some of these methods will also be exploited in the feature extraction process from text carried out in the framework of WP3.

  • Visualization

    In the next phase, the Twitter feeds that remain after applying the early warning indicators should be visualized in an intuitive, easy-to-interpret manner so that police officers can swiftly zoom in on potentially dangerous conversations. We plan to first use self-organizing maps (SOM), which are built using a training algorithm that allows for incorporating user-defined and automatically inferred attribute priorities. The SOM will partition the collection of tweets into risk areas. The user can choose to group tweets based on the person who wrote them, and the map will in this case show the distribution of persons. If the user would like to analyze a person, a Twitter conversation or a group of persons in detail, we intend to provide functionality such that he or she can select an object or a collection of objects of interest and analyze it with formal concept analysis and its temporal variant.

  • Twitter data analysis

    An important step is to identify communities of users in these large amounts of Twitter data. We hereby extend our analysis methods from object-attribute data to object-object data. Fuzzy or probabilistic measures depicting the strength of the relation between individual objects should be used to preprocess the data in order to cope with scalability issues. Based on these distance-related measures, the data are segmented, and strongly related subcommunities are extracted. These subcommunities can be investigated with traditional social network analysis methods, complemented by temporal concept analysis, temporal relational semantic systems, etc., in order to infer their threat level and the role of the actors in the network. To facilitate this phase, we need linguistic methods to extract relational attributes from Twitter data that can complement the keyword attributes.

  • Development of text classification and regression techniques

    Finally, automated classification and regression techniques should be developed to partially automate the suspect identification process. Initially, a large amount of manual labor is needed to extract keywords, identify priorities among attributes, etc. In later phases we aim at automating this process, while the previously discussed visualization methods remain useful for explaining the automatically made decisions as well as for validating them.

  • Forecasting events relevant to legal forces

    To provide more effective results, automatic identification of upcoming threatening events is essential. We will develop methods to identify events unknown beforehand, especially with potential interest to legal forces. We plan to build a dynamic model on Twitter text to forecast the upcoming significant events and emotions of the crowd associated with these events. While there can be many events with a strong presence in social media, some of them would have stronger negative emotions associated with them. These events are candidates that may have criminal nature or significant social consequences.
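
As referenced in the keyword extraction task above, the following is a minimal sketch of semi-automatic keyword expansion: tweets that hit an expert-provided seed keyword are used to surface co-occurring terms by TF-IDF weight, which an analyst can then accept into the keyword set. The seed words and example tweets are invented for illustration.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

seed_keywords = {"hooligans", "riot"}                        # expert-provided seed set
tweets = [
    "hooligans gathering near the stadium before the match",
    "riot police deployed at the east gate",
    "great match today, lovely weather",
]

# keep only tweets that contain at least one seed keyword
hits = [t for t in tweets if seed_keywords & set(t.lower().split())]

vectorizer = TfidfVectorizer(stop_words="english")
tfidf = vectorizer.fit_transform(hits)
scores = np.asarray(tfidf.sum(axis=0)).ravel()
terms = np.array(vectorizer.get_feature_names_out())
candidates = [t for t in terms[np.argsort(scores)[::-1]] if t not in seed_keywords]
print(candidates[:5])                                        # proposed new keywords for review
```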

3.5 Video analysis

The huge number of CCTV systems has increased the importance of video and image evidence in forensic labs. In order to retrieve the frames of interest, human experts usually spend a lot of time examining hours and hours of video sequences. The low quality of the images, due to high compression, bad lighting conditions or video coming from different recording sources multiplexed into a single stream, makes the problem quite difficult to tackle. Another important aspect is the great number of native file formats of CCTV video and the difficulty of creating a working copy of the data through format conversion. This WP will develop novel automatic video processing tools aimed at supporting the expert in selecting the portions of video that might contain the interesting facts he/she is looking for. In particular, techniques for people identification based on face recognition will be devised (a minimal detection sketch follows the task list below). As people may appear in nonfrontal poses, we will exploit the presence of multiple cameras in a given area to track a person and make the identification when the face is in a frontal position.

  • Video sequence analysis

    To reconstruct the dynamics of the event of interest, the expert may need to spend a lot of time examining hours and hours of video sequences. Many factors can make the operation quite difficult in the absence of suitable automatic software: low quality of the images due to high compression, bad lighting conditions or video coming from different recording sources multiplexed into a single stream. In addition, as multiple native file formats for CCTV video are in use, the expert must also produce working copies of the data by normalizing the video format.

    This task aims to develop a tool that automatically separates video sequences coming from different cameras and converts the video sequences into a common format that can be used for further processing steps, e.g., feature extraction.

  • Frame selection

    This task aims at producing a semi-automatic tool to assist the human expert in selecting the most meaningful frames. Due to the huge number of manufacturers operating in the CCTV marketplace, there is a broad range of systems for retrieving and exporting images from compressed video. This task will analyze the most common formats and produce a tool that leverages already available retrieval techniques provided either by the developers of CCTV systems or by the open-source community.

  • Person and object retrieval and identification

    This task aims to develop tools that aid the human expert in looking for individuals or objects that are useful for the investigation. Objects of interest can be, for example, heads, vehicles, license plates, guns, clothes and any other objects that can link a person to the event. The tool will handle the cases of videos with low quality, bad lighting conditions, unfavorable camera/object positions and varying facial expressions.

    The persons/objects with enough quality retrieved from the video will be compared by an automatic procedure to a set of known elements of comparison. The expert will then provide feedback on the comparison results in order to refine the search.

    An important focus of police work is the identification of people for whom an observation order from the public prosecutor’s office or a judge, or an arrest warrant, has been issued. Within this scope, video-surveilled places and facilities should be used. At previously unknown places, mobile video technology should be deployed.

    The aim of this task is to develop methods and procedures for an automatic system for the identification of one or several target people in mobile video recordings based on passport photos or other available pictures. Prototype-based methods will be used, which are able to work on different image representations, such as pixels, features and graphs. By means of case-based learning mechanisms, a model for face recognition under the described application conditions should be learnt automatically. In detail, the following should be developed:

    1. Prototype-based methods and procedures for the identification of a target person or group of people based on one or several prototypes obtained from pictures of the target person or group, without the system having to be trained on several image sequences to estimate likelihoods.

    2. Methods and procedures that make it possible to generate, on site and quickly, a prototype picture from a picture of the person (passport photo or photo from an observation); the prototype should account for aging of the person and different perspectives and compensate for occlusion by clothing, without requiring special facilities of the forensic disciplines.

    3. Methods and procedures for learning to generalize over cases in order to build up the model for face recognition under these application conditions.

  • People reidentification

    This task is aimed at devising techniques for the re-identification of people in videos captured by different cameras. In many outdoor and indoor environments, different cameras are present to monitor a given area (e.g., in a given street, you can find cameras operated by shops, banks, etc., or by the municipal police). The development of reidentification techniques allows tracking a subject that exits from the field of view of a camera and enters into the field of view of another camera placed in the neighborhood.
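
As referenced at the start of this work package, the following is a minimal sketch of frontal-face detection for frame selection using OpenCV’s bundled Haar cascade; identification against passport or reference photos would operate on the detected crops and is not shown. The video file name is hypothetical.

```python
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def frontal_faces(frame_bgr):
    """Return bounding boxes (x, y, w, h) of frontal faces in one video frame."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    return cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5,
                                    minSize=(40, 40))

cap = cv2.VideoCapture("cctv_clip.mp4")            # hypothetical CCTV clip
while True:
    ok, frame = cap.read()
    if not ok:
        break
    boxes = frontal_faces(frame)
    if len(boxes) > 0:
        pass   # keep this frame for identification against passport/reference photos
cap.release()
```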

3.6 Speech and audio processing

A significant portion of the data collected by law enforcement agencies consists of speech, sound and audio files.

Novel methods for speech and audio recognition based on similarity, case-based reasoning, compressive reasoning and compressive sensing, using both temporal and frequency domain information, will be developed. Similarity learning, case generalization and case storage, as well as compressive learning and sensing, will allow the handling of very large amounts (terabytes) of data.

  • Speech and audio representation

    Development of speech and audio representation schemes using spectral, wavelet-scattering and temporal methods such as delta modulation and zero-crossing information. This novel representation will be used in all of the above tasks.

  • Case and similarity-based reasoning-based speech and audio recognition

    Development of ‘query by example,’ keyword and phrase-based retrieval schemes using conventional and structural similarity-based methods capable of part and whole similarity matching. Once the keyword and phrases are detected, analysts can manually process the proposed retrieval results.

  • Compressive reasoning and recognition

    Exemplar-based speech, speaker and audio recognition. The resulting scheme will be computationally efficient, and it will work in the compressed data domain.

3.7 Handwriting recognition

Handwritten documents constitute another important part of the collected data. The objective of this work package is to develop computer vision and pattern recognition methods to identify writers and recognize large volumes of unconstrained handwritten text in order to assist the experts.

Methods will be developed not only for writer identification, which is a very important task in forensic applications, but also for the automatic recognition of text found in documents such as notes and letters written by the subject and of scene text in photos taken in the environment, such as shop labels and billboards.

Image enhancement techniques will be carefully applied to reduce the noise without deforming the original data. As an alternative to character-based systems, word-based systems will be developed to read text ‘in the wild’ using methods inspired by the object detection and recognition literature. Generic image descriptors such as salient points, gradient histograms and line pairs will be considered for describing words. Efficient and effective similarity measures will be developed to match handwritten text in large volumes quickly without losing any important data.

  • Representation of words

    Words in handwritten text will be described with advanced visual features used in generic object recognition. Novel descriptors based on contours, salient points and shapes will be generated.

  • Similarity-based word matching

    Development of ‘query by example,’ subword, keyword and phrase-based retrieval schemes using conventional and structural similarity-based methods capable of part and whole similarity matching.

  • Writer identification

    Documents will be preprocessed for enhancement and noise removal. Exemplar-based word and subword matching will be used to identify and verify the writers. Experts will be provided with a set of results and will be asked for feedback to be used in an active learning scheme.

  • Automatic text recognition

    Development of both character and word-based recognition schemes. Manual effort for labeling data during training will be reduced by learning the relationships between available handwritten and printed text pairs. Scene text will also be recognized.

3.8 Novelty detection

The objective is to develop a CBR-based novelty detection method based on a combination of statistical and similarity-based methods able to handle the different multimedia data types.

The method will perform novelty detection and handle the novel cases for immediate reasoning and new model build-up, and it should consider the incremental nature of the data. It will update the models incrementally based on MML (minimum message length) and MDL (minimum description length) methods.
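
A minimal sketch of the incremental statistical side is given below: a running Gaussian model of nominal feature vectors, updated case by case, with a Mahalanobis-distance novelty test. The threshold value is an assumption, and MML/MDL-based model selection is not shown.

```python
import numpy as np

class IncrementalNoveltyDetector:
    def __init__(self, dim, threshold=3.0):
        self.n = 0
        self.mean = np.zeros(dim)
        self.m2 = np.zeros((dim, dim))        # running scatter matrix
        self.threshold = threshold

    def update(self, x):
        """Fold one nominal case into the running mean/covariance (Welford-style)."""
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += np.outer(delta, x - self.mean)

    def is_novel(self, x):
        """Flag a case whose Mahalanobis distance to the nominal model exceeds the threshold."""
        if self.n < 10:                        # not enough data to judge yet
            return False
        cov = self.m2 / (self.n - 1) + 1e-6 * np.eye(len(x))
        d = x - self.mean
        return float(np.sqrt(d @ np.linalg.solve(cov, d))) > self.threshold

rng = np.random.default_rng(1)
det = IncrementalNoveltyDetector(dim=4)
for case in rng.normal(0, 1, (200, 4)):
    det.update(case)                           # nominal cases arriving over time
print(det.is_novel(rng.normal(0, 1, 4)), det.is_novel(np.array([8.0, 8.0, 8.0, 8.0])))
```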

  • State of the art of statistical novelty detection methods applicable to multimedia data types.

    Different statistical model methods will be studied and evaluated for the different multimedia data types. The methods that are applicable to the data types will be selected.

  • Model development for novelty detection classification for multimedia data types

    Taking into account the outcome of Task 8.1, new methods for the different multimedia data types will be developed and implemented.

  • Incremental model learning mechanism based on MDL or MML

    The incremental learning mechanism for the updating of the models and feature description should be developed based on statistical learning methods such as MML or MDL.

  • Incremental data collection mechanism and database structure

    Development of the database structure for the different multimedia data types.

    The incremental data collection mechanism will be developed, taking into account the database structure.

  • Task management mechanism between the model-based and the similarity-based unit

    The task management mechanism that can handle the interaction and task division between the model-based module and the similarity-based module will be developed, implemented and evaluated.

  • Integration into the CBR unit

    The developed novelty detection unit will be integrated into the CBR unit and tested.

3.9 Legal aspects

To provide a clear picture of the current legal framework regulating the process of gathering, processing, analyzing and integrating multimedia data for security and judicial purposes at the European (EU Directives and Regulations) and national level.

To focus the investigation on legal issues related to the use of sensitive data for forensic purposes through the analysis of case studies and of EU (ECJ) and national judgments.

To provide a framework of standards, quality indicators and approaches for the preservation and validity assessment of digital evidence for forensic purposes.

To support partners in sorting out potential ethical questions, by supporting the consulting process of the advisory body in solving questions specific to ethics, issues concerning sensitive data processing and, generally, rules limiting the use of personal data extracted by massive data processing techniques:

  • This task aims at providing a deep survey of the legal sources at the national and European level. The survey will outline the state of the art in EU law, namely, the first Data Protection Directive (Directive 95/46/EC), and will investigate the important role played by the European Court of Justice (ECJ) in its interpretation. The emphasis is on the principle of proportionality (the key concept in the ECJ’s judgments), which requires that every specific instance of processing of personal data be necessary for its concrete purpose. The so-called proportionality test has three components, which involve an assessment of a measure’s suitability, necessity and proportionality stricto sensu. References to the test could be useful for evaluating the legal effects of the implementation of the tools developed by the project. The national legislation of the eight countries participating in the Consortium (FR, DE, IT, BG, GR, SP, TK and NL) will be collected, analyzed and compared to provide a complete picture of the level of harmonization among European countries. The impact of the ‘proportionality test’ on national judiciaries will be assessed.

  • The second step makes specific reference to the new European rules under discussion (proposal for a Directive-COM-2012-2110 and Sec-2012-2072 final) and their accompanying documents (the Impact Assessment by Commission staff working paper to the Regulation of the European Parliament and of the Council on the protection of individuals with regard to the processing of personal data and on the free movement of such data and to the Directive 72). The implementation process of the new rules will be monitored, and their impact (especially the interaction between the Regulation and the Directive) will be analyzed. Actual changes to the regulatory framework in case of the enactment of the new legislation during the project lifetime will be explained to partners through training activities.

  • Evaluation of digital evidence: this task aims at explaining the concept of legal validity in the light of digital evidence and at pointing out the criteria (e.g. authenticity) on which the legal validity of evidence can be assessed. Task 2 will be implemented in two steps and in strong connection with WP 2, by reference to the classification of typologies of data, formats and specific features.

  • Analysis of technical and legal requirements (e.g. ISO Guide 25) for acquiring, processing, storing and preserving digital evidence, based on a selection of guidelines and best practices from relevant judicial cases and from the significant literature (e.g., the ENFSI community, http://www.enfsi.org/).

  • Analysis of the communication process between the judge, who is the final evaluator ('free conviction of the court'), and the expert, who plays the role of a 'mediator' expected to provide the basic elements (time, location, authenticity, etc.) on which the evidence is grounded. The cognitive background, the communicative interaction, the semantic/terminological mapping and the reasoning processes will be modeled in an integrated picture; on these topics, Task 9.2 will publish a report that aims at providing the scientific community with an updated and original approach to the methodology for quality assessment of digital evidence.

3.10 Evaluation

To integrate and test the different components in an application that can easily be deployed in a computing infrastructure. This task aims to perform integration testing of the components and to build an easily deployable application.

To develop case studies that will illustrate the features of the system. The case studies will illustrate the need for video and image enhancement, filtering and assessment, case-based reasoning, multimedia feature extraction, video analysis, handwriting recognition and novelty detection, and will illustrate how the system complies with legal requirements.

To evaluate the developed methods in the integrated software solution and to demonstrate the applicability of the solutions.

Risk mapping of the territory, based on the results of the scenarios, to determine occurrences of potentially illegal behavior that require the activation of protection and security measures, both in terms of the allocation of armed guards and the installation of dedicated electronic equipment;

Support for the decision on whether to carry out an intervention and which mode to adopt, on the basis of the evolution of a given behavioral scenario.

  • Identification of a large class of potentially or actually illegal behaviors.

  • Registration and selective search of details from an offense scenario

    The solution therefore needs to be evaluated against real-life situations.

  • Integration and testing

    This task will integrate the components produced by the different tasks. This will require integrating the video and image components, the case-based reasoning components, the multimedia feature extraction component, the social media text analytics component, the video analysis and face detection component, the speech and audio processing component, the handwriting recognition component and the novelty detection component. The system build will then undergo integration and testing to ensure that the architectural, functional and nonfunctional requirements are satisfied. The resulting build will be deployable on a cloud computing infrastructure so that the scalability requirements can be tested and shown to be met.

  • Case applications

    This task will develop several case studies that illustrate the use of the system on real but anonymized data from the law enforcement agencies and end-user companies. These case studies will illustrate the use of video and image enhancement, filtering and assessment (PMMW images), case-based reasoning (data provided by all data providers), multimedia feature extraction (data provided by all data providers), video analysis, handwriting recognition, text analysis and novelty detection for the forensic analysis of multimedia data (data provided by all data providers). The case studies will also show how the system complies with legal requirements. The case studies will be incrementally developed through the different project iterations.


4. Expected results

As expected results, we will have the following:

  • Novel multimedia data acquisition and automatic analysis methods for forensic multimedia data such as images and videos, text, handwriting, speech and audio signals and social media data

  • A novel toolkit for the automatic and semi-automatic analysis and interpretation of forensic multimedia data, comprising, among other components, image enhancement algorithms

  • A forensic case base in which cases and generalized cases are stored for fast retrieval and reasoning

  • A learning unit and a novelty detection unit for dealing with novel and formerly unseen data and situations and for detecting new tasks for standardization

  • New standards and a new methodology for the analysis of forensic multimedia data.

The toolkit will be used as a benchmark for testing instruments and rules involving technical skills, operators and courts.

The scenario: the achievement of an effective set of technical solutions and of (law-compliant) procedural standards is a valuable goal for the 'global society', where cultural interaction, the de-territorialization of social behaviors and the interdependency of phenomena (e.g. environment, health, immigration, crime prevention and the fight against terrorism) require law makers and courts to face the evolution of social phenomena and their legal effects within a pan-European harmonized area of justice. European judges tend to adjust the national legal framework by referring to common and shared principles, thus invoking constitutional rules as higher principles to which case solutions can be anchored. The adoption of standards of conduct in judicial procedures no longer seems to be a questionable issue but rather an undeniable fact at the center of a vivid discussion within the international community of jurists, scholars and courts. In a field of growing relevance, like digital evidence in the courts, the project will provide practices of use (cases, tests) and guidelines.

Knowing the procedures, understanding the purposes, and assessing the results.

When touching the most sensitive areas of 'public rights', like freedom, security and equality, citizens must be assured that their fundamental rights are secured and that state actions are directed toward their protection against crime at the minimum cost to their freedom. In the field of data mining, the question becomes, beyond the regulatory aspects, a matter of ethics. Here the question is not simply what type of data is collected and whether it is relevant but also how it is collected and by whom. That securely de-identified data can be collected without consent, provided there is a legitimate purpose, is a clear argument; but law enforcement agencies and courts still have to legitimize the purpose in a way that citizens can understand. The project will produce publicly available reports on the technologies and explain their use and application by means of transparent guidelines.

The business-oriented benefits of this project take the form of new techniques and tools for the analysis of forensic multimedia data that can be marketed as tool boxes or single software solutions for the specific tasks described in the proposal. It will make a marked improvement on the solutions currently in use by the companies involved in the proposed work and will open new markets for those that are not currently involved in forensics. It is also foreseen to establish new enterprises that market and further develop the proposed software solutions in the high-technology field of security. A special marketing and services entity that will advertise the tools in the security field and among police forces will also be established.

End-users will benefit from the more effective analysis of forensic data based on the standards and methodology.


5. Conclusions

With this chapter, we finish our work on multimedia forensic data analysis, which was started in Part I of Forensic Multimedia Data Analysis [1].

Forensic investigations on multimedia evidence usually develop along four different steps: analysis, selection, evaluation and comparison. During the analysis step, technicians typically look at huge amounts of different multimedia data (e.g. hours of video or audio recordings, pages and pages of text, hundreds and hundreds of pictures) to reconstruct the dynamics of the event and collect any piece of relevant information. This step obviously requires a lot of time, and many factors can make it difficult, among which data heterogeneity, quality and quantity are the most relevant. Afterwards, during the selection step, technicians select and acquire the most meaningful pieces of information from the different multimedia data (e.g., frames from videos, audio fragments and documents). Then, in the evaluation step, they look for relevant elements in the selected data, which will be further investigated in the comparison step. They can select heads, vehicles, license plates, guns, sentences, sounds and all other elements that can link a person to the event. The main problems are the low quality of media data due to high compression, adverse environmental conditions (e.g., noise and bad lighting conditions), camera/object position and facial expressions. Finally, during the comparison step, technicians place the extracted elements side by side with a known element of comparison. From the comparison of general and particular characteristics, the operators give a level of similarity. In forensic applications, the use of automatic pattern recognition systems gives poor performance because of the high variability of the data recording conditions. On the other hand, human perception is a great pattern recognition system but is characterized by high subjectivity and unknown reproducibility and performance.

In this chapter, we propose to develop a toolkit of methods and instruments that will be able to support analysts along all these steps, strongly reducing human intervention. First of all, it will include instruments to process different kinds of media data and, possibly, correlate them. This will obviously reduce the time spent to find the correct instruments for processing the medium at hand. Furthermore, it comprises preprocessing tools that alleviate, by filtering and enhancement, the problem of low data quality. In particular, for image and video data, a great help will come from super-resolution methods that will maximize the information contained in low-resolution images or videos (e.g., foster the process of face reconstruction and recognition from blurred images). This feature will greatly support all the subsequent steps.

Semi-automatic tools will be included to assist human experts in selecting the most meaningful pieces of data. In particular, methods for the selection of frames from videos, images from large databases, keywords from text documents and pieces from audio signals will be developed for a first skimming of huge amounts of data according to criteria specified by the users. To this end, a great advantage will come from the organization of the feature extraction methods, which will also allow users to relate different types of media and operate on them simultaneously, when possible.

For evaluation and comparison, the toolkit will comprise advanced (semi-)automated instruments for processing the different media so as to allow person and object retrieval, identification and recognition, writer identification, automatic handwriting recognition, and speech and speaker recognition. All these methods will address the problem of recognition in the wild, going strongly beyond the current state of the art. Using the toolkit will ensure the reproducibility of the analysis and foster operators' objectivity.

Finally, the case-based approach will ensure that the knowledge acquired during each investigation will be suitably summarized, generalized and stored so as to be profitably reused in other investigations.

Taken as a whole, the toolkit will dramatically reduce the effort spent by operators on tedious and time-consuming tasks, such as retrieving and selecting multimedia data, allowing them to focus on the more important investigative work. Currently, there is no other toolkit on the market that addresses the analysis of different types of media and has such a broad range of applications. Usually, different and non-integrated solutions are developed to tackle specific problems on single-modality data.

A fundamental concern in forensic investigation is the legal validity of evidence. We will survey the legal frameworks at the national and European level in depth, thus obtaining a clear picture of the legal constraints governing data extraction, integration and use. Criteria and rules to evaluate digital evidence will be investigated; standards for analysis, production and usability will be acquired and, if necessary, extended. The results produced by the toolkit will then be usable in different national and international courts. In particular, the following properties will be ensured:

  • Validity of data as evidence, guaranteed by the evidence usability criteria

  • Objectivity of the data analysis, ensured by the use of well-documented and explained mathematical methods

  • Traceability of the methods applied, obtained by logging all the tools selected and applied to each type of media (a minimal logging sketch is given after this list)

  • Reproducibility of the investigation process
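
As an illustration of the traceability and reproducibility requirements above, the following minimal sketch logs every tool application to an append-only audit trail; the file layout and field names are assumptions rather than the toolkit's actual format.

```python
# Minimal sketch of traceability logging: every tool applied to a media
# item is recorded with its parameters and content hashes, so the whole
# processing chain can be reproduced. File names and fields are assumptions.
import hashlib
import json
import time

def sha256_of(path: str) -> str:
    """Return the SHA-256 digest of a file's content."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def log_tool_application(log_path: str, media_in: str, media_out: str,
                         tool: str, parameters: dict) -> None:
    """Append one processing step to the audit trail as a JSON line."""
    entry = {
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "tool": tool,
        "parameters": parameters,
        "input": {"file": media_in, "sha256": sha256_of(media_in)},
        "output": {"file": media_out, "sha256": sha256_of(media_out)},
    }
    with open(log_path, "a") as log:   # append-only audit trail
        log.write(json.dumps(entry) + "\n")
```

Each entry pairs input and output content hashes with the tool's parameters, so an examiner can later verify that the documented processing chain actually produced the presented result and can repeat it step by step.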

Standardization will be particularly promoted by two main outcomes:

  • The case base, which will foster the spreading of similar procedures and protocols, since successful solutions will be stored and efficiently reused to support novel cases

  • The feature ontological model, which will enable reproducibility and shareability of the features that can be extracted from multimedia data in different scenarios (an illustrative sketch follows this list)
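
To make the idea of the feature ontological model more concrete, the following sketch shows one possible in-memory representation of feature concepts; the concept names, fields and example entries are purely illustrative assumptions.

```python
# Illustrative sketch of feature ontological model entries; the concepts and
# fields are assumptions used to show how extracted features could be
# described in a shareable, reproducible way.
from dataclasses import dataclass, field
from typing import List

@dataclass
class FeatureConcept:
    name: str                      # e.g., "face_width_height_ratio"
    parent: str                    # broader concept, e.g., "geometric_feature"
    media_types: List[str]         # media the feature applies to
    extraction_method: str         # reference to the implementing tool
    unit: str = "dimensionless"
    synonyms: List[str] = field(default_factory=list)

ontology = [
    FeatureConcept("face_width_height_ratio", "geometric_feature",
                   ["image", "video"], "face_analysis.ratio_v1"),
    FeatureConcept("stroke_slant_angle", "handwriting_feature",
                   ["document_image"], "handwriting.slant_v1", unit="degrees"),
]
```

Describing each feature by its broader concept, the media types it applies to and a reference to the extracting tool is what would allow features to be shared and re-extracted reproducibly across different scenarios.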

ICT solutions such as our proposed toolkit add great value to forensic activities, since they enable analysts to obtain a sound identification, preservation, recovery and presentation of facts and opinions pertinent to an investigation. The awareness of this capability has been spreading in recent years, and several research initiatives and industries have been focusing on forensic informatics.

References

  1. Perner P. Novel Methods for Forensic Multimedia Data Analysis: Part I
  2. Wang A. The Shazam music recognition service. Communications of the ACM. 2006;49(8):44-48
  3. Morris RN. Forensic Handwriting Identification: Fundamental Concepts and Principles. San Diego, USA: Academic Press; 2000
  4. Franke K, Schomaker L, Veenhuis C, Taubenheim C, Guyon I, Vuurpijl L, et al. WANDA: A generic framework applied in forensic handwriting analysis and writer identification. In: Proceedings of the 3rd International Conference on Hybrid Intelligent Systems. 2003. pp. 927-938
  5. Srihari SN, Leedham G. A survey of computer methods in forensic document examination. In: Proceedings of the 11th Conference of the International Graphonomics Society. 2003. p. 279
  6. Schomaker L. Advances in writer identification and verification. In: Ninth International Conference on Document Analysis and Recognition (ICDAR 2007), Vol. 2. Parana, Brazil: IEEE; 2007. pp. 1268-1273
  7. Franke K, Koeppen MK. A framework for document pre-processing in forensic handwriting analysis. In: IWFHR00; 2000. pp. 73-81
  8. Srihari SN, Zhang B, Tomai C, Lee S, Shi Z, Shin Y-C. A system for handwriting matching and recognition. In: Proceedings of the Symposium on Document Image Understanding Technology. 2003. pp. 67-75
  9. Niels R, Vuurpijl L. Using dynamic time warping for intuitive handwriting recognition. In: Proceedings of IGS2005. 2005. pp. 217-221
  10. Niels R, Vuurpijl L, Schomaker L. Automatic allograph matching in forensic writer identification. International Journal of Pattern Recognition and Artificial Intelligence. 2007;21(1):61-81
  11. Pervouchine V, Leedham G. Extraction and analysis of forensic document examiner features used for writer identification. Pattern Recognition. 2007;40(3):1004-1013
  12. Madhvanath S, Govindaraju V. The role of holistic paradigms in handwritten word recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence. 2001;23(2):149-164
  13. Rath TM, Manmatha R. Word spotting for historical documents. International Journal on Document Analysis and Recognition. 2007;9:139-152
  14. Rodriguez-Serrano JA, Perronnin F. Handwritten word-spotting using hidden Markov models and universal vocabularies. Pattern Recognition. 2009;42(9):2106-2116
  15. Rothfeder J, Manmatha R, Rath T. Aligning transcripts to automatically segmented handwritten manuscripts. In: Proceedings of the Conference on Document Analysis Systems. Vol. 3872. 2006. pp. 84-95
  16. Leydier Y, Lebourgeois F, Emptoz H. Text search for medieval manuscript images. Pattern Recognition. 2007;40:3552-3567
  17. Wang K, Belongie S. Word spotting in the wild. In: European Conference on Computer Vision (ECCV), Heraklion, Crete. 2010
  18. Plamondon R, Srihari SN. On-line and off-line handwriting recognition: A comprehensive survey. IEEE Transactions on Pattern Analysis and Machine Intelligence. 2000;22(1):63-84
  19. Adamek T, O'Connor NE, Smeaton AF. Word matching using single closed contours for indexing handwritten historical documents. International Journal on Document Analysis and Recognition. 2007;9:153-165
  20. Ferrari V, Fevrier L, Jurie F, Schmid C. Groups of adjacent contour segments for object detection. IEEE Transactions on Pattern Analysis and Machine Intelligence. 2008;30(1):36-51
  21. Alahi A, Ortiz R, Vandergheynst P. FREAK: Fast retina keypoint. In: CVPR. 2012. pp. 510-517
  22. Bay H, Ess A, Tuytelaars T, Van Gool L. Speeded up robust features (SURF). Computer Vision and Image Understanding. 2008;110(3):346-359
  23. Ataer E, Duygulu P. Retrieval of Ottoman documents. In: Proceedings of the 8th ACM International Workshop on Multimedia Information Retrieval. 2006. pp. 155-162
  24. Ataer E, Duygulu P. Matching Ottoman words: An image retrieval approach to historical document indexing. In: Proceedings of the 6th ACM International Conference on Image and Video Retrieval. 2007. pp. 341-347
  25. Lowe DG. Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision. 2004;60(2):91-110
  26. Belongie S, Malik J, Puzicha J. Shape matching and object recognition using shape contexts. IEEE Transactions on Pattern Analysis and Machine Intelligence. 2002;24(4):509-522
  27. Govindaraju V, Cao H, Bhardwaj A. Handwritten document retrieval strategies. In: AND '09: Proceedings of the Third Workshop on Analytics for Noisy Unstructured Text Data, Barcelona, Spain. 2009. pp. 3-7
  28. Şaykol E, Güdükbay U, Ulusoy Ö. A database model for querying visual surveillance by integrating semantic and low-level features. In: Candan KS, Celentano A, editors. Proceedings of the 11th International Workshop on Multimedia Information Systems (MIS'05). LNCS. Vol. 3665. Heidelberg: Springer; 2005. pp. 163-176
  29. Tax DMJ, Juszczak P. Kernel whitening for one-class classification. International Journal of Pattern Recognition and Artificial Intelligence. 2003;17(3):430-445
  30. Markou M, Singh S. Novelty detection: A review - Part 1: Statistical approaches. Signal Processing. 2003;83(12):2481-2497
  31. Scholkopf B, Platt JC, Shawe-Taylor J, Smola AJ. Estimating the support of a high-dimensional distribution. Neural Computation. 2001;13:1443-1471
  32. Markou M, Singh S. A neural network-based novelty detector for image sequence analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence. 2006;28(10):1664-1677
  33. Weinshall D, Zweig A, Hermansky H, Kombrink S, Ohl FW, Anemuller J, et al. Beyond novelty detection: Incongruent events, when general and specific classifiers disagree. IEEE Transactions on Pattern Analysis and Machine Intelligence. 2012;34(10):1886-1901
  34. Perner P. Concepts for novelty detection and handling based on a case-based reasoning processing schema. Engineering Applications of Artificial Intelligence. 2009;22(1):86-91
  35. Şaykol E, Güdükbay U, Ulusoy Ö. Scenario-based query processing for video-surveillance archives. Engineering Applications of Artificial Intelligence. 2010;23(3):331-345
  36. Şaykol E, Baştan M, Güdükbay U, Ulusoy Ö. Keyframe labeling technique for surveillance event classification. Optical Engineering. 2010;49(11): Article no. 117203, 12 p
  37. Marsland S. Density level detection is classification. Neural Computing Surveys. 2002;3:1-39
  38. Chandola V, Banerjee A, Kumar V. Anomaly detection: A survey. ACM Computing Surveys. 2009;41(3):1-15
  39. Can EF, Duygulu P. A line based representation to match the words in historical manuscripts. Pattern Recognition Letters. 2011;32(8):1081-1222
  40. Markou M, Singh S. Novelty detection: A review - Part 2: Neural network based approaches. Signal Processing. 2003;83(12):2499-2521
  41. Zhang Y, Callan J, Minka T. Novelty and redundancy detection in adaptive filtering. In: Proceedings of the 25th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval. 2002. pp. 81-88
  42. Zhang J, Ghahramani Z, Yang Y. A probabilistic model for online document clustering with application to novelty detection. In: Advances in Neural Information Processing Systems 17, NIPS 2004. 2004
  43. European Legal Sources. Available from: http://ec.europa.eu/justice/newsroom/data-protection/news/120125_en.htm
  44. Jones KJ, Bejtlich R, Rose CW, Farmer D, Venema W. The Computer Forensics Library: File System Forensic Analysis, Real Digital Forensics, Forensic Discovery. Addison Wesley; 2005
  45. Casey E. Digital Evidence and Computer Crime: Forensic Science, Computers and the Internet. Waltham, MA: Academic Press; 2011
  46. Jones KJ, Bejtlich R, Rose CW. Real Digital Forensics. Pearson Education; 2005
  47. Rochwerger B, Breitgand D, Epstein A, Hadas D, Loy I, Nagin K, et al. Reservoir—When one cloud is not enough. IEEE Computer. 2011;44(3):44-51
  48. Yang H-C, Dasdan A, Hsiao R-L, Parker DS. Map-reduce-merge: Simplified relational data processing on large clusters. In: Proceedings of the 2007 ACM SIGMOD International Conference on Management of Data (SIGMOD '07). New York, NY, USA: ACM; 2007. pp. 1029-1040. DOI: 10.1145/1247480.1247602

Notes

  • Directive 95/46/EC on the protection of individuals with regard to the processing of personal data and on the free movement of such data, OJ L 281, 23.11.1995, p. 31.
  • Two public consultations have been launched on the data protection reform: one from July to December 2009 (http://ec.europa.eu/justice/news/consulting_public/news_consulting_0003_en.htm) and a second one from November 2010 till January 2011 (http://ec.europa.eu/justice/news/consulting_public/news_consulting_0006_en.htm).
  • COM(2010)609.
  • The Impact Assessment SEC(2012)72.
  • See p. 4 of the Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions, 'Safeguarding Privacy in a Connected World: A European Data Protection Framework for the 21st Century', Brussels, 25.1.2012, COM(2012) 9 final.
  • Brussels, 25.1.2012, COM(2012) 11 final, 2012/0011 (COD), Proposal for a Regulation of the European Parliament and of the Council on the protection of individuals with regard to the processing of personal data and on the free movement of such data (General Data Protection Regulation) {SEC(2012) 72 final} {SEC(2012) 73 final}. The Regulation also makes a limited number of technical adjustments to the e-Privacy Directive (Directive 2002/58/EC as last amended by Directive 2009/136/EC, OJ L 337, 18.12.2009, p. 11) to take account of the transformation of Directive 95/46/EC into a Regulation.
  • COM(2012) 10: DIRECTIVE OF THE EUROPEAN PARLIAMENT AND OF THE COUNCIL on the protection of individuals with regard to the processing of personal data by competent authorities for the purposes of prevention, investigation, detection or prosecution of criminal offenses or the execution of criminal penalties, and the free movement of such data.
  • Framework Decision 2008/977/JHA of 27 November 2008 on the protection of personal data processed in the framework of police and judicial cooperation in criminal matters, OJ L 350, 30.12.2008, p. 60.
  • COM 2012,10, Sections 3 and 3.2.
