Open access peer-reviewed Edited Volume

Linked Open Data - Applications, Trends and Future Developments

Kingsley Okoye

Tecnologico de Monterrey

Covering

semantic annotation, data labeling, knowledge discovery, big data, property restriction, web ontology language, semantic web search, process modeling, process mapping, business process management, user applications, educational innovation


About the book

Nowadays, modern tools and techniques for the collection and analysis of data in all fields of science and technology are becoming more complex. This growing complexity is evidenced by the need for a more generalized or precise description (integration) of data sources and formats that allows for flexible exploration of the different data types. The challenge has been how to create systems capable of providing an understandable format for different datasets, as well as making the derived standards explicable across the different platforms. One technology that has proved indispensable in this area over the past few decades is the Linked Open Data (LOD) cloud. The LOD cloud consists of a number of machine-readable datasets with Resource Description Framework (RDF) triples that describe data classes and their properties. Moreover, early research indicates that one of the problems with existing data and information processing systems is the need not just to represent the data (or information) in formats that humans can easily understand, but also to build intelligent systems that can process the information they contain or support; in other words, machine-understandable systems. By machine-understandable systems, we mean that the extracted information or models are either semantically labeled (annotated) to ease the analysis process, or represented in a formal structure (an ontology) that allows a computer (the reasoning engine) to infer new facts by making use of the underlying relations.
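To make the idea of machine-understandable data concrete, the following minimal sketch (not taken from the book) builds a tiny RDF graph with the Python rdflib library and derives a fact that is never stated explicitly by following an rdfs:subClassOf relation with a SPARQL property path. The namespace, class names, and query are illustrative assumptions, and the property-path query merely stands in for a full reasoning engine.

# A minimal, self-contained sketch of RDF triples and lightweight inference
# using the Python rdflib library. The namespace, class names, and query
# below are illustrative assumptions, not examples from the book.
from rdflib import Graph, Literal, Namespace, RDF, RDFS

EX = Namespace("http://example.org/")
g = Graph()
g.bind("ex", EX)

# Describe data classes and their properties as machine-readable RDF triples.
g.add((EX.Student, RDFS.subClassOf, EX.Person))
g.add((EX.alice, RDF.type, EX.Student))
g.add((EX.alice, EX.enrolledIn, EX.DataScienceCourse))
g.add((EX.alice, RDFS.label, Literal("Alice")))

# Derive a fact that is never stated explicitly (alice is a Person) by
# following the subclass relation with a SPARQL 1.1 property path; a full
# reasoning engine would materialize such inferences automatically.
query = """
SELECT ?person WHERE {
    ?person a/rdfs:subClassOf* ex:Person .
}
"""
for row in g.query(query, initNs={"ex": EX, "rdfs": RDFS}):
    print(row.person)  # -> http://example.org/alice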
The main idea for any data or information processing system that aggregates data or computes the hierarchy of various process elements is that its outputs should be not only machine-readable but also machine-understandable. Moreover, an adequate knowledge-based system is one that is understandable, on the one hand, by people and, on the other hand, by machines.
As devices become smarter and produce data about themselves, it will become increasingly important for scientists to take advantage of more powerful tools and/or data integration techniques that help provide a common standard for information dissemination across the different platforms. To this end, the content of this book shows that technologies such as the semantic web, machine learning, deep learning, natural language processing, and learning analytics, which encompass the wider spectrum of Linked Open Data (LOD), are of paramount importance. The work therefore presents two main drivers for Linked Open Data technologies: (i) encoding knowledge about specific data and process domains, and (ii) advanced reasoning and analysis of big data at a more conceptual level.


This book intends to provide the reader with a comprehensive overview of the current state of the art in Linked Open Data and the benefits of its methods, in particular the semantics-aware techniques that exploit the knowledge kept in (big) data to improve data reasoning (big analysis) beyond the possibilities offered by most traditional data mining techniques.

Publishing process

Book initiated and editor appointed

Date completed: October 14th 2019

Applications to edit the book are assessed and a suitable editor is selected, at which point the process begins.

Chapter proposals submitted and reviewed

Deadline Extended: Open for Submissions

Potential authors submit chapter proposals ready for review by the academic editor and our publishing review team.

Approved chapters written in full and submitted

Deadline for full chapters: May 2nd 2020

Once approved by the academic editor and publishing review team, chapters are written and submitted according to pre-agreed parameters.

Full chapters peer reviewed

Review results due: July 21st 2020

Full chapter manuscripts are screened for plagiarism and undergo a Main Editor Peer Review. Results are sent to authors within 30 days of submission, with suggestions for rounds of revisions.

Book compiled, published and promoted

Expected publication date: September 19th 2020

All chapters are copy-checked and typeset before being published. IntechOpen regularly submits its books to major databases for evaluation and coverage, including the Clarivate Analytics Book Citation Index in the Web of Science™ Core Collection. Other discipline-specific databases are also targeted, such as Web of Science's BIOSIS Previews.

About the editor

Kingsley Okoye

Tecnologico de Monterrey

Kingsley Okoye received his Ph.D. in Software Engineering from the School of Architecture, Computing and Engineering, College of Arts, Technologies and Innovation, University of East London, UK, in 2017. He also completed a Master's degree in Technology Management in 2011 and a Bachelor's degree in Computer Science in 2007. He is a member (MIET) of the Institution of Engineering and Technology, UK, and a Graduate Member of the Institute of Electrical and Electronics Engineers (IEEE). He is a devoted researcher in industry and academia across operational, hardware, and software fields of computing, in areas such as Data Science, Machine Learning, Artificial Intelligence, Big Data and Advanced Analytics, Software Development and Programming, and Business Process Management. Kingsley has therefore had the opportunity to conduct case studies and work in interdisciplinary and cross-cultural teams across various business and academic units that serve multiple industries, including serving as a software programming lab tutor for undergraduate students. He also serves as an editorial board member and reviewer for reputable journals and conferences and has contributed to research and project outcomes by assessing and evaluating their impact on the scientific and industrial communities. It is Kingsley's personal mission to foster sustainable technical research and provide solutions through critical thinking, creative problem solving, and cross-functional collaboration. He has served as a principal organizer of, and participant in, special sessions, workshops, and presentations on research methods and statistical analysis topics at several conferences. The outcomes of his research have been published as journal articles, a book, book chapters, and conference proceedings with high-index and reputable journals, publishers, and conferences in the areas of Computing and Educational Innovation. His research interests include Process Mining, Business Process Modelling and Automation, Learning Analytics and Systems, Semantic Web Technologies, Knowledge Management, Big Data Analysis and Process Querying, Internet Applications, and Ontology. Kingsley is a Data Architect in the Writing Lab of Tecnologico de Monterrey. He is also a Member of the Machine Intelligence Research Lab (MIRLabs), USA, and a Member of the IEEE SMCS Technical Committee on Soft Computing.


Introducing your Author Service Manager

Ms. Jasna Bozic

As an Author Service Manager, my responsibilities include monitoring and facilitating all publishing activities for authors and editors. From chapter submission and review, through approval, revision, copyediting, and design, to final publication, I work closely with authors and editors to ensure a simple and easy publishing process. I maintain constant and effective communication with authors, editors, and reviewers, which allows for a level of personal support that enables contributors to fully commit and concentrate on the chapters they are writing, editing, or reviewing. I assist authors in the preparation of their full chapter submissions, track important deadlines, and ensure they are met. I help to coordinate internal processes such as linguistic review, and monitor the technical aspects of the process. As an ASM I am also involved in the acquisition of editors. Whether that means identifying an exceptional author and proposing an editorship collaboration, or contacting researchers who would like the opportunity to work with IntechOpen, I establish and help manage author and editor acquisition and contact.


Book will be abstracted and indexed in

Google Scholar, WorldCat, BASE, az, OpenAIRE