De Gruyter Frontiers in Computational Intelligence
THE SERIES: FRONTIERS IN COMPUTATIONAL INTELLIGENCE
The series Frontiers in Computational Intelligence is envisioned to provide comprehensive coverage and understanding of cutting-edge research in computational intelligence. It intends to augment the scholarly discourse on all topics relating to advances in artificial life and machine learning in the form of metaheuristics, approximate reasoning, and robotics. The latest research findings are coupled with applications to varied domains of engineering and computer science. The field is growing steadily, especially with the advent of novel machine learning algorithms being applied across engineering and technology. The series brings together leading researchers who intend to continue advancing the field and to build broad knowledge of the most recent research.
Series Editor
Dr. Siddhartha Bhattacharyya, CHRIST (Deemed to be University), Bangalore, India
Editorial Advisory Board
Dr. Elizabeth Behrman, Wichita State University, Kansas, USA
Dr. Goran Klepac
Dr. Leo Mrsic, Algebra University College, Croatia
Dr. Aboul Ella Hassanien, Cairo University, Egypt
Dr. Jan Platos, VSB-Technical University of Ostrava, Czech Republic
Dr. Xiao-Zhi Gao, University of Eastern Finland, Finland
Dr. Wellington Pinheiro dos Santos, Federal University of Pernambuco, Brazil
Topics
The new paradigm "Industry 5.0" promises great shifts not only in industry, but also in business and consumption models. With the help of data science and the Internet of Things, manufacturers will focus on delivering in real time, and customers will benefit from personalized products. Robots and cobots will collaborate with humans. This book explains various facets of Industry 5.0, focusing on its applications in medical research and manufacturing.
This book focuses on the latest developments in visual AI, image processing and computer vision. It presents research on basic techniques such as image pre-processing, feature extraction and enhancement, along with applications in biometrics, healthcare, neuroscience and forensics. The book highlights the algorithms, processes, novel architectures and results underlying machine intelligence, with the execution flow of the models described in detail.
In this book, researchers, academicians and professionals present their work on the application of intelligent computing techniques to software engineering. As software systems become larger and more complex, software engineering tasks grow increasingly costly and error-prone. Evolutionary algorithms, machine learning approaches, meta-heuristic algorithms and other techniques can help improve the efficiency of software engineering.
This book explains the application of Artificial Intelligence and the Internet of Things to green energy systems. The design of smart grids and intelligent networks enhances energy efficiency, while the collection of environmental data through sensors, combined with machine learning models for prediction, improves the reliability of green energy systems.
This book presents a comprehensive study of the tools and techniques available for network forensics. Various aspects of network forensics are reviewed, along with related technologies and their limitations. This helps security practitioners and researchers better understand the problem, the current solution space, and the scope of future research into detecting and investigating network intrusions efficiently.
Forensic computing is rapidly gaining importance, since the amount of crime involving digital systems is steadily increasing. Furthermore, the area is still underdeveloped and poses many technical and legal challenges. The rapid development of the Internet over the past decade has facilitated an increase in online attacks. Several factors embolden attackers: the speed with which an attack can be carried out, the anonymity provided by the medium, the nature of the medium, in which digital information can be stolen without physically removing it, the increased availability of potential victims, and the global reach of attacks. Forensic analysis is performed at two levels: computer forensics and network forensics. Computer forensics deals with the collection and analysis of data from computer systems, networks, communication streams and storage media in a manner admissible in a court of law. Network forensics deals with the capture, recording and analysis of network events in order to discover evidential information about the source of security attacks that can be presented in a court of law.
Network forensics is not another term for network security. It is an extended phase of network security, in which data for forensic analysis are collected from security products such as firewalls and intrusion detection systems, and the results of the analysis are used to investigate attacks. Network forensics generally refers to the collection and analysis of network data such as network traffic, firewall logs and IDS logs. Technically, it is a branch of the existing and expanding field of digital forensics. It has been defined as "the use of scientifically proven techniques to collect, fuse, identify, examine, correlate, analyze, and document digital evidence from multiple, actively processing and transmitting digital sources for the purpose of uncovering facts related to the planned intent, or measured success of unauthorized activities meant to disrupt, corrupt, and/or compromise system components as well as providing information to assist in response to or recovery from these activities." Network forensics plays a significant role in the security of today's organizations.
First, it helps organizations learn the details of external attacks, ensuring that similar future attacks are thwarted. Second, network forensics is essential for investigating insider abuse, the second costliest type of attack within organizations. Finally, law enforcement requires network forensics for crimes in which a computer or digital system is either the target of a crime or used as a tool in carrying one out.
Network security protects a system against attack, while network forensics focuses on recording evidence of the attack. Network security products are generalized and look for possible harmful behaviors; this monitoring is a continuous process, performed around the clock. Network forensics, by contrast, involves post-mortem investigation of an attack and is initiated after a crime is reported. Many tools assist in capturing data transferred over networks so that an attack, or the malicious intent behind an intrusion, can be investigated, and various network forensic frameworks have been proposed in the literature.
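As a minimal illustration of the post-mortem style of analysis described above (not any specific product's workflow), the sketch below scans hypothetical firewall log lines for repeated denied connection attempts from one source, a common first step when investigating a suspected port scan. The log format and threshold are illustrative assumptions:

```python
from collections import Counter

# Hypothetical firewall log lines: timestamp, action, source IP, destination port.
LOG_LINES = [
    "2021-03-01T10:00:01 DENY 203.0.113.7 22",
    "2021-03-01T10:00:02 DENY 203.0.113.7 23",
    "2021-03-01T10:00:03 ALLOW 198.51.100.4 443",
    "2021-03-01T10:00:04 DENY 203.0.113.7 80",
]

def flag_scanners(lines, threshold=3):
    """Return source IPs with at least `threshold` denied attempts."""
    denies = Counter()
    for line in lines:
        _, action, src, _port = line.split()
        if action == "DENY":
            denies[src] += 1
    return {ip for ip, n in denies.items() if n >= threshold}

print(flag_scanners(LOG_LINES))  # the repeat offender stands out
```

A real investigation would of course work from captured traffic and correlated logs rather than a toy list, but the pattern of aggregating evidence after the fact is the same.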
This book focuses on the use of Blockchain 3.0 for sustainable development. Blockchain is invaluable for achieving transparency and trust, and emerging possibilities to benefit society more broadly promise a bright future for sustainable development as well. The adoption of blockchain in agriculture, healthcare, infrastructure, education, the environment, energy and communication will bring revolutionary changes in the digital era.
The book focuses on the applications of machine learning for sustainable development. Machine learning (ML) is an emerging technique whose diffusion and adoption in sectors such as energy, agriculture, the Internet of Things and infrastructure will be of enormous benefit. State-of-the-art machine learning models are especially useful for forecasting and prediction across these sectors.
Agriculture is one of the most fundamental human activities. As farming capacity has expanded, the usage of resources such as land, fertilizer and water has grown exponentially, and environmental pressures from modern farming techniques have stressed natural landscapes. Still, by some estimates, worldwide food production needs to increase to keep up with global food demand. Machine learning and the Internet of Things can play a promising role in the agricultural industry, helping to increase food production while respecting the environment. This book explains how these technologies can be applied, offering many case studies developed in the research world.
This book covers the fundamentals of deep learning along with the current state-of-the-art research in the field. In addition, it provides insight into deep neural networks in action with illustrative coding examples.
Deep learning is an area of machine learning research introduced with the objective of moving ML closer to one of its original goals: artificial intelligence. It was developed as an ML approach to deal with complex input-output mappings. While traditional methods successfully solve problems where the final value is a simple function of the input data, deep learning techniques are able to capture composite relations between fields that are not immediately related, for example between air pressure recordings and English words, millions of pixels and a textual description, or brand-related news and future stock prices.
Deep learning is a class of nature-inspired machine learning algorithms that uses a cascade of multiple layers of nonlinear processing units for feature extraction and transformation. Each successive layer uses the output of the previous layer as its input. Learning may proceed in a supervised (e.g. classification) or unsupervised (e.g. pattern analysis) manner. These algorithms learn multiple levels of representation, corresponding to different levels of abstraction, typically by some form of gradient descent for training via backpropagation. Layers used in deep learning include the hidden layers of an artificial neural network and sets of propositional formulas. They may also include latent variables organized layer-wise in deep generative models, such as the nodes in deep belief networks and deep Boltzmann machines. Deep learning is part of state-of-the-art systems in various disciplines, particularly computer vision, automatic speech recognition (ASR) and human action recognition.
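The cascade-of-layers idea and backpropagation can be sketched in a few lines. The toy network below (two scalar layers, sigmoid units, squared-error loss, and an illustrative learning rate; every value is made up for the example) shows each layer consuming the previous layer's output and gradients flowing back through the chain rule:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(x, weights):
    """Cascade: each layer's output is the next layer's input."""
    a = x
    for w in weights:
        a = sigmoid(w * a)  # one nonlinear processing unit per layer
    return a

# Toy supervised task: map input x = 1.0 to target 0.8 with two scalar layers.
x, target = 1.0, 0.8
w1, w2 = 0.5, 0.5
lr = 5.0

for _ in range(200):
    h = sigmoid(w1 * x)            # layer 1
    y = sigmoid(w2 * h)            # layer 2 consumes layer 1's output
    # Backpropagation: chain rule from the squared-error loss to each weight.
    dz2 = 2 * (y - target) * y * (1 - y)   # dL/d(pre-activation of layer 2)
    grad_w2 = dz2 * h
    grad_w1 = dz2 * w2 * h * (1 - h) * x
    w2 -= lr * grad_w2
    w1 -= lr * grad_w1

print(round(forward(x, [w1, w2]), 2))  # converges toward the 0.8 target
```

Real deep networks use vectors, many units per layer, and automatic differentiation, but the mechanics are exactly this chain rule applied at scale.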
Quantum-enhanced machine learning refers to quantum algorithms that solve machine learning tasks, improving on classical machine learning methods. Such algorithms typically require the classical dataset to be encoded into a quantum computer, making it accessible for quantum information processing. Quantum information processing routines are then applied, and the result of the quantum computation is read out by measuring the quantum system.
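The encode-process-measure pipeline can be simulated classically for a single qubit. The sketch below assumes a simple angle-encoding scheme (a classical feature mapped to a rotation angle); the function names are illustrative and do not correspond to any particular quantum framework's API:

```python
import math

def angle_encode(x):
    """Encode a classical feature x in [0, 1] as a single-qubit state
    via a Ry(pi * x) rotation applied to |0> (amplitudes stay real)."""
    theta = math.pi * x
    return (math.cos(theta / 2), math.sin(theta / 2))  # (amp_0, amp_1)

def measure_probabilities(state):
    """Born rule: each outcome's probability is its squared amplitude."""
    a0, a1 = state
    return a0 ** 2, a1 ** 2

p0, p1 = measure_probabilities(angle_encode(0.5))
print(round(p0, 3), round(p1, 3))  # x = 0.5 yields an equal superposition
```

Measurement is where quantum computation returns classical information: only the outcome probabilities, not the amplitudes themselves, are accessible to the rest of a learning pipeline.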
While many proposals of quantum machine learning algorithms are still purely theoretical and require a full-scale universal quantum computer to be tested, others have been implemented on small-scale or special purpose quantum devices.
This publication attempts to address emerging trends in machine learning applications. Recent work in information identification has found great scope for applying machine learning techniques to gain meaningful insights, and the rapid growth of unstructured data poses new research challenges in handling this huge source of information. Efficient design of machine learning techniques is the need of the hour. Recent literature in machine learning has emphasized single techniques for information identification; there is ample scope for developing hybrid machine learning models with reduced computational complexity and enhanced identification accuracy. This book focuses on techniques that reduce feature dimension in order to design lightweight methods for real-time identification and decision fusion. A key theme of the book is the use of machine learning in daily life and its applications to improving livelihoods, although the book cannot cover the entire machine learning domain within its limited scope. It will benefit research scholars, entrepreneurs and interdisciplinary researchers in finding new applications of machine learning, and its lightweight techniques can be used in real time, adding value in practice.
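One of the simplest ways to reduce feature dimension for a lightweight model, mentioned here purely as an illustrative sketch rather than the book's own method, is to drop features whose variance is negligible. The data matrix and threshold below are hypothetical:

```python
def variance(col):
    """Population variance of one feature column."""
    m = sum(col) / len(col)
    return sum((v - m) ** 2 for v in col) / len(col)

def select_features(rows, threshold):
    """Keep only feature columns whose variance exceeds `threshold` --
    a cheap dimensionality-reduction pass for lightweight models."""
    cols = list(zip(*rows))
    keep = [i for i, col in enumerate(cols) if variance(col) > threshold]
    return keep, [[row[i] for i in keep] for row in rows]

# Hypothetical feature matrix: the middle column is constant, so it
# carries no information for discriminating between samples.
X = [[1.0, 5.0, 0.2], [2.0, 5.0, 0.9], [3.0, 5.0, 0.4]]
kept, reduced = select_features(X, threshold=0.01)
print(kept)  # the constant column is dropped
```

Hybrid pipelines typically combine such a cheap filter with a heavier learned selector, trading a little accuracy for much lower computational cost at inference time.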
Intelligent prediction and decision support systems are based on signal processing, computer vision (CV), machine learning (ML), software engineering (SE), knowledge-based systems (KBS), data mining and artificial intelligence (AI), and include systems developed from the study of expert systems (ES), genetic algorithms (GA), artificial neural networks (ANN) and fuzzy-logic systems. The use of automatic decision support systems in the design and manufacturing industry, healthcare, and commercial software development has the following benefits:
- Cost savings in companies, due to the deployment of expert system technology.
- Fast decision making, timely completion of projects and development of new products.
- Improvement in decision-making capability and quality.
- Use of knowledge databases and preservation of the expertise of individuals.
- Simplification of complex decision problems, e.g. diagnosis in healthcare.
To address the issues and challenges in developing, implementing and applying automatic and intelligent prediction and decision support systems in domains such as manufacturing, healthcare, and software product design, development and optimization, this book collects a wide range of quality articles: original research contributions, methodological reviews, survey papers, case studies and reports covering intelligent systems, expert prediction systems, evaluation models, decision support systems and computer-aided diagnosis (CAD).
After a short description of the key concepts of big data, the book explores the secrecy and security threats posed especially by cloud-based data storage. It delivers conceptual frameworks and models, along with case studies of recent technology.
This volume comprises eight contributed chapters reporting the latest findings on intelligent approaches to multimedia data analysis. Multimedia data combine different discrete and continuous content forms such as text, audio, images, videos, animations and interactional data; transmitted information containing at least one continuous medium constitutes multimedia information.
Owing to this variety, multimedia data exhibit varied degrees of uncertainty and imprecision, which are not easy to handle with conventional computing paradigms. Soft computing technologies are efficient at handling the imprecision and uncertainty of multimedia data and are flexible enough to process real-world information. Proper analysis of multimedia data finds wide application in medical diagnosis, video surveillance, text annotation, etc.
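A classic soft computing device for expressing such imprecision is the fuzzy membership function, which grades a value's degree of membership in a set rather than forcing a yes/no decision. The sketch below, with made-up fuzzy sets for pixel brightness, is only a minimal illustration of the idea:

```python
def triangular(x, a, b, c):
    """Triangular fuzzy membership: 0 at a and c, rising to 1 at the peak b."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

# Hypothetical fuzzy sets for pixel brightness on a 0-255 scale: a pixel can
# be partially "dark" and partially "bright" at the same time.
dark = lambda x: triangular(x, -1, 0, 128)
bright = lambda x: triangular(x, 127, 255, 256)

print(dark(64), round(bright(200), 3))  # graded memberships, not 0/1 labels
```

This graded view is what lets soft computing systems reason about noisy or ambiguous multimedia content where a crisp threshold would misclassify borderline cases.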
This volume is intended as a reference for undergraduate and postgraduate students in computer science, electronics and telecommunication, information science and electrical engineering.
This volume comprises six contributed chapters reporting the latest findings on the applications of machine learning for big data analytics. Big data is a term for data sets so large or complex that traditional data processing software is inadequate to deal with them. Challenges include capture, storage, analysis, data curation, search, sharing, transfer, visualization, querying, updating and information privacy.
Big data analytics is the process of examining large and varied data sets - i.e., big data - to uncover hidden patterns, unknown correlations, market trends, customer preferences and other useful information that can help organizations make more informed business decisions. This volume is intended as a reference for undergraduate and postgraduate students in computer science, electronics and telecommunication, information science and electrical engineering.
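"Uncovering unknown correlations" most often starts with a correlation coefficient. As a minimal, self-contained sketch (the data below are invented for illustration), the Pearson coefficient measures how strongly two quantities move together:

```python
def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical records: weekly ad spend vs. units sold.
ad_spend = [10, 20, 30, 40, 50]
units = [12, 24, 33, 41, 55]
print(round(pearson(ad_spend, units), 3))  # close to 1: strong linear relation
```

At big data scale the same statistic is computed in a distributed, streaming fashion over billions of records, but the quantity being estimated is unchanged.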