
8. Machine learning approach for exploring computational intelligence

  • J. Naveenkumar and Dhanashri P. Joshi

Abstract

Computational intelligence (CI) is a field inspired by genetic and natural mechanisms such as the neuronal structure of the brain [neural networks (NN)]. CI rests on three main pillars: NN, fuzzy systems, and evolutionary computing (EC). Fuzzy systems use natural language, more specifically the language used by human beings, to represent linguistic ambiguity and to ease the difficulties of approximate-reasoning computation. A NN is a computational model designed along the lines of the structure and working of the human brain, which can be trained and then perform its task accordingly. This chapter focuses on EC and its various facets for machine learning (ML) applications. EC addresses optimization problems by generating, evaluating, and transforming candidate solutions, using techniques such as evolutionary programming, genetic algorithms, multi-objective optimization, and evolvable hardware (EH). In the recent years of ML and big data, computer systems and applications create, gather, and consume enormous amounts of data. Since these data feed ML workloads and other data-intensive applications, the rate at which data can be transferred from the storage unit to the host becomes a bottleneck. Evolvable programming and EH together may be a key to solving this optimization problem. A solution can be provided by near-data processing (NDP), also called in-situ processing. NDP executes compute-intensive applications in situ, that is, within or very close to the storage unit (memory/storage). As a result, NDP seeks to reduce costly data movement and thereby improve performance. NDP combines a software system and a hardware system built from a central processing unit (the host) and a smart storage system. A smart storage system comprises several elements: storage disks, a cache, a frontend, and a backend.
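The generate/evaluate/transform loop that EC applies to optimization can be sketched as a minimal genetic algorithm. This is an illustrative example, not code from the chapter; the objective function, population size, and mutation scale below are assumptions chosen for demonstration:

```python
import random

def fitness(x):
    # Illustrative objective: maximize -(x - 3)^2, optimum at x = 3.
    return -(x - 3.0) ** 2

def evolve(pop_size=30, generations=100, mutation_scale=0.5, seed=0):
    rng = random.Random(seed)
    # Generate: random initial population of candidate solutions.
    population = [rng.uniform(-10.0, 10.0) for _ in range(pop_size)]
    for _ in range(generations):
        # Evaluate: rank candidates by fitness and keep the better half.
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]
        # Transform: crossover (averaging two parents) plus Gaussian mutation.
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            children.append((a + b) / 2.0 + rng.gauss(0.0, mutation_scale))
        population = parents + children
    return max(population, key=fitness)

best = evolve()
```

Evolutionary programming, multi-objective optimization, and EH follow the same loop, differing mainly in how candidates are represented and transformed.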
The cache is a large-capacity memory used to temporarily store hot data (frequently used data), while cold data (less frequently used data) are demoted to the underlying storage layer. The frontend connects the host system to the storage system; the backend supports communication between the cache and the storage disks. Part of the backend is a backend controller that manages these disks, providing Redundant Array of Independent Disks (RAID) capability along with error-detection and error-correction methods. This chapter discusses NDP architectures suited to ML methods and shows how these architectures accelerate the execution of ML methods, focusing on recent research developments in NDP for various ML methods.
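The hot/cold split between the cache and the underlying storage layer can be illustrated with a toy tiered store. This sketch is an assumption-laden simplification (an LRU cache backed by a dictionary standing in for the disks), not the chapter's architecture:

```python
from collections import OrderedDict

class TieredStore:
    """Toy model of a smart storage system's cache tier: hot data stay
    in a small LRU cache; evicted (cold) items are demoted to the
    backing storage layer."""

    def __init__(self, cache_capacity):
        self.cache = OrderedDict()   # hot tier, least recently used first
        self.storage = {}            # cold tier (stands in for the disks)
        self.capacity = cache_capacity

    def put(self, key, value):
        self.cache[key] = value
        self.cache.move_to_end(key)            # mark as most recently used
        while len(self.cache) > self.capacity:
            cold_key, cold_val = self.cache.popitem(last=False)
            self.storage[cold_key] = cold_val  # demote cold data

    def get(self, key):
        if key in self.cache:                  # hot hit
            self.cache.move_to_end(key)
            return self.cache[key]
        value = self.storage.pop(key)          # cold miss: fetch and promote
        self.put(key, value)
        return value
```

In a real NDP system the cold tier is the disk array behind the backend controller, and the point of moving computation near that tier is precisely to avoid the promote/transfer step this toy model performs.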


Downloaded on 19.1.2026 from https://www.degruyterbrill.com/document/doi/10.1515/9783110648195-008/html