Deep learning fundamentals

Ravikumar N, Zakeri A, Xia Y, Frangi AF (2023)


Publication type: Book chapter

Publication year: 2023

Publisher: Elsevier

ISBN: 9780128136577

DOI: 10.1016/B978-0-12-813657-7.00041-8

Abstract

This chapter provides an introduction to neural networks and presents the fundamental concepts that underpin modern deep neural networks. Multilayer perceptrons (MLPs) are introduced first, and the equivalence between the simplest MLP (i.e., an MLP with just two fully-connected layers of neurons and linear activation functions) and a multivariate linear regression model is demonstrated. Efficient training of MLPs and all other modern deep neural networks is enabled by the error backpropagation algorithm, which is described next. Subsequently, this chapter provides an overview of the key building blocks used to design and train deep neural networks as powerful universal function approximators. These building blocks include frequently used activation functions, optimization algorithms, loss/objective functions, regularization strategies, and normalization techniques.
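The chapter itself is the reference for details; the following is a minimal NumPy sketch, not reproduced from the chapter, of two claims in the abstract: a two-layer MLP with linear activation functions collapses to a single affine map, i.e., a multivariate linear regression model, and its weights can be fitted with manually derived error backpropagation. All variable names and the synthetic data are illustrative assumptions. The composed weights are compared against the ordinary least-squares solution of the same problem.

import numpy as np

rng = np.random.default_rng(0)

# Synthetic regression targets: Y = X A + c + noise (all quantities illustrative)
n, d_in, d_hid, d_out = 200, 4, 8, 3
X = rng.normal(size=(n, d_in))
A_true = rng.normal(size=(d_in, d_out))
c_true = rng.normal(size=d_out)
Y = X @ A_true + c_true + 0.01 * rng.normal(size=(n, d_out))

# Two fully-connected layers with identity (linear) activations
W1 = 0.5 * rng.normal(size=(d_in, d_hid)); b1 = np.zeros(d_hid)
W2 = 0.5 * rng.normal(size=(d_hid, d_out)); b2 = np.zeros(d_out)

lr = 0.05
for _ in range(5000):
    # Forward pass (identity activations, so pre- and post-activation coincide)
    H = X @ W1 + b1          # hidden layer
    P = H @ W2 + b2          # network prediction
    # Gradient of the mean-squared-error loss with respect to the prediction
    G = 2.0 * (P - Y) / n
    # Error backpropagation through the two linear layers
    gW2, gb2 = H.T @ G, G.sum(axis=0)
    gH = G @ W2.T
    gW1, gb1 = X.T @ gH, gH.sum(axis=0)
    # Plain gradient-descent update
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

# The two layers compose into one affine map: P = X (W1 W2) + (b1 W2 + b2),
# so the trained network is a multivariate linear regression model.
W_eff, b_eff = W1 @ W2, b1 @ W2 + b2

# Ordinary least-squares fit of the same data for comparison
Xa = np.hstack([X, np.ones((n, 1))])
beta, *_ = np.linalg.lstsq(Xa, Y, rcond=None)
print("max |W_eff - W_ols|:", np.abs(W_eff - beta[:-1]).max())
print("max |b_eff - b_ols|:", np.abs(b_eff - beta[-1]).max())

Because the network is linear in its inputs, the global minimum of the mean-squared-error loss coincides with the least-squares fit, so both printed gaps should shrink toward the noise level as training proceeds.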

How to cite

APA:

Ravikumar, N., Zakeri, A., Xia, Y., & Frangi, A. F. (2023). Deep learning fundamentals. Elsevier. https://doi.org/10.1016/B978-0-12-813657-7.00041-8

MLA:

Ravikumar, Nishant, et al. Deep learning fundamentals. Elsevier, 2023.
