Understanding Deep Neural Networks

Course

Online

Price on request

Description

  • Type: Course
  • Methodology: Online
  • Start date: Choose a date

This course begins by providing conceptual knowledge of neural networks and, more generally, of machine learning and deep learning algorithms and their applications.

Part 1 (40%) of the training focuses on the fundamentals and will help you choose the right technology: TensorFlow, Caffe, Theano, DeepDrive, Keras, etc.

Part 2 (20%) introduces Theano, a Python library that simplifies writing deep learning models.

Part 3 (40%) is largely based on TensorFlow, the second-generation API of Google's open-source software library for deep learning. The examples and hands-on exercises are all done in TensorFlow.

Audience: this course is intended for engineers who want to use TensorFlow for their deep learning projects.

After completing this course, delegates will:

  • have a good knowledge of deep neural networks (DNN), CNN and RNN
  • understand the structure and deployment mechanisms of TensorFlow
  • be able to carry out installation, production environment, architecture and configuration tasks
  • be able to assess code quality, perform debugging and monitoring
  • be able to implement advanced production practices such as training models, building graphs and logging

Not all topics can be covered in a 35-hour public classroom session because of the breadth of the subject; the full course would take about 70 hours rather than 35.

Locations and dates

  • Location: Online
  • Course start: Choose a date (enrolment open)

Course profile

Background in physics, mathematics and programming. Involvement in image-processing activities.
Delegates should have a prior understanding of machine learning concepts and prior experience with Python programming and its libraries.


Subjects

  • E-learning
  • Library
  • Neural networks
  • Caffe
  • Algorithms
  • Networks
  • Production

Program

Part 1 – Deep Learning and DNN Concepts


Introduction to AI, Machine Learning & Deep Learning

  • History, basic concepts and common applications of artificial intelligence, far from the fantasies surrounding this field

  • Collective Intelligence: aggregating knowledge shared by many virtual agents

  • Genetic algorithms: evolving a population of virtual agents by selection

  • Classical machine learning: definition.

  • Types of tasks: supervised learning, unsupervised learning, reinforcement learning

  • Types of problems: classification, regression, clustering, density estimation, dimensionality reduction

  • Examples of Machine Learning algorithms: Linear regression, Naive Bayes, Random Tree (see the sketch after this list)

  • Machine learning vs. deep learning: problems on which machine learning remains the state of the art today (Random Forests & XGBoost)
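
As a point of reference before the deep learning material, the sketch below fits two of the classical algorithms named above using scikit-learn (one of the tools listed later in the program); the toy data and settings are illustrative assumptions, not part of the course material.

    # Minimal sketch: classical ML baselines with scikit-learn on assumed toy data.
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.naive_bayes import GaussianNB

    rng = np.random.default_rng(0)

    # Linear regression on a noisy linear signal.
    X = rng.uniform(-1, 1, size=(200, 3))
    y = X @ np.array([2.0, -1.0, 0.5]) + 0.1 * rng.standard_normal(200)
    reg = LinearRegression().fit(X, y)
    print("learned coefficients:", reg.coef_)

    # Naive Bayes on a toy two-class problem.
    Xc = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(3, 1, (100, 2))])
    yc = np.array([0] * 100 + [1] * 100)
    clf = GaussianNB().fit(Xc, yc)
    print("training accuracy:", clf.score(Xc, yc))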


Basic Concepts of a Neural Network (Application: multi-layer perceptron)

  • Reminder of the mathematical foundations.

  • Definition of a neural network: classical architecture, activation functions, weighting of previous activations, depth of a network

  • Definition of neural network training: cost functions, back-propagation, stochastic gradient descent, maximum likelihood (see the sketch after this list).

  • Modeling a neural network: modeling input and output data according to the type of problem (regression, classification, ...). Curse of dimensionality.

  • Distinction between multi-feature data and signals. Choice of a cost function according to the data.

  • Approximating a function with a neural network: presentation and examples

  • Approximating a distribution with a neural network: presentation and examples

  • Data Augmentation: how to balance a dataset

  • Generalization of neural network results.

  • Initialization and regularization of a neural network: L1 / L2 regularization, Batch Normalization

  • Optimization and convergence algorithms
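
To make the training loop concrete, here is a minimal NumPy sketch of a one-hidden-layer perceptron trained by back-propagation and stochastic gradient descent on toy regression data; the network size, learning rate and squared-error cost are illustrative assumptions.

    # Minimal sketch: one-hidden-layer MLP trained with back-propagation and SGD.
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.uniform(-1, 1, size=(256, 2))                # toy inputs
    y = (np.sin(3 * X[:, 0]) + X[:, 1]).reshape(-1, 1)   # toy regression target

    # Parameters of a 2 -> 16 -> 1 network with tanh hidden activation.
    W1 = rng.standard_normal((2, 16)) * 0.5; b1 = np.zeros(16)
    W2 = rng.standard_normal((16, 1)) * 0.5; b2 = np.zeros(1)
    lr = 0.05

    for epoch in range(200):
        for i in rng.permutation(len(X)):                # stochastic: one sample at a time
            x_i, y_i = X[i:i+1], y[i:i+1]
            # Forward pass.
            h = np.tanh(x_i @ W1 + b1)
            y_hat = h @ W2 + b2
            # Backward pass for the squared-error cost (y_hat - y_i)**2 / 2.
            d_out = y_hat - y_i                          # dL/dy_hat
            dW2 = h.T @ d_out; db2 = d_out.sum(0)
            d_h = (d_out @ W2.T) * (1 - h ** 2)          # tanh derivative
            dW1 = x_i.T @ d_h; db1 = d_h.sum(0)
            # SGD update.
            W2 -= lr * dW2; b2 -= lr * db2
            W1 -= lr * dW1; b1 -= lr * db1

    mse = np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - y) ** 2)
    print("final training MSE:", mse)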


Standard ML / DL Tools

For each tool, a brief presentation is planned covering its advantages, disadvantages, position in the ecosystem, and typical use.

  • Data management tools: Apache Spark, Apache Hadoop

  • Machine learning libraries: NumPy, SciPy, scikit-learn

  • DL high level frameworks: PyTorch, Keras, Lasagne

  • Low level DL frameworks: Theano, Torch, Caffe, Tensorflow


Convolutional Neural Networks (CNN).

  • Presentation of the CNNs: fundamental principles and applications

  • Basic operation of a CNN: convolutional layer, use of a kernel, padding & stride, feature map generation, pooling layers. 1D, 2D and 3D extensions (see the sketch after this list).

  • Presentation of the different CNN architectures that brought the state of the art in image classification: LeNet, VGG networks, Network in Network, Inception, ResNet. Presentation of the innovations brought by each architecture and their broader applications (1x1 convolutions, residual connections)

  • Use of an attention model.

  • Application to a common classification case (text or image)

  • CNNs for generation: super-resolution, pixel-to-pixel segmentation. Presentation of the main strategies for upscaling feature maps for image generation.
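
The padding, stride and feature-map mechanics can be seen in this minimal NumPy sketch of a single 2D convolutional layer; the input size and kernel values are illustrative assumptions.

    # Minimal sketch: a single 2D convolution with padding and stride, in NumPy.
    import numpy as np

    def conv2d(image, kernel, stride=1, padding=0):
        """Cross-correlation of a 2D image with a 2D kernel (as used in CNN layers)."""
        if padding:
            image = np.pad(image, padding)
        kh, kw = kernel.shape
        out_h = (image.shape[0] - kh) // stride + 1
        out_w = (image.shape[1] - kw) // stride + 1
        feature_map = np.zeros((out_h, out_w))
        for i in range(out_h):
            for j in range(out_w):
                patch = image[i * stride:i * stride + kh, j * stride:j * stride + kw]
                feature_map[i, j] = np.sum(patch * kernel)
        return feature_map

    image = np.arange(36, dtype=float).reshape(6, 6)     # toy 6x6 input
    kernel = np.array([[1., 0., -1.],                    # simple edge-detection kernel
                       [1., 0., -1.],
                       [1., 0., -1.]])

    print(conv2d(image, kernel, stride=1, padding=1).shape)  # (6, 6): "same" padding
    print(conv2d(image, kernel, stride=2, padding=0).shape)  # (2, 2): stride shrinks the map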


Recurrent Neural Networks (RNN).

  • Presentation of RNNs: fundamental principles and applications.

  • Basic operation of an RNN: hidden activation, back-propagation through time, unfolded version (see the sketch after this list).

  • Evolution towards Gated Recurrent Units (GRU) and LSTM (Long Short-Term Memory).

  • Presentation of the different states and the evolutions brought by these architectures

  • Convergence and vanishing gradient problems

  • Classical architectures: prediction of a time series, classification, ...

  • RNN Encoder Decoder type architecture. Use of an attention model.

  • NLP applications: word / character encoding, translation.

  • Video Applications: prediction of the next generated image of a video sequence.
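
As a complement, the sketch below unrolls a vanilla RNN over a short toy sequence in NumPy, showing the hidden activation that back-propagation through time would differentiate; the dimensions and random weights are illustrative assumptions.

    # Minimal sketch: forward pass of a vanilla RNN unrolled over time, in NumPy.
    import numpy as np

    rng = np.random.default_rng(0)
    input_dim, hidden_dim, seq_len = 4, 8, 5

    # Shared weights applied at every time step.
    W_xh = rng.standard_normal((input_dim, hidden_dim)) * 0.1
    W_hh = rng.standard_normal((hidden_dim, hidden_dim)) * 0.1
    b_h = np.zeros(hidden_dim)

    xs = rng.standard_normal((seq_len, input_dim))   # toy input sequence
    h = np.zeros(hidden_dim)                         # initial hidden state
    hidden_states = []

    for t in range(seq_len):
        # Hidden activation: combine the current input with the previous state.
        h = np.tanh(xs[t] @ W_xh + h @ W_hh + b_h)
        hidden_states.append(h)

    print("final hidden state:", hidden_states[-1])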


Generative models: Variational AutoEncoder (VAE) and Generative Adversarial Networks (GAN).

  • Presentation of generative models, link with CNNs

  • Auto-encoder: reduction of dimensionality and limited generation

  • Variational auto-encoder: generative model and approximation of the distribution of the data. Definition and use of the latent space. Reparameterization trick (see the sketch after this list). Applications and observed limits

  • Generative Adversarial Networks: Fundamentals.

  • Dual Network Architecture (Generator and discriminator) with alternate learning, cost functions available.

  • Convergence of a GAN and difficulties encountered.

  • Improving convergence: Wasserstein GAN, BEGAN. Earth Mover's Distance.

  • Applications for the generation of images or photographs, text generation, super-resolution.
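
One piece that is easy to show in isolation is the VAE reparameterization trick: drawing the latent code as a deterministic function of the encoder outputs plus external noise, so gradients can flow through the sampling step. The sketch below uses NumPy and toy encoder outputs; the dimensions and values are illustrative assumptions.

    # Minimal sketch: the VAE reparameterization trick on toy encoder outputs.
    import numpy as np

    rng = np.random.default_rng(0)
    latent_dim = 2

    # Pretend these came out of the encoder for one input.
    mu = np.array([0.5, -1.0])            # mean of the approximate posterior
    log_var = np.array([-0.2, 0.3])       # log-variance of the approximate posterior

    # z = mu + sigma * eps, with eps ~ N(0, I): the randomness is external,
    # so z is differentiable with respect to mu and log_var.
    eps = rng.standard_normal(latent_dim)
    z = mu + np.exp(0.5 * log_var) * eps
    print("sampled latent code:", z)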

Deep Reinforcement Learning.

  • Presentation of reinforcement learning: controlling an agent in an environment defined by a state and possible actions

  • Use of a neural network to approximate the state function

  • Deep Q-Learning: experience replay, and application to the control of a video game (see the sketch after this list).

  • Optimization of the learning policy: on-policy vs. off-policy. Actor-critic architecture. A3C.

  • Applications: control of a single video game or a digital system.
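
At the core of Deep Q-Learning is the temporal-difference update applied to the Q estimate; the tabular sketch below shows that update on a tiny chain environment, before any neural approximation is added. The environment, learning rate and discount factor are illustrative assumptions.

    # Minimal sketch: tabular Q-learning on a tiny chain environment.
    import numpy as np

    rng = np.random.default_rng(0)
    n_states, n_actions = 5, 2            # states 0..4, actions: 0 = left, 1 = right
    Q = np.zeros((n_states, n_actions))
    alpha, gamma, epsilon = 0.1, 0.95, 0.1

    def step(state, action):
        """Move along the chain; reaching the last state pays 1 and restarts at 0."""
        next_state = min(state + 1, n_states - 1) if action == 1 else max(state - 1, 0)
        if next_state == n_states - 1:
            return 0, 1.0                 # reward for reaching the goal, then restart
        return next_state, 0.0

    state = 0
    for _ in range(5000):
        # Epsilon-greedy action selection.
        action = rng.integers(n_actions) if rng.random() < epsilon else int(Q[state].argmax())
        next_state, reward = step(state, action)
        # Q-learning temporal-difference update.
        td_target = reward + gamma * Q[next_state].max()
        Q[state, action] += alpha * (td_target - Q[state, action])
        state = next_state

    print(Q)   # the "go right" action should dominate in every state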

Part 2 – Theano for Deep Learning

Theano Basics
  • Introduction

  • Installation and Configuration

Theano Functions

  • inputs, outputs, updates, givens
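
If Theano is installed (the library is no longer actively developed, so this is a sketch of its historical API), a single theano.function call illustrates all four arguments listed above: inputs, outputs, updates on shared variables, and givens substitutions.

    # Minimal sketch of theano.function with inputs, outputs, updates and givens.
    import theano
    import theano.tensor as T

    x = T.dscalar('x')                          # symbolic input
    scale = T.dscalar('scale')
    counter = theano.shared(0, name='counter')  # shared state updated on each call

    y = scale * x ** 2                          # symbolic output

    f = theano.function(
        inputs=[x],                             # values supplied at call time
        outputs=y,                              # expression(s) to evaluate
        updates=[(counter, counter + 1)],       # side effect: increment the call counter
        givens={scale: T.constant(3.0)})        # substitute a fixed value for `scale`

    print(f(2.0))                 # 12.0
    print(counter.get_value())    # 1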

Training and Optimization of a neural network using Theano
  • Neural Network Modeling

  • Logistic Regression

  • Hidden Layers

  • Training a network

  • Computing and Classification

  • Optimization

  • Log Loss

Testing the model
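
Putting the pieces of this part together, below is a compact sketch of logistic regression trained with Theano on random toy data, close in spirit to Theano's classic tutorial example; the data, learning rate and number of steps are illustrative assumptions.

    # Minimal sketch: logistic regression with log loss, trained in Theano on toy data.
    import numpy as np
    import theano
    import theano.tensor as T

    rng = np.random.default_rng(0)
    N, D = 400, 20                                   # toy dataset: 400 samples, 20 features
    data_x = rng.standard_normal((N, D)).astype(theano.config.floatX)
    data_y = (data_x[:, 0] > 0).astype('int32')      # label depends on the first feature

    x = T.matrix('x')
    y = T.ivector('y')
    w = theano.shared(np.zeros(D, dtype=theano.config.floatX), name='w')
    b = theano.shared(np.asarray(0., dtype=theano.config.floatX), name='b')

    p_1 = T.nnet.sigmoid(T.dot(x, w) + b)            # probability that the class is 1
    log_loss = -y * T.log(p_1) - (1 - y) * T.log(1 - p_1)
    cost = log_loss.mean() + 0.01 * (w ** 2).sum()   # log loss + L2 regularization
    grad_w, grad_b = T.grad(cost, [w, b])

    train = theano.function(
        inputs=[x, y],
        outputs=cost,
        updates=[(w, w - 0.1 * grad_w), (b, b - 0.1 * grad_b)])
    predict = theano.function(inputs=[x], outputs=p_1 > 0.5)

    for step in range(200):                          # plain gradient descent on the full batch
        train(data_x, data_y)

    print("training accuracy:", np.mean(predict(data_x) == data_y))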


Part 3 – DNN using TensorFlow

TensorFlow Basics
  • Creating, initializing, saving, and restoring TensorFlow variables (see the sketch after this list)

  • Feeding, Reading and Preloading TensorFlow Data

  • How to use TensorFlow infrastructure to train models at scale

  • Visualizing and Evaluating models with TensorBoard
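
The sketch below shows variable creation, initialization, saving and restoring using the TensorFlow 1.x graph-style API the course was written against (reachable as tf.compat.v1 on TensorFlow 2 installations); the checkpoint path is a hypothetical example.

    # Minimal sketch: creating, initializing, saving and restoring TF variables
    # (TensorFlow 1.x graph-style API, available as tf.compat.v1 on TF 2).
    import os
    import tensorflow.compat.v1 as tf
    tf.disable_eager_execution()

    weights = tf.get_variable("weights", shape=[2, 2],
                              initializer=tf.zeros_initializer())
    assign_op = weights.assign([[1., 2.], [3., 4.]])
    saver = tf.train.Saver()

    ckpt_path = "/tmp/understanding_dnn/model.ckpt"    # hypothetical checkpoint path
    os.makedirs(os.path.dirname(ckpt_path), exist_ok=True)

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())    # initialize all variables
        sess.run(assign_op)
        saver.save(sess, ckpt_path)                    # write the checkpoint

    with tf.Session() as sess:
        saver.restore(sess, ckpt_path)                 # restore without re-initializing
        print(sess.run(weights))                       # [[1. 2.] [3. 4.]]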

TensorFlow Mechanics
  • Prepare the Data

  • Download

  • Inputs and Placeholders

  • Build the Graph

    • Inference

    • Loss

    • Training

  • Train the Model

    • The Graph

    • The Session

    • Train Loop

  • Evaluate the Model

    • Build the Eval Graph

    • Eval Output
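
As a condensed walk-through of the mechanics above (placeholders, graph, loss, session and train loop), here is a linear-regression sketch in the TensorFlow 1.x style; the toy data and hyperparameters are illustrative assumptions.

    # Minimal sketch of the TF 1.x mechanics: placeholders, graph, loss, session, train loop.
    import numpy as np
    import tensorflow.compat.v1 as tf
    tf.disable_eager_execution()

    # Toy data: y = 3x + 2 with noise.
    x_data = np.random.rand(100).astype(np.float32)
    y_data = (3.0 * x_data + 2.0 + 0.05 * np.random.randn(100)).astype(np.float32)

    # Inputs and placeholders.
    x_ph = tf.placeholder(tf.float32, shape=[None])
    y_ph = tf.placeholder(tf.float32, shape=[None])

    # Build the graph: inference, loss, training op.
    w = tf.Variable(0.0)
    b = tf.Variable(0.0)
    y_pred = w * x_ph + b                                    # inference
    loss = tf.reduce_mean(tf.square(y_pred - y_ph))          # loss
    train_op = tf.train.GradientDescentOptimizer(0.5).minimize(loss)  # training

    # The session and the train loop.
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        for step in range(200):
            sess.run(train_op, feed_dict={x_ph: x_data, y_ph: y_data})
        # Evaluate the model (here on the training data, for illustration).
        print(sess.run([w, b, loss], feed_dict={x_ph: x_data, y_ph: y_data}))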

The Perceptron
  • Activation functions

  • The perceptron learning algorithm (see the sketch after this list)

  • Binary classification with the perceptron

  • Document classification with the perceptron

  • Limitations of the perceptron
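
The perceptron learning rule itself fits in a few lines; below is a NumPy sketch for binary classification on a linearly separable toy dataset. The data and learning rate are illustrative assumptions, and the failure on non-separable data is exactly the limitation noted above.

    # Minimal sketch: the perceptron learning algorithm for binary classification.
    import numpy as np

    rng = np.random.default_rng(0)

    # Linearly separable toy data: label +1 if x0 + x1 > 0, else -1.
    X = rng.uniform(-1, 1, size=(100, 2))
    y = np.where(X.sum(axis=1) > 0, 1, -1)

    w = np.zeros(2)
    b = 0.0
    lr = 0.1

    for epoch in range(20):
        errors = 0
        for x_i, y_i in zip(X, y):
            # Activation: sign of the weighted sum (step activation function).
            y_hat = 1 if np.dot(w, x_i) + b > 0 else -1
            if y_hat != y_i:                 # update only on mistakes
                w += lr * y_i * x_i
                b += lr * y_i
                errors += 1
        if errors == 0:                      # converged: the data is separated
            break

    print("weights:", w, "bias:", b, "epochs used:", epoch + 1)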

From the Perceptron to Support Vector Machines
  • Kernels and the kernel trick

  • Maximum margin classification and support vectors
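
For the transition from the perceptron to SVMs, the sketch below uses scikit-learn to compare a linear kernel with an RBF kernel on data that is not linearly separable; the dataset and hyperparameters are illustrative assumptions.

    # Minimal sketch: maximum-margin classification with and without the kernel trick.
    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)

    # "Circle" data: class 1 inside a disc, class 0 outside -> not linearly separable.
    X = rng.uniform(-1, 1, size=(300, 2))
    y = (np.linalg.norm(X, axis=1) < 0.5).astype(int)

    linear_svm = SVC(kernel="linear", C=1.0).fit(X, y)
    rbf_svm = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X, y)

    print("linear kernel accuracy:", linear_svm.score(X, y))
    print("RBF kernel accuracy:  ", rbf_svm.score(X, y))
    print("number of support vectors (RBF):", rbf_svm.n_support_)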

Artificial Neural Networks
  • Nonlinear decision boundaries

  • Feedforward and feedback artificial neural networks

  • Multilayer perceptrons

  • Minimizing the cost function

  • Forward propagation

  • Back propagation

  • Improving the way neural networks learn

Convolutional Neural Networks
  • Goals

  • Model Architecture

  • Principles

  • Code Organization

  • Launching and Training the Model

  • Evaluating a Model
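
To tie this CNN module to runnable code, here is a compact model sketch using tf.keras (a higher-level API than the low-level TensorFlow code the course mostly uses); the architecture and the MNIST-shaped random stand-in data are illustrative assumptions.

    # Minimal sketch: a small CNN defined, trained and evaluated with tf.keras.
    import numpy as np
    import tensorflow as tf

    # Toy stand-in for image data: 28x28 grayscale images, 10 classes.
    x_train = np.random.rand(512, 28, 28, 1).astype("float32")
    y_train = np.random.randint(0, 10, size=(512,))

    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(16, 3, activation="relu", input_shape=(28, 28, 1)),
        tf.keras.layers.MaxPooling2D(),                 # pooling layer
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])

    # Launching and training the model.
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(x_train, y_train, epochs=1, batch_size=64, verbose=0)

    # Evaluating the model (here on the same toy data, for illustration only).
    loss, acc = model.evaluate(x_train, y_train, verbose=0)
    print("toy-data loss and accuracy:", loss, acc)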


Basic introductions to the modules below (a brief overview will be provided depending on time availability):

TensorFlow - Advanced Usage

  • Threading and Queues

  • Distributed TensorFlow

  • Writing Documentation and Sharing your Model

  • Customizing Data Readers

  • Manipulating TensorFlow Model Files


TensorFlow Serving

  • Introduction

  • Basic Serving Tutorial

  • Advanced Serving Tutorial

  • Serving Inception Model Tutorial
