Course information may be found here.
You can find more details about the course at my homepage.
Feel free to contact me if you have any questions or want to discuss any topic from the course 😊
Authorship is credited where possible.
In this exercise, we'll explore essential TensorFlow 2 and Keras concepts through hands-on examples with the MNIST dataset - the "Hello World" of deep learning. We'll cover:
Core Concepts
- 🚀 Building and training a basic neural network for digit classification
- 📒 Understanding validation strategies for model evaluation
- 📊 Exploring model complexity and its impact on performance
- ✅ Designing optimal architectures using fully connected layers
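As a taste of what the notebook builds, here is a minimal Keras sketch of a fully connected MNIST classifier; the layer sizes and epoch count are illustrative, not the exercise's exact architecture:

```python
import tensorflow as tf

# Load MNIST and scale pixel values to [0, 1]
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

# A small fully connected network for digit classification
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(28, 28)),
    tf.keras.layers.Flatten(),                        # 28x28 image -> 784-dim vector
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),  # one output per digit class
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# validation_split holds out part of the training data to monitor generalization
model.fit(x_train, y_train, epochs=5, validation_split=0.1)
model.evaluate(x_test, y_test)
```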
The goal of the exercise is to learn how to solve regression problems using deep learning. We will train our models on the Auto MPG dataset.
Core Concepts
- ⛽ Regression task of predicting fuel consumption
- 💾 Auto MPG dataset from UCI Machine Learning Repository
- 🚗 Predicting fuel efficiency of vehicles
- 🧪 Using provided data to train ANN regression models
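A minimal sketch of an ANN regressor in the spirit of this exercise; the feature count and the random stand-in data are assumptions, not the Auto MPG pipeline itself:

```python
import numpy as np
import tensorflow as tf

num_features = 9  # hypothetical feature count after encoding categorical columns
X = np.random.rand(128, num_features).astype("float32")  # stand-in for the real features
y = np.random.rand(128, 1).astype("float32")             # stand-in for MPG targets

# Standardize inputs with a Normalization layer adapted to the training data
normalizer = tf.keras.layers.Normalization()
normalizer.adapt(X)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(num_features,)),
    normalizer,
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1),  # single linear unit -> predicted MPG
])
model.compile(optimizer="adam", loss="mse", metrics=["mae"])  # MSE loss: regression, not classification
model.fit(X, y, epochs=5, verbose=0)
```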
The aim of the exercise is to learn how to use a basic architecture built on convolutional layers and how to classify image data.
Core Concepts
- 🎯 Convolutional Neural Networks basics
- 📊 Working with CIFAR-10 dataset
- ✅ Model validation techniques
- 🔄 Batch normalization in Keras
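A minimal CNN sketch with batch normalization, roughly the style used on CIFAR-10; the exact depth and filter counts in the notebook may differ:

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(32, 32, 3)),  # CIFAR-10 images are 32x32 RGB
    tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu"),
    tf.keras.layers.BatchNormalization(),      # normalize activations to stabilize training
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, padding="same", activation="relu"),
    tf.keras.layers.BatchNormalization(),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),  # 10 CIFAR-10 classes
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```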
The aim of the exercise is to learn how to use transfer learning for image data; in the second part of the exercise we will look at time series classification using CNNs.
Core Concepts
- 🧠 Transfer learning techniques in CNNs
- 📈 1D Convolutions for time-series processing
- 📊 CIFAR-10 dataset utilization
- ⏱️ FordA dataset for time-series classification tasks
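Two minimal sketches for the exercise's two parts; the backbone choice (MobileNetV2), input sizes, and layer widths are illustrative assumptions:

```python
import tensorflow as tf

# 1) Transfer learning: frozen pre-trained backbone + new classification head.
base = tf.keras.applications.MobileNetV2(input_shape=(96, 96, 3),
                                         include_top=False, weights="imagenet")
base.trainable = False  # reuse ImageNet features; train only the new head
image_model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10, activation="softmax"),  # e.g. CIFAR-10 classes (images upscaled to 96x96)
])

# 2) 1D convolutions over a univariate time series (FordA-like input: 500 timesteps).
series_model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(500, 1)),
    tf.keras.layers.Conv1D(64, kernel_size=3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),   # binary classification
])
```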
The goal of the exercise is to learn how to use autoencoder and variational autoencoder architectures on image data to generate new image instances and to detect anomalies.
Core Concepts
- 🖼️ Autoencoders for image reconstruction
- 🔀 Variational Autoencoders for image generation
- 🔢 MNIST dataset for image processing tasks
- ⚙️ Implementation of CNN-based autoencoders
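A minimal dense autoencoder sketch for flattened MNIST images; the notebook works with CNN-based (and variational) versions, but the encoder/decoder idea is the same:

```python
import tensorflow as tf

latent_dim = 32  # size of the compressed representation (an assumed choice)

encoder = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(784,)),               # flattened 28x28 image
    tf.keras.layers.Dense(latent_dim, activation="relu"),
])
decoder = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(latent_dim,)),
    tf.keras.layers.Dense(784, activation="sigmoid"),  # reconstruct pixels in [0, 1]
])
autoencoder = tf.keras.Sequential([encoder, decoder])
autoencoder.compile(optimizer="adam", loss="binary_crossentropy")
# After training, a high reconstruction error on a sample can serve as an anomaly signal.
```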
The aim of the exercise is to learn how to use recurrent neural networks (RNN) for text data analysis, specifically focusing on sentiment analysis tasks using Twitter data.
Core Concepts
- 🧠 Recurrent neural networks for sequence processing
- 📝 Sentiment analysis of textual data
- 🐦 Twitter dataset utilization
- 🔤 GloVe embeddings for word representation
- 📊 Text classification by sentiment
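A minimal sketch of an RNN sentiment classifier; vocabulary size, sequence length, and layer widths are illustrative, and in the exercise the embedding matrix is initialized from GloVe vectors:

```python
import tensorflow as tf

vocab_size, seq_len, embed_dim = 10_000, 50, 100  # assumed sizes
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(seq_len,)),           # sequence of token ids
    tf.keras.layers.Embedding(vocab_size, embed_dim),  # GloVe weights can be loaded here
    tf.keras.layers.LSTM(64),                          # reads the whole sequence
    tf.keras.layers.Dense(1, activation="sigmoid"),    # positive/negative sentiment
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```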
The aim of this exercise is to learn how to build unsupervised word embeddings using the Word2Vec Skip-Gram method and implement recurrent neural networks (RNNs) for text generation using Harry Potter books as our dataset.
Core Concepts
- 🧠 Word2Vec Skip-Gram model for creating word embeddings
- 📚 Harry Potter corpus for training word embeddings
- 🔤 Analyzing word relationships in embedding space
- ⚡ Text generation using character-based RNNs
- 📝 Creating Harry Potter style stories with generative models
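A minimal character-level RNN for the text-generation part; the character vocabulary size is an assumption, and the training text would be the book corpus:

```python
import tensorflow as tf

vocab_size = 80  # assumed number of distinct characters in the corpus
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(None,)),            # sequence of character ids
    tf.keras.layers.Embedding(vocab_size, 64),
    tf.keras.layers.GRU(256, return_sequences=True), # predict at every timestep
    tf.keras.layers.Dense(vocab_size),               # logits over the next character
])
model.compile(optimizer="adam",
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True))
# Generation loop: feed a seed string, sample a character from the logits, append, repeat.
```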
The aim of this exercise is to learn how to implement and utilize attention mechanisms in deep learning models, focusing on how these techniques allow models to selectively focus on the most relevant parts of input data.
Core Concepts
- 🧠 Attention mechanism fundamentals and mathematical foundations
- 🔍 Types of attention mechanisms (Self-attention, Dot-product)
- 📊 Applications in natural language processing
- ⚙️ Implementation of attention-based models
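A minimal sketch of the core computation, scaled dot-product attention, softmax(QKᵀ/√d_k)·V; the tensor shapes are illustrative:

```python
import tensorflow as tf

def scaled_dot_product_attention(q, k, v):
    d_k = tf.cast(tf.shape(k)[-1], tf.float32)
    scores = tf.matmul(q, k, transpose_b=True) / tf.sqrt(d_k)  # query-key similarities
    weights = tf.nn.softmax(scores, axis=-1)                   # attention distribution
    return tf.matmul(weights, v), weights                      # weighted sum of values

# Self-attention: queries, keys and values all come from the same sequence.
x = tf.random.normal((1, 5, 8))  # (batch, sequence length, model dim)
out, attn = scaled_dot_product_attention(x, x, x)
print(out.shape, attn.shape)  # (1, 5, 8) (1, 5, 5)
```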
This exercise focuses on implementing and utilizing transformer models using the HuggingFace library in conjunction with TensorFlow 2. We'll explore how to leverage pre-trained models for natural language processing tasks.
Core Concepts
- 🤗 HuggingFace library and its ecosystem
- 🔧 Integration of HuggingFace models with TensorFlow 2
- 📊 Fine-tuning pre-trained models for specific NLP tasks
- 🚀 Practical applications of transformer models
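A minimal sketch of loading a pre-trained model through the HuggingFace `pipeline` API with the TensorFlow backend; the task and the default model it downloads are illustrative, and TF support assumes a transformers 4.x release:

```python
from transformers import pipeline

# framework="tf" requests the TensorFlow implementation of the default model
classifier = pipeline("sentiment-analysis", framework="tf")
print(classifier("I really enjoyed this exercise!"))
# e.g. [{'label': 'POSITIVE', 'score': 0.99...}]
```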
This exercise focuses on implementing Convolutional Neural Networks (CNNs) for object localization tasks and exploring the powerful YOLOv8 architecture. We'll learn how to detect and precisely locate objects in images and videos, then apply these concepts using a state-of-the-art model in real-world scenarios.
Core Concepts
- 🖼️ Object localization fundamentals and bounding box regression
- 🧠 CNN architectures for effective feature extraction and object detection
- 📦 YOLOv8 model architecture and capabilities
- 🔍 Practical implementation of object localization in real-world applications
- 🛠️ Training and fine-tuning YOLOv8 on custom datasets
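A minimal sketch of running a pre-trained YOLOv8 model with the `ultralytics` package; the image path and the training config file are hypothetical:

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")        # small pre-trained detection model
results = model("example.jpg")    # hypothetical input image
for box in results[0].boxes:
    print(box.cls, box.conf, box.xyxy)  # class id, confidence, bounding box corners

# Fine-tuning on a custom dataset, given a YOLO-format data config (hypothetical file):
# model.train(data="custom.yaml", epochs=50)
```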
This exercise focuses on time series forecasting using deep learning techniques. We will apply these methods to predict natural gas consumption, building upon the pre-processed dataset from previous exercises.
Core Concepts
- 📈 Time series forecasting with deep learning
- ⛽ Natural gas consumption prediction
- 📊 Utilizing pre-processed time series datasets
- 🧠 Implementing deep learning models for time series data
- 🛠️ Practical application of deep learning to real-world forecasting problems
The raw dataset is available at ai.vsb.cz, and we will be using a pre-processed version for this exercise.
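A minimal sketch of turning a series into supervised (window, next value) pairs and fitting a small model; the synthetic series, window length, and horizon are assumptions standing in for the pre-processed gas-consumption data:

```python
import numpy as np
import tensorflow as tf

series = np.sin(np.linspace(0, 100, 1000)).astype("float32")  # stand-in for the real series
window, horizon = 24, 1  # predict the next value from the previous 24

ds = tf.keras.utils.timeseries_dataset_from_array(
    data=series[:-horizon],                  # sliding input windows
    targets=series[window:].reshape(-1, 1),  # the value right after each window
    sequence_length=window,
    batch_size=32,
)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(window,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(horizon),
])
model.compile(optimizer="adam", loss="mse")
model.fit(ds, epochs=2, verbose=0)
```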
- You can use [Kaggle](https://www.kaggle.com/) as an alternative to Google Colab
- 📌 Beware that the two platforms use different configurations and library versions, so full compatibility cannot always be guaranteed
- For importing the Jupyter notebook perform these steps:
  - Click on the `+` sign (or `Create`) button in the left panel and select `New Notebook`
  - In the new notebook select `File > Import notebook > Link` and paste the URL of the Jupyter notebook from GitHub
  - In the `Notebook` sidebar (right side, it can be expanded through the small arrow icon in the bottom right corner) use these Session options:
    - Accelerator: `GPU T4x2` or `GPU P100`
    - Persistence: `Variables and Files`
- Own datasets can be uploaded using the `Notebook` sidebar as well - `Input` section
  - Click on `Upload > New dataset > File` and Drag&Drop your file(s)
  - Set the Dataset title and click on `Create`
    - 💡 Zip archives are automatically extracted
  - You can copy the path of a file using the copy icon when you hover over the filename
    - The usual path is in the format `/kaggle/input/<dataset_name>/<filename>`
- 💡 There is a problem with using the hdf5 format in the `filepath` parameter of `ModelCheckpoint`
  - Use the filename `best.weights.h5` instead (hdf5 and h5 are the same format) - see the sketch after this list
  - 💡 Remember to change the path in the `load_weights()` function as well!
- You can download your `.ipynb` notebooks using the `File > Download notebook` option
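A minimal sketch of the checkpoint setup described above; the monitored metric is an illustrative choice:

```python
import tensorflow as tf

checkpoint = tf.keras.callbacks.ModelCheckpoint(
    filepath="best.weights.h5",   # .weights.h5 instead of .hdf5 (same format)
    save_weights_only=True,
    save_best_only=True,
    monitor="val_loss",           # assumes validation data is passed to fit()
)
# model.fit(..., callbacks=[checkpoint])
# model.load_weights("best.weights.h5")  # must match the checkpoint filename
```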
- Create a `venv` virtual environment
```shell
python -m venv venv
```
- Activate `venv` in Windows
```shell
.\venv\Scripts\Activate.ps1
```
- Activate `venv` in Linux
```shell
source venv/bin/activate
```
- Works for tensorflow 2.18.0
```shell
pip install jupyter "jupyterlab>=3" "ipywidgets>=7.6"

# Basic environment setup
pip install pandas matplotlib requests seaborn scipy scikit-learn tqdm tensorflow[and-cuda]

# Advanced environment setup
pip install pandas matplotlib requests seaborn scipy scikit-learn optuna scikit-image pyarrow opencv-python plotly==5.18.0 tensorflow[and-cuda] nltk textblob transformers datasets huggingface_hub evaluate
```
- Check that TensorFlow can see your GPU
```shell
python3 -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"
```
- It should print a list of all your GPUs
  - 💡 It is not working if an empty list `[]` is printed