- Implement a perceptron in C/CUDA (a minimal C sketch appears after this list)
- Implement a one-layer perceptron in C/CUDA
- Implement a multi-layer perceptron (MLP)
- Implement loss functions (see the loss-function sketch below)
- Implement gradient descent and backpropagation (see the XOR training sketch below)
- Play with different gradient descent variants and optimizers (SGD, Adam, etc.; see the Adam sketch below)
- Play with gradient descent batching (full-batch vs. mini-batch vs. stochastic updates)
- Play with regularization, early stopping, dropout, etc. (see the dropout sketch below)
- Investigate the effects of non-linear activation functions and the reasoning behind the usage of tanh, ReLU, etc. (see the activation sketch below)
- Investigate the importance and performance effects of the number of layers in the neural network and the number of perceptrons per layer, including the effect of varying the number of perceptrons across layers
- Investigate the effects and importance of different loss functions, such as cross-entropy loss (with softmax), empirical loss, etc.
- Loss functions, gradient descent, regularization
- Backpropagation, activation functions, initialization
- Matrix operations using PyTorch tensors and a manual gradient calculation exercise
- Build a simple MLP from scratch in Python and compare with PyTorch's autograd system
- Build neural network components from scratch in C:
  - Matrix operations with SIMD optimization (see the AVX sketch below)
  - Automatic differentiation system (see the tape-based sketch below)
  - Basic CUDA kernels for key operations
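
The sketches that follow are minimal C illustrations of the items above, not reference implementations; all names, sizes, and hyperparameters are arbitrary choices for the demos. First, a single perceptron trained on the AND function with the classic perceptron learning rule:

```c
#include <stdio.h>

/* Toy example: one perceptron with a step activation, trained on AND.
   Dataset, learning rate, and epoch count are illustrative. */
#define N_INPUTS  2
#define N_SAMPLES 4
#define EPOCHS    20
#define LR        0.1f

static int predict(const float w[N_INPUTS], float b, const float x[N_INPUTS]) {
    float z = b;
    for (int i = 0; i < N_INPUTS; i++) z += w[i] * x[i];
    return z > 0.0f ? 1 : 0;                 /* step activation */
}

int main(void) {
    float X[N_SAMPLES][N_INPUTS] = {{0,0},{0,1},{1,0},{1,1}};
    int   y[N_SAMPLES]           = {0, 0, 0, 1};   /* AND truth table */
    float w[N_INPUTS] = {0}, b = 0;

    for (int e = 0; e < EPOCHS; e++) {
        for (int s = 0; s < N_SAMPLES; s++) {
            int err = y[s] - predict(w, b, X[s]);  /* -1, 0, or +1 */
            for (int i = 0; i < N_INPUTS; i++) w[i] += LR * err * X[s][i];
            b += LR * err;
        }
    }
    for (int s = 0; s < N_SAMPLES; s++)
        printf("%g AND %g -> %d\n", X[s][0], X[s][1], predict(w, b, X[s]));
    return 0;
}
```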
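
For the loss-function items, a sketch of two common choices: mean squared error and softmax cross-entropy against a one-hot label (compile with `-lm`). Subtracting the max logit keeps `expf` numerically stable:

```c
#include <math.h>
#include <stdio.h>

/* Mean squared error over n outputs. */
float mse(const float *pred, const float *target, int n) {
    float s = 0.0f;
    for (int i = 0; i < n; i++) {
        float d = pred[i] - target[i];
        s += d * d;
    }
    return s / n;
}

/* Softmax followed by cross-entropy against a one-hot label:
   loss = -log softmax(logits)[label]. */
float softmax_cross_entropy(const float *logits, int n, int label) {
    float mx = logits[0];
    for (int i = 1; i < n; i++) if (logits[i] > mx) mx = logits[i];
    float sum = 0.0f;
    for (int i = 0; i < n; i++) sum += expf(logits[i] - mx);
    return logf(sum) - (logits[label] - mx);
}

int main(void) {
    float pred[3] = {0.8f, 0.1f, 0.1f}, target[3] = {1, 0, 0};
    float logits[3] = {2.0f, 0.5f, -1.0f};
    printf("mse  = %f\n", mse(pred, target, 3));
    printf("xent = %f\n", softmax_cross_entropy(logits, 3, 0));
    return 0;
}
```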
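
For the gradient descent and backpropagation item, a 2-2-1 sigmoid MLP trained on XOR with per-sample (stochastic) updates. The architecture, learning rate, and epoch count are illustrative, and with only two hidden units a poor random init can occasionally stall:

```c
#include <math.h>
#include <stdio.h>
#include <stdlib.h>

static float sigmoidf(float z) { return 1.0f / (1.0f + expf(-z)); }
static float frand(void)       { return (float)rand() / RAND_MAX - 0.5f; }

int main(void) {
    float X[4][2] = {{0,0},{0,1},{1,0},{1,1}};
    float Y[4]    = {0, 1, 1, 0};
    float w1[2][2], b1[2], w2[2], b2 = 0;
    const float lr = 0.5f;

    for (int i = 0; i < 2; i++) {          /* small random init */
        b1[i] = 0; w2[i] = frand();
        for (int j = 0; j < 2; j++) w1[i][j] = frand();
    }

    for (int epoch = 0; epoch < 10000; epoch++) {
        for (int s = 0; s < 4; s++) {
            /* forward pass */
            float h[2];
            for (int i = 0; i < 2; i++)
                h[i] = sigmoidf(w1[i][0]*X[s][0] + w1[i][1]*X[s][1] + b1[i]);
            float out = sigmoidf(w2[0]*h[0] + w2[1]*h[1] + b2);

            /* backward pass: dL/dz for squared error + sigmoid output */
            float d_out = (out - Y[s]) * out * (1 - out);
            for (int i = 0; i < 2; i++) {
                float d_h = d_out * w2[i] * h[i] * (1 - h[i]); /* before w2 update */
                w2[i] -= lr * d_out * h[i];
                for (int j = 0; j < 2; j++)
                    w1[i][j] -= lr * d_h * X[s][j];
                b1[i] -= lr * d_h;
            }
            b2 -= lr * d_out;
        }
    }
    for (int s = 0; s < 4; s++) {
        float h[2];
        for (int i = 0; i < 2; i++)
            h[i] = sigmoidf(w1[i][0]*X[s][0] + w1[i][1]*X[s][1] + b1[i]);
        printf("%g XOR %g -> %.3f\n", X[s][0], X[s][1],
               sigmoidf(w2[0]*h[0] + w2[1]*h[1] + b2));
    }
    return 0;
}
```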
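
For the optimizer item, one possible Adam update over a flat parameter vector; the struct and function names are assumptions, and the caller is expected to allocate and zero the moment buffers:

```c
#include <math.h>

/* Sketch of the Adam update rule for one parameter vector. */
typedef struct {
    float lr, beta1, beta2, eps;
    float *m, *v;   /* first/second moment estimates, zero-initialized */
    int   t;        /* timestep, starts at 0 */
} Adam;

void adam_step(Adam *opt, float *params, const float *grads, int n) {
    opt->t += 1;
    float c1 = 1.0f - powf(opt->beta1, (float)opt->t);  /* bias corrections */
    float c2 = 1.0f - powf(opt->beta2, (float)opt->t);
    for (int i = 0; i < n; i++) {
        opt->m[i] = opt->beta1 * opt->m[i] + (1 - opt->beta1) * grads[i];
        opt->v[i] = opt->beta2 * opt->v[i] + (1 - opt->beta2) * grads[i] * grads[i];
        float m_hat = opt->m[i] / c1;
        float v_hat = opt->v[i] / c2;
        params[i] -= opt->lr * m_hat / (sqrtf(v_hat) + opt->eps);
    }
}
```

The commonly cited defaults from the original Adam paper are lr = 1e-3, beta1 = 0.9, beta2 = 0.999, eps = 1e-8.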
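
For the dropout item, a sketch of inverted dropout: at train time each activation is zeroed with probability p and survivors are scaled by 1/(1-p) so the expected activation is unchanged, which means inference needs no change. The function name and mask layout are assumptions:

```c
#include <stdlib.h>

/* Inverted dropout forward pass; assumes 0 <= p < 1.
   mask records which units were kept, for reuse in the backward pass. */
void dropout_forward(float *acts, unsigned char *mask, int n, float p) {
    for (int i = 0; i < n; i++) {
        mask[i] = ((float)rand() / RAND_MAX) >= p;   /* 1 = keep */
        acts[i] = mask[i] ? acts[i] / (1.0f - p) : 0.0f;
    }
}
```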
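
For the activation-function item, the usual candidates with their derivatives written in terms of the activation output, which is the form backprop consumes; the comments note why saturation matters:

```c
#include <math.h>

/* ReLU: cheap, non-saturating for z > 0, but units can "die" at z <= 0
   where the gradient is exactly zero. */
float relu(float z)      { return z > 0 ? z : 0; }
float relu_grad(float z) { return z > 0 ? 1.0f : 0.0f; }

/* tanhf is in <math.h>; its derivative is 1 - tanh(z)^2,
   so given the output a = tanh(z): */
float tanh_grad_from_out(float a) { return 1.0f - a * a; }

/* Sigmoid saturates for large |z|, so its gradient a*(1-a) vanishes;
   this is one reason ReLU is preferred in deep networks. */
float sigmoid(float z)               { return 1.0f / (1.0f + expf(-z)); }
float sigmoid_grad_from_out(float a) { return a * (1.0f - a); }
```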
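
For the SIMD matrix-operations item, a row-major matmul whose inner loop is vectorized over columns of B with AVX intrinsics. It assumes an x86 CPU with AVX (compile with `-mavx`) and, for brevity, that n is a multiple of 8:

```c
#include <immintrin.h>  /* x86 AVX intrinsics */

/* C = A * B for row-major matrices A (m x k), B (k x n), C (m x n).
   Eight columns of C are accumulated at once in one AVX register. */
void matmul_avx(const float *A, const float *B, float *C,
                int m, int k, int n) {
    for (int i = 0; i < m; i++) {
        for (int j = 0; j < n; j += 8) {
            __m256 acc = _mm256_setzero_ps();
            for (int p = 0; p < k; p++) {
                __m256 a = _mm256_set1_ps(A[i * k + p]);    /* broadcast A[i][p] */
                __m256 b = _mm256_loadu_ps(&B[p * n + j]);  /* 8 floats of row p */
                acc = _mm256_add_ps(acc, _mm256_mul_ps(a, b));
            }
            _mm256_storeu_ps(&C[i * n + j], acc);
        }
    }
}
```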
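
For the automatic-differentiation item, a tape-based reverse-mode sketch limited to add and multiply, with no bounds checks or memory management. Because nodes are appended in evaluation order, walking the tape backwards visits them in reverse topological order:

```c
#include <stdio.h>

#define MAX_NODES 64

/* Each node stores its value, accumulated gradient, parent indices,
   and the local partial derivatives w.r.t. each parent. */
typedef struct {
    float val, grad;
    int   p0, p1;        /* parent indices, -1 if none */
    float d0, d1;        /* d(val)/d(parent) */
} Node;

static Node tape[MAX_NODES];
static int  n_nodes = 0;

static int leaf(float v) {
    tape[n_nodes] = (Node){v, 0, -1, -1, 0, 0};
    return n_nodes++;
}
static int add(int a, int b) {
    tape[n_nodes] = (Node){tape[a].val + tape[b].val, 0, a, b, 1, 1};
    return n_nodes++;
}
static int mul(int a, int b) {
    tape[n_nodes] = (Node){tape[a].val * tape[b].val, 0, a, b,
                           tape[b].val, tape[a].val};
    return n_nodes++;
}
static void backward(int out) {
    tape[out].grad = 1.0f;
    for (int i = out; i >= 0; i--) {   /* reverse topological order */
        if (tape[i].p0 >= 0) tape[tape[i].p0].grad += tape[i].d0 * tape[i].grad;
        if (tape[i].p1 >= 0) tape[tape[i].p1].grad += tape[i].d1 * tape[i].grad;
    }
}

int main(void) {
    int x = leaf(2.0f), y = leaf(3.0f);
    int z = add(mul(x, x), mul(x, y));     /* z = x^2 + x*y */
    backward(z);
    printf("z = %g, dz/dx = %g, dz/dy = %g\n",
           tape[z].val, tape[x].grad, tape[y].grad);  /* 10, 7, 2 */
    return 0;
}
```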