sheharyaar/deep-learning

Toy implementation of neural nets in C/CUDA and solutions to the Intro to Deep Learning (MIT) course in Python.

TODO

Week 1

  • Implement a perceptron in C/CUDA (see the perceptron sketch after this list)
  • Implement a single-layer perceptron in C/CUDA
  • Implement a multi-layer perceptron
  • Implement loss functions (see the loss-function sketch after this list)
  • Implement gradient descent and backpropagation
  • Play with different gradient descent variants (SGD, Adam, etc.; see the optimizer sketch after this list)
  • Play with gradient descent batching (full-batch vs. mini-batch vs. stochastic updates)
  • Play with regularization, early stopping, dropout, etc.
  • Investigate the effects of non-linear activation functions and the reasoning behind the usage of tanh, ReLU, etc. (see the activation sketch after this list)
  • Investigate how the number of layers and the number of perceptrons per layer affect accuracy and performance, including the effect of using different widths in different layers
  • Investigate the effects and importance of different loss functions such as cross-entropy loss (softmax), empirical loss, etc.
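
As a starting point for the perceptron items above, here is a minimal sketch of a step-activation perceptron and its classic learning rule, trained on logical AND. It is only a sketch: the names (perceptron_predict, perceptron_update, N_INPUTS) are illustrative and not taken from this repository.

```c
#include <stdio.h>

#define N_INPUTS 2

/* Step-activation perceptron: returns 1 if w.x + b > 0, else 0. */
static int perceptron_predict(const float w[N_INPUTS], float b,
                              const float x[N_INPUTS]) {
    float sum = b;
    for (int i = 0; i < N_INPUTS; i++)
        sum += w[i] * x[i];
    return sum > 0.0f ? 1 : 0;
}

/* Classic perceptron learning rule: w += lr * (target - prediction) * x. */
static void perceptron_update(float w[N_INPUTS], float *b,
                              const float x[N_INPUTS], int target, float lr) {
    int pred = perceptron_predict(w, *b, x);
    float err = (float)(target - pred);
    for (int i = 0; i < N_INPUTS; i++)
        w[i] += lr * err * x[i];
    *b += lr * err;
}

int main(void) {
    /* Learn logical AND from its four input/output pairs. */
    float x[4][N_INPUTS] = {{0, 0}, {0, 1}, {1, 0}, {1, 1}};
    int   t[4]           = { 0,      0,      0,      1    };
    float w[N_INPUTS] = {0}, b = 0.0f;

    for (int epoch = 0; epoch < 20; epoch++)
        for (int i = 0; i < 4; i++)
            perceptron_update(w, &b, x[i], t[i], 0.1f);

    for (int i = 0; i < 4; i++)
        printf("%g AND %g -> %d\n", x[i][0], x[i][1],
               perceptron_predict(w, b, x[i]));
    return 0;
}
```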
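
For the activation-function item, a small sketch of tanh and ReLU together with their derivatives, which is what backpropagation consumes. tanh saturates for large |z| (its gradient 1 - tanh(z)^2 vanishes there), while ReLU keeps a unit gradient for z > 0, which is one common reason it is preferred in deeper networks. Function names are illustrative.

```c
#include <math.h>
#include <stdio.h>

static float relu(float z)      { return z > 0.0f ? z : 0.0f; }
static float relu_grad(float z) { return z > 0.0f ? 1.0f : 0.0f; }
static float tanh_act(float z)  { return tanhf(z); }
static float tanh_grad(float z) { float t = tanhf(z); return 1.0f - t * t; }

int main(void) {
    /* Compare activations and their gradients on a few inputs. */
    for (float z = -2.0f; z <= 2.0f; z += 1.0f)
        printf("z=% .0f  tanh=% .3f (grad %.3f)  relu=% .3f (grad %.3f)\n",
               z, tanh_act(z), tanh_grad(z), relu(z), relu_grad(z));
    return 0;
}
```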
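
For the loss-function items, a sketch of mean squared error and a numerically stable softmax cross-entropy. Shifting the logits by their maximum before exponentiating avoids overflow without changing the result; the loss for a one-hot target class is then -log softmax(logits)[target]. Names are illustrative.

```c
#include <math.h>
#include <stdio.h>

/* Mean squared error over n outputs. */
static float mse_loss(const float *pred, const float *target, int n) {
    float sum = 0.0f;
    for (int i = 0; i < n; i++) {
        float d = pred[i] - target[i];
        sum += d * d;
    }
    return sum / (float)n;
}

/* Numerically stable softmax + cross-entropy against a one-hot class. */
static float softmax_xent_loss(const float *logits, int n, int target_class) {
    float max = logits[0];
    for (int i = 1; i < n; i++)
        if (logits[i] > max) max = logits[i];
    float sum = 0.0f;
    for (int i = 0; i < n; i++)
        sum += expf(logits[i] - max);
    /* -log softmax(logits)[target_class] */
    return logf(sum) - (logits[target_class] - max);
}

int main(void) {
    float logits[3] = {2.0f, 0.5f, -1.0f};
    printf("xent = %f\n", softmax_xent_loss(logits, 3, 0));

    float p[2] = {0.9f, 0.1f}, t[2] = {1.0f, 0.0f};
    printf("mse  = %f\n", mse_loss(p, t, 2));
    return 0;
}
```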
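
For gradient descent and its variants, a sketch of a plain SGD step next to an Adam step with the standard bias-corrected moment estimates (the defaults b1 = 0.9, b2 = 0.999, eps = 1e-8 follow the Adam paper). Both are exercised on the toy objective (theta - 3)^2; names are illustrative.

```c
#include <math.h>
#include <stdio.h>

/* Plain gradient descent step: theta -= lr * grad. */
static void sgd_step(float *theta, const float *grad, int n, float lr) {
    for (int i = 0; i < n; i++)
        theta[i] -= lr * grad[i];
}

/* One Adam step; m and v are persistent first/second-moment buffers,
 * t is the 1-based step count used for bias correction. */
static void adam_step(float *theta, const float *grad, float *m, float *v,
                      int n, int t, float lr, float b1, float b2, float eps) {
    for (int i = 0; i < n; i++) {
        m[i] = b1 * m[i] + (1.0f - b1) * grad[i];
        v[i] = b2 * v[i] + (1.0f - b2) * grad[i] * grad[i];
        float mhat = m[i] / (1.0f - powf(b1, (float)t));
        float vhat = v[i] / (1.0f - powf(b2, (float)t));
        theta[i] -= lr * mhat / (sqrtf(vhat) + eps);
    }
}

int main(void) {
    /* Minimize (theta - 3)^2 starting from theta = 0 with each optimizer. */
    float theta = 0.0f;
    for (int t = 1; t <= 200; t++) {
        float grad = 2.0f * (theta - 3.0f);
        sgd_step(&theta, &grad, 1, 0.1f);
    }
    printf("theta after SGD : %f (target 3.0)\n", theta);

    float m = 0.0f, v = 0.0f;
    theta = 0.0f;
    for (int t = 1; t <= 200; t++) {
        float grad = 2.0f * (theta - 3.0f);
        adam_step(&theta, &grad, &m, &v, 1, t, 0.1f, 0.9f, 0.999f, 1e-8f);
    }
    printf("theta after Adam: %f (target 3.0)\n", theta);
    return 0;
}
```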

Targets

  • Loss functions, gradient descent, regularization
  • Backpropagation, activation functions, initialization
  • Matrix operations using PyTorch tensors and a manual gradient calculation exercise
  • Build a simple MLP from scratch in Python and compare it with PyTorch's autograd system
  • Build neural network components from scratch in C:
    • Matrix operations with SIMD optimization (see the SIMD sketch after this list)
    • Automatic differentiation system (see the tape sketch after this list)
    • Basic CUDA kernels for key operations
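
For the SIMD item, a sketch of a dot product using SSE intrinsics on x86: four floats are multiplied and accumulated per iteration, with a scalar loop for the tail. dot_simd is an illustrative name, not an existing function in this repository.

```c
#include <stdio.h>
#include <xmmintrin.h>  /* SSE intrinsics (x86) */

/* Dot product, 4 float lanes at a time, scalar tail for leftovers. */
static float dot_simd(const float *a, const float *b, int n) {
    __m128 acc = _mm_setzero_ps();
    int i = 0;
    for (; i + 4 <= n; i += 4)
        acc = _mm_add_ps(acc, _mm_mul_ps(_mm_loadu_ps(a + i),
                                         _mm_loadu_ps(b + i)));
    float lanes[4];
    _mm_storeu_ps(lanes, acc);
    float sum = lanes[0] + lanes[1] + lanes[2] + lanes[3];
    for (; i < n; i++)
        sum += a[i] * b[i];
    return sum;
}

int main(void) {
    float a[6] = {1, 2, 3, 4, 5, 6};
    float b[6] = {6, 5, 4, 3, 2, 1};
    printf("dot = %f\n", dot_simd(a, b, 6));  /* expected 56 */
    return 0;
}
```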
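
For the automatic-differentiation item, a sketch of a tiny reverse-mode tape over scalars: every operation records its parent indices and local partial derivatives, and backward() sweeps the tape in reverse, accumulating gradients into the parents. This is one common design, not necessarily what this repository will end up with; all names are illustrative.

```c
#include <math.h>
#include <stdio.h>

#define MAX_NODES 64

typedef struct {
    float val, grad;
    int p0, p1;   /* parent indices, -1 if unused */
    float d0, d1; /* local partials d(node)/d(parent) */
} Node;

static Node tape[MAX_NODES];
static int n_nodes = 0;

static int leaf(float v) {
    tape[n_nodes] = (Node){v, 0.0f, -1, -1, 0.0f, 0.0f};
    return n_nodes++;
}
static int add(int a, int b) {
    tape[n_nodes] = (Node){tape[a].val + tape[b].val, 0.0f, a, b, 1.0f, 1.0f};
    return n_nodes++;
}
static int mul(int a, int b) {
    tape[n_nodes] = (Node){tape[a].val * tape[b].val, 0.0f,
                           a, b, tape[b].val, tape[a].val};
    return n_nodes++;
}
static int tanh_node(int a) {
    float t = tanhf(tape[a].val);
    tape[n_nodes] = (Node){t, 0.0f, a, -1, 1.0f - t * t, 0.0f};
    return n_nodes++;
}

/* Reverse sweep: seed d(out)/d(out) = 1, push gradients to parents. */
static void backward(int out) {
    tape[out].grad = 1.0f;
    for (int i = out; i >= 0; i--) {
        if (tape[i].p0 >= 0) tape[tape[i].p0].grad += tape[i].grad * tape[i].d0;
        if (tape[i].p1 >= 0) tape[tape[i].p1].grad += tape[i].grad * tape[i].d1;
    }
}

int main(void) {
    /* y = tanh(w * x + b): one neuron, differentiated w.r.t. w and b. */
    int w = leaf(0.5f), x = leaf(2.0f), b = leaf(-0.3f);
    int y = tanh_node(add(mul(w, x), b));
    backward(y);
    printf("y = %f  dy/dw = %f  dy/db = %f\n",
           tape[y].val, tape[w].grad, tape[b].grad);
    return 0;
}
```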
