This repository contains the code and resources to replicate the results of the paper "Random feedback weights support learning in deep neural networks" by Lillicrap et al., which proposes Feedback Alignment (FA) as a biologically plausible alternative to Backpropagation.
The repository is structured as follows:

```
.
├── paper.pdf                        # The original paper by Lillicrap et al.
├── report.pdf                       # The final project report
├── task1_linear_experiment.ipynb    # Task 1 - Linear Approximation
├── task2_MNIST_experiment.ipynb     # Task 2 - MNIST Classification
├── task3_nonlinear_experiment.ipynb # Task 3 - Deep Networks
├── task4_sparsity_test.ipynb        # Task 4 - Testing Robustness/Sparsity
├── results/                         # Generated plots and figures
└── README.md                        # This file
```
The project is divided into four distinct experimental tasks. The first three replicate the core findings of Lillicrap et al., while the fourth introduces an experiment regarding biological plausibility.
- **Script:** `task1_linear_experiment.ipynb`
  - **Objective:** To verify the fundamental hypothesis that a network can learn using asymmetric, random feedback weights on a simple linear regression problem.
  - **Method:** We train a 30-20-10 linear network to approximate a fixed random linear target function.
  - **Key Metric:** We measure the alignment angle between the update vector prescribed by standard Backpropagation and the vector produced by Feedback Alignment. A decrease in this angle indicates that the forward weights are evolving to align with the fixed feedback matrix.
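The alignment-angle measurement can be sketched as follows. This is a minimal NumPy illustration with hypothetical dimensions and initializations, not the notebook's exact code: for one linear layer, it compares the hidden-layer update prescribed by Backpropagation (using `V.T`) with the one produced by Feedback Alignment (using a fixed random `B`).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions matching the 30-20-10 linear network.
n_in, n_hid, n_out = 30, 20, 10

W = rng.normal(scale=0.1, size=(n_hid, n_in))   # input -> hidden
V = rng.normal(scale=0.1, size=(n_out, n_hid))  # hidden -> output
B = rng.normal(scale=0.1, size=(n_hid, n_out))  # fixed random feedback

x = rng.normal(size=(n_in, 1))
y_target = rng.normal(size=(n_out, 1))

# Forward pass through the linear network.
h = W @ x
y = V @ h
e = y - y_target  # output error

# Hidden-layer error signal: BP uses the transposed forward weights,
# FA replaces them with the fixed random matrix B.
delta_bp = V.T @ e
delta_fa = B @ e

# Angle between the two prescribed updates to W.
dW_bp = (delta_bp @ x.T).ravel()
dW_fa = (delta_fa @ x.T).ravel()
cos = dW_bp @ dW_fa / (np.linalg.norm(dW_bp) * np.linalg.norm(dW_fa))
angle_deg = np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))
```

Tracking `angle_deg` over training is what reveals alignment: under FA it typically falls well below 90°, meaning the FA update points in a descent direction.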
- **Script:** `task2_MNIST_experiment.ipynb`
  - **Objective:** To demonstrate that Feedback Alignment scales to non-linear problems and real-world datasets.
  - **Method:** A 3-layer fully connected network (784-1000-10) with sigmoid activations is trained on the MNIST handwritten digits dataset.
  - **Comparison:** We compare three training regimes:
    - Backpropagation (BP): the gold standard (symmetric weights).
    - Feedback Alignment (FA): the proposed method (fixed random feedback).
    - Shallow Learning: only the output layer is trained (baseline).
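A single FA training step on this architecture might look like the sketch below. It uses toy random data in place of MNIST, and the learning rate, weight scales, and batch size are illustrative assumptions, not the notebook's settings; the only FA-specific change from BP is that the fixed matrix `B` replaces `W2.T` in the backward pass.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Sizes for the 784-1000-10 network; batch and lr are illustrative.
n_in, n_hid, n_out, batch = 784, 1000, 10, 32
lr = 0.1

W1 = rng.normal(scale=0.01, size=(n_hid, n_in))
W2 = rng.normal(scale=0.01, size=(n_out, n_hid))
B = rng.normal(scale=0.01, size=(n_hid, n_out))  # fixed random feedback

X = rng.normal(size=(n_in, batch))                    # stand-in for MNIST images
Y = np.eye(n_out)[:, rng.integers(0, n_out, batch)]   # one-hot stand-in labels

# Forward pass with sigmoid activations.
h = sigmoid(W1 @ X)
y = sigmoid(W2 @ h)
e = y - Y

# Backward pass: FA propagates the error through B instead of W2.T.
delta_out = e * y * (1 - y)
delta_hid = (B @ delta_out) * h * (1 - h)

# Gradient-descent updates (averaged over the batch).
W2 -= lr * delta_out @ h.T / batch
W1 -= lr * delta_hid @ X.T / batch
```

Swapping `B @ delta_out` for `W2.T @ delta_out` recovers standard BP, and freezing `W1` while updating only `W2` gives the shallow-learning baseline.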
- **Script:** `task3_nonlinear_experiment.ipynb`
  - **Objective:** To show that Feedback Alignment supports deep learning and exploits the representational power of added depth.
  - **Method:** We compare the convergence speed and final error floor (Normalized Squared Error) of a 3-layer network versus a 4-layer network on a complex non-linear function-fitting task.
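Assuming the usual definition of the normalized squared error (squared error divided by the squared norm of the targets), the metric can be computed as:

```python
import numpy as np

def normalized_squared_error(y_pred, y_true):
    """NSE = ||y_pred - y_true||^2 / ||y_true||^2.

    Assumed definition: 0 means a perfect fit; 1 matches the error of
    predicting all zeros.
    """
    return np.sum((y_pred - y_true) ** 2) / np.sum(y_true ** 2)
```

Because the NSE is scale-free, it lets the 3-layer and 4-layer error floors be compared directly across runs.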
- **Script:** `task4_sparsity_test.ipynb`
  - **Objective:** To stress-test the algorithm against biological constraints, specifically the sparsity of neural connections.
  - **Method:** We apply a sparsity mask to the fixed feedback matrix $B$, simulating "dead" or missing connections, and evaluate performance with 0%, 50%, 95%, and 99% of feedback connections removed.
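The sparsity mask could be implemented along these lines. `sparsify` is a hypothetical helper for illustration, not necessarily the notebook's function; it zeroes a given fraction of the fixed feedback matrix once, before training begins.

```python
import numpy as np

rng = np.random.default_rng(2)

def sparsify(B, drop_frac, rng):
    """Zero out roughly `drop_frac` of the entries of the feedback matrix B.

    Each entry is dropped independently, simulating "dead" or missing
    feedback connections; the mask is applied once and B stays fixed.
    """
    mask = rng.random(B.shape) >= drop_frac
    return B * mask

# Hypothetical feedback matrix, masked at the four tested sparsity levels.
B = rng.normal(size=(20, 10))
masked = {frac: sparsify(B, frac, rng) for frac in (0.0, 0.5, 0.95, 0.99)}
```

Since the surviving entries are untouched, any remaining alignment must come from the forward weights adapting to the sparse `B`, which is exactly what this task probes.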