This repository contains a handwritten-digit classifier implemented in C++ from scratch. It is an inference-only project: pretrained weights and biases are loaded from binary files, then a fixed 4-layer multilayer perceptron scores a 28 x 28 image of a digit.
- `Matrix`: a manually managed matrix type with copy semantics, binary input, transpose, vectorize, multiplication, and indexing operators.
- `Activation`: implements ReLU and Softmax.
- `Dense`: models a fully connected layer with an activation function.
- `MlpNetwork`: chains four dense layers together to classify digits 0-9.
- `main.cpp`: a small CLI wrapper that loads the model files, reads one image matrix, and prints the prediction, or inspects a raw image binary without model weights.
There is no training pipeline, dataset downloader, or image preprocessing stage in this repo.
- Matrix storage and arithmetic.
- Operator overloads for matrix access and algebra.
- Dense layer composition.
- ReLU and Softmax activations.
- A fixed architecture multilayer perceptron for digit classification.
The network is hard-coded to the following layer sizes:
```
784 -> 128 -> 64 -> 20 -> 10
```
The forward pass is:
```
28 x 28 image -> vectorize -> Dense + ReLU -> Dense + ReLU -> Dense + ReLU -> Dense + Softmax -> argmax
```
Requirements:
- A C++17-capable compiler such as `g++` or `clang++`
- `make`
Build the executable:
```shell
make
```

Run the built-in checks:

```shell
make test
```

Run the classifier:

```shell
./mlpnetwork w1.bin w2.bin w3.bin w4.bin b1.bin b2.bin b3.bin b4.bin [image.bin]
```

If `image.bin` is omitted, the program prompts for the image path.
Preview a 28 x 28 image binary without loading model weights:
```shell
./mlpnetwork --inspect image.bin
```

`--preview` is accepted as an alias for the same mode.
You can also use the Makefile shortcut:
```shell
make preview ARGS=image.bin
```

All model files are raw binary float32 matrices, not text files.
- `w1.bin`: 128 x 784
- `w2.bin`: 64 x 128
- `w3.bin`: 20 x 64
- `w4.bin`: 10 x 20
- `b1.bin`: 128 x 1
- `b2.bin`: 64 x 1
- `b3.bin`: 20 x 1
- `b4.bin`: 10 x 1
- `image.bin`: 28 x 28
The image should already be normalized to the format expected by the trained model. No preprocessing is bundled here.
The CLI prints:

```
Prediction: <digit>
Confidence: <probability>
```
- `Matrix.h` / `Matrix.cpp`: matrix implementation and binary I/O.
- `Activation.h` / `Activation.cpp`: ReLU and Softmax.
- `Dense.h` / `Dense.cpp`: fully connected layer abstraction.
- `MlpNetwork.h` / `MlpNetwork.cpp`: the fixed 4-layer inference network.
- `main.cpp`: command-line entrypoint.
- `tests/test_digit_recognition.cpp`: lightweight checks for the deterministic pieces.
- `ImagePreview.cpp` / `ImagePreview.h`: local inspection and ASCII preview helpers.
- `Makefile`: build, test, run, preview, and clean targets.
- Build: `make`
- Test: `make test`
- Run: `./mlpnetwork ...`
- Preview: `./mlpnetwork --inspect image.bin`
- Clean: `make clean`
The scoring engine lives in `MlpNetwork.cpp`. The reusable matrix and layer logic lives in `Matrix.cpp`, `Activation.cpp`, and `Dense.cpp`. The CLI wrapper lives in `main.cpp`.
- Manual memory management with copy construction and assignment.
- Operator overloading for a small algebraic type.
- Layered neural-network composition without a framework.
- Binary file I/O and simple CLI ergonomics.
- Input inspection tooling for raw image binaries without weights.
- Clear separation between math primitives and model orchestration.
- Replace the raw-pointer matrix storage with safer RAII containers.
- Bundle a tiny sample asset set (weights plus an image) so both the classifier and preview mode can be demonstrated end-to-end without external files.
- Add image preprocessing if the repo is meant to accept common image formats.
- Expand tests around exception cases and binary file parsing.
This repo intentionally stays small and educational. It is meant to show how the pieces of a neural network fit together in C++, not to compete with a production ML framework.