This project is a deliberate reconstruction of neural network architectures from their fundamental primitives, inspired by the Asimov Institute's "Neural Network Zoo" visualization. The intent is pedagogical: to rigorously explore the first principles of these models through constructive implementation. While leveraging performant numerical libraries (NumPy, Pandas), the architecture emphasizes modularity and explicit composition. Examine the codebase to observe this structure directly; contributions and critical analysis are welcomed. Clone or fork to inspect the underlying mechanics and extend as desired.
## Table of Contents

- [Overview](#overview)
- [Features](#features)
- [Getting Started](#getting-started)
- [Examples](#examples)
- [Project Structure](#project-structure)
- [Extending the Library](#extending-the-library)
- [Contributing](#contributing)
- [License](#license)
## Overview

This library provides:
- **Atomic Primitives** (`foundational_nn`): Base cells, layers, activation functions, and modifiers.
- **Modular Architectures** (`architecture_nn`): Dense, convolutional, recurrent, and pooling layers, `SequentialModel`, `GraphModel`, and common patterns.
- **Ready-to-Use Models** (`models_nn`): Perceptron (logic gates), Hopfield Network (associative memory), Markov Chain (text generation).
- **Interactive TUI**: A Rich-based terminal interface for guided experimentation and visualization.
All implementations emphasize clarity, extensibility, and educational value.
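To make the composition idea concrete, here is a self-contained NumPy sketch of what a sequential stack of layers boils down to. This is not the library's code; `Dense`, `relu`, and `sigmoid` below are illustrative stand-ins for the primitives and layers described above:

```python
import numpy as np

class Dense:
    """Illustrative dense layer: an affine map followed by an activation."""

    def __init__(self, n_in, n_out, activation):
        rng = np.random.default_rng(0)
        self.W = rng.normal(scale=0.1, size=(n_in, n_out))
        self.b = np.zeros(n_out)
        self.activation = activation

    def forward(self, x):
        return self.activation(x @ self.W + self.b)

def relu(z):
    return np.maximum(z, 0.0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# A sequential model is, at heart, an ordered list of layers
# folded over the input.
layers = [Dense(2, 4, relu), Dense(4, 1, sigmoid)]
x = np.array([0.5, -1.2])
for layer in layers:
    x = layer.forward(x)
print(x)  # output of the full stack, shape (1,)
```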
## Features

- Build and visualize custom neural network architectures.
- Train and test basic models: logic gates, associative memory, text generation.
- Export network diagrams in Mermaid syntax (example below).
- Clean, decoupled design.
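For a sense of what a Mermaid export looks like, the snippet below is purely illustrative (the library's actual output format is still evolving); it builds Mermaid source for a tiny two-input network:

```python
# Purely illustrative: the kind of Mermaid source a diagram export might
# emit for a two-input, single-neuron network.
edges = [("x1", "n1"), ("x2", "n1"), ("n1", "y")]
print("\n".join(["graph LR"] + [f"    {src} --> {dst}" for src, dst in edges]))
```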
## Getting Started

```bash
git clone https://github.com/ndjuric/neural_foundations.git
cd neural_foundations
python -m venv .venv
source .venv/bin/activate    # macOS/Linux
# .\.venv\Scripts\activate   # Windows PowerShell
pip install -r requirements.txt
```
Launch the interactive Textual UI:

```bash
python3 main.py
```

Follow the on-screen prompts to select a model and preset example.
Use the library in your own Python scripts:

```python
from models_nn.perceptron import Perceptron

# Create a Perceptron for OR logic
p = Perceptron(n_inputs=2, learning_rate=0.1)
X = [[0, 0], [0, 1], [1, 0], [1, 1]]
y = [0, 1, 1, 1]
p.train(X, y, epochs=10)
print([p.predict(x) for x in X])
print(p.visualize())  # Mermaid diagram, WIP
```
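For context on what `train` and `predict` do conceptually, here is a minimal from-scratch perceptron using the classic perceptron learning rule. This is a reference sketch, not the library's implementation, and `TinyPerceptron` is a made-up name:

```python
import numpy as np

class TinyPerceptron:
    """Reference sketch of the classic perceptron learning rule."""

    def __init__(self, n_inputs, learning_rate=0.1):
        self.w = np.zeros(n_inputs)
        self.b = 0.0
        self.lr = learning_rate

    def predict(self, x):
        return int(np.dot(self.w, x) + self.b > 0)  # step activation

    def train(self, X, y, epochs=10):
        for _ in range(epochs):
            for xi, target in zip(X, y):
                error = target - self.predict(xi)
                # Update only on misclassification (error is -1, 0, or +1).
                self.w += self.lr * error * np.asarray(xi, dtype=float)
                self.b += self.lr * error

p = TinyPerceptron(n_inputs=2)
p.train([[0, 0], [0, 1], [1, 0], [1, 1]], [0, 1, 1, 1])
print([p.predict(x) for x in [[0, 0], [0, 1], [1, 0], [1, 1]]])
```

On a linearly separable problem like OR, the updates stop once every input is classified correctly, so the final line prints `[0, 1, 1, 1]`.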
## Examples

- **Perceptron OR Gate**

  ```bash
  python3 main.py  # Select "Perceptron" → "OR Gate"
  ```

- **Hopfield Network Associative Memory** (see the NumPy sketch after this list)

  ```bash
  python3 main.py  # Select "HopfieldNetwork" → "4-bit Patterns"
  ```

- **Markov Chain Text Generation**

  ```bash
  python3 main.py  # Select "MarkovChain" → "Lorem Ipsum"
  ```
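To see what the Hopfield example does under the hood, here is a self-contained NumPy sketch of Hebbian storage and asynchronous recall. It is not the library's implementation, and it uses 6-bit rather than 4-bit patterns so the corrupted cue has a unique nearest stored pattern:

```python
import numpy as np

# Two 6-bit patterns in ±1 form, stored with the Hebbian outer-product rule.
patterns = np.array([[1,  1, 1, -1, -1, -1],
                     [1, -1, 1, -1,  1, -1]])
W = sum(np.outer(p, p) for p in patterns).astype(float)
np.fill_diagonal(W, 0)  # no self-connections

# Recall: start from a corrupted cue and update neurons one at a time.
# (Asynchronous updates; fully synchronous updates can oscillate.)
state = np.array([1, 1, 1, -1, -1, 1])  # first pattern, last bit flipped
for _ in range(3):                      # a few sweeps suffice at this size
    for i in range(len(state)):
        state[i] = 1 if W[i] @ state >= 0 else -1
print(state)  # recovers the stored pattern [ 1  1  1 -1 -1 -1]
```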
## Project Structure

```
.
├── main.py             # Interactive TUI entrypoint
├── requirements.txt    # Project dependencies
├── README.md           # This file
└── src/
    ├── foundational_nn     # Base primitives: cells, activations, modifiers
    ├── architecture_nn     # Layers, models, patterns
    └── models_nn           # Ready-to-use example models
```
## Extending the Library

- Define new primitives in `foundational_nn` (see the sketch below).
- Create custom layers or network patterns in `architecture_nn`.
- Package new models with demos in `models_nn`.
- Hook the model into `main.py` in the repo root. This is currently a bit clunky, but the goal is for working models to appear with sensible, predefined real-world scenarios when `main.py` is run.
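As a rough illustration of the first step, a new activation primitive might look like the following. The class shape here is hypothetical; mirror whatever base classes and hooks `foundational_nn` actually defines:

```python
# Hypothetical sketch -- foundational_nn's actual base classes and
# registration hooks may differ; mirror an existing primitive instead.
import numpy as np

class LeakyReLU:
    """A candidate new activation primitive."""

    def __init__(self, slope=0.01):
        self.slope = slope

    def __call__(self, z):
        return np.where(z > 0, z, self.slope * z)

    def derivative(self, z):
        # Needed if layers in architecture_nn backpropagate through it.
        return np.where(z > 0, 1.0, self.slope)

act = LeakyReLU()
print(act(np.array([-2.0, 0.5])))  # [-0.02  0.5 ]
```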
## Contributing

Contributions are welcome! Star the repo, or better yet, fork it and get creative: this is a playground made for learning. Feel free to open issues and submit pull requests.
## License

Licensed under the MIT License.