Karpathy's 200-Line Pure Python AI Builds
Train GPT and RNN models, play Pong with RL, and build Bitcoin transactions in pure Python with zero dependencies, distilling neural nets to their essentials in under 200 lines.
Minimalist From-Scratch AI Implementations
Andrej Karpathy demonstrates core AI concepts through dependency-free Python code. His microGPT (Feb 2026) trains and runs inference on a GPT model in exactly 200 lines, showing you don't need heavy frameworks to grasp transformer basics. Similarly, the 2015 RNN post trains character-level recurrent nets to generate poetry, LaTeX math, and code, revealing how much structure these models absorb and hinting at future scaling. For RL, the 2016 Pong example uses policy gradients to master an ATARI 2600 game from raw pixels, weighing pros like simplicity and end-to-end learning against cons like high variance and poor sample efficiency. These builds prioritize understanding over production scale, letting you replicate end-to-end training on modest hardware.
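The policy-gradient idea behind the Pong post can be sketched in dependency-free Python on a toy two-armed bandit. This is a hypothetical minimal setup for illustration, not Karpathy's actual code: a softmax policy over two logits, updated with the REINFORCE rule so probability mass flows toward the arm that pays off.

```python
import math, random

random.seed(0)
logits = [0.0, 0.0]  # policy parameters for a 2-armed bandit
lr = 0.1

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

for step in range(500):
    probs = softmax(logits)
    a = 0 if random.random() < probs[0] else 1   # sample an action
    reward = 1.0 if a == 1 else 0.0              # arm 1 is the better arm
    # gradient of log pi(a) w.r.t. the logits is one-hot(a) - probs
    for i in range(2):
        grad = (1.0 if i == a else 0.0) - probs[i]
        logits[i] += lr * reward * grad          # REINFORCE update

print(softmax(logits)[1])  # probability of the better arm approaches 1.0
```

The same score-function gradient, scaled from two logits to a small neural net and from a bandit reward to discounted episode returns, is what the 130-line Pong script runs on raw pixels.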
Neural Net History, Benchmarks, and Weaknesses
Karpathy recreates LeCun et al.'s 1989 backprop-trained net, arguably the first real-world end-to-end DL application, applying 33 years of progress to it, then extrapolates how 2055 might view today's DL. On ImageNet (ILSVRC 2014), the top ConvNet hit 6.7% top-5 error, while his 2014 self-experiment pegged human error at roughly 5.1%, putting classifiers' limits in context. He also exposes fooling attacks: imperceptible image perturbations flip the predictions of linear classifiers and ConvNets alike (2015 post), showing even simple models are vulnerable to adversarial examples. Manually labeling CIFAR-10 sets a human baseline, contextualizing DL gains on tiny 32x32 images.
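The fooling trick reduces to basic calculus on a linear score. A toy sketch with made-up numbers (three features standing in for pixels, and an exaggerated step size so the effect is visible): nudge each input by a small step against the sign of its weight, and the accumulated score drop flips a confident prediction.

```python
# Toy binary linear classifier: score > 0 means class +1.
def score(w, b, x):
    return sum(wi * xi for wi, xi in zip(w, x)) + b

w = [0.8, -0.4, 0.3]   # made-up weights
b = 0.1
x = [1.0, 0.5, 2.0]    # a "clean" input, confidently class +1
print(score(w, b, x))  # positive

# Adversarial step: x' = x - eps * sign(w). Each feature moves only
# eps, but the score drops by eps * sum(|w_i|), which grows with
# dimensionality; eps is exaggerated here since we have 3 features,
# while on 150k-pixel images an imperceptible eps suffices.
eps = 1.0
x_adv = [xi - eps * (1 if wi > 0 else -1) for wi, xi in zip(w, x)]
print(score(w, b, x_adv))  # negative: same input, flipped class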
Productivity Tracking and Data Experiments
Track daily productivity by logging active windows and keystroke frequencies (2014 tool for Ubuntu/OSX), generating HTML visualizations of insights like peak hours. Scrape the Hacker News front and new pages every minute for 50 days (2013) to model how stories rise and fall: success ties to timing, titles, and early upvotes. Visualize the top 500 Twitter accounts with t-SNE (2014), clustering similar tweeters; the accompanying tsnejs library is open-sourced for browser-based dimensionality reduction. These projects quantify behaviors without bloat, applying ML to personal and meta data.
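The core of the window-logging idea is sampling (timestamp, active window) pairs and aggregating durations. A minimal sketch with a hypothetical tab-separated log format (the actual 2014 tool's format may differ):

```python
from collections import Counter
from datetime import datetime

# Hypothetical log: "ISO-timestamp<TAB>active window", sampled periodically.
log = """\
2014-03-01T09:00:00\tterminal
2014-03-01T09:00:09\tterminal
2014-03-01T09:00:18\tbrowser
2014-03-01T09:00:27\tterminal
2014-03-01T10:15:00\tbrowser"""

seconds_per_window = Counter()
hour_activity = Counter()
lines = log.splitlines()
for prev, cur in zip(lines, lines[1:]):
    t0, window = prev.split("\t")
    t1, _ = cur.split("\t")
    dt = (datetime.fromisoformat(t1) - datetime.fromisoformat(t0)).total_seconds()
    dt = min(dt, 60)  # cap long gaps: the machine was probably idle
    seconds_per_window[window] += dt
    hour_activity[datetime.fromisoformat(t0).hour] += dt

print(seconds_per_window.most_common())  # time spent per window
print(hour_activity.most_common(1))      # peak hour of activity
```

Binning the same counters by hour of day is all it takes to render the "peak hours" view as an HTML chart.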
Non-AI Hacks and Reflections
Biohacking lite (2020) experiments with tweaking biochemistry and metabolism for energy gains. The PhD survival guide (2016) offers tips for navigating academia. The Bitcoin post (2021) creates, signs, and broadcasts a transaction in pure Python. Short AI stories (2015, 2021) anthropomorphize forward passes and cognitive leaps.
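One primitive from the Bitcoin walkthrough fits in a few lines: a transaction id is the double SHA-256 of the serialized transaction, displayed in reversed byte order. The payload below is a stand-in, not a real transaction.

```python
import hashlib

def txid(raw_tx: bytes) -> str:
    # Bitcoin hashes twice with SHA-256 and displays the result
    # in little-endian (byte-reversed) hex.
    h = hashlib.sha256(hashlib.sha256(raw_tx).digest()).digest()
    return h[::-1].hex()

print(txid(b"not a real transaction"))  # 64 hex characters
```

The rest of the post layers ECDSA signing and raw socket broadcasting on top of this same serialize-then-double-hash pattern.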