Chapter 01
Foundations: uncertainty and loss
Bayes, entropy, and cross-entropy as the backbone of probability and loss.
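The tagline compresses three definitions into one line; for orientation, here they are in standard notation (a quick reference, not taken from the chapter itself):

\[
p(\theta \mid x) = \frac{p(x \mid \theta)\, p(\theta)}{p(x)}
\qquad
H(p) = -\sum_x p(x) \log p(x)
\qquad
H(p, q) = -\sum_x p(x) \log q(x)
\]

Minimizing cross-entropy against one-hot labels is the same as maximizing log-likelihood, which is why the same quantity shows up both as an information measure and as the default classification loss.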
Personal Project / ML Visualizations
A visual study library for machine learning, built to make core ideas easier to understand, revisit, and explain.
Chapter Library
Chapter 01
Bayes, entropy, and cross-entropy as the backbone of probability and loss.
Chapter 02
Vectors, dot products, eigendirections, and SVD as the geometry underneath modern ML.
Chapter 03
Likelihood, priors, bias-variance, and regularization as one connected mental model.
Chapter 04
Neurons, activations, backpropagation, and optimization as one connected story.
Chapter 05
Learning rate intuition and why derivatives compounded step after step make training stable, brittle, or explosive.
Chapter 06
Tokenization, attention, and retrieval as three different pieces of the same modern LLM stack.
Chapter 07
Fine-tuning, LoRA, quantization, distillation, and the deployment tradeoffs that decide what is practical.
Chapter 08
Thresholds, probability calibration, and ranking quality in the atelier format.
Chapter 09
Latent factors, two-tower retrieval, and ranking objectives that actually match recommendation surfaces.
Chapter 10
Retrieval versus ranking, plus the offline-versus-online tradeoffs that make systems honest.
Chapter 11
Train-serve skew, proxy-versus-live evaluation, and drift in the same authored language.
Chapter 12
Useful tree splits, stagewise boosting, and why tabular ensembles still dominate so much applied ML.
Chapter 13
Leakage, missingness, and distribution shift as the hidden reasons many models fail.
Chapter 14
Temperature, sampling behavior, and why preference tuning needs a leash instead of raw reward chasing.
Chapter 15
MDPs, value functions, TD learning, Q-learning, and DQN in one clean ladder from loop to deep control.
Chapter 16
Guidance, preference tuning, and reward hacking once you go beyond the first generative intuition.