Deep dives into machine learning concepts — the way I wish they were explained to me.
A 3-part series covering VAEs, GANs, and Diffusion Models from first principles
From autoencoders to ELBO — understanding variational autoencoders with every "but why?" answered
From the VAE's blur problem to the Wasserstein distance — why adversarial training works and when it breaks
From noise to images — how diffusion models combine the stability of VAEs with the sample quality of GANs
A growing series on attention, Transformer architecture, and efficient inference
More deep dives on the way.