Welcome!

This is a blog primarily about topics in mathematics, machine learning, and technology, but occasionally about other things.

New posts every couple of months!

Featured Content

Browse a selection of my finest words:

Latest Posts:

GPT and Technofetishistic Egotism

Hey ChatGPT, can you write a blog post for me?
March 21, 2023

AI is having a moment! New models like ChatGPT and Stable Diffusion have captured the imagination and challenged assumptions about the capabilities and limits of machine learning. But how do these “generative” models work? What does a future where these models are commonplace look like? And what are their limitations? I’ll focus primarily on GPT, but some of this analysis applies to image generation models like Stable Diffusion as well (and indeed, with GPT-4’s new visual capabilities, the line between these two categories of models is now rather blurry).

Keep reading...

Some efficient ways to invert a binary tree

Is this code leet enough?
August 16, 2022

It is the most famous LeetCode problem of all time, and a task that every FAANG (or MANGA?) software engineer must know by heart: inverting a binary tree. It’s also one that the inventor of Homebrew famously flubbed in a Google interview (and, less famously and more recently, so did I!). In this post I want to take a closer look at this problem and talk about some truly absurd ways you might choose to answer it in an interview. But first, let’s recap the problem.
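
For a taste of the problem, here’s a minimal sketch of the textbook recursive answer, in Julia purely for illustration (the `TreeNode` type and `invert!` name are my own assumptions, not the post’s code; the post itself explores stranger approaches):

```julia
# A minimal binary tree node, for illustration only.
mutable struct TreeNode
    val::Int
    left::Union{TreeNode, Nothing}
    right::Union{TreeNode, Nothing}
end

# Convenience constructor for a leaf node.
TreeNode(val) = TreeNode(val, nothing, nothing)

# The textbook recursive inversion: swap the children at every node.
function invert!(node::Union{TreeNode, Nothing})
    node === nothing && return nothing
    node.left, node.right = invert!(node.right), invert!(node.left)
    return node
end

root = TreeNode(1, TreeNode(2), TreeNode(3))
invert!(root)  # now root.left.val == 3 and root.right.val == 2
```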

Keep reading...

Exponential smoothing like a Bayesian

No frequentists were harmed in the smoothing process
April 12, 2022

Exponential smoothing is a class of forecasting methods in which weighted averages of past observations are used to predict future values. It is particularly popular for financial time series because, unlike many other forecasting methods (e.g. ARIMA models), exponential smoothing does not require the underlying series to be stationary (or stationary after differencing). However, many financial time series (such as stock prices) are highly noisy, and exponential smoothing models must be fit recursively, using numerical optimization. After fitting, how can we construct confidence intervals for the parameters of our exponential smoothing model? In this post, I’ll describe one way to attack this problem using probabilistic programming, with examples using Julia’s Turing package.
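
To make the idea concrete, here’s a rough sketch of what simple exponential smoothing looks like as a Turing model. The model name, priors, and single-parameter setup are illustrative assumptions of mine, not the model from the post:

```julia
using Turing

# A sketch of simple exponential smoothing as a probabilistic model.
# Priors and structure here are illustrative, not the post's exact model.
@model function simple_exp_smoothing(y)
    α ~ Beta(2, 2)                         # smoothing weight, constrained to (0, 1)
    σ ~ truncated(Normal(0, 1), 0, Inf)    # observation noise scale
    level = y[1]                           # initialize the level at the first observation
    for t in 2:length(y)
        y[t] ~ Normal(level, σ)            # one-step-ahead forecast is the current level
        level = α * y[t] + (1 - α) * level # exponential smoothing update
    end
end

# Posterior samples yield credible intervals for α and σ:
# chain = sample(simple_exp_smoothing(data), NUTS(), 1_000)
```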

Keep reading...

Did I write something you want to use in your own work? (I'm seriously flattered!) Please do: all code is MIT licensed; any other content is licensed CC-BY-NC unless otherwise indicated. (If neither of those licenses works for you, get in touch.)