Code Optimisation via Memoization - Computerphile

Computerphile
Education · 5 min read · 19 min video
Nov 27, 2025 · 137,985 views


TL;DR

Memoization speeds up recursive step counting by caching results.

Key Insights

1. Memoization caches function results to avoid recomputing identical subproblems.
2. Base cases like f(0) = 1 and f(n < 0) = 0 prevent infinite or invalid recursions.
3. Generalizing beyond fixed steps (1, 3, 5) allows arbitrary move sets via a steps list.
4. Caching transforms exponential recursion into reuse of prior results, dramatically boosting speed.
5. Benchmarks show naive recursion scales poorly while memoized runs handle large inputs efficiently.

DEFINING MEMOIZATION AND ITS PURPOSE

Memoization is caching the results of a function call so you don’t have to recompute the same answer again. The idea is simple: if a function is called with the same input, you store the output the first time and reuse it on subsequent calls. In recursive algorithms, the same subproblems tend to appear many times, which means a lot of wasted work. By saving intermediate results, memoization turns expensive exponential-time processes into faster, practical ones. It pairs naturally with top-down approaches and is a basic tool in dynamic programming.
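The idea can be shown in a few lines. This is a minimal sketch of the general pattern, not code from the video; the names `memoize` and `square` are illustrative.

```python
# Wrap any pure single-argument function so that repeated calls with the
# same input hit a dictionary cache instead of rerunning the body.
def memoize(fn):
    cache = {}
    def wrapper(x):
        if x not in cache:      # first time we see this input...
            cache[x] = fn(x)    # ...compute and remember the result
        return cache[x]         # later calls reuse the stored value
    return wrapper

calls = []

@memoize
def square(x):
    calls.append(x)             # record how often the body really runs
    return x * x

square(4)
square(4)
square(4)
print(len(calls))  # 1: the body ran once, the cache answered twice
```

The wrapper is where all the caching logic lives; the wrapped function stays oblivious, which is what makes the technique so easy to bolt on.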

THE FROG HOPPING AND STAIR-CLIMBING EXAMPLE

To illustrate, imagine frogs hopping on lily pads or a climber on a staircase. If you can move one or two steps, the count resembles Fibonacci. The video then makes it tougher by allowing 1, 3, or 5-step jumps. The question becomes: how many distinct sequences reach the top using those options? The underlying math is the same idea as Fibonacci, just with a different move set, making the recursion a richer testbed for caching and performance.

NAIVE RECURSION: EXPONENTIAL GROWTH AND OVERLAPPING SUBPROBLEMS

With a naive recursion, you compute f(n) = f(n−1) + f(n−3) + f(n−5) (for steps 1, 3, and 5). Each call spawns three more calls, and many of the same subproblems are evaluated repeatedly. As n grows, the total number of calls explodes, so even modest values become painfully slow. The video demonstrates this by running the function for increasing n; you can see the time climb quickly and the computation becomes impractical on a laptop. This is the classic sign of an exponential, overlapping-subproblems problem.

BASE CASES AND END CONDITIONS THAT SAVE YOU FROM NEGATIVE PATHS

Crucially, to prevent endless recursion you need proper base cases. If n equals zero, there is exactly one valid path—the current sequence is complete—and you return 1. If n becomes negative, that path is invalid and returns 0. These end conditions stop the recursion from wandering into impossible depths or counting non-existent routes. With those guards in place, the function can, in principle, explore all valid paths, but without memoization it still repeats work many times before finishing.
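The recursion and its end conditions can be written down directly. This is a sketch in the spirit of the video; `count_paths` is my own name, not necessarily the identifier used on screen.

```python
# Naive recursion for the 1/3/5 staircase, with the two base cases:
# n == 0 means one complete valid path, n < 0 means an invalid overshoot.
def count_paths(n):
    if n == 0:
        return 1   # reached the top exactly: one valid sequence
    if n < 0:
        return 0   # overshot the top: this path does not count
    return count_paths(n - 1) + count_paths(n - 3) + count_paths(n - 5)

print(count_paths(10))  # 47 distinct ways to climb 10 steps
```

The 47 for n = 10 matches the figure quoted in the video's benchmark segment.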

GENERALIZING BEYOND 1,3,5: WHY FLEXIBLE STEPS MATTER

To make the approach general, we replace the fixed set {1,3,5} with a variable list of allowed steps. The code then computes f(n) as the sum of f(n−s) for every s in steps. In Python this can be written succinctly with a comprehension over steps: total = sum(f(n−s) for s in steps). This simple generalization lets you try different move sets without rewriting the core recursion, and it highlights why memoization is so valuable once subproblems repeat for many n.
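Concretely, the generalization might look like this (a sketch, assuming a default move set of 1, 3, 5):

```python
# Generalized recursion: the allowed jump sizes arrive as a sequence,
# and the comprehension quoted above sums over them.
def count_paths(n, steps=(1, 3, 5)):
    if n == 0:
        return 1
    if n < 0:
        return 0
    return sum(count_paths(n - s, steps) for s in steps)

print(count_paths(10))          # 47 with the 1/3/5 move set
print(count_paths(4, (1, 2)))   # 5: the Fibonacci-style 1-or-2 staircase
```

Swapping in `(1, 2)` recovers the classic stair-climbing problem mentioned earlier, with no change to the core recursion.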

HAND-ROLLING THE RECURSION: HOW A DICTIONARY CACHE WORKS

Memoization works by attaching a cache to the recursive function. Before expanding a subproblem f(n), the algorithm checks whether the answer for that n is already known. If so, it returns the cached value immediately. If not, it computes the total, stores it in the cache, and then returns it. This turns a tree of exponential calls into a shallow path that mostly reuses earlier results, a shift from recomputation to reuse that underpins dynamic programming.

IMPLEMENTING MEMOIZATION IN PYTHON: STEP BY STEP

In the video the coder implements memoization by introducing a cache dictionary and a helper function that carries the cache. The structure adds base-case checks, a cache lookup, and only then the recursive expansion. If a value for n already exists in the cache, it is returned directly; otherwise the total is computed, saved in cache[n], and returned. The rest of the code remains intact, which makes the change relatively localized and easy to reason about, illustrating how caching layers integrate with recursion.
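A hand-rolled version of that structure might look as follows. The identifiers are illustrative; the video's exact names may differ.

```python
# Memoized recursion with a dictionary cache keyed by n, threaded
# through the recursive calls as described above.
def count_paths(n, steps=(1, 3, 5), cache=None):
    if cache is None:
        cache = {}               # fresh cache for each top-level call
    if n == 0:
        return 1
    if n < 0:
        return 0
    if n not in cache:           # compute each subproblem at most once
        cache[n] = sum(count_paths(n - s, steps, cache) for s in steps)
    return cache[n]

print(count_paths(10))    # 47, the same answer as the naive version
print(count_paths(300))   # a very large count, now computed almost instantly
```

Initializing the cache inside the function (rather than as a mutable default argument) keeps separate top-level calls independent.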

BENCHMARKS: NAIVE VS MEMOIZED PERFORMANCE

With no memoization the runtime blows up as n increases. The video shows approximate figures: for n=10 there are 47 valid paths and the answer is effectively instant; at n=30 the computation takes around half a second; by n=35 it stretches into several seconds. Pushing to higher n becomes impractical on a laptop. By contrast, the memoized version performs dramatically faster, around a thousandth or less of the naive run time for the same input, and it can reach much larger n almost instantly because previously computed results are reused. This contrast highlights the practical payoff of caching.
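A rough way to reproduce the comparison locally; the exact timings depend on the machine, so treat the printed numbers as indicative only.

```python
# Time the naive and memoized versions side by side on the same input.
import time

def naive(n):
    if n == 0:
        return 1
    if n < 0:
        return 0
    return naive(n - 1) + naive(n - 3) + naive(n - 5)

CACHE = {}

def memo(n):
    if n == 0:
        return 1
    if n < 0:
        return 0
    if n not in CACHE:
        CACHE[n] = memo(n - 1) + memo(n - 3) + memo(n - 5)
    return CACHE[n]

for fn in (naive, memo):
    start = time.perf_counter()
    result = fn(28)
    elapsed = time.perf_counter() - start
    print(f"{fn.__name__}: {result} paths in {elapsed:.4f}s")
```

Both functions return the same count; only the time differs, and the gap widens rapidly as n grows.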

REAL-WORLD TAKEAWAYS: WHEN TO USE MEMOIZATION AND ITS LIMITS

Memoization shines when a function is called with overlapping subproblems and the cost of recomputation dominates. In problems with large branching factors or deep recursion, caching transforms infeasible tasks into doable ones. However, it requires extra memory and careful cache management; not every problem benefits if subproblems are unique or if the overhead of caching outweighs gains. The video frames memoization as a principle that often pairs with dynamic programming, turning recursive definitions into iterative, memory-efficient solutions.
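In Python specifically, the standard library already ships this pattern: functools.lru_cache wraps a function with exactly this kind of cache, trading memory for speed as described above.

```python
# The same staircase recursion, memoized with the standard-library decorator.
from functools import lru_cache

@lru_cache(maxsize=None)   # unbounded cache: every subproblem is kept
def count_paths(n):
    if n == 0:
        return 1
    if n < 0:
        return 0
    return count_paths(n - 1) + count_paths(n - 3) + count_paths(n - 5)

print(count_paths(10))            # 47
print(count_paths.cache_info())   # hit/miss counts show the reuse happening
```

Setting a finite `maxsize` bounds the memory cost, which matters when the caveats above about cache management apply.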

SPONSOR MESSAGE AND CLOSING THOUGHTS

Towards the end, the video briefly acknowledges sponsorship by Brilliant, highlighting their math and computing courses. The sponsor notes that their interactive content can help learners grasp concepts like memoization and broader computing ideas, while offering a discount link. The shout-out reminds viewers that mastering algorithmic ideas benefits from practice and experimentation, and encourages exploring Brilliant's resources to apply memoization to personal coding challenges. To learn for free, visit brilliant.org/computerphile as noted in the video.

Memoization Quick Reference

Practical takeaways from this episode

Do This

Cache results of expensive recursive calls using a dictionary keyed by input
Check the cache before computing and return cached results when available
Generalize the step options as a list to support arbitrary move sizes

Avoid This

Don't forget to initialize and pass the cache through recursive calls
Don't recompute values that are already cached; avoid unnecessary work

Naive vs Memoized Step-Count Performance (sample)

Data extracted from this episode

n_steps   naive_time_sec   memo_time_sec
10        0.083            n/a
30        0.5              0.051
35        7.5              n/a

Common Questions

What is memoization and why does it help here?

Memoization caches the results of expensive function calls so repeated inputs can return quickly from a cache rather than being recomputed. This video demonstrates it with a recursive stair-climbing example and shows how caching prevents exponential blow-up.

