Teaching Series #8 - Dynamic Programming

"...To understand recursion, one first must understand recursion..."

Dynamic programming (DP) is a problem-solving technique that tackles complex problems by breaking them down into smaller, overlapping subproblems. Instead of recalculating the solutions to these subproblems every time they appear, DP stores their results, avoiding redundant computation. This "store and reuse" approach makes DP particularly efficient for problems with certain structural properties.

Understanding dynamic programming is important for scientists and engineers because it provides a systematic way to solve optimisation problems across many domains, from resource allocation and scheduling to designing efficient algorithms for data analysis and control systems.

The idea behind dynamic programming is simple: work smarter, not harder. When you encounter a complex problem, DP breaks it down into smaller, more manageable pieces. The magic happens when these smaller pieces overlap, meaning the same small problem might appear multiple times. Instead of solving it repeatedly, dynamic programming says, "I've seen this before! Let me just use the answer I already calculated." It stores those answers, usually in a table or a similar structure, so they can be quickly retrieved later. This avoids redundant calculations and significantly speeds up problem-solving.

Put simply, dynamic programming is all about breaking down a big problem into smaller subproblems, solving those subproblems, storing their solutions, and reusing the stored solutions whenever the same subproblem reappears. It's a super-efficient way to avoid doing the same work over and over again!

Two key properties tell us whether a problem is a good candidate for dynamic programming: Overlapping Subproblems and Optimal Substructure.

A problem has overlapping subproblems if the same subproblems are encountered again and again when solving the larger problem. Think of it like a decision tree where many branches eventually lead to the same smaller decision. Overlapping subproblems mean those smaller decisions are shared.
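
To make this concrete, here is a minimal sketch in Python, using the classic Fibonacci numbers (a standard textbook illustration, not something specific to this series). The naive recursion below runs into the same subproblems again and again:

    def fib(n):
        """Naive recursive Fibonacci: simple, but exponential time,
        because the same subproblems are recomputed over and over."""
        if n < 2:               # base cases: fib(0) = 0, fib(1) = 1
            return n
        return fib(n - 1) + fib(n - 2)

    # fib(5) computes fib(3) twice and fib(2) three times; for larger n
    # the call tree explodes with repeated (overlapping) subproblems.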

A problem has optimal substructure if an optimal solution to the whole problem can be constructed from optimal solutions to its subproblems.

Think of it like this: if you can find the best solution for the small pieces, and putting those best small pieces together gives you the best solution for the whole puzzle, then your problem has optimal substructure.
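
For a concrete sketch, consider a hypothetical grid-walk puzzle (the function name and grid values below are made up purely for illustration): the cheapest route to any cell must extend the cheapest route to one of its neighbouring cells, which is exactly optimal substructure at work.

    def min_path_cost(grid):
        """Cheapest route from the top-left to the bottom-right of a
        grid, moving only right or down.

        Optimal substructure: the best cost to reach a cell is that
        cell's own cost plus the best cost to reach the cell above it
        or the cell to its left."""
        rows, cols = len(grid), len(grid[0])
        cost = [[0] * cols for _ in range(rows)]
        for r in range(rows):
            for c in range(cols):
                if r == 0 and c == 0:
                    cost[r][c] = grid[r][c]                   # starting cell
                elif r == 0:
                    cost[r][c] = cost[r][c - 1] + grid[r][c]  # only "left" exists
                elif c == 0:
                    cost[r][c] = cost[r - 1][c] + grid[r][c]  # only "above" exists
                else:
                    # best small pieces combine into the best whole
                    cost[r][c] = min(cost[r - 1][c], cost[r][c - 1]) + grid[r][c]
        return cost[-1][-1]

    print(min_path_cost([[1, 3, 1],
                         [1, 5, 1],
                         [4, 2, 1]]))  # 7, via the route 1 -> 3 -> 1 -> 1 -> 1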

Many problems in science and engineering, like finding the shortest path in a network, optimising resource allocation, or even DNA sequence alignment, exhibit these two properties, making them perfect for dynamic programming!

There are two primary approaches to dynamic programming: Top-Down (Memoization) and Bottom-Up (Tabulation).

Top-Down starts from the big problem and works its way down to the subproblems. Bottom-Up does the opposite: it starts by solving the smallest, simplest subproblems first, and then uses those solutions to build up to larger subproblems, eventually reaching the original problem we want to solve.

Think of it like building a LEGO castle: instead of starting with the whole castle blueprint and breaking it down, you start by building all the individual bricks, then assembling them into small sections, then larger sections, until you have the entire castle. You're building from the "ground up."

Here's how Bottom-Up (Tabulation) typically works (a short sketch follows the list):

  • Identify the base cases: Determine the simplest possible subproblems and their solutions.

  • Create a table: You usually use an array or a dictionary to store the solutions to these subproblems.

  • Iterative filling: You then iteratively fill the table, starting from the base cases and moving towards the solution of the original problem. Each new entry is calculated using the already-computed (and stored) solutions to smaller subproblems.
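
As a minimal sketch of those three steps (again using Fibonacci purely for illustration):

    def fib_tabulated(n):
        """Bottom-up (tabulation) Fibonacci."""
        if n < 2:
            return n                    # trivial inputs answered directly
        table = {0: 0, 1: 1}            # steps 1 and 2: base cases in a table
        for i in range(2, n + 1):       # step 3: iterative filling
            table[i] = table[i - 1] + table[i - 2]
        return table[n]                 # the original problem's answer

    print(fib_tabulated(50))  # 12586269025, computed instantly, unlike the naive version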

Top-Down Dynamic Programming, also known as Memoization, works like a teacher telling you to start with the main problem and break it down into smaller pieces, writing the answer to each piece in your notebook as you go. If you ever encounter a piece you have already solved, you just look up the answer in your notebook instead of solving it again. (A short sketch follows the steps below.)

  • Start from the top: You begin by trying to solve the original, larger problem.

  • Recursive calls: When solving the larger problem, you naturally encounter subproblems. You make recursive calls to solve these subproblems.

  • Cache results (Memoization): Here's the "dynamic programming" part! Before you calculate the solution to a subproblem, you first check if you've already calculated it and stored it in a "cache" (often an array or a hash map).

    • If it's already in the cache, you simply return the stored result. No new calculation needed!

    • If it's not in the cache, you calculate it, store the result in the cache, and then return it.
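
Here is the same Fibonacci example written top-down, as a minimal sketch of the steps above. (In Python, functools.lru_cache can handle the caching for you; the cache is written out by hand here to show the mechanics.)

    def fib_memoized(n, cache=None):
        """Top-down (memoized) Fibonacci."""
        if cache is None:
            cache = {}
        if n in cache:              # already in the cache? just return it
            return cache[n]
        if n < 2:                   # base cases
            result = n
        else:                       # recursive calls to smaller subproblems
            result = fib_memoized(n - 1, cache) + fib_memoized(n - 2, cache)
        cache[n] = result           # store the result before returning it
        return result

    print(fib_memoized(50))  # 12586269025, with each subproblem solved only once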

Dynamic programming isn't just an abstract concept for algorithms classes; it's a fundamental tool that underpins a vast array of technologies and scientific discoveries.

If you want to practice dynamic programming and be inspired, you can find different examples implemented in the codebase.

Sincerely, MO