DSA Dynamic Programming


Definition

Dynamic Programming is a method of solving problems by breaking them into smaller overlapping subproblems, solving each just once, and storing the results so you don’t compute the same thing repeatedly.

Rather than recalculating the answer to a subproblem every time it appears, DP saves time by memoizing it: storing the result either in a table (bottom-up) or in a cache used by a recursive function (top-down).


What Makes a Problem Fit for DP?

A problem is suitable for DP if it has:

  • Overlapping Subproblems: Smaller tasks repeat as part of solving larger ones.
  • Optimal Substructure: The best answer to the full problem depends on the best answers to its parts.

How It Works

  • Divide the big question into smaller, repeating tasks.
  • Store answers to these tasks instead of solving them again.
  • Build your final answer using these saved results.

It's like solving a puzzle where once you’ve placed a piece, you never pick it up again — you build the full picture using what you've already done.

Example

Problem: Find the Nth Fibonacci number where:

Fib(0) = 0, Fib(1) = 1, Fib(n) = Fib(n-1) + Fib(n-2)

Brute Force (Slow)

Repeatedly recalculates the same values — very inefficient.
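The brute-force version can be sketched as a direct translation of the recurrence (the name `fib_naive` is ours, chosen to distinguish it from the DP versions below):

```python
def fib_naive(n):
    # Direct translation of the recurrence. The same subproblems are
    # recomputed exponentially many times: fib_naive(5) calls
    # fib_naive(3) twice, fib_naive(2) three times, and so on,
    # giving roughly O(2^n) calls overall.
    if n <= 1:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)
```

Try tracing `fib_naive(5)` by hand: the call tree repeats whole subtrees, which is exactly the redundant work DP removes.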


Dynamic Programming (Fast)

Bottom-Up Approach (Tabulation):

def fib(n):
    if n <= 1:
        return n
    dp = [0, 1]  # dp[i] holds Fib(i), filled from the bottom up
    for i in range(2, n + 1):
        dp.append(dp[i - 1] + dp[i - 2])
    return dp[n]

We solve smaller Fibonacci values first and use them to calculate the next, storing everything in a list. No repeated computation.


Top-Down (Memoization) Approach

With recursion and a cache:

def fib(n, memo=None):
    if memo is None:  # avoid a shared mutable default argument
        memo = {}
    if n in memo:
        return memo[n]
    if n <= 1:
        return n
    memo[n] = fib(n - 1, memo) + fib(n - 2, memo)
    return memo[n]

You only compute each Fibonacci number once — after that, it’s pulled straight from memory.
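In everyday Python, the same memoization pattern is often written with the standard library's `functools.lru_cache` decorator, which manages the cache for you (an alternative sketch, not part of the tutorial's own code):

```python
from functools import lru_cache

@lru_cache(maxsize=None)  # unbounded cache: every distinct n is stored
def fib(n):
    # Each n is computed once; later calls with the same n are cache hits.
    if n <= 1:
        return n
    return fib(n - 1) + fib(n - 2)
```

The decorator keys the cache on the function's arguments, so the logic stays identical to the plain recursive version while gaining DP's speed.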


Where DP Shines (Common Use Cases)

  • Coin Change Problem: Fewest coins to reach a value.
  • 0/1 Knapsack Problem: Maximize value with limited weight.
  • Longest Common Subsequence (LCS): Find the longest shared pattern between strings.
  • Matrix Chain Multiplication: Find the cheapest way to multiply matrices.
  • Edit Distance: Minimum changes to convert one string to another.
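As one illustration from the list above, the Coin Change problem follows the same tabulation pattern as Fibonacci (a minimal sketch; the function name `min_coins` and the `-1` convention for unreachable amounts are our choices):

```python
def min_coins(coins, amount):
    # dp[a] = fewest coins summing to a; infinity marks "not reachable yet".
    INF = float('inf')
    dp = [0] + [INF] * amount
    for a in range(1, amount + 1):
        for c in coins:
            # Using coin c means one more coin than the best way to reach a - c.
            if c <= a and dp[a - c] + 1 < dp[a]:
                dp[a] = dp[a - c] + 1
    return dp[amount] if dp[amount] != INF else -1
```

For example, `min_coins([1, 2, 5], 11)` returns 3 (5 + 5 + 1), and amounts no combination can reach return -1.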

Real-Life Analogy

Imagine climbing stairs where some steps are broken. You remember how you got to each safe step so you don’t fall by repeating the same risky route again.
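This analogy maps onto a classic stair-climbing DP: count the ways to reach the top taking 1 or 2 steps at a time while skipping broken steps (a sketch under our own naming; `safe_ways` and the `broken` set are assumptions, not from the tutorial):

```python
def safe_ways(n, broken):
    # ways[i] = number of ways to reach step i with 1- or 2-step moves,
    # never landing on a broken step. Each ways[i] is computed once
    # and reused, just like remembering how you reached each safe step.
    ways = [0] * (n + 1)
    ways[0] = 1  # one way to stand at the bottom: do nothing
    for i in range(1, n + 1):
        if i in broken:
            continue  # broken steps stay at 0 ways
        ways[i] = ways[i - 1] + (ways[i - 2] if i >= 2 else 0)
    return ways[n]
```

With no broken steps, `safe_ways(4, set())` gives 5, the familiar Fibonacci-style count.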


Bottom-Up vs Top-Down

Feature          Bottom-Up (Tabulation)                Top-Down (Memoization)
Storage          Uses a table (array or matrix)        Uses a hash map or dictionary
Approach         Iterative                             Recursive
Function calls   None (usually faster, no stack use)   Many (uses call stack, can hit recursion limit)
Easy to debug    Yes                                   Slightly trickier

Pros

  • Avoids redundant work by keeping results ready.
  • Efficient even for problems with massive input sizes.
  • Often reduces time complexity from exponential to polynomial.

Cons

  • Needs extra space for storing answers.
  • Sometimes hard to recognize when DP is applicable.
  • Table/memo can grow large for complex inputs.
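The space cost can sometimes be reduced: when each state depends on only a few previous states, the full table is unnecessary. For Fibonacci, two variables suffice (a sketch; the name `fib_constant_space` is ours):

```python
def fib_constant_space(n):
    # Each value depends only on the previous two, so the O(n) table
    # shrinks to two rolling variables: O(1) extra space.
    if n <= 1:
        return n
    prev, curr = 0, 1
    for _ in range(2, n + 1):
        prev, curr = curr, prev + curr
    return curr
```

This "rolling variables" trick applies to many DP problems whose recurrence looks back a fixed number of steps.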

Summary

Dynamic Programming helps you solve tough problems faster by remembering solutions to smaller parts instead of solving them over and over. It's like building a staircase where each step is already solid, so you climb to the top without ever looking back.


Prefer Learning by Watching?

Watch these YouTube tutorials to understand Dynamic Programming visually:

What You'll Learn:
  • 📌 What Is Dynamic Programming and How To Use It
  • 📌 Dynamic Programming Explained (Practical Examples)