"Dynamic programming is just careful brute force — you solve every subproblem, but you remember what you've already solved."
Imagine you're climbing a staircase and want to count how many ways you can reach the top. At each step, you can take 1 or 2 stairs. How do you think about this?
The brute force approach: try every possible combination. But this explodes exponentially.
The DP insight: The number of ways to reach step 5 depends only on how many ways you can reach steps 3 and 4. If you know those, you just add them up.
This is the essence of 1D Linear DP: build solutions from smaller subproblems along a single dimension.
Every 1D DP problem follows the same skeleton:
1. DEFINE STATE: What does dp[i] represent?
2. WRITE TRANSITION: How does dp[i] relate to previous values?
3. SET BASE CASES: What are the starting values?
4. FIND ANSWER: Where is the final result?
5. OPTIMIZE SPACE: Can we reduce memory?
Mental Model: Think of dp[i] as "the answer to the problem if the input only went up to index i."
Mental Model: A branching path where you sum all possible routes.
```
          Step 5
         /      \
    Step 4     Step 3
    /    \     /    \
  ...    ...  ...   ...
```
To reach step 5, you add:
- Ways to reach step 4 (then take 1 step)
- Ways to reach step 3 (then take 2 steps)
Formula: dp[i] = dp[i-1] + dp[i-2]
Problems: LC 70 (Climbing Stairs), LC 746 (Min Cost Climbing)
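The same additive skeleton stretches to LC 746 by swapping the sum for a min and carrying a cost. A minimal sketch (the function name `min_cost_climbing` is ours, not LeetCode's):

```python
def min_cost_climbing(cost):
    """LC 746 sketch: dp[i] = cheapest total paid once you commit to leaving step i.
    You may start from step 0 or 1, and the 'top' is one hop past the last step."""
    n = len(cost)
    dp = [0] * n
    dp[0], dp[1] = cost[0], cost[1]
    for i in range(2, n):
        # Pay this step's cost, arriving from whichever of the two previous steps was cheaper.
        dp[i] = cost[i] + min(dp[i - 1], dp[i - 2])
    # The final hop to the top can start from either of the last two steps.
    return min(dp[-1], dp[-2])
```

Note the only structural changes from Climbing Stairs: `+` becomes `min`, and the answer moves from `dp[n]` to `min(dp[-1], dp[-2])`.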
Mental Model: At each point, you make an optimal choice.
```
House: [2, 7, 9, 3, 1]
        ↓
  Rob or skip?
```
At each house, you choose:
- Skip: keep the best you had before → dp[i-1]
- Rob: take this house + best from non-adjacent → dp[i-2] + nums[i]
Formula: dp[i] = max(dp[i-1], dp[i-2] + nums[i])
Problems: LC 198 (House Robber), LC 121 (Best Time Buy/Sell Stock)
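The transition above drops straight into the five-step skeleton. A hedged sketch for LC 198:

```python
def rob(nums):
    """LC 198 sketch: dp[i] = best loot considering houses 0..i."""
    if not nums:
        return 0
    if len(nums) == 1:
        return nums[0]
    dp = [0] * len(nums)
    dp[0] = nums[0]               # only one house: rob it
    dp[1] = max(nums[0], nums[1]) # two houses: rob the richer one
    for i in range(2, len(nums)):
        # Skip house i (dp[i-1]) or rob it plus the best non-adjacent result (dp[i-2] + nums[i]).
        dp[i] = max(dp[i - 1], dp[i - 2] + nums[i])
    return dp[-1]
```

On the running example `[2, 7, 9, 3, 1]`, the optimal choice is houses 0, 2, and 4 for a total of 12.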
Many DP problems reduce to one question at each step:
"Should I include the current element or exclude it?"
| Decision | Effect | When to Choose |
|---|---|---|
| Include | Add value, skip some previous | Current element is valuable |
| Exclude | Keep previous best, skip current | Previous result is better |
The transition becomes: dp[i] = best(include_option, exclude_option)
Most 1D DP only needs the last 1-2 values. Why store an entire array?
Before (O(n) space):
```python
def climb_stairs(n):
    dp = [0] * (n + 1)
    dp[0], dp[1] = 1, 1
    for i in range(2, n + 1):
        dp[i] = dp[i - 1] + dp[i - 2]
    return dp[n]
```
After (O(1) space):
```python
def climb_stairs(n):
    prev2, prev1 = 1, 1
    for i in range(2, n + 1):
        prev2, prev1 = prev1, prev2 + prev1
    return prev1
```
When to optimize:
- Transition only uses last k values → O(k) space
- Don't need to reconstruct the path → safe to optimize
- Not doing multiple queries → no need to keep full array
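The same rolling-variable trick applies to the include/exclude pattern. A sketch of House Robber in O(1) space (the name `rob_optimized` is ours):

```python
def rob_optimized(nums):
    """House Robber with two rolling variables: only dp[i-1] and dp[i-2] are ever live."""
    prev2 = prev1 = 0  # best loot through i-2 and i-1; zero houses seen so far
    for x in nums:
        # New best is max(exclude current, include current); shift the window forward.
        prev2, prev1 = prev1, max(prev1, prev2 + x)
    return prev1
```

Initializing both variables to 0 also handles the empty and single-house cases without special branches.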
"How many distinct ways to climb n stairs?" "Count paths from start to end"
Action: Additive DP, sum transitions.
"Rob houses but can't rob neighbors" "Maximum sum of non-consecutive elements"
Action: Include/exclude DP.
"Best time to buy and sell" "Maximum subarray sum"
Action: Running min/max (implicit DP).
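For LC 121, the "DP table" collapses to a single running minimum. A minimal sketch:

```python
def max_profit(prices):
    """LC 121 sketch: the running minimum of prices seen so far is the implicit DP state."""
    best = 0                  # best profit achievable so far (0 = never trade)
    lowest = float("inf")     # cheapest buy price seen up to this day
    for p in prices:
        lowest = min(lowest, p)          # best place to have bought
        best = max(best, p - lowest)     # best profit if we sell today
    return best
```

This is still DP in spirit: `best` at day i is built from `best` at day i-1 plus one local decision.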
"First and last elements are adjacent"
Action: Split into two linear subproblems.
Problem: Off-by-one errors in initialization.
```python
# Wrong: dp[1] should be 1, not 2 for climbing stairs
dp[0], dp[1] = 1, 2  # ERROR
# Right: one way to reach step 0 (stay put), one way to reach step 1
dp[0], dp[1] = 1, 1
```
Solution: Trace through small examples manually.
Two different state definitions:
- dp[i] = best solution using elements 0..i (can include i)
- dp[i] = best solution ending at i (must include i)
These require different transitions!
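Maximum Subarray (LC 53) is the classic case of the "must include i" definition. A sketch, assuming that state choice:

```python
def max_subarray(nums):
    """Kadane-style DP: dp[i] = best sum of a subarray that MUST end at index i."""
    dp = [0] * len(nums)
    dp[0] = nums[0]
    for i in range(1, len(nums)):
        # Either extend the best subarray ending at i-1, or start fresh at i.
        dp[i] = max(nums[i], dp[i - 1] + nums[i])
    # Because the state must end at i, the answer is the best over ALL i, not dp[-1].
    return max(dp)
```

With the other definition ("best using elements 0..i"), the answer would sit in `dp[-1]` instead; pick the state first, and the transition and answer location follow.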
```python
# Always handle small inputs before the loop
if n <= 2:
    return n  # or appropriate base case
```
| Problem | Answer Location |
|---|---|
| Reach step n | dp[n] |
| Best among all | max(dp) or dp[n-1] |
| Min cost to go beyond | min(dp[n-1], dp[n-2]) |
When first and last elements conflict (can't both be selected):
Solution: Solve two separate linear problems:
- Exclude the last element: solve for arr[0:n-1]
- Exclude the first element: solve for arr[1:n]
Take the better of the two results.
Why it works: By excluding one endpoint, you break the circle into a line.
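A sketch of the decomposition for LC 213, reusing the rolling-variable linear solver as a helper (the names `rob_circular` and `rob_line` are ours):

```python
def rob_circular(nums):
    """LC 213 sketch: break the circle into two overlapping lines, take the better."""
    def rob_line(houses):
        # Standard linear House Robber in O(1) space.
        prev2 = prev1 = 0
        for x in houses:
            prev2, prev1 = prev1, max(prev1, prev2 + x)
        return prev1

    if len(nums) == 1:
        return nums[0]  # single house: the "two lines" would both be empty
    # Line 1 drops the last house; line 2 drops the first. Neither line
    # lets the first and last house be robbed together, so the circle is satisfied.
    return max(rob_line(nums[:-1]), rob_line(nums[1:]))
```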
Master 1D Linear DP through this sequence:
- LC 70 (Climbing Stairs) — Pure Fibonacci, additive DP
- LC 746 (Min Cost Climbing) — Add cost, switch to min
- LC 198 (House Robber) — Include/exclude framework
- LC 213 (House Robber II) — Circular decomposition
- LC 121 (Best Time Buy/Sell) — Implicit DP with running min
1D Linear DP is about building optimal solutions from optimal sub-solutions.
The key insight: if you know the best answer for smaller problems, you can combine them to get the best answer for the current problem.
This works because of two properties:
- Optimal Substructure: Optimal solution contains optimal solutions to subproblems
- Overlapping Subproblems: Same subproblems are solved multiple times
DP avoids redundant computation by remembering what you've already solved.
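The same "remember what you've already solved" idea, seen top-down: memoizing the naive recursion turns the exponential staircase count into linear work. A sketch using the standard-library cache:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def climb(n):
    """Top-down view: each subproblem is computed once, then served from the cache."""
    if n <= 1:
        return 1  # one way to stand at step 0 or step 1
    return climb(n - 1) + climb(n - 2)
```

Without the cache, `climb(40)` recomputes the same subproblems billions of times; with it, each of the 41 states is solved exactly once.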
"Don't recalculate. Remember."