📄️ 6.1 Algorithm Explanation
Here, we quote Wikipedia’s description: "Dynamic programming (DP) is effective for finding optimal solutions to problems with many overlapping subproblems. It decomposes the original problem into subproblems and, to avoid solving the same subproblem repeatedly, computes and stores their results progressively, starting from the simplest cases until the whole problem is solved. In other words, dynamic programming saves the results of recursive calls and avoids spending time re-solving identical subproblems. Dynamic programming applies only to problems with optimal substructure, meaning the globally optimal solution can be built from locally optimal solutions to subproblems (some problems do not fully satisfy this requirement, so approximations may be introduced). Simply put, the problem can be broken down into subproblems and solved from there."
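The overlapping-subproblem idea in the quote can be illustrated with a minimal sketch: computing Fibonacci numbers with memoization. The function name `fib` is our own; the point is only that each subproblem is solved once and its result is reused.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n: int) -> int:
    # Base cases: fib(0) = 0, fib(1) = 1
    if n < 2:
        return n
    # fib(n-1) and fib(n-2) share many subproblems; the cache stores
    # each result so it is computed only once instead of exponentially often.
    return fib(n - 1) + fib(n - 2)

print(fib(30))  # 832040
```

Without the cache, the same `fib(k)` calls would be recomputed an exponential number of times; with it, the run takes only O(n) subproblem evaluations.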
📄️ 6.2 Basic Dynamic Programming: One-Dimensional
70. Climbing Stairs
📄️ 6.3 Basic Dynamic Programming: Two-Dimensional
64. Minimum Path Sum
📄️ 6.4 Partition Type Problems
279. Perfect Squares
📄️ 6.5 Subsequence Problem
300. Longest Increasing Subsequence
📄️ 6.6 Knapsack Problem
The knapsack problem is an NP-complete combinatorial optimization problem: given n items and a knapsack with weight capacity w, where each item has a weight and a value, determine which items to put into the knapsack so that the total value is maximized. If each item may be chosen at most once, the problem is called the 0-1 knapsack problem; if each item may be chosen an unlimited number of times, it is called the unbounded knapsack problem.
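A minimal sketch of the 0-1 variant, using the standard one-dimensional DP where `dp[c]` is the best value achievable with capacity `c` (the function name and signature here are illustrative):

```python
def knapsack_01(weights: list[int], values: list[int], capacity: int) -> int:
    # dp[c] = maximum total value achievable with knapsack capacity c
    dp = [0] * (capacity + 1)
    for w, v in zip(weights, values):
        # Iterate capacity downward so each item is used at most once:
        # dp[c - w] still refers to the state *before* this item was considered.
        for c in range(capacity, w - 1, -1):
            dp[c] = max(dp[c], dp[c - w] + v)
    return dp[capacity]

# Items (weight, value): (1, 15), (3, 20), (4, 30); capacity 4.
# Best choice is the first two items: weight 1 + 3 = 4, value 15 + 20 = 35.
print(knapsack_01([1, 3, 4], [15, 20, 30], 4))  # 35
```

For the unbounded variant, the inner loop runs upward (`range(w, capacity + 1)`) instead, which allows an item to be picked again on top of a state that already includes it.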
📄️ 6.7 String Editing
72. Edit Distance
📄️ 6.8 Stock Trading
Stock trading problems can often be solved with dynamic programming. For more complex variants, such as those requiring cooldown periods or charging transaction fees, modeling the trader's possible states as a state machine and using dynamic programming to transition between them is a common approach.
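As a sketch of the state-machine idea, consider the cooldown variant (LeetCode 309): each day the trader is in one of three states, and the DP tracks the best profit per state. The function name and state names below are our own labels, not part of the problem statement.

```python
def max_profit_with_cooldown(prices: list[int]) -> int:
    # Three states per day:
    #   hold - currently holding a share
    #   sold - sold a share today (tomorrow is a forced cooldown)
    #   rest - holding nothing and free to buy
    hold, sold, rest = float('-inf'), 0, 0
    for p in prices:
        prev_sold = sold
        sold = hold + p              # sell the held share today
        hold = max(hold, rest - p)   # keep holding, or buy today from rest
        rest = max(rest, prev_sold)  # stay idle, or finish yesterday's cooldown
    return max(sold, rest)

print(max_profit_with_cooldown([1, 2, 3, 0, 2]))  # 3 (buy, sell, cooldown, buy, sell)
```

Each transition mirrors an edge in the state machine; adding a transaction fee only changes one edge (subtract the fee when selling), which is why this modeling style generalizes well across the stock problems.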
📄️ 6.9 Exercises
Basic Difficulty