dynamic programming

2 min read

overview

  • breaking down a big problem into smaller overlapping subproblems, solving each subproblem once, and reusing previously computed results

    • overlapping subproblems - store and reuse the results of the subproblems

    • optimal substructure - the optimal solution of the bigger problem can be built from the optimal solutions of its smaller subproblems

  • 2 techniques used in dynamic programming

    • top down - dfs + memoization

      • recursion (calling function repeatedly) + caching (storing results)

      • entries are filled on demand

    • bottom up - tabulation

      • iteration + caching (a table filled with a loop instead of recursion)

      • all entries are filled one by one, starting from the base cases (see the sketch after this list)
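
a minimal bottom-up tabulation sketch for the same recurrence used in the climbing stairs example below (climb(n) = climb(n-1) + climb(n-2)); the function and variable names here are illustrative, not from the original notes

def climb_tab(n):
    # bottom-up: fill the table from the base cases upward, every entry gets computed
    if n <= 1:
        return 1
    dp = [0] * (n + 1)
    dp[0], dp[1] = 1, 1              # base cases: one way to reach stair 0 or stair 1
    for i in range(2, n + 1):
        dp[i] = dp[i - 1] + dp[i - 2]
    return dp[n]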

when to use dp?

  • maximum/longest or minimum/shortest value you can get from doing something on a sequence

  • how many ways are there to do something

  • is it possible to do something

greedy vs dp

  • a dp solution considers all choices, so its answer may not be the locally best at every state, unlike a greedy algorithm, which commits to the locally best choice at each step (see the sketch below)
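
a hypothetical illustration (coin change is not in the original notes, just a common example of greedy failing): with coins [1, 3, 4] and amount 6, greedy picks 4 + 1 + 1 = 3 coins, while dp finds 3 + 3 = 2 coins

coins = [1, 3, 4]
amount = 6

# greedy: repeatedly take the largest coin that still fits
greedy_count, remaining = 0, amount
for c in sorted(coins, reverse=True):
    while remaining >= c:
        remaining -= c
        greedy_count += 1

# dp: dp[a] = fewest coins to make amount a, considering every coin at every state
INF = float("inf")
dp = [0] + [INF] * amount
for a in range(1, amount + 1):
    dp[a] = min(dp[a - c] + 1 for c in coins if a - c >= 0)

print(greedy_count, dp[amount])      # 3 2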

divide & conquer vs dp

  • dp has overlapping subproblems, while divide and conquer's subproblems do not overlap (e.g. merge sort's two halves share no work, but climb(n-1) and climb(n-2) both need climb(n-3))

classic examples

memoization

  • type of caching - store data to be used later

  • types of memoization --> 1d, 2d, 3d --> depends on the number of inputs (state variables) that affect the output

  • how to make 2d arrays
a = [[0 for _ in range(cols)] for _ in range(rows)]
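
the nested comprehension matters: [[0] * cols] * rows copies references to a single row list, so writing to one cell changes every row. a quick check (rows/cols values here are just for illustration):

rows, cols = 3, 4

bad = [[0] * cols] * rows                # every row is the same list object
bad[0][0] = 1                            # bad is now [[1, 0, 0, 0], [1, 0, 0, 0], [1, 0, 0, 0]]

good = [[0 for _ in range(cols)] for _ in range(rows)]
good[0][0] = 1                           # good is [[1, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]]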

problem: climbing stairs / fibonacci

  • using recursion

    • can reach nth stair from (n-1)th stair or (n-2)th stair

    • distinct # of ways to get to nth stair =

      • distinct # of ways to get to (n-1)th stair + distinct # of ways to get to (n-2)th stair
    • climb(n) = climb(n-1) + climb(n-2)

def solution(n):

    # dp[i] caches the number of distinct ways to reach stair i (-1 = not computed yet)
    def climbMemo(n, dp):
        if dp[n] != -1:              # reuse the cached result instead of recursing again
            return dp[n]
        dp[n] = climbMemo(n - 1, dp) + climbMemo(n - 2, dp)
        return dp[n]

    if n == 0 or n == 1:
        return 1
    dp = [-1] * (n + 1)
    dp[0] = 1                        # base cases: one way to reach stair 0 or stair 1
    dp[1] = 1
    return climbMemo(n, dp)
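
a quick usage check, assuming the definition above; for very large n the recursion may hit Python's default recursion limit, which the bottom-up version sketched earlier avoids

print(solution(2))   # 2 -> (1+1), (2)
print(solution(5))   # 8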