Algorithms Tutorial: Dynamic Programming Part 3

Aug 18, 2019 · Santa Clara, United States of America

This will be a continuation (Part 3) of the dynamic programming series. You need not have attended Parts 1 and 2, but this session builds on what we covered there, so without that background it may be difficult to follow. In short, this is a deeper dive into dynamic programming for those who already have some experience with the subject.

We will start the session by looking at a classic dynamic programming problem, the "knapsack problem". We will see the usual treatment of it, which is quite straightforward given a solid understanding of dynamic programming principles. Then we will go deeper. The usual solution solves the problem exactly, but is only efficient when certain conditions are met (specifically, when the knapsack capacity is small, since the running time grows with it). It is also known that when those conditions aren't met, the problem is NP-complete, meaning it is unlikely to have an efficient exact algorithm. However, we will use dynamic programming to design an efficient algorithm that approximates the answer to any arbitrarily chosen accuracy (say, 99.99 percent), effectively "almost solving" the problem.
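To make the first part concrete, here is a minimal sketch of the usual exact solution to the 0/1 knapsack problem (the standard textbook recurrence, not necessarily the exact code we will write in the session). It runs in O(n·W) time for n items and capacity W, which is efficient only when W is small, matching the condition mentioned above.

```python
def knapsack(weights, values, capacity):
    # dp[c] = best total value achievable using knapsack capacity at most c
    dp = [0] * (capacity + 1)
    for w, v in zip(weights, values):
        # Iterate capacities downward so each item is used at most once (0/1 version)
        for c in range(capacity, w - 1, -1):
            dp[c] = max(dp[c], dp[c - w] + v)
    return dp[capacity]
```

The approximation idea teased above works by scaling all item values down by a carefully chosen factor and running an analogous DP over total value instead of total weight, trading a small, controllable loss in accuracy for a running time that no longer depends on how large the capacity or values are.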

Then, later in the session, we will gain more perspective on how solving a problem top-down vs. bottom-up can affect the time complexity of the solution. We learned earlier that these do the same thing and so should have the same time complexity, but the truth is slightly more subtle: which technique you choose can sometimes affect implementation details in ways that change the time complexity.
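As a preview of that subtlety, here is a hedged sketch using subset-sum as an illustrative problem (not necessarily the example we will use in the session): a memoized top-down solution only computes the states its recursion actually reaches, while a typical bottom-up solution fills the entire table regardless.

```python
from functools import lru_cache

def subset_sum_topdown(nums, target):
    # Top-down: only states (i, t) actually reached by the recursion get computed.
    @lru_cache(maxsize=None)
    def can(i, t):
        if t == 0:
            return True
        if i == len(nums) or t < 0:
            return False
        return can(i + 1, t) or can(i + 1, t - nums[i])
    result = can(0, target)
    visited = can.cache_info().currsize  # number of states actually computed
    return result, visited

def subset_sum_bottomup(nums, target):
    # Bottom-up: fills all (n + 1) * (target + 1) table cells no matter what.
    n = len(nums)
    dp = [[False] * (target + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        dp[i][0] = True  # the empty selection always sums to 0
    for i in range(n - 1, -1, -1):
        for t in range(1, target + 1):
            dp[i][t] = dp[i + 1][t] or (t >= nums[i] and dp[i + 1][t - nums[i]])
    return dp[0][target]
```

With inputs like `nums = (100, 200, 300)` and `target = 600`, the top-down version touches only a handful of states, while the bottom-up version fills over two thousand cells; both give the same answer, but their effective running times differ.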

What we covered in earlier sessions:

In Part 1, we covered how to solve a problem using dynamic programming, when to use dynamic programming, and the difference between bottom-up and top-down approaches. In Part 2, we went deeper: the essence of dynamic programming is to find a recursive formula for the problem whose function variables represent everything you need to solve the remainder of the problem from that point forward, capturing the problem's "history" or "state" as compactly as possible.
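As a quick illustration of what "state" means here (using the classic coin-change problem purely as an example, not necessarily one from the earlier sessions): a single recursion variable, the remaining amount, captures everything needed to solve the rest of the problem from that point forward.

```python
from functools import lru_cache

def min_coins(coins, amount):
    # State: the remaining amount alone determines the answer from here on,
    # so it is the only argument the recursion needs.
    @lru_cache(maxsize=None)
    def solve(remaining):
        if remaining == 0:
            return 0
        best = float("inf")
        for c in coins:
            if c <= remaining:
                best = min(best, 1 + solve(remaining - c))
        return best
    result = solve(amount)
    return result if result != float("inf") else -1
```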

If you don't understand the aforementioned material, you can review the recordings of the previous sessions. In Part 1, we covered:

Some problem solving:

Attendees were asked to code the basic coin collector problem discussed at the end as homework. Here is a reference solution to view after you've tried it:
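In case the link above is unavailable, here is a minimal bottom-up sketch, assuming the standard formulation of the basic coin collector problem (start at the top-left of a grid of coin values, move only right or down, and maximize the total collected); the exact formulation used in the session may differ.

```python
def max_coins(grid):
    # dp[r][c] = most coins collectable on any right/down path ending at (r, c)
    rows, cols = len(grid), len(grid[0])
    dp = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            best_prev = 0
            if r > 0:
                best_prev = dp[r - 1][c]          # arrived from above
            if c > 0:
                best_prev = max(best_prev, dp[r][c - 1])  # arrived from the left
            dp[r][c] = grid[r][c] + best_prev
    return dp[rows - 1][cols - 1]
```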

Video for part 2:

Attendees were asked to code the follow-up versions of the coin-collector problem where you can additionally move upwards:
(1) Where you can't visit the same cell twice. My top-down and bottom-up solutions:
(2) Where you can visit the same cell as many times as you want, but only collect its value the first time around.

Our YouTube channel, which has all the available recordings:

Event organizers
  • Tech Interviews and Competitive Programming

    Brush up on your technical problem-solving skills together in a friendly environment where you can feel free to share your job-search experiences and role-play a short series of practice coding interviews with a study partner in a relaxed setting. This group is meant to allow participants to build up their self-confidence by solving a range of problems from easy to challenging, and to become comfortable with working through a problem on paper with another person. For more experienced members we also have wor
