
Dynamic Programming

Dynamic Programming is a powerful paradigm for solving problems. If a problem can be broken down into smaller sub-problems, and those sub-problems can in turn be broken down further, and some of the sub-problems turn out to overlap, then it is a DP problem. The problem is broken up into a series of overlapping sub-problems, and solutions to larger and larger sub-problems are built up from the smaller ones. The core idea of Dynamic Programming is to avoid repeated work by remembering partial results, and this concept finds application in many real-life situations.

In programming, Dynamic Programming is a powerful technique that allows one to solve problems in O(n^2) or O(n^3) time for which a naive approach would take exponential time.
The following computer problems can be solved using the dynamic programming approach (a minimal sketch for the first of them follows the list):

  • Fibonacci number series
  • Knapsack problem
  • Tower of Hanoi
  • All pair shortest path by Floyd-Warshall
  • Shortest path by Dijkstra
  • Project scheduling
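
As a minimal illustration of the idea, here is a bottom-up sketch for the Fibonacci series, the first item above: each value is computed once from two stored sub-results, so the exponential recursion tree of the naive approach collapses to O(n) work. The function name and driver are illustrative only.

#include <stdio.h>

// Bottom-up DP for Fibonacci: each F(i) is computed once from the two
// already-stored sub-results instead of being recomputed exponentially
// many times by naive recursion.
long long fib(int n)
{
    long long table[n + 2]; // table[i] holds F(i); +2 so indices 0 and 1 always exist
    table[0] = 0;
    table[1] = 1;
    for (int i = 2; i <= n; i++)
        table[i] = table[i - 1] + table[i - 2];
    return table[n];
}

int main(void)
{
    printf("F(10) = %lld\n", fib(10)); // prints F(10) = 55
    return 0;
}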

Matrix Chain Multiplication | Dynamic Programming

Matrix chain multiplication is an optimization problem that can be solved using dynamic programming. Given a sequence of matrices, the goal is to find the most efficient way to multiply these matrices. The problem is not actually to perform the multiplications, but merely to decide the sequence of the matrix multiplications involved.

There are many options because matrix multiplication is associative. In other words, no matter how the product is parenthesized, the result obtained will remain the same. For example, for four matrices A, B, C, and D, we would have:

((AB)C)D = (A(BC))D = (AB)(CD) = A((BC)D) = A(B(CD)).

However, the order in which the product is parenthesized affects the number of simple arithmetic operations needed to compute the product, or the efficiency.
For example,
if A is a 10 × 30 matrix, B is a 30 × 5 matrix, and C is a 5 × 60 matrix, then computing (AB)C needs (10×30×5) + (10×5×60) = 1500 + 3000 = 4500 operations, while computing A(BC) needs (30×5×60) + (10×30×60) = 9000 + 18000 = 27000 operations.

Clearly the first method is more efficient. With this information, the problem statement can be refined as "how to determine the optimal parenthesization of a product of n matrices?" Checking each possible parenthesization (brute force) would require a run-time that is exponential in the number of matrices, which is very slow and impractical for large n. A quicker solution can be achieved by breaking the problem up into a set of related subproblems. By solving each subproblem once and reusing its solution, the required run-time can be drastically reduced.

The following bottom-up approach [1] computes, for each 2 ≤ k ≤ n, the minimum costs of all subchains of length k, using the costs of the smaller subchains already computed. It runs in O(n^3) time, the same asymptotic runtime as a memoized recursive solution, and requires no recursion.

Implementation (C99):

#include <limits.h>

// Matrix A[i] has dimension dims[i-1] x dims[i] for i = 1..n, so dims
// holds n + 1 entries. m[i][j] is the minimum number of scalar
// multiplications (i.e., cost) needed to compute the product
// A[i]A[i+1]...A[j]; s[i][j] records the split index k that achieves
// that cost. Both tables are indexed from 1.
void MatrixChainOrder(int n, const int dims[n + 1],
                      int m[n + 1][n + 1], int s[n + 1][n + 1])
{
    // The cost is zero when "multiplying" a single matrix.
    for (int i = 1; i <= n; i++)
        m[i][i] = 0;

    for (int len = 2; len <= n; len++) {          // subchain lengths
        for (int i = 1; i <= n - len + 1; i++) {
            int j = i + len - 1;
            m[i][j] = INT_MAX;
            for (int k = i; k <= j - 1; k++) {    // try every split point
                int cost = m[i][k] + m[k + 1][j]
                         + dims[i - 1] * dims[k] * dims[j];
                if (cost < m[i][j]) {
                    m[i][j] = cost;
                    s[i][j] = k; // split that achieved the minimal cost
                }
            }
        }
    }
}
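
A small driver, assuming the MatrixChainOrder above is in scope, reproduces the worked example: with dimensions 10 × 30, 30 × 5, and 5 × 60 it finds the minimum cost 4500 and a split after the second matrix, i.e. (AB)C.

#include <stdio.h>

// Driver for the MatrixChainOrder above, using the 10x30, 30x5, 5x60
// chain from the worked example.
int main(void)
{
    int dims[] = {10, 30, 5, 60}; // A: 10x30, B: 30x5, C: 5x60
    int n = 3;                    // three matrices in the chain
    int m[4][4], s[4][4];         // tables sized (n+1) x (n+1)

    MatrixChainOrder(n, dims, m, s);
    printf("minimum cost: %d\n", m[1][n]);       // prints 4500
    printf("best split after A[%d]\n", s[1][n]); // prints 2, i.e. (AB)C
    return 0;
}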

 

0/1 Knapsack Problem Dynamic Programming
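
The interactive lesson itself is not reproduced here. As a rough orientation, this is a minimal bottom-up sketch of the standard 0/1 knapsack recurrence; the function name and test values are illustrative, not taken from the lesson.

#include <stdio.h>

// 0/1 knapsack: best[c] holds the maximum value achievable with capacity c.
// Scanning the capacity downwards ensures each item is used at most once.
int knapsack(int n, const int weight[], const int value[], int capacity)
{
    int best[capacity + 1];
    for (int c = 0; c <= capacity; c++)
        best[c] = 0;
    for (int i = 0; i < n; i++)
        for (int c = capacity; c >= weight[i]; c--) {
            int with_item = best[c - weight[i]] + value[i];
            if (with_item > best[c])
                best[c] = with_item;
        }
    return best[capacity];
}

int main(void)
{
    int w[] = {1, 3, 4, 5}, v[] = {1, 4, 5, 7};
    printf("%d\n", knapsack(4, w, v, 7)); // prints 9 (items of weight 3 and 4)
    return 0;
}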

Subset Sum Problem Dynamic Programming
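
Again as orientation rather than the lesson's own code: subset sum follows the same downward capacity scan as 0/1 knapsack, with booleans ("is this sum reachable?") in place of values.

#include <stdbool.h>
#include <stdio.h>

// Subset sum: reachable[s] is true iff some subset of the items sums to s.
bool subset_sum(int n, const int items[], int target)
{
    bool reachable[target + 1];
    for (int s = 0; s <= target; s++)
        reachable[s] = false;
    reachable[0] = true; // the empty subset sums to 0
    for (int i = 0; i < n; i++)
        for (int s = target; s >= items[i]; s--)
            if (reachable[s - items[i]])
                reachable[s] = true;
    return reachable[target];
}

int main(void)
{
    int a[] = {3, 34, 4, 12, 5, 2};
    printf("%s\n", subset_sum(6, a, 9) ? "yes" : "no"); // 4 + 5 = 9, so prints yes
    return 0;
}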

Dynamic Programming | Longest Common Subsequence

LCS Problem Statement: Given two sequences, find the length of the longest subsequence present in both of them. A subsequence is a sequence that appears in the same relative order but is not necessarily contiguous. For example, “abc”, “abg”, “bdf”, “aeg”, “acefg”, etc. are subsequences of “abcdefg”. So a string of length n has 2^n different possible subsequences.

It is a classic computer science problem, the basis of diff (a file comparison program that outputs the differences between two files), and has applications in bioinformatics.

Examples:
The LCS of the input sequences “ABCDGH” and “AEDFHR” is “ADH”, of length 3.
The LCS of the input sequences “AGGTAB” and “GXTXAYB” is “GTAB”, of length 4.
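
A compact tabulation sketch (illustrative, not the lesson's own code) that reproduces the two examples above:

#include <stdio.h>
#include <string.h>

// LCS length via the classic DP table: L[i][j] is the length of the
// LCS of the first i characters of x and the first j characters of y.
int lcs_length(const char *x, const char *y)
{
    int m = (int)strlen(x), n = (int)strlen(y);
    int L[m + 1][n + 1];
    for (int i = 0; i <= m; i++)
        for (int j = 0; j <= n; j++) {
            if (i == 0 || j == 0)
                L[i][j] = 0;                   // an empty prefix has LCS 0
            else if (x[i - 1] == y[j - 1])
                L[i][j] = L[i - 1][j - 1] + 1; // matching characters extend the LCS
            else
                L[i][j] = L[i - 1][j] > L[i][j - 1]
                            ? L[i - 1][j] : L[i][j - 1];
        }
    return L[m][n];
}

int main(void)
{
    printf("%d\n", lcs_length("ABCDGH", "AEDFHR"));  // prints 3 ("ADH")
    printf("%d\n", lcs_length("AGGTAB", "GXTXAYB")); // prints 4 ("GTAB")
    return 0;
}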

Dynamic Programming Quiz

This quiz contains questions on the dynamic programming topics covered above.
