MATRIX CHAIN MULTIPLICATION | What is matrix chain multiplication? | Matrix chain multiplication problem solving techniques | Implementation | Time complexity

CONTENTS

List of topics that we cover in this article:

  • What is matrix chain multiplication?
  • Matrix chain multiplication problem solving techniques.
  • Implementation
  • Time complexity

MATRIX CHAIN MULTIPLICATION

  • What is matrix chain multiplication?

Matrix chain multiplication is one of the most popular problems that can be solved using dynamic programming. In this problem we are given a sequence (chain) of matrices, and we must figure out the minimum number of operations needed to multiply all of them together. The main goal is not to actually multiply the matrices; rather, we must decide which pairs to multiply, and in what order, so that the total cost of the multiplication is minimum. So, you can clearly see that the problem is not to compute the product itself, but to know the cheapest way of multiplying the chain.

  • Matrix chain multiplication can be solved by using:
  • Recursion
  • Top down dynamic programming (memoization)
  • Bottom up dynamic programming (tabulation)

For example, let’s take three matrices A, B, C with sizes:

  • A = 10*30
  • B = 30*5
  • C = 5*60

 

Now we need to find the product A.B.C.

Actually, we do not need the resulting matrix; instead we should tell the minimum number of operations it will take to get the result.

When we multiply A.B.C:

We may first compute (AB) and later multiply the result with C, i.e. (AB)C, or we may compute (BC) first and later multiply A with it, i.e. A(BC).

 

Before going further, let us first see how many operations are required to multiply two matrices.

A matrix of size i*k can be multiplied with a matrix of size k*j only if the value of k (the inner dimension) is the same for both.

The resulting matrix then has size i*j.

          For example, if the size of matrix A is 1*2 and the size of matrix B is 2*3, then the size of the resulting matrix is 1*3.

Therefore, the number of operations required to multiply the two matrices is i*k*j.

Back to the two possibilities that we listed earlier, (AB)C and A(BC):

(AB)C -> (10*30*5) + (10*5*60) = 1500 + 3000 = 4500 operations are required.

A(BC) -> (30*5*60) + (10*30*60) = 9000 + 18000 = 27000 operations are required.

So we can clearly say that 4500 is the minimum, and we select the (AB)C possibility to perform the multiplication.
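As a quick check of this arithmetic, here is a minimal Python sketch (the cost helper is just for illustration) that computes the cost of both parenthesizations from the dimensions above:

    # Dimensions: A = 10x30, B = 30x5, C = 5x60
    # Cost of multiplying an (i x k) matrix by a (k x j) matrix is i*k*j.
    def cost(i, k, j):
        return i * k * j

    # (AB)C: first A*B (10x30 by 30x5), then the 10x5 result by C (5x60)
    ab_c = cost(10, 30, 5) + cost(10, 5, 60)   # 1500 + 3000 = 4500

    # A(BC): first B*C (30x5 by 5x60), then A (10x30) by the 30x60 result
    a_bc = cost(30, 5, 60) + cost(10, 30, 60)  # 9000 + 18000 = 27000

    print(ab_c, a_bc)  # 4500 27000 -> (AB)C is cheaper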

  • Implementation:

 

Now let’s solve a problem to get a clear idea:

                             A1*A2*A3*A4

                            5*4  4*6  6*2  2*7

 

We can solve this problem in many ways, for example:

  • ((A1*A2)*A3)*A4
  • (A1*A2)*(A3*A4)

All such possibilities can be drawn as a tree structure.

The total number of ways to parenthesize the chain is given by the Catalan number T(n) = C(2n, n) / (n + 1), where n is the number of matrices minus one.

For our chain of four matrices, n = 3 and T(3) = 5.

So these matrices can be multiplied in five different ways, and we must find the way with the minimum number of operations among those 5. How can we know which one gives the minimum result? We could try all five ways by hand, but we follow the easier method given by dynamic programming. Dynamic programming says: try all possible ways, reuse the answers to the subproblems, and pick the best solution.
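The list earlier mentions recursion and top down dynamic programming; here is a minimal memoized (top-down) sketch in Python, assuming the dimensions are given as a list p where matrix Ai has size p[i-1]*p[i] (the function and variable names are illustrative):

    from functools import lru_cache

    # p = [5, 4, 6, 2, 7] means A1 = 5x4, A2 = 4x6, A3 = 6x2, A4 = 2x7
    def matrix_chain_order(p):
        n = len(p) - 1  # number of matrices

        @lru_cache(maxsize=None)
        def m(i, j):
            # Minimum cost of multiplying Ai..Aj
            if i == j:
                return 0
            return min(m(i, k) + m(k + 1, j) + p[i - 1] * p[k] * p[j]
                       for k in range(i, j))

        return m(1, n)

    print(matrix_chain_order([5, 4, 6, 2, 7]))  # 158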

Let’s observe the example:

Dynamic programming uses a tabulation (bottom-up) method:

A1. A2. A3. A4

5*4 4*6 6*2 2*7

Now let’s take A1 alone: it is not multiplied with anything, so m[1,1] = 0.

If we take A2 alone, it is not multiplied with anything either, so m[2,2] = 0.

Similarly, m[3,3] and m[4,4] are also 0.

Next, calculate m[1,2], that is, the cost of multiplying matrices 1 to 2.

A1.A2: the dimensions of A1 and A2 are

5*4 and 4*6, so the cost is 5*4*6 = 120. Update m[1,2] = 120. The chain was split after matrix 1, so we record the value 1 at the same position in the S table (the table of split points).

Calculate m[2,3], i.e. A2.A3 with dimensions 4*6 and 6*2: 4*6*2 = 48.

Similarly, m[3,4], i.e. A3.A4 with dimensions 6*2 and 2*7: 6*2*7 = 84.

Next, m[1,3], that is, multiplying the three matrices 1 to 3.

A1.A2.A3 can be split in two ways:

A1.(A2.A3) and (A1.A2).A3

Their dimensions are

5*4 (4*6 6*2)   and   (5*4 4*6) 6*2

A1 alone is nothing but m[1,1]

(A2.A3) is m[2,3]

m[1,1] + m[2,3] + 5*4*2 = 0 + 48 + 40 = 88

The other way is (A1.A2).A3:

m[1,2] + m[3,3] + 5*6*2 = 120 + 0 + 60 = 180

Comparing the two values, 88 is the minimum, so we take this value and update

m[1,3] = 88

To keep this simple, follow the general rule: m[i,j] = min over all splits k (i <= k < j) of m[i,k] + m[k+1,j] + p[i-1]*p[k]*p[j], where p = [5, 4, 6, 2, 7] is the list of dimensions.

Calculate m[2,4]:

m[2,2] + m[3,4] + 4*6*7 = 0 + 84 + 168 = 252

m[2,3] + m[4,4] + 4*2*7 = 48 + 0 + 56 = 104

104 is the minimum, so m[2,4] = 104.

 

Calculate m[1,4]:

m[1,1] + m[2,4] + 5*4*7 = 0 + 104 + 140 = 244

m[1,2] + m[3,4] + 5*6*7 = 120 + 84 + 210 = 414

m[1,3] + m[4,4] + 5*2*7 = 88 + 0 + 70 = 158

158 is the minimum, so m[1,4] = 158.

 

Note that we only need to fill the upper half of the table (the entries m[i,j] with i <= j), not the full table.
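Putting the whole walkthrough together, here is a minimal bottom-up (tabulation) sketch in Python that fills only that upper half of the m table, along with an S table of split points (the function name is illustrative):

    import sys

    def matrix_chain_tabulation(p):
        # p = [5, 4, 6, 2, 7]: matrix Ai has dimensions p[i-1] x p[i]
        n = len(p) - 1
        # m[i][j] = minimum cost of multiplying Ai..Aj, s[i][j] = best split point k
        m = [[0] * (n + 1) for _ in range(n + 1)]
        s = [[0] * (n + 1) for _ in range(n + 1)]

        # length is the chain length: first all pairs, then triples, and so on
        for length in range(2, n + 1):
            for i in range(1, n - length + 2):
                j = i + length - 1
                m[i][j] = sys.maxsize
                for k in range(i, j):
                    cost = m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                    if cost < m[i][j]:
                        m[i][j] = cost
                        s[i][j] = k
        return m, s

    m, s = matrix_chain_tabulation([5, 4, 6, 2, 7])
    print(m[1][4])  # 158, matching the table above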

  • Time complexity:

The table has n(n-1)/2 entries to fill, which is approximately n^2. For each entry we try every possible split point and keep only the minimum, which takes at most n work per entry.

So the time complexity of matrix chain multiplication is

n(n-1)/2 * n ≈ n^2 * n = n^3

i.e. the time complexity of matrix chain multiplication is O(n^3).

ALGORITHM DESIGN TECHNIQUES | Introduction | What are algorithm design techniques? | Divide and conquer | Greedy algorithm | Dynamic programming | Brute Force Backtracking and branch-and-bound

CONTENTS

List of topics that we are going to cover in this article:

  • Introduction
  • What are algorithm design techniques?
  • Divide and conquer
  • Greedy algorithm
  • Dynamic programming
  • Brute Force
  • Backtracking and branch-and-bound
  • Conclusion

ALGORITHM DESIGN TECHNIQUES

 

  • Introduction:

           Algorithm design techniques are used to solve different types of complex problems. We can apply more than one technique to a single problem; the most important thing is to choose a technique that suits the problem. We get an efficient solution when we use the proper technique for a problem. Now let’s see what the different types of techniques are.

 

  • Different types of techniques:
  • Divide and conquer:

Divide and conquer is one of the most popular algorithm design techniques. A divide and conquer algorithm works by dividing the problem into subproblems, solving the subproblems (usually recursively), and afterwards combining all the subproblem solutions to get the final solution.

  • Let’s see how a divide and conquer algorithm works:

Divide: Divide the problem into subproblems.

Conquer: Solve each subproblem, usually recursively.

Combine: Combine the solutions of the subproblems to obtain the solution of the original complex problem.

  • Examples: merge sort, quick sort (a minimal merge sort sketch follows).
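As an illustration of divide, conquer, and combine, here is a minimal merge sort sketch in Python (names chosen for illustration):

    def merge_sort(arr):
        # Divide: split the list into two halves
        if len(arr) <= 1:
            return arr
        mid = len(arr) // 2
        # Conquer: sort each half recursively
        left = merge_sort(arr[:mid])
        right = merge_sort(arr[mid:])
        # Combine: merge the two sorted halves
        merged, i, j = [], 0, 0
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:
                merged.append(left[i])
                i += 1
            else:
                merged.append(right[j])
                j += 1
        merged.extend(left[i:])
        merged.extend(right[j:])
        return merged

    print(merge_sort([5, 2, 9, 1, 7]))  # [1, 2, 5, 7, 9]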
  • Greedy algorithm:

The main goal of a greedy algorithm is to obtain an optimal solution to the problem. At each step we choose the option that looks best locally, in the hope that these local choices lead to a globally optimal solution; a small sketch follows the examples below.

  • Examples:

Minimal spanning tree

Shortest distance in graphs

Coin exchange problem

Greedy algorithm for knapsack problem

Huffman trees.
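For instance, here is a minimal greedy sketch for the coin exchange problem (note that the greedy choice is only guaranteed to be optimal for canonical coin systems such as the one assumed here):

    def greedy_coin_change(amount, coins=(25, 10, 5, 1)):
        # Greedy choice: always take the largest coin that still fits.
        result = []
        for coin in coins:            # coins listed from largest to smallest
            while amount >= coin:
                result.append(coin)
                amount -= coin
        return result

    print(greedy_coin_change(63))  # [25, 25, 10, 1, 1, 1]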

  • Dynamic programming:

Dynamic programming works much like divide and conquer, but the difference is that after breaking the problem down into subproblems (and those into smaller subproblems), we store the results of the subproblems to avoid unnecessary recomputation. Dynamic programming is used to solve many different types of problems; a short sketch follows the examples below.

Examples:

Matrix chain multiplication (see the previous article)

Fibonacci series
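For example, here is a minimal tabulated (bottom-up) Fibonacci sketch that stores the results of the smaller subproblems:

    def fibonacci(n):
        # Store results of subproblems so each value is computed only once.
        if n < 2:
            return n
        table = [0] * (n + 1)
        table[1] = 1
        for i in range(2, n + 1):
            table[i] = table[i - 1] + table[i - 2]
        return table[n]

    print(fibonacci(10))  # 55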

  • Brute Force:

 Brute force is typically used for small problem sizes. It is one of the easiest techniques: we simply try every possibility directly, based on the problem statement; a small sketch follows the examples below.

For example, consider some brute force algorithms:

Computing a^n

Computing n!

Selection sort

Bubble sort
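As a sketch, computing a^n and n! by brute force is just repeated multiplication:

    def power(a, n):
        # Brute force: multiply a by itself n times.
        result = 1
        for _ in range(n):
            result *= a
        return result

    def factorial(n):
        # Brute force: multiply all integers from 1 to n.
        result = 1
        for i in range(1, n + 1):
            result *= i
        return result

    print(power(2, 10), factorial(5))  # 1024 120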

  • Backtracking and branch-and-bound:

These techniques are mainly used for state-space search problems. A state-space search problem looks like the following:

  • Initial state
  • Goal state
  • Set of intermediate states
  • Set of operators that transform one state into another state.
  • Cost function
  • Utility function
  • In the state-space tree, nodes represent states, the root represents the initial state, and the leaves are goal states. Each edge is labelled with some operator.
  • We get a solution only when we reach a goal state.
  • Backtracking performs a depth first search of the state space (a stack-based sketch follows this list):
  1. The initial state is pushed onto the stack.
  2. While the stack is not empty:

Pop a state from the stack.

While there are operators that can still be applied to it:

  1. Apply an operator to generate a child state.
  2. If the child is a goal state, stop.
  3. Otherwise, if it is a new (unvisited) state, push the child onto the stack.
  • If no children can be generated from a node, we backtrack to an earlier state.
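Here is a minimal sketch of that stack-based search; the toy problem (reach 14 from 1 using the operators "add 3" and "double") is invented purely for illustration:

    def backtracking_search(initial, is_goal, successors):
        # Depth first search of the state space using an explicit stack.
        stack = [initial]
        visited = {initial}
        while stack:
            state = stack.pop()                # read a state from the stack
            if is_goal(state):
                return state
            for child in successors(state):    # apply each operator
                if child not in visited:       # only push new states
                    visited.add(child)
                    stack.append(child)
        return None  # no goal state was found

    goal = 14
    found = backtracking_search(
        1,
        lambda s: s == goal,
        lambda s: [c for c in (s + 3, s * 2) if c <= 50],
    )
    print(found)  # 14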
  •  Branch and Bound:

In branch and bound we evaluate each node using the cost and utility functions, and at every step we prefer the best node. Branch and bound is therefore usually implemented on top of a priority queue; a small best-first sketch follows the example below.

  • Example : 8 puzzle problem.
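As a sketch of the priority-queue idea (a simplified best-first search on the same toy problem as above, not a full 8-puzzle solver), assuming a cost function that scores each state:

    import heapq

    def branch_and_bound(initial, is_goal, successors, cost):
        # Best-first search: always expand the node with the lowest cost estimate.
        frontier = [(cost(initial), initial)]
        visited = {initial}
        while frontier:
            _, state = heapq.heappop(frontier)   # best node so far
            if is_goal(state):
                return state
            for child in successors(state):
                if child not in visited:
                    visited.add(child)
                    heapq.heappush(frontier, (cost(child), child))
        return None

    goal = 14
    print(branch_and_bound(
        1,
        lambda s: s == goal,
        lambda s: [c for c in (s + 3, s * 2) if c <= 50],
        lambda s: abs(goal - s),   # prefer states closer to the goal
    ))  # 14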

 

  • Conclusion:

Most problems can be solved by more than one approach. First understand the problem, then apply the technique that is most suitable and gives an efficient solution. Finally, one should have good knowledge of all the algorithm design techniques.

 
