Huffman coding
  • Huffman coding is an application of the greedy algorithm.
  • Huffman coding is also known as Huffman encoding.
  • It is a famous greedy algorithm used for lossless compression of data.
  • It uses variable-length encoding: variable-length codes are assigned to the characters depending on how frequently they occur in the given text.

There are two major steps in Huffman coding-

  1. Building a Huffman tree from the input characters
  2. Assigning codes to the characters by traversing the Huffman tree

 

Steps to construct Huffman coding:

  1. Create a leaf node for each of the given characters, storing the character's frequency of occurrence.
  2. Arrange all the nodes in increasing order of the frequency value they contain.
  3. Take the two nodes with minimum frequency and create a new internal node whose frequency equals the sum of the two nodes' frequencies; make the first node the left child and the other node the right child of the newly created node.

  4. Keep repeating Step 2 and Step 3 until all the nodes form a single tree. After following all the above steps, the desired Huffman tree is constructed.
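The steps above can be sketched in Python with a min-heap standing in for the sorted node list (the function names and the sample frequencies are illustrative, not from the original):

```python
import heapq

def build_huffman_tree(freq):
    """Build a Huffman tree from a {character: frequency} map.
    A leaf is a single character; an internal node is a (left, right) pair.
    Each heap entry is (frequency, tie_breaker, node) so nodes never compare."""
    heap = [(f, i, ch) for i, (ch, f) in enumerate(freq.items())]
    heapq.heapify(heap)                       # Step 2: order nodes by frequency
    counter = len(heap)
    while len(heap) > 1:                      # Step 4: repeat until one tree remains
        f1, _, left = heapq.heappop(heap)     # Step 3: take the two minimum-
        f2, _, right = heapq.heappop(heap)    #         frequency nodes
        heapq.heappush(heap, (f1 + f2, counter, (left, right)))
        counter += 1
    return heap[0][2]

def assign_codes(node, prefix="", codes=None):
    """Traverse the tree: a left edge appends '0', a right edge appends '1'."""
    if codes is None:
        codes = {}
    if isinstance(node, tuple):               # internal node
        assign_codes(node[0], prefix + "0", codes)
        assign_codes(node[1], prefix + "1", codes)
    else:                                     # leaf: a single character
        codes[node] = prefix or "0"           # a lone character gets code "0"
    return codes

freq = {"a": 5, "b": 9, "c": 12, "d": 13, "e": 16, "f": 45}
codes = assign_codes(build_huffman_tree(freq))
```

For these sample frequencies, the most frequent character "f" receives a one-bit code, and the resulting codes are prefix-free, so the encoded stream can be decoded unambiguously.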

Important Formula-

1. Average code length per character

= ∑ ( frequencyi x code lengthi ) / ∑ ( frequencyi )

2. Total number of bits in the Huffman-encoded message

= Total number of characters in the message x Average code length per character

= ∑ ( frequencyi x code lengthi )
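The formula can be checked with a small worked example (the frequencies and code lengths below are a hypothetical sample, not from the original):

```python
# Frequencies and (hypothetical) Huffman code lengths for a sample message
freq        = {"a": 5, "b": 9, "c": 12, "d": 13, "e": 16, "f": 45}
code_length = {"a": 4, "b": 4, "c": 3, "d": 3, "e": 3, "f": 1}

total_chars = sum(freq.values())                              # 100 characters
total_bits  = sum(freq[ch] * code_length[ch] for ch in freq)  # ∑ freq_i × len_i
avg_length  = total_bits / total_chars                        # bits per character
```

Here total_bits = 224, and dividing by the 100 characters gives an average code length of 2.24 bits per character, versus 3 bits per character for a fixed-length code over 6 symbols.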

 

Time Complexity-

extractMin( ) is called 2 x (n-1) times if there are n nodes. As extractMin( ) calls minHeapify( ), each call takes O(log n) time.

Thus, the overall complexity becomes O(n log n).

Time Complexity of Huffman Coding = O(n log n)

where n is the number of unique characters in the text.

 

Binary Search Algorithm

Binary Search is an application of Divide and Conquer.

Binary Search:

Input: A sorted array of 'n' elements and an element 'x'

Output: Return the position of element 'x' if found, else return -1

 

Let 'i' be a variable storing the index of the first element, 'j' a variable storing the index of the last element, and 'mid' a variable for the index of the middle element.

 

BinarySearch(a, i, j, x)
{
    if (i > j)                       // empty range: x is not present
        return -1

    mid = ⌊(i + j) / 2⌋

    if (a[mid] == x)
        return mid
    else if (x < a[mid])
        return BinarySearch(a, i, mid - 1, x)    // search the left half
    else
        return BinarySearch(a, mid + 1, j, x)    // search the right half
}
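The same algorithm can be written as a short iterative Python sketch (the recursion is replaced by a loop; the behaviour and the O(log n) bound are unchanged):

```python
def binary_search(a, x):
    """Return the index of x in the sorted list a, or -1 if absent."""
    i, j = 0, len(a) - 1
    while i <= j:                # range [i, j] still non-empty
        mid = (i + j) // 2
        if a[mid] == x:
            return mid
        elif x < a[mid]:
            j = mid - 1          # search the left half
        else:
            i = mid + 1          # search the right half
    return -1
```

For example, binary_search([1, 3, 5, 7, 9], 7) returns 3, and searching for a missing element such as 4 returns -1.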

 

Let T(n) be the time complexity of the above algorithm. The recurrence relation is:

T(n) = O(1)            if n = 1

T(n) = T(n/2) + c      if n > 1, where c is a constant

Applying the Master Theorem, we get the time complexity:

Best case: O(1)

Average and worst case: O(log n)

Max-Min algorithm

Max-Min is an application of Divide and Conquer.

 

Input: An array of n elements

Output: The maximum and minimum elements of the array

Let 'a' be the name of an array containing n elements.

Let 'i' be a variable holding the index of the first element and 'j' a variable holding the index of the last element.

Let max and min be used to hold the maximum and minimum values.

Algo:

DAC-CON(a, i, j)
{
    if (i == j)                  // one element: it is both max and min
    {
        max = min = a[i];
        return (max, min)
    }
    if (i == j - 1)              // two elements: one comparison
    {
        if (a[i] > a[j])
        {
            max = a[i];
            min = a[j];
        }
        else
        {
            min = a[i];
            max = a[j];
        }
        return (max, min)
    }
    else                         // split, solve each half, combine
    {
        mid = (i + j) / 2;
        (max1, min1) = DAC-CON(a, i, mid);
        (max2, min2) = DAC-CON(a, mid + 1, j);
        if (max1 > max2)
            max = max1;
        else
            max = max2;
        if (min1 > min2)
            min = min2;
        else
            min = min1;
        return (max, min)
    }
}

Let T(n) be the time complexity of the above program on n elements. The recurrence relation is:

T(n) = O(1)             if n = 1 or 2

T(n) = 2T(n/2) + c      if n > 2

Applying the Master Theorem, we get O(n) time complexity.

The number of comparisons between elements made by the above program on n array elements is 3n/2 - 2 (for n a power of 2).
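The pseudocode translates to this Python sketch; a comparison counter is added (an illustrative extra, not in the original) to check the 3n/2 - 2 count for n a power of 2:

```python
def max_min(a, i, j):
    """Divide-and-conquer max/min on a[i..j].
    Returns (max, min, comparisons-between-elements)."""
    if i == j:                                   # one element: no comparison
        return a[i], a[i], 0
    if i == j - 1:                               # two elements: one comparison
        return (a[i], a[j], 1) if a[i] > a[j] else (a[j], a[i], 1)
    mid = (i + j) // 2
    max1, min1, c1 = max_min(a, i, mid)          # solve the left half
    max2, min2, c2 = max_min(a, mid + 1, j)      # solve the right half
    # two more comparisons to combine the halves
    return max(max1, max2), min(min1, min2), c1 + c2 + 2

a = [31, 4, 56, 2, 44, 7, 18, 90]
mx, mn, comps = max_min(a, 0, len(a) - 1)
```

For this n = 8 example the counter reports 3·8/2 - 2 = 10 comparisons, versus the 2(n-1) = 14 a naive separate max-and-min scan would make.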

 

 

 

Dynamic Programming

Dynamic Programming is a powerful paradigm for solving problems. Given a problem that can be broken down into smaller sub-problems, which can in turn be broken into still smaller ones, if some of these sub-problems overlap, then it is a DP problem. The problem is broken up into a series of overlapping sub-problems, and solutions are built up to larger and larger sub-problems. The core idea of Dynamic Programming is to avoid repeated work by remembering partial results, a concept that finds application in many real-life situations.

In programming, Dynamic Programming is a powerful technique that allows one to solve different types of problems in O(n²) or O(n³) time for which a naive approach would take exponential time.
The following computer problems can be solved using the dynamic programming approach −

  • Fibonacci number series
  • Knapsack problem
  • Tower of Hanoi
  • All-pairs shortest paths by Floyd-Warshall
  • Single-source shortest paths by Bellman-Ford
  • Project scheduling
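The first item on the list makes the "remember partial results" idea concrete. A minimal sketch: naive recursive Fibonacci recomputes the same sub-problems exponentially often, while caching each result (here via the standard-library lru_cache) brings it down to O(n):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    """nth Fibonacci number. The overlapping sub-problems fib(n-1) and
    fib(n-2) are each computed once and remembered, so the exponential
    recursion tree collapses to O(n) distinct calls."""
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)
```

Without the cache, fib(30) makes over a million recursive calls; with it, only 31 distinct sub-problems are ever solved.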

Greedy Algorithm | How to come up with a Greedy Algorithm?

Greedy Algorithm:
A greedy algorithm, as the name suggests, always makes the choice that looks best at the moment. That is, it makes a locally optimal choice in the hope that this choice will lead to a globally optimal solution. The question is how to decide which choice is best (or optimal); for that, a function is required (called the objective function) that needs to be optimized (either maximized or minimized) at the given point. A greedy algorithm makes greedy choices at each step such that the objective function is optimized. A greedy algorithm has only one shot to compute the optimal solution, as it never goes back and reverses a decision.

How to come up with a Greedy Algorithm?

Let’s take an example. Suppose you are suffering from a lethal disease and you have only T years left to live. Now you want to try all the different things you always wanted to do. You are given an array A of integers denoting the time taken to complete different things in years. You want to know what is the maximum number of different things you can do in the limited time you have.

This is a simple greedy problem. In each iteration, greedily select the thing that takes the minimum time to complete, while maintaining two variables, currentTime and numberOfThings. So, simply sort the array A in non-decreasing order. Then select the things one by one, add the time taken to complete each thing to currentTime, and add one to numberOfThings. Repeat as long as currentTime is smaller than or equal to T.

Let A=[5,3,4,2,1] and T=6

After sorting, A=[1,2,3,4,5]

After 1st iteration, currentTime=1 and numberOfThings=1
After 2nd iteration, currentTime=1+2=3 and numberOfThings=2
After 3rd iteration, currentTime=3+3=6 and numberOfThings=3
After 4th iteration, currentTime=6+4=10, which is greater than T, so the answer is 3.
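The walkthrough above can be sketched as a few lines of Python (the function name max_things is illustrative, not from the original):

```python
def max_things(A, T):
    """Greedily pick the shortest things first; return how many fit in T."""
    current_time, number_of_things = 0, 0
    for t in sorted(A):                  # non-decreasing order of time taken
        if current_time + t > T:         # the next thing would exceed the limit
            break
        current_time += t
        number_of_things += 1
    return number_of_things
```

Running it on the example, max_things([5, 3, 4, 2, 1], 6) returns 3, matching the iteration trace above.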

Sorting Algorithms

Algorithm        Best case     Average case   Worst case
Bubble Sort      O(n)          O(n²)          O(n²)
Selection Sort   O(n²)         O(n²)          O(n²)
Insertion Sort   O(n)          O(n²)          O(n²)
Merge Sort       O(n log n)    O(n log n)     O(n log n)
Quick Sort       O(n log n)    O(n log n)     O(n²)
Heap Sort        O(n log n)    O(n log n)     O(n log n)

Note: In the second-last row of the above table, Quicksort's best case is O(n log n).
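A minimal (non-in-place) quicksort sketch makes the note concrete: when the pivot splits the array evenly the recursion depth is O(log n) and each level does O(n) work, giving the O(n log n) best and average cases; maximally unbalanced partitions give the O(n²) worst case:

```python
def quicksort(a):
    """Simple quicksort: partition around a pivot, recurse on each side.
    Best/average O(n log n); worst O(n^2) on maximally unbalanced splits."""
    if len(a) <= 1:                              # base case: already sorted
        return a
    pivot = a[len(a) // 2]
    less    = [x for x in a if x < pivot]        # partition step: O(n)
    equal   = [x for x in a if x == pivot]
    greater = [x for x in a if x > pivot]
    return quicksort(less) + equal + quicksort(greater)
```

For example, quicksort([3, 6, 1, 8, 2, 2]) returns [1, 2, 2, 3, 6, 8]. Production implementations partition in place; this copying version just keeps the recursion structure visible.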
