Introduction to Data link layer
  1. It divides the stream of bits received from the network layer into small, manageable units called frames.
  2. The data link layer transforms the physical layer, a raw transmission facility, into a link responsible for node-to-node (hop-to-hop) communication.
  3. It provides hop-to-hop connectivity.

Functionality :

  1. Access control
  2. Error control
  3. Flow control
  4. Framing
  5. Physical addressing.

1. Framing:-

Taking the stream of bits from the network layer and dividing it into small, manageable units is called framing; the resulting units are called frames.

The entire stream of bits could be sent over the channel as one block, but doing so creates a problem: when a message is carried in one very large frame, even a single-bit error forces retransmission of the whole message. When the message is divided into smaller frames, a single-bit error affects only that one small frame, and only that frame has to be retransmitted.

 

SFD (Start Frame Delimiter):- It marks the start of the frame.

EFD (End Frame Delimiter):- These bits mark the end of the frame.

 

Types of framing:-

1. Fixed-size Framing:

The size of each frame is fixed, so no delimiter is needed; the fixed frame length itself acts as the delimiter.

 

2. Variable Size Framing:-

Here we have to define a delimiter to mark the beginning and end of each frame. For this, we have two approaches:

Character (byte) stuffing and bit stuffing. A small bit-stuffing sketch follows.
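As an illustration (not part of the original notes), here is a minimal Python sketch of bit stuffing, assuming the common 01111110 flag pattern as the frame delimiter: the sender inserts a 0 after every run of five consecutive 1s in the payload so the flag never appears inside the frame, and the receiver removes those stuffed 0s.

FLAG = "01111110"  # assumed HDLC-style frame delimiter, for illustration only

def bit_stuff(payload: str) -> str:
    """Insert a '0' after every run of five consecutive '1's in the payload."""
    out, ones = [], 0
    for bit in payload:
        out.append(bit)
        ones = ones + 1 if bit == "1" else 0
        if ones == 5:          # five 1s in a row -> stuff a 0
            out.append("0")
            ones = 0
    return "".join(out)

def bit_unstuff(stuffed: str) -> str:
    """Remove the '0' that the sender stuffed after every run of five '1's."""
    out, ones, skip = [], 0, False
    for bit in stuffed:
        if skip:               # this bit is a stuffed 0 -> drop it
            skip = False
            ones = 0
            continue
        out.append(bit)
        ones = ones + 1 if bit == "1" else 0
        if ones == 5:
            skip = True
            ones = 0
    return "".join(out)

payload = "011111101111110"                 # contains the flag pattern
frame = FLAG + bit_stuff(payload) + FLAG    # stuffed payload can be framed safely
assert bit_unstuff(bit_stuff(payload)) == payload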

 

 

 

Huffman Coding Problem

A file contains the following characters with the frequencies as shown. If Huffman coding is used for data compression, determine-

  1. Huffman code for each character
  2. Average code length
  3. Length of Huffman encoded message (in bits)               

                                                                                                                                                                                                                                                                                                        

Character     Frequency

a             10
e             15
i             12
o             3
u             4
s             13
t             1

 

Solution-

First, let us construct the Huffman tree using the steps we have learned above

Step 01: Arrange the nodes in increasing order of frequency: t(1), o(3), u(4), a(10), i(12), s(13), e(15).

Step 02: Merge the two lowest-frequency nodes, t(1) and o(3), into a new node of frequency 4.

Step 03: Merge that node (4) with u(4) into a new node of frequency 8.

Step 04: Merge the node (8) with a(10) into a new node of frequency 18.

Step 05: Merge i(12) and s(13) into a new node of frequency 25.

Step 06: Merge e(15) with the node (18) into a new node of frequency 33.

Step 07: Merge the nodes (25) and (33) into the root node of frequency 58. The Huffman tree is now complete.

After we have constructed the Huffman tree, we will assign weights to all the edges. Let us assign weight ‘0’ to the left edges and weight ‘1’ to the right edges.

 

Note:

  • We can also assign weight ‘1’ to the left edges and weight ‘0’ to the right edges.
  • The only thing to keep in mind is that we must follow the same convention at the time of decoding which we adopted at the time of encoding.

After assigning weight ‘0’ to the left edges and weight ‘1’ to the right edges, we get-

 

 

 

1. Huffman code for the characters-

We will traverse the Huffman tree from the root node to each leaf node one by one and write down the Huffman code for every character-

 

  • a = 111
  • e = 10
  • i = 00
  • o = 11001
  • u = 1101
  • s = 01
  • t = 11000

From here, we can observe-

  • Characters occurring less frequently in the text are assigned longer codes.
  • Characters occurring more frequently in the text are assigned shorter codes.

2. Average code length-

We know,

Average code length = ∑ ( frequencyᵢ × code lengthᵢ ) / ∑ ( frequencyᵢ )

= { (10 × 3) + (15 × 2) + (12 × 2) + (3 × 5) + (4 × 4) + (13 × 2) + (1 × 5) } / (10 + 15 + 12 + 3 + 4 + 13 + 1)

= 146 / 58

≈ 2.52 bits per character

 

3. Length of Huffman encoded message-

 

We know-

Total number of bits in the Huffman encoded message = ∑ ( frequencyᵢ × code lengthᵢ )

= (10 × 3) + (15 × 2) + (12 × 2) + (3 × 5) + (4 × 4) + (13 × 2) + (1 × 5)

= 146 bits

(Equivalently, total characters × average code length = 58 × 146/58 = 146 bits; rounding the average to 2.52 first gives the approximation 58 × 2.52 ≈ 146.16.)
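As a quick check, the following minimal Python sketch (using the frequencies and codes worked out above) recomputes the total number of encoded bits and the average code length.

freqs = {'a': 10, 'e': 15, 'i': 12, 'o': 3, 'u': 4, 's': 13, 't': 1}
codes = {'a': '111', 'e': '10', 'i': '00', 'o': '11001', 'u': '1101', 's': '01', 't': '11000'}

total_bits = sum(freqs[c] * len(codes[c]) for c in freqs)   # ∑ frequency × code length
total_chars = sum(freqs.values())

print(total_bits)                    # 146 bits
print(total_bits / total_chars)      # ≈ 2.52 average code length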

 

To gain a better understanding of the concepts, practice more problems on Huffman coding.

 

Introduction to OSI Layer

OSI Model:-

  • It is a Layered Architecture.
  • It is not a protocol. 

In a layered architecture, each layer provides services to the layer above it and uses the services provided by the layers below it. For example, layer 4 provides services to layer 5 and uses the services provided by layer 3.

While sending data, each layer adds its own control information to the data received from the layer above it and passes the complete package to the layer just below it.

 

A few advantages of a layered system:-

  1. Divide and Conquer
  2. Encapsulation
  3. Abstraction. 
  4. Testing.

The OSI model consists of 7 layers:

  1. Physical layer 
  2. Data-link layer
  3. Network layer
  4. Transport layer 
  5. Session layer
  6. Presentation layer
  7. Application layer.

Now we will look at each layer separately.

Huffman coding
  • Huffman coding is an application of the greedy technique.
  • Huffman coding is also known as Huffman encoding.
  • It is a famous greedy algorithm which is used for lossless compression of data.
  • It uses variable-length encoding, where variable-length codes are assigned to the characters depending on how frequently they occur in the given text.

There are two major steps in Huffman coding-

  1. Building a Huffman tree from the input characters
  2. Assigning codes to the characters by traversing the Huffman tree

 

Steps to construct the Huffman tree:

  1. Create a leaf node for each given character, containing its frequency of occurrence.
  2. Arrange all the nodes in increasing order of the frequency value contained in the nodes.
  3. Considering the two nodes having minimum frequency, create a new internal node having a frequency equal to the sum of the two nodes' frequencies, and make the first node the left child and the other node the right child of the newly created node.
  4. Keep repeating Step-02 and Step-03 until all the nodes form a single tree. After following all the above steps, our desired Huffman tree is constructed. A minimal code sketch of this construction is shown after the steps.
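The following is a minimal Python sketch of this construction (my own illustration, not the original article's code), using a binary min-heap as the priority queue. It rebuilds the tree for the example frequencies used earlier; individual codes can differ if ties are broken differently, but the average code length is the same.

import heapq
from itertools import count

def huffman_codes(freqs):
    """Build a Huffman tree with a min-heap and return {character: code}."""
    tick = count()   # tie-breaker so heapq never has to compare the tree objects
    # each heap entry: (frequency, tie-breaker, tree); a tree is a char or a (left, right) pair
    heap = [(f, next(tick), ch) for ch, f in freqs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        f1, _, t1 = heapq.heappop(heap)      # extract the two minimum-frequency nodes
        f2, _, t2 = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, next(tick), (t1, t2)))
    codes = {}
    def walk(tree, code):
        if isinstance(tree, tuple):          # internal node: '0' -> left, '1' -> right
            walk(tree[0], code + "0")
            walk(tree[1], code + "1")
        else:
            codes[tree] = code or "0"        # single-character edge case
    walk(heap[0][2], "")
    return codes

freqs = {'a': 10, 'e': 15, 'i': 12, 'o': 3, 'u': 4, 's': 13, 't': 1}
codes = huffman_codes(freqs)
avg = sum(freqs[c] * len(codes[c]) for c in freqs) / sum(freqs.values())
print(codes, round(avg, 2))                  # average code length ≈ 2.52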

Important Formulas-

1. Average code length per character

= ∑ ( frequencyᵢ × code lengthᵢ ) / ∑ ( frequencyᵢ )

2. Total number of bits in the Huffman encoded message

= Total number of characters in the message × Average code length per character

= ∑ ( frequencyᵢ × code lengthᵢ )

 

Time Complexity-

ExtractMin( ) is called 2 × (n − 1) times when there are n characters (leaf nodes), since the tree is built with n − 1 merge steps and each merge extracts the two minimum nodes. Each ExtractMin( ) calls minHeapify( ), which takes O(log n) time.

Thus, the overall complexity becomes O(n log n).

 

Time Complexity of Huffman Coding = O(n log n)

where n is the number of unique characters in the text

 

Quick Review

Example:

Consider two functional dependency sets F and G: F = { A→B, B→C, C→A } and G = { A→BC, B→AC, AB→C, BC→A }. Which of the following is true?

A) F ⊃ G

B) G ⊃ F

C) F ≡ G

D) None of these

 

Solution:

Here, according to FD set F,
A+ = {ABC}, B+ = {ABC}, C+ = {ABC}
According to FD set G,
A+ = {ABC}, B+ = {ABC}, C+ = {C}
So F covers G, but G does not cover F (the FD C→A of F cannot be derived from G). Hence, option A is correct.
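A minimal Python sketch of the attribute-closure computation used above (the function name and FD representation are my own, not from the original text): it repeatedly applies every FD whose left-hand side is already contained in the closure.

def closure(attrs, fds):
    """Return the closure of a set of attributes under a list of FDs given as (lhs, rhs) strings."""
    result = set(attrs)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            if set(lhs) <= result and not set(rhs) <= result:
                result |= set(rhs)   # the FD fires and adds new attributes
                changed = True
    return result

G = [("A", "BC"), ("B", "AC"), ("AB", "C"), ("BC", "A")]
print(sorted(closure("C", G)))       # ['C']  -> C→A of F does not follow from G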

 

Example:

When converting a one (1) to many (N) binary relationship into tables, the recommended solution is usually:

A) One big table with all attributes from both entities included

B) A foreign key added on the child (many side) referencing the parent

C) A foreign key added on the parent (one side) referencing the child

D) A foreign key added on both sides (both tables)

 

Solution:

When converting a one (1) to many (N) binary relationship into tables, the recommended solution is to add a foreign key on the child (many side) referencing the parent. Hence, option B is correct.

 

Example:

Which cardinality best describes the relationship "runs", where each team runs many experiments?

A) 1:1

B) 1:N

C) N:1

D) N:M

 

Solution:

Cardinality ratios such as 1:1, 1:N, N:1 and N:M place a numeric restriction (a cardinality constraint) on the possible combinations of participating entities. Since one team can run many experiments, the relationship "runs" is best described by the ratio 1:N. Hence, option B is correct.

 

Example : Let a Relation R have attributes {a1,a2,a3} & a1 is the candidate key. Then how many super keys are possible?

Solution:

Here, any superset of a1 is a super key.
The super keys are: {a1}, {a1, a2}, {a1, a3}, {a1, a2, a3}
Thus, 4 super keys are possible in this case.

In general, if we have 'N' attributes with exactly one single-attribute candidate key, then the number of possible super keys is 2^(N − 1).

 

Example : Let a relation R have attributes {a1, a2, a3, …, an}. Find the maximum possible number of super keys of R.

Solution

The maximum number of super keys is 2^n − 1, which is achieved when every single attribute of the relation is a candidate key (then every non-empty subset of attributes is a super key).

 

Example: In the relation STUDENT(STUD_NO, STUD_NAME, STUD_STATE, STUD_COUNTRY, STUD_AGE)

FD set:

{ STUD_NO → STUD_NAME, STUD_NO → STUD_STATE, STUD_STATE → STUD_COUNTRY, STUD_NO → STUD_AGE }

Candidate key: {STUD_NO}. Find the normal form, and if the relation is not in 3NF, decompose it into 3NF.

Solution:

For this relation,

STUD_NO → STUD_STATE and STUD_STATE → STUD_COUNTRY hold.

So STUD_COUNTRY is transitively dependent on STUD_NO, which violates the third normal form; the relation is in 2NF but not in 3NF.

To convert it into third normal form, we decompose the relation STUDENT (STUD_NO, STUD_NAME, STUD_STATE, STUD_COUNTRY, STUD_AGE) as:
STUDENT (STUD_NO, STUD_NAME, STUD_STATE, STUD_AGE)
STATE_COUNTRY (STUD_STATE, STUD_COUNTRY)

Binary Search Algorithm

Binary Search is an application of Divide and Conquer.

Binary Search:

Input: A sorted array of 'n' elements and an element 'x'

Output: Return the position of element 'x' if found, else return −1

Let 'i' be a variable holding the index of the first element, 'j' a variable holding the index of the last element, and 'mid' a variable for the index of the middle element.

 

BinarySearch(a, i, j, x)
{
    if (i > j)                       // empty range: element not present
        return (-1)
    if (i == j)
    {
        if (a[i] == x)
            return (i)
        else
            return (-1)
    }
    else
    {
        mid = floor((i + j) / 2)
        if (a[mid] == x)
            return (mid)
        else if (x < a[mid])
            return BinarySearch(a, i, mid - 1, x)
        else
            return BinarySearch(a, mid + 1, j, x)
    }
}

 

Let T(n) be the time complexity of the above algorithm. The recurrence relation is:

T(n) = O(1),            if n = 1
T(n) = T(n/2) + c,      if n > 1     (where c is a constant)

After applying the Master theorem we get the time complexity:

Best case: O(1)

Average and worst case: O(log n)
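For reference, a runnable Python translation of the same algorithm (a sketch of my own, not from the original notes), with a small usage example:

def binary_search(a, i, j, x):
    """Return the index of x in the sorted array a[i..j], or -1 if not present."""
    if i > j:
        return -1
    if i == j:
        return i if a[i] == x else -1
    mid = (i + j) // 2
    if a[mid] == x:
        return mid
    elif x < a[mid]:
        return binary_search(a, i, mid - 1, x)
    else:
        return binary_search(a, mid + 1, j, x)

a = [2, 5, 8, 12, 16, 23, 38]
print(binary_search(a, 0, len(a) - 1, 23))   # 5
print(binary_search(a, 0, len(a) - 1, 7))    # -1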

Max-Min algorithm

Max-Min is an application of Divide and Conquer.

 

Input: An array of n elements

Output: The maximum and minimum elements of the array

Let 'a' be the name of the array containing n elements.

Let 'i' be a variable holding the index of the first element and 'j' a variable holding the index of the last element.

Let 'max' and 'min' be used to hold the maximum and minimum values.

Algo:

DAC-CON(a, i, j)
{
    if (i == j)                     // one element: no comparison needed
    {
        max = min = a[i];
        return (max, min)
    }
    if (i == j - 1)                 // two elements: one comparison
    {
        if (a[i] > a[j])
        {
            max = a[i];
            min = a[j];
        }
        else
        {
            min = a[i];
            max = a[j];
        }
        return (max, min)
    }
    else                            // more than two elements: divide and conquer
    {
        mid = (i + j) / 2;
        (max1, min1) = DAC-CON(a, i, mid);
        (max2, min2) = DAC-CON(a, mid + 1, j);
        if (max1 > max2)
            max = max1;
        else
            max = max2;
        if (min1 > min2)
            min = min2;
        else
            min = min1;
        return (max, min)
    }
}

Let T(n) be the time complexity of the above algorithm on 'n' elements. The recurrence relation is:

T(n) = O(1),             if n = 1 or 2
T(n) = 2T(n/2) + c,      if n > 2     (where c is a constant)

After applying the Master theorem we get O(n) time complexity.

The number of element comparisons made by the above algorithm on an array of 'n' elements is 3n/2 − 2 (when n is a power of 2), i.e., roughly 3n/2.
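A small Python sketch (my own illustration, not part of the original notes) that implements the same divide-and-conquer max-min and counts element comparisons; for n = 16 it reports 22 comparisons, which matches 3·16/2 − 2.

def max_min(a, i, j, counter):
    """Return (max, min) of a[i..j], counting element comparisons in counter[0]."""
    if i == j:
        return a[i], a[i]
    if i == j - 1:
        counter[0] += 1                      # one comparison for two elements
        return (a[i], a[j]) if a[i] > a[j] else (a[j], a[i])
    mid = (i + j) // 2
    max1, min1 = max_min(a, i, mid, counter)
    max2, min2 = max_min(a, mid + 1, j, counter)
    counter[0] += 2                          # one comparison for max, one for min
    return max(max1, max2), min(min1, min2)

a = list(range(16, 0, -1))                   # 16 elements
counter = [0]
print(max_min(a, 0, len(a) - 1, counter), counter[0])   # (16, 1) 22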

 

 

 

Basics of Candidate Key and counting

We already know the formal definition of a candidate key; let's discuss a few counting results.

 

1. Given a set of attributes S = {A1, A2, A3, ..., An},

the maximum number of super keys possible is 2^n − 1 (when every single attribute is a candidate key), and the maximum number of candidate keys possible is C(n, ⌊n/2⌋), because no candidate key can be a proper subset of another candidate key.

 

Example 1:-

R = { A, B, C, D, E, H }

FDs = { A→B, BC→D, E→C, D→A }

Find the candidate keys.

Solution:

First, find out which attributes are not derived by any other attributes, i.e., which never appear on the right-hand side of an FD.

E and H are those attributes, so they must be present in every candidate key.

(AEH)+ = { A, B, C, D, E, H }

(BEH)+ = { A, B, C, D, E, H }

(DEH)+ = { A, B, C, D, E, H }

(CEH)+ = { C, E, H }

So only AEH, BEH and DEH are candidate keys, because they derive all the attributes.

 

Finding the number of super keys:-

If a relation has n attributes and one candidate key is already known, then any superset of that candidate key is a super key. Each of the remaining attributes (those not in the candidate key) has two choices: either it is part of the super key or it is not.

Example:-

1. Let R = { E, F, G, H, I, J, K, L, M, N }

and it is already known that GHI is the only candidate key. Then the number of super keys is:

The total number of attributes is 10, out of which 3 form the candidate key. Each of the remaining 7 attributes has two choices: either it is part of the super key or it is not.

So 2 choices, 7 times.

So the total number of super keys = 2^7 = 128.

 

2. R = { A, B, C, D }

FDs = { AB→CD, D→A }

Find the number of super keys.

Solution:-

First, find the candidate keys.

B is not derived by any FD, so it must be part of every candidate key.

(AB)+ = { A, B, C, D }

(BD)+ = { A, B, C, D }

(BC)+ = { B, C }

So there are only two candidate keys: AB and BD.

Number of super keys

= (number of super keys containing AB) + (number of super keys containing BD) − (number of super keys containing both AB and BD)

= 2^2 + 2^2 − 2^1 = 4 + 4 − 2 = 6
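As a sanity check, a brute-force Python sketch (my own, with a simple closure helper) that enumerates all attribute subsets of R = {A, B, C, D} and counts the super keys under the FDs {AB→CD, D→A}:

from itertools import combinations

def closure(attrs, fds):
    """Closure of a set of attributes under FDs given as (lhs, rhs) strings."""
    result = set(attrs)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            if set(lhs) <= result and not set(rhs) <= result:
                result |= set(rhs)
                changed = True
    return result

R = "ABCD"
fds = [("AB", "CD"), ("D", "A")]
# a super key is any subset of R whose closure contains every attribute of R
superkeys = [set(c) for r in range(1, len(R) + 1)
             for c in combinations(R, r)
             if closure(c, fds) >= set(R)]
print(len(superkeys))    # 6
print(superkeys)         # {A,B}, {B,D}, {A,B,C}, {A,B,D}, {B,C,D}, {A,B,C,D}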

 

 

B+ trees structure of leaf and internal nodes

Example:

The search key field is 9 bytes, the block size is 512 bytes, a record pointer (pr) is 7 bytes, and a block pointer is 6 bytes. What is the order of an internal node and of a leaf node?

Solution:

For an internal node of order n:

n × (block pointer size) + (n − 1) × (search key size) ≤ block size

6n + 9(n − 1) ≤ 512

15n − 9 ≤ 512

15n ≤ 521

n ≤ 34.73

Order of an internal node = 34

For a leaf node of order Oleaf:

Oleaf × (search key size + record pointer size) + block pointer size ≤ block size

Oleaf × (9 + 7) + 6 ≤ 512

16 × Oleaf ≤ 506

Oleaf ≤ 506/16 = 31.6

Order of a leaf node (Oleaf) = 31
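A tiny Python sketch (an illustration, not from the original notes) that computes both orders from the given sizes using the same inequalities:

block, key, rec_ptr, blk_ptr = 512, 9, 7, 6

# internal node: n block pointers and (n - 1) search keys must fit in one block
n_internal = (block + key) // (blk_ptr + key)     # floor of (512 + 9) / 15
# leaf node: Oleaf (key + record pointer) pairs plus one block pointer must fit
n_leaf = (block - blk_ptr) // (key + rec_ptr)     # floor of 506 / 16

print(n_internal, n_leaf)                         # 34 31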

 

 

 

 

Under flow and Overflow in the B-trees

During insertion we may get an overflow, and during deletion we may face an underflow situation.

Let the order of the B-tree be 'p'.

Condition for Root node:

in case of Min       ----------- 2 tree pointers
                                 1 key entry

in case of Max      ----------- p tree pointers
                                p − 1 key entries

Condition for an internal node:

in case of Min       ----------- ⌈p/2⌉ tree pointers
                                 ⌈p/2⌉ − 1 key entries

in case of Max      ----------- p tree pointers
                                p − 1 key entries

Condition for a leaf node:

in case of Min       ----------- ⌈p/2⌉ − 1 key entries

in case of Max      ----------- p − 1 key entries
 

 

Example: Let the order p = 5. Find the min and max for all nodes.

Solution

Root:

in case of Min       ----------- 2 tree pointers
                                 1 key entry

in case of Max      ----------- 5 tree pointers
                                4 key entries

Internal node:

in case of Min       ----------- ⌈5/2⌉ = 3 tree pointers
                                 ⌈5/2⌉ − 1 = 2 key entries

in case of Max      ----------- 5 tree pointers
                                4 key entries

Leaf node:

in case of Min       ----------- ⌈5/2⌉ − 1 = 2 key entries

in case of Max      ----------- 4 key entries
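A short Python sketch (my own illustration) that tabulates these bounds for any order p, reproducing the p = 5 values above:

import math

def btree_bounds(p):
    """Return (min, max) tree pointers and key entries per node type for a B-tree of order p."""
    half = math.ceil(p / 2)
    return {
        "root":     {"pointers": (2, p),    "keys": (1, p - 1)},
        "internal": {"pointers": (half, p), "keys": (half - 1, p - 1)},
        "leaf":     {"keys": (half - 1, p - 1)},
    }

print(btree_bounds(5))
# {'root': {'pointers': (2, 5), 'keys': (1, 4)},
#  'internal': {'pointers': (3, 5), 'keys': (2, 4)},
#  'leaf': {'keys': (2, 4)}}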

