Parametric vs nonparametric data in econometrics
July 20, 2016
Parametric vs nonparametric data



A set of observations that reflects certain characteristics is known as data. With regard to the parametric vs nonparametric distinction, there are two types of data, which are as follows:


Econometrics – Parametric vs nonparametric data

In econometrics and quantitative research and analysis, the terms parametric and nonparametric have a different meaning. The following are the definitions that best explain parametric data and nonparametric data in econometrics and quantitative research & analysis.


Parametric Data

Data that needs a vector to define its magnitude is called parametric data. For example, to define GDP, a currency is required to support the figure. Suppose the GDP of country XYZ in 2015 is 1.2 million dollars; the dollar sign here acts as the vector, and GDP is therefore parametric data. Data derived from parametric data, even if it has no vector of its own, is also known as parametric data. For example, GDP growth has no vector part, but it is derived from last year's GDP and the current year's GDP, so GDP growth is also parametric data.
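The GDP growth example can be made concrete with a small sketch. The 2015 figure comes from the text above; the 2014 figure is a made-up value purely for illustration:

```python
# Illustrative only: the 2015 figure is from the example above;
# the 2014 figure is a hypothetical prior-year value.
gdp_2015_usd = 1_200_000  # 1.2 million dollars (the vector: USD)
gdp_2014_usd = 1_000_000  # hypothetical prior-year GDP in USD

# GDP growth carries no currency unit, but it is derived from two
# unit-bearing (parametric) values, so the article classes it as parametric.
growth = (gdp_2015_usd - gdp_2014_usd) / gdp_2014_usd
print(f"GDP growth: {growth:.0%}")  # → GDP growth: 20%
```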


Parametric Econometrics and quantitative research & analysis


Nonparametric Data

Data that does not require a vector is called nonparametric data. For example, there are 5 students in a course. Here the data does not require any vector to be defined and is therefore known as nonparametric data. Another example of nonparametric data is the level of satisfaction with a product on a scale of 1 to 5.


nonparametric Econometrics and quantitative research & analysis


In practice, parametric data and procedures require more assumptions than nonparametric data and procedures. If these assumptions are correct, the estimated results are more accurate, which makes parametric procedures the more powerful statistical tools. On the other hand, if the assumptions are not correct, the results can be badly wrong.

Parametric data and procedures are widely used because of their simple formulas and fast computation.


For further detail, you can visit the website below:

Google Books



TCP Connection, Establishment & Flow Control
March 3, 2015


Transmission Control Protocol (TCP) Connection

A TCP connection is a full-duplex connection: not an actual wired connection, but one that maintains connectivity between two end processes. It is established between a single sender and a single receiver and is therefore referred to as a "point-to-point" connection. Before one application process can send data over a TCP connection, a handshake is required, which involves initializing many TCP variables. TCP provides a reliable connection by protecting against packets being corrupted or dropped.

A TCP connection serves two important purposes with respect to the health of a network: flow control and congestion control.

Selective Repeat Protocol – sliding window
February 26, 2015

Selective Repeat Protocol

When a data packet is transmitted, its corresponding timer starts; when an acknowledgement is received, the timer is destroyed. If an acknowledgement packet is not received, the timer expires after a certain period and the packet is resent.
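The per-packet timer bookkeeping described above can be sketched as follows. The class and method names are invented for illustration; a real implementation would also drive retransmission from `expired()`:

```python
import time

class SelectiveRepeatSender:
    """Sketch of Selective Repeat's per-packet timers (illustrative names)."""
    TIMEOUT = 0.5  # seconds; arbitrary value for the sketch

    def __init__(self):
        self.timers = {}  # seq -> time the packet was (re)sent

    def send(self, seq):
        self.timers[seq] = time.monotonic()  # transmitting starts this packet's timer

    def on_ack(self, seq):
        self.timers.pop(seq, None)  # an acknowledgement destroys the timer

    def expired(self, now=None):
        """Sequence numbers whose timers have run out and need resending."""
        now = time.monotonic() if now is None else now
        return [s for s, t in self.timers.items() if now - t >= self.TIMEOUT]
```

Note that each packet gets its own timer entry, unlike Go Back N, which keeps a single timer for the whole window.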

Go Back N Sliding Window Protocol
February 25, 2015


Go back N protocol

Go Back N protocol is one of the applications of the pipeline protocol. In Go Back N, packets must be delivered in sequence to the application layer. To better understand the working of the GBN protocol, you first need to understand how the sender and receiver each operate.


Sender Side of GBN Protocol:

  1. Invocation from application layer.
  2. Receipt of acknowledgement packet.
  3. A time out event.


Go Back N Protocol - Sender Side


The application layer provides chunks of data to the transport layer. The transport layer places this data into its buffer and starts sending it to the receiver. Once the packets have been sent in a pipelined fashion, the sender either receives acknowledgements for them or a time-out occurs. In case of a time-out, the sender retransmits the unacknowledged data packets.
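The three sender-side events above can be sketched in code. This is a minimal illustration, not a full implementation; the class and method names are invented, and acknowledgements are assumed to be cumulative as in GBN:

```python
class GBNSender:
    """Sketch of the three GBN sender-side events (illustrative names)."""

    def __init__(self, window=4):
        self.base, self.next_seq, self.window = 0, 0, window
        self.buffer = {}  # seq -> packet awaiting acknowledgement

    def from_app(self, data):
        # 1. Invocation from the application layer
        if self.next_seq < self.base + self.window:
            self.buffer[self.next_seq] = data
            self.next_seq += 1
            return True           # packet buffered and sent
        return False              # window full: refuse more data for now

    def on_ack(self, seq):
        # 2. Receipt of a (cumulative) acknowledgement packet
        for s in range(self.base, seq + 1):
            self.buffer.pop(s, None)
        self.base = max(self.base, seq + 1)  # the window slides forward

    def on_timeout(self):
        # 3. A time-out event: retransmit every sent-but-unacknowledged packet
        return [self.buffer[s] for s in range(self.base, self.next_seq)]
```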



Receiver Side of GBN Protocol

  1. Acknowledgement of ordered packets.
  2. Drop packet.


Receiver Side of GBN Protocol

At the receiver end there are two options: either send an acknowledgement to the sender on receiving an in-order packet, or discard the packet. A packet is acknowledged by the receiver only if its sequence number is "N+1", that is, the last received packet + 1, since the main restriction of Go Back N is in-order delivery of packets to the upper layer. If the sequence number of the received packet is not in sequence with the previous one, the packet is dropped and the acknowledgement of the most recent in-order packet is sent again.
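The receiver's accept-or-drop rule can be sketched in a few lines. Names are illustrative; `deliver` stands in for handing data to the upper layer:

```python
class GBNReceiver:
    """Sketch of the GBN receiver's accept-or-drop rule (illustrative names)."""

    def __init__(self):
        self.expected = 0  # next in-order sequence number

    def on_packet(self, seq, data, deliver):
        if seq == self.expected:        # in order: deliver and acknowledge
            deliver(data)
            self.expected += 1
            return ("ACK", seq)
        # out of order: drop it and re-acknowledge the most recent
        # in-order packet, prompting the sender to go back
        return ("ACK", self.expected - 1)
```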



TCP sliding window protocol

Data in the sender buffer is sent in chunks instead of all at once. Why? Suppose the sender buffer has a capacity of 1 MB and the receiver buffer a capacity of 512 KB; then 50% of the data would be lost at the receiver end, unnecessarily causing retransmission from the sender. Therefore, the sender sends data in chunks of less than 512 KB. This is decided with the help of the window size, which caters to the capacity of the receiver. Flow control is a receiver-related problem: we do not want the receiver to be overwhelmed, so to avoid that situation we control the flow using a window of size "N".


Go Back N (GBN) Protocol - Sender Side


The above figure shows the buffer at the sender end.

“Base” indicates the sequence number of the packet that has not been acknowledged (the first unacknowledged packet).

The window shifts when the packets starting from the base get acknowledged, and this process continues until all acknowledgements are received; hence it is also known as a sliding window protocol. When the edge of the window is reached, no further packets are sent until the window slides forward.

How will the sender be aware of the size of the receiver’s buffer or what should be the window size at the sender end?

Before data is transmitted from one host to another, a connection is first established between the two. During this establishment, information such as window size and buffer size is shared between the two hosts, after which data transmission begins.


Go Back N (GBN) Protocol - Receiver Side


Question: Suppose an application at node A wants to transfer 500 KB of data in 10 KB segments to a receiver at node B, using the Go Back N transport layer protocol. Draw a timing diagram if the window size is 4 and RTT = 150 msec.


 Case # 1: If no packets are dropped in GBN Protocol(timing diagram)


This means there are 500/10 = 50 segments to be sent in total, 4 at a time. Since there is no packet loss, the diagram below shows the movement of segments:
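A quick back-of-envelope check of the no-loss case, under simplifying assumptions (transmission time negligible compared to RTT, one full window sent per RTT, and all four ACKs arriving together):

```python
import math

total_kb, segment_kb, window, rtt_ms = 500, 10, 4, 150

segments = total_kb // segment_kb        # 50 segments in total
rounds = math.ceil(segments / window)    # windows of 4, one window per RTT
transfer_ms = rounds * rtt_ms

print(segments, rounds, transfer_ms)     # → 50 13 1950
```

So under these assumptions the whole transfer takes roughly 13 round trips, about 1950 msec.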


Go Back N (GBN) Protocol timing diagram


 Case # 2: If there is a packet loss at 5 (timing diagram)


Go Back N (GBN) Protocol timing diagram packet loss


 Disadvantage of Go Back N Protocol

Suppose the window size is very big and the bandwidth is large as well, as shown in the figure below:


Disadvantage of Go Back N Protocol


In the above case, if a packet is delivered, its acknowledgement is received after 600 msec. During those 600 msec the sender keeps filling the pipe with more packets, and if the 1st packet drops, all of them must be sent again; this is the performance penalty of GBN. Therefore, if the bandwidth-delay product and N (the window size) are very large, GBN will not be efficient.

The Go Back N protocol uses one timer for all packets, whereas the Selective Repeat protocol uses a separate timer for each packet.


For Further Reading



Pipeline Computing And Protocol
February 24, 2015

Pipeline Protocol

Today I will explain pipeline computing and the pipeline protocol with a hypothetical scenario. Suppose an application wants to send a file of 8 MB, with a packet size of 10,000 bits, a Round Trip Time (RTT) of 40 msec, and a data rate / bandwidth of 1 Gbps.


Pipeline Computing And Pipeline Protocol


In the above scenario, the next packet is sent by the sender only after receiving the acknowledgement of the previous one.

Now let us calculate the efficiency of utilizing the bandwidth:

Us (sender utilization) = (L/R) / (RTT + L/R)

where L = packet length, R = bandwidth / data rate, and RTT = round trip time.

Us = (10000 / 10^9) / ((40 × 10^-3) + 10000 / 10^9) ≈ 0.000249938

Bandwidth utilization = 10^9 × 0.000249938 = 249,937.5 bps ≈ 249.9375 Kbps

The problem with the above scenario (the stop-and-wait protocol) is that the waiting time is very long: out of 1 Gbps, the sender utilizes only about 249.9 Kbps of the bandwidth, and the rest is wasted.
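The utilization calculation above can be reproduced in a few lines, plugging the scenario's numbers into Us = (L/R) / (RTT + L/R):

```python
L = 10_000      # packet length, bits
R = 1e9         # bandwidth / data rate, bits per second (1 Gbps)
RTT = 40e-3     # round trip time, seconds

us = (L / R) / (RTT + L / R)      # sender utilization
throughput_kbps = R * us / 1e3    # effective bandwidth actually used

print(f"Us = {us:.9f}, throughput ~ {throughput_kbps:.4f} Kbps")
# → Us = 0.000249938, throughput ~ 249.9375 Kbps
```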

We can optimize the above scenario by continuing to send packets without waiting for acknowledgements. The sender buffer pushes multiple packets into the network while, at the other end, the receiver buffer sends an acknowledgement packet as soon as it receives each data packet. In this way we make use of the RTT: instead of sending just one packet per round trip, we send a burst of packets. This is known as the pipeline protocol.


Suppose the sender sends a burst of packets through the pipeline protocol, but along the way a few packets, at different positions, get lost. Then what?


We can have two possible solutions to the above problem.

  1. Send only those packets which were not received. Such a protocol is known as the Selective Repeat protocol: the sender retransmits only those packets which were lost or whose acknowledgements were not received. This protocol is explained further in the topic Selective Repeat protocol.
  2. Resend all packets. This kind of protocol is known as the Go Back N protocol. Suppose the sender wanted to transmit 10 packets to the receiver: 4 packets were transferred and acknowledged, but the 5th acknowledgement was not received. The sender, after its timer expires, resends all the packets from the 5th to the 10th, and this continues until every packet has been acknowledged; so it is like going back to the Nth packet and sending the burst of packets from there. This protocol is explained further in the topic Go Back N protocol.


The application layer does not ensure reliable delivery, and neither does IP; it is the transport layer that stores data and ensures the reliability of data transfer.


OSI Layer Pipeline Computing And Pipeline Protocol


Packet loss in Pipeline Protocol

In the above figure, positions 1, 3 and 4 are empty, so the transport layer requests more chunks of data from the application layer, as the packets there have already been acknowledged. But new data cannot go into positions 2, 5 and 6, because if a time-out occurs, the sender will need that same data for retransmission.


For the receiver buffer: packets which have been acknowledged are delivered to the application, and if the receiver gets duplicate packets, one of them is discarded.

Packet loss in Pipeline Protocol (2)

The pipeline protocol has the following consequences:

Sequence Number:

A sequence number is required because the packets are sent in bulk and may not arrive in order while travelling through the pipeline; the sequence number is used to rearrange the packets into their proper order.



Buffering:

The packets being transferred from the sender to the receiver need to be buffered at both ends. At the sender end, a buffer is required to store the data until acknowledgements for all packets have been received. At the receiver end, a buffer is required to store the data until all packets have been correctly received, after which the complete final data is handed to the application layer.


 Range of the sequence number and size of the buffer:

A sequence number is required to identify the packets received by the receiver and to order the data accordingly. But a question arises here: what should be the maximum sequence number, and what should be the size of the buffer holding such packets? In TCP, the sequence number is 32 bits, so there can be 2^32 = 4294967296 distinct sequence numbers. If instead we had an 8-bit sequence number, we could have 2^8 = 256.
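The relationship between the width of the sequence number field and the number of distinct sequence numbers is simply a power of two:

```python
def seq_space(bits):
    """Number of distinct sequence numbers an n-bit field can represent."""
    return 2 ** bits

print(seq_space(32))  # → 4294967296, as with TCP's 32-bit sequence number
print(seq_space(8))   # → 256, for a hypothetical 8-bit field
```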


For Further Reading


Connectionless / Connection oriented Vs Reliable / Unreliable TCP
January 29, 2015


A connection-oriented service basically provides sequence numbers, which are the basis of reliability; reliable communication between a sender and a receiver is built on a connection-oriented service.


Connectionless and connection oriented services transport protocol

When the source and the destination calculate and share certain parameters before transferring packets, that kind of communication is called connection oriented.

If a source sends data to a destination over a connectionless protocol, the source cannot expect any acknowledgement. But if the destination wants to reply, it can do so using the source's IP address and port number.

A reliable protocol caters to two things:

  1. Network congestion control.
  2. Receiver flow control.

At this point, a question should arise in your mind: if a connection-oriented protocol is so much more reliable, then why are connectionless transport layer protocols used at all?


Reasons to use a connectionless transport protocol

The UDP protocol is also known as an unreliable, connectionless transport layer protocol.
Following are some reasons to use UDP:

  1. Finer application-level control over what data is sent and when. For that kind of control we need lightweight, low-overhead protocols. Real-time applications require a minimum sending rate and cannot tolerate delay, but can tolerate some loss; therefore UDP is used here.
  2. No connection establishment – UDP does not need to establish a connection before communicating data packets, and therefore avoids that delay.
  3. No connection state – no buffers, congestion control parameters, etc. are maintained, so a server using UDP can handle many active clients at a time.
  4. Small packet header overhead – since UDP does not keep track of the parameters mentioned above, the UDP header is correspondingly small.
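The small-header point can be made concrete: a UDP header is just four 16-bit fields (source port, destination port, length, checksum), 8 bytes in total, versus a minimum of 20 bytes for TCP. The port numbers and payload below are made up for illustration:

```python
import struct

# UDP header: four 16-bit fields (source port, destination port, length, checksum)
src_port, dst_port, payload = 5004, 5005, b"hello"
length = 8 + len(payload)          # the length field counts header + payload bytes
checksum = 0                       # 0 means "checksum unused" in UDP over IPv4
header = struct.pack("!HHHH", src_port, dst_port, length, checksum)

print(len(header))                 # → 8 bytes, versus TCP's minimum of 20
```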




Why is the UDP protocol unreliable?

  1. It is a connectionless protocol, which means there are chances of packet loss.
  2. Its traffic is one-ended: it sends packets without caring about acknowledgements.

On the other hand, the TCP protocol is much more reliable, as there are acknowledgements for received packets.


Reliable transport protocol

The TCP protocol is also known as a reliable, connection-oriented transport layer protocol. With TCP, flow control and congestion control are done at the transport layer while maintaining a persistent link; it thus caters to reliability through the connection-oriented approach.


Unreliable transport protocol

In the unreliable approach, by contrast, flow control and congestion control are not done at the transport layer; that control lies with the application layer.

UDP takes a message from the application process, attaches the source and destination port number fields for its multiplexing / demultiplexing service, and passes the resulting segment to the network layer. The network layer encapsulates it into an IP datagram and delivers it to the receiving host. If the segment arrives at the receiving host, UDP uses the port number to deliver the segment's data to the correct application process.

UDP is used for RIP routing table updates. RIP updates are sent periodically, and older updates are simply replaced by the most recent ones.


Read Further

Reliable communication at Layer 4
January 18, 2015


Reliable communication at Transport Layer

The IP service model is a best-effort delivery model. This means that IP (Internet Protocol) makes its best effort to deliver data between communicating hosts, but it does not give any kind of guarantee. The IP model therefore guarantees neither orderly delivery of segments / data nor the integrity of the data in the segments.

A transport layer protocol provides reliable data transfer to the application layer even though the network through which the data travels is unreliable; that is, the network protocol loses, garbles, and duplicates packets.

The Transmission Control Protocol allows TCP connections between hosts traversing a congested network to share the link bandwidth equally. UDP (User Datagram Protocol) traffic, by contrast, is unregulated: a UDP transport can send data at any rate.

Extending host to host delivery to process to process delivery is called application multiplexing and demultiplexing.

Just to remember: segments are the data chunks grouped at the transport layer by the TCP protocol, whereas data grouped at the transport layer by the UDP protocol is known as a datagram.


Reliable communication at Layer 4 - Transport Layer


For a reliable transport layer, the key mechanisms are:

  1. Feedback from the receiver, which the sender needs in order to retransmit a packet.
  2. Retransmission of a packet when its timer expires.
  3. Detection of corruption within a received packet.


CRC (Cyclic redundancy check)

Continuing point number 3 from above: What if the packet received at the other side is corrupted?

The CRC technique is used at the receiver end to determine whether the received packet is corrupted or not. Before transmitting a packet, the sender calculates a checksum, usually 16 or 32 bits, attaches that checksum to the packet, and then transmits it. When the packet is received at the other end, the receiver recalculates the checksum to see whether any bit is in error. If the comparison shows the bits are correct, an acknowledgement is sent by the receiver.
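The compute-attach-recheck steps above can be sketched in code. Note a hedge: the heading says CRC, but the description (a 16- or 32-bit checksum attached to the packet) also matches the simpler 16-bit one's-complement Internet checksum used in TCP and UDP headers, which is what this sketch shows:

```python
def inet_checksum16(data: bytes) -> int:
    """16-bit one's-complement Internet checksum (as in TCP/UDP headers).

    This is the simpler checksum, shown here as a stand-in for the
    compute/attach/recheck procedure the article describes; a true CRC
    uses polynomial division instead."""
    if len(data) % 2:
        data += b"\x00"                              # pad to whole 16-bit words
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)     # fold the carry back in
    return ~total & 0xFFFF

# Receiver-side check: the checksum of data plus its attached checksum is 0
payload = b"\x00\x01\xf2\x03"
csum = inet_checksum16(payload)                      # sender computes and attaches
assert inet_checksum16(payload + csum.to_bytes(2, "big")) == 0  # receiver verifies
```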


Click here to read more on Reliable communication at Transport Layer

Network layer protocols – Congestion problem & remedy
January 17, 2015


Network layers

The network layer is the third layer in the OSI model. It is needed because once a data packet goes out from a LAN onto an internet / network, each data packet then needs an IP address to travel with; at that point a MAC address is not enough, and hence we need a network layer.

A transport layer protocol provides logical communication between processes, whereas a network layer protocol provides logical communication between hosts.

The network layer provides the following:
1.    A unique address for every node, known as the IP address.
2.    Routing of traffic between different nodes.


Network layer protocols


The physical and data link layers connect to the next hop, whereas the network layer is end to end.


OSI Layer - Network layer protocols


Congestion avoidance in network layer

The data link layer provides reliable communication from link to link; however, once a packet enters the internet cloud, it is very difficult to make sure the data packet is not lost in the cloud. Therefore, the main concern for reliability is IP packet loss.


Congestion avoidance in network layer
One of the main reasons for congestion and the loss of IP packets is the limited size of router buffers. All routers have two kinds of buffers: one at the input [inbuffer] and the other at the output [outbuffer]. These buffers have a certain limited size, and if they are completely occupied by data, there will be congestion on the paths through this router, and it will start dropping IP packets. This causes loss of IP packets due to congestion.

The solution to the problem of congestion and IP packet loss can be as follows:

1.    ARQ, Automatic Repeat reQuest
ARQ can be used when an IP packet is not received at the other end. For example, if an IP packet is not received by the destination host, that host sends a request to the source host indicating that a certain packet was not received, and the source host then retransmits that packet. This cycle continues until all packets are received at the other end. ARQ will be explained in detail in upcoming lectures.

2.    Hypothetically, if the receiver host could somehow tell the sender host about its buffer configuration and path conditions, the sender could resize its packets accordingly.


Why is there packet loss in the IP cloud?

There are two main reasons for the loss of packets in the IP cloud:

  1. Packet loss happens due to the capacity of router buffers. If the number of packets arriving at a router is greater than the capacity of its input buffer, some packets will be lost.
  2. A packet may also be lost if the TTL (Time To Live) of an IP packet expires. When a router receives a packet, it decrements the TTL value and forwards the packet to the next hop. The packet is dropped by a router when the TTL value reaches zero; that router then sends an ICMP Time Exceeded message back to the sender indicating that the packet was dropped. The sender can resend the packet with a higher TTL value, and this continues until the packet reaches the receiver, which finally acknowledges that it has received the packet.
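The TTL mechanism in point 2 can be sketched as a tiny simulation. The function name and semantics (decrement at each router, drop at zero) are illustrative:

```python
def survives_path(ttl, hops_to_destination):
    """Does a packet with this TTL survive a path through this many routers?

    Illustrative sketch: each router decrements the TTL before forwarding,
    and the packet is dropped the moment the TTL reaches zero."""
    for _ in range(hops_to_destination):
        ttl -= 1                   # each router decrements TTL
        if ttl == 0:
            return False           # dropped; that router reports the expiry
    return True                    # packet made it through all hops
```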


Why there is a loss of Packet in IP cloud


Problem: When the source sends a packet to the destination, the destination sends an acknowledgement packet back to the source. Suppose the acknowledgement packet is dropped in the network; the sender will then resend the packet, but this increases the delay.

Remedy: The source / sender uses a timer to keep track of each packet transmitted into the network. If the reply is not received within the specified time, that packet is sent again by the sender. The value of this timer is estimated by the sender with the help of the RTT (Round Trip Time).

Round trip time (RTT) is the sum of the time a packet takes to reach the destination and the time for the acknowledgement packet to come back to the sender.

The RTT value varies from packet to packet, as packets do not always follow the same path. Since the RTT varies, the sender host needs to keep a moving average of the RTTs, and that value is used for its timer.
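One way to keep such a moving average is an exponentially weighted one. The article only says "moving average"; the particular formula and weight below (alpha = 0.125, as customarily used in TCP's EstimatedRTT) are an assumption, and the sample values are made up:

```python
def estimate_rtt(samples, alpha=0.125):
    """Exponentially weighted moving average of RTT samples.

    alpha=0.125 is TCP's customary weight; treat this specific formula
    as an assumption, since the article only says 'moving average'."""
    est = samples[0]
    for sample in samples[1:]:
        est = (1 - alpha) * est + alpha * sample   # new samples nudge the estimate
    return est

rtt_samples_ms = [100, 120, 110, 200]   # made-up RTT measurements, in msec
print(round(estimate_rtt(rtt_samples_ms), 1))   # → 115.5
```

Note how the one large sample (200 ms) moves the estimate only modestly, which keeps the retransmission timer stable.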

Now suppose the acknowledgement packet is merely delayed; the sender will retransmit the packet anyway, and this time there will be duplicate packets at the destination. The destination host then drops one of the duplicates. Such scenarios can also cause congestion in a network due to duplicate packets.

A point to remember: data originated by an application goes down through the layers of the OSI model to the physical layer, out into the cloud, reaches the destination, and then goes back up from the physical layer to the application layer, where it is finally stored.


Click here to read more

Transport layer protocols – tcp/ip layer model
January 16, 2015



By definition, a protocol is a rule or set of guidelines. In the vocabulary of networks and the internet, protocols are the sets of rules or guidelines used for communication or the transfer of data between two devices in a network.

Data in a network travels in the form of packets. Each packet contains the data, also known as the payload, and a header attached to the payload. The header contains protocol-related information. Protocols are designed to convey the payload in a concise way.


transport layer protocols - Payload


Transport layer protocols

The transport layer provides effective communication to application processes running on different hosts, as if they were directly connected.

A transport layer protocol provides logical communication between application processes running on different hosts. The transport layer is also known as Mux / Demux because it merges the data coming from different ports toward the destination, and vice versa.


Virtual circuits

Virtual circuits - TCP/IP

Traffic lanes on a road resemble a physical link: the advantage of lanes is that we get different traffic speeds at the same time. Virtual circuits follow a similar concept, providing logical partitioning of a large bandwidth into smaller chunks. The figure on the right gives a clear picture of virtual circuits.


Agenda & Working of Transport Layer

Let us look at this example that will give you a clear picture of the working of transport layer protocols:
"Suppose a user opens different pages of the website cnn.com in a browser. When we type cnn.com, the website name is resolved using DNS."
Here you need to understand how the transport layer distinguishes between the different pages of cnn.com (sports, entertainment, business, etc.) being opened in different tabs of a browser. At this point we have one application but three different processes running simultaneously, as shown in the figure below:


Transport layer protocols working - tcp/ip layer model


At the transport layer we have to demarcate processes, and therefore it acts as a Mux (multiplexer) and Demux (demultiplexer).

This process is analogous to a post office. All letters from one area go to the same post office, where they are separated based on their addresses or postal codes. Similarly, the TCP/IP stack assigns different numbers to different processes, and in this way they can be separated.

Technically, the transport layer opens a different socket for each process. The concept of sockets is very simple and will be explained in a later post. For now, understand that a socket is the combination of an IP address and a port. In the above scenario, cnn.com has a particular IP address, but the different processes are assigned different port numbers in order to communicate properly with the different pages (entertainment, sports, etc.).
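The demultiplexing idea can be sketched as a simple lookup keyed by (IP address, port). The address and port numbers below are made up for the cnn.com tabs example:

```python
# Sketch: demultiplexing by (destination IP, destination port).
# The IP address and port numbers are made up for illustration.
sockets = {
    ("203.0.113.10", 5001): "sports tab",
    ("203.0.113.10", 5002): "entertainment tab",
    ("203.0.113.10", 5003): "business tab",
}

def demux(dst_ip, dst_port):
    """Deliver a segment to the process owning the matching socket."""
    return sockets.get((dst_ip, dst_port), "no such socket: drop")
```

Same IP, different ports: the port number alone is what separates the three processes of the one application.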


Transmission Control Protocol (TCP)

An application process uses the logical communication provided by the transport layer to send messages without worrying about the physical layer or the infrastructure that carries the data. Transport layer protocols are implemented in the end systems, not in the routers.

Following are some important concepts which will help you better understand the working of transport layer protocols:


Relation between transport and application layer

As we know, a process is a running instance of a program. The application layer sends chunks of data to the transport layer. The transport layer stores the data in its buffer and establishes a connection with the transport layer of the receiver node. Once the connection is established between the transport layers of the two hosts, the data begins to be transferred.


Reliable communication at layer 4

The transport layer maintains the reliability of the data transmitted from sender to receiver, making sure the data is properly received by the receiving host.


Congestion control

Controlling the transmission rate of transport layer entities in order to avoid congestion, or to recover from it, within the network.

Problem: How to avoid congestion?
Example: All students were browsing in the lab; if all of them use torrents at the same time, the bandwidth dynamics change drastically and the network becomes congested.

Congestion can be avoided by implementing different algorithms at the transport layer, such as the Selective Repeat algorithm. We will study this in detail in later chapters.


To read further Click here

Virtualization – VM Hypervisor
January 4, 2015


What is Virtualization?

In today's world there is high demand for multiple operating systems, servers, or even multiple storage devices. We cannot physically have all of these in multiple quantities, so the concept of virtualization was introduced.

With the help of virtualization, a single piece of hardware can be divided among multiple operating systems, storage devices, network resources, etc., fulfilling the requirements of multiple users.


What is a hypervisor?

An essential requirement for cloud computing is the ability of a single physical machine to run multiple virtual machines. This is attained through virtualization, where a single physical computer appears to be multiple ones. Physically adding multiple processors to one machine is not practical, so the machine's resources are instead shared through scheduling. Virtualization offers this at low cost, along with support for heterogeneous environments.

A hypervisor is software that creates and runs virtual machines. The computer running the hypervisor is called the host machine, while each virtual machine running on it is called a guest machine. The hypervisor manages resource allocation and the memory mapping between the guest machines and the host operating system (OS).

It is also known as a virtual machine manager. Hypervisor software varies across operating systems, but its basic function is always to turn a single piece of hardware into multiple virtual machines. The hypervisor controls the host machine automatically, allocating the required processor time, memory and other resources to all guest operating systems running on the host computer without failures or problems.


Types of hypervisor

There are two types of hypervisor, as follows:


Type 1 hypervisor or bare-metal hypervisors

A Type 1 hypervisor has direct access to all the hardware and installs directly on the computer. Type 1 hypervisors handle resource management and provide the full advantages of portability and hardware abstraction while running multiple virtual machines.


Type 2 hypervisors or hosted hypervisors

A Type 2 hypervisor also caters to the execution of multiple virtual machines, but it does not provide direct access to the hardware, which ultimately incurs more overhead in running the guest OS. So, with a Type 2 hypervisor, the guest OS does not run at its full potential.






To read more, click here

