Temperature inversion – concept and phenomenon


To understand the phenomenon of temperature inversion, let us first understand the concepts governing the conductivity of semiconductor devices with respect to changes in temperature.

Phenomena governing semiconductor conductivity vs. temperature: In all, there are two phenomena that govern the conductivity of any device:
  • Carrier concentration: Electrons and holes are the charge carriers in a semiconductor; the more carriers there are, the greater is the conductivity of the material. A rise in temperature causes more bonds to break due to the increased vibration of the lattice, resulting in a higher number of carriers. This factor, therefore, tends to increase the conductivity with increasing temperature.
  • Mobility of the carriers: Mobility is another measure of conductivity. The greater the mobility of the carriers, the faster they move and the more they contribute to the overall current; hence, the greater is the conductivity of the material. With an increase in temperature, lattice vibrations increase, reducing the mobility of free carriers. This factor, therefore, tends to decrease the conductivity with increasing temperature.
Summing up, the trend of conductivity with temperature depends upon which of the above two factors dominates. Based upon their conductivity, materials can be divided into three types: conductors, insulators and semiconductors. Let us explore how the conductivity of each of these materials depends on the above two factors.

Conductivity of conductors (metals): Metals have an abundance of loosely attached, nearly free electrons (commonly called the electron sea), which act as the carriers of electric current. The change in carrier concentration with temperature is negligible, so the mobility factor dominates. Hence, the conductivity of conductors decreases with increase in temperature.

Conductivity of insulators (non-metals): Insulators have almost no free carriers, as their electrons are tightly bound to atoms. The conductivity of insulators is therefore negligible. However, the number of free carriers increases exponentially with temperature, and this increase in carrier concentration outpaces the decrease in mobility. So, the conductivity of insulators increases with rise in temperature.

Conductivity trend in semiconductors: Semiconductors have conductivity in between that of metals and insulators. They are a class of insulating materials in which electrons are loosely bound to atoms. Only a small amount of energy is needed to break these bonds and supply free carriers; this energy can come from the potential difference applied across the semiconductor, or from temperature itself, in the form of thermal energy. So, either of the two factors can dominate, depending upon the voltage applied across the semiconductor. The decrease or increase in the conductivity of a semiconductor depends upon which of the two factors dominates. For CMOS transistors, the carrier concentration effect shows up as a threshold voltage that decreases with temperature.

At high applied voltage levels, there is an abundance of free charge carriers as a result of the energy supplied by the potential difference. In this state, the carrier concentration does not change significantly with temperature, so the mobility factor dominates, decreasing the conductivity with temperature. In other words, at high applied voltages, the conductivity of semiconductors decreases with temperature.

Similarly, in the absence of any applied voltage, or with little voltage applied, the semiconductor behaves like an insulator with very few carriers, namely those resulting from thermal energy alone. Here, the increase in carrier concentration is the dominating factor. So, at low applied voltages, the conductivity of semiconductors increases with temperature.


The concept of temperature inversion: With reference to the discussion above, at older, higher technology nodes, the operating voltage levels used to be high. So, traditionally, the delay of CMOS logic circuits increased with temperature, and the most timing-critical corner was worst process, minimum voltage and maximum temperature. However, with the scaling down of technology, the voltage levels have also scaled down. Due to this, at lower nanometer technology nodes, both factors come into play. At the lower range of operating voltage levels, the carrier concentration factor dominates, so delay decreases with rising temperature. In other words, at lower technology nodes, the most setup-timing-critical corner has become worst process, minimum voltage and minimum temperature. This shift in the setup-critical corner is, in VLSI jargon, termed temperature inversion.
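To make the two competing factors concrete, below is a minimal numeric sketch in C++. The alpha-power drive-current model and all coefficients (mobility exponent, threshold-voltage slope and so on) are assumptions purely for illustration, not silicon data; the point is only the trend of drive current with temperature at a high and a low supply voltage.

#include <initializer_list>
#include <iostream>
#include <cmath>

int main() {
    // Illustrative alpha-power model: Id ~ mobility(T) * (Vdd - Vth(T))^1.3
    // All coefficients below are assumed values for illustration only.
    const double T0 = 300.0;                 // reference temperature (K)
    for (double vdd : {1.2, 0.6}) {          // high vs. low supply voltage (V)
        for (double T : {233.0, 398.0}) {    // -40 degC and 125 degC, in kelvin
            double mobility = std::pow(T / T0, -1.5);         // falls with temperature
            double vth = 0.45 - 0.001 * (T - T0);             // falls with temperature
            double id = mobility * std::pow(vdd - vth, 1.3);  // relative drive current
            std::cout << "Vdd = " << vdd << " V, T = " << T
                      << " K, relative drive = " << id << std::endl;
        }
    }
}

With these assumed numbers, at Vdd = 1.2 V the drive current falls with temperature (mobility dominates, so delay rises), while at Vdd = 0.6 V it rises with temperature (the falling threshold voltage dominates, so delay falls): exactly the inversion of the delay-temperature trend described above.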


Can hold check be frequency dependent?


We often encounter people arguing that the hold check is frequency independent. However, this is only partially true: it holds only for zero-cycle hold checks. By a zero-cycle hold check, we mean that the hold check is performed on the same edge at which data is launched. This is the case for timing paths between same-polarity registers, e.g. between positive edge-triggered flops. Figure 1 below shows the timing checks for a data path launched from a positive edge-triggered flip-flop and captured at a positive edge-triggered flip-flop. The hold timing, in this case, is checked at the same edge at which data is launched, so changing the clock frequency will not cause the hold check to change.

Figure 1: Setup and hold checks for a positive edge-triggered to positive edge-triggered flip-flop path (setup check is single-cycle; hold check is zero-cycle)
Most of the paths in today's designs are of this type, and the exceptions to the zero-cycle hold check are not too many. There are hold checks against the previous edge as well; however, these are very relaxed compared to the zero-cycle hold check and hence are usually not mentioned. Also, hold checks on the next edge are impossible to meet considering cross-corner delay variations. So, seldom do we hear that the hold check is frequency dependent. Let us discuss the different scenarios of frequency-dependent hold checks:

  1. From positive edge-triggered flip-flop to negative edge-triggered flip-flop and vice-versa: Figure 2 below shows the setup and hold checks for a timing path from a positive edge-triggered flip-flop to a negative edge-triggered flip-flop. A change in frequency changes the distance between the two adjacent edges; hence, the hold check changes. The hold timing condition for the case below is given as:

Tdata + Tclk/2 > Tskew + Thold
or
Tslack = Tdata + Tclk/2 - Tskew - Thold

Thus, the clock period comes into the picture in the calculation of hold timing slack.

Figure 2: Setup and hold checks for a timing path from a positive edge-triggered to a negative edge-triggered flip-flop (both checks are half-cycle; setup is checked on the next edge, whereas hold is checked on the previous edge)

Similarly, for timing paths launching from a negative edge-triggered flip-flop and being captured at a positive edge-triggered flip-flop, the clock period comes into the picture. However, this check is very relaxed most of the time. It is evident from the above equation that for the hold slack to be negative, the skew between launch and capture clocks has to be greater than half a clock cycle, which is a very rare scenario. Even at a 2 GHz frequency (Tclk = 500 ps), the skew has to be greater than 250 ps, which is still very rare.
Coming to latches, the hold check from a positive level-sensitive latch to a negative edge-triggered flip-flop is half-cycle. Similarly, the hold check from a negative level-sensitive latch to a positive edge-triggered flip-flop is half-cycle. Hence, the hold check in both of these cases is frequency dependent. A numeric sketch of the flip-flop case above follows.
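Here is a minimal C++ sketch of the above hold equation; all timing values are assumed purely for illustration:

#include <iostream>

int main() {
    // All times in ps; values assumed for illustration
    double t_clk = 500.0;   // clock period at 2 GHz
    double t_data = 50.0;   // data path delay from the launch flop
    double t_skew = 100.0;  // capture clock arrives later than launch clock
    double t_hold = 30.0;   // hold requirement of the capture flop

    // Hold slack for a positive-to-negative edge-triggered path:
    double slack = t_data + t_clk / 2.0 - t_skew - t_hold;
    std::cout << "Hold slack = " << slack << " ps" << std::endl; // positive => check met
}

With these numbers the slack is +170 ps; the skew would have to exceed Tdata + Tclk/2 - Thold = 270 ps for the check to fail, in line with the rarity argument above.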

  2. Clock gating hold checks: When data launched from a negative edge-triggered flip-flop gates a clock at an OR gate, hold is checked on the positive edge following the edge at which data is launched, as shown in figure 3. This check is frequency dependent.

Figure 3: Clock gating hold check between data launched from a negative edge-triggered flip-flop and the clock at an OR gate (setup check is single-cycle; hold check is half-cycle, checked on the next clock edge with respect to the launch clock edge)

Similarly, data launched from a positive edge-triggered flip-flop and gating the clock at an AND gate forms a half-cycle hold check. However, this kind of check is not possible to meet under normal scenarios considering cross-corner variations.

  3. Non-default hold checks: Sometimes, due to architectural requirements (e.g. multi-cycle paths for hold), the hold check is non-zero-cycle even for positive edge-triggered to positive edge-triggered paths, as shown in figure 4 below.
Figure 4: Non-default hold check with multi-cycle path of 1 cycle specified







C function that converts a hexadecimal value to a decimal value

Hexadecimal to decimal conversion is something that is often needed in hardware. The functions below can be used for hexadecimal to decimal conversion in C:
#include <stdio.h>
#include <string.h>

/* Returns the value of a single hex digit, or -1 for an invalid character */
int get_value(char a)
{
    if (a >= '0' && a <= '9')
        return a - '0';
    else if (a >= 'A' && a <= 'F')
        return a - 'A' + 10;
    else if (a >= 'a' && a <= 'f')
        return a - 'a' + 10;
    else
        return -1;
}

/* Converts a hexadecimal string to its decimal value; returns -1 on invalid input */
int htoi(const char a[])
{
    int temp = 0;
    int len = strlen(a);
    for (int i = 0; i < len; i++) {
        int digit = get_value(a[i]);
        if (digit == -1)
            return -1;
        temp = temp * 16 + digit;
    }
    return temp;
}

int main()
{
    char a[] = "f0";
    int b = htoi(a);
    if (b == -1)
        printf("invalid input\n");
    else
        printf("decimal value is %d\n", b);
    return 0;
}

Interesting programming quiz: Array Bound Read Error

Problem: Can you figure out what is wrong with the following piece of code?


#include <iostream>

int main() {
    int a[5] = {1, 2, 3, 4, 5};
    for (int i = 4; a[i] >= 0 && i >= 0; i--) {
        std::cout << "ith element of array is " << a[i] << std::endl;
    }
}

I would suggest you try it yourself before scrolling down to see the answer. It's quite interesting.
......
......
......
......
......
......
......
......
......
......
......
......
......
......
......
......
......
......
Answer: As one can figure out, the intention is to print the array elements from the end until a negative number is hit. At first look it may seem fine, but unfortunately it will end up in an ABR (Array Bound Read) error.

Explanation: After the completion of the 5th iteration, i.e. when i = 0, the program decrements i, so i becomes -1. It then tries to check the loop condition, which results in reading a[-1]. Since an array can have indexes only greater than or equal to 0 (and less than its size), this is an out-of-bounds read. Trying to read array elements outside the allowed indexes is termed an Array Bound Read error. Such conditions should be avoided because the behaviour is undefined: the result can be garbage, and the program can crash anytime; if you are lucky, it may even run successfully. It is all up to your luck. Instead, it should be

for (int i = 4; i >= 0 && a[i] >= 0; i--) {

i.e. first check the index value and only then do the array access operation.

Here, with the above solution, one more interesting thing comes up. In an AND (&&) operation, condition1 is evaluated first; only if it is true does evaluation go on to condition2, otherwise the whole expression is false right there. This is known as short-circuit evaluation.

For example,
#include <iostream>

int main() {
    int i = 0;
    int j = 1;
    if ((i == 1) && (++j == 3)) {
        std::cout << "inside if" << std::endl;
    }
    std::cout << "i is " << i << " and j is " << j << std::endl;
}
Output:
i is 0 and j is 1


Here, as you can see, control does not go into the if branch. Since condition1 (i == 1) is false, condition2 is not even checked, i.e. the value of j is not incremented.


Internally, the compiler might be doing the following kind of transformation to evaluate the && operation:

bool cond = (i == 1);
if (cond) {
    cond = (++j == 3);
}
if (cond) {
    std::cout << "inside if" << std::endl;
}

Function Overloading

Function overloading is a feature of many programming languages, including C++. It allows a user to write multiple functions with the same name but with different signatures. On calling the function, the version whose signature matches the call is invoked. A function's signature includes its parameters/arguments, but it does not include the return type. Signatures may differ in the number of parameters or in the types of parameters. Let us illustrate with the help of a few examples:

Example 1: The two functions below are overloaded, since the types of their arguments differ:
int func(int a, int b);
double func(double a, double b);
Example 2: The two functions below are overloaded because they differ in the number of arguments:
int func(int a, int b);
int func(int a, int b, int c);
Example 3: The two functions below are not overloaded, because they differ only in their return type; the number and types of the arguments are the same. Declaring both is a compile error:
void func(int a, int b, int c);
int func(int a, int b, int c);
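As a quick runnable sketch of overload resolution, using the declarations from example 1 (function bodies are illustrative):

#include <iostream>

int func(int a, int b) {
    std::cout << "func(int, int) called" << std::endl;
    return a + b;
}

double func(double a, double b) {
    std::cout << "func(double, double) called" << std::endl;
    return a + b;
}

int main() {
    func(1, 2);     // resolves to func(int, int)
    func(1.5, 2.5); // resolves to func(double, double)
}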


Please note that C does not support function overloading because there is no concept of name mangling in C. C++, on the other hand, does support function overloading, as name mangling is supported in C++. A mangled name is formed from the function's name and its signature, and is used by the C++ compiler internally to refer to the function. For instance, in example 1 above, the mangled names of the functions will look something like those shown below:
func__int_int
func__double_double

This way, the C++ compiler can handle function overloading.


Note: the above are not the actual mangled names; the compiler may generate more complicated names. This is just for understanding.



On-chip variations – the STA takeaway

Static timing analysis of a design is performed to estimate its working frequency after the design has been fabricated. Nominal delays of the logic gates as per characterization are calculated, and some pessimism is applied on top to see if there will be any setup and/or hold violation at the target frequency. However, not all the transistors manufactured are alike; nor do all transistors receive the same voltage or sit at the same temperature. The characterized delay is just the delay with the maximum probability. The delay variation of a typical sample of transistors on silicon follows a curve like the one shown in figure 1. As shown, most of the transistors have nominal characteristics. Typically, timing signoff is carried out with some margin; by doing this, the designer tries to ensure that a larger fraction of transistors is covered. There is a direct relationship between margin and yield: the greater the margin taken, the larger the yield. However, after a certain point, there is not much increase in yield from increasing margins; beyond that point, the margin costs the designer more than it saves through increased yield. Therefore, margins should be chosen so as to give the maximum benefit.

Figure 1: Number of transistors vs. delay for a typical sample of silicon transistors (most transistors have close to nominal delay; the probability of a given delay decreases as it gets farther from nominal)


We have discussed above how variations in the characteristics of transistors are taken care of in STA. These variations in transistors' characteristics as fabricated on silicon are known as OCV (On-Chip Variations). The reason for OCV, as discussed above, is that all transistors on chip are not alike in geometry, in their surroundings, or in their position with respect to the power supply. The variations are mainly caused by three factors:
  • Process variations: The process of fabrication includes diffusion, drawing out of metal wires, gate drawing etc. The diffusion density is not uniform throughout the wafer. Also, the width of a metal wire is not constant; let us say it is 1 µm ± 20 nm. So, the metal delays are bound to lie within a range rather than take a single value. Similarly, the diffusion regions of different transistors will not have exactly the same dopant concentrations. So, all transistors are expected to have somewhat different characteristics.
  • Voltage variation: Power is distributed to all transistors on the chip with the help of a power grid. The power grid has its own resistance and capacitance, so there is a voltage drop along it. Transistors situated close to the power source (or those having less resistive paths from the power source) receive a higher voltage than other transistors. That is why there is delay variation across transistors.
  • Temperature variation: Similarly, all the transistors on the same chip cannot be at the same temperature. So, there are variations in characteristics due to the variation in temperature across the chip.


How to take care of OCV: To tackle OCV, the STA of the design is closed with some margins. Various margining methodologies are available. One of these is applying a flat margin over the whole design; however, this is overly pessimistic, since some cells are more prone to variations than others. Another approach is applying cell-based margins, using silicon data about which cells are more prone to variations. There also exist methodologies based on other theories, e.g. location-based margins and statistically calculated margins. As STA advances, more accurate and faster methods keep coming into existence.
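As a minimal sketch of how flat margins enter a setup calculation, consider the C++ fragment below; the derate factors and path delays are assumed values for illustration, not a signoff recipe:

#include <iostream>

int main() {
    // All times in ns; values assumed for illustration
    double clock_period = 2.5;
    double launch_path = 2.0;    // data path delay from the launch flop
    double capture_clock = 0.3;  // clock path delay to the capture flop
    double setup_time = 0.1;     // setup requirement of the capture flop

    // Flat OCV margins: assume the launch path is slow and the capture clock is fast
    double late_derate = 1.08;   // +8% applied on the launch side
    double early_derate = 0.92;  // -8% applied on the capture side

    double slack = (clock_period + capture_clock * early_derate)
                 - (launch_path * late_derate + setup_time);
    std::cout << "Setup slack with flat OCV margins = " << slack << " ns" << std::endl;
}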

Latency and throughput – the two measures of system performance

Performance of a system is one of the most stringent criteria for its success. While performance increases desirability among customers, cost is what makes the system affordable. This is why system designers aim for maximum performance with the available resources, under power and area constraints. There are two related parameters that determine the performance of a system:

Throughput: Throughput is a measure of the productivity of the system. In electronic/communication systems, throughput refers to the rate at which output data is produced; the higher the throughput, the more productive the system. In most cases, it is measured via the time difference between two consecutive outputs (the nth and (n+1)th). Throughput equivalently refers to the rate at which input data can be applied to the system.
Let us discuss with the help of an example:

Figure: Throughput summary diagram


The figure above depicts the throughput of a 3-number adder. The result of the input set applied in the 1st clock cycle appears at the output in the 3rd clock cycle; the next input set is applied in the 4th clock cycle and its output comes in the 6th clock cycle. Hence, the throughput of the above design is one output per 3 clock cycles. As we can see from the diagram, the first input is applied in the first clock cycle and the 2nd input is applied in the 4th clock cycle. Hence, we can also say that throughput is the rate at which input data can be applied to the system.

Latency: Latency is the time taken by a system to produce an output after the input is applied; it is a measure of the delay response of a design. The higher the latency, the slower the system. In synchronous designs, it is measured in terms of the number of clock cycles. In combinational designs, latency is basically the propagation delay of the circuit. In non-pipelined designs, latency improvement is a major area of concern. In more general terms, it is the time difference between the output and the input.
Relationship between throughput and latency: Latency and throughput are inter-related. It is desirable to have maximum throughput and minimum latency; improving either may make the system costlier. Let us take an example: consider a park with 3 rides, where each ride takes 5 minutes. A child takes these rides sequentially, i.e. ride 1, ride 2 and then ride 3. First, let us assume that only one child is allowed in the park at a time; while he is taking a ride, no one else may enter. Thus, the throughput of the park is one child per 15 minutes and the latency is 15 minutes. Now, let us assume that as soon as a child finishes ride 1, another child is allowed to enter the park. In this case, the throughput becomes one child per 5 minutes, whereas the latency is still 15 minutes. Thus, we have increased the throughput of the system without affecting latency, and at the same cost. The arithmetic is sketched below.
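The park example maps directly onto a pipeline; here is a toy sketch of the arithmetic, with the numbers taken from the example above:

#include <iostream>

int main() {
    const int rides = 3;            // pipeline stages
    const int minutes_per_ride = 5; // time per stage

    int latency = rides * minutes_per_ride; // 15 minutes in both scenarios

    // One child in the park at a time: the next child waits for all rides to finish
    int serial_interval = latency;             // one child per 15 minutes

    // A child enters as soon as ride 1 is free: the rides act as pipeline stages
    int pipelined_interval = minutes_per_ride; // one child per 5 minutes

    std::cout << "Latency: " << latency << " minutes" << std::endl;
    std::cout << "Serial throughput: 1 child per " << serial_interval << " minutes" << std::endl;
    std::cout << "Pipelined throughput: 1 child per " << pipelined_interval << " minutes" << std::endl;
}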

What is Logic Built-in Self Test (LBIST)?

LBIST stands for Logic Built-In Self Test. As VLSI marches into deep sub-micron technologies, LBIST is gaining importance due to the unique advantages it provides. LBIST refers to a self-test mechanism for testing random logic: the logic can be tested with no intervention from the outside world. In other words, a piece of hardware and/or software is built into an integrated circuit to test itself. By random logic is meant any form of hardware (logic gates, memories etc.) that can form a part or the whole of the chip. A generic LBIST system is implemented using the STUMPS (Self-Test Using MISR and PRPG) architecture. A typical LBIST system is shown in the figure below:

Figure 1: A typical LBIST system, consisting of a PRPG, a CUT and a MISR, controlled by an LBIST controller


Components of an LBIST system: A typical LBIST system comprises the following (a toy PRPG/MISR sketch follows this list):
  1. Logic to be tested, or, as it is called, the Circuit Under Test (CUT): the logic to be tested through LBIST. Any random logic residing on the chip can be brought under LBIST following a certain procedure.
  2. PRPG (Pseudo-Random Pattern Generator): A PRPG generates the input patterns that are applied to the internal scan chains of the CUT for LBIST testing. In other words, the PRPG acts as the Test Pattern Generator (TPG) for LBIST. A PRPG can use either a counter or an LFSR for pattern generation.
  3. MISR (Multi-Input Signature Register): The MISR compacts the response of the device to the test patterns applied. An incorrect MISR output indicates a defect in the CUT. In classical language, the MISR acts as the ORA (Output Response Analyzer) for LBIST testing.
  4. A master (LBIST controller): The controller controls the functioning of the LBIST, i.e. clock propagation, initialization, and the flow of scan patterns in and out of the LBIST scan chains.
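To make the PRPG and MISR concrete, here is a toy C++ sketch; the 4-bit width, the feedback polynomial and the stand-in CUT are all assumptions for illustration and bear no relation to a production STUMPS implementation:

#include <cstdint>
#include <iostream>

// Toy 4-bit LFSR used as a PRPG (feedback taps chosen for a maximal-length sequence)
uint8_t lfsr_step(uint8_t state) {
    uint8_t feedback = ((state >> 3) ^ (state >> 2)) & 1;
    return ((state << 1) | feedback) & 0xF;
}

// Toy MISR step: shift the signature as an LFSR, then fold in the CUT response
uint8_t misr_step(uint8_t signature, uint8_t response) {
    return lfsr_step(signature) ^ response;
}

// Stand-in for the Circuit Under Test: any deterministic logic works here
uint8_t cut(uint8_t pattern) {
    return (pattern + 3) & 0xF;
}

int main() {
    uint8_t prpg = 0x1;      // PRPG seed; must be non-zero for an LFSR
    uint8_t signature = 0x0; // MISR signature register

    for (int i = 0; i < 15; i++) {          // one full period of the 4-bit LFSR
        signature = misr_step(signature, cut(prpg));
        prpg = lfsr_step(prpg);
    }
    // A defect in the CUT (or an 'X' in its response) would change this signature
    std::cout << "Final signature: 0x" << std::hex << int(signature) << std::endl;
}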

One of the most stringent requirements in LBIST testing is the prohibition of X-sources: there cannot be any source of 'X' during LBIST testing. By 'X' is meant a definite but unknown value: it might be either '0' or '1', but it is not known which value is being propagated. All X-sources are masked, and a known value is propagated instead during LBIST.

Why 'X' is prohibited in LBIST: As stated above, no 'X' can propagate during LBIST testing. The reason is that LBIST uses a MISR to calculate the signature of the LBIST patterns. Since the resulting signature must be unique, any unknown value can corrupt it. So, there cannot be any 'X' in LBIST testing.

Advantages of LBIST: As stated above, there are many unique advantages of LBIST that make it desirable, especially in safety-critical designs such as those used in automobiles and aeroplanes. LBIST offers the advantages listed below:
  • LBIST provides a self-test capability to the logic inside the chip; thus, the chip can test itself without any external control or interference.
  • It provides the ability to test at higher frequencies, reducing test time considerably.
  • LBIST can run while the chip is in the field, operating functionally. Thus, it is very useful in safety-critical applications, where faults developed in the field can be detected at startup, before the chip goes into functional mode.

Overheads due to LBIST: Along with its many advantages, LBIST has some overheads, as mentioned below:
(i) The LBIST implementation involves some on-chip hardware to control LBIST, which has area and power impacts; in other words, the cost of the chip increases.
(ii) 'X'-masking involves adding extra logic gates on already timing-critical functional signals, impacting timing as well.
(iii) Another disadvantage of LBIST is that even the on-chip test equipment may fail. This is not a problem when testing with outside equipment with proven test circuitry.

Worst Slew Propagation


Worst slew propagation is a phenomenon in static timing analysis whereby the worst of the slews at the input pins of a gate is propagated to its output. As we know, the output slew of a logic cell is a function of its input slew and output load. For a multi-input logic gate, the output slew should, in principle, be different for the timing paths through its different input pins. However, this is not the case. The reason is that, to maintain a single timing graph, each node in the design can have only one slew. So, to cover the worst scenario for setup timing, the slew at the output pin is taken to be the one caused by the input pin having the worst slew, even if the timing path for which the output slew is being calculated is not through that input pin. Similarly, for hold timing analysis, the best of the slews caused by the input pins is used; we can refer to this as best slew propagation.

Let us illustrate with the help of a 2-input AND gate. As shown in figure below, let the slews at the input pins be denoted as SLEW_A and SLEW_B and that at the output pin as SLEW_OUT. Now, as we know:

SLEW_OUT = func(SLEW_A), if A toggles causing OUT to toggle
SLEW_OUT = func(SLEW_B), if B toggles causing OUT to toggle

However, even though the timing path shown is through pin A, the resultant slew at the output, SLEW_OUT, will be calculated as:

SLEW_OUT = func(SLEW_A), if func(SLEW_A) > func(SLEW_B)
         = func(SLEW_B), if func(SLEW_B) > func(SLEW_A)



Figure 1: Worst slew propagation: the output slew is the worst of the slews caused by each input pin

One may view this as over-pessimism inserted by the timing analysis tool. Path-based timing analysis does not have the worst slew propagation phenomenon, as it calculates the output slew for each timing path rather than keeping one slew per node.

Similarly, when performing timing analysis for hold violations, the best of the slews at the inputs is propagated to the output, as mentioned before. A small numeric sketch follows.
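Here is a minimal C++ sketch of the idea, with an assumed linear slew transfer function and made-up input slews:

#include <algorithm>
#include <iostream>

// Assumed slew transfer function of the gate: output slew grows with input slew
double out_slew(double in_slew) { return 0.04 + 0.6 * in_slew; }

int main() {
    double slew_a = 0.20; // ns, slew at input pin A (assumed)
    double slew_b = 0.35; // ns, slew at input pin B (assumed)

    // Graph-based analysis keeps a single slew per node:
    double setup_slew = std::max(out_slew(slew_a), out_slew(slew_b)); // worst slew, for setup
    double hold_slew  = std::min(out_slew(slew_a), out_slew(slew_b)); // best slew, for hold

    std::cout << "Slew at OUT used for setup = " << setup_slew << " ns" << std::endl;
    std::cout << "Slew at OUT used for hold  = " << hold_slew  << " ns" << std::endl;
}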




Depletion MOSFET and negative logic – why is it not possible?


As we know, a depletion MOSFET conducts current even with its gate and source at the same voltage level. To cut off the current in a depletion MOSFET, a voltage has to be applied at the gate so as to exhaust the carriers already existing inside the channel. On the other hand, an enhancement-type MOSFET is cut off when its gate and source are at the same voltage.
Taking the example of NMOS: for a depletion MOS with source and gate at the same level, there is still a channel available; hence, it conducts electric current, and the drain attains the potential of the source. To bring it to cut-off, a potential lower than that of the source has to be applied at the gate (considering the source at some potential 'X'). So we can say: "when gate is 1 and source is 1, drain is 1". On the other hand, when source is 1 and gate is 0, the drain attains high impedance. The reverse is true for PMOS.
Similarly, with the same logic, for an enhancement NMOS, "when gate is 1 and source is 0, the drain attains potential 0"; and "when gate is 0 and source is 0, the drain is at high impedance". The reverse is true for PMOS. The table below summarizes all the cases:

Source voltage | Gate voltage | Drain (enhancement NMOS) | Drain (enhancement PMOS) | Drain (depletion NMOS) | Drain (depletion PMOS)
0 | 0 | Z | Z | 0 | 0
0 | 1 | 0 | Z | 0 | Z
1 | 0 | Z | 1 | Z | 1
1 | 1 | Z | Z | 1 | 1

Thus, we can say that it is due to the inherent properties of NMOS and PMOS that they cannot be used to create negative-level logic.

Enhancement and depletion MOSFETs


A MOSFET (Metal Oxide Semiconductor Field Effect Transistor) is a 4-terminal device with source, drain, gate and body as its terminals. It is used for the amplification or switching of electronic signals and is the most common transistor in both digital and analog integrated circuits. The generic structure of a MOSFET is shown in figure 1. The source and drain terminals are separated by a channel, whose conduction is determined by the carrier density in it, which in turn is a function of the voltage applied at the gate terminal. The body terminal is normally connected to the source so that only minimal leakage current flows.


Figure 1: A MOSFET, with its 4 terminals: source, drain, gate and body (bulk)

MOSFETs are categorized into two categories based upon the nature of channel:
        1) Enhancement mode MOSFETs: In an enhancement MOSFET, the channel is devoid of carriers and has to be created by applying a suitable voltage difference between the gate and source terminals. With gate and source at the same potential, only minimal current flows. However, when a positive potential difference greater than the threshold voltage of the MOSFET is applied, a channel is created; current will then flow between source and drain if there is a potential difference between them. Figure 2 below shows how a channel is formed on applying a voltage between the gate and source terminals.

Figure 2: Channel formation in Enhancement MOSFET


        2) Depletion mode MOSFETs: In a depletion mode MOSFET, the channel is already present, created with the help of ion implantation. Even with gate and source at the same voltage, it conducts current. To stop conduction, the channel has to be depleted of carriers by applying a suitable potential at the gate.






Negative gate delay – is it possible?

As discussed in our post 'propagation delay', propagation delay is the time difference from the input reaching 50% of its final value to the output reaching 50% of its final value. It seems a bit absurd to have a negative value of propagation delay, as it suggests the effect happening before the cause: common sense says that the output should change only after the input. However, under certain special conditions, it is possible to have negative delay. In most such cases, we have one or more of the following conditions:
i) A high drive strength transistor
ii) A slow transition at the input
iii) A small load at the output

Under the above-mentioned conditions, the output is expected to transition faster than the input signal, which can result in negative propagation delay. An example negative delay scenario is shown in the figure below. The output signal starts to change only after the input signal; however, the faster transition of the output signal causes it to attain the 50% level before the input signal, thus resulting in negative propagation delay. In other words, negative delay is a relative concept. The arithmetic is sketched after the figure.
Figure 1: Input and output transitions showing negative propagation delay
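A small numeric sketch of such a scenario, approximating both waveforms as linear ramps; the start times and transition times are assumed purely for illustration:

#include <iostream>

int main() {
    // All times in ps; linear-ramp approximation with assumed values
    double in_start = 0.0,   in_transition = 200.0; // slow input ramp
    double out_start = 40.0, out_transition = 40.0; // output starts later but is much faster

    double in_50  = in_start  + in_transition  / 2.0; // input crosses 50% at 100 ps
    double out_50 = out_start + out_transition / 2.0; // output crosses 50% at 60 ps

    // The output starts after the input (causality holds), yet the delay is negative
    std::cout << "Propagation delay = " << (out_50 - in_50) << " ps" << std::endl;
}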