The Mystery of Monte Carlo Simulation

If you are in the VLSI industry, you must have heard the term “Monte Carlo (MC)” at some point. In this post, let us understand the literal meaning of Monte Carlo simulation and its application in the circuit design field. Going by the Wikipedia definition, “Monte Carlo methods are a broad class of computational algorithms that rely on repeated random sampling to obtain numerical results”. Simplifying the definition, Monte Carlo algorithms introduce random variations within given limits to explore the corner cases of a problem. The problems fed to Monte Carlo algorithms span a wide variety of applications, including risk analysis, finance, statistics, physics, and electronic design.
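To ground the definition before we get to circuits, here is a minimal Monte Carlo run (an illustrative sketch, unrelated to the post's circuit examples): estimating π by repeated random sampling, the textbook case of obtaining a numerical result from random draws.

```python
import random

# Classic Monte Carlo illustration: sample random points in the unit
# square and count how many land inside the quarter circle of radius 1.
# The fraction inside approximates pi/4.
random.seed(0)

def estimate_pi(n_samples):
    inside = sum(1 for _ in range(n_samples)
                 if random.random() ** 2 + random.random() ** 2 <= 1.0)
    return 4.0 * inside / n_samples

print(estimate_pi(100_000))  # approaches 3.14159... as samples grow
```

The more samples we draw, the tighter the estimate gets, which is exactly the trade-off MC circuit runs make between run count and coverage.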

That was pretty much the introduction to Monte Carlo. Now let’s get down to business and talk about WHY and HOW Monte Carlo algorithms are important in VLSI design.

In VLSI circuit design, during simulation we run the design through various PVT (Process, Voltage and Temperature) corners, with the aim that the circuit should operate reliably under all the extreme conditions. These PVT variations can be generalized as,

  1. Temperature: from as low as -40°C to as high as 125°C,
  2. Voltage: ±10% variation from its nominal value
  3. Process: this is generally a two-letter convention where the first letter denotes the behavior of the NMOS and the second letter that of the PMOS. TT, SS, FF, SF and FS are the corners generally used. The letter T stands for Typical (nominal Vt), F for Fast (low Vt) and S for Slow (high Vt).
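The corner sweep implied by the list above is just a cross-product of the three axes. A small sketch (the voltage and temperature values are typical illustrative numbers, not from any particular sign-off spec):

```python
from itertools import product

# Illustrative PVT corner enumeration. Values are typical examples,
# not a real sign-off corner list.
processes = ["TT", "SS", "FF", "SF", "FS"]
voltages = [0.9, 1.0, 1.1]        # nominal Vdd of 1.0 V with +/-10%
temperatures = [-40, 25, 125]     # degrees Celsius

corners = list(product(processes, voltages, temperatures))
print(len(corners))  # 5 processes x 3 voltages x 3 temperatures = 45 corners
```

Even this modest grid yields 45 simulation corners, which is why corner selection and pruning matter in real flows.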

Running the design over different PVT corners covers the environmental variations (voltage and temperature) as well as the manufacturing variations (process). A very common figure to illustrate the process corners is shown here,


Now the million-dollar question is: “Can we guarantee the functionality of silicon across all conditions by simulating the design across PVT corners?” The answer turns out to be NO, and the reason is the manufacturing variation introduced during fabrication of the chip. But didn’t we cover the manufacturing variations in the process corners (TT, SS, FF, FS and SF)? Yes, we did, but that’s not enough. Let me explain!

Now consider this scenario: we have a design with 1000 NMOS and 1000 PMOS transistors. Say we run this design at the FS corner, treating all 1000 NMOS as identically fast and all 1000 PMOS as identically slow. This is not true in real silicon, where no two transistors are identical due to systematic and random variations. So even after running the design across process corners, we are leaving behind the corner case where transistors vary with respect to each other within the same process corner. This is where Monte Carlo pitches in. It introduces randomness into the transistors by shifting each one’s Vt in a different direction, so that all 1000 NMOS/PMOS devices differ at the same time, depicting real silicon behavior.
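The per-device randomization described above can be sketched as follows (the device names, nominal Vt values and sigma are assumptions for illustration, not numbers from a real PDK):

```python
import random

# Illustrative sketch: one Monte Carlo sample gives each transistor its
# own Vt drawn around the corner's nominal value, instead of one shared
# corner Vt. All numbers below are hypothetical.
random.seed(1)

NOMINAL_VT = {"nmos_fast": 0.35, "pmos_slow": -0.55}  # volts, assumed
SIGMA = 0.02                                          # per-device spread, V

def sample_vt(device_type, n_devices):
    """Draw an independent Vt for each device (random mismatch)."""
    nominal = NOMINAL_VT[device_type]
    return [random.gauss(nominal, SIGMA) for _ in range(n_devices)]

nmos_vts = sample_vt("nmos_fast", 1000)
pmos_vts = sample_vt("pmos_slow", 1000)
# Unlike a plain FS corner run, no two devices share exactly the same Vt.
print(min(nmos_vts), max(nmos_vts))
```

Each full set of 2000 draws corresponds to one MC iteration; a Monte Carlo analysis repeats this hundreds or thousands of times.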

Industry-wide, the MC corner files themselves are different from the usual process corner files, because the variations modeled at MC corners differ from those at process corners. Hence, one should avoid the mistake of running MC with process corner files, which will not be able to hit the worst case, for the reasons mentioned above.

Monte Carlo simulations can be done in two ways for any given design: Global Monte and Local Monte. Again, the corner files for these two will be different. Let’s understand what these are:

Global Monte: we can think of this Monte run as unconstrained, in the sense that its variations can span different process corners. Let’s understand this through a figure,


In the figure, each dot represents one Monte Carlo run, and as we can see, each run introduces its own Vt change. The span of variations in this global Monte run stretches across the process corners, as its name suggests: Global MC.

Local MC: this Monte run is constrained to a particular process corner. In general, the first step is to run the design at various PVT corners to find the worst one. The second step is to run Monte on that particular corner, to check functionality at the worst of the worst. Say the worst corner found in the first step was SS; then the Monte variations will look something like this,


This is Local Monte as the scope of variations is limited to a particular corner.
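The global/local distinction above can be sketched in code: a global sample shifts the whole process at once (one shared delta, large enough to cross corners), while a local sample scatters individual devices around one fixed corner. All parameter values are assumptions for illustration:

```python
import random

# Illustrative sketch of global vs local Monte Carlo sampling.
# All numbers are hypothetical, not from a real variation model.
random.seed(7)

VT_NOMINAL = 0.40      # volts, assumed typical NMOS Vt
GLOBAL_SIGMA = 0.05    # process-wide spread (can span corners)
LOCAL_SIGMA = 0.01     # device-to-device mismatch within one corner

def global_mc_sample(n_devices):
    shift = random.gauss(0.0, GLOBAL_SIGMA)   # one shared shift per run
    return [VT_NOMINAL + shift for _ in range(n_devices)]

def local_mc_sample(n_devices, corner_vt):
    # independent delta per device, centered on the chosen corner
    return [corner_vt + random.gauss(0.0, LOCAL_SIGMA)
            for _ in range(n_devices)]

die = global_mc_sample(4)          # all devices move together
ss_run = local_mc_sample(4, 0.45)  # devices scatter around the SS-corner Vt
```

In the global sample every device carries the same shift, so the whole run lands somewhere in the corner cloud; in the local sample the corner is fixed and only the mismatch between devices varies.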

Both methods have their own applications and are used across the industry to emulate silicon behavior during simulation, so as to get working silicon in one go. Undoubtedly, the mysterious Monte Carlo has many flavors, or rather applications, and I hope you can now appreciate its use in one more of them: VLSI circuit design!

VLSI Transistor Basics Interview Question Bank-1

This part of the Interview Question Bank deals with general transistor-level questions asked in various VLSI companies.

Q1. If you connect the input of an inverter to its output, where will the output settle?

Ans. The output will settle at the logic (switching) threshold of the inverter, which is ideally VDD/2.

Q2. Does the Vt of the transistor increase or decrease with temperature?

Ans. The Vt of the transistor decreases with temperature.

Q3. According to the saturation current equation, the current through the transistor should increase as its Vt decreases, but that is not the case in practice. What is the reason?

Ans. The reason is the mobility of the charge carriers: it is the more prominent factor in the ON-current equation, and it decreases with temperature, outweighing the drop in Vt.
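This competition between the two effects can be made concrete with a back-of-the-envelope calculation (all constants below are assumed, illustrative values, not from any particular process): mobility falls roughly as T^-1.5 while Vt drops on the order of 1 mV/K, and at a healthy overdrive the mobility loss wins.

```python
# Illustrative back-of-the-envelope model (all constants are assumptions):
# Id ~ mu(T) * (Vgs - Vt(T))^2 in saturation, with mu(T) ~ T^-1.5 and
# Vt(T) dropping ~1 mV/K as temperature rises.
T_COLD, T_HOT = 298.0, 398.0   # kelvin (roughly 25 C and 125 C)
VGS, VT0 = 1.8, 0.5            # volts at T_COLD, hypothetical
DVT_DT = -1e-3                 # V/K, typical order of magnitude

def drive_current(T):
    mu = (T / T_COLD) ** -1.5            # mobility degradation with T
    vt = VT0 + DVT_DT * (T - T_COLD)     # Vt decreases when hot
    return mu * (VGS - vt) ** 2          # arbitrary units

# Vt drops at 125 C (helps current), but mobility drops more (hurts it).
print(drive_current(T_HOT) / drive_current(T_COLD))  # < 1: net current falls
```

Note that at very low overdrive the two effects can cancel (the zero-temperature-coefficient bias point); the numbers above assume a strongly-driven device.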

Q4. What is channel length modulation and how does it occur?

Ans. Channel length modulation is the effective shortening of the channel after pinch-off occurs; it causes the saturation current to increase roughly linearly with Vds. It occurs because the depletion region between the drain and the bulk widens as Vds increases, eating into the channel length.

Q5. What are short channel effects? Is channel length modulation also a short channel effect?

Ans. In short-channel technologies, the vertical electric field loses complete control over the channel; that is, the gate loses control as the horizontal (drain) electric field starts interfering with channel formation. The most prominent effects are subthreshold leakage and velocity saturation. No, channel length modulation is not a short channel effect, as it occurs after the pinch-off point, while short channel effects mostly act before or during channel formation.

Q6. How does the Vt of a transistor vary with temperature and doping, and why?

Ans. The Vt of the transistor decreases as temperature increases, because the minority carrier concentration increases. The Vt increases with increased substrate doping, because more majority carriers need to be pushed out of the bulk to create the depletion region before channel formation, i.e., a larger depletion charge must be supported.

Q7. In which region of operation is the gate-to-bulk capacitance of an NMOS transistor maximum?

Ans. The gate-to-bulk capacitance is maximum in the cutoff region of the transistor.

Q8. Consider a 3-terminal NMOS device with threshold voltage Vt. If a supply of Vdd is connected to its gate, and Vdd is also connected to one of the remaining terminals, in which region does the NMOS operate, and what is the voltage at the remaining terminal?

Ans. The transistor works in the saturation region, and since an NMOS is a weak pull-up, the output voltage will be Vdd − Vt.

Q9. Continuing the question above, if we connect another NMOS with its drain connected to the output of the previous NMOS and its gate connected to Vdd, in which region does this NMOS operate and what is the output voltage?

Ans. The second transistor also works in the saturation region, and the output voltage remains Vdd − Vt. For the first transistor, the drain is tied to Vdd, making its Vds equal to Vdd − (Vdd − Vt) = Vt, which is greater than Vgs − Vt. The second transistor can pass voltage up to the point where its Vgs = Vt, which means it can pass the full Vdd − Vt potential present at its drain; at that point Vds = Vgs − Vt, so it too sits at the edge of saturation.

Q10. If Vsb, the source-to-bulk voltage difference, is increased in an NMOS, how does Vt vary and why?

Ans. The Vt of the transistor increases, because the depletion region around the source-bulk p-n junction widens, and it takes more gate voltage to offset that extra depletion charge. This is known as the body effect.
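This body effect is captured by the standard long-channel expression Vt = Vt0 + γ(√(2φF + Vsb) − √(2φF)). A small numeric sketch (the parameter values below are assumed, illustrative numbers):

```python
import math

# Body-effect sketch:
#   Vt = Vt0 + gamma * (sqrt(2*phi_F + Vsb) - sqrt(2*phi_F))
# Parameter values below are assumed, illustrative numbers.
VT0 = 0.40      # zero-bias threshold voltage, volts
GAMMA = 0.40    # body-effect coefficient, V^0.5
PHI_F2 = 0.70   # 2 * phi_F (surface potential term), volts

def vt_with_body_effect(vsb):
    return VT0 + GAMMA * (math.sqrt(PHI_F2 + vsb) - math.sqrt(PHI_F2))

for vsb in (0.0, 0.5, 1.0):
    print(f"Vsb = {vsb:.1f} V -> Vt = {vt_with_body_effect(vsb):.3f} V")
```

With these numbers, Vt rises monotonically with Vsb, matching the answer above: more reverse body bias means a higher threshold.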

180 nm, 90 nm, 45 nm… What’s the difference?

Many of you might have worked on different VLSI technology nodes such as 180 nm, 90 nm, 45 nm etc. in circuit simulation tools like Cadence. With the invention and evolution of transistors, various technologies came into existence, and more will continue to come in the future. According to Moore’s law, the number of transistors will continue to double roughly every 1.5 years. That means the same silicon area must accommodate more and more transistors, so transistor size is gradually reduced: we say the design shifts from one technology node to a smaller node through the scaling process. This shifting of technology nodes has helped leading players in the semiconductor industry like Intel, IBM, AMD and Texas Instruments come up with ever more innovative and powerful products. A particular technology gets used by the industry for a span of time, until the next feasible smaller technology node is ready for implementation. For example, 180 nm technology was used by most of them in the 1999-2000 time-frame, while 90 nm was used in 2004-2005.

You might find the post PMOS is no longer the Culprit interesting

Seeing the above technological evolution, and having worked on these nodes, have you ever tried to answer the question: what exactly is the difference between these technologies used in VLSI?

Starting with the main difference between the technologies – 180 nm, 90 nm etc. – the numbers represent the minimum feature size of the transistor (PMOS or NMOS). The minimum feature size determines, during the fabrication process, how closely transistors can be placed on a chip. The smaller this size, the larger the number of transistors that can be fabricated on the chip. For example, suppose separate chips are designed using 180 nm and 90 nm transistors. The number of 90 nm transistors that can be placed on a given area of the chip would be roughly four times the number of 180 nm ones on the same silicon area, since both dimensions halve.

The above can also be understood from the fact that the numbers 180 nm, 90 nm etc. represent the minimum channel length that can be used in fabrication. Also, these numbers aren’t randomly assigned, but are obtained by dividing the previous number by the square root of 2: scaling both dimensions by 1/√2 halves the area of a transistor, doubling the number that fit on the same silicon. For example, the technology node after 180 nm was 180 divided by the square root of 2, which comes out to nearly 130 nm. Likewise, the next after 130 nm is 130 divided by the square root of 2, which is approximately 90 nm, and so on. Different technologies are in use today, and the transistor size keeps shrinking to lower the cost of producing a chip: the smaller the chip, the cheaper it is to make. In the year 2016, the technology is expected to come down to around 11 nm, and is estimated to reach about 4 nm by the year 2020!
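The divide-by-√2 progression described above can be generated directly; the real node names are rounded marketing numbers, so the computed values only approximately track the familiar ladder:

```python
import math

# Sketch of the "divide by sqrt(2)" node progression: each step scales
# the linear dimension by 1/sqrt(2), halving the transistor area.
node = 180.0
sequence = [node]
for _ in range(6):
    node /= math.sqrt(2)
    sequence.append(node)

print([round(n) for n in sequence])
# Roughly tracks the familiar 180, 130, 90, 65, 45, 32, 22 nm ladder.
```

Note that two consecutive steps divide the length by exactly 2 (and the area by 4), which is why 180 nm maps to 90 nm, and 90 nm to 45 nm.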

Gautam Vashisht

SETUP Time and SETUP Violation in a Single D Latch

The setup and hold time concept is one of the fundamental concepts necessary for closing and analysing timing margins. The analysis in the digital domain, in reg-to-reg systems, is very popular, but the root cause of setup and hold time is often not covered in the education system. This post elaborates the cause of setup analysis in a single D latch, taking the transistor-level schematic into account. I will also try to explain the points where setup and hold time are measured, why we measure them there, and why we cannot measure them at any other points.

Fig. 1 displays a transistor-level diagram of a simple D latch. D is the input; I1 and I2 are the inverters in the data path of the latch; T1 is the forward-path transmission gate and T2 is the feedback-path transmission gate; L_I_1 and L_I_2 are the cross-coupled latch inverters. The latch is controlled by the signal CK.


Fig.1 A single D Latch

Fig. 2 shows the CK signal used; CLK_delay is the delay between the rising edge and the falling edge of the CK and CK_bar signals.


Fig. 2 Clock signals used.

Setup Analysis

1. Setup time is the minimum time required for the data to settle before the latching edge of the clock; in this case, the rising edge.

2. The requirement for setup time arises from the fact that the latching action is performed by the cross-coupled inverters L_I_1 and L_I_2. The latch is bi-stable, which means it is stable at exactly two points, (0,1) or (1,0); if the latch sits at any logic level in between, it can resolve in either direction. So, for the safest operation, the logic at point B should be the same as the logic at point C. This means any change in data should propagate to point C before the latching action of the latch begins, that is, before T2 closes (starts conducting) and T1 opens (switches off).

3. Moreover, as the latching edge of the clock arrives (here, the rising edge), the corresponding transistors of T1 begin switching off and T2 begins conducting. If the changed data has not propagated in time, the output degrades and the latch fails to sample it, latching the previous data instead, which is a functional failure.

Fig. 3 and Fig. 4 display the simulation of a setup violation in the D latch. The blue line is the clock edge; the red one is the data, which across multiple simulations is pushed closer to the clock edge; and the green one is the output data, which is the inverse of the input data due to the circuit chosen.


Fig.3 Setup Simulation and Violation (A)


Fig.4 Setup Simulation and Violation (B)

4. As can be clearly seen, as we push the data towards the clock edge, the output degrades: it tries to reach logic 0 but reverts to logic 1, because the latching action overcomes the changed data, which was not able to propagate to point C, and the previous data residing at C takes over.

5. Going by this explanation, if this clock is the same as the external clock, then at a basic level we can say that for the surest operation, the setup time is the data delay from point D to point C, commonly denoted as the MAX data delay.

6. Different measurement standards calculate setup time differently; some industry parameters measure it as the minimum time before the clock edge at which the output degradation stays within 5% of its relaxed value.

For a basic understanding, we can conclude that for sure operation the data should reach point C before the latching action kicks in, and this delay is the SETUP time of the latch. If someone violates it, the output data will start degrading; it is then a design requirement to decide how much degradation is allowed.
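In practice, setup time is characterized by repeatedly pushing the data edge toward the clock edge until the latch fails, usually via a bisection search over the data-to-clock gap. A hedged sketch of that flow (the pass/fail function below is a stand-in model, not a real simulator call):

```python
# Bisection sketch for setup-time characterization. In a real flow,
# latch_passes() would launch a circuit simulation; here it is a stand-in
# model that fails once the data edge is closer than a hidden "true"
# setup time (a hypothetical number, chosen for illustration).
TRUE_SETUP_PS = 37.0   # picoseconds, hypothetical

def latch_passes(data_to_clock_ps):
    """Stand-in for a simulation: pass iff data arrives early enough."""
    return data_to_clock_ps >= TRUE_SETUP_PS

def find_setup_time(lo_ps=0.0, hi_ps=200.0, tol_ps=0.1):
    """Bisect between a failing (lo) and a passing (hi) data-to-clock gap."""
    while hi_ps - lo_ps > tol_ps:
        mid = (lo_ps + hi_ps) / 2
        if latch_passes(mid):
            hi_ps = mid     # still passes: tighten from the passing side
        else:
            lo_ps = mid     # fails: setup time is larger than mid
    return hi_ps            # smallest passing gap found

print(round(find_setup_time(), 1))
```

Real characterization replaces the hard pass/fail with a degradation criterion like the 5% figure mentioned above, but the search structure is the same.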

I hope this answers your queries. Keep following for more posts on setup and hold. Please post your queries in the comments or start a thread in the discussion forum.

PMOS is no longer the Culprit

MOS scaling has introduced many undesired effects; well-known second-order ones are channel length modulation, velocity saturation, mobility degradation, etc. These introduce a new set of challenges for designers. From the performance perspective, supply voltage scaling has reduced the driving capability of the MOS due to the decrease in effective overdrive voltage. On the other hand, the process has made it worse by introducing heavy mobility degradation, which in turn reduces the MOS drive current. So, with every new node, the MOS is getting weaker and weaker, while at the same time it is expected to deliver higher performance than the last generation. SEEMS TOUGH!!!

Have you ever thought about how we are able to do that?

We have a savior here, and its name is “process-induced strained silicon”, something that is actively used in sub-100nm technologies. It seems like purely process-related stuff, so why should we, as designers, care about it? Let me explain, but first let’s talk about this savior in very simple terms.

This is a process technique to enhance the mobility of the charge carriers in the MOS, done by applying a high-stress film across the channel during fabrication. Within the same process flow on the wafer, compressive strain is applied to P-type devices and tensile strain to N-type devices, which improves both hole and electron mobility. Let’s not dig deep into the process details and the physics behind it. But this step, somehow, favors holes more than electrons, and hence the improvement in hole mobility is higher than that in electron mobility.

Now that we have talked about our savior, let’s see the impact of this mobility improvement. The first observation, obviously, is that the driving capability of both NMOS and PMOS improves. But here is the catch: the driving capability of the PMOS improves by a larger factor than that of the NMOS. This leads to a stronger PMOS with every coming generation. To see the effect, we can plot the inverter characteristic for different beta-ratios (PMOS-to-NMOS W/L ratio) and observe the inverter’s switching threshold. Needless to say, the beta-ratio that results in an inverter switching threshold of Vdd/2 indicates the exact difference in strength between PMOS and NMOS for a particular technology. The following figure shows the inverter DC characteristic for beta-ratios of 2:1 and 1:1 in a 16nm technology. Surprisingly, the conventionally used 2:1 beta-ratio appears skewed in this case, while the 1:1 ratio has a switching threshold of Vdd/2. So, for 16nm, NMOS and PMOS are of the same strength.
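For a long-channel square-law model, the switching threshold has a closed form, which lets us see the beta-ratio effect numerically. A sketch (the supply and threshold values are assumptions for illustration, not 16nm numbers):

```python
import math

# Long-channel inverter switching threshold:
#   VM = (VDD - |Vtp| + Vtn * sqrt(r)) / (1 + sqrt(r)),  with r = kn/kp.
# kp scales with the beta-ratio (PMOS width / NMOS width, equal lengths).
# All parameter values below are assumed, illustrative numbers.
VDD, VTN, VTP = 1.0, 0.25, 0.25   # volts

def switching_threshold(beta_ratio, mobility_ratio):
    """beta_ratio: Wp/Wn; mobility_ratio: electron-to-hole mobility ratio."""
    r = mobility_ratio / beta_ratio   # kn/kp
    s = math.sqrt(r)
    return (VDD - VTP + VTN * s) / (1 + s)

# Older node: holes ~2x slower, so a 2:1 beta-ratio centers VM at VDD/2.
print(round(switching_threshold(2.0, 2.0), 3))   # 0.5
# Equal-strength devices (the strained-silicon case) need only 1:1.
print(round(switching_threshold(1.0, 1.0), 3))   # 0.5
# But 2:1 with equal-strength devices skews VM above VDD/2.
print(round(switching_threshold(2.0, 1.0), 3))   # ~0.543
```

The last line reproduces the skew the figure shows for the 2:1 ratio: once PMOS and NMOS are equally strong, doubling the PMOS width pushes the switching threshold above Vdd/2.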

Switching of Inverter for Different Beta-ratios

Moreover, if we intend to build an inverter with a switching threshold of Vdd/2 across different technologies, the beta-ratio would look something like this:

Technology Node    Beta-ratio
65nm               2.1:1
40nm               1.66:1
28nm               1.3:1
16nm               1:1
7nm                0.9:1

Just look at the beta-ratio in the 7nm row: it says that for an inverter with a Vdd/2 switching threshold, the PMOS will be smaller than the NMOS. This is contrary to what we have been taught in books and in class. In previous generations, the PMOS was sized larger to compensate for the mobility imbalance, occupying the larger area. But now PMOS is no longer the culprit!!!


Contributed by: Jitendra Yadav