
Virtual clock - purpose and timing

What is a virtual clock: By definition, a virtual clock is a clock without any source. Stated more clearly, a virtual clock is a clock that has been defined but has not been associated with any pin/port. A virtual clock is used as a reference to constrain the interface pins: the arrival times at input/output ports are related to it with the help of input and output delays.

How to define a virtual clock: The simplest SDC command to define a virtual clock is as follows:
                create_clock -name VCLK -period 10
The above SDC command defines a virtual clock “VCLK” with a period of 10 ns.

Purpose of defining a virtual clock: The advantage of defining a virtual clock is that we can specify the desired latency for it. As mentioned above, a virtual clock is used to time interface paths. Figure 1 shows a scenario where it helps to define a virtual clock. Reg-A is a flop inside the block that sends data out through PORT. Since it is a synchronous signal, we can assume it is captured by a flop (Reg-B) sitting outside the block. Now, within the block, the path to PORT can be timed by specifying an output delay for this port with respect to a clock synchronous to clock_in. We could specify the delay with respect to clock_in itself, but then there lies the difficulty of specifying the clock latency: if we specify a latency for clock_in, it will be applied to Reg-A as well. Applying the output delay with respect to a real clock causes input ports to get relaxed and output ports to get tightened after the clock tree has been built. Let us elaborate this in some detail below. Assume the clock period is 10 ns and the budget allocated inside the block is 3 ns; thus, the "set_output_delay" value is 7 ns.
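
A minimal SDC sketch of this setup is given below. It is only a sketch: the assumption that the real clock (call it RCLK) is defined on the clock_in port with the same 10 ns period is illustrative, not taken from the figure.
                # Real clock of the block, assumed to be defined on port clock_in
                create_clock -name RCLK -period 10 [get_ports clock_in]
                # Virtual clock with the same period, not tied to any pin/port
                create_clock -name VCLK -period 10
The two cases below differ only in which of these clocks the 7 ns output delay is specified against.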



Figure 1: Scenario illustrating the use of a virtual clock

Case 1: Applying "set_output_delay" with respect to the real clock (RCLK)
Pre-CTS scenario: Here, if we apply any latency to the clock, it is applied to the launch as well as the capture register (the capture register is imaginary here). So we ultimately get a full cycle to time the path. In other words, whether or not we apply a latency to the clock, the path is timed as intended.
Post-CTS scenario: Post-CTS, we need to apply "set_propagated_clock RCLK" for clock latencies to come into effect. Doing so, the launch register's actual clock latency comes into the picture. However, since the capture register is imaginary, no clock tree is built to it and its latency remains zero. So we get (clock_period - RCLK_latency) as the actual window available to time the path. Thus, the timing path gets tightened by RCLK_latency, as sketched below.
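
A rough sketch of Case 1 follows; the 1 ns insertion delay is an assumed value used only to illustrate the arithmetic.
                # Case 1: output delay specified against the real clock
                set_output_delay -clock RCLK 7 [get_ports PORT]
                # Post-CTS, switch to propagated (actual) clock latencies
                set_propagated_clock [get_clocks RCLK]
                # If Reg-A's clock insertion delay comes out to 1 ns:
                #   pre-CTS : 10 - 7     = 3 ns available for the internal path
                #   post-CTS: 10 - 7 - 1 = 2 ns available, tightened by RCLK latency
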
Case 2: Applying "set_output_delay" with respect to the virtual clock (VCLK)
Pre-CTS scenario: In this case, in order to provide a full cycle for the path to be timed, any latency applied to RCLK must also be applied to VCLK.
Post-CTS scenario: After the clock tree is built and clocks are propagated, the network latency of RCLK is overridden by its actual latency. VCLK, however, is not propagated, and its source and network latencies still reflect what was applied in the constraints. If (VCLK_source_latency + VCLK_network_latency_user) equals (RCLK_source_latency + RCLK_network_latency_CTS), we still see the same timing path as we saw pre-CTS.
Thus, the solution to the problem is to define a virtual clock and apply the output delay with respect to it. Making the source latency of the virtual clock equal to the post-CTS network latency of the real clock solves the problem, as sketched below.
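
A sketch of the Case 2 constraints is given below; the 1 ns source latency is an assumed placeholder that should be set to match the real clock's post-CTS network latency.
                # Case 2: output delay specified against the virtual clock
                set_output_delay -clock VCLK 7 [get_ports PORT]
                # Model VCLK's latency to track RCLK's post-CTS insertion delay
                # (1 ns assumed here purely for illustration)
                set_clock_latency -source 1.0 [get_clocks VCLK]
                # RCLK is propagated after CTS, while VCLK stays as constrained,
                # so the path continues to see the intended 3 ns internal budget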

Can you think of any other method that can serve the purpose of a virtual clock?