Virtual clock - purpose and timing

What is a virtual clock: By definition, a virtual clock is a clock without any source; in other words, a clock that has been defined but has not been associated with any pin or port. A virtual clock is used as a reference to constrain the interface pins: the arrivals at input/output ports are related to it with the help of input and output delays.

How to define a virtual clock: The simplest SDC command to define a virtual clock is as follows:
                create_clock -name VCLK -period 10
The above SDC command defines a virtual clock “VCLK” with a period of 10 ns.

Purpose of defining a virtual clock: The advantage of defining a virtual clock is that we can specify the desired latency for it independently. As mentioned above, a virtual clock is used to time interface paths. Figure 1 shows a scenario where it helps to define a virtual clock. Reg-A is a flop inside the block that sends data out of the block through PORT. Since it is a synchronous signal, we can assume it is captured by a flop (Reg-B) sitting outside the block. Now, within the block, the path to PORT can be timed by specifying an output delay for this port with respect to a clock synchronous to clock_in. We could specify the delay with respect to clock_in itself, but then there is the difficulty of specifying the clock latency: any latency specified for clock_in gets applied to Reg-A as well. Applying the output delay with respect to a real clock causes input ports to get relaxed and output ports to get tightened after the clock tree has been built. Let us elaborate on this in some detail below. Assume the clock period to be 10 ns and the budget allocated inside the block to be 3 ns; thus, we apply a "set_output_delay" of 7 ns.
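
To make the numbers concrete, the constraints for this scenario might look like the sketch below (clock and port names follow Figure 1; the values are the ones assumed above):
                # real clock entering the block through port clock_in, 10 ns period
                create_clock -name RCLK -period 10 [get_ports clock_in]
                # 7 ns of the cycle is budgeted outside the block, leaving 3 ns for Reg-A -> PORT
                set_output_delay -clock RCLK 7 [get_ports PORT]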



Figure 1: Scenario illustrating the use of a virtual clock

Case 1: Applying "set_output_delay" with respect to the real clock (RCLK)
Pre-CTS scenario: Here, if we apply any latency to the clock, it is applied both to the launch as well as the capture register (the capture register is imaginary here). So, we ultimately get a full cycle to time the path. In other words, whether or not we apply a latency to the clock, the path gets timed as intended.
Post-CTS scenario: Post-CTS, we need to apply "set_propagated_clock" to RCLK in order for actual clock latencies to come into effect. Doing so, the launch register's actual clock latency comes into the picture. However, since the capture register is imaginary, there is no clock tree built to it and its latency remains zero. So, we get (clock_period - RCLK_latency) as the actual phase shift to time the path. Thus, the timing path gets tightened by RCLK_latency.
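A rough sketch of what happens in Case 1 post-CTS (the 1 ns latency is an assumed value, just to show the arithmetic):
                # post-CTS: switch RCLK from ideal to propagated mode
                set_propagated_clock [get_clocks RCLK]
                # if Reg-A now sees, say, 1 ns of actual clock latency while the imaginary
                # capture clock still has zero latency, the internal path budget shrinks
                # from (10 - 7) = 3 ns to (10 - 7 - 1) = 2 ns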
Case 2: Applying "set_output_delay" with respect to the virtual clock (VCLK)
Pre-CTS scenario: In this case, in order to provide a full cycle for the path to be timed, if we have applied any latency to RCLK, we will have to apply the same latency to VCLK as well.
Post-CTS scenario: After the clock tree is built and clocks are propagated, the network latency of RCLK will be overridden by the actual latency. But VCLK will not be propagated, and its source + network latencies will still be reflected as applied in the constraints. If (VCLK_source_latency + VCLK_network_latency_user) is equal to (RCLK_source_latency + RCLK_network_latency_CTS), we will still see the same timing path as we see pre-CTS.
Thus, the solution to the problem is to define a virtual clock and apply the output delay with respect to it. Making the total (source + network) latency of the virtual clock equal to the actual latency of the real clock will solve the problem.
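A minimal constraint sketch for Case 2 is given below (the 1 ns source latency is an assumed value standing in for the real clock's post-CTS latency):
                # virtual clock with the same period as the real clock
                create_clock -name VCLK -period 10
                # real clock on the block's clock input port
                create_clock -name RCLK -period 10 [get_ports clock_in]
                # model the external capture flop's clock latency on the virtual clock;
                # keep this equal to RCLK's actual post-CTS latency (assumed 1 ns here)
                set_clock_latency -source 1 [get_clocks VCLK]
                # time the output interface path against the virtual clock instead of RCLK
                set_output_delay -clock VCLK 7 [get_ports PORT]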

Can you think of any other method that can serve the purpose of a virtual clock?

Comments:

  1. "Applying output delay with respect to a real clock causes input ports to get relaxed and output ports to get tightened after clock tree has been built."
    Could you please explain why?

    1. Hi Anastasiia

      I have modified the contents to be a little more elaborate. Please go through them. We can discuss in case there are queries.

  2. Source latency is from the origin of the clock to its definition point, and network latency is from the definition point to the register clock pin. Then how do they become equal in this concept?

    1. Hi

      They do not become equal by themselves. We have to make them equal in case we want to see the same timing pre-clock-tree and post-clock-tree. :-) There is an "if" at the start of that line.

  3. If pre-CTS we don't apply any latency to either of them, then post-CTS we can apply what is called "clock network delay (propagated)" in PT (i.e. the network + source latency of the real clock) as the source latency of the virtual clock.
    What about uncertainty, if we apply it to the real clock and not to the virtual one?

    1. Hi

      We need to understand what the uncertainty is for. The most common purpose of uncertainty is stage-based margins. If we don't apply uncertainty to virtual clocks, then we assume that we don't need stage margins for timing paths formed with respect to virtual clocks, which is actually not true. So, we should be applying uncertainty with respect to virtual clocks as well; however, the magnitude of uncertainty should be smaller, as there are fewer elements in I/O paths than in reg2reg paths.
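
      For instance, something like the following could model that smaller margin (the values are only assumptions):
          # margin for internal reg2reg paths (assumed value)
          set_clock_uncertainty -setup 0.5 [get_clocks RCLK]
          # smaller margin for interface paths timed against the virtual clock
          set_clock_uncertainty -setup 0.3 [get_clocks VCLK]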

    2. In the post-CTS stage I applied a source latency on the virtual clock equal to the "clock network delay (propagated)", and now the output delays are OK. Do you think the same must be done for the input delays?
      I mean, to apply a source latency on the virtual clock equal to the clock network delay (propagated) of the real clock?

    3. Yes, but the virtual clock is common to output delays as well as input delays, so I think you have already done this. If you don't apply the source latency of the virtual clock, output timing becomes tight at block level and input timing becomes relaxed. Thus, timing paths from input ports will become under-optimized due to area and power recovery. This will then pose an issue once you plug your block into the top level, which will see the timing paths through the input ports of the block violating due to incorrect modeling at block level.
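
      For the input side, the same idea applies; a minimal sketch (the port name data_in and the values are assumptions):
          # input arrival relative to the virtual clock, e.g. 6 ns consumed outside the block
          set_input_delay -clock VCLK 6 [get_ports data_in]
          # the same source latency on VCLK then keeps input paths consistent post-CTS
          set_clock_latency -source 1 [get_clocks VCLK]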

    4. If reg2out is violated and in2reg isn't violated, then when I apply it to both input and output ports, the reg2out violation will be solved, but in2reg will become violated. Is my setting right?

    5. It means that you will need to reduce the delays for either the input or the output ports. For any latency you set, the sum of slacks of the input ports and output ports should remain greater than 0.


Thanks for your valuable inputs/feedback. :-)