science and technology Hold violation at post P&R simulation By feedproxy.google.com Published On :: Mon, 08 Oct 2012 04:28:27 GMT Hello, I am working on a digital design. The functional, post-synthesis, and post-P&R (without IO pads) simulations are all working fine, i.e., functionally correct and with clean timing reports (no setup/hold violations). I just added the IO pads to the same design; I had to change the timing constraints a bit for synthesis, but I have a clean design in SOC Encounter, i.e., clean DRC and clean timing reports (no setup/hold violations). However, when I perform simulation using the netlist exported from SOC Encounter together with the SDF exported from the same tool, I get a lot of hold violations, and consequently the design is not functioning. Why does this happen, and how can I troubleshoot this issue? Waiting for your feedback and comments. Regards. Full Article
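A common first check (a hedged sketch, not a confirmed diagnosis for this particular design): flops in standard-cell libraries often carry negative hold limits in the SDF, and if the simulator zeroes them out instead of honoring them, gate-level simulation reports hold violations that STA does not. The options below follow the NC/irun conventions; the file and scope names are illustrative:

    # annotate the Encounter-exported SDF and honor negative setup/hold limits
    irun tb.v netlist.v -sdf_cmd_file sdf.cmd +neg_tchk

    // sdf.cmd (illustrative scope)
    SDF_FILE = "design.sdf",
    SCOPE = tb.dut;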
science and technology Kitchen Design Manchester By feedproxy.google.com Published On :: Tue, 30 Jul 2013 13:09:55 GMT Try looking at www.solidwoodkitchen.co.uk. They have some amazing designs and prices. Kitchen Design Manchester Full Article
science and technology memory leak in ncsim By feedproxy.google.com Published On :: Fri, 16 Aug 2013 06:32:51 GMT ncsim will consume an increasing amount of memory when a function has an output port that returns an associative array which was not initialized. My simulator version is 12.10-s011. Below is a code example to reproduce the failure. The code is inside a class (uvm_object):

    function void a_function(output bit ret_val[int]);
        // empty
    endfunction : a_function

Each time the call is made, a small amount of memory is allocated. In my case I call this function several (millions of) times during simulation, and then I can see the memory leaking. Full Article
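A minimal workaround sketch (an assumption, not a confirmed fix for this tool version): explicitly clearing the associative array before returning, so the output argument is never left uninitialized:

    function void a_function(output bit ret_val[int]);
        ret_val.delete(); // make the output a well-defined, empty associative array
    endfunction : a_function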
science and technology Specman Mode for Emacs By feedproxy.google.com Published On :: Tue, 11 Feb 2014 13:16:39 GMT Attached is the latest emacs mode for e/Specman - version 1.23. Please follow the install instructions in the top section of the actual file (after unzipping it) to install/load this package with your emacs. Full Article
science and technology help with automating adding CLP files to DRA files By feedproxy.google.com Published On :: Thu, 12 Jun 2014 16:50:37 GMT Question for the forum: I'm currently working on code to automatically add CLP files to DRA files and then add two classes called "APPROVED" and "CLP". To do this manually, you have to open a DRA file, click File > Import > Sub-Drawing, and choose the CLP file with the same name as the DRA (the path is already set). You then place the CLP at position x 0 0, and then click Set Up > Subclasses > Package Geometry and type in "Approved" and "Clp". So far we've recorded macros in Allegro for all of these actions. The macros correspond to one specific file name, and we want to apply this to numerous files. To do this we created a Python program that locates all of the specified CLP and DRA files and, if they have a matching name, runs a for loop that puts each file name into a stored variable and processes each file. We converted this script into batch and then added a function that we thought would run Allegro macros from batch. In order to get the script working, we need an Allegro batch command that will run the script without opening the Allegro start popup, or that closes the popup when it appears. We need this to run any script from starting Allegro. I've done a similar program in batch where I made a for loop for each DRA file, and within the loop a batch a2dxf command converted all DRA files to DXF files. Is there a similar batch command for adding CLP files at position 0 0 and/or adding classes? If anyone has done something similar, please let me know! Thank you very much for the help. Jen Full Article
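A minimal sketch of the name-matching loop described above (Python; the macro name and the Allegro invocation are assumptions for illustration, so check your Allegro release's documentation for the exact batch/script options and how to suppress the start popup):

    import os
    import subprocess

    work_dir = "."  # assumed directory holding both .dra and .clp files
    clp_names = {os.path.splitext(f)[0] for f in os.listdir(work_dir)
                 if f.lower().endswith(".clp")}

    for f in os.listdir(work_dir):
        name, ext = os.path.splitext(f)
        if ext.lower() == ".dra" and name in clp_names:
            # "import_clp.scr" is a hypothetical recorded macro that imports
            # <name>.clp at x 0 0 and adds the APPROVED/CLP subclasses
            subprocess.run(["allegro", "-s", "import_clp.scr", f])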
science and technology Shell Script rc By feedproxy.google.com Published On :: Wed, 02 Jul 2014 13:36:49 GMT Dear all, I want to automate the synthesis of 100 IP cores, so I wrote a small script (see below), but I am stuck on rc. Put simply, rc does not seem to like being called from a shell script. To make it work I need to cd into each folder, call rc, and then run source synth.tcl. Please, does anyone know how I can get around this problem? Thank you so much.

    foreach i (*.C)
        cd $i
        rc
        source synth.tcl
        cd ..
    end

Full Article
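A hedged sketch of one likely fix (assuming rc accepts a batch script on its command line; check rc -help for the exact option in your release): in the loop above, rc starts its own interactive shell, so the following source synth.tcl line is executed by csh rather than by rc. Handing the script to rc directly avoids that:

    foreach i (*.C)
        cd $i
        rc -f synth.tcl    # let rc run the Tcl script itself
        cd ..
    end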
science and technology help for $sscanf By feedproxy.google.com Published On :: Fri, 22 Apr 2016 04:03:22 GMT Can anyone tell me how I can suppress a few strings or integers while reading with $sscanf? I read a line from a file into a string. There are a few strings and integers separated by white space in the line. I am interested in one string which comes at position 5 in the line. How can I suppress all the other strings and integers with $sscanf? I tried the following syntax but it didn't work: $sscanf(line,"* * * * %s",string_arg); I am trying to suppress the first 4 integers/strings in the line. Full Article
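A hedged sketch using C-style assignment suppression, which SystemVerilog implementations of $sscanf generally accept (verify on your simulator); %*s reads a whitespace-delimited field and discards it. The variable names follow the post:

    string line, string_arg;
    int n;
    // skip the first four whitespace-separated fields, keep the fifth
    n = $sscanf(line, "%*s %*s %*s %*s %s", string_arg);
    if (n == 1)
        $display("field 5 = %s", string_arg);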
science and technology Creating transition coverage bins using a queue or dynamically By feedproxy.google.com Published On :: Sun, 21 Apr 2019 01:31:28 GMT I want to write transition coverage on an enumeration. One part of that transition is a queue of the enum, which I construct in my constructor. Considering the example below, how would one go about it? In my coverage bin I can create a range like this: A => [queue1Enum[0]:queue1Enum[$]] => [queue2Enum[0]:queue2Enum[$]]. But then I only get the first and last elements.

    typedef enum { red, d_green, d_blue, e_yellow, e_white, e_black } Colors;

    Colors dColors[$];
    Colors eColors[$];
    Colors Lcolors;

    Lcolors = Colors.first();
    do begin
        if (Lcolors.name()[0] == "d")
            dColors.push_back(Lcolors);
        if (Lcolors.name()[0] == "e")
            eColors.push_back(Lcolors);
        Lcolors = Lcolors.next();
    end while (Lcolors != Lcolors.first());

    covergroup cgTest with function sample(Colors c);
        cpTran : coverpoint c {
            bins t[] = (red => dColors => eColors);
        }
    endgroup

bins t[] should come out like this: (red => d_blue, d_green => e_yellow, e_white). Full Article
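Since the transition-bin range form above only keeps a queue's first and last elements, one hedged alternative sketch (not the only approach; array-valued bins follow IEEE 1800-2012, so confirm your simulator supports them) is to sample the previous and current values and cover the pair as a cross, with the queues supplying the value bins:

    covergroup cgTran with function sample(Colors prev, Colors curr);
        cpPrev : coverpoint prev { bins d_vals[] = dColors; }
        cpCurr : coverpoint curr { bins e_vals[] = eColors; }
        xTran  : cross cpPrev, cpCurr;  // covers every d_* -> e_* pair
    endgroup

    cgTran cg_tran = new();
    Colors last_c = red;

    // pass the previous and current value on every sample
    function void do_sample(Colors c);
        cg_tran.sample(last_c, c);
        last_c = c;
    endfunction

This does not capture the leading red the way (red => ... => ...) would; covering red => d_* needs a second cross or a three-way sample, so treat this as a starting point.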
science and technology Creating cover items for sparse values/queue or define in specman By feedproxy.google.com Published On :: Fri, 12 Jul 2019 17:51:31 GMT Hello, I have a question. I want to create a cover that consists of sparse, pre-computed values (a list or define), for example l = {1; 4; 7; 9; 2048; 700}. I'd like to cover that the data (a uint(bits:16)) had those values. Any suggestion on how to achieve this? I'd prefer to stay away from macros and avoid writing a lot of code.

    struct inst {
        data   : uint(bits:16);
        opcode : uint(bits:16);
        !valid_data : list of uint(bits:16) = {0; 12; 10; 700; 890; 293};
        event data_e;
        event opcode_e;
        cover data_e is {
            item data using radix = HEX, ranges = {
                // I don't want to write all of this
                range([0], "My range1");
                range([10], "My range2");
                // ... many values in between
                range([700], "My rangen");
            };
            item opcode;
            cross data, opcode;
        };
        post_generate() is also {
            emit data_e;
        };
    };

Full Article
science and technology FEV ISSUE By feedproxy.google.com Published On :: Sat, 11 Apr 2020 08:38:12 GMT I see unmapped points (not-mapped) on both the golden and revised sides. These are all DDR scan latches. E.g.: */latch_lo_gt_ctech_customlib_ddr_scan_latch[156]/o_reg in golden, */latch_lo_gt_ctech_customlib_ddr_scan_latch[156]_clock_scan_latch_dt/sttb_$U4/udp1/U$1 in revised. There are many not-mapped points similar to the one above. The renaming rule below doesn't seem to work: ren rule r1 "_clock_scan_latch_dt" "/o_reg" -rev. Could someone please help here? Full Article
science and technology ViVA XL export to vcsv failed By feedproxy.google.com Published On :: Wed, 22 Apr 2020 12:42:52 GMT Exporting a waveform into a vcsv file returns the error: The wsSaveTraceCommand command generated an exception: basic_string::_S_construct null not valid. Only the first row of the vcsv file is created (";Version, 1, 0"). This is the first time I've exported waveforms generated with Assembler; I had no issues before with the combination of ADE L, parametric sweep, and ViVA XL. My project uses ICADV 12.3. I have not found any related forum entry or documentation. How can I export the waveforms to vcsv? Exporting the values into a table and then exporting to csv works, but my post-processing script was written for the vcsv format. Full Article
science and technology Extracting 1dB bandwidth from parametric sweep-DFT results By feedproxy.google.com Published On :: Wed, 22 Apr 2020 18:55:50 GMT Hi all, I am using ADE Assembler. I ran a transient simulation and swept the input frequency (Fin) of the circuit, and I use a Spectrum Measurement to return the magnitude of the fundamental tone (Sig_fund) for each sweep point. Previously, I used "plot across design points" to plot both "Fin" and "Sig_fund", then used "Y vs Y" to get a waveform of Sig_fund vs Fin and measured the 1dB bandwidth with markers. Can I realize the above measurement with an expression in the Output Setup? And how? I know to set the "Eval type" to "sweep" to process the data across sweep points. But here it has to return an interpolated value of "Fin" at a criterion like (value(calcVal("Sig_fund") 0) - 1). I am not sure whether it can be done in ADE Assembler. Thanks and regards, Yutao Full Article
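A hedged sketch of one possible expression (whether the calculator function applies directly to a swept calcVal() family should be verified in your release): with Eval type set to sweep, calcVal("Sig_fund") evaluates across the sweep and yields Sig_fund as a waveform versus Fin, to which the calculator's bandwidth function can then be applied:

    bandwidth(calcVal("Sig_fund") 1 "low")

Here 1 is the dB drop and "low" selects a low-pass-style bandwidth measurement.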
science and technology how to add section info to extsim_model_include? By feedproxy.google.com Published On :: Wed, 22 Apr 2020 22:12:45 GMT I had encountered an error message like this before, but in Liberate I did not find the entry for providing the section info. Full Article
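A hedged sketch of one workaround (the wrapper-file approach and the file names are assumptions): since extsim_model_include points at a file, the section information can live in a small wrapper that performs a section-qualified include in the target simulator's own syntax, e.g. for Spectre:

    # Liberate Tcl settings
    set_var extsim_model_include ./models_wrapper.scs

    // models_wrapper.scs
    include "foundry_models.scs" section=tt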
science and technology VHDL-AMS std and ieee libraries not found/empty By feedproxy.google.com Published On :: Thu, 23 Apr 2020 17:53:15 GMT I'm trying to set up a VHDL-AMS simulation, so I made a new cell, selected the vhdlamstext type, and copied an example from the web. But when I hit the save-and-compile button, I first got the following NOLSTD error: https://www.edaboard.com/showthread.php?27832-Simulating-a-VHDL-design-in-ldv5-1. So I added said file to my cds.lib and tried again. But now I'm getting this: ncvhdl_p: *F,DLUNNE: Can't find STANDARD at /cadappl/ictools/cadence_ic/6.1.7.721/tools/inca/files/STD. If I go over to the Library Browser, it indeed shows that the library is completely empty. Properties show it has the following files attached. In the file system I've also found an STD.src folder. Is there a way to recompile the library properly? Supposedly this folder includes precompiled versions, but apparently it really doesn't. Full Article
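A hedged first check (a sketch, assuming the installation ships its own compiled STD/IEEE libraries with an accompanying cds.lib): rather than defining STD by hand, include the install's own library definitions in your project cds.lib so the precompiled units are picked up:

    INCLUDE $CDS_INST_DIR/tools/inca/files/cds.lib

If the install's STD library really is empty, the STD.src directory suggests the sources are present; recompiling them with ncvhdl into a writable library would be the fallback, but check the NC-VHDL/AMS installation notes for the supported procedure.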
science and technology Accurate delay measurement between two clocks By feedproxy.google.com Published On :: Fri, 24 Apr 2020 11:39:09 GMT Hi, I am currently struggling with measuring the delay between two clocks with sufficient accuracy. The reference one is a fixed-phase clock, and the other one is a squared clock resulting from a circuit (kind of PLL) synthesis. As I need to run a large number of Monte Carlo simulations in transient noise, I need to improve the simulation speed while keeping a satisfactory delay measurement accuracy (<0.1ps), more specifically at 0V-crossings of the differential clocks. So I cannot simply set a max timestep <0.1ps, as it would take far too long to simulate. To sum up, I would need a very relaxed timestep on the clock high and low levels, and a very short timestep only at rise/fall transitions. For this purpose, I wrote a Verilog-A script:

- using a timer function to accurately emulate the reference clock 0V-crossing times (and get the related times with $abstime);
- using @(cross ...) to get the 0V-crossing times of the synthesized clock: but this is not accurate enough (I see simulation noise around 3ps in Conservative mode). Indeed, the "cross" event occurs at the simulation timepoint following the effective 0V-crossing time; this can sometimes be >3ps off, not nearly accurate enough for my purpose.
- I have tried to replace "cross" with the "above" function, but it hasn't changed anything; whatever time_tol value I use (<0.1ps for instance), the result is the same as with the "cross" function and the errors are much larger than 0.1ps, weirdly.

So I have decided to give up on Verilog-A for measuring the delay between my two clocks. I am currently trying to use the "delay" function of the Cadence Calculator, as I guess it will interpolate between two simulation points and therefore give a more accurate measurement of the 0V-crossing events; but when I try to compute the delay between the synthesized clock and the reference clock, it returns "0". ... Could you please give me hints to dramatically improve my 0V-crossing time measurements while relaxing the simulation time?

- either by helping me write a more suitable Verilog-A script
- or by helping me use the "delay" function of the calculator
- or maybe by providing me a "magic" SKILL function?

Using the AMS + multithread simulator... Thanks a lot in advance for your help and best regards. Full Article
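A hedged Verilog-A sketch using the last_crossing() analog operator, which returns a linearly interpolated crossing time rather than the discrete timestep at which the cross event fires (signal names are illustrative, and accuracy still depends on how linear the waveforms are around 0V):

    `include "disciplines.vams"

    module xing_meter(vref, vclk);
        input vref, vclk;
        electrical vref, vclk;
        real t_ref, t_clk;

        analog begin
            @(cross(V(vref), +1))
                t_ref = last_crossing(V(vref), +1); // interpolated 0V-crossing, reference clock
            @(cross(V(vclk), +1)) begin
                t_clk = last_crossing(V(vclk), +1); // interpolated 0V-crossing, synthesized clock
                $strobe("rise-to-rise delay = %g s", t_clk - t_ref);
            end
        end
    endmodule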
science and technology ERROR (SPECTRE-308) By feedproxy.google.com Published On :: Fri, 24 Apr 2020 22:40:03 GMT Hi, I get this error when I run the simulation:

    SPECTRE_DEFAULTS=-I/CMC/kits/tsmc_130nm/CR013G/PDK_OA/PDKOA33/models/spectre -f psfbin
    Command line: /CMC/tools/cadence/MMSIM15.10.801_lnx86/tools.lnx86/bin/spectre -64 input.scs +escchars +log ../psf/spectre.out +inter=mpsc +mpssession=spectre0_28131_1 -format psfxl -raw ../psf -I/CMC/kits/cmosp13.V1.8.0.0DM/IBM_PDK/cmrf8sf/V1.8.0.4DM/Spectre/models -I/CMC/kits/tsmc_130nm/CR013G/PDK_OA/PDKOA33/models/spectre +lqtimeout 900 -maxw 5 -maxn 5
    ERROR (SPECTRE-308): Unable to open input directory '/CMC/kits/tsmc_130nm/CR013G/PDK_OA/PDKOA33/models/spectre'. Permission denied or no such directory.
    ERROR (SPECTRE-308): Unable to open input directory '/CMC/kits/tsmc_130nm/CR013G/PDK_OA/PDKOA33/models/spectre'. Permission denied or no such directory.
    spectre pid = 29312
    Loading /CMC/tools/cadence/MMSIM15.10.801_lnx86/tools.lnx86/cmi/lib/64bit/5.0/libinfineon_sh.so ...
    Loading /CMC/tools/cadence/MMSIM15.10.801_lnx86/tools.lnx86/cmi/lib/64bit/5.0/libphilips_o_sh.so ...

Could someone suggest a solution? Thank you in advance, Sali Full Article
science and technology ISF Function Extraction in Cadence Virtuoso By feedproxy.google.com Published On :: Mon, 27 Apr 2020 19:56:58 GMT Hi all, is there any tutorial that explains the process of plotting the ISF (impulse sensitivity function) for a given oscillator? Thank you. Full Article
science and technology Virtuoso Spectre Monte Carlo simulation By feedproxy.google.com Published On :: Tue, 28 Apr 2020 06:49:49 GMT Hi, I have designed an analog IP in Cadence ADE and simulated it in Spectre. All corner results look good. But when I run Monte Carlo with 1000 runs, two runs show high current at 125C. Simulated with the same setup under a different user, all runs are clean. I need to know what type of sampling method is used and why it is not clean with my setup. Thanks, Anbarasu Full Article
science and technology Regarding Save/Restore Settings for Transient Simulation By feedproxy.google.com Published On :: Tue, 28 Apr 2020 16:20:14 GMT Hello, I am running a transient simulation on my circuit, and it usually takes more than a day (the circuit is quite big). I usually save only specific nodes to decrease the simulation time. My problem is that, since it takes about a day to finish, I need to checkpoint my transient simulation in case something bad happens. I am aware that the transient simulation has save/restore options, but when I tried to use them I hit a problem. Whenever I restore from the save file, the simulation resumes where it ended before (the expected behavior), but my data is incomplete: the waveform data from before the restore point is not kept. What I did was set the saveperiod and savefile options. I hope someone can help me. Thank you! Regards, Kiel Full Article
science and technology Unable to Import .v files with `define using "Cadence Verilog In" tool By feedproxy.google.com Published On :: Wed, 29 Apr 2020 00:12:42 GMT Hello, I am trying to import multiple Verilog modules defined in a single file with "`define" directives at the top, using Verilog In. The code below is an example of what my file contains. When I use the settings below to import the modules into a library, it imports them correctly but completely ignores all `define directives; hence, when I simulate using any of the modules below, the simulator errors out asking for these variables. My question: is there a way to make Verilog In honor the `define directives in every module cell created?

Code to be imported by Cadence Verilog In:
--------------------------------------------------------

    `timescale 1ns/1ps
    `define PROP_DELAY 1.1
    `define INVALID_DELAY 1.3
    `define PERIOD 1.1
    `define WIDTH 1.6
    `define SETUP_TIME 2.0
    `define HOLD_TIME 0.5
    `define RECOVERY_TIME 3.0
    `define REMOVAL_TIME 0.5
    `define WIDTH_THD 0.0

    `celldefine
    module MY_FF (QN, VDD, VSS, A, B, CK);
        inout VDD, VSS;
        output QN;
        input A, B, CK;
        reg NOTIFIER;
        supply1 xSN, xRN;

        buf IC (clk, CK);
        and IA (n1, A, B);
        udp_dff_PWR I0 (n0, n1, clk, xRN, xSN, VDD, VSS, NOTIFIER);
        not I2 (QN, n0);

        wire ENABLE_B;
        wire ENABLE_A;
        assign ENABLE_B = (B) ? 1'b1 : 1'b0;
        assign ENABLE_A = (A) ? 1'b1 : 1'b0;

        specify
            $setuphold(posedge CK &&& (ENABLE_B == 1'b1), posedge A, `SETUP_TIME, `HOLD_TIME, NOTIFIER);
            $setuphold(posedge CK &&& (ENABLE_B == 1'b1), negedge A, `SETUP_TIME, `HOLD_TIME, NOTIFIER);
            $setuphold(posedge CK &&& (ENABLE_A == 1'b1), posedge B, `SETUP_TIME, `HOLD_TIME, NOTIFIER);
            $setuphold(posedge CK &&& (ENABLE_A == 1'b1), negedge B, `SETUP_TIME, `HOLD_TIME, NOTIFIER);
            $width(posedge CK, 1.0, 0.0, NOTIFIER);
            $width(negedge CK, 1.0, 0.0, NOTIFIER);
            if (A==1'b0 && B==1'b0)
                (posedge CK => (QN:1'bx)) = (1.0, 1.0);
            if (A==1'b1 && B==1'b0)
                (posedge CK => (QN:1'bx)) = (1.0, 1.0);
            if (B==1'b1)
                (posedge CK => (QN:1'bx)) = (1.0, 1.0);
        endspecify
    endmodule // MY_FF
    `endcelldefine

    `timescale 1ns/1ps
    `celldefine
    module MY_FF2 (QN, VDD, VSS, A, B, CK);
        inout VDD, VSS;
        output QN;
        input A, B, CK;
        reg NOTIFIER;
        supply1 xSN, xRN;

        buf IC (clk, CK);
        and IA (n1, A, B);
        udp_dff_PWR I0 (n0, n1, clk, xRN, xSN, VDD, VSS, NOTIFIER);
        not I2 (QN, n0);

        wire ENABLE_B;
        wire ENABLE_A;
        assign ENABLE_B = (B) ? 1'b1 : 1'b0;
        assign ENABLE_A = (A) ? 1'b1 : 1'b0;

        specify
            $setuphold(posedge CK &&& (ENABLE_B == 1'b1), posedge A, `SETUP_TIME, `HOLD_TIME, NOTIFIER);
            $setuphold(posedge CK &&& (ENABLE_B == 1'b1), negedge A, `SETUP_TIME, `HOLD_TIME, NOTIFIER);
            $setuphold(posedge CK &&& (ENABLE_A == 1'b1), posedge B, `SETUP_TIME, `HOLD_TIME, NOTIFIER);
            $setuphold(posedge CK &&& (ENABLE_A == 1'b1), negedge B, `SETUP_TIME, `HOLD_TIME, NOTIFIER);
            $width(posedge CK, 1.0, 0.0, NOTIFIER);
            $width(negedge CK, 1.0, 0.0, NOTIFIER);
            if (A==1'b0 && B==1'b0)
                (posedge CK => (QN:1'bx)) = (1.0, 1.0);
            if (A==1'b1 && B==1'b0)
                (posedge CK => (QN:1'bx)) = (1.0, 1.0);
            if (B==1'b1)
                (posedge CK => (QN:1'bx)) = (1.0, 1.0);
        endspecify
    endmodule // MY_FF2
    `endcelldefine

--------------------------------------------------------

I am using the following Cadence versions:
MMSIM Version: 13.1.1.660.isr18
Virtuoso Version: IC6.1.8-64b.500.1
irun Version: 14.10-s039
Spectre Version: 18.1.0.421.isr9
Full Article
science and technology Using calcVal() in Monte-Carlo simulations By feedproxy.google.com Published On :: Wed, 29 Apr 2020 09:10:55 GMT Hello, I am trying to use calcVal for creating a spec condition from a simulated parameter, and although this works perfectly fine in corner simulations, I am having some difficulties in Monte Carlo (as I will explain). (I have also read "Using calcVal() and its arguments with ADE Assembler" in Resources > Rapid Adoption Kits but couldn't find any relevant information that would help me address the issue.) In the above example I am performing an MC simulation which has 2 corners of 10 runs each. I would like to get the minimum value of variable "OC_limit_thres" out of those 10 runs and pass it as the upper limit of a range argument for variable "OC_flag_thres", so the CPK can be calculated. So the range statement should in reality be: range 32m 44.34m (for corner 0), range 32m 43.14m (for corner 1). If I open the Detail - Transpose view in the Results tab, calcVal("OC_limit_thres" "Currlim_TurnOn_C11") is calculated perfectly fine for each run, but here I need one single value out of those 10 runs - in this case the minimum - so that calcVal can evaluate over multiple runs of one corner. How can this be done, please? Thank you in advance for your time. Full Article
science and technology convert ircx to ict or emDataFile for Voltus-fi By feedproxy.google.com Published On :: Wed, 29 Apr 2020 09:40:07 GMT Hi, I want to convert an ircx file (from TSMC) to an ict file or emDataFile for Voltus-Fi. I have tried many ways, but I cannot make it work, and I do not have QRC installed. Below are some of the tools installed on my server; IC617-64b.500.21 is used. Full Article
science and technology Simulating PSRR+/PSRR- and CMRR By feedproxy.google.com Published On :: Thu, 30 Apr 2020 02:57:17 GMT Hello, I would like to simulate the PSRR+/PSRR- and the CMRR using xf for the attached testbench. Normally, I run an AC analysis and, using the post-processing capability of Cadence Spectre, compute 20*log(vdd/vout) for PSRR+ and 20*log(vss/vout) for PSRR-. I saw in an old post online that I can do:

    PSRR-: db20(1/DATA("/Vn/PLUS" "xf-xf"))
    PSRR+: db20(1/DATA("/Vp/MINUS" "xf-xf"))

How about the CMRR? Thanks a lot in advance. Full Article
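A hedged sketch following the same xf pattern (the source names /Vin and /Vcm are assumptions for illustration; verify polarity and normalization for your testbench): if a common-mode source /Vcm drives both inputs and /Vin is the differential input source, CMRR can be formed as the ratio of the differential transfer to the common-mode transfer:

    CMRR (dB): db20(DATA("/Vin/PLUS" "xf-xf") / DATA("/Vcm/PLUS" "xf-xf"))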
science and technology Can't Find Quantus QRC toolbar on the Layout Suite By feedproxy.google.com Published On :: Thu, 30 Apr 2020 08:51:15 GMT Hi, I want my layout verified by Quantus QRC, but I can't find the toolbar in the menu list (as shown in the picture). I have tried to install EXT182 and configure it with iscape, and I also made some path settings in .bashrc and .cshrc. But when I re-source .cshrc and run virtuoso again, I still can't find the toolbar. If you know a method, please let me know. Thanks a lot! Appreciated. My Virtuoso version is ICADV12.3. Full Article
science and technology Layout can't open with the following warning message in CIW By feedproxy.google.com Published On :: Thu, 30 Apr 2020 15:47:16 GMT Hi, I tried to open my layout from the Library Manager, but the Virtuoso CIW sometimes pops up the following WARNING messages (as the picture depicts), and the layout can't open. Sometimes I reconfigure ICADV12.3 with iscape and restart my VM, and then it incredibly works - but often it does not! If anyone knows what is going on, please let me know! Thanks! Appreciated so much. Full Article
science and technology Simulating IBIS Model using Spectre By feedproxy.google.com Published On :: Thu, 30 Apr 2020 23:17:12 GMT I have a question regarding simulating an IBIS model using Spectre. IBIS model generation always includes the die capacitance, and in the generated IBIS file it appears as the "C_comp" value. Does Spectre account for this capacitance from the IBIS file while computing the time-domain voltage waveform during simulation? If I add additional capacitance in the testbench to model the die capacitance, it would be double counted. Does anyone know whether Spectre already accounts for C_comp when computing the time-domain voltage waveform from the IBIS file during simulation? Full Article
science and technology Design variable in Assembler -> copy from cellview issue By feedproxy.google.com Published On :: Fri, 01 May 2020 05:32:41 GMT Hello, I found a strange issue when using Design Variables -> right-click -> Copy From Cellview in Assembler. The Cadence version is IC618-64b.500.9. I set the value of a variable (e.g., AAA = 100); then, after I right-click -> Copy From Cellview, AAA is updated to another value. In my opinion, "Copy From Cellview" should only add missing variables to the list, and not change any variable's value. Is there any mechanism that could change a variable's value when using "Copy From Cellview"? Thanks. Full Article
science and technology Wrong Constraint Values in Sequential Cell Characterization By feedproxy.google.com Published On :: Fri, 01 May 2020 12:33:48 GMT Hi, I am trying to characterize a D flip-flop for low-voltage operation (0.6V) using Cadence Liberate (V16). This is a positive-edge-triggered D flip-flop based on a true single-phase clocking scheme. After the characterization, the measurements reported for the hold constraint arcs deviate significantly from the (Spectre) spice simulation. The constraint and power settings passed to Liberate are as follows:

    # -------------------- Timing Constraints --------------------
    ### Input waveform ###
    set_var predriver_waveform 2                ;# 2=use pre-driver waveform
    ### Capacitance ###
    set_var min_capacitance_for_outputs 1       ;# write min_capacitance attribute for output pins
    ### Timing ###
    set_var force_condition 4
    ### Constraint ###
    set_var constraint_info 2
    #set_var constraint_search_time_abstol 1e-12 ;# 1ps resolution for bisection search
    set_var nochange_mode 1                     ;# enable nochange_* constraint characterization
    ### min_pulse_width ###
    set_var conditional_mpw 0
    set_var constraint_combinational 2

    # -------------------- CCS Settings --------------------
    set_var ccsn_include_passgate_attr 1
    set_var ccsn_model_related_node_attr 1
    set_var write_library_is_unbuffered 1
    set_var ccsp_min_pts 15                     ;# CCSP accuracy
    set_var ccsp_rel_tol 0.01                   ;# CCSP accuracy
    set_var ccsp_table_reduction 0              ;# CCSP accuracy
    set_var ccsp_tail_tol 0.02                  ;# CCSP accuracy
    set_var ccsp_related_pin_mode 2             ;# use 3 for multiple-input-switching scenarios and Voltus-only libraries

    # -------------------- Power --------------------
    ### Leakage ###
    set_var max_leakage_vector [expr 2**10]
    set_var leakage_float_internal_supply 0     ;# get worst-case leakage for power-switch cells when off
    set_var reset_negative_leakage_power 1      ;# convert negative leakage current to 0
    ### Power ###
    set_var voltage_map 1                       ;# create pg_pin groups, related_power_pin / related_ground_pin
    set_var pin_based_power 0                   ;# 0=based on VDD only; 1=power based on VDD and VSS (default)
    set_var power_combinational_include_output 0 ;# do not include output pins in when conditions for combinational cells
    set_var force_default_group 1
    set_default_group -criteria {power avg}     ;# use average for default power group
    #set_var power_subtract_leakage 4           ;# use 4 for cells with exhaustive leakage states
    set_var subtract_hidden_power 2             ;# 1=subtract hidden power for all cells
    set_var subtract_hidden_power_use_default 3 ;# 3=subtract hidden power from matched when condition then default group
    set_var power_multi_output_binning_mode 1   ;# binning for multi-output cell considered for both timing and power arcs
    set_var power_minimize_switching 1
    set_var max_hidden_vector [expr 2**10]

I specifically used set_var constraint_combinational 2 in the settings, in case the bisection pass/fail mode fails to capture the constraints. In my spice simulation, the hold_rise (D=1, CLK=R, Q=R) arc requires at least ~250 ps for the minimum CLK/D slew combination (and the smallest capacitive load per Liberate's defaults), while Liberate reports only ~30 ps.

The define_cell template for this flip-flop is pretty generic and does not have any user-specified arcs. So which settings are most likely affecting the constraint measurements in Liberate, and how can I debug this issue? Thanks, Anuradha Full Article
science and technology Help!!, Spectre error: Illegal library definition found in netlist for TSMC 180nm By feedproxy.google.com Published On :: Mon, 04 May 2020 08:11:12 GMT Dear All, when I want to start a simulation with Spectre, the error says: Fatal error: Illegal library definition found in netlist. I set the model file correctly, but I don't know why it errors! I opened ADE >> Setup >> Model Library and tried to modify the path of the model files (SCS files); it gives me "Illegal library definition found in netlist". Thanks. Full Article
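A hedged sketch of one common cause (an assumption for this design; path and section name are illustrative): this error often appears when a model file containing library/endlibrary sections is referenced without a corner section. In the Model Library Setup, each .scs entry should then carry a section, which is equivalent to the Spectre include:

    include "/path/to/tsmc18/models.scs" section=tt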
science and technology Importing a capacitor interactive model from manufacturer By feedproxy.google.com Published On :: Mon, 04 May 2020 08:51:16 GMT Hello, I am trying to import (in Spectre) a SPICE model of a ceramic capacitor manufactured by Samsung EM. The link that includes the model is here: http://weblib.samsungsem.com/mlcc/mlcc-ec.do?partNumber=CL05A156MR6NWR. They provide a static SPICE model and an interactive SPICE model. I had no problem including the static model. However, the interactive model, which models voltage and temperature coefficients, seems not to be an ordinary SPICE model. They provide HSPICE, LTSPICE, and PSPICE model files, and I failed to include any of them. Any suggestions? Full Article
science and technology Ultrasim does not converge with BSIMBULK model By feedproxy.google.com Published On :: Tue, 05 May 2020 09:16:51 GMT Hello, I am using Ultrasim Version 18.1.0.314.isr5 64bit 03/26/2019 06:33 (csvcm20c-2). When I run my netlist, Ultrasim gets blocked in the first DC stage and takes forever; then it either fails or never progresses. I am using a 22nm BSIMBULK model. I tried to tune different accuracy and convergence-aid options, but nothing works. When I run the same netlist with Spectre, it works fine with no problem. Also, if I use another model (not BSIMBULK), Ultrasim works and converges with no problem. My first feeling is that Ultrasim has a problem with the BSIMBULK model. Could you please advise? Thank you, Kotb Full Article
science and technology Different Extracted Capacitance Values of the Same MOM Cap Structures Obtained from Quantus QRC Field Solver By feedproxy.google.com Published On :: Tue, 05 May 2020 10:00:51 GMT Hello, I am using Virtuoso 6.1.7. I am performing parasitic extraction of a MOM cap array of 32 caps. I use Quantus QRC with the field solver enabled; I select "QRCFS" for the field solver type and "High" for the field solver accuracy. The unit MOM cap is horizontally and vertically symmetric. The array looks like the sketch below, and there are no other structures except the unit caps: Rationally speaking, the capacitance values of the unit caps should be symmetric with respect to a vertical symmetry axis between cap16 and cap17 (shown with the dashed red line). For example, the capacitance of cap1 should equal the capacitance of cap32, the capacitance of cap2 should equal the capacitance of cap31, etc., as there are no other structures around the caps that might create asymmetry. Nevertheless, this is what I observe after the parasitic extraction: As can be seen, the result is not symmetric, contrary to what is expected. I should also add that I do not observe this when I perform parasitic extraction without the field solver. Why do I get this result? Is it an artifact of the field solver tool (my conclusion was yes, but it still must be verified)? If not, how can something like this happen? Many thanks in advance. Best regards, Can Full Article
science and technology ERROR (OSSGLD-18): and not able to run simulation By feedproxy.google.com Published On :: Wed, 06 May 2020 02:40:42 GMT I put some stimulus in the simulation file section:

    _vpd_data_enb (pu_data_enb 0) vsource wave=[0 0 1n 0 1.015n vcchbm 3n vcchbm] dc=0 type=pwl
    _vpu_data_enb (pd_data_enb 0) vsource dc=pu_enb type=dc

I get the following error:

    ERROR (OSSGLD-18): The command character after '[' in the NLP expression '[0 0 1n 0 1.015n vcchbm 3n vcchbm] dc=0 type=pwl' is not a valid character. The command character is the first character after '[' in the NLP expression. It must be '?', '!', '#', '$', 'n', '@', '.', '~' or '+'. Enter a valid character as the command character.
    si: simin did not complete successfully.

I don't see anything wrong with the stimulus syntax. Full Article
science and technology Delay Degradation vs Glitch Peak Criteria for Constraint Measurement in Cadence Liberate By feedproxy.google.com Published On :: Wed, 06 May 2020 11:41:27 GMT Hi, this question is related to the constraint measurement criteria used by the Liberate inside view. I am trying to characterize a specific D flip-flop for low-voltage operation (0.6V) using Cadence Liberate (V16). When the define_arcs are not explicitly specified in the settings for the circuit (but the inputs/outputs are indeed correct in define_cell), the inside view seems to probe an internal node (i.e., the master latch output) for constraint measurements instead of the Q output of the flip-flop. So, to force the tool to probe the Q output, I added the following constraint arcs:

    # constraint arcs from CK => D
    define_arc -type hold  -vector {RRx} -related_pin CP -pin D -probe Q DFFXXX
    define_arc -type hold  -vector {RFx} -related_pin CP -pin D -probe Q DFFXXX
    define_arc -type setup -vector {RRx} -related_pin CP -pin D -probe Q DFFXXX
    define_arc -type setup -vector {RFx} -related_pin CP -pin D -probe Q DFFXXX

With -probe Q, Liberate identifies Q as the output but uses the glitch-peak criterion instead of the delay-degradation method. What could be the exact reason for this unintended behavior? In my external (Spectre) spice simulation, the flip-flop works well and does not show any issues in output delay degradation when the input sweeps. Thanks, Anuradha Full Article
science and technology Is there a simple way of converting a schematic to an s-parameter model? By feedproxy.google.com Published On :: Fri, 08 May 2020 20:06:07 GMT Before I ask this, I am aware that I can output an S-parameter file from an SP analysis. I'm wondering if there is a simple way of creating an S-parameter model of a component. As an example, if I have an S-parameter model that has 200 ports, where 150 of those ports are to be connected to passive components and the remaining 50 to active components, I can simplify the model by connecting the 150 passive components, running an SP analysis, and generating a 50-port S-parameter file. The problem is that this is cumbersome. You have to wire up 50 PORT components, and then, after generating the s50p file, create a new cellview with an nport component and connect the 50 ports with 50 new pins. Wiring up all of those PORT components takes quite a lot of time, especially as the "Choosing Analyses" form adds arrays in reverse (e.g., if you click on an array of PORT components called X<0:2> it will add X<2>, X<1>, X<0> instead of ascending order), so you have to add all of them to the analyses form manually. Is there any way of taking a schematic and running some magic "generate S-parameter cellview from schematic cellview" function that automates the whole process? Full Article
science and technology Library Characterization Tidbits: Over the Clouds and Beyond with Arm-Based Graviton and Cadence Liberate Trio By feedproxy.google.com Published On :: Fri, 21 Feb 2020 18:00:00 GMT Cadence Liberate Trio Characterization Suite, Arm-based Graviton Processors, and Amazon Web Services (AWS) Cloud have joined forces to cater to the High-Performance Computing, Machine Learning/Artificial Intelligence, and Big Data Analytics sectors. Full Article
science and technology Library Characterization Tidbits: Exploring Intuitive Means to Characterize Large Mixed-Signal Blocks By feedproxy.google.com Published On :: Fri, 06 Mar 2020 16:41:00 GMT Let's review a key feature of Cadence Liberate AMS Mixed-Signal Characterization that offers ease of use along with many other benefits, such as automation of standard Liberty model creation and up to 20X throughput improvement. Full Article
science and technology Library Characterization Tidbits: Validating Libraries Effectively By feedproxy.google.com Published On :: Mon, 23 Mar 2020 18:30:00 GMT In this blog, I brief you on two very useful Rapid Adoption Kits (RAKs) for Liberate LV library validation. Full Article
science and technology Are You Stuck While Synthesizing Your Design Due to Low-Power Issues? We Have the Solution! By feedproxy.google.com Published On :: Tue, 31 Mar 2020 14:39:00 GMT Optimizing power can be a very convoluted yet crucial process. To make chips meet throughput goals along with optimal power consumption, you need to plan right from the beginning! Full Article
science and technology Innovus Implementation System: What Is Stylus UI? By feedproxy.google.com Published On :: Mon, 06 Apr 2020 17:51:00 GMT Hi Everyone, Many of you have heard about the Cadence Stylus Common UI and are wondering what it is and what the advantages might be of using it versus the legacy UI. The webinar answers the following questions:

Why did Cadence develop the Stylus UI, and what is the Stylus Common UI?
How does someone invoke and use the Stylus Common UI?
What are some of the important and useful features of the Stylus Common UI?
What are the key ways in which the Stylus Common UI is different from the default UI?

If you want to learn more about Stylus UI in the context of implementation, view the 45-minute recorded webinar on the Cadence support site. Related Resource: Innovus Block Implementation with Stylus Common UI. Vinita Nelson Full Article
science and technology Genus Synthesis Solution – Introduction to Stylus Common UI By feedproxy.google.com Published On :: Thu, 09 Apr 2020 12:41:00 GMT The Cadence® Genus Synthesis Solution, Innovus Implementation System, and Tempus Timing Signoff Solution have a lot of shared functionality, but in the past the separate legacy user interfaces (UIs) created a lot of differences. A new common user interface that the Genus solution shares with the Innovus and Tempus solutions streamlines flow development and simplifies usability across the complete Cadence digital flow. The Stylus Common UI provides a next-generation synthesis-to-signoff flow with unified database access, MMMC timing configuration and reporting, and low-power design initialization. This webinar answers the following questions:

What is the Stylus Common UI, and why did Cadence develop it?
How does someone invoke and use the Stylus Common UI?
What are some of the important and useful features of the Stylus Common UI?
What are the key ways the Stylus Common UI is different from the legacy UI?

If you want to learn more about Stylus UI in the context of Genus Synthesis Solution, refer to the 45-minute recorded webinar on https://support.cadence.com (Cadence login required). Video Title: Webinar: Genus Synthesis Solution—Introduction to the Stylus Common UI (Video) Direct Link: https://support.cadence.com/apex/ArticleAttachmentPortal?id=a1O0V000009MoGIUA0&pageName=ArticleContent Related Resources: If interested in the full course, including lab content, please contact your Cadence representative or email a request to training_enroll@cadence.com. You can also enroll in the course on http://learning.cadence.com. Enhance the Genus Synthesis experience with videos: Genus Synthesis Solution: Video Library. For any questions, general feedback, or future blog topic suggestions, please leave a comment. Full Article
science and technology Exploring Genus-Joules Integration is just a click away!! By feedproxy.google.com Published On :: Fri, 10 Apr 2020 13:05:00 GMT Joules RTL Power Solution provides a cockpit for RTL designers to explore and optimize the power efficiency of their designs. But this capability is no longer limited to RTL designers!! Yes, you as a synthesis designer can also use the power analysis capabilities of Joules from within Genus Synthesis Solution!! But: How do you do it? Is there any specific switch required? What is the flow/script when Joules is used from within Genus? Are all the Joules commands supported? The answers to all these questions are just a click away in the form of a video on "Genus-Joules Integration"; refer to it on https://support.cadence.com (Cadence login required). Video Title: Genus-Joules Integration (Video) Direct Link: https://support.cadence.com/apex/ArticleAttachmentPortal?id=a1O0V0000091CnXUAU&pageName=ArticleContent Related Resources: Enhance the Genus Synthesis experience with videos: Genus Synthesis Solution: Video Library. Enhance the Joules experience with videos: Joules RTL Power Solution: Video Library. For any questions, general feedback, or future blog topic suggestions, please leave a comment. Full Article
science and technology Joules – Power Exploration Capabilities By feedproxy.google.com Published On :: Sat, 11 Apr 2020 00:59:00 GMT Several tools can generate power reports based on libraries and stimulus. The issue is: what's next? Is there any scope to improve the power consumption of my design? What is the best-case power? How do I pinpoint hot spots in my design? How do I recover wasted power? Here is the solution, in the form of Joules RTL Power Exploration. Joules' framework for power exploration and power implementation/recovery is stimulus based: analysis is done by Joules, and exploration/implementation is done by the user. Power exploration capabilities include: efficiency metrics, pinpointing RTL locations, cross-probing to the stimulus, and centralizing all power data. Do you want to explore more? What is the flow? Which commands can be used? There is a one-stop solution to all these queries in the form of videos on the Joules Power Exploration features on https://support.cadence.com (Cadence login required). Video Links: How to Analyze Ideal Power Using Joules RTL Power Solution GUI? (Video); What is Ideal Power Analysis Flow in Joules RTL Power Solution? (Video); How to Apply Observability Don't Care (ODC) Technique in Joules? (Video); How to Debug Wasted Power Using Ideal Power Analyzer Window in Joules GUI? (Video). Related Resources: Enhance the Joules experience with videos: Joules RTL Power Solution: Video Library. For any questions, general feedback, or future blog topic suggestions, please leave a comment. Full Article
science and technology Library Characterization Tidbits: Rewind and Replay By feedproxy.google.com Published On :: Thu, 16 Apr 2020 16:36:00 GMT A recap of the blogs published in the Library Characterization Tidbits blog series. Full Article
science and technology Library Characterization Tidbits: Recharacterize What Matters - Save Time! By feedproxy.google.com Published On :: Thu, 30 Apr 2020 14:50:00 GMT Read how the Cadence Liberate Characterization solution effectively enables you to characterize only the failed or new arcs of a standard cell. Full Article
science and technology The Elephant in the Room: Mixed-Signal Models By feedproxy.google.com Published On :: Wed, 05 Nov 2014 11:45:00 GMT

Key Findings: Nearly 100% of SoCs are mixed-signal to some extent. Every one of these could benefit from the use of a metrics-driven unified verification methodology for mixed-signal (MD-UVM-MS), but the modeling step is the biggest hurdle to overcome. Without the magical models, the process breaks down for lack of performance, or holes in the chip verification.

In the last installment of The Low Road, we were at the mixed-signal verification party. While no one talked about it, we all saw it: The party was raging and everyone was having a great time, but they were all dancing around that big elephant right in the middle of the room. For mixed-signal verification, that elephant is named Modeling. To get to a fully verified SoC, the analog portions of the design have to run orders of magnitude faster than the speediest SPICE engine available. That means an abstraction of the behavior must be created. It puts a lot of people off when you tell them they have to do something extra to get done with something sooner. Guess what, it couldn't be more true. If you want to keep dancing around like the elephant isn't there, then enjoy your day. If you want to see about clearing the pachyderm from the dance floor, you'll want to read on a little more....

Figure 1: The elephant in the room: who's going to create the model?

Whose job is it?

Modeling analog/mixed-signal behavior for use in SoC verification seems like the ultimate hot potato. The analog team that creates the IP blocks says it doesn't have the expertise in digital verification to create a high-performance model. The digital designers say they don't understand anything but ones and zeroes. The verification team, usually digitally-centric by background, are stuck in the middle (and have historically said "I just use the collateral from the design teams to do my job; I don't create it").

If there is an SoC verification team, then ensuring that the entire chip is verified ultimately rests upon their shoulders, whether or not they get all of the models they need from the various design teams for the project. That means that if a chip does not work because of a modeling error, it ought to point back to the verification team. If not, is it just a "systemic error" not accounted for in the methodology? That seems like a bad answer.

That all makes the most valuable guy in the room the engineer whose knowledge spans the three worlds of analog, digital, and verification. There are a growing number of "mixed-signal verification engineers" found on SoC verification teams. Having a specialist appears to be the best approach to getting the job done, and done right. So, my vote is for the verification team to step up and incorporate the expertise required to do a complete job of SoC verification, analog included. (I know my popularity probably did not soar with the attendees of DVCON with that statement, but the job has to get done.)

It's a game of trade-offs

The difference in computations required for continuous time versus discrete time behavior is orders of magnitude (as seen in Figure 2 below). The essential detail versus runtime tradeoff is a key enabler of verification techniques like software-driven testbenches. Abstraction is a lossy process, so care must be taken to fully understand the loss and test those elements in the appropriate domain (continuous time, frequency, etc.).

Figure 2: Modeling is required for performance

AFE for instance

The traditional separation of baseband and analog front-end (AFE) chips has shifted over the past several years. Advances in process technology, analog-to-digital converters, and the desire for cost reduction have driven both a re-architecting and re-partitioning of the long-standing baseband/AFE solution. By moving more digital processing to the AFE, lower cost architectures can be created, as well as reducing those 130 or so PCB traces between the chips. There is lots of good scholarly work from a few years back on this subject, such as Digital Compensation of Dynamic Acquisition Errors at the Front-End of ADCs and Digital Compensation for Analog Front-Ends: A New Approach to Wireless Transceiver Design.

Figure 3: AFE evolution from first reference (Parastoo)

The digital calibration and compensation can be achieved by the introduction of a programmable solution. This is in fact the most popular approach amongst the mobile crowd today. By using a microcontroller, the software algorithms become adaptable to process-related issues and modifications to protocol standards. However, for the SoC verification team, their job just got a whole lot harder. To determine if the interplay of the digital control and the analog function is working correctly, the software algorithms must be simulated on the combination of the two. That is, here is a classic case of inseparable mixed-signal verification.

So, what needs to be in the model is the big question. And the answer is, a lot. For this example, the main sources of dynamic error at the front-end of ADCs are critical for the non-linear digital filtering that is highly frequency dependent. The correction scheme must be verified to show that the nonlinearities are cancelled across the entire bandwidth of the ADC. This all means lots of simulation. It means that the right level of detail must be retained to ensure the integrity of the verification process. It means that domain experience must be added to the list of expertise of that mixed-signal verification engineer.

Back to the pachyderm

There is a lot more to say on this subject, and lots will be said in future posts. The important starting point is the recognition that the potential flaw in the system needs to be examined. It needs to be examined by a specialist. Maybe a second opinion from the application domain is needed too. So, put that cute little elephant on your desk as a reminder that the beast can be tamed.

Steve Carlson

Related stories - It's Late, But the Party is Just Getting Started

Full Article
science and technology Mixing It Up in Hardware (an Advantest Case Study in Faster Full-Chip Simulations) By feedproxy.google.com Published On :: Wed, 19 Nov 2014 18:27:00 GMT Key Findings: Advantest, in mixed-signal SoC design, sees 50X speedup, 25 day test reduced to 12 hours, dramatic test coverage increase. Trolling through the CDNLive archives, I discovered another gem. At the May 2013 CDNLive in Munich, Thomas Henkel and Henriette Ossoinig of Advantest presented a paper titled “Timing-accurate emulation of a mixed-signal SoC using Palladium XP”. Advantest makes advanced electronics test equipment. Among the semiconductor designs they create for these products is a test processor chip with over 100 million logic transistors, but also with lots of analog functions.They set out to find a way to speed up their full-chip simulations to a point where they could run the system software. To do that, they needed about a 50X speed-up. Well, they did it! Figure 1: Advantest SoC Test Products To skip the commentary, read Advantest's paper here. Problem Statement Software is becoming a bigger part of just about every hardware product in every market today, and that includes the semiconductor test market. To achieve high product quality in the shortest amount of time, the hardware and software components need to be verified together as early in the design cycle as possible. However, the throughput of a typical software RTL simulation is not sufficient to run significant amounts of software on a design with hundreds of millions of transistors. Executing software on RTL models of the hardware means long runs (“deep cycles”) that are a great fit for an emulator, but the mixed-signal content posed a new type of challenge for the Advantest team. Emulators are designed to run digital logic. Analog is really outside of the expected use model. The Advantest team examined the pros and cons of various co-simulation and acceleration flows intended for mixed signal and did not feel that they could possibly get the performance they needed to have practical runtimes with software testbenches. They became determined to find a way to apply their Palladium XP platform to the problem. Armed with the knowledge of the essential relationship between the analog operations and the logic and software operations, the team was able to craft models of the analog blocks using reduction techniques that accurately depicted the essence of the analog function required for hardware-software verification without the expense of a continuous time simulation engine. The requirements boiled down to the following: • Generation of digital signals with highly accurate and flexible timing • Complete chip needs to run on Palladium XP platform • Create high-resolution timing (100fs) with reasonable emulation performance, i.e. at least 50X faster than simulation on the fastest workstations Solution Idea The solution approach chosen was to simplify the functional model of the analog elements of the design down to generation of digital signal edges with high timing accuracy. The solution employed a fixed-frequency central clock that was used as a reference.Timing-critical analog signals used to produce accurately placed digital outputs were encoded into multi-bit representations that modeled the transition and timing behavior. A cell library was created that took the encoded signals and converted them to desired “regular signals”. 
Automation was added to the process by changing the netlisting to widen the analog signals according to user-specified schematic annotations. All of this was done in a fashion that is compatible with debugging in Cadence’s Simvision tool. Details on all of these facets to follow. The Timing Description Unit (TDU) Format The innovative thinking that enabled the use of Palladium XP was the idea of combining a reference clock and quantized signal encoding to create offsets from the reference. The implementation of these ideas was done in a general manner so that different bit widths could easily be used to control the quantization accuracy. Figure 2: Quantization method using signal encoding Timed Cell Modeling You might be thinking – timing and emulation, together..!? Yes, and here’s a method to do it…. The engineering work in realizing the TDU idea involved the creation of a library of cells that could be used to compose the functions that convert the encoded signal into the “real signals” (timing-accurate digital output signals). Beyond some basic logic cells (e.g., INV, AND, OR, MUX, DFF, TFF, LATCH), some special cells such as window-latch, phase-detect, vernier-delay-line, and clock-generator were created. The converter functions were all composed from these basic cells. This approach ensured an easy path from design into emulation. The solution was made parameterizable to handle varying needs for accuracy. Single bit inputs need to be translated into transitions at offset zero or a high or low coding depending on the previous state. Single bit outputs deliver the final state of the high-resolution output either at time zero, the next falling, or the next rising edge of the grid clock, selectable by parameter. Output transitions can optionally be filtered to conform to a configurable minimum pulse width. Timed Cell Structure There are four critical elements to the design of the conversion function blocks (time cells): Input conditioning – convert to zero-offset, optional glitch preservation, and multi-cycle path Transition sorting – sort transitions according to timing offset and specified precedence Function – for each input transition, create appropriate output transition Output filtering – Capability to optionally remove multiple transitions, zero-width, pulses, etc. Timed Cell Caveat All of the cells are combinational and deliver a result in the same cycle of an input transition. This holds for storage elements as well. For example a DFF will have a feedback to hold its state. Because feedback creates combinational loops, the loops need a designation to be broken (using a brk input conditioning function in this case – more on this later). This creates an additional requirement for flip-flop clock signals to be restricted to two edges per reference clock cycle. Note that without minimum width filtering, the number of output transitions of logic gates is the sum of all input transitions (potentially lots of switching activity). Also note that the delay cell has the effect of doubling the number of output transitions per input transition. Figure 3: Edge doubling will increase switching during execution SimVision Debug Support The debug process was set up to revolve around VCD file processing and directed and viewed within the SimVision debug tool. In order to understand what is going on from a functional standpoint, the raw simulation output processes the encoded signals so that they appear as high-precision timing signals in the waveform viewer. The flow is shown in the figure below. 
Figure 4: Waveform post-processing flow The result is the flow is a functional debug view that includes association across representations of the design and testbench, including those high-precision timing signals. Figure 5: Simvision debug window setup Overview of the Design Under Verification (DUV) Verification has to prove that analog design works correctly together with the digital part. The critical elements to verify include: • Programmable delay lines move data edges with sub-ps resolution • PLL generates clocks with wide range of programmable frequency • High-speed data stream at output of analog is correct These goals can be achieved only if parts of the analog design are represented with fine resolution timing. Figure 6: Mixed-signal design partitioning for verification How to Get to a Verilog Model of the Analog Design There was an existing Verilog cell library with basic building blocks that included: - Gates, flip-flops, muxes, latches - Behavioral models of programmable delay elements, PLL, loop filter, phase detector With a traditional simulation approach, a cell-based netlist of the analog schematic is created. This netlist is integrated with the Verilog description of the digital design and can be simulated with a normal workstation. To use Palladium simulation, the (non-synthesizable) portions of the analog design that require fine resolution timing have to be replaced by digital timing representation. This modeling task is completed by using a combination of the existing Verilog cell library and the newly developed timed cells. Loop Breaking One of the chief characteristics of the timed cells is that they contain only combinational cells that propagate logic from inputs to outputs. Any feedback from a cell’s transitive fanout back to an input creates a combinational loop that must be broken to reach a steady state. Although the Palladium XP loop breaking algorithm works correctly, the timed cells provided a unique challenge that led to unpredictable results. Thus, a process was developed to ensure predictable loop breaking behavior. The user input to the process was to provide a property at the loop origin that the netlister recognized and translated to the appropriate loop breaking directives. Augmented Netlisting Ease of use and flow automation were two primary considerations in creating a solution that could be deployed more broadly. That made creating a one-step netlisting process a high-value item. The signal point annotation and automatic hierarchy expansion of the “digital timing” parameter helped achieve that goal. The netlister was enriched to identify the key schematic annotations at any point in the hierarchy, including bit and bus signals. Consistency checking and annotation reporting created a log useful in debugging and evolving the solution. Wrapper Cell Modeling and Verification The netlister generates a list of schematic instances at the designated “netlister stop level” for each instance the requires a Verilog model with fine resolution timing. For the design in this paper there were 160 such instances. The library of timed cells was created; these cells were actually “wrapper” cells comprised of the primitives for timed cell modeling described above. A new verification flow was created that used the behavior of the primitive cells as a reference for the expected behavior of the composed cells. The testing of the composed cells included had the timing width parameter set to 1 to enable direct comparison to the primitive cells. 
The Cadence Incisive Enterprise Simulator tool was successfully employed to perform assertion-based verification of the composed cells against the existing primitive cells. Mapping and Long Paths Initial experiments showed that including the fine-resolution timed cells in the digital emulation environment would roughly double the required capacity per run. As previously pointed out, the timed cells’ purely combinational forward paths create a loop issue; they also produced some paths of more than 5,000 levels of logic. A timed cell optimization process helped solve this problem. The basic idea was to break each long path up by adding flip-flops in strategic locations to reduce combinational path length. This is important because the maximum achievable emulation speed is limited by combinational path length. Results Once the flow was in place and some realistic test cases were run through it, further performance-tuning opportunities were discovered that additionally reduced runtimes (e.g., Palladium XP tbrun mode was used to gain speed). Overall speed gains were measured against a purely software-based simulation on the highest-performance workstation available. The findings of the performance comparison were startlingly good: • On Palladium XP, the simulation speed is 50X faster than on Advantest’s fastest workstation • A software simulation running 25 days can now be run in 12 hours -> realistic runtimes enable long-running tests that were not feasible before • There are now 500 tests that each take more than 48 hours to execute once • They can be run much more frequently using randomization, and this will increase test coverage dramatically Steve Carlson Full Article Advantest Palladium Mixed Signal Verification Emulation mixed signal
science and technology Five Reasons I'm Excited About Mixed-Signal Verification in 2015 By feedproxy.google.com Published On :: Wed, 03 Dec 2014 12:30:00 GMT Key Findings: Many more design teams will be reaching the mixed-signal methodology tipping point in 2015. That means you need to have a (verification) plan, and measure and execute against it. As 2014 draws to a close, it is time to look ahead to the coming years and make a plan. While the macro view of the chip design world shows that it has been a mixed-signal world for a long time, it has been primarily the digital teams that have rapidly evolved design and verification practices over the past decade. Well, I claim that is about to change. 2015 will be a watershed year for many more design teams because of the following factors: 85% of designs are mixed signal, and it is going to stay that way (there is no turning back) Advanced nodes drive new techniques, but they will be applied on all nodes The equilibrium of mixed-signal designs is being challenged; complexity raises the risk level Tipping-point signs are evident and pervasive; things are going to change The convergence of “big A” and “big D” demands true mixed-signal practices Reason 1: Mixed-signal is dominant To begin the examination of what is going to change and why, let’s start with what is not changing. IBS reports that mixed signal accounts for over 85% of chip design starts in 2014, and that percentage will hold steady at 85% in the coming years. It is a mixed-signal world and there is no turning back! Figure 1. IBS: Mixed-signal design starts as percent of total The foundational nature of mixed-signal designs in the semiconductor industry is well established. The reason it is exciting is that a stable foundation provides a platform for driving change. (It’s hard to drive on crumbling infrastructure. If you’re from California, you know what I mean, between the potholes on the highways and the earthquakes and everything.) Reason 2: Innovation in many directions, mostly mixed-signal applications While the challenges being felt at the advanced nodes, such as double patterning and the adoption of FinFET devices, have slowed some from moving on to nodes past 28nm, innovation has simply turned in different directions. Applications for the Internet of Things, automotive, and medical all have strong mixed-signal elements in their semiconductor content value proposition. What is critical to recognize is that many of the design techniques that were initially driven by advanced-node programs have merit across the spectrum of active semiconductor process technologies. For example, digitally controlled, calibrated, and compensated analog IP, along with power-reducing multi-supply domains, power shut-off, and state retention, are being applied in many programs on “legacy” nodes. Another graph from IBS shows that design starts at 45nm and below will continue to grow at a healthy pace. The data also shows that nodes at 65nm and larger will continue to comprise a strong majority of the overall starts. Figure 2. IBS: Design starts per process node TSMC made a comprehensive announcement in September related to “wearables” and the Internet of Things.
From their press release: TSMC’s ultra-low power process lineup expands from the existing 0.18-micron extremely low leakage (0.18eLL) and 90-nanometer ultra low leakage (90uLL) nodes, and 16-nanometer FinFET technology, to new offerings of 55-nanometer ultra-low power (55ULP), 40ULP and 28ULP, which support processing speeds of up to 1.2GHz. The wide spectrum of ultra-low power processes from 0.18-micron to 16-nanometer FinFET is ideally suited for a variety of smart and power-efficient applications in the IoT and wearable device markets. Radio frequency and embedded Flash memory capabilities are also available in 0.18um to 40nm ultra-low power technologies, enabling system level integration for smaller form factors as well as facilitating wireless connections among IoT products. Compared with their previous low-power generations, TSMC’s ultra-low power processes can further reduce operating voltages by 20% to 30% to lower both active power and standby power consumption and enable significant increases in battery life—by 2X to 10X—when much smaller batteries are demanded in IoT/wearable applications. The focus on power is quite evident, and it means that all of the power management and reduction techniques used in advanced-node designs will be coming to legacy nodes soon. Integration and miniaturization are being pursued from the system level in, as well as from the process side. Techniques for power reduction and system energy efficiency are central to the innovations under way. For mixed-signal program teams, this means there is an added dimension of complexity in the verification task. If this dimension is not methodologically addressed, the level of risk gains a new dimension as well. Reason 3: Trends are pushing the limits of established design practices Risk is the bane of every engineer, but without risk there is no progress. And sometimes the amount of risk is not something that can be controlled. Figure 3 shows some of the forces at work that cause design teams to undertake more risk than they would ideally like. With price and form factor as primary value elements in many growing markets, integration of the analog front-end (AFE) with digital processing is becoming commonplace. Figure 3. Trends pushing mixed-signal out of equilibrium The move to the sweet spot of manufacturing at 28nm enables more integration, while providing excellent power and performance parameters with the best cost per transistor. Variation becomes greater and harder to control. For analog design, this means more digital assistance for calibration and compensation. For greatest flexibility and resiliency, many will opt to embed a microcontroller to perform the analog control functions in software. Finally, the first wave of leaders has already crossed the methodology bridge into true mixed-signal design and verification; those who do not follow are destined to fall farther behind. Reason 4: The tipping point accelerants are catching fire The factors cited in Reason 3 all have a technical grounding that serves to create pain in the chip-development process. The more factors that are present, the harder it is to ignore the pain and forgo the relief afforded by adopting known best practices for truly mixed-signal design (versus divide-and-conquer design along analog and digital lines). In the past, design performance was measured in MHz with simple static timing and power analysis. Design flows were conveniently partitioned, literally and figuratively, along analog and digital boundaries.
Today, however, there are gigahertz digital signals that interact at the package and board level in analog-like ways. New dynamic power analysis methods enabled by advanced library characterization must be melded into new design flows. These flows comprehend the growing amount of feedback between analog and digital functions, which are becoming so interlocked as to be inseparable. This interlock necessitates design flows that include metrics-driven and software-driven testbenches, cross-fabric analysis, electrically aware design, and database interoperability across analog and digital design environments. Figure 4. Tipping point indicators Energy efficiency is a universal driver at this point. Be it cost of ownership in the data center or battery life in a cell phone or wearable device, using less power creates more value in end products. However, layering multiple energy management and optimization techniques on top of complex mixed-signal designs adds yet more complexity, demanding adoption of “modern” mixed-signal design practices. Reason 5: Convergence of analog and digital design Divide and conquer is always a powerful tool for complexity management. However, as the number of interactions across the divide increases, the sub-optimality of those frontiers becomes more evident. Convergence is the name of the game. Just as the analog and digital elements of chips are converging, so will the industry practices associated with dealing with the converged world. Figure 5. Convergence drivers Truly mixed-signal design is a discipline that unites the analog and digital domains. That means there is a common/shared data set (versus forcing a single cockpit or user model on everyone). In verification, the modern saying is “start with the end in mind”. That means creating a formal approach to the plan of what will be tested, how it will be tested, and the metrics for success of the tests. Organizing the mechanics of testbench development using the Universal Verification Methodology (UVM) has proven benefits. The mixed-signal elements of SoC verification are not exempted from those benefits. Competition is growing fiercer in the world of semiconductor design. Not being equipped with the best-known practices creates a competitive deficit that is hard to overcome with just hard work. As the landscape of IC content drives toward a more energy-efficient mixed-signal nature, the mounting risk posed by old methodologies may cause casualties in the coming year. Better to move forward with haste and create a position of strength from which differentiation and excellence in execution can be forged. Summary 2015 is going to be a banner year for mixed-signal design and verification methodologies. Those who have forged ahead are in a position of execution advantage. Those who have not will be scrambling to catch up, but with the benefit of following a path that has been proven by many market leaders. Full Article uvm mixed signal design Metric-Driven-Verification Mixed Signal Verification MDV-UVM-MS
science and technology Top 5 Issues that Make Things Go Wrong in Mixed-Signal Verification By feedproxy.google.com Published On :: Wed, 10 Dec 2014 12:18:00 GMT Key Findings: There are a host of issues that arise in mixed-signal verification. As discussed in earlier blogs, the industry trends indicate that teams need to prepare themselves for a more mixed world. The good news is that these top five pitfalls are all avoidable. It’s always interesting to study the human condition. Watching the world through the lens of mixed-signal verification brings an interesting microcosm into focus. The top 5 items that I regularly see vexing teams are: When there’s a bug, whose problem is it? The verification team is the lightning rod Three (conflicting) points of view Wait, there’s more… software There’s a whole new language Reason 1: When there’s a bug, whose problem is it? It actually turns out to be a good thing when a bug is found during the design process. Much, much better than when the silicon arrives back from the foundry, of course. Whether by sheer luck or a structured approach to verification, sometimes a bug gets discovered. The trouble in mixed-signal design occurs when that bug is near the boundary of an analog and a digital domain. Figure 1. Whose bug is it? Typically, designers are a diligent sort and make sure that their block works as desired. However, when things go wrong during integration, it is usually also project crunch time. So, it has to be the other guy’s bug, right? A step in the right direction is to have a third party, a mixed-signal verification expert, apply rigorous methods to the mixed-signal verification task. But that leads to number 2 on my list. Reason 2: The verification team is the lightning rod Having a dedicated verification team with mixed-signal expertise is a great start, but that team is typically hampered by the lack of a fast-executing model of the analog behavior (best practice today being a SystemVerilog real number model – SV_RNM). That model is critical because it enables orders of magnitude more tests to be run against the design in the same timeframe. Without that model, there will be a testing deficit. So, when the bugs come in, it is easy for everyone to point their finger at the verification team. Figure 2. It’s the verification team’s fault Yes, the model creates a new validation task – its validation – but the speed-up enabled by the model more than compensates in terms of functional coverage and schedule. The postscript on this finger-pointing is the institutionalization of SV-RNM. And, of course, the verification team gets its turn. Figure 3. Verification team’s revenge Reason 3: Three (conflicting) points of view The third common issue arises when the finger-pointing settles down. There is still a delineation of responsibility that is often not easy to achieve when designs of a truly mixed-signal nature are being undertaken. Figure 4. Points of view and roles Figure 4 outlines some of the delegated responsibility, but notice that everyone is still potentially on the hook to create a model. Questions of purpose, expertise, bandwidth, and convention go into the decision about who will “own” each model. It is not uncommon for the modeling task to be a collaborative effort where the expertise on analog behavior comes from the analog team, while the verification team ensures that the model is constructed in such a manner that it will fit seamlessly into the overall chip verification.
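To give non-practitioners a flavor of what an SV-RNM is, here is a minimal sketch: a discrete-time, real-valued model of a one-pole low-pass filter. The fixed 1ns update step and all names are illustrative assumptions; production models typically use user-defined nettypes with resolution functions and event-driven updates rather than a free-running loop.

// Minimal illustrative real number model (see assumptions above)
module rnm_lpf #(
  parameter real TAU_NS = 100.0   // filter time constant, in ns
) (
  input  real vin,                // analog input carried as a real value
  output real vout                // filtered analog output
);
  timeunit 1ns; timeprecision 1ps;
  // Fixed-step one-pole low-pass update; executes at digital simulation speed
  always #1 vout += (vin - vout) / TAU_NS;
endmodule

Because a testbench can drive vin and sample vout as ordinary SystemVerilog reals, orders of magnitude more regression runs become practical than with SPICE-level co-simulation.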
Less commonly, the digital design team does the modeling simply to enable the verification of their own work. Reason 4: Wait, there’s more… software As if verifying the function of a chip were not hard enough, there is a clear trend toward product offerings that include software along with the chip. In the mixed-signal design realm, this software often includes functions such as calibration and compensation that provide a flexible way of guarding against parameter drift. When the combination of the chip and the software is the product, they need to be verified together. This puts an enormous premium on fast-executing SV-RNM. Figure 5. There’s software, analog, and digital While the added dimension of software takes the verification task to new heights of complexity, it also serves as a very strong driver to get everyone aligned and motivated to adopt the best-known practices for mixed-signal verification. This is an opportunity to show superior ability! Figure 6. Change in perspective, with the right methodology Reason 5: There’s a whole new language Communication is of vital importance in a multi-faceted, multi-team program. Time zones, cultures, and personalities aside, mixed-signal verification needs to be a collaborative effort. Terminology can be a big stumbling block in reaching a common understanding. If we take a look at the key areas where significant improvement can usually be made, we can start to see the breadth of knowledge that is required to “get” the entirety of the picture: Structure – Verification planning and management Methodology – UVM (Universal Verification Methodology – Accellera standard) Measure – MDV (Metrics-driven verification) Multi-engine – Software, emulation, FPGA prototyping, formal, static, VIP Modeling – SystemVerilog (discrete time) down to SPICE (continuous time) Languages – SystemVerilog, Verilog, Verilog-AMS, VHDL, SPICE, PSL, CPF, UPF Each of these areas has its own jumble of terminology and acronyms. It never hurts to create a team glossary to start with. Heck, I often get my LDO, IFV, and UDT all mixed up myself. Summary Yes, there are a lot of things that make it hard for the humans involved in the process of mixed-signal design and verification, but there is a lot that can be improved once the pain is felt (no pain, no gain is akin to no bugs, no verification methodology change). If we take a look at the key areas from the previous section, we can put a different lens on them and describe the value that they bring: Structure – Uniformly organized, auditable, predictable, transparent Methodology – Reusable, productive, portable, industry standard Measure – Quantified progress, risk/quality management, precise goals Multi-engine – Faster execution, improved schedule, enables a new quality level Modeling – Enabler; flexible, adaptable for diverse applications/design styles Languages – Flexible, complete, robust, standard, scalable to best practices With all of this value firmly in hand, we can turn our thoughts to happier words: … stay tuned for more! Steve Carlson Full Article MS uvm Metric-Driven-Verification Palladium Mixed Signal Verification Incisive MDV-UVM-MS Virtuoso mixed signal MDV
science and technology Automatically Reusing an SoC Testbench in AMS IP Verification By feedproxy.google.com Published On :: Thu, 04 Jan 2018 18:10:00 GMT The complexity and size of mixed-signal designs in wireless, power management, automotive, and other fast-growing applications require continued advancement in mixed-signal verification methodology. An SoC in these applications incorporates a large number of analog and mixed-signal (AMS) blocks/IPs, some acquired from IP providers, some designed in-house, often concurrently. AMS IP must be verified independently, but this is not sufficient to ensure that an SoC will function properly: all scenarios of interaction among the many different AMS IP blocks must also be verified thoroughly at the full-chip/SoC level. To reduce the overall verification cycle, the AMS IP and SoC verification teams must work in parallel from the early stages of the design. Easier said than done! We will outline a methodology that can help. AMS designers verify that their IP meets the required specifications by running a testbench they develop for standalone/out-of-context verification. Typically, an AMS IP is an analog-centric, hierarchical design captured in schematics, composed of blocks represented by transistor-level, HDL, and behavioral descriptions, and verified in the Virtuoso® Analog Design Environment (ADE) using Spectre AMS Designer simulation. An SoC verification team typically uses a UVM SystemVerilog testbench at the full-chip level, where the AMS IP is represented by a simple digital or real number model, running Xcelium/DMS simulation from the command line. Ideally, AMS designers should also verify that their IP functions properly in the context of full-chip integration, but reproducing an often complex UVM SystemVerilog testbench and bringing the top-level design description over to an analog-centric environment is not a simple task. Last year, Cadence partnered with Infineon on a project with the goal of automating the reuse of a top-level testbench in AMS verification. The automation enabled AMS verification engineers to configure setups for verification runs automatically by assembling all the necessary options and files from the AMS IP Virtuoso GUI and the digital SoC top-level command-line configurations. The benefits of this method were: AMS verification engineers did not need to re-create complex stimuli representing the interaction of their IP at the top level Top-level verification stays external to the AMS IP verification environment and continues to be managed by the SoC verification team, but can be reused by the AMS IP team without manual overhead AMS IP is verified in context, and any inconsistencies are detected earlier in the verification process Improved productivity and reduced overall verification time For more details, please see Infineon’s CDNLive presentation. Full Article AMS mixed signal design mixed-signal methodology mixed signal solution analog Mixed-Signal analog/mixed-signal Virtuoso environment mixed-signal verification