How to import different input combinations to the same circuit to get max, min, and average delay, power dissipation, and area
By community.cadence.com Published On :: Wed, 16 Oct 2024 02:47:12 GMT

Hi everyone. I'm a very new Cadence user; I'm not yet good at using the tools and am quite lost in finding a way to get the results I need, so I would like to ask for some suggestions to improve my Cadence skills.

I have some digital decision logic, some of it combinational and some sequential. I would like to import or generate random input combinations at the inputs of my decision logic to get the maximum, minimum, and average delay, power dissipation, and area when feeding the different input combinations. My logic has 8-bit, 16-bit, and 32-bit inputs, and the imported data tends to be decimal numbers. I would like to ask:

- Which tool(s) are the most appropriate to import and feed the different combinations to my decision logic?
- Which tool is the most appropriate for synthesis with different numbers of inputs?
- I have used Genus Synthesis Solution so far. However, with my current skill I can only let Genus synthesize my Verilog code one setup at a time. Is there any way I can feed many inputs at a time and get those results (min, max, and average of delay, power dissipation, and area)?
- Which language or scripts should I pick up to achieve these results?
- Where can I find information to solve my problem, and which information should I look for?

Thank you so much for your time!!
Best Regards
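One way to picture the Genus part of this is a Tcl loop over setups. The sketch below is a rough illustration only: the command names assume the Genus common UI, the file names are placeholders, and each command (including the report redirection syntax) is worth checking against your Genus version. Note also that delay and area fall out of synthesis directly, while data-dependent average or peak power normally needs switching activity from a simulation (for example a TCF or VCD file) fed into power reporting.

    # rough sketch: one synthesis run per setup, reports collected per run
    foreach cfg {setup_a setup_b setup_c} {
        read_hdl rtl/decision_logic.v
        elaborate decision_logic
        read_sdc constraints/${cfg}.sdc
        syn_generic
        syn_map
        syn_opt
        report_timing > reports/${cfg}_timing.rpt
        report_power  > reports/${cfg}_power.rpt
        report_area   > reports/${cfg}_area.rpt
        delete_obj [get_db designs]   ;# start clean for the next setup
    }
    # min/max/average can then be extracted from the report files by a script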
SKILL regex pattern matching
By community.cadence.com Published On :: Fri, 01 Nov 2024 08:30:50 GMT

Hi,

I have a string "[@global_vddi:%:vddi!]" which I need to process to remove the "@", "[", and "]" characters. The desired result is "global_vddi:%:vddi!". I tried the following in the CIW (the return value is shown after each call):

    netExpr = "[@global_vddi:%:vddi!]"
    rexCompile("\([a-zA-Z0-9_:!%]+\)")   => t
    rexExecute(netExpr)                  => t
    rexSubstitute("\0")                  => "global_vddi:%:vddi!"

and I achieved the desired value. I added the same code to my script, but it didn't work: in my script, rexExecute returns t but rexSubstitute returns nil. Here is the snippet from my script:

    netExpr = dbGetTermNetExpr(term)
    if( netExpr then
        rexCompile("\([a-zA-Z0-9_:!%]+\)")
        rexExecute(netExpr)
        netExpr1 = rexSubstitute("\0")
        ...
    )

and the trace log showing the variable values as the code executes:

    stopped before evaluating dbGetTermNetExpr(term)
    after evaluating dbGetTermNetExpr(term) ==> "[@global_vddi:%:vddi!]"
    after evaluating (netExpr = dbGetTermNetExpr(term)) ==> "[@global_vddi:%:vddi!]"
    stopped before evaluating if(netExpr then rexCompile("\([a-zA-Z0-9_:!%]+\)") rexExecute(netExpr) (netExpr1 = rexSubstitute("\0")) ...)
    stopped before evaluating rexCompile("\([a-zA-Z0-9_:!%]+\)")
    after evaluating rexCompile("\([a-zA-Z0-9_:!%]+\)") ==> t
    stopped before evaluating rexExecute(netExpr)
    after evaluating rexExecute(netExpr) ==> t
    stopped before evaluating (netExpr1 = rexSubstitute("\0"))
    after evaluating rexSubstitute("\0") ==> nil
    |[2] netExpr1 set to nil, was nil

Any help or suggestions as to why the code executes differently in the CIW than when called from a SKILL script file will be much appreciated.

I also tried a different approach, using rexReplace instead of rexSubstitute, but couldn't get the regex pattern correct. The code I tried in the CIW is as follows (again with return values shown):

    a = "[@global_vddi:%:vddi!]"   => "[@global_vddi:%:vddi!]"
    rexCompile("\([@\[\]]*\)")     => t
    rexReplace(a "" 0)             => "global_vddi:%:vddi!]"

Only "@[" gets replaced, and "]" is still present. The regex pattern contains "\]" to match the closing square bracket, yet it is not replaced.

Please let me know what I'm missing in these two scenarios. Any help is much appreciated!!

Regards,
Confused SKILL user
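Two hedged observations that may explain the rexReplace half of the problem (assumptions to verify, not a confirmed diagnosis): in a POSIX-style bracket expression, backslash is not special inside [...], so "\]" does not escape the bracket; a literal ] must instead be listed first in the class. And because the rex state (compiled pattern, last match) is global, anything else that calls the rex functions between rexExecute and rexSubstitute, including other loaded code, can clobber it; checking the rexMagic setting in the script environment is also worthwhile. A minimal sketch:

    rexMagic(t)                          ; ensure special (magic) characters are enabled
    netExpr = "[@global_vddi:%:vddi!]"
    rexCompile("[]@[]")                  ; class holds ], @ and [ (']' listed first)
    cleaned = rexReplace(netExpr "" 0)   ; => "global_vddi:%:vddi!"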
Knowledge Booster Training Bytes - The Close Connection Between Schematics and Their Layouts in Microwave Office
By community.cadence.com Published On :: Wed, 04 Jan 2023 04:03:00 GMT

Microwave Office is Cadence's tool of choice for RF and microwave designers working on everything from III-V 5G chips to RF systems in board and package technologies. These types of designs require close interaction between the schematic and its layout. A new Training Byte demonstrates how the schematic-layout connection is built into Microwave Office.
Knowledge Booster Training Bytes - Working with Data Sets in Microwave Office
By community.cadence.com Published On :: Fri, 06 Jan 2023 19:39:00 GMT

Data sets are a powerful and easy-to-use feature in Microwave Office. Data can be effortlessly swapped in graphs and circuit schematics.
Read from a text file with two values and represent them as voltage signals on two different ports a and b
By community.cadence.com Published On :: Fri, 24 Feb 2023 00:33:01 GMT

I want to read two values from a text file and drive them onto two ports. I wrote the code below and get the error shown in the image, and the data in the text file is shown in a screenshot.

    module read_file (a,b);
    electrical a,b;
    integer in_file_0, data_value, valid, count0, int_value;
    analog begin
        @(initial_step) begin
            in_file_0 = $fopen("/home/hh1667/ee610/my_library/read_file/data2.txt","r");
            valid = $fscanf(in_file_0, "%b,%b", int_value, count0);
        end
        V(a) <+ int_value;
        V(b) <+ count0;
    end
    endmodule
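For comparison, here is a minimal Verilog-A sketch of a variant that reads a new pair of values periodically rather than once at initial_step (reading only at initial_step makes both outputs constant for the whole simulation). The file name, the "%d,%d" decimal format, the 1us update period, and the 10ns transition time are all assumptions to adapt:

    `include "constants.vams"
    `include "disciplines.vams"

    module file_driver (a, b);
        electrical a, b;
        integer fh, valid, v1, v2;

        analog begin
            @(initial_step)
                fh = $fopen("data2.txt", "r");
            // read the next line of the file every 1 us
            @(timer(0, 1u))
                valid = $fscanf(fh, "%d,%d", v1, v2);
            // transition() turns the piecewise-constant integers into ramps
            V(a) <+ transition(v1, 0, 10n);
            V(b) <+ transition(v2, 0, 10n);
        end
    endmodule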
In SimVision, how do I change the waveform font size of the signal names?
By community.cadence.com Published On :: Mon, 27 Mar 2023 09:01:44 GMT

Hi Cadence, I use SimVision 20.09-s007, but my computer screen resolution is very high, so the text is too small. In ~/.simvision/Xdefaults I changed the font size from 12 to 16:

    Simvision*Font: -adobe-helvetica-medium-r-normal--16-*-*-*-*-*-*-*

Other .font changes are reflected in SimVision correctly, but the signal names in the waveform traces don't reflect the change. How do I fix that? I don't mind a single variable that changes all text fonts to 16. Thank you!

PS: I found the answer in another post: change Preferences > Waveform > Display > Signal Height.
Genus: Generated netlist doesn't define subckts
By community.cadence.com Published On :: Wed, 17 May 2023 13:47:06 GMT

Dear all, I'm trying to perform an LVS check in Calibre between a layout generated by Innovus and the initial netlist generated by Genus. However, once I hit Run LVS in Calibre, it reports the following warnings and recommends stopping the process:

    Source netlist references but does not define more than 10 subckts:
    DFD1BWP7T DFKCND1BWP7T DFKCNQD1BWP7T DFKSND1BWP7T DFQD1BWP7T
    IND2D0BWP7T INR2D0BWP7T INVD0BWP7T INVD2P5BWP7T IOA21D0BWP7T
    ... (and more)

If I proceed with the LVS run, it shows many errors, as shown in the attached image. Why doesn't Genus include the definition of those subcircuits in the generated netlist? Is this related to flat/hierarchical netlisting? I have included my Genus scripts as well as the generated netlist in the attachments (and here, if the attachments don't work).

Many thanks, Anas
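For context, and as an assumption about the usual setup rather than a verified diagnosis of this case: Genus writes standard cells as instances only, and their transistor-level definitions normally come from the foundry CDL library, which is included on the source side of the LVS run. A minimal sketch of what that source-side include often looks like, with placeholder paths:

    * placeholder paths; the transistor-level definitions of the standard
    * cells (DFD1BWP7T, INVD0BWP7T, ...) come with the standard-cell PDK
    .INCLUDE /pdk/stdcells/tsmc7t_cells.cdl
    .INCLUDE ./lvs/design_from_genus.cdl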
How to generate a "Sheet Name" column in a pin report?
By community.cadence.com Published On :: Wed, 08 Nov 2023 03:52:26 GMT

Hi everyone, is there any method to generate a "Sheet Name" column in a pin report, like the table below? The "Name.Pin" and "Signal" columns can be generated easily, but I have no idea how to generate the "Sheet Name" column. The software used here is Allegro Design Entry HDL, OrCAD Capture, and Allegro PCB Editor. Can these three tools generate "Sheet Name" data?

    Name.Pin   Signal      Sheet Name
    C1_1.1     N301321     SITE1_1
    C1_1.2     GND_ANA_1   SITE1_1
    C1_2.1     N180243     SITE2_1
    C1_2.2     GND_ANA_2   SITE2_1

Thank you.
Merge several worklibs
By community.cadence.com Published On :: Mon, 19 Feb 2024 15:58:11 GMT

Hi, I found a similar question from 10 years ago, but the answer is out of date, so I'm asking again. I have compiled two different blocks in two different paths, using a basic xrun -f xxxx.f, which generated two xcelium.d folders. Now I have to compile a third block based on these two. How can I link the two generated libraries while compiling the third one? Thanks
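If the two blocks were compiled into their own xcelium.d snapshots, one common approach is to point the third compile at those libraries as reference libraries. A minimal sketch, with placeholder paths; the exact option spelling is worth confirming with xrun -helpargs in your release:

    xrun -f block3.f \
         -reflib /proj/blockA/xcelium.d/worklib \
         -reflib /proj/blockB/xcelium.d/worklib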
Which tools support linting for early stages of the digital design flow?
By community.cadence.com Published On :: Thu, 03 Oct 2024 19:08:53 GMT

I am trying to understand the linting process. I know that JasperGold is the main tool for this purpose, though I think JasperGold is better suited to later stages of the design. As an RTL design engineer, I want to find out whether another tool has linting capability earlier in the flow; for example, do Xcelium, Genus, or Conformal support linting? I have seen some contradictory information online regarding this topic, and I can't find anything related to linting for any of these tools. Thanks
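One possibility worth checking in the Xcelium documentation (stated here as an assumption to verify, not an authoritative answer) is the HAL lint engine that ships with Xcelium and can be launched through xrun for early RTL checks:

    # hypothetical invocation; confirm the option and any rule-file setup
    # against your Xcelium release notes
    xrun -hal rtl/*.v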
Asking for a software suggestion
By community.cadence.com Published On :: Tue, 15 Oct 2024 23:05:41 GMT

Hi, I'm a very new learner of Cadence tools. I want to synthesize my logic design and obtain the maximum, minimum, and average delay, power dissipation, and area under multiple varying inputs of different data. The different data will be exported from another software tool's results. I'm lost on the steps and processes I should follow. Could anyone suggest which software, functions, or scripts I should use to achieve these results?
Achieve 80% Less Late-Stage RTL Changes and Early RTL Bug Detection
By community.cadence.com Published On :: Tue, 16 Aug 2022 05:00:00 GMT

It has become challenging to ensure that designs are complete, correct, and adhere to the necessary coding rules before handing them off for RTL verification and implementation. The RTL Designer Signoff Solution from Cadence helps users identify RTL bugs at a very early development stage, saving significant effort and cost for the design and verification teams. Our customers have confirmed that using RTL signoff for their design IP helped save up to 4 weeks and reduce late-stage RTL changes by up to 80%.
Moving Beyond EDA: The Intelligent System Design Strategy
By community.cadence.com Published On :: Thu, 22 Sep 2022 09:20:00 GMT

Rising customer expectations, intermingling fields, and high performance needs can be satisfied with system-based design. An Intelligent System Design strategy can offer a quicker route to an optimal design and helps increase designers' productivity and analysis efficiency by providing the ability to explore the entire design space. The Cadence Intelligent System Design strategy enables a system design revolution and reduces project schedules with optimized continuous integration.
IC Packagers: Workflows That Work for You
By community.cadence.com Published On :: Fri, 19 Jul 2024 09:07:00 GMT

New IC packaging workflows in the Cadence Allegro X layout tools allow you to follow a guided path from starting a design through final manufacturing. The path is there to ensure that you don't miss steps and that you perform actions in the optimal order.
DesignCon Best Paper 2024: Addressing Challenges in PDN Design
By community.cadence.com Published On :: Tue, 17 Sep 2024 19:40:00 GMT

Explore Impacts of Finite Interconnect Impedance on PDN Characterization

Over the past few decades, many details have been worked out in power distribution network (PDN) design in the frequency and time domains. We have simulation tools that can analyze a physical structure from DC to very high frequencies, including spatial variations of its behavior. We also have frequency- and time-domain test methods to measure the steady-state and transient behavior of the built-up systems. All of these pieces in our current toolbox have their own assumptions, limitations, and artifacts, and they constantly raise a challenging question that designers need to answer: how to select the design process, simulation and measurement tools, and procedures so that we get reasonable answers within a reasonable time frame and on a reasonable budget.

Read this award-winning DesignCon 2024 paper, "Impact of Finite Interconnect Impedance Including Spatial and Domain Comparison of PDN Characterization." Led by Samtec's Istvan Novak and written with a team of nine authors from Cadence, Amazon, and Samtec, the paper discusses a series of continually evolving challenges in PDN requirements for cutting-edge designs.
Using Voltus IC Power Integrity to Overcome 3D-IC Design Challenges
By community.cadence.com Published On :: Tue, 08 Oct 2024 06:12:00 GMT

Power network design and analysis of 3D-ICs is a major challenge due to the complex nature and large size of the power network. In addition, designers must deal with the complexity of routing power through the interposer, multiple dies, through-silicon vias (TSVs), and through-dielectric vias (TDVs). Cadence's Integrity 3D-IC Platform and Voltus IC Power Integrity Solution provide a fully integrated solution for early planning and analysis of 3D-IC power networks, 3D-IC chip-centric power integrity signoff, and hierarchical methods that significantly improve the capacity and performance of power integrity (PI) signoff while maintaining a very high level of accuracy. This blog summarizes the typical design challenges faced by today's 3D-IC designers, as discussed in our recent webinar, "Addressing 3D-IC Power Integrity Design Challenges." Please click here to view the full webinar.

Major Trends in Advanced Chip Design

From chips to chiplets, stacked die, 3D-ICs, and more, three major trends are impacting advanced semiconductor packaging design. The first is heterogeneous integration, which we define as a disaggregated approach to designing systems on chip (SoCs) from multiple chiplets. This approach is similar to system-in-package (SiP) design, except that instead of integrating multiple bare die, including 3D stacking, on a single substrate, multiple IPs are integrated in the form of chiplets on a single substrate.

The second major trend is new silicon manufacturing techniques that leverage through-silicon vias (TSVs) and high-density fanout RDL. These advancements mean that silicon is becoming a more attractive material for packaging, especially when high bandwidth and form factor become key attributes of the end design. This brings new design and verification challenges to most packaging engineers, who typically work with organic and ceramic substrate materials.

Finally, on the ecosystem side, all the large semiconductor foundries now offer their own versions of advanced packaging. This brings new ways of supporting design teams with technologies like reference flows and PDKs, concepts that have typically been lacking in the packaging community. Cadence has worked with many of the leading foundries and outsourced semiconductor assembly and test facilities (OSATs) to develop multi-chip(let) packaging reference flows and package assembly design kits. The downside is that, with the time constraints designers are under today, there isn't enough time to simulate all the details of these flows and PDKs. For designers who must make the best electrical/thermal/physical decisions to achieve the best power/performance/area/cost (PPAC), the factors to consider include accurate die size estimation, thermal feasibility, die-to-die interconnect planning, interposer planning (silicon/organic), front-to-front and front-to-back (F2F/F2B) planning, layer stack and electromigration/IR-drop (EMIR)/TSV planning, IO bandwidth feasibility, and system-level architecture selection.

3D-IC Power Network Design and Analysis

The key to success in 3D-IC design is early power integrity planning and analysis. Cadence's Integrity 3D-IC platform is a high-capacity 3D-IC platform that enables 3D design planning, implementation, and system analysis in a single, unified cockpit.
Cadence's Voltus IC Power Integrity Solution is a comprehensive full-chip electromigration, IR drop, and power analysis solution. With its fully distributed architecture and hierarchical analysis capabilities, Voltus provides very fast analysis and has the capacity to handle the largest designs in the industry.

Typically, 3D-IC PDN design and analysis is performed in four phases, as shown in Figure 1:

- Phase 1 - Perform early power delivery network (PDN) exploration with each fabric's PDN cascaded in system PI with early circuit models.
- Phase 2 - Plan 3D-IC PDNs in Cadence's Integrity 3D-IC platform, including micro bumps, TSVs, and through-dielectric vias (TDVs), power grid synthesis for dies, and early rail analysis and optimization.
- Phase 3 - Perform full chip-centric signoff in Voltus with detailed die, interposer, and package models, including chip die models, while keeping some dies flat.
- Phase 4 - Perform full system-level signoff with Cadence's Sigrity SystemPI, using detailed extracted package models from Sigrity XtractIM, board models from Sigrity PowerSI or the Clarity 3D Solver, interposer models from XtractIM or Voltus, and chip power models from Voltus.

Figure 1. 3D-IC PDN design and analysis phases

3D-IC Chip-Centric Signoff

The integration of Integrity 3D-IC and Voltus enables chip-centric early analysis and signoff. Figures 2 and 3 highlight the chip-centric early PI optimization and signoff flows. In early analysis, the on-chip power networks are synthesized, and the micro bumps and TSVs can be placed and optimized. In the signoff stage, all the detailed design data is used for power analysis, and detailed models are extracted and used for the package, interposer, and on-die power networks.

Figure 2. Early chip-centric PI analysis and optimization flow
Figure 3. Chip-centric 3D-IC PI signoff

Hierarchical 3D-IC PI Analysis

To improve the capacity and performance of 3D-IC PI analysis, Voltus enables hierarchical analysis using chiplet models. Chiplet models can be reduced chip models in SPICE format or more accurate xPGV models, which are highly accurate proprietary models generated by Voltus. With xPGV models, hierarchical PI analysis has almost the same accuracy as flat analysis but offers a 10X or greater benefit in runtime and memory requirements.

Conclusion

This blog has highlighted the major design trends enabled by advanced 3D packaging and the design challenges arising from these advancements. The design of power delivery networks is one of these major challenges, and we have discussed Cadence solutions to overcome it. To learn more, view our recent webinar, "Addressing 3D-IC Power Integrity Design Challenges," and visit the Voltus web page.
What is Allegro X Advanced Package Designer, and why do I not see Allegro Package Designer Plus (APD+) in 23.1?
By community.cadence.com Published On :: Fri, 01 Dec 2023 09:46:22 GMT

Starting with SPB 23.1, Allegro Package Designer Plus (APD+) has been rebranded as Allegro X Advanced Package Designer (Allegro X APD). The splash screen for Allegro X APD appears as shown below, instead of showing APD+ 2023:

[Image: Allegro X APD splash screen]

In the Windows Start menu in 23.1, it displays as Allegro X APD 2023 instead of APD+ 2023, as shown below:

[Image: 23.1 Start menu]

In the Product Choices window for 23.1, you will see Allegro X Advanced Package Designer in place of Allegro Package Designer +, as shown below:

[Image: 23.1 product title]
Introducing the New 3DX Canvas in Allegro X Advanced Package Designer
By community.cadence.com Published On :: Tue, 05 Dec 2023 12:50:25 GMT

Have you heard that starting with SPB 23.1, Allegro Package Designer Plus (APD+) is renamed Allegro X Advanced Package Designer (Allegro X APD)? Allegro X APD offers multiple new features and enhancements in areas such as via structures, wirebond, etchback, text wizards, 3D canvas, and more. This post presents the new 3DX Canvas introduced in SPB 23.1. It can be invoked from Allegro X APD (from the menu item View > 3DX Canvas).

Some of the key benefits of the new canvas:

- It addresses the scale and complexity of large, modern package designs and provides a highly efficient visual representation and implementation of packages.
- The new architecture enables high-performance 3D incremental updates by utilizing the GPU for fast rendering.
- Real-time 3D incremental updates are supported, which means the 3D view stays in sync with all changes to the database.
- It provides 3D visualization support for packaging objects such as wire bonds, ball and die bump/pillar geometries, die stacks, etchback, and plating bars.
- This release also introduces an interactive measurement tool for the 3D view of packages. Once you open the 3DX Canvas, press the Alt key and select the objects you want to measure.
- 3DX Canvas provides new 3D DRC bond wire clearances with real 3D DRC checks. True 3D DRC in Constraint Manager has been introduced: if you open Constraint Manager, you will find a new worksheet. The following DRC checks are supported: Wire to Wire, Wire to Finger, Wire to Shape, Wire to Cline, and Wire to Component.
Creating Power and Ground Rings in Allegro X Package Designer Plus
By community.cadence.com Published On :: Fri, 31 May 2024 13:19:12 GMT

Power and ground rings are exposed rings of metal surrounding a die that supply power and ground to the die and create a low-impedance path for current flow. These rings ensure stable power distribution and reduce noise. Allegro X Package Designer Plus has a utility called the Power/Ground Ring Generator, which lets you define and place one or more shapes in the form of a ring around a die.

To run the wizard, go to Route > Power/Ground Ring Generator, or type "pring wizard" in the APD command window. The Power/Ground Ring Wizard creates up to 12 rings (shapes) at a time; if you require more rings, run the wizard as many times as needed. In the wizard you can specify:

- The number of rings to be generated.
- The creation of the first ring as a die flag (the die flag is the boundary of the die, similar to the power ring). If you create a die flag and the first ring is on the same net as the flag, you can enter a negative distance to overlap the ring and the die flag.
- Multiple options for placement of the rings with respect to the origination point, the distance from the edge of the die, and the distance from the nearest die pin on each die side.
- The reference designator of the die with which the rings will be used.
- The distance between rings.
- The width of each ring.
- The corner type of each ring (arc, chamfer, or right angle).
- An assigned net name for each ring.
- A label for each ring.

The rings are basic in nature. For other shape geometries or split rings, choose Shape > Polygon or Shape > Compose/Decompose Shape from the menu in the design window. Depending on the options selected, the wizard UI changes to represent how the rings will be created. Verify the wizard settings to ensure that the rings are created as intended.

1. When the Power/Ground Ring Wizard appears, set the number of rings to 2, accept the other defaults, and click Next. You can enable "Create first ring as die flag" to create a basic die flag.
2. Define Ring 1 and the net associated with it:
   a) Browse and choose Vss in the Net Names dialog box.
   b) Click OK.
   c) Specify the label as VSS.
   d) Click Next.
   The first ring should appear in your design, associated with the proper net, in this case VSS.
3. For the second ring, choose the net Vdd and specify the label VDD. Click Next.
4. Click Finish in the Result Verification screen to complete the process.

The completed rings appear as shown below. Now, when you click on a power or ground die pin and add wirebonds, you will see that the wirebonds are placed directly on the power and ground rings.
Package Design Integrity Checks
By community.cadence.com Published On :: Fri, 09 Aug 2024 10:02:59 GMT

When things go wrong in your package design flow, it can sometimes be difficult to understand the cause of the issue. It can be something like a die component wrongly identified as a BGA, a via stack with an alignment issue, or duplicate bondwires; these are just a few examples, and there can be many more. When interactive messages and log files do not help determine the problem, the Package Design Integrity Check tool becomes very handy. This feature lets you run integrity checks that ensure the database is configured correctly.

To invoke the command from Allegro X Advanced Package Designer, use the Tools > Package Design Integrity menu, or type "package integrity" at the command prompt.

The Package Design Integrity Checks dialog box includes all categories and checks currently registered for the running product. You can enable all of these categories and checks or only the ones you want to run. The utility can fix errors automatically (where possible). Errors and warnings are written to the package_design_check.log file. The utility can also be extended with your own custom rules based on your specific flows and needs.
How to transfer etch/conductor delays from Allegro Package Designer (APD) to pin delays in Allegro PCB Editor
By community.cadence.com Published On :: Sun, 10 Nov 2024 23:39:10 GMT

The packaging group has finished their design in Allegro Package Designer (APD), and I want to use the etch/conductor delay information from the .mcm file in the board design in Allegro PCB Designer. Is there a method to do this?

This can be done by exporting the etch/conductor data from APD and importing it as PIN_DELAY information into Allegro PCB Editor. If you are generating a length report for use as Allegro pin delays, consider changing the APD units to mils and unchecking the Time Delay Report option.

In Allegro Package Designer:

1. Select File > Export > Board Level Component.
2. Select HDL for the output format and click OK.
3. Choose a padstack for use when generating the component and click OK.

This creates a file, package_pin_delay.rpt, in the component subdirectory of the current working directory. This file contains the etch/conductor delay information that can be imported into Allegro.

In Allegro PCB Editor:

1. Make sure that the device you want to import delays to is placed in your board design and is visible.
2. Select File > Import > Pin delay.
3. Browse to the component directory and select package_pin_delay.rpt. The browser defaults to looking for *.csv files, so you will need to change "Files of type" to *.* to select the file.
4. You may be prompted with an error message stating that the component cannot be found and that you should select one. If so, select the appropriate component.
5. Select Import. Once the import is completed, select Close.

Note: It is important that all non-trace shapes have a VOLTAGE property so they are not processed by the 2D field solver. You should run Reports > Net Delay Report in APD prior to generating the board-level component. This displays the net name of each net as it is processed; if you missed a VOLTAGE property on a net, the net name will show in the report processing window, and you will know which net needs the property.
Unveiling the Capabilities of Verisium Manager for Optimized Operations
By community.cadence.com Published On :: Thu, 17 Oct 2024 06:13:06 GMT

In SoC development, the verification cycle is a crucial phase that ensures products meet their specifications and function correctly. However, the complexity of modern SoC projects, with their constant data flow, multiple validation teams working in parallel, and tight schedules, presents significant challenges. This article explores these challenges and introduces Verisium Manager as a solution that embodies the "One Tool Fits All" concept: Verisium Manager is designed to handle all aspects of the verification process for SoC development, from planning to coverage analysis to regression testing, thereby addressing the complex needs of SoC verification.

The Hurdles in Traditional Validation Cycles

A typical validation process involves planning, coverage analysis, and regression testing. This complexity is compounded by using separate tools for each activity, leading to multiple control environments, APIs, and databases, not to mention an array of tool owners. Such fragmentation results in constant data transfer and translation between systems, from the planning tool to the coverage analysis tool and then to the regression testing tool. This continuous movement of data causes delays, system instability, poor user experiences, and, ultimately, a dip in the quality of the validation process. The use of multiple platforms leads to inefficiency and reduced productivity. What's needed is a unified system that can streamline the workflow, simplify the verification process, and enhance its effectiveness.

Envisioning the Ideal Solution: Verisium Manager

The cornerstone of an efficient validation cycle is integration and simplicity. The ideal solution is a single platform that consolidates planning, coverage analysis, and regression management into one smooth, unified process. Verisium Manager emerges as this much-needed solution, encompassing all the functionalities necessary to streamline the validation process. Its comprehensive nature instills confidence in its ability to handle all aspects of the verification cycle. It can be fully customized to address and enforce any validation methodology and can facilitate smooth integration into any customer environment.

Features that stand out in Verisium Manager include:

- Unified Workflow: It acts as a single cockpit from which all activities are orchestrated, ensuring the validation teams' work is uninterrupted and seamlessly integrated.
- Customization and Integration: Verisium Manager supports customizing test-plan structures and mapping results per project, ensuring a perfect fit for various project requirements. Its ability to integrate smoothly into the project's environment and compute platforms is unparalleled.
- Support for Continuous Updates and Migration: The tool accommodates constant updates to project data and supports the migration of legacy data, ensuring that no historical data is lost in the transition to a new system.

Addressing Project-Specific Needs

Verisium Manager recognizes the diversity of different projects and offers project-specific solutions, including:

- Enforcing Project Test-Plan Structures and Attributes: It supports and enforces each project's unique test-plan structure and mapping guidelines.
- Unified Data Views and Measurements: Verisium Manager promotes a unified view of data across all teams and enforces unified measurements, ensuring consistency and clarity in the validation process.
- Enabling Project-Specific Actions and Integrations: The tool is designed to support project-specific actions directly from its graphical user interface and allows for smooth integration with in-house databases, dashboards, and the project execution stack.

Verisium Manager is the epitome of efficiency in software/hardware validation. Its differentiating features, such as support for customization, a unified data view, and comprehensive coverage and regression capabilities, make it an indispensable tool for any validation team looking to elevate their workflow.
A Brief on the Message Bus Interface in PIPE
By community.cadence.com Published On :: Thu, 17 Oct 2024 12:24:00 GMT

The PHY Interface for the PCI Express (PCIe), SATA, USB, DisplayPort, and USB4 Architectures (PIPE) enables the Physical Layer (PHY) and Media Access Layer (MAC) designs to be developed separately, providing a standard communication interface between these two components in the system. In recent years, the PIPE interface specification has incorporated many enhancements to support new features and advancements in the supported protocols. As the supported features increase, so does the signal count on the PIPE interface. To address the issue of increasing signal count, the message bus interface was introduced in PIPE 4.4 and used for PCIe lane margining at the receiver and for elastic buffer depth control. In PIPE 5.0, all the legacy PIPE signals without critical timing requirements were mapped into message bus registers so that their associated functionality could be accessed via the message bus interface instead of dedicated signals. It was decided that any new feature added in later versions of the PIPE specification would be available only via message bus accesses unless it had critical timing requirements that needed dedicated signals.

Message Bus Interface

The message bus interface provides a way to initiate and participate in non-latency-sensitive PIPE operations using a small number of wires, and it enables future PIPE operations to be added without adding more wires. Use of this interface requires the device to be in a power state with PCLK running. Control and status bits used for PIPE operations are mapped into 8-bit registers that are hosted in 12-bit address spaces in the PHY and the MAC. The registers are accessed using read and write commands driven over the signals M2P_MessageBus[7:0] and P2M_MessageBus[7:0]. These signals are synchronous with PCLK and are reset with Reset#.

Message Bus Interface Commands

Four-bit commands are used for accessing the PIPE registers across the message bus. A transaction consists of a command and any associated address and data. All of the following are time-multiplexed over the bus from the MAC and the PHY:

- Commands (write_uncommitted, write_committed, read, read completion, write_ack)
- A 12-bit address, used for all reads and writes
- 8-bit data, either read or written

There can be cases where multiple PIPE interface signals need to change on the same PCLK cycle. To address such cases, the concepts of write_uncommitted and write_committed are introduced. An uncommitted write is saved into a write buffer, and its associated data values are updated into the relevant PIPE register at a future time when a write_committed is received, taking effect during the same PCLK cycle. Once a write_committed is sent, no new write, whether committed or uncommitted, nor any read command may be sent until a write_ack is received. It is allowed to send NOP commands between a write_uncommitted and a write_committed.

A simple timing demonstration of the message bus:

[Figure: message bus timing example]

Message Address Space

The MAC and PHY each implement unique 12-bit address spaces. These address spaces host registers associated with the PIPE operations. The MAC accesses PHY registers using M2P_MessageBus[7:0], and the PHY accesses MAC registers using P2M_MessageBus[7:0]. The MAC and PHY access specific bits in the registers to:

- Initiate operations
- Initiate handshakes
- Indicate status
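To make the time multiplexing concrete, here is a rough SystemVerilog sketch of committed and uncommitted writes driven over the 8-bit M2P bus. The 4-bit command encodings and the exact phase packing are illustrative placeholders, not the values defined in the PIPE specification:

    module pipe_msgbus_demo;
      logic pclk = 0;
      logic [7:0] m2p_bus;

      // placeholder encodings -- the real values come from the PIPE spec
      typedef enum logic [3:0] {
        NOP = 4'h0, WR_UNCOMM = 4'h1, WR_COMM = 4'h2,
        RD = 4'h3, RD_CPL = 4'h4, WR_ACK = 4'h5
      } msgbus_cmd_e;

      always #5 pclk = ~pclk;

      // a write is time-multiplexed over the 8-bit bus:
      // cycle 1: {command, addr[11:8]}, cycle 2: addr[7:0], cycle 3: data
      task automatic send_write(msgbus_cmd_e cmd,
                                logic [11:0] addr,
                                logic [7:0]  data);
        @(posedge pclk) m2p_bus <= {cmd, addr[11:8]};
        @(posedge pclk) m2p_bus <= addr[7:0];
        @(posedge pclk) m2p_bus <= data;
      endtask

      initial begin
        send_write(WR_UNCOMM, 12'h00C, 8'hA5); // buffered in the write buffer
        send_write(WR_COMM,   12'h010, 8'h3C); // both take effect together
        $finish;
      end
    endmodule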
Each 12-bit address space is divided into four main regions: the receiver address region, the transmitter address region, the common address region, and the vendor-specific address region. Each register field has an attribute of either level or 1-cycle assertion. When a level field is written, the value written is maintained by the hardware until the next write to that field or until a reset occurs. When a 1-cycle field is written to assert the value high, the hardware maintains the assertion for only a single cycle and then automatically resets the value to zero on the next cycle.

Cadence has a mature Verification IP solution for verifying the various aspects and topologies of PIPE PHY designs. For more details, refer to the Simulation VIP for PIPE PHY | Cadence page, or send an email to support@cadence.com.
Deferrable Memory Write Usage and Verification Challenges
By community.cadence.com Published On :: Thu, 17 Oct 2024 21:00:00 GMT

Real-time data processing and responsiveness are crucial in applications such as high-performance computing, data centers, and other systems requiring low-latency data transfers. Deferrable Memory Write [DMWr] enables efficient use of PCIe bandwidth and resources by intelligently managing memory write operations based on system dynamics and workload priorities. By effectively leveraging DMWr, devices can achieve optimized performance and responsiveness, in line with the evolving demands of modern computing applications.

What Is Deferrable Memory Write?

The Deferrable Memory Write (DMWr) ECN introduced this new memory transaction type, which was later officially incorporated into PCIe 5.0 and carried into CXL 2.0. With this enhanced transaction type, the Requester attempts to write to a given location in Memory Space using the non-posted DMWr TLP type, and completion of the write can be postponed to improve overall system efficiency and performance: the memory write operation can be delayed, or deferred, until other higher-priority tasks complete. DMWr requires the Completer to return an acknowledgment to the Requester and provides a mechanism for the recipient to defer (temporarily refuse to service) the Request.

DMWr thus gives Endpoints and hosts a way to choose to carry out or defer incoming DMWr Requests, which they can use to simplify the design of flow control, reduce latency, and improve throughput. The DMWr TLP format is shown in Figure A.

(Fig A) Deferrable Memory Write TLP format.

Example Scenario

Here's how DMWr works in a simplified example. Imagine a system with an endpoint device (Device A) and a host CPU (Device B). Device B wants to write data to Device A's memory, but for various reasons, such as system bus congestion or prioritization of other transactions, Device A can defer completion of the memory write request. The flow looks like this:

1. Initiation of Memory Write: Device B initiates a memory write transaction to Device A. This involves sending the memory write request along with the data payload over the PCIe link.
2. Acknowledgment and Deferral: Upon receiving the memory write request, Device A acknowledges the transaction but may decide to defer its completion. Device A sends an acknowledgment back to Device B, indicating that it has received the data and intends to complete the write operation, but not immediately.
3. Deferred Completion: Device A defers completion of the memory write to a later, more opportune time. This deferral allows Device A to prioritize other transactions or optimize the use of system resources, such as memory bandwidth or processor availability.
4. Completion and Response: At a later point, Device A completes the deferred memory write operation and sends a completion indication back to Device B. This completion typically includes any status updates or additional information related to the transaction.
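As a sketch of how step 4 can be bounded in a testbench, the checker below asserts that every acknowledged DMWr request is eventually followed by a completion within a placeholder window. The signal names and the MAX_DEFER bound are assumptions for illustration, not spec-defined values:

    module dmwr_checker #(parameter int MAX_DEFER = 1024)
      (input logic clk, rst_n, dmwr_req_ack, dmwr_cpl);

      // once a DMWr is acknowledged, a completion must arrive
      // within MAX_DEFER cycles
      property p_deferred_completion;
        @(posedge clk) disable iff (!rst_n)
          dmwr_req_ack |-> ##[1:MAX_DEFER] dmwr_cpl;
      endproperty

      a_deferred_completion: assert property (p_deferred_completion)
        else $error("DMWr acknowledged but not completed within %0d cycles",
                    MAX_DEFER);
    endmodule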
Usage or Importance of DMWr Deferrable Memory Write usage provides the improvement in the following aspects: Reduced Latency: By deferring less critical memory write operations, more critical transactions can be processed with lower latency, improving overall system responsiveness. Improved Efficiency: Optimizes the utilization of system resources such as memory bandwidth and CPU cycles, enhancing the efficiency of data transfers within the PCIe architecture. Enhanced Performance: Allows devices to manage and prioritize transactions dynamically, potentially increasing overall system throughput and reducing contention. Challenges in the Implementation of DMWr Transactions The implementation of deferrable memory writes (DMWr) introduces several advancements and challenges in terms of usage and verification: Timing and Synchronization: DMWr allows transactions to be deferred, complicating timing requirements or completing them within acceptable timing windows to avoid protocol violations. Ensuring proper synchronization between devices becomes critical to prevent data loss or corruption. Protocol Compliance: Verification must ensure compliance with ECN PCIe 6.0 and CXL specifications regarding when and how DMWr transactions can be initiated and completed. Performance Optimization: While DMWr can improve overall system performance by reducing latency, verifying its impact on system performance and ensuring it meets expected benchmarks is crucial. Error Handling: Handling errors related to deferred transactions adds complexity. Verifying error detection and recovery mechanisms under various scenarios (e.g., timeout during deferral) is essential. Verification Challenges of DMWr Transactions The challenges to verifying the DMWr transaction consist of all checks with respect to Function, Timing, Protocol compliance, improvement, Error scenario, and security usage on purpose, as well as Data integrity at the PCIe and CXL. Functional Verification: Verifying the correct implementation of DMWr at both ends of the PCIe link (transmitter and receiver) to ensure proper functionality and adherence to specifications. Timing Verification: Validating timing constraints associated with deferring writes and ensuring transactions are completed within specified windows without violating protocol rules. Protocol Compliance Verification: Checking that DMWr transactions adhere to PCIe and CXL protocol rules, including ordering rules and any restrictions on deferral based on the transaction type. Performance Verification: Assessing the impact of DMWr on overall system performance, including latency reduction and bandwidth utilization, through simulation and testing. Error Scenario Verification: Creating and testing scenarios to verify error handling mechanisms related to DMWr, such as timeouts, retries, and recovery procedures. Security Considerations: Assessing potential security vulnerabilities related to DMWr, such as data integrity risks during deferred transactions or exposure to timing-based attacks. Major verification challenges and approaches are timing and synchronization verification in the context of implementing deferrable memory writes (DMWr), which is crucial due to the inherent complexities introduced by deferred transactions. Here are the key issues and approaches to address them: Timing and Synchronization Issues Transaction Completion Timing: Issue: Ensuring deferred transactions are completed within the specified time window without violating protocol timing constraints. 
- Approach: Design an internal timer and checker to model worst-case scenarios where transactions are deferred, and verify that they complete within the allowable latency limits. This involves simulating various traffic loads and conditions to assess timing under different scenarios.

Ordering and Dependencies:
- Issue: Verifying that transactions deferred using DMWr maintain correct ordering and dependencies relative to non-deferred transactions.
- Approach: Implement test scenarios that include mixed traffic of DMWr and non-DMWr transactions. Verify through simulation or emulation that dependencies and ordering requirements are correctly maintained across the PCIe link.

Interrupt Handling and Response Times:
- Issue: Verifying the handling of interrupts and ensuring timely responses from devices involved in DMWr transactions.
- Approach: Implement test cases that simulate interrupt generation during DMWr transactions. Measure and verify the response times to interrupts to ensure they meet system latency requirements.

In conclusion, while deferrable memory writes in PCIe and CXL offer significant performance benefits, their implementation and verification present several challenges related to timing, protocol compliance, performance optimization, and error handling. Addressing these challenges requires rigorous testing, robust testbench traffic, advanced verification methodologies, and a thorough understanding of the PCIe specifications; the motivation behind introducing deferrable writes carries over to CXL, where the mechanism is used effectively. A successful verification effort confirms that the performance benefits of DMWr (reduced latency, improved throughput) are achieved without compromising timing integrity or violating the protocol specifications.

In summary, PCIe and CXL are complex protocols with many verification challenges. You must understand many new specification changes and build a robust verification plan for the new features and for backward-compatible tests impacted by them. Cadence's PCIe 6.0 Verification IP is fully compliant with the latest PCI Express 6.0 specification and provides an effective and efficient way to verify components interfacing with the PCIe 6.0 interface. Cadence VIP for PCIe 6.0 provides exhaustive verification of PCIe-based IP and SoCs, and we are working with early-adopter customers to speed up every verification stage.

More Information

For more info on how Cadence PCIe Verification IP and TripleCheck VIP enable users to confidently verify PCIe 6.0, see our VIP for PCI Express, VIP for Compute Express Link, and TripleCheck for PCI Express. See the PCI-SIG website for more details on PCIe in general and the different PCI standards.
Randomization Considerations for PCIe Integrity and Data Encryption Verification Challenges
By community.cadence.com Published On :: Fri, 08 Nov 2024 05:00:00 GMT

Peripheral Component Interconnect Express (PCIe) is a high-speed interface standard widely used for connecting processors, memory, and peripherals. With the increasing reliance on PCIe to handle sensitive data and critical high-speed data transfers, ensuring data integrity and encryption during verification is an essential goal. In the field of verification, randomization is a key technique that drives robust PCIe verification: it introduces unpredictability to simulate real-world conditions and uncover hidden bugs in the design. This blog examines the significance of randomization in PCIe IDE verification, focusing on how it ensures data integrity and encryption reliability, while also highlighting the unique challenges it presents. For more background on PCIe IDE, you can refer to Introducing PCIe's Integrity and Data Encryption Feature.

The Importance of Data Integrity and Data Encryption in PCIe Devices

- Data Integrity: Ensures that transmitted data arrives unchanged from source to destination. Even minor corruption in data packets can compromise system reliability, making integrity a critical aspect of PCIe verification.
- Data Encryption: Protects sensitive data from unauthorized access during transmission. Encryption in PCIe follows a standard to secure information while operating at high speed.

Maintaining both data integrity and data encryption at PCIe's high transfer rates, 64GT/s in PCIe 6.0 and 128GT/s in PCIe 7.0, is essential for all endpoint devices. However, validating these mechanisms requires comprehensive testing and verification methodologies, which is where randomization plays a crucial role. You can refer to Why IDE Security Technology for PCIe and CXL? for more details.

Randomization in PCIe Verification

Randomization refers to the generation of test scenarios with unpredictable inputs and conditions to expose corner cases. In PCIe verification, this technique helps ensure that all possible behaviors are tested, including rare or unexpected situations that could cause data corruption or encryption failures that would create serious problems later. For PCIe IDE verification, randomization helps us verify behavior more efficiently.

Randomization for Data Integrity Verification

Here are some randomized verification approaches that mimic real-world traffic conditions, uncovering subtle integrity issues that might not surface with directed verification methods.

1. Randomized Packet Injection: In this technique, randomized data packets are injected into the communication stream between devices. We inject random, malformed, or out-of-sequence packets into the PCIe link and mix valid and invalid IDE-encrypted packets to check the system's ability to detect and reject unauthorized or invalid packets, while checking that encryption and decryption occur correctly across packets. During verification, we check whether the system logs proper errors or alerts when encountering invalid packets. This ensures coverage of different data paths and robust protocol checking. The technique helps assess the resilience of the PCIe IDE feature in terms of:

(i) Data corruption: detecting whether the system can maintain data integrity.
(ii) Encryption failures: testing the robustness of the encryption under random data injection.
(iii) Packet ordering errors: ensuring reordering does not affect data delivery.

2. Random Errors and Fault Injection: This involves simulating random bit flips, PCRC errors, or protocol violations to validate the robustness of PCIe's error detection and correction mechanisms. These techniques help assess how well the IDE implementation:

(i) Detects and responds to unexpected errors.
(ii) Maintains secure communication under stress.
(iii) Follows the PCIe error recovery and reporting mechanisms (AER, Advanced Error Reporting).
(iv) Ensures encryption and decryption states stay synchronized across endpoints.

3. Traffic Pattern Randomization: Randomizing the sequence, size, and timing of data packets helps test how the device maintains data integrity under heavy, unpredictable traffic loads.

Randomization for Data Encryption Verification

Encryption adds complexity to verification, as encrypted data streams are not readable by traditional checks. Randomization becomes essential to test how encryption behaves under different scenarios, and it ensures that vulnerabilities such as key reuse or predictable patterns are identified and mitigated.

1. Random Encryption Keys and Payloads: Randomly varying keys and payloads helps validate the correctness of the encryption without hardcoding assumptions, ensuring that the encryption logic behaves correctly across all possible inputs.

2. Randomized Initialization Vectors (IVs): Many encryption protocols require a unique IV for each transaction; randomized IVs ensure that encryption does not repeat patterns. To understand the IDE key-management flow, the diagram below illustrates a detailed example key programming flow using the IDE_KM protocol.

Figure 1: IDE_KM Example

As Figure 1 shows, the IDE_KM protocol involves the start of the IDE_KM session, device capability discovery, a key request from the host, key programming to the PCIe device, and key acknowledgment. First, the host starts the IDE_KM session by detecting the presence of PCIe devices; if a device supports the IDE protocol, the system continues with the key programming process. A query then discovers the device's encryption capabilities, establishing whether the device supports dynamic key updates or static keys. The host then sends a request to the key management entity to obtain a key suitable for the device. Once the key is obtained, the host programs the key into the IDE controller on the PCIe endpoint, so that the host and the device share the same key to encrypt and authenticate traffic. The device acknowledges that it has received and successfully installed the encryption key, and the acknowledgment message is sent back to the host. Once both the host and the PCIe endpoint are configured with the key, a secure communication channel is established, and from this point all data transmitted over the PCIe link is encrypted to maintain confidentiality and integrity. IDE_KM plays a crucial role in distributing keys securely and maintaining encryption and integrity for PCIe transactions; this key programming flow ensures that a secure communication channel is established between the host and the PCIe device. Hence, the randomized key approach ensures that the encryption does not repeat patterns.
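To make items 1 and 2 concrete, here is a small constrained-random SystemVerilog sketch in which every generated configuration gets a fresh key and a unique, non-zero IV. The field widths and the uniqueness rule are illustrative assumptions; the real widths and key-management rules come from the IDE specification:

    class ide_stream_cfg;
      rand bit [255:0] key;
      rand bit [95:0]  iv;

      // IVs must be non-zero and must not repeat within a run (illustrative)
      static bit [95:0] used_ivs[$];
      constraint c_iv_nonzero { iv != '0; }
      constraint c_iv_unique  { !(iv inside {used_ivs}); }

      function void post_randomize();
        used_ivs.push_back(iv);
      endfunction
    endclass

    module tb;
      initial begin
        ide_stream_cfg cfg = new();
        repeat (4) begin
          void'(cfg.randomize());
          $display("key[31:0]=%h iv[31:0]=%h", cfg.key[31:0], cfg.iv[31:0]);
        end
      end
    endmodule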
3. Randomization for Partial Header Encryption (PHE): Partial Header Encryption (PHE) is an additional mechanism added to Integrity and Data Encryption (IDE) in PCIe 6.0. Validating PHE with a wide variety of traffic, and incorporating randomization in the APIs provided for validating the PHE feature, can make the encryption of the data more robust. Partial Header Encryption in Integrity and Data Encryption for PCIe has more detailed information on this.

Figure 2: High-Level Flow for Partial Header Encryption

4. Randomization of IDE Address Association Register values: IDE Address Association Registers 1/2/3 are supposed to be configured according to the memory address ranges of the IDE partner ports. The fields of the IDE address registers are split into multiple values, such as Memory Base Lower, Memory Limit Lower, Memory Base Upper, and Memory Limit Upper. An IDE implementation can have multiple register blocks covering 32- or 64-bit addresses, different register sizes, selective streams 0-255, address blocks 0-15, and so on. Randomized verification here can help cover all the corner cases. Please refer to Figure 3.

Figure 3: IDE Address Association Register

5. Random Faults During Encryption: Injecting random faults (e.g., dropped packets or timing mismatches) ensures the system can handle disruptions and prevent data leakage.

Challenges of IDE Randomization and Their Solutions

Randomization introduces a vast number of scenarios, making it computationally intensive to simulate every possibility. Constrained randomization limits random inputs to valid ranges while still covering edge cases, and coverage-driven verification ensures critical scenarios are tested without excessive redundancy.

Verifying encrypted data with random inputs increases complexity: encryption masks the data, making it hard to verify outputs without compromising security. Here we can implement various IDE checks in the IDE callback to analyze encrypted traffic without decrypting it.

Randomization can also trigger unexpected failures that are often difficult to reproduce. With seed-based randomization, a specific seed generates a repeatable random sequence, which helps in reproducing and analyzing the behavior precisely.

Conclusion

Randomization is a powerful technique in PCIe verification, ensuring robust validation of both data integrity and data encryption. It helps uncover subtle bugs and corner cases that non-randomized testing might miss. Cadence PCIe VIP supports full-fledged IDE verification, with rigorous randomized verification that ensures data integrity, and robust, reliable encryption mechanisms that ensure secure and efficient data communication. Randomization also brings various challenges, and to overcome them we adopt a combination of constrained randomization, seed-based testing, and coverage-driven verification. As PCIe continues to evolve toward higher speeds and stronger security demands, Cadence PCIe VIP stays in line with industry demand and verifies high-performance systems that safeguard data in real-world environments. For more information, you can refer to Verification of Integrity and Data Encryption (IDE) for PCIe Devices and Industry's First Adopted VIP for PCIe 7.0.

More Information

For more info on how Cadence PCIe Verification IP and TripleCheck VIP enable users to confidently verify IDE, see our VIP for PCI Express, VIP for Compute Express Link, and TripleCheck for PCI Express. For more information on PCIe in general, and on the various PCI standards, see the PCI-SIG website.
Functional coverage report
By community.cadence.com Published On :: Wed, 13 Feb 2019 23:37:00 GMT

Is there a way to generate coverage reports directly, rather than in UCD or any other database format? I have written a basic covergroup and passed the arguments [-covoverwrite -cov_cgsample -cov_debuglog -coverage u] to the xrun command; however, I don't see anything in the sim directory, nor do I see anything in the logs indicating that the covergroups have been hit. How can I confirm that the covergroups are getting hit and, essentially, observe the bins? In Questa sim, you essentially get them as part of the log itself.
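One common flow (the details here are assumptions to check against your release documentation, not an authoritative recipe) is to enable coverage at compile/simulation time and then load the resulting cov_work database into the Integrated Metrics Center (IMC) to browse covergroups and bins:

    # record functional (and other) coverage, overwriting previous results
    xrun -f run.f -coverage all -covoverwrite
    # load the run into IMC to inspect covergroup bins and generate reports
    imc -load cov_work/scope/test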
ge How do I use TCL to get connections between modules in INNOVUS? By community.cadence.com Published On :: Sun, 20 Sep 2020 04:04:00 GMT Please give me some ideas. Thank you very much. Full Article
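A starting-point sketch using the Innovus dbGet interface; the net name pattern is a placeholder, and depending on your flow you may prefer the get_nets/get_pins equivalents:

    # For each net, list the instance pins (instTerms) and top-level ports (terms) it connects
    foreach netPtr [dbGet -p top.nets.name *] {
      set netName  [dbGet $netPtr.name]
      set instPins [dbGet $netPtr.instTerms.name]   ;# e.g. u_core/u_alu/A[3]
      set ioPins   [dbGet $netPtr.terms.name]       ;# top-level ports, if any
      puts "$netName : $instPins $ioPins"
    }

Narrowing the pattern (for example, top.nets.name bus_*) restricts the loop to the inter-module connections of interest.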
ge Knowledge Booster Training Bytes - Writing Physical Verification Language Rules By community.cadence.com Published On :: Wed, 03 Jul 2024 08:56:00 GMT Have you ever wanted to write a DRC rule deck to check for space or width constraints on polygons? Or have you wondered how the multiple lines of an LVS rule deck extract and conduct a comparison between the schematic and layout? Maybe you've been curious about the role of rule deck writers in creating high-quality designs ready for tape-out. If any of these questions interest you, there is good news: the latest version (v23.1) of the Physical Verification Rules Writer (PVLRW) course is designed to teach you rule deck writing. This free 16-hour online course includes audio and labs designed to make your learning experience comfortable and flexible. Whether you are new to the concept or an experienced CAD/PDK engineer, the course is structured to enhance your rule deck writing skills. The PVLRW course covers six core modules: Layer Processing, DRC Rules, Layout Extraction, ERC and LVS Rules, Schematic Netlisting, and Coloring Rules. There are also three optional appendix sections. Each module explains relevant rules with syntax, concepts, graphics, examples, and case studies. This course is based on tool versions PEGASUS231 and Virtuoso Studio IC231.

Pegasus Input and Output
Pegasus is a cloud-ready physical verification signoff solution that enables engineers to support faster delivery of advanced-node integrated circuits (ICs) to market. Pegasus requires input data in the form of layout geometry, schematic netlists, and rules that direct the tool operation. The rules fall into two categories: those that describe the fabrication process and those that control the job-specific operation. Pegasus provides log and report files, netlists, databases, and error databases as output.

Overview of Pegasus Rule File
The rule decks written in Physical Verification Language (PVL) work for the Cadence PV signoff tools Pegasus and PVS (Physical Verification System). The PVL rules are placed in a file that gets selected in a run from the GUI or the command line, as the user directs. PVL rules may be on separate lines within the file and can also be contained in named rule blocks. Each line of code starts with a PVL rule that uses prefix type notation. It consists of a keyword followed by options, input layer or variable names, and output layer or variable names. A rule block has the format of the keyword rule, followed by a rule name you wish to give it, followed by an opening curly brace. You enter the rules you wish to perform, followed by a closing curly brace on the last separate line. Sample Rule deck with individual lines of code and rule blocks.

DRC Rules
The first step in a typical Pegasus flow is a Design Rule Check (DRC), which verifies that layout geometries conform to the minimum width, spacing, and other fabrication process rules required by an IC foundry. Each foundry specifies its own process-dependent rules that must be met by the layout design. There are three types of DRC rules: layer definition rules, layer derivation rules, and DRC design check rules. Layer definition rules identify the layers contained in the input layout database, and layer derivation rules derive additional layers from the original input layers, allowing the tool to test the design against specific foundry requirements using the design check rules.
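To make that format concrete, here is a schematic sketch of a tiny deck. The command names (layer_def, check_width, check_space) are illustrative placeholders only, not verified PVL keywords; consult the Pegasus Developers Guide for the real syntax:

    // hypothetical keywords, shown for structure only
    layer_def metal1 16                    // line rule: keyword, options, layer names

    rule m1_width {                        // named rule block
        check_width metal1 < 0.10 "M1.W.1: metal1 width must be >= 0.10um"
    }

    rule m1_space {
        check_space metal1 < 0.12 "M1.S.1: metal1 spacing must be >= 0.12um"
    }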
A sample DRC Rule deck. A layout view displaying the DRC violations.

LVS Rules
The Pegasus Layout Versus Schematic (LVS) tool compares the layout netlist with the schematic netlist to check for discrepancies. There are two essential LVS rule sets: LVS extraction rules and comparison rules. LVS extraction rules help extract drawn devices and connectivity information from the input layout geometry data and output them into a layout netlist. The LVS extraction rule set also includes the layer definition, derivation, extraction, connectivity, and netlisting rules. LVS comparison rules are associated with comparing the extracted layout netlist to a schematic netlist. A sample LVS Rule deck.

TCL, Macros, and Conditional Commands
Tcl is supported and used in various Pegasus functionalities, such as Pegasus rule files and the Pegasus configurator. Macros are functional templates that are defined once and can be used multiple times in a rule file. Conditional commands are used to process or skip specific commands in the rule file.

Do You Have Access to the Cadence Support Portal?
If not, follow the steps below to create your account. On the Cadence Support portal, select Register Now and provide the requested information on the Registration page. You will need an email address and host ID to sign up. If you need help with registration, contact support@cadence.com. To stay up to date with the latest news and information about Cadence training and webinars, subscribe to the Cadence Training emails. If you have questions about courses, schedules, online, public, or live onsite training, reach out to us at Cadence Training. For any questions, general feedback, or future blog topic suggestions, please leave a comment.

Related Resources
Product Manuals: Cadence Pegasus Developers Guide
Rapid Adoption Kits: Running Pegasus DRC/LVS/FILL in Batch Mode
Training Byte Videos: What Is the Run Command File? | How to Run PVS-Pegasus Jobs in GUI and Batch Modes? | PVS DRC Run Form - Setup Rules | What Is PVS/Pegasus Layer Viewer? | PVL Coloring Ruledecks with Docolor and Stitchcolor | PVL Commands: dfm_property with Primary & Secondary Layer | PVS Quantus QRC Overview
Online Courses: Pegasus Verification System | PVS (Physical Verification System) | Virtuoso Layout Design Basics

About Knowledge Booster Training Bytes
Knowledge Booster Training Bytes is an online journal that relays information about Cadence Training videos, online courses, and upcoming webinars in the Learning section of the Cadence Learning and Support portal. This blog category brings you direct links to these videos, courses, and other related material on a regular basis. Subscribe to receive email notifications about our latest Custom IC Design blog posts. Full Article
ge PCB Chamfering Board edge connectors By community.cadence.com Published On :: Thu, 09 Dec 2021 15:12:48 GMT Hi, I am looking into chamfering the edge of a PCB for board edge connectors. I have used the fillet command before but am new to chamfering. Below is the description: as shown in the referenced image, the PCB edge is chamfered through the board thickness as well as at the corners. Using OrCAD PCB hotfix S023. Full Article
ge 10 Layer PCB project won't generate Gerbers completely for middle layers By community.cadence.com Published On :: Thu, 09 Dec 2021 16:29:21 GMT Hello fellow PCB designers, We have a 10-layer PCB design that originated in PADS and was converted over to Allegro 17.4. This is an old design, but it is manufacturable and works perfectly fine. When I generate Gerbers for the top or bottom layers, they come out fine. But most of the middle layers are etch and vias for power and ground, and their Gerbers come out mostly blank; there might be some details, yet in the Gerber view everything is displayed correctly. The design does have many close spacings. I have not changed anything in Constraint Manager yet and have turned off a lot of the DRCs, but I am thinking there might be something wrong with the constraints. I find that the CSet is set to 2_18, and I am not sure yet what this means; also, there are many of these definitions (PCS 3, 4, 5, etc.) that are the same as CSet 2_18. Any suggestions would be great. We are currently looking into this, and have seen that even a small change in Constraint Manager can cause long processing and even Allegro crashing; this is a large project. Thanks much, Mike Pollock. Full Article
ge SPB17.4 installation package build defect By community.cadence.com Published On :: Thu, 09 Dec 2021 23:05:50 GMT 1. Some components in the installation package cannot be deselected for installation; even if they are not selected, they are still installed. Only the shortcut icons are omitted, and the documents are still extracted to the installation directory. 2. "Catia Application Frame" is duplicated in "x:\Cadence\SPB_17.4\tools\bin" and "x:\Cadence\SPB_17.4\tools\spatial". Shouldn't "Catia Application Frame" use the latest version? 3. Follow-up update patches should clean up the useless files and extra empty folders!!! The SPB 17.4 installation package is currently the worst installation package I have seen for large-scale software packaging. Full Article
ge Change code in veriloga view from external program By community.cadence.com Published On :: Wed, 30 Oct 2024 15:31:02 GMT For reasons too complicated to go into here, I need to generate the code for a veriloga view from outside the normal Verilog-A editor. I would start with an "empty" veriloga view generated from the symbol in the normal way so I get the port order correct, then use external code to provide the "guts" of the veriloga view by overwriting the generated code. My understanding is that code changes made external to the normal flow do not get picked up by Cadence; the Verilog-A code gets read at design time, not at netlist time. Would simply forcing a check and save of the veriloga view after the code is modified fix that problem? Or is there an easier way to incorporate externally generated Verilog-A code? Full Article
ge How to get maximum value of s11 Trace By community.cadence.com Published On :: Mon, 04 Nov 2024 14:04:46 GMT Hello, I ran an SP analysis and now I want to extract the maximum value of the S11 trace and the corresponding frequency. I already tried ymax() in the calculator, but I suspect it only works on transient signals. Full Article
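For what it's worth, ymax() is not limited to transient data. A sketch of calculator expressions that might apply here, assuming port 1 is the reflection of interest and SKILL-style space-separated arguments (verify the exact sp() argument syntax in your version):

    ymax(dB20(sp(1 1)))      ; maximum of |S11| in dB over the swept frequencies
    xmax(dB20(sp(1 1)) 1)    ; frequency at which that single maximum occurs

If ymax() does reject the raw S-parameter waveform, converting it first with dB20() or mag() yields an ordinary real-valued waveform that the min/max functions accept.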
ge error when generating snp files from a variable By community.cadence.com Published On :: Tue, 05 Nov 2024 11:22:54 GMT Hello everyone, I have a testbench for generating s2p files from an SP simulation that was working until a few months ago. Today I reopened it (without making any changes that I am aware of) and I get the error shown below. First, the testbench settings; notice how the s2p generation is disabled: the "file" field is left blank. In the corner I defined some parameters; "filename" is the word that is supposed to generate the name for the s2p, where the two variables are defined as follows. And now the output log: the spectre.out file gives the following error. When clicking on the error message at "9", the input.scs file opens up and line 9 gets highlighted in green. So far I understand that the problem seems to be related to the "pathcds" variable, but I really don't understand what the error message here means, since I don't see any error in the input.scs file. By the way, if for instance I define the variable "filename" as shown below, then I get no errors. Thanks, Tommaso Full Article
ge ddt VerilogA usage By community.cadence.com Published On :: Thu, 07 Nov 2024 13:14:08 GMT Hi, reading the Verilog-A Language Reference I found this description of the ddt function that I don't understand: "Use the time derivative operator to calculate the time derivative of an argument. ddt( input [ , abstol | nature ] ) input is a dynamic expression. abstol is a constant specifying the absolute tolerance that applies to the output of the ddt operator. Set abstol at the largest signal level that you consider negligible. nature is a nature from which the absolute tolerance is to be derived." Can anyone explain how abstol and nature are defined, and how to use them? An example would be really appreciated. Thanks, Andrea Full Article
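A minimal sketch of the three usage patterns; the component values are arbitrary, and the Current nature comes from the standard disciplines.vams:

    `include "constants.vams"
    `include "disciplines.vams"

    module cap_ddt(p, n);
      inout p, n;
      electrical p, n;
      parameter real c = 1p;  // 1 pF

      analog begin
        // Plain usage: i = d(c*v)/dt, tolerance taken from defaults
        I(p,n) <+ ddt(c*V(p,n));

        // With an explicit abstol: ddt output (a current here) below 1 pA
        // is treated as negligible by the simulator's convergence checks:
        //   I(p,n) <+ ddt(c*V(p,n), 1p);

        // With a nature: derive the tolerance from Current's abstol attribute:
        //   I(p,n) <+ ddt(c*V(p,n), Current);
      end
    endmodule

In other words, abstol is just a number in the units of the ddt result, while a nature (such as Current or Voltage from disciplines.vams) supplies that number from its abstol attribute.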
ge Importing ODF to vManager does not update vplan By community.cadence.com Published On :: Tue, 05 Mar 2024 06:20:00 GMT I exported a vplan to an .odf file in vManager, and after editing it I imported it back into vManager. I expected the vplan to be synchronized and updated; however, nothing has changed in it. Does anyone know why? Full Article
ge Using Vmanager Pre-Script to launch a timed script By community.cadence.com Published On :: Thu, 07 Mar 2024 23:32:05 GMT I would like to send an update about a vManager regression status x days after the regression has been run. In the current environment, vManager creates a new filepath for logs automatically based on regression name/date, so I can't use a cron job to gather logs, as the log location is not known. I tried to use the pre-session script to launch a detached shell script that would run after a delay, but when the pre_script runs, it waits until everything is completed before finishing and moving on to starting the regression. Here is the test pre_script I am using:

#!/bin/sh
echo "pre_script start"
delay_script "FIRST" 1
nohup delay_script "SECOND" 30 & disown
delay_script "THIRD" 1
echo "pre_script end"
exit 0

Here is the test delay_script I am using:

#!/bin/sh
echo "Starting $1"
sleep $2
echo "Ending $1"

Here is the script output when run from a terminal: after "pre_script end", I get control back. Here is the script output when run from vManager: there is no "nohup", and the pre_session phase doesn't complete until all the delay scripts complete. My question is, is there a better way to achieve my goal here? (The goal being to run a script from the vManager log directory automatically x days after the regression.) I think I could use the pre_script to send directory information for an auxiliary cron job to pick up, but I would prefer not to need extra cron jobs for this. Full Article
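One possible alternative, sketched under the assumption that the at daemon is available on your compute farm: have the pre-script schedule the follow-up with at, which detaches the job from the vManager session entirely (report_status.sh is a hypothetical reporting script):

    #!/bin/sh
    # pre_script: record where the logs live, schedule a report, return immediately
    LOG_DIR=`pwd`   # assumes vManager runs the pre-script inside the session directory
    echo "$LOG_DIR/report_status.sh $LOG_DIR" | at now + 3 days
    exit 0

Because at queues the command with the system's atd rather than as a child of the pre-script, the pre_session phase completes immediately, which sidesteps the wait-for-children behavior observed above.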
ge vManager crashes when analyzing multiple sessions simultaneously with a fatal error detected by the Java Runtime Environment By community.cadence.com Published On :: Sat, 16 Mar 2024 04:34:41 GMT When analyzing multiple sessions simultaneously, Verisium Manager crashed and reported the error messages below:

# A fatal error has been detected by the Java Runtime Environment:
#
# SIGSEGV (0xb) at pc=0x00007efc52861b74, pid=14182, tid=18380
#
# JRE version: OpenJDK Runtime Environment Temurin-17.0.3+7 (17.0.3+7) (build 17.0.3+7)
# Java VM: OpenJDK 64-Bit Server VM Temurin-17.0.3+7 (17.0.3+7, mixed mode, sharing, tiered, compressed oops, compressed class ptrs, g1 gc, linux-amd64)
# Problematic frame:
# C [libucis.so+0x238b74]
......

For more details, please refer to the attached log file "hs_err_pid21143.log". Two approaches were tried to solve this problem, but neither has worked. Method 1: setting a larger heap size for the Java process with the "-memlimit" option, for example "vmanager -memlimit 8G". Method 2: enlarging the stack memory size limit of the coverage engine by setting the "IMC_NATIVE_STACKSIZE" environment variable to a larger value, for example "setenv IMC_NATIVE_STACKSIZE 1024000". According to "hs_err_pid*.log", it is almost certain that a memory overflow triggered Java's CrashOnOutOfMemoryError and caused Verisium Manager to crash. There are Java memory-management arguments like "Xms, Xmx, ThreadStackSize, Xss5048k, etc.", and maybe this problem can be fixed by setting these arguments during analysis. However, how exactly does Verisium Manager specify these arguments during analysis? I tried setting them as environment variables before analysis, but it didn't take effect; their values didn't change. Is there something wrong with my approach, or is there a better solution? Thank you very much. Full Article
ge explain/correct my understanding of the difference between average and covered in IMC metrics By community.cadence.com Published On :: Wed, 17 Apr 2024 05:36:41 GMT I'm working on code coverage. Doing a metrics analysis, by default we see the overall average grade and overall covered. But when I do a block analysis on an instance, I see overall covered grade, code covered grade, block covered grade, statement covered grade, expression covered grade, and toggle covered grade. As I didn't know the difference, I started to read the IMC user guide and learned there are three notions we come across while doing code coverage: local, covered, and average. From my understanding, local means child instance metrics don't reach the parent level. For example, if we have an instance Q and its sub-instances Q.a and Q.b, the block local grade of Q can be 100% even when the block local grades of Q.a and Q.b aren't at 100%. In the attached image there is a formula; the key difference between average and covered is the weights. Average: mathematically, take the above scenario where Q.a and Q.b have 10 blocks each, Q.a has covered 8 blocks, and Q.b has covered 2 blocks. If we take the plain average, it should be total covered / total number = (8+2)/(10+10), yielding 50%. But when we add weights, saying Q.a is 70% and Q.b is 30%, the new number is (8*0.7 + 2*0.3) / (10*0.7 + 10*0.3), resulting in 62%. Because of the weights we see a 12% bump. Covered: there is no role for weights. Given these metrics, I've changed my default view to the one in the image to get a more realistic picture when I analyze metrics. Do you agree with this approach? Full Article
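Restating that arithmetic as a formula (this is my reading of the weighted-average definition described above, with $w_i$ the per-instance weights, $c_i$ the covered blocks, and $t_i$ the total blocks of instance $i$):

$$\text{average grade} = \frac{\sum_i w_i c_i}{\sum_i w_i t_i} = \frac{0.7 \cdot 8 + 0.3 \cdot 2}{0.7 \cdot 10 + 0.3 \cdot 10} = \frac{6.2}{10} = 62\%$$

whereas the unweighted covered grade is simply $\sum_i c_i / \sum_i t_i = 10/20 = 50\%$.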
ge "How to disable toggle coverage of unused logic" By community.cadence.com Published On :: Tue, 28 May 2024 11:46:30 GMT I'm currently work in coverage analysis. In my design certain register bits remain unused, which could potentially lower toggle coverage. Specifically, I'd like to know how to disable coverage for specific unused register bits within a 32-bit register. For instance, I want to deactivate coverage for bit 17 and bit 20 in a 32-bit register to optimize toggle coverage. Could you please provide guidance on how to accomplish this? Full Article
ge Is it possible to automatically exclude registers or wires that are not used from toggle coverage? By community.cadence.com Published On :: Wed, 03 Jul 2024 12:04:29 GMT Hello, I have a question about toggle coverage. In my case, there are many unused registers or wires that are affecting the toggle coverage score negatively. Is it possible to automatically exclude registers or wires that are not used from toggle coverage? My RTL code is as follows. Is it possible to automatically disable tb.top1.b and tb.top1.c without using an exclude file?

module top1;
  reg a;
  reg b;
  reg [31:0] c;
  initial begin
    #1 a = 1'b0;
    #1 a = 1'b1;
    #1 a = 1'b0;
  end
endmodule

module tb;
  top1 top1();
endmodule

Full Article
ge Xcelium: dump coverage information in the middle of a simulation By community.cadence.com Published On :: Fri, 23 Aug 2024 10:25:15 GMT Hi, I'm using the Xcelium simulator to simulate a testbench in which I first stimulate my design to do something (part "A") and then do a direct follow-up test on the design (part "B"). I need two things from this testbench: the results of the test (part "B", passed/failed) and coverage information, but the coverage information should only include part A, and explicitly not part B. I could do the following: run the testbench with parts A and B, get the "passed/failed" result of the test, and then follow up with another simulator run of a testbench that only includes part A and take the coverage information from that run. Is there a way to force Xcelium to give me the coverage information for only a part of the simulation? Ideally, I would like to write the Verilog code of my testbench to look something like this:

do A
dump coverage information
do B

But maybe there is another way to tell Xcelium to consider only part of the testbench for the coverage information. I did have a look at the manual but was not able to find anything useful for this problem. Any ideas? Full Article
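For functional coverage at least, plain SystemVerilog offers the built-in start()/stop() methods on covergroup instances, which gate sampling rather than dumping mid-run. A self-contained sketch (the names are invented, and note this limits only the covergroup, not code coverage):

    module tb;
      bit clk;
      bit [3:0] state;

      covergroup cg @(posedge clk);
        coverpoint state;
      endgroup
      cg cg_inst = new();

      always #5 clk = ~clk;

      initial begin
        // part A: covergroup samples normally
        repeat (20) @(posedge clk) state = $urandom_range(0, 15);
        cg_inst.stop();   // IEEE 1800 built-in method: freeze sampling before part B
        // part B: pass/fail checking only; no further coverage is collected
        repeat (20) @(posedge clk) state = 4'h0;
        $finish;
      end
    endmodule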
ge Collecting Coverage using Vmanager By community.cadence.com Published On :: Mon, 02 Sep 2024 16:13:36 GMT Hi, I am running a regression in order to collect coverage. However, I have an issue. I set a signal to 0 while reset is asserted; the signal then takes a fixed value once reset is de-asserted:

if (!rst_n)
  init_val = 'b0;
else
  init_val = 31'h34013FF7;

The issue is that I get 0% coverage for init_val, since we only have a rising edge and the signal does not toggle back during the simulation. Is there an option to collect coverage when there is a rising edge or a falling edge? Full Article
ge Using vManager to identify line coverage from a specific test By community.cadence.com Published On :: Tue, 24 Sep 2024 21:20:52 GMT I have been using the rank feature to identify tests that are redundant in our environment, but then I realized I'd also like to be able to see exactly what coverage goes into increasing the delta_cov value for a given test. If I had a test in my rank report that contributed 0.5% of the delta_cov, how could I go about seeing exactly where that 0.5% was coming from? It seems like that might be part of the correlate function, but I couldn't manage to find a way to see what specific coverage was being contributed by a given test. Full Article
ge ce_tools directory no longer shipped with Specman By community.cadence.com Published On :: Tue, 22 Apr 2008 08:59:07 GMT Hello all, starting with version 8.1, the contents of the ce_tools directory will no longer be shipped with Specman. The directory contains some unsupported AE/R&D-ware and has not been updated for several releases (i.e., most of those old packages don't work with the latest release). Attached are the contents of this directory. Please read the README before using any of the packages. Regards, -hannes Originally posted in cdnusers.org by hannes Full Article
ge Specman Makefile generator utility By community.cadence.com Published On :: Tue, 02 Dec 2008 08:31:45 GMT I've heard lots of people asking for a way to generate Makefiles for Specman code, and it seems there are some who don't use "irun", which takes care of this automatically. So I wrote this little utility to build a basic Makefile based on the compiled and loaded e code. It's really easy to use: at any time, load snmakedeps.e into Specman and use "write makefile <name> [-ignore_test]". This will dump a Makefile with a set of variables corresponding to the loaded packages, and targets to build any compiled modules. Using -ignore_test will avoid having the test file in the Makefile, in case you switch tests often (who doesn't?). It also writes a stub target so you can do "make stub_ncvlog", "make stub_vhdl", etc. The targets are pretty basic; I thought it was more useful to include this file from the main Makefile and define your own more complex targets/dependencies as required. The package uses the "reflection" facility of the e language, which is now documented since Specman 8.1, so you can extend this utility if you want (please share any enhancements you make). Enjoy! :-) Steve. Full Article
ge IntelliGen Statistics Metrics Collection Utility By community.cadence.com Published On :: Thu, 04 Jun 2009 16:24:28 GMT As noted in white papers, posts on the Team Specman Blog, and the Specman documentation, IntelliGen is a totally new stimulus generator compared with the original "Pgen", and, as a result, some amount of effort is needed to migrate an existing verification environment to fully leverage its power. One of the main steps in migrating code is running the linters on your code and addressing the issues highlighted. Included below is a simple utility you can include in your environment to collect some valuable statistics about your code base, allowing you to better gauge the amount of work that might be required to migrate from Pgen to IntelliGen. The ICFS statistics reported are of particular benefit, as the utility not only identifies the approximate number of ICFSs in the environment, it also breaks the total number down by generation context (structs/units and gen-on-the-fly statements), allowing you to better focus your migration efforts.

IMPORTANT: Sometimes a given environment can trigger a large number of IntelliGen linting messages right off the bat. Don't let this freak you out! This does not mean that migration will be a long effort, as quite often some slight changes to an environment remove a large number of identified issues. I recently encountered a situation where a simple change to three locations in the environment removed 500+ ICFSs!

The methods included in the utility can be used to report information on the following:
- Number of e modules
- Number of lines in the environment (including blanks and comments)
- Number and type of IntelliGen Guidelines linting messages
- Number of Inconsistently Connected Field Sets (ICFSs)
- Number of ICFS contexts and how many ICFSs per context
- Number of soft..select overlays found in the environment
- Number of Laces identified in the environment

To use the code below, simply load it before/after loading e-code, and then you can execute any of the following methods:
- sys.print_file_stats() : prints # of lines and files
- sys.print_constraint_stats() : prints # of constraints in the environment
- sys.print_guideline_stats() : prints # of each type of linting message
- sys.print_icfs_stats() : prints # of ICFSs, contexts, and # ICFSs/context
- sys.print_soft_select_stats() : prints # of soft select overlay issues
- sys.print_lace_stats() : *Only works for SPMNv6.2s4 and later* prints # of laces identified in the environment

Each of the above method calls produces its own log file (stored in the current working directory) containing relevant information for more detailed analysis:
- file_stats_log.elog : Output of the "show modules" command
- constraint_log.elog : Output of the "show constraint" command
- guidelines_log.elog : Output of "gen lint -g" (with notification set to MAX_INT in order to get all warnings)
- icfs_log.elog : Output of the "gen lint -i" command
- soft_select_log.elog : Output of the "gen lint -s" command
- lace_log.elog : Output of the "show lace" command

Happy generating! Corey Goss Full Article
ge latest Specman-Matlab package By community.cadence.com Published On :: Tue, 15 Sep 2009 05:56:14 GMT Attached is the latest revision of the venerable Specman-Matlab package. (Lead Application Engineer Jangook Lee is the latest to have refreshed it, for a customer in Asia, to support 64-bit mode. Look for a guest blog post from him on this package shortly.) There is a README file inside the package that gives a detailed overview, shows how to run a demo and/or validate that it's installed correctly, and explains the general test flow. The test file included in the package, "test_get_cmp_mdim.e", shows all the capabilities of the package, including:
* Using Specman to initialize and tear down the Matlab engine in batch mode
* Issuing Matlab commands from e-code, using the Specman command prompt to load .m files, initialize variables, and perform other operational tasks
* Transferring data to and from the Matlab engine to Specman / an e-language testbench
* Comparing data of previously retrieved Matlab arrays
* Accessing Matlab arrays from e-code without converting them to an e list data structure
* Converting Matlab arrays into e lists
Happy coding! Team Specman Full Article
ge Overcoming Thermal Challenges in Modern Electronic Design By community.cadence.com Published On :: Tue, 09 Aug 2022 14:24:00 GMT Melika Roshandell talks with David Malinak in a Microwaves & RF QuickChat video about the thermal challenges in today's complex electronic designs and how the Celsius solver uniquely addresses them. Full Article
ge Harmonic Balance (HB) Large-Signal S-Parameter (LSSP) simulation By community.cadence.com Published On :: Fri, 08 Mar 2024 12:07:53 GMT Dear all, I'm trying to run a Harmonic Balance (HB) Large-Signal S-Parameter (LSSP) simulation to figure out the input impedance of a nonlinear circuit. Through this simulation, what I want to know is the large-signal S11 only (not S12, S21, or S22). So, I simulated with only a single port (PORT0) at the input, but the LSSP simulation terminates and the output log shows the following text: "Analysis `hb' was terminated prematurely due to an error". The LSSP simulation does not proceed without a second port. Should I use a floating second port (which is not necessary for my circuit) to make the LSSP simulation succeed? Does the LSSP simulation really need two ports? The figure below shows my HB LSSP simulation setup. Additionally, a Periodic S-Parameter (PSP) simulation using HB succeeds with only a single port. What is the difference between PSP and LSSP simulations? Full Article