or

How to execute an APD+ embedded function in my form?

Hello, SKILL experts. 

I'm studying the SKILL language to build some useful functions in APD+.

Now, I want to execute the 'Import Sub-drawing' function from a new form.

But I cannot find out how to execute an APD+ embedded function from a field of a new form.

Has anyone experienced this, or does anyone have an idea how to solve this problem?




or

Database Maintenance: DBDoctor

The DBDoctor application checks the database for errors and other problems, and presents a report about them. DBDoctor supports .brd, .mcm, .mdd, .psm, .dra, .pad, .sav, and .scf databases.

DBDoctor can:

  • Analyze and fix database problems.
  • Eliminate duplicate vias.
  • Perform batch design rule checking (DRC).
  • Upgrade databases more than one revision old.

To verify the integrity of a drawing database at any time during the design cycle, run DBDoctor at regular intervals but make sure you always run it after completing a design.

You can run DBDoctor to verify work in progress, or from a terminal window outside the layout editor, perhaps to check multiple input designs in batch mode by using wildcards and various switches. You do not have to run the layout editor to use DBDoctor.

To run this from Allegro X APD and Allegro PCB Editor, go to Tools > Database Check.

  

You can also go to the Start menu and select Cadence PCB Utilities 2023 > PCB DB Doctor 2023.

  

You can also use the following command to run DBDoctor in batch mode from the system command prompt:

dbdoctor [-check_only] [-drc] [-drc_only] [-shapes] [-no_backup] [-outfile <newboardname.brd>]
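If you want to check many drawings in one pass, a small wrapper script can loop over the databases and collect the results. The sketch below is only an illustration: it assumes dbdoctor is on your PATH, that the input drawing is passed as the final argument, and that a non-zero exit status indicates problems; check dbdoctor -help for the exact switches in your release.

```python
# Hypothetical batch wrapper around dbdoctor; verify the switches and
# exit-code behavior against your installed release before relying on it.
import glob
import subprocess

def check_boards(pattern="*.brd"):
    """Run dbdoctor -check_only on every board matching the pattern."""
    for board in sorted(glob.glob(pattern)):
        # -check_only reports problems without modifying the database
        result = subprocess.run(
            ["dbdoctor", "-check_only", board],
            capture_output=True, text=True
        )
        status = "OK" if result.returncode == 0 else f"ISSUES (exit {result.returncode})"
        print(f"{board}: {status}")

if __name__ == "__main__":
    check_boards()
```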

 

Comment below if you want to know more about this command and its integration with SKILL programming!!




or

How to transfer etch/conductor delays from Allegro Package Designer (APD) to pin delays in Allegro PCB Editor

The packaging group has finished their design in Allegro Package Designer (APD) and I want to use the etch/conductor delay information from the mcm file in the board design in Allegro PCB Designer. Is there a method to do this?

This can be done by exporting the etch/conductor data from APD and importing it as PIN_DELAY information into Allegro PCB Editor.

If you are generating a length report for use in Allegro Pin Delay, you should consider changing the APD units to mils and unchecking the Time Delay Report option.

In Allegro Package Designer:

  1. Select File > Export > Board Level Component.
  2. Select HDL for the Output format and select OK.
  3. Choose a padstack for use when generating the component and select OK.

This will create a file, package_pin_delay.rpt, in the component subdirectory of the current working directory. This file will contain the etch/conductor delay information that can be imported into Allegro.

In Allegro PCB Editor:

  1. Make sure that the device you want to import delays to is placed in your board design and is visible.
  2. Select File > Import > Pin delay.
  3. Browse to the component directory and select package_pin_delay.rpt. The browser defaults to looking for *.csv files, so you will need to change Files of type to *.* to select the file.
  4. You may be prompted with an error message stating that the component cannot be found and you should select one. If so, select the appropriate component.
  5. Select Import.
  6. Once the import is completed, select Close.

Note: It is important that all non-trace shapes have a VOLTAGE property so they will not be processed by the 2D field solver. You should run Reports > Net Delay Report in APD prior to generating the board-level component. This report displays the net name of each net as it is processed. If you missed a VOLTAGE property on a net, that net's name will show in the report processing window, and you will know which net needs the property.




or

Maximizing Display Performance with Display Stream Compression (DSC)

Display Stream Compression (DSC) is a visually lossless image compression standard developed by the Video Electronics Standards Association (VESA) to reduce the bandwidth required to transmit high-resolution video and images. DSC compresses video streams in real time, allowing for higher resolutions, refresh rates, and color depths while minimizing the data load on transmission interfaces such as DisplayPort, HDMI, and embedded display interfaces.

Why Is DSC Needed?

In the ever-evolving landscape of display technology, the pursuit of higher resolutions and better visual quality is relentless. As display capabilities advance, so do the challenges of managing the immense amounts of data required to drive these high-performance screens. This is where DSC steps in. DSC is designed to address the challenges of transmitting ultra-high-definition content without sacrificing quality or performance. As displays grow in resolution and capability, the amount of data they need to transmit increases exponentially. DSC addresses these issues by compressing video streams in real-time, significantly reducing the bandwidth needed while preserving image quality.
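As a rough, illustrative calculation (the figures below are example numbers, not values from the VESA specification), consider a 3840x2160 panel refreshed at 120 Hz with 10-bit RGB color. Ignoring blanking and protocol overhead, the uncompressed stream needs roughly 30 Gbps, while DSC at an example target of 12 bits per pixel brings it under 12 Gbps:

```python
# Back-of-the-envelope DSC bandwidth estimate (active pixels only,
# blanking and protocol overhead ignored).
width, height, refresh_hz = 3840, 2160, 120
bpc = 10                     # bits per component
uncompressed_bpp = 3 * bpc   # RGB 4:4:4 -> 30 bits per pixel
dsc_target_bpp = 12          # example target; fractional values are also allowed

pixel_rate = width * height * refresh_hz            # pixels per second
uncompressed_gbps = pixel_rate * uncompressed_bpp / 1e9
compressed_gbps = pixel_rate * dsc_target_bpp / 1e9

print(f"Uncompressed: {uncompressed_gbps:.1f} Gbps")              # ~29.9 Gbps
print(f"DSC @ {dsc_target_bpp} bpp: {compressed_gbps:.1f} Gbps")  # ~11.9 Gbps
```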
 

DSC Use in End-to-end System

DSC Key Features

  • Encoding tools:
    • Modified Median-Adaptive Prediction (MMAP)
    • Block Prediction (BP)
    • Midpoint Prediction (MPP)
    • Indexed color history (ICH)
    • Entropy coding using delta size unit-variable length coding (DSU-VLC)
  • The DSC bitstream and decoding process are designed to facilitate the decoding of 3 pixels/clock in practical hardware decoder implementations. Hardware encoder implementations are possible at 1 pixel/clock.
  • DSC uses an intra-frame, line-based coding algorithm, which results in very low latency for encoding and decoding.

DSC encoding algorithm
 

  • Compression can be done to a fractional bpp. The compressed bits per pixel ranges from 6 to 63.9375.
  • For validation/compliance certification of DSC compression and decompression engines, cyclic redundancy checks (CRCs) are used to verify the correctness of the bitstream and the reconstructed image.
  • DSC supports a range of color bit depths, including 8, 10, 12, 14, and 16 bpc.
  • DSC supports RGB and YCbCr input formats with 4:4:4, 4:2:2, and 4:2:0 sampling.
  • The maximum decompressor-supported bits/pixel values are listed in the Maximum Allowed Bit Rate column in the table below.

  • A DP DSC Source device shall program the bit rate within the range given by the Minimum Allowed Bit Rate column in the same table:

          


Summary

Display Stream Compression (DSC) is a technology used in DisplayPort to enable higher resolutions and refresh rates while maintaining high image quality. It works by compressing the video data transmitted from the source to the display, effectively reducing the bandwidth required. DSC uses a visually lossless algorithm, meaning that the compression is designed to be imperceptible to the human eye, preserving the fidelity of the image. This technology allows for smoother, more detailed visuals at higher resolutions, such as 4K or 8K, without requiring a significant increase in data bandwidth.

More Information

  • Cadence has a mature Verification IP solution that supports many different configurations and can be used with DisplayPort 2.1 and DisplayPort 1.4 designs, so you can choose the version that best fits your specific needs.
  • The DisplayPort VIP provides a full-stack solution for Sink and Source devices with a comprehensive coverage model, protocol checkers, and an extensive test suite.
  • More details are available on the DisplayPort Verification IP product page, Simulation VIP pages.
  • If you have any queries, feel free to contact us at talk_to_vip_expert@cadence.com




or

Training Insights – Palladium Emulation Course for Beginner and Advanced Users

The Cadence Palladium Emulation Platform is a hardware system that implements a design, accelerating its execution and verification. It offers the highest performance and fastest bring-up times for pre-silicon validation of billion-gate designs, using a custom processor built by Cadence.

This Palladium Introduction course is based on the Palladium 23.03 ISR4 version and covers the following modules:

  • Introduction
  • Palladium flow
  • Running a design on the Palladium system

This course starts with an “Introduction” module that explains Palladium and other verification platforms to show its place in the big picture. It also compares Palladium with Protium and simulation and discusses its usage and limitations.

The “Palladium Flow” module first presents the flow's two stages, Compile and Run, at a high level and then covers them in detail: it walks through the ICE compile flow and IXCOM compile flow steps and then explains Run, which is common to both ICE and IXCOM modes.

The third module, “Running Design on the Palladium System,” covers all the items required for running your design on the Palladium system, including:

  • Software stack requirements
  • Basic concepts required to understand the flow
  • Compute machine requirements

In addition, this course contains labs for both the ICE and IXCOM flows with detailed steps to exercise the features provided by the Palladium system. The labs work through a practical example of multiple counters, exercising their signals with the force, monitor, and deposit features, along with frequency calculation using a real-time clock. The course is available on the Cadence support page.

There is also a Digital Badge available. You will find the badge exam opportunity when you enroll in the online training or after you have taken the training as "live" training.

For questions and inquiries, or issues with registration, reach out to us at Cadence Training. Want to stay up to date on webinars and courses? Subscribe to Cadence Training emails. To view our complete training offerings, visit the Cadence Training website.

Related Training Bytes

Related Courses

Related Blogs




or

Jasper Formal Fundamentals 2403 Course for Starting Formal Verification

The course "Jasper Formal Fundamentals v24.03" introduces formal analysis to those who want to use it for design or verification.

To benefit optimally from this course, you should already know SystemVerilog assertions well enough to write properties for formal verification. To help cover this essential background, the training provides a module on formal analysis.

In this course, you will learn how to code efficient SVA Properties for formal analysis, understand formal complexity and how to overcome it, and learn the basics of formal coverage.

After completing this course, you will be able to:

  • Define reusable, functionally correct SVA properties that are efficient for formal tools. These shall use abstract auxiliary code to simplify descriptions, make code maintenance easier, reduce debug time, and reduce the tool's proof runtime.
  • Set up, run, and analyze results from formal analysis.
  • Identify designs upon which formal is likely to be successful while understanding formal complexity issues and how to identify and overcome them.
  • Use a systematic property development process to approach a completely new verification problem.
  • Understand the basics of formal coverage.

The most recently updated release includes new modules on:

  • "Basic complexity handling," which discusses complexity in formal and how to identify and handle it.
  • "Complexity reduction methods," which discusses the available complexity reduction methods and which method is suitable for which type of complexity problem.
  • "Coverage in formal," which discusses the basics of coverage in formal verification and how coverage can be used in formal.

Take this course to learn the basics of formal verification. 

What's Next? 

You can check out the complete training: Jasper Formal Fundamentals. There is a free online version of the training available 24/7 for all customers with a Cadence Learning and Support Portal account. If you are interested in an instructor-led version of the training, please contact Cadence Training. And don't forget to obtain your digital badge after completing the training!

You can also check Jasper University page for more materials on formal analysis and Jasper apps. 

Related Trainings 

Jasper Formal Expert Training Course | Cadence

Verilog Language and Application Training Course | Cadence

SystemVerilog for Design and Verification Training Course | Cadence

SystemVerilog Assertions Training Course | Cadence

Related Training Bytes 

Jasper Formal Property Verification (FPV) App: Basic Usage Demo (Video)

Jasper Formal Methodology playlist

Related Training Blogs

It’s the Digital Era; Why Not Showcase Your Brand Through a Digital Badge!

Training Insights: Introducing the C++ Course for All Your C++ Learning Needs!

Training Insights: Reaching Your Verification Closure Using Verisium Manager

Training Insights - Free Online Courses on Cadence Learning and Support Portal




or

Partial Header Encryption in Integrity and Data Encryption for PCIe

Cadence PCIe/CXL VIP supports Partial Header Encryption in Integrity and Data Encryption. (Read more)




or

Cadence Verisium Debug Introduces Verisium Debug App Store

Verisium Debug, the Cadence unified debug platform, offers a variety of debugging capabilities, including RTL debug, UVM testbench debug, UPF debug, and DMS debug. From IP- to SoC-level debug, users can take advantage of its rich debugging features to reduce debug time.

Beyond these common and advanced debug features, Verisium Debug also provides a Python-based API that lets users write custom functions that access the design and waveform databases and add functions to Verisium Debug's GUI for visualization purposes. With the Verisium Debug Python API, users can turn repetitive work into automated programs, or reduce the effort of building in-house utilities by leveraging Verisium Debug's well-established infrastructure.

Here is an example of how a user can create a customized function with the Python API: a Python program that extracts the signals in a specific design scope and reports their values. Fig. 1 illustrates the traversal procedure.

  1. Import Python library in Verisium Debug package.
  2. Setup the database for traversal.
  3. Search the scope with the hierarchy information in the design DB.
  4. Query the signal list and the values of the signals.
  5. Print out the results.

Fig 1. Procedure of Verisium Debug Python Program
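Below is a minimal sketch of what such a program can look like, following the five steps above. The module name verisium_debug and the method names are illustrative placeholders only, not the actual Verisium Debug Python API; consult the tool's documentation and App Store examples for the real calls.

```python
# Illustrative pseudo-code only: 'verisium_debug' and the method names below
# are placeholders standing in for the real Verisium Debug Python API.
import verisium_debug as vd                      # 1. import the package's Python library

def report_scope_signals(db_path, scope_path):
    db = vd.open_database(db_path)               # 2. set up the design/waveform database
    scope = db.find_scope(scope_path)            # 3. locate the scope by hierarchy
    for sig in scope.signals():                  # 4. query the signal list ...
        value = sig.value_at(time=0)             #    ... and each signal's value
        print(f"{sig.full_name} = {value}")      # 5. print the results

if __name__ == "__main__":
    report_scope_signals("my_design.db", "top.u_core.u_alu")
```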

The result from the Verisium Debug Python App can be used for post-process design checking or fed into other utilities in the design flow.

The concept is very straightforward. With Verisium Debug and the Python API environment enabled, you can easily query any information that is stored in Verisium Debug's databases. The result can be output in text format, or you can use the API to display the results back in Verisium Debug's GUI.

The Verisium Debug Python API is an important capability and resource for Verisium Debug users. To make it easier to access, the Verisium Debug 24.10 release introduces the new Verisium Debug Python App Store.

Fig 2. Verisium Debug App Store

The Python App Store includes ready-to-use Python app examples with their original source code and documentation, which help users understand how to start writing an app that fits their use case.

Fig 3. Example apps in Verisium Debug App Store

The Verisium Debug Python App Store can also be used by a team as an app management system. App creators can share the apps they develop across teams within their companies. In-house apps become easy to manage, and engineers can access them from a central location, so users always see the latest available apps in the Verisium Debug App Store.

Check the following videos for more information about Verisium Debug Python API:

Customize Verisium Debug with Python API

Verisium Debug Customized Apps with Python API




or

Unveiling the Capabilities of Verisium Manager for Optimized Operations

In SoC development, the verification cycle is a crucial phase that ensures products meet their specifications and function correctly. However, the complexity of modern SoC projects, with their constant data flow, multiple validation teams working in parallel, and tight schedules, presents significant challenges. This article explores these challenges and introduces Verisium Manager as a solution that embodies the 'One Tool Fits All' concept. This means that Verisium Manager is designed to handle all aspects of the verification process for SoC development, from planning to coverage analysis to regression testing, thereby addressing the complex needs of SoC verification.

The Hurdles in Traditional Validation Cycles

 A typical validation process involves planning, coverage analysis, and regression testing. This complexity is compounded by using separate tools for each activity, leading to multiple control environments, APIs, and databases, not to mention the array of tool owners. Such fragmentation results in constant data transfer and translation between systems, from the planning tool to the coverage analysis tool and then to the regression testing tool. This continuous movement of data causes delays, system instability, poor user experiences, and, ultimately, a dip in the quality of the validation process.

The use of multiple platforms leads to inefficiency and reduced productivity. What's needed is a unified system that can streamline the workflow, simplify the verification process, and enhance its effectiveness.

Envisioning the Ideal Solution: Verisium Manager

 The cornerstone of an efficient validation cycle is integration and simplicity. The ideal solution is a singular platform that consolidates planning, coverage analysis, and regression management into one smooth, unified process. Verisium Manager emerges as this much-needed solution, encompassing all the functionalities necessary to streamline the validation process. Its comprehensive nature instills confidence in its ability to handle all aspects of the verification cycle. It can be fully customized to address and enforce any validation methodology and can facilitate smooth integration into any customer environment.

Features that stand out in Verisium Manager include: 

  • Unified Workflow: It acts as a single cockpit from which all activities are orchestrated, ensuring the validation teams' work is uninterrupted and seamlessly integrated.
  • Customization and Integration: Verisium Manager supports customizing test-plan structures and mapping results per project, ensuring a perfect fit for various project requirements. Its ability to smoothly integrate into the project's environment and compute platforms is unparalleled.
  • Support for Continuous Updates and Migration: The tool accommodates constant updates to project data and supports the migration of legacy data, ensuring that no historical data is lost in the transition to a new system.

Addressing Project-Specific Needs

Verisium Manager recognizes the diversity of different projects and offers project-specific solutions, including:

  • Enforcing Project Test-Plan Structures and Attributes: It supports and enforces each project's unique test-plan structure and mapping guidelines.
  • Unified Data Views and Measurements: Verisium Manager promotes a unified view of data across all teams and enforces unified measurements, ensuring consistency and clarity in the validation process.
  • Enabling Project-Specific Actions and Integrations: The tool is designed to support project-specific actions directly from its graphical user interface and allows for smooth integration with in-house databases, dashboards, and the project execution stack.

Verisium Manager is the epitome of efficiency in software/hardware validation. Its differentiating features, such as support for customization, unified data view, and comprehensive coverage and regression requirements, make it an indispensable tool for any validation team looking to elevate their workflow.




or

Deferrable Memory Write Usage and Verification Challenges

Deferrable Memory Write is valuable where real-time data processing or responsiveness is crucial, such as in high-performance computing, data centers, or applications requiring low-latency data transfers. It enables efficient use of PCIe bandwidth and resources by intelligently managing memory write operations based on system dynamics and workload priorities. By effectively leveraging Deferrable Memory Write (DMWr), devices can achieve optimized performance and responsiveness, aligning with the evolving demands of modern computing applications.

What Is Deferrable Memory Write?

The Deferrable Memory Write (DMWr) ECN introduced this new memory transaction type, which was later officially incorporated into PCIe 5.0 and CXL 2.0. DMWr flows like the existing Read/Write memory transactions; the major difference is that the Requester attempts to write to a given location in Memory Space using the non-posted DMWr TLP type, and completion of the memory write can be postponed to improve overall system efficiency and performance: the write operation can be delayed, or deferred, until higher-priority tasks complete.

The Deferrable Memory Write (DMWr) requires the Completer to return an acknowledgment to the Requester and provides a mechanism for the recipient to defer (temporarily refuse to service) the Request.

DMWr provides a mechanism for Endpoints and Hosts to choose to carry out or defer incoming DMWr Requests. This mechanism can be used by Endpoints and Hosts to simplify the design of flow control, reduce latency, and improve throughput. The Deferrable Memory Write TLP format is shown in Figure A.

 

(Fig. A) Deferrable Memory Write TLP format.

Example Scenario

Here's how DMWr works in a simplified example: Imagine a system with an endpoint device (Device A) and a host CPU (Device B). Device B wants to write data to Device A's memory, but for varying reasons, such as system bus congestion or prioritization of other transactions, Device A can defer the completion of the memory write request. The flow follows these steps:

  1. Initiation of Memory Write: Device B initiates a memory write transaction to Device A. This involves sending the memory write request along with the data payload over the PCIe physical layer link.
  2. Acknowledgment and Deferral: Upon receiving the memory write request, Device A acknowledges the transaction but may decide to defer its completion. Device A sends an acknowledgment (ACK) back to Device B, indicating it has received the data and intends to complete the write operation but not immediately.
  3. Deferred Completion: Device A defers the completion of the memory write operation to a later, more opportune time. This deferral allows Device A to prioritize other transactions or optimize the use of system resources, such as memory bandwidth or processor availability.
  4. Completion and Response: At a later point, Device A completes the deferred memory write operation and sends a completion indication back to Device B. This completion typically includes any status updates or additional information related to the transaction.
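The short Python model below mirrors these four steps at a purely conceptual level, assuming a simple ACK-then-complete handshake as described above; it is an illustration of the deferral idea, not PCIe protocol code.

```python
# Conceptual model of the DMWr flow above; not a protocol implementation.
from collections import deque

class DeviceA:                                   # Completer (endpoint)
    def __init__(self):
        self.deferred = deque()                  # writes accepted but not yet committed
        self.memory = {}

    def receive_dmwr(self, addr, data):
        # Step 2: acknowledge receipt, but defer the actual write
        self.deferred.append((addr, data))
        return "ACK"

    def run_deferred_writes(self):
        # Steps 3 and 4: at an opportune time, commit the writes and signal completion
        completions = []
        while self.deferred:
            addr, data = self.deferred.popleft()
            self.memory[addr] = data
            completions.append(("COMPLETION", addr))
        return completions

class DeviceB:                                   # Requester (host)
    def write(self, target, addr, data):
        # Step 1: issue the non-posted DMWr request
        return target.receive_dmwr(addr, data)

a, b = DeviceA(), DeviceB()
print(b.write(a, 0x1000, 0xDEADBEEF))            # -> ACK (write is deferred)
print(a.run_deferred_writes())                   # -> [('COMPLETION', 4096)]
```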

Usage or Importance of DMWr

Deferrable Memory Write provides improvements in the following aspects:

  • Reduced Latency: By deferring less critical memory write operations, more critical transactions can be processed with lower latency, improving overall system responsiveness.
  • Improved Efficiency: Optimizes the utilization of system resources such as memory bandwidth and CPU cycles, enhancing the efficiency of data transfers within the PCIe architecture.
  • Enhanced Performance: Allows devices to manage and prioritize transactions dynamically, potentially increasing overall system throughput and reducing contention.

Challenges in the Implementation of DMWr Transactions

The implementation of deferrable memory writes (DMWr) introduces several advancements and challenges in terms of usage and verification:

  1. Timing and Synchronization: DMWr allows transactions to be deferred, which complicates meeting timing requirements and completing transactions within acceptable timing windows without protocol violations. Ensuring proper synchronization between devices becomes critical to prevent data loss or corruption.
  2. Protocol Compliance: Verification must ensure compliance with ECN PCIe 6.0 and CXL specifications regarding when and how DMWr transactions can be initiated and completed.
  3. Performance Optimization: While DMWr can improve overall system performance by reducing latency, verifying its impact on system performance and ensuring it meets expected benchmarks is crucial.
  4. Error Handling: Handling errors related to deferred transactions adds complexity. Verifying error detection and recovery mechanisms under various scenarios (e.g., timeout during deferral) is essential.

Verification Challenges of DMWr Transactions

Verifying DMWr transactions involves checks of function, timing, protocol compliance, performance, error scenarios, security, and data integrity at both the PCIe and CXL levels.

  1. Functional Verification: Verifying the correct implementation of DMWr at both ends of the PCIe link (transmitter and receiver) to ensure proper functionality and adherence to specifications.
  2. Timing Verification: Validating timing constraints associated with deferring writes and ensuring transactions are completed within specified windows without violating protocol rules.
  3. Protocol Compliance Verification: Checking that DMWr transactions adhere to PCIe and CXL protocol rules, including ordering rules and any restrictions on deferral based on the transaction type.
  4. Performance Verification: Assessing the impact of DMWr on overall system performance, including latency reduction and bandwidth utilization, through simulation and testing.
  5. Error Scenario Verification: Creating and testing scenarios to verify error handling mechanisms related to DMWr, such as timeouts, retries, and recovery procedures.
  6. Security Considerations: Assessing potential security vulnerabilities related to DMWr, such as data integrity risks during deferred transactions or exposure to timing-based attacks.

The major verification challenge is timing and synchronization in the context of implementing deferrable memory writes (DMWr), which is crucial due to the inherent complexities introduced by deferred transactions. Here are the key issues and approaches to address them:

Timing and Synchronization Issues

  1. Transaction Completion Timing:
    • Issue: Ensuring deferred transactions are completed within the specified time window without violating protocol timing constraints.
    • Approach: Design an internal timer and checker to model worst-case scenarios where transactions are deferred and verify that they complete within allowable latency limits (a simple sketch of such a checker follows this list). This involves simulating various traffic loads and conditions to assess timing under different scenarios.
  2. Ordering and Dependencies:
    • Issue: Verifying that transactions deferred using DMWr maintain the correct ordering and dependencies relative to non-deferred transactions.
    • Approach: Implement test scenarios that include mixed traffic of DMWr and non-DMWr transactions. Verify through simulation or emulation that dependencies and ordering requirements are correctly maintained across the PCIe link.
  3. Interrupt Handling and Response Times:
    • Issue: Verifying the handling of interrupts and ensuring timely responses from devices involved in DMWr transactions.
    • Approach: Implement test cases that simulate interrupt generation during DMWr transactions. Measure and verify the response times to interrupts to ensure they meet system latency requirements.
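As an illustration of the timer-and-checker approach mentioned in item 1, here is a minimal scoreboard-style sketch in Python. It is not VIP code; the 1,000 ns latency budget and the request/completion event format are assumptions made purely for the example.

```python
# Simplified deferral-latency checker; the 1,000 ns budget is an assumed limit
# for illustration, not a value from the PCIe specification.
MAX_DEFER_LATENCY_NS = 1_000

class DmwrLatencyChecker:
    def __init__(self, max_latency_ns=MAX_DEFER_LATENCY_NS):
        self.max_latency_ns = max_latency_ns
        self.pending = {}                        # tag -> request timestamp

    def on_request(self, tag, time_ns):
        self.pending[tag] = time_ns              # timestamp each deferred request

    def on_completion(self, tag, time_ns):
        start = self.pending.pop(tag, None)
        if start is None:
            print(f"ERROR: completion for unknown tag {tag}")
        elif time_ns - start > self.max_latency_ns:
            print(f"VIOLATION: tag {tag} completed after {time_ns - start} ns")

    def end_of_test(self, time_ns):
        for tag, start in self.pending.items():
            print(f"VIOLATION: tag {tag} never completed (pending {time_ns - start} ns)")

chk = DmwrLatencyChecker()
chk.on_request(tag=1, time_ns=0)
chk.on_completion(tag=1, time_ns=800)            # within budget -> no message
chk.on_request(tag=2, time_ns=100)
chk.end_of_test(time_ns=5_000)                   # tag 2 flagged as never completed
```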

In conclusion, while deferrable memory writes in PCIe and CXL offer significant performance benefits, their implementation and verification present several challenges related to timing, protocol compliance, performance optimization, and error handling. Addressing these challenges requires rigorous testing with realistic traffic, advanced verification methodologies, and a thorough understanding of the PCIe specifications; indeed, the motivation behind introducing Deferrable Memory Write carries over to CXL, where it is used even more effectively. A successful verification outcome confirms that the performance benefits of DMWr (reduced latency, improved throughput) are achieved without compromising timing integrity or violating protocol specifications.

In summary, PCIe and CXL are complex protocols with many verification challenges. You must understand the many new specification changes and put together a robust verification plan covering both the new features and the backward-compatible tests impacted by them. Cadence's PCIe 6.0 Verification IP is fully compliant with the latest PCI Express 6.0 specification and provides an effective and efficient way to verify components interfacing with the PCIe 6.0 interface. Cadence VIP for PCIe 6.0 provides exhaustive verification of PCIe-based IP and SoCs, and we are working with early-adopter customers to speed up every verification stage.

More Information




or

Training Webinar: Protium X2: Using Save/Restart for Debugging

Cadence Protium prototyping platforms rapidly bring up an SoC or system prototype and provide a pre-silicon platform for early software development, SoC verification, system validation, and hardware regressions. In this Training Webinar, we will explore debugging using Save/Restart on Protium X2. This feature saves execution time and lets you focus on actual debugging: the system state can be saved before the bug appears and restarted directly from there without spending time in the initial execution. We'll cover key concepts and applications, explore Save/Restart performance metrics, and provide examples to help you understand the concepts.

Agenda:

  • The key concepts of debugging using Save/Restart
  • Capabilities, limitations, and performance metrics
  • Some examples to enable and use Save/Restart on the Protium X2 system

Date and Time

Thursday, November 7, 2024
07:00 PST San Jose / 10:00 EST New York / 15:00 GMT London / 16:00 CET Munich / 17:00 IST Jerusalem / 20:30 IST Bangalore / 23:00 CST Beijing

REGISTER

To register for this webinar, sign in with your Cadence Support account (email ID and password) to log in to the Learning and Support System. Then select Enroll to register for the session. Once registered, you'll receive a confirmation email containing all login details.

A quick reminder: If you haven't received a registration confirmation within 1 hour of registering, please check your spam folder and ensure your pop-up blockers are off and cookies are enabled. For issues with registration or other inquiries, reach out to eur_training_webinars@cadence.com.

Want to See More Webinars?

You can find recordings of all past webinars here.

Like This Topic?

Take this opportunity and register for the free online course related to this webinar topic: Protium Introduction Training. The course includes slides with audio and downloadable lab exercises designed to emphasize the topics covered in the lecture. There is also a Digital Badge available for the training.

Want to share this and other great Cadence learning opportunities with someone else? Tell them to subscribe.

Hungry for Training?

Choose the Cadence Training Menu that's right for you. To view our complete training offerings, visit the Cadence Training website.

Related Courses

Protium Introduction Training Course | Cadence
Palladium Introduction Training Course | Cadence

Related Blogs

Training Insights – A New Free Online Course on the Protium System for Beginner and Advanced Users
Training Insights – Palladium Emulation Course for Beginner and Advanced Users

Related Training Bytes

Protium Flow Steps for Running Design on Protium System
ICE and IXCOM mode comparison
ICE compile flow
IXCOM compile flow
PATH settings for using Protium System

Please see the course learning maps for a visual representation of courses and course relationships. Regional course catalogs may be viewed here.




or

Wild River Collaborates with Cadence on CMP-70 Channel Modeling

Wild River Technology (WRT), the leading supplier of signal integrity measurement and optimization test fixtures for high-speed channels at data rates of up to 224G, has announced the availability of a new advanced channel modeling solution that helps achieve extreme signal integrity design to 70GHz. Read the press release.

The CMP-70 program continues the industry-first simulation-to-measurement collaboration with Cadence that was initially established with the CMP-50. Significant resources were dedicated to the development of the CMP-70 by Cadence and WRT over almost three years. The CMP-70 will be on display at DesignCon 2025, January 28-30, in Cadence booth 827 to benchmark the Cadence Clarity 3D Solver.

“I am not a fan of hype-based programs that simply get attention,” remarked Alfred P. Neves, WRT’s co-founder and chief technical officer. “Both Cadence and Wild River brought substantial skills to the table in this project as we continued our industry-first simulation-to-measurement collaboration. The result is a proven, robust and accurate platform that brings extreme signal integrity to 70GHz designs. This application package has also been instrumental in demonstrating the robust 3D EM simulation capability of the Cadence Clarity solver.”

“We’re delighted to continue the joint development and validation program with WRT that started with the CMP-50,” said Gary Lytle, product management director at Cadence. “The skilled and experienced signal integrity technologists that both companies bring to the program results in a superior signal integrity solution for our mutual customers.”

CMP-70 Solution Features

The solution is available both in a standard configuration and as a custom solution for customer-specific stackups and fabrication. The primary target application is to support 3D EM solver analysis modeling versus the time- and frequency-domain measurement methodologies. The solution features include:

  • The CMP-70 platform, assembled and 100% TDR NIST traceable tested, with custom stands
  • Material Identification overview web-based meeting, including anisotropic 3D material identification
  • A cross-section PCB report and structures for using as-fabricated geometries
  • Measured S-parameters, pre-tested for quality (passivity/causality and resampled for time-domain simulations)
  • A host of novel crosstalk structures suited for 112G HD level project analysis
  • PCB layout design files (NDA required)
  • An EDA starter library including loss models with industry-first accurate surface roughness models
  • Comprehensive training available for 3D EM analysis – correspondence, material ID in X-Y and Z axis for a host of EDA tools

Industry-First Hausdorff Technique

The WRT application package also includes an industry-first modified Hausdorff (MHD) technique, included as MATLAB code. This algorithmic approach provides an accurate way to compare two sets of measurements in multi-dimensional space to determine how well they match. The technique is used to compare the results simulated by the Clarity solver with those measured on the CMP-70 platform. The methodology and initial results are shown in the figure below, where the figure of merit (FOM) is calculated from 10, 35, and finally to 50GHz. The MHD algorithm requires a MATLAB license, but WRT also accommodates customer data as another option, where WRT provides the comparison between measured and simulated data.
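WRT's MHD implementation ships as MATLAB code and is not reproduced here; the short Python sketch below only illustrates the general modified Hausdorff distance idea (the Dubuisson–Jain formulation: the larger of the two mean nearest-neighbor distances between two point sets), which is the kind of figure of merit used to compare measured and simulated curves.

```python
# Generic modified Hausdorff distance (Dubuisson-Jain style) between two
# point sets, e.g. sampled measured vs. simulated response curves.
import math

def _mean_nn_distance(a, b):
    """Mean distance from each point in a to its nearest neighbor in b."""
    total = 0.0
    for pa in a:
        total += min(math.dist(pa, pb) for pb in b)
    return total / len(a)

def modified_hausdorff(a, b):
    return max(_mean_nn_distance(a, b), _mean_nn_distance(b, a))

# Toy example: two curves sampled as (frequency_GHz, dB) points
measured  = [(f, -0.10 * f) for f in range(1, 51)]
simulated = [(f, -0.11 * f) for f in range(1, 51)]
print(f"MHD figure of merit: {modified_hausdorff(measured, simulated):.3f}")
```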
Additional Resources

  • If you are attending DesignCon 2025, be sure to stop by Cadence booth 827 to see WRT’s CMP-70 advanced channel modeling solution in action with the Clarity 3D Solver.
  • Check out our on-demand webinar, "Validating Clarity 3D Solver Accuracy Through Measurement Correlation."
  • Learn more about the CMP-70 solution and the Clarity 3D Solver.
  • For more information about Cadence’s full suite of integrated multiphysics simulation solutions, download our Multiphysics System Analysis Solutions Portfolio.




or

Training Webinar: Fast Track RTL Debug with the Verisium Debug Python App Store

As a verification engineer, you're surely looking for ways to automate the debugging process. Have you developed your own scripts to ease specific debugging steps that tools don't offer? Working with scripts locally and manually is challenging, and so is reusing and organizing them. What if there was a way to create your own app with the required functionality and register it with the tool? The answer to that question is "Yes!" The Verisium Debug Python App Store lets you instantly add features and capabilities to your Verisium Debug application using Python apps that interact with Verisium Debug via the Python API.

Join me, Principal Education Application Engineer Bhairava Prasad, for this Training Webinar and discover the Verisium Debug Python App Store. The app store allows you to search for existing apps, learn about them, install or uninstall them, and even customize existing apps.

Date and Time

Wednesday, November 20, 2024
07:00 PST San Jose / 10:00 EST New York / 15:00 GMT London / 16:00 CET Munich / 17:00 IST Jerusalem / 20:30 IST Bangalore / 23:00 CST Beijing

REGISTER

To register for this webinar, sign in with your Cadence Support account (email ID and password) to log in to the Learning and Support System*. Then select Enroll to register for the session. Once registered, you'll receive a confirmation email containing all login details.

A quick reminder: If you haven't received a registration confirmation within one hour of registering, please check your spam folder and ensure your pop-up blockers are off and cookies are enabled. For issues with registration or other inquiries, reach out to eur_training_webinars@cadence.com.

Like This Topic?

Take this opportunity and register for the free online course related to this webinar topic: Verisium Debug Training. To view our complete training offerings, visit the Cadence Training website. Want to share this and other great Cadence learning opportunities with someone else? Tell them to subscribe.

Hungry for Training?

Choose the Cadence Training Menu that's right for you.

Related Courses

Xcelium Simulator Training Course | Cadence

Related Blogs

Unveiling the Capabilities of Verisium Manager for Optimized Operations - Verification - Cadence Blogs - Cadence Community
Verisium SimAI: SoC Verification with Unprecedented Coverage Maximization - Corporate News - Cadence Blogs - Cadence Community
Verisium SimAI: Maximizing Coverage, Minimizing Bugs, Unlocking Peak Throughput - Verification - Cadence Blogs - Cadence Community

Related Training Bytes

Introducing Verisium Debug (Video) (cadence.com)
Introduction to UVM Debug of Verisium Debug (Video) (cadence.com)
Verisium Debug Customized Apps with Python API

Please see the course learning maps for a visual representation of courses and course relationships. Regional course catalogs may be viewed here.

*If you don't have a Cadence Support account, go to Cadence User Registration and complete the requested information. Or visit Registration Help.




or

Versatile Use Case for DDR5 DIMM Discrete Component Memory Models

DDR5 DIMM Architectures

The DDR5 generation of Double Data Rate DRAM memories has experienced rapid adoption in recent years. In particular, the JEDEC-defined DDR5 Dual Inline Memory Module (DIMM) cards have become a mainstay for systems looking for high-density, high-bandwidth, off-chip random access memory[1]. Within a short time, the DIMM architecture evolved from an interconnected hierarchy of only SDRAM memory devices (UDIMM[2]) to complex subsystems of interconnected components (RDIMM/LRDIMM/MRDIMM[3]).

DIMM Designs and Popular Verification Use Cases

The growing complexity of the DIMMs presented a challenge for pre-silicon verification engineers who could no longer simply validate against single DDR5 SDRAM memory models. They needed to consider how their designs would perform against DIMMs connected to each channel and operating at gigahertz clock speeds. To address this verification gap, Cadence developed DDR5 DIMM Memory Models that encapsulated all of the architectural complexities presented by real-world DIMMs based on a robust, easy-to-use, easy-to-debug, and easy-to-reconfigure methodology. This memory-subsystem-in-a-single-instance model has seen explosive adoption among the traditional IP Developer and SoC Integrator customers of Cadence Memory Models.

The Cadence DIMM models act as a single unit with all of the relevant DIMM components instantiated and interconnected within, and with all AC/Timing parameters among the various components fully matched out-of-the-box, based on JEDEC specifications as well as datasheets of actual devices in the market. The typical use case for the DIMM models has been where the DUT is a DDR5 Memory Controller + PHY IP stack, and the validation plan mandated compliance with the JEDEC standards and memory device vendor datasheets.

Unique Use Case for the DIMM Discrete Component Models

Although the Cadence DIMM models have enjoyed tremendous proliferation because of their cohesive implementation and unified user API, the actual DIMM models are built on top of powerful, flexible discrete component models, each of which was designed to stand on its own as a complete SystemVerilog UVM-based VIP. All of these discrete component models exist in the Cadence VIP Catalog as standalone VIPs, complete with their own protocol compliance checking capabilities and their own configuration mappings comprehensively modeling individual AC/Timing parameters.

Because of this deliberate design decision, the Cadence DIMM Discrete Component Models can support a unique use-case scenario. Some users seek to develop IC designs for the various DIMM components. Such users need verification environments that can model the individual components of a DIMM and allow them the option to replace one or another component with their Component Design IP. They can then validate that their component design is fully compatible with the rest of the components on the DIMM and meets the integrity of the overall DIMM compliance with JEDEC standards or memory vendor datasheets. The Cadence Memory VIP portfolio today includes various examples that demonstrate how customers can create DIMM “wrappers” by selecting from among the available DIMM discrete component models and “stitching” them together to build their own custom testbench around their specific Component Design IP.
A Solution for Unique Component Scenarios

The Cadence DDR5 DIMM Memory Models and DIMM Discrete Component Models can provide users with a flexible approach to validating their specific component designs with a fully populated pre-silicon environment.

Augmented Verification Capabilities

When the DIMM “wrapper” model is augmented with the Cadence DFI VIP[4] that can simulate an MC+PHY stack and offers a SystemVerilog UVM test API to the verification engineer, the overall testbench transforms into a formidable pre-silicon validation vehicle. The DFI VIP is designed as a combination of an independent DFI MC VIP and a DFI PHY VIP connected to each other via the DFI Standard Interface and capable of operating seamlessly as a single unit. It presents a UVM Sequence API to the user into the DFI MC VIP with the Memory Interface of the PHY VIP connected to the DIMM “wrapper” model. With this testbench in hand, the user can then fully take advantage of the UVM Sequence Library that comes with the DFI VIP to enable deep validation of their Component Design inside the DIMM “wrapper” model.

Verification Capabilities Further Enhanced

A possible further enhancement comes with the potential addition of an instance of the Cadence DIMM Memory Model in a Passive Monitor mode at the DRAM Memory Interface. The DIMM Passive Monitor consumes the same configuration describing the DIMM “wrapper” in the testbench, and thus can act as a reference model for the DIMM wrapper. If the DIMM Passive Monitor responds successfully to accesses from the DFI VIP, but the DIMM wrapper does not, then it exposes potential bugs in the DUT Components or in the settings of their AC/Timing parameters inside the DIMM wrapper.

Debuggability, Interface Visibility, and Protocol Compliance

One of the key benefits of the DIMM Discrete Component Models that becomes manifest, whether in terms of the unique use-case scenario described here or when working with the wholly unified DDR5 DIMM Memory Models, is the increased debuggability of the protocol functionality. The intentional separation of the discrete components of a DIMM allows the user to have full visibility of the memory traffic at every datapath landmark within a DIMM structure. For example, in modeling an LRDIMM or MRDIMM, the interface between the RCD component and the SDRAM components, the interface between the RCD component and the DB components, and the interface between the SDRAM components and the DB components are all visible and accessible to the user. The user has full access to dump the values and states of the wire interconnects at these interfaces to the waveform viewer and thus can observe and correlate the activity against any protocol violations flagged in the trace logs by any one or more of the DIMM Discrete Component Models.

Access to these interfaces is freely available when using the DIMM Discrete Component Models. On the unified DDR5 DIMM Memory Models, a feature called Debug Ports enables the same level of visibility into the individual interconnects amidst the SDRAM components, RCD components, and DB components. When combined with the Waveform Debugger[5] capability that comes built in with the VIPs and Memory Models offered by Cadence and used with the Cadence Verisium Debug[6] tool, the enhanced debuggability becomes a powerful platform.
With these debug accesses enabled, the user can pull out transaction streams, chip state and bank state streams, mode register streams, and error message streams all right next to their RTL signals in the same Verisium Debug waveform viewer window to debug failures all in one place. The Verisium Debug tool also parses all of the log files to probe and extract messages into a fully integrated Smart Log in a tabbed window fully hyperlinked to the waveform viewer, all at your fingertips.

A Solution for Every Scenario

Cadence's DDR5 DIMM Memory Models and DIMM Discrete Component Models, partnered with the Cadence DFI VIP, can provide users with a robust and flexible approach to validating their designs thoroughly and effectively in pre-silicon verification environments ahead of tapeout commitments. The solution offers unparalleled latitude in debuggability when the Debug Ports and Waveform Debugger functions of the Memory Models are switched on and boosted with the use of the Cadence Verisium Debug tool.

[1] Shyam Sharma, DDR5 DIMM Design and Verification Considerations, 13 Jan 2023.
[2] Shyam Sharma, DDR5 UDIMM Evolution to Clock Buffered DIMMs (CUDIMM), 23 Sep 2024.
[3] Kos Gitchev, DDR5 12.8Gbps MRDIMM IP: Powering the Future of AI, HPC, and Data Centers, 26 Aug 2024.
[4] Chetan Shingala and Salehabibi Shaikh, How to Verify JEDEC DRAM Memory Controller, PHY, or Memory Device?, 29 Mar 2022.
[5] Rahul Jha, Cadence Memory Models - The Gold Standard, 15 Apr 2024.
[6] Manisha Pradhan, Accelerate Design Debugging Using Verisium Debug, 11 Jul 2023.




or

Solutions to Maximize Data Center Performance Featured at OCP Global Summit 2024

The demand for higher compute performance, energy efficiency, and faster time-to-market drove the conversations at this year's Open Compute Project (OCP) Global Summit in San Jose, California. It was the scene of groundbreaking innovations, expert-led sessions, and networking opportunities to drive the future of data center technology. For those who didn't get to attend or stop by our booth, here's a recap of Cadence's comprehensive solutions that enable next-generation compute technology, AI data center design, analysis, and optimization.

Optimized Data Center Design and Operations

As the data center community increasingly faces demands for enhanced efficiency, thermal management, sustainability, and performance optimization, data center operators, IT managers, and executives are looking for solutions to these challenges. At the Cadence booth, attendees explored the Cadence Reality Digital Twin Platform and Celsius EC Solver. These technologies are pivotal in achieving high-performance standards for AI data centers, providing advanced digital twin modeling capabilities that redefine next-generation data center design and operation. The Celsius EC Solver demonstration showed how it solves challenging thermal and electronics cooling management problems with precision and speed.

CadenceCONNECT: Take the Heat Out of Your AI Data Center

Cadence hosted a networking reception on October 16 titled "Take the Heat Out of Your AI Data Center." In today's AI era, managing the heat generated by high-density computing environments is more critical than ever. This reception offered insights into current and emerging data center technologies, digital twin cooling strategies that deliver energy-saving operations, and a chance to engage with industry leaders, Cadence experts, and peers to explore the latest cooling, AI, and GPU acceleration advancements. Here's a recap:

  • Researcher, author, and entrepreneur Dr. Jon Koomey highlighted the inefficiency of data centers in his talk "The Rise of Zombie Data Centers," noting that 20-30% of their capacity is stranded and unused. He advocated for organizational changes and technological solutions like digital twins to reduce wasted energy and improve computational effectiveness as AI deployments increase.
  • In "A New Millennium in Multiphysics System Analysis," Cadence Corporate VP Ben Gu explained the company's significant strides in multiphysics system analysis, evolving from chip simulation to a broader application of computational software for simulating various physical systems, including entire data centers. He noted that the latest Cadence venture, a digital twin platform for data center optimization, opened the opportunity to use simulation technology to optimize the efficiency of data centers.
  • Senior Software Engineering Group Director Albert Zeng highlighted the Cadence Reality DC suite's ability to transform data center operations through simulation, emphasizing its multi-phase engine for optimal thermal performance and the integration of AI capabilities for enhanced design and management.
  • A panel discussion titled "Turning AI Factory Blueprints into Reality at the Speed of Light" featured industry experts from NVIDIA, Norman Wright Precision Environmental and Power, NV5, Switch Data Centers, and Cadence, who explored the evolving requirements and multidimensional challenges of AI factories, emphasizing the need for collaboration across the supply chain to achieve high-performing and sustainable data centers.

Watch the highlights.
Transforming Designs from Chips to Data Centers

The OCP Global Summit 2024 has reaffirmed its status as a pivotal event for data center professionals seeking to stay at the forefront of technological advancements. Cadence's contributions, from groundbreaking digital twin technologies to innovative cooling strategies, have shed light on the path forward for efficient, sustainable data centers. For data center professionals, IT managers, and engineers, the insights gained at this summit are invaluable in navigating the challenges and opportunities presented by the burgeoning AI era.

Partnering with Arm: Arm Total Design

Cadence is a member of the Arm Total Design program. At an invitation-only special Arm event, Cadence's VP of Research and Development, Lokesh Korlipara, delivered a presentation focusing on data center challenges and design solutions with Arm Neoverse Compute Subsystem (CSS). The session highlighted:

  • Efficient integration of Arm Neoverse CSS into systems on chips (SoCs) with pre-integrated connectivity IP
  • Performance analysis and verification of the Neoverse CSS integration into the SoC through Cadence's System VIP verification suite and automated testbench creation, enhancing both quality and productivity
  • Jumpstarting designs through Cadence's collaboration with Arm for 3D-IC system planning, chiplets, and interposers
  • Design Services readiness and global scale to support and/or deliver the most demanding Arm Neoverse CSS-based SoC design projects

Cadence Supports Arm CSS in Arm Booth

During the event, Cadence conducted a demo in the Arm booth that showcased the Cadence System VIP verification suite. The demo highlighted automated testbench creation and performance analysis for integrating the Arm CSS into SoCs while enhancing verification quality and productivity.

Summary

Cadence offers data center solutions for designing everything from the compute and networking chips to the boards, racks, data centers, and campuses. Stay connected with Cadence and other industry leaders to continue exploring the innovations set to redefine the future of data centers.

Learn More

Cadence Joins Arm Total Design
Cadence Arm-Based Solutions
Cadence Reality Digital Twin Platform




or

Celebrating Milestones: The Cadence Bangalore Toastmasters Club’s Journey

On November 5, 2024, the Cadence Bangalore Toastmasters Club celebrated a significant milestone by hosting its 50th meeting. Established in December 2020, the club was created to provide a supportive environment for individuals looking to improve their communication and leadership skills. Over the years, the club has evolved into a vibrant community filled with success stories of personal development and newfound confidence.

A testament to the club's dedication is its achievement of the "Select Distinguished Club" status during the 2023-2024 program year. By fulfilling 7 out of 10 distinguished goals, the club highlighted its commitment to excellence—a success driven by its vibrant members' relentless focus and perseverance. The strategic insight gained from regular Toastmasters committee meetings and the influential "Moments of Truth" sessions held in 2023 and 2024 are key to this success.

Our club members have consistently demonstrated strong performance in various speech contests, with notable achievements across multiple levels. In 2023, members excelled in Evaluation and Table Topics contests, reaching the district level while advancing to the Division Level in the International Speech Contest. Continuing their success into 2024, members again qualified for area-level contests, securing third-place positions in the Evaluation and Table Topics categories, highlighting the club's dedication and competitive spirit.

The 50th meeting was based on the theme of serendipity. It was not only a milestone celebration but also a vibrant festival of achievements and growth. The day buzzed with energy as activities like a spirited Treasure Hunt injected enthusiasm and camaraderie among attendees. Distinguished guests, including Kripa Venkitachalam and Madhavi Rao, enriched the occasion with inspiring speeches. Madhavi reignited the club's spirit, while Kripa's discourse on the Growth Mindset and the "Power of Yet" encouraged members to pursue continuous self-improvement.

The Cadence Bangalore Toastmasters Club is enthusiastic about its promising future and is committed to creating an environment that promotes personal and professional growth. Many members are close to completing their Toastmasters levels and pathways, and this term, a new group of approximately 30 individuals has joined, bringing the total membership to 52. This vibrant community is just beginning its journey and is eager to reach new milestones together through mutual support and a shared commitment to excellence.

The transformations experienced by many club members are truly compelling. They often share how the club has significantly improved their communication skills and boosted their confidence. One member recalls, "Before joining, I found public speaking intimidating. Now, I embrace every opportunity to share my ideas." Another member highlights how the club's supportive environment helped him overcome his fear of public speaking, propelling his career to new heights. This culture of constructive feedback and continuous improvement has inspired countless members to pursue their dreams with renewed determination and optimism.

The Cadence Bangalore Toastmasters Club's journey is a living testament to the power of community and the potential within each of us to grow and achieve greatness. As the club continues to evolve and inspire, it serves as a beacon for those aspiring to transform their skills and seize their moment in the spotlight. Learn more about life at Cadence.




or

Randomization considerations for PCIe Integrity and Data Encryption Verification Challenges

Peripheral Component Interconnect Express (PCIe) is a high-speed interface standard widely used for connecting processors, memory, and peripherals. With the increasing reliance on PCIe to handle sensitive data and critical high-speed data transfer, ensuring data integrity and encryption during verification is an essential goal. In verification, randomization is a key technique that drives robust PCIe verification: it introduces unpredictability to simulate real-world conditions and uncover hidden bugs in the design. This blog examines the significance of randomization in PCIe IDE verification, focusing on how it ensures data integrity and encryption reliability, while also highlighting the unique challenges it presents. For more background on PCIe IDE, refer to Introducing PCIe's Integrity and Data Encryption Feature.

The Importance of Data Integrity and Data Encryption in PCIe Devices

Data Integrity: Ensures that the transmitted data arrives unchanged from source to destination. Even minor corruption in data packets can compromise system reliability, making integrity a critical aspect of PCIe verification.

Data Encryption: Protects sensitive data from unauthorized access during transmission. Encryption in PCIe follows a standard to secure information while operating at high speeds.

Maintaining both data integrity and data encryption at PCIe's high-speed data transfer rates of 64 GT/s in PCIe 6.0 and 128 GT/s in PCIe 7.0 is essential for all endpoint devices. However, validating these mechanisms requires comprehensive testing and verification methodologies, which is where randomization plays a crucial role. You can refer to Why IDE Security Technology for PCIe and CXL? for more details.

Randomization in PCIe Verification

Randomization refers to the generation of test scenarios with unpredictable inputs and conditions to expose corner cases. In PCIe verification, this technique helps ensure that all possible behaviors are tested, including rare or unexpected situations that could cause data corruption or encryption failures and create serious problems later. For PCIe IDE verification, randomization helps verify behavior more efficiently.

Randomization for Data Integrity Verification

The following randomized approaches mimic real-world traffic conditions, uncovering subtle integrity issues that might not surface with directed verification methods.

1. Randomized Packet Injection: Random, malformed, or out-of-sequence packets are injected into the PCIe link, mixing valid and invalid IDE-encrypted packets, to check the system's ability to detect and reject unauthorized or invalid packets and to confirm that encryption/decryption occurs correctly across packets. During verification, we check whether the system logs proper errors or alerts when it encounters invalid packets; this ensures coverage of different data paths and robust protocol checking. The technique helps assess the resilience of the PCIe IDE feature in terms of: (i) data corruption: detecting whether the system can maintain data integrity; (ii) encryption failures: testing the robustness of the encryption under random data injection; (iii) packet ordering errors: ensuring reordering does not affect data delivery.
2. Random Errors and Fault Injection: This involves simulating random bit flips, PCRC errors, or protocol violations to validate the robustness of PCIe's error detection and correction mechanisms. These techniques help assess how well the PCIe IDE implementation: (i) detects and responds to unexpected errors; (ii) maintains secure communication under stress; (iii) follows the PCIe error recovery and reporting mechanisms (AER, Advanced Error Reporting); (iv) keeps encryption and decryption states synchronized across endpoints.

3. Traffic Pattern Randomization: Randomizing the sequence, size, and timing of data packets helps test how the device maintains data integrity under heavy, unpredictable traffic loads.

Randomization for Data Encryption Verification

Encryption adds complexity to verification because encrypted data streams are not readable by traditional checks. Randomization becomes essential to test how encryption behaves under different scenarios, and it ensures that vulnerabilities such as key reuse or predictable patterns are identified and mitigated.

1. Random Encryption Keys and Payloads: Randomly varying keys and payloads helps validate the correctness of encryption without hardcoding assumptions, ensuring that the encryption logic behaves correctly across all possible inputs.

2. Randomized Initialization Vectors (IVs): Many encryption protocols require a unique IV for each transaction. Randomized IVs ensure that encryption does not repeat patterns. To understand the IDE key management flow, the diagram below illustrates a detailed example key programming flow using the IDE_KM protocol.

Figure 1: IDE_KM Example

As Figure 1 shows, the IDE_KM protocol involves the start of the IDE_KM session, device capability discovery, a key request from the host, key programming to the PCIe device, and key acknowledgment. First, the host starts the IDE_KM session by detecting the presence of the PCIe devices; if the device supports the IDE protocol, the system continues with the key programming process. A query then discovers the device's encryption capabilities, establishing whether the device supports dynamic key updates or static keys. The host then sends a request to the key management entity to obtain a key suitable for the device. Once the key is obtained, the host programs it into the IDE controller on the PCIe endpoint, and the host and the device now share the same key to encrypt and authenticate traffic. The device acknowledges that it has received and successfully installed the encryption key, and the acknowledgment message is sent back to the host. Once both the host and the PCIe endpoint are configured with the key, a secure communication channel is established; from this point, all data transmitted over the PCIe link is encrypted to maintain confidentiality and integrity. IDE_KM plays a crucial role in distributing keys securely and maintaining encryption and integrity for PCIe transactions. This key programming flow ensures that a secure communication channel is established between the host and the PCIe device, and the randomized key approach ensures that the encryption does not repeat patterns.

3. Randomization of PHE: Partial Header Encryption (PHE) is an additional mechanism added to Integrity and Data Encryption (IDE) in PCIe 6.0.
Validating PHE with a variety of traffic, and incorporating randomization in the APIs provided for validating the PHE feature, adds more robust encryption coverage. Partial Header Encryption in Integrity and Data Encryption for PCIe has more detailed information on this.

Figure 2: High-Level Flow for Partial Header Encryption

4. Randomization of IDE Address Association Register values: IDE Address Association Registers 1/2/3 are configured according to the memory address range of the IDE partner ports. The fields of the IDE address registers are split into multiple values such as Memory Base Lower, Memory Limit Lower, Memory Base Upper, and Memory Limit Upper. An IDE implementation can have multiple register blocks covering 32- or 64-bit addresses, different register sizes, 0-255 selective streams, 0-15 address blocks, and so on. Randomizing these values helps verify all the corner cases; see Figure 3.

Figure 3: IDE Address Association Register

5. Random Faults During Encryption: Injecting random faults (e.g., dropped packets or timing mismatches) ensures the system can handle disruptions and prevent data leakage.

Challenges of IDE Randomization and Their Solutions

Randomization introduces a vast number of scenarios, making it computationally intensive to simulate every possibility. Constrained randomization limits random inputs to valid ranges while still covering edge cases, and coverage-driven verification ensures that critical scenarios are tested without excessive redundancy. Verifying encrypted data with random inputs also increases complexity: encryption masks data, making it hard to verify outputs without compromising security. Here we can implement various IDE checks in the IDE callback to analyze encrypted traffic without decrypting it. Finally, randomization can trigger unexpected failures that are often difficult to reproduce. With seed-based randomization, a specific seed generates a repeatable random sequence, which helps in reproducing and analyzing the behavior precisely.

Conclusion

Randomization is a powerful technique in PCIe verification, ensuring robust validation of both data integrity and data encryption. It helps uncover subtle bugs and edge cases that non-randomized testing might miss. The Cadence PCIe VIP supports full-fledged IDE verification with rigorous randomized verification that ensures data integrity, and robust, reliable encryption mechanisms ensure secure and efficient data communication. Randomization also brings challenges, and to overcome them we adopt a combination of constrained randomization, seed-based testing, and coverage-driven verification. As PCIe continues to evolve toward higher speeds and stronger security demands, the Cadence PCIe VIP keeps pace with industry requirements and helps verify high-performance systems that safeguard data in real-world environments. For more information, you can refer to Verification of Integrity and Data Encryption (IDE) for PCIe Devices and Industry's First Adopted VIP for PCIe 7.0.

More Information: For more on how Cadence PCIe Verification IP and TripleCheck VIP enable users to confidently verify IDE, see our VIP for PCI Express, VIP for Compute Express Link, and TripleCheck for PCI Express. For more information on PCIe in general, and on the various PCI standards, see the PCI-SIG website.
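To make the randomization approaches above concrete, here is a minimal constrained-random sketch in SystemVerilog. The class, field, and constraint names are hypothetical (they are not part of the Cadence PCIe VIP API); the sketch only illustrates how payloads, keys, IVs, and injected faults can be randomized within legal ranges.

// Hypothetical constrained-random stimulus item for IDE-style traffic.
// All names are illustrative only; they are not the Cadence PCIe VIP API.
class ide_rand_txn;
  rand bit [255:0]  aes_key;          // randomized encryption key
  rand bit [95:0]   iv;               // randomized initialization vector
  rand byte         payload[];        // randomized payload bytes
  rand bit          inject_crc_err;   // occasionally corrupt the integrity check
  rand bit          drop_packet;      // occasionally drop the packet entirely
  rand int unsigned gap_cycles;       // randomized inter-packet gap

  // Constrained randomization: keep inputs in legal ranges while still hitting edge cases.
  constraint c_payload_size { payload.size() inside {[1:512]}; }
  constraint c_gap          { gap_cycles inside {[0:64]}; }
  constraint c_fault_rate   { inject_crc_err dist {0 := 95, 1 := 5};
                              drop_packet    dist {0 := 98, 1 := 2}; }
endclass

module ide_rand_tb;
  initial begin
    ide_rand_txn txn = new();
    repeat (100) begin
      if (!txn.randomize())
        $fatal(1, "randomization failed");
      // ... drive txn through the IDE stimulus path and run the integrity checks here ...
    end
  end
endmodule

Running the same test with a fixed seed (for example, xrun -svseed <value>) makes a failing random sequence repeatable, which is the seed-based reproduction approach described above.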




or

NClaunch : ncelab: *E,CUVHNF error

I'm trying to simulate a practice design. Compiling my Verilog code does not give any errors, but when I try to elaborate, the following error is shown:

ncelab: *E,CUVHNF (./FSM_test.v,17|20): Hierarchical name component lookup failed at 'l'

What does this mean? How can I debug this error? Is there an archive or list of possible errors so that I don't have to ask on the forum to understand them?
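For context, this message generally means that a component of a hierarchical name could not be resolved during elaboration; here the unresolved component is 'l' at the location reported in FSM_test.v (17|20). A minimal hypothetical example, not the actual code, that produces a similar error would be:

// Hypothetical illustration only: a hierarchical reference whose first path
// component 'l' does not exist fails lookup at elaboration.
module fsm (input clk);
  reg [1:0] state;
endmodule

module FSM_test;
  reg clk;
  fsm dut (.clk(clk));               // the instance is named "dut", not "l"

  initial $display("%0d", l.state);  // 'l' cannot be resolved -> elaboration error
endmodule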




or

TensorFlow Optimization in DSVM: Azure and Cadence

Hello Folks,

Problem statement first: How does one properly set up TensorFlow to run on a DSVM using a remote Docker environment? Can this be done in aml_config/*.runconfig?

I receive the following message and I would like to be able to utilize the increased speeds of the extended FMA operations.

tensorflow/core/platform/cpu_feature_guard.cc:140] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA

Background: I utilize a local docker environment managed through Azure ML Workbench for initial testing and code validation so that I'm not running an expensive DSVM constantly. Once I assess that my code is to my liking, I then run it on a remote docker instance on an Azure DSVM.

I want a consistent conda environment across my compute environments, so this works out extremely well. However, I cannot figure out how to control the TensorFlow build to optimize for the hardware at hand (i.e., my local Docker on macOS vs. the remote Docker on the Ubuntu DSVM).




or

Using an oscilloscope waveform CSV file as the PSpice simulation signal source

hi,

I saved the oscilloscope waveform as a CSV file.

Now, I need to use this waveform file as the source for a low-pass filter.

I searched and read the PSpice help documents and did not find a method.

How can this be done?

     Are there any reference documents or examples?

     Thanks!

    




or

Functional coverage report.

Is there a way to generate coverage reports, not in ucd or any other binary format? I have written a basic covergroup and passed the arguments [-covoverwrite -cov_cgsample -cov_debuglog -coverage u] to the xrun command; however, I don't see anything in the sim directory, nor do I see anything in the logs indicating that the covergroups have been hit. How can I confirm that covergroups are getting hit and observe the bins? In Questa, you essentially get this as part of the log itself.
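For reference, here is a minimal self-contained covergroup sketch (hypothetical names, not the original testbench) that makes it easy to confirm sampling is happening:

// Simplified, self-contained sketch with hypothetical names.
module cov_tb;
  bit       clk;
  bit [3:0] data;

  covergroup cg @(posedge clk);      // sampled automatically on every posedge of clk
    cp_data : coverpoint data { bins low = {[0:7]}; bins high = {[8:15]}; }
  endgroup

  cg cg_inst = new();                // the covergroup must be constructed, or it never samples

  initial begin
    repeat (20) begin
      #5 clk = ~clk;
      data = $urandom_range(0, 15);
    end
    // Quick in-log sanity check that the bins are actually being hit.
    $display("cg coverage = %0.2f%%", cg_inst.get_coverage());
    $finish;
  end
endmodule

With Xcelium, the coverage database is typically written under a cov_work directory and browsed with IMC; the $display call above is only a quick in-log check that the covergroup is actually being sampled.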




or

Path mapping for C Firmware source files when debugging

Hi,

I am compiling firmware under Windows and transferring the binaries and sources to Linux to simulate/debug there. The problem is that the paths in the DWARF debug info of the .elf file are the absolute Windows paths set by the compiler, so they are useless under Linux. Is it possible to configure mappings of these paths to the Linux paths when simulating/debugging, as with e.g. GDB's set substitute-path (https://sourceware.org/gdb/current/onlinedocs/gdb/Source-Path.html#index-set-substitute_002dpath)?

thx,

Peter




or

Arduino: how to save the dynamic memory?

On the Arduino Mega2560, when the first serial port is added, dynamic memory usage is 2000 bytes; when the second serial port is added, it is 4000 bytes. Now I need to add a third serial port, and the dynamic memory usage becomes 6000 bytes. Because of the many variables in the program itself, there is not enough dynamic memory left. Please help me: how can I save dynamic memory?




or

Matlab cannot open PSpice; it reports that orCEFSimpleUI.exe has stopped working!

Cadence_SPB_17.4-2019 + Matlab R2019a

Please follow the steps in this document:

1. Open BJT_AMP.opj

2. Set the Matlab path

3. Open BJT_AMP_SLPS.slx

4. After opening it and setting up the PSpiceBlock, a message appears that orCEFSimpleUI.exe has stopped working

5. Add the block

6. Same result

7. Open pspsim.slx

8. Same result

9. Open C:\Cadence\Cadence_SPB_17.4-2019\tools\bin

orCEFSimpleUI.exe and orCEFSimple.exe

10. Same result

I would like to ask how to solve this. Thank you very much!




or

How to remove incorrect nets error in cadence?

While running LVS, it shows an error in the gnd connection. I am not able to understand exactly what the error is and what I need to do to remove it.




or

Formal Verification Approach for I2C Slave

Hello,

I am new to formal verification and I have a conceptual question about how to verify an I2C slave block.

I think the answer should apply to any serial interface that needs to receive information over several clocks before taking an action.

The protocol description is the following:

I have a serial clock (SCL), Serial Data Input (SDI) and Serial Data Output (SDO), all are ports of the I2C Slave block.

The protocol looks like this:

The first byte received by the slave consists of 7 bits of sensor address, and the 8th bit is the command: 0/1 for Write/Read.

After the first 8 bits, the slave sends an ACK (SDO = 1 for 1 clock) if the sensor address is correct.

Let's consider only this case, where I want to verify that the slave responds with an ACK if the sensor address is correct.

The only solution I have found so far is to use the block's internal buffer, which stores the received bits over 8 clocks. The signal is called shift_s.

I also needed to use the internal chip state (state_s) and an internal counter (shift_count_s).

Instead of doing a direct check of SDO (sdo_o) against SDI (sdi_d_i), I used the internal shift_s register.

My question is whether my approach is correct, or whether it is possible to write the verification at a blackbox level.

Below are the two properties: the first checks the connection from SDI to the internal buffer, the second checks the connection between the internal buffer and the output.

property prop_i2c_sdi_store;
  @(posedge sclk_n_i)
  $past(i2c_bl.state_s == `STATE_RECEIVE_I2C_ADDR)
    |-> i2c_bl.shift_s == byte'({ $past(i2c_bl.shift_s), $past(sdi_d_i)});
endproperty
APF_I2C_CHECK_SDI_STORE: assert property(prop_i2c_sdi_store);

property prop_i2c_sensor_addr(sens_addr_sel, sens_addr);
@(posedge sclk_n_i) (i2c_bl.state_s == `STATE_RECEIVE_I2C_ADDR) && (i2c_addr_i == sens_addr_sel) && (i2c_bl.shift_count_s == 7)
  ##1 (i2c_bl.shift_s inside {sens_addr, sens_addr+1}) |-> sdo_o;
endproperty
APF_I2C_CHECK_SENSOR_ADDR0: assert property(prop_i2c_sensor_addr(0, `I2C_SENSOR_ADDRESS_A0));
APF_I2C_CHECK_SENSOR_ADDR1: assert property(prop_i2c_sensor_addr(1, `I2C_SENSOR_ADDRESS_A1));
APF_I2C_CHECK_SENSOR_ADDR2: assert property(prop_i2c_sensor_addr(2, `I2C_SENSOR_ADDRESS_A2));
APF_I2C_CHECK_SENSOR_ADDR3: assert property(prop_i2c_sensor_addr(3, `I2C_SENSOR_ADDRESS_A3));

PS: i2c_addr_i is address selection for the slave (there are 4 configurable sensor addresses, but this is not important for the case).
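For reference, this is the kind of blackbox-style property I have in mind: it accumulates the serial input bits in a local variable instead of reading the internal shift_s register. It is only a rough sketch; start_of_addr is a hypothetical signal marking the start of the address byte (for example, derived from the START condition).

property prop_i2c_ack_blackbox(sens_addr);
  bit [7:0] rx;                                 // local accumulator, no internal DUT signals
  @(posedge sclk_n_i)
    (start_of_addr, rx = '0)                    // hypothetical start-of-address marker
    ##1 (1'b1, rx = {rx[6:0], sdi_d_i}) [*8]    // shift in the 8 bits from SDI only
    ##0 (rx[7:1] == sens_addr) |-> ##1 sdo_o;   // correct sensor address -> ACK on SDO
endproperty
APF_I2C_CHECK_ACK_BB: assert property (prop_i2c_ack_blackbox(`I2C_SENSOR_ADDRESS_A0));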

Thank you!




or

Issue With Loudness Normalization

Hello everyone. In recent days, I've been having a weird problem with sound output on my Windows 10 PC: I can't control its loudness. Is there any possibility that the PCB of the sound card is damaged?




or

How to turn vavlog IO width mismatch error to warning?

Hi, all.

When I use vavlog to compile Verilog RTL, it treats an IO width mismatch as a fatal error.

How can I turn this error into a warning?

VCS can use -error=noIOPCWM to ignore the error.

Does vavlog have a similar argument?
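For reference, the construct in question is a port/net width mismatch at an instantiation, along the lines of this minimal hypothetical example (not the actual design):

module sub (input [7:0] d, output [7:0] q);
  assign q = d;
endmodule

module top (input [3:0] din, output [7:0] dout);
  sub u0 (.d(din), .q(dout));   // 4-bit net tied to an 8-bit port: IO width mismatch
endmodule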




or

Sweep Function Error

When I try to implement the sweep inside my verilog file, I get the following errors.

/// Function

swp2 sweep param=Ndiscmin values=[0.011 0.5055 1] {     //// Line 63
   tran2 tran start=0 stop=1300n errpreset=conservative
}




or

Can't request Tensilica SDK - Error 500

Hi,

I'm looking to download Tensilica SDK for evaluation, but I can't get past the registration form:




or

To Escalate or Not? This Is Modi’s Zugzwang Moment

This is the 17th installment of The Rationalist, my column for the Times of India.

One of my favourite English words comes from chess. If it is your turn to move, but any move you make makes your position worse, you are in ‘Zugzwang’. Narendra Modi was in zugzwang after the Pulwama attacks a few days ago—as any Indian prime minister in his place would have been.

An Indian PM, after an attack for which Pakistan is held responsible, has only unsavoury choices in front of him. He is pulled in two opposite directions. One, strategy dictates that he must not escalate. Two, politics dictates that he must.

Let’s unpack that. First, consider the strategic imperatives. Ever since both India and Pakistan became nuclear powers, a conventional war has become next to impossible because of the threat of a nuclear war. If India escalates beyond a point, Pakistan might bring their nuclear weapons into play. Even a limited nuclear war could cause millions of casualties and devastate our economy. Thus, no matter what the provocation, India needs to calibrate its response so that Pakistan doesn’t take it all the way.

It’s impossible to predict what actions Pakistan might view as sufficient provocation, so India has tended to play it safe. Don’t capture territory, don’t attack military assets, don’t kill civilians. In other words, surgical strikes on alleged terrorist camps are the most we can do.

Given that Pakistan knows that it is irrational for India to react, and our leaders tend to be rational, they can ‘bleed us with a thousand cuts’, as their doctrine states, with impunity. Both in 2001, when our parliament was attacked and the BJP’s Atal Bihari Vajpayee was PM, and in 2008, when Mumbai was attacked and the Congress’s Manmohan Singh was PM, our leaders considered all the options on the table—but were forced to do nothing.

But is doing nothing an option in an election year?

Leave strategy aside and turn to politics. India has been attacked. Forty soldiers have been killed, and the nation is traumatised and baying for blood. It is now politically impossible to not retaliate—especially for a PM who has criticized his predecessor for being weak, and portrayed himself as a 56-inch-chested man of action.

I have no doubt that Modi is a rational man, and knows the possible consequences of escalation. But he also knows the possible consequences of not escalating—he could dilute his brand and lose the elections. Thus, he is forced to act. And after he acts, his Pakistan counterpart will face the same domestic pressure to retaliate, and will have to attack back. And so on till my home in Versova is swallowed up by a nuclear crater, right?

Well, not exactly. There is a way to resolve this paradox. India and Pakistan can both escalate, not via military actions, but via optics.

Modi and Imran Khan, who you’d expect to feel like the loneliest men on earth right now, can find sweet company in each other. Their incentives are aligned. Neither man wants this to turn into a full-fledged war. Both men want to appear macho in front of their domestic constituencies. Both men are masters at building narratives, and have a pliant media that will help them.

Thus, India can carry out a surgical strike and claim it destroyed a camp, killed terrorists, and forced Pakistan to return a braveheart prisoner of war. Pakistan can say India merely destroyed two trees plus a rock, and claim the high moral ground by returning the prisoner after giving him good masala tea. A benign military equilibrium is maintained, and both men come out looking like strong leaders: a win-win game for the PMs that avoids a lose-lose game for their nations. They can give themselves a high-five in private when they meet next, and Imran can whisper to Modi, “You’re a good spinner, bro.”

There is one problem here, though: what if the optics don’t work?

If Modi feels that his public is too sceptical and he needs to do more, he might feel forced to resort to actual military escalation. The fog of politics might obscure the possible consequences. If the resultant Indian military action causes serious damage, Pakistan will have to respond in kind. In the chain of events that then begins, with body bags piling up, neither man may be able to back down. They could end up as prisoners of circumstance—and so could we.

***

Also check out:

Why Modi Must Learn to Play the Game of Chicken With Pakistan—Amit Varma
The Two Pakistans—Episode 79 of The Seen and the Unseen
India in the Nuclear Age—Episode 80 of The Seen and the Unseen

The India Uncut Blog © 2010 Amit Varma. All rights reserved.
Follow me on Twitter.




or

Can Amit Shah do for India what he did for the BJP?

This is the 20th installment of The Rationalist, my column for the Times of India.

Amit Shah’s induction into the union cabinet is such an interesting moment. Even partisans who oppose the BJP, as I do, would admit that Shah is a political genius. Under his leadership, the BJP has become an electoral behemoth in the most complicated political landscape in the world. The big question that now arises is this: can Shah do for India what he did for the BJP?

This raises a perplexing question: in the last five years, as the BJP has flourished, India has languished. And yet, the leadership of both the party and the nation are more or less the same. Then why hasn’t the ability to manage the party translated to governing the country?

I would argue that there are two reasons for this. One, the skills required in those two tasks are different. Two, so are the incentives in play.

Let’s look at the skills first. Managing a party like the BJP is, in some ways, like managing a large multinational company. Shah is a master at top-down planning and micro-management. How he went about winning the 2014 elections, described in detail in Prashant Jha’s book How the BJP Wins, should be a Harvard Business School case study. The book describes how he fixed the BJP’s ground game in Uttar Pradesh, picking teams for 147,000 booths in Uttar Pradesh, monitoring them, and keeping them accountable.

Shah looked at the market segmentation in UP, and hit upon his now famous “60% formula”. He realised he could not deliver the votes of Muslims, Yadavs and Jatavs, who were 40% of the population. So he focussed on wooing the other 60%, including non-Yadav OBCs and non-Jatav Dalits. He carried out versions of these caste reconfigurations across states, and according to Jha, covered “over 5 lakh kilometres” between 2014 and 2017, consolidating market share in every state in this country. He nurtured “a pool of a thousand new OBC and Dalit leaders”, going well beyond the posturing of other parties.

That so many Dalits and OBCs voted for the BJP in 2019 is astonishing. Shah went past Mandal politics, managing to subsume previously antagonistic castes and sub-castes into a broad Hindutva identity. And as the BJP increased its depth, it expanded its breadth as well. What it has done in West Bengal, wiping out the Left and weakening Mamata Banerjee, is jaw-dropping. With hindsight, it may one day seem inevitable, but only a madman could have conceived it, and only a genius could have executed it.

Good man to be Home Minister then, eh? Not quite. A country is not like a large company or even a political party. It is much too complex to be managed from the top down, and a control freak is bound to flounder. The approach needed is very different.

Some tasks of governance, it is true, are tailor-made for efficient managers. Building infrastructure, taking care of roads and power, building toilets (even without an underlying drainage system) and PR campaigns can all be executed by good managers. But the deeper tasks of making an economy flourish require a different approach. They need a light touch, not a heavy hand.

The 20th century is full of cautionary tales that show that economies cannot be centrally planned from the top down. Examples of that ‘fatal conceit’, to use my hero Friedrich Hayek’s term, include the Soviet Union, Mao’s China, and even the lady Modi most reminds me of, Indira Gandhi.

The task of the state, when it comes to the economy, is to administer a strong rule of law, and to make sure it is applied equally. No special favours to cronies or special interest groups. Just unleash the natural creativity of the people, and don’t try to micro-manage.

Sadly, the BJP’s impulse, like that of most governments of the past, is a statist one. India should have a small state that does a few things well. Instead, we have a large state that does many things badly, and acts as a parasite on its people.

As it happens, the few things that we should do well are all right up Shah’s managerial alley. For example, the rule of law is effectively absent in India today, especially for the poor. As Home Minister, Shah could fix this if he applied the same zeal to governing India as he did to growing the BJP. But will he?

And here we come to the question of incentives. What drives Amit Shah: maximising power, or serving the nation? What is good for the country will often coincide with what is good for the party – but not always. When they diverge, which path will Shah choose? So much rests on that.

The India Uncut Blog © 2010 Amit Varma. All rights reserved.
Follow me on Twitter.




or

For this Brave New World of cricket, we have IPL and England to thank

This is the 24th installment of The Rationalist, my column for the Times of India.

Back in the last decade, I was a cricket journalist for a few years. Then, around 12 years ago, I quit. I was jaded as hell. Every game seemed like déjà vu, nothing new, just another round on the treadmill. Although I would remember her fondly, I thought me and cricket were done.

And then I fell in love again. Cricket has changed in the last few years in glorious ways. There have been new ways of thinking about the game. There have been new ways of playing the game. Every season, new kinds of drama form, new nuances spring up into sight. This is true even of what had once seemed the dullest form of the game, one-day cricket. We are entering into a brave new world, and the team leading us there is England. No matter what happens in the World Cup final today – a single game involves a huge amount of luck – this England side are extraordinary. They are the bridge between eras, leading us into a Golden Age of Cricket.

I know that sounds hyperbolic, so let me stun you further by saying that I give the IPL credit for this. And now, having woken you up with such a jolt on this lovely Sunday morning, let me explain.

Twenty20 cricket changed the game in two fundamental ways. Both ended up changing one-day cricket. The first was strategy.

When the first T20 games took place, teams applied an ODI template to innings-building: pinch-hit, build, slog. But this was not an optimal approach. In ODIs, teams have 11 players over 50 overs. In T20s, they have 11 players over 20 overs. The equation between resources and constraints is different. This means that the cost of a wicket goes down, and the cost of a dot ball goes up. Critically, it means that the value of aggression rises. A team need not follow the ODI template. In some instances, attacking for all 20 overs – or as I call it, ‘frontloading’ – may be optimal.

West Indies won the T20 World Cup in 2016 by doing just this, and England played similarly. And some sides began to realise that they had been underestimating the value of aggression in one-day cricket as well.

The second fundamental way in which T20 cricket changed cricket was in terms of skills. The IPL and other leagues brought big money into the game. This changed incentives for budding cricketers. Relatively few people break into Test or ODI cricket, and play for their countries. A much wider pool can aspire to play T20 cricket – which also provides much more money. So it makes sense to spend the hundreds of hours you are in the nets honing T20 skills rather than Test match skills. Go to any nets practice, and you will find many more kids practising innovative aggressive strokes than playing the forward defensive.

As a result, batsmen today have a wider array of attacking strokes than earlier generations. Because every run counts more in T20 cricket, the standard of fielding has also shot up. And bowlers have also reacted to this by expanding their arsenal of tricks. Everyone has had to lift their game.

In one-day cricket, thus, two things have happened. One, there is better strategic understanding about the value of aggression. Two, batsmen are better equipped to act on the aggressive imperative. The game has continued to evolve.

Bowlers have reacted to this with greater aggression on their part, and this ongoing dialogue has been fascinating. The cricket writer Gideon Haigh once told me on my podcast that the 2015 World Cup featured a battle between T20 batting and Test match bowling.

This England team is the high watermark so far. Their aggression does not come from slogging. They bat with a combination of intent and skills that allows them to coast at 6-an-over, without needing to take too many risks. In normal conditions, thus, they can coast to 300 – any hitting they do beyond that is the bonus that takes them to 350 or 400. It’s a whole new level, illustrated by the fact that at one point a few days ago, they had seven consecutive scores of 300 to their name. Look at their scores over the last few years, in fact, and it is clear that this is the greatest batting side in the history of one-day cricket – by a margin.

There have been stumbles in this World Cup, but in the bigger picture, those are outliers. If England have a bad day in the final and New Zealand play their A-game, England might even lose today. But if Captain Morgan’s men play their A-game, they will coast to victory. New Zealand does not have those gears. No other team in the world does – for now.

But one day, they will all have to learn to play like this.

The India Uncut Blog © 2010 Amit Varma. All rights reserved.
Follow me on Twitter.




or

Virtuoso Studio: How Do You Name Simulation Histories in Virtuoso ADE Assembler?

This blog describes an efficient way to name the histories saved by the simulation runs in Virtuoso ADE Assembler.




or

Start Your Engines: Create and Insert Connect Modules for Mixed-Signal Verification

Read this blog to know how you can easily create and insert connect modules using Spectre AMS Designer with the Verilog-AMS standard language defined by Accellera.




or

Virtuoso Studio IC 23.1: Using Net Tracer for Design Review

This blog explores how Virtuoso Studio Net Tracer can help you perform a design review.

We’ll use the net connectivity option, which gives the user a clean highlighted net. You can use the Net Tracer tool to highlight nets; the Net Tracer command is under the Connectivity pulldown menu in the layout window.

With the Trace Manager and the ability to display different islands of the same net in different colors, you can identify and connect the unconnected islands as you wish.

The Net Tracer utility traces the nets in the physical view (layout). The trace is a highlighted net, which is a non-selectable object. The Net Tracer utility is available from Virtuoso Layout Suite XL onwards. You can use this utility based on your specific needs and preferences.

For a better understanding of the Net Tracer feature, let’s look at a scenario between a circuit designer and a layout engineer during a layout design review.

Circuit designer: Can we go through the routed input nets “inm” and “inp”?

Layout engineer: In the layout view below, they are highlighted using XL connectivity; today I will use the Net Tracer utility for the design review.

Circuit designer: I have never heard of this feature. Let's see how it works.

Layout engineer: Sure, now we turn on the Net Tracer toolbar using the below option.

You see the Net Tracer options form here:

As you can see on my screen, I have opened the layout view and engaged the Net Tracer utility.

Net Tracer allows shapes to be traced on a net in two tracing modes, namely, physical and logical, where shapes on the same net are physically or logically connected.

Physical tracing gathers all the shapes physically connected on the same net.

Logical tracing gathers all the shapes assigned to the same net. It highlights the net as in the source design (schematic). It will highlight shapes on the same net, even if they are isolated shapes that are not physically connected.

For this scenario, let us use physical tracing for input nets “inm” and “inp."

Highlighted nets are shown below:

Net “inm”                    Net “inp”                   Nets “inm” and “inp” 

      

Net Tracer has features like physical and logical tracing, preview, step-by-step mode, ease of tracing a net on a shape out of multiple underlying shapes, and so on.

Let us explore logical tracing for output nets “outm” and “outp”:

Here, you can see how to enable true color and halo before enabling logical tracing to identify the metal route. After enabling the true color halo, enable the logical trace.

Here, I open the Trace Manager, search for “outm” and “outp”, and click Trace, which traces these nets as shown.

Net Tracer has a preview feature that shows the number of previewed objects and hints at how the trace will appear when you create it. This useful feature in Virtuoso Studio highlights both completed and incomplete nets, helping the user better understand the status of the highlighted nets.

Circuit designer: Thanks for the design review. You have done good work. Net Tracer clearly shows both types of tracing, and it was even easy for the circuit designer to understand.

Layout engineer: Let me share the link to the Net Tracer RAK, where other layout engineers can explore many more amazing features of the Net Tracer.

Do You Have Access to the Cadence Support Portal?

If not, follow the steps below to create your account.

  • On the Cadence Support portal, select Register Now and provide the requested information on the Registration page.
  • You will need an email address and host ID to sign up.
  • If you need help with registration, contact support@cadence.com.

To stay up to date with the latest news and information about Cadence training and webinars, subscribe to the Cadence Training emails.

If you have questions about courses, schedules, online, public, or live onsite training, reach out to us at Cadence Training.

For any questions, general feedback, or future blog topic suggestions, please leave a comment.

Become Cadence Certified

Cadence Training Services now offers digital badges for this training course. These badges indicate proficiency in a certain technology or skill and give you a way to validate your expertise to managers and potential employers. You can highlight your expertise by adding these digital badges to your email signature or any social media platform, such as Facebook or LinkedIn. To become Cadence Certified, you can find additional information here.

Related Resources

 Videos

Invoking the MarkNet, Net Tracer command and its options

Net Tracer Features

Video: Net Tracer saving and loading saved trace, neighboring shapes of trace

Net Tracer: Physical Tracing – Step mode

Net Tracer: Physical and Logical Tracing

Video: Net Tracer show preview option, from net and display options, shape count in trace

Video: Net Tracer using a constraint group with different display mode settings and  using the Trace Manager GUI

 RAK

Introduction to Net Tracer

 Product manual

Virtuoso Layout Suite XL: Connectivity Driven Editing User Guide IC23.1

About Knowledge Booster Training Bytes

Knowledge Booster Training Bytes is an online journal that relays information about Cadence Training videos, online courses, and upcoming webinars that are available in the Learning section of the Cadence Learning and Support portal. This blog category brings you direct links to these videos, courses, and other related material on a regular basis.

Sandhya.

On behalf of the Cadence Training team




or

PCB Chamfering Board edge connectors

Hi 

I am looking into chamfering the edge of a PCB for board edge connectors. I have used the fillet command before but am new to chamfering.

Below is the description :

As seen in the image, the PCB edges are chamfered through the board thickness as well as at the corners.

Using OrCAD PCB hotfix S023.




or

10 Layer PCB project won't generate Gerber's completely for middle layers

Hello Fellow PCB Designers,

We have a 10-layer PCB design that originated in PADS and was converted to Allegro 17.4. This is an old design, but it is manufacturable and works perfectly fine. When I try to generate a Gerber for the Top or Bottom layers, the Gerber comes out fine. But most of the middle layers are etch and vias for power and grounds, and their Gerbers come out mostly blank; there might be some details, but in the Gerber view everything is displayed correctly.

The design does have many close spacings. I have not changed anything in the constraint manager yet and have turned off a lot of the DRCs, but I am thinking there might be something wrong with the constraints. I find that the CSet is set to 2_18, and I am not sure yet what this means; there are also many of these definitions (PCS 3, 4, 5, etc.) that are the same as CSet 2_18. Any suggestions would be great. We are currently looking into this and have seen that even a small change in the constraint manager can cause long processing times and even Allegro crashes, and this is a large project.

Thanks much, Mike Pollock.




or

How to transfer custom title block from Orcad Capture to PCB Editor

Hi,

So I was trying to update the title block of a schematic that I have. The title block that was on there was out of date. I clicked Place --> Title Block and was able to find the title block that I need; I also have a .OLB file that contains that title block. Then I created a netlist with the old BRD file as the input file (to keep it as is but carry over the changes), but when I do that I still do not see / cannot place the title block that I need. Under Place --> Format Symbols in Allegro, I do see a title block coming from the database (but it's the old one). I don't know what to do at this point and would appreciate any tips. I did make sure that the path to where the library is located is defined in the user preferences.
I also tried copying the title block into the library folder in Capture before creating my netlist, and that did not work either.

Thank you all.




or

Purging duplicate vias in pcb editor

How do we purge/remove duplicated vias at the same location in PCB Editor? These vias are not stacked vias; they are just blind vias running on internal layers 12-14. I find there is an additional copy of the blind via at the same location. I'm not sure what caused this issue.




or

Launch footprint editor from Capture or PCB Editor?

I'd like to be able to edit a footprint for a part in my design without needing to find the footprint filepath and directly open that file in PCB Editor. I see that I can view footprints from Capture, and that doing so shows me the footprint path, but I can't find any way to launch the editor. Is there any way to go directly from a part in a Capture schematic or a placed part in a PCB Editor board design to editing that part's footprint?




or

Orcad PCB (allegro) not using GPU over USB

Hi,

I have a monitor plugged to my laptop using a HDMI to USB adapter. When using this adapter, Allegro runs very slowly. It seems that it is not using my video card.

Is this a known issue with a workaround I can try?

Thanks,

Michael




or

Can I align pin numbers in edit part windows in Orcad Capture?

Hello..

I'm updating a part in the part editor in OrCAD Capture, and I wonder how to align the pin numbers using a menu or a Tcl/Tk command.

Please, let me know. Thank you.




or

Datasheets for the differences between Allegro PCB and OrCAD Professional

Hi All

I am looking for the functions that differ between OrCAD Professional and the Allegro tiers.

Is there any resource?

Regards




or

17.4 Design Sync Fails without providing errors

As the title suggests, I am unable to perform a design sync between OrCAD Capture and Allegro. When I add a layout and try to sync to it, I am given ERROR(ORCAP-2426): Cannot run Design Sync because of errors. See session log for error details.

Session Log

[ORPCBFLOW] : Invoking ECO dialog.
INFO(ORNET-1176): Netlisting the design
INFO(ORNET-1178): Design Name:
C:\USERS\DDOYLE\DOCUMENTS\CADENCE\BOARDS\REMOTE POWER DEVICE\CAPTURE\REMOTE_POWER_DEVICE.DSN
Netlist Directory:
c:\users\ddoyle\documents\cadence\boards\remote power device\layout\allegro
Configuration File:
C:\Cadence\SPB_17.4\tools/capture/allegro.cfg
pstswp.exe -pst -d "C:\USERS\DDOYLE\DOCUMENTS\CADENCE\BOARDS\REMOTE POWER DEVICE\CAPTURE\REMOTE_POWER_DEVICE.DSN" -n "c:\users\ddoyle\documents\cadence\boards\remote power device\layout\allegro" -c "C:\Cadence\SPB_17.4\tools/capture/allegro.cfg" -v 3 -l 31 -s "" -j "PCB Footprint" -hpath "HPathForCollision"
Spawning... pstswp.exe -pst -d "C:\USERS\DDOYLE\DOCUMENTS\CADENCE\BOARDS\REMOTE POWER DEVICE\CAPTURE\REMOTE_POWER_DEVICE.DSN" -n "c:\users\ddoyle\documents\cadence\boards\remote power device\layout\allegro" -c "C:\Cadence\SPB_17.4\tools/capture/allegro.cfg" -v 3 -l 31 -s "" -j "PCB Footprint" -hpath "HPathForCollision"
{ Using PSTWRITER 17.4.0 d001Dec-14-2021 at 09:00:49 }

INFO(ORCAP-36080): Scanning netlist files ...

Loading... c:\users\ddoyle\documents\cadence\boards\remote power device\layout\allegro\pstchip.dat

Loading... c:\users\ddoyle\documents\cadence\boards\remote power device\layout\allegro\pstchip.dat

Loading... c:\users\ddoyle\documents\cadence\boards\remote power device\layout\allegro\pstxprt.dat

Loading... c:\users\ddoyle\documents\cadence\boards\remote power device\layout\allegro\pstxnet.dat
packaging the design view...
Exiting... pstswp.exe -pst -d "C:\USERS\DDOYLE\DOCUMENTS\CADENCE\BOARDS\REMOTE POWER DEVICE\CAPTURE\REMOTE_POWER_DEVICE.DSN" -n "c:\users\ddoyle\documents\cadence\boards\remote power device\layout\allegro" -c "C:\Cadence\SPB_17.4\tools/capture/allegro.cfg" -v 3 -l 31 -s "" -j "PCB Footprint" -hpath "HPathForCollision"
INFO(ORNET-1179): *** Done ***

This issue started to occur after I changed parts that exist on previously created PCBs. I changed the following leading up to this:

1. Added height in Allegro to many of my components using the Setup->Area->Package Height tool.

2. Changed the reference designator category in OrCAD Capture to TP for several components on board.

Any advice here would be most welcome. Thanks!




or

The default location of orCAD Capture library Pin Number is incorrect

The default position of the pin number is incorrect.




or

Allegro part of DPI does not support scaling above 150%

Allegro part of DPI does not support scaling above 150%




or

Migrating from files Orcad Layout 16.2

I have managed to convert our old schematic and PCB files from Layout 16.2 to 17.4.

I have exported the footprints and moved them to the correct lib directory. 

I get no DRC errors and I can build a new netlist file. The problem is I can't get the PCB editor to update using the new netlist and get the following error:

I cannot figure out how to fix the 'Name is too long' error.

(---------------------------------------------------------------------)
(                                                                     )
(    Allegro Netrev Import Logic                                      )
(                                                                     )
(    Drawing          : 70055R2.brd                                   )
(    Software Version : 17.4S023                                      )
(    Date/Time        : Tue Dec 14 18:54:25 2021                      )
(                                                                     )
(---------------------------------------------------------------------)


------ Directives ------------

Ripup etch:                  Yes
Ripup delete first segment:  No
Ripup retain bondwire:       No
Ripup symbols:               IfSame
Missing symbol has error:    No
DRC update:                  Yes
Schematic directory:         'C:/AFS/70055 PCB Test 2'
Design Directory:            'C:/AFS/70055 PCB Test 2'
Old design name:             'C:/AFS/70055 PCB Test 2/70055R2.brd'
New design name:             'C:/AFS/70055 PCB Test 2/70055R2.brd'

CmdLine: netrev -$ -i C:/AFS/70055 PCB Test 2 -x -u -t -y 2 -h -z -q netrev_constraint_report.xml C:/AFS/70055 PCB Test 2/#Taaaaae57776.tmp

------ Preparing to read pst files ------

Starting to read C:/AFS/70055 PCB Test 2/pstchip.dat 
   Finished reading C:/AFS/70055 PCB Test 2/pstchip.dat (00:00:00.02)
Starting to read C:/AFS/70055 PCB Test 2/pstxprt.dat 
   Finished reading C:/AFS/70055 PCB Test 2/pstxprt.dat (00:00:00.00)
Starting to read C:/AFS/70055 PCB Test 2/pstxnet.dat 
   Finished reading C:/AFS/70055 PCB Test 2/pstxnet.dat (00:00:00.00)

------ Oversights/Warnings/Errors ------


#1   ERROR(SPMHNI-176): Device library error detected.

ERROR(SPMHNI-189): Problems with the name of device 'SW DPDT_9_SWITCH_OTTO_ALT_SW DPDT': 'Name is too long.'.

ERROR(SPMHNI-170): Device 'SW DPDT_9_SWITCH_OTTO_ALT_SW DP' has library errors. Unable to transfer to Allegro.

#2   ERROR(SPMHNI-176): Device library error detected.

ERROR(SPMHNI-189): Problems with the name of device 'SW DPDT_10_SWITCH_OTTO_LIGHTS_SW DPDT': 'Name is too long.'.

ERROR(SPMHNI-170): Device 'SW DPDT_10_SWITCH_OTTO_LIGHTS_S' has library errors. Unable to transfer to Allegro.

#3   ERROR(SPMHNI-176): Device library error detected.

ERROR(SPMHNI-189): Problems with the name of device 'SW DPDT_7_SWITCH_OTTO_ALT_SW DPDT': 'Name is too long.'.

ERROR(SPMHNI-170): Device 'SW DPDT_7_SWITCH_OTTO_ALT_SW DP' has library errors. Unable to transfer to Allegro.

#4   ERROR(SPMHNI-176): Device library error detected.

ERROR(SPMHNI-189): Problems with the name of device 'SW DPDT_3_SWITCH_OTTO_MASTER_SW DPDT': 'Name is too long.'.

ERROR(SPMHNI-170): Device 'SW DPDT_3_SWITCH_OTTO_MASTER_SW' has library errors. Unable to transfer to Allegro.

#5   ERROR(SPMHNI-176): Device library error detected.

ERROR(SPMHNI-189): Problems with the name of device 'SW DPDT_6_SWITCH_OTTO_LIGHTS_SW DPDT': 'Name is too long.'.

ERROR(SPMHNI-170): Device 'SW DPDT_6_SWITCH_OTTO_LIGHTS_SW' has library errors. Unable to transfer to Allegro.

#6   ERROR(SPMHNI-176): Device library error detected.

ERROR(SPMHNI-189): Problems with the name of device 'SW DPDT_3_SWITCH_OTTO_MASTER_DPDT': 'Name is too long.'.

ERROR(SPMHNI-170): Device 'SW DPDT_3_SWITCH_OTTO_MASTER_DP' has library errors. Unable to transfer to Allegro.

#7   ERROR(SPMHNI-176): Device library error detected.

ERROR(SPMHNI-189): Problems with the name of device 'CONNECTOR DB15_DSUBVPTM15_CONNECTOR DB15': 'Name is too long.'.

ERROR(SPMHNI-170): Device 'CONNECTOR DB15_DSUBVPTM15_CONNE' has library errors. Unable to transfer to Allegro.

#8   ERROR(SPMHNI-176): Device library error detected.

ERROR(SPMHNI-189): Problems with the name of device 'CONNECTOR DB9_DSUBVPTM9_CONNECTOR DB9': 'Name is too long.'.

ERROR(SPMHNI-170): Device 'CONNECTOR DB9_DSUBVPTM9_CONNECT' has library errors. Unable to transfer to Allegro.

#9   ERROR(SPMHNI-175): Netrev error detected.

ERROR(SPMHDB-195): Error processing 'M6': Text line is outside of the extents..

------ Library Paths ------
MODULEPATH =  . 
           C:/Cadence/SPB_17.4/share/local/pcb/modules 

PSMPATH =  . 
           symbols 
           .. 
           ../symbols 
           C:/Cadence/SPB_17.4/share/local/pcb/symbols 
           C:/Cadence/SPB_17.4/share/pcb/pcb_lib/symbols 
           C:/Cadence/SPB_17.4/share/pcb/allegrolib/symbols 
           C:/Cadence/SPB_17.4/share/pcb/pcb_lib/symbols 

PADPATH =  . 
           symbols 
           .. 
           ../symbols 
           C:/Cadence/SPB_17.4/share/local/pcb/padstacks 
           C:/Cadence/SPB_17.4/share/pcb/pcb_lib/symbols 
           C:/Cadence/SPB_17.4/share/pcb/allegrolib/symbols 
           C:/Cadence/SPB_17.4/share/pcb/pcb_lib/symbols 


------ Summary Statistics ------


#10  Run stopped because errors were detected

netrev run on Dec 14 18:54:25 2021
   DESIGN NAME : '70055R2'
   PACKAGING ON Nov  2 2021 14:32:04

   COMPILE 'logic'
   CHECK_PIN_NAMES OFF
   CROSS_REFERENCE OFF
   FEEDBACK OFF
   INCREMENTAL OFF
   INTERFACE_TYPE PHYSICAL
   MAX_ERRORS 500
   MERGE_MINIMUM 5
   NET_NAME_CHARS '#%&()*+-./:=>?@[]^_`|'
   NET_NAME_LENGTH 24
   OVERSIGHTS ON
   REPLACE_CHECK OFF
   SINGLE_NODE_NETS ON
   SPLIT_MINIMUM 0
   SUPPRESS   20
   WARNINGS ON

 10 errors detected
 No oversight detected
 No warning detected

cpu time      0:00:27
elapsed time  0:00:00




or

Allegro 17.4 always reports new files as created in 17.2

Hello. I am using Cadence 17.4 tools. When I open a package symbol (.dra) or board file (.brd) in Allegro that was created in an older version of the tool I get a message like this one (as expected):

"The design created using release 17.2 will be updated for compatibility with the current software..."


If I create a symbol or board file from scratch in the 17.4 tool then open it later, I get the same message. (always referring to version 17.2 which is the previous version I was using here).

So far this has not caused me any problems, but I would like to understand why it is doing this in case I have something setup incorrectly.

I only have version 17.4 installed. I am not exporting to a downrev version when I save (i.e. not using File->Export->Downrev design…) and in User Preferences->Drawing I don’t have anything selected for database_compatibility_mode. What else might I check?

FYI here is the tool version information that I see after selecting Help->About Symbol:

OrCAD PCB Designer Standard 17.4-2019 S012 [10/26/2020] Windows SPB 64-bit Edition


Thanks -Jason




or

Create bounding shape for arcs

When using Shape > Create Bounding Shape on an arc, the outer side works well, but on the inner side it just draws a straight line from the beginning to the end of the curve. Is anyone aware of a fix for this?

I'm attaching  a picture as an example, it works great on lines.