
read two values from a text file and represent them as voltage signals on two different ports, a and b

I want to read two values from a text file and drive them on two ports. I wrote the code below, but I get the error shown in the image below, and the data in the text file is shown in the screenshot.

 


// Verilog-A module that reads two values from a text file at the start of the
// simulation and drives them as voltages on ports a and b.
`include "constants.vams"
`include "disciplines.vams"   // required for the 'electrical' discipline

module read_file (a, b);

electrical a, b;
integer in_file_0, valid;
integer int_value, count0;

analog begin
  @(initial_step) begin
    // Open the data file for reading
    in_file_0 = $fopen("/home/hh1667/ee610/my_library/read_file/data2.txt", "r");
    // Read the two comma-separated values; "%b" expects binary digits, so use
    // "%d,%d" instead if the file stores decimal numbers
    valid = $fscanf(in_file_0, "%b,%b", int_value, count0);
  end

  // Drive the values read from the file onto the two ports
  V(a) <+ int_value;
  V(b) <+ count0;

end

endmodule





Genus: Generated netlist doesn't define subckts

Dear all, 

I'm trying to perform an LVS check using Calibre between a layout that was generated by Innovus and the initial netlist generated by Genus. However, once I hit Run LVS in Calibre, it reports the following warnings and recommends stopping the process:

Source netlist references but does not define more than 10 subckts:
DFD1BWP7T
DFKCND1BWP7T
DFKCNQD1BWP7T
DFKSND1BWP7T
DFQD1BWP7T
IND2D0BWP7T
INR2D0BWP7T
INVD0BWP7T
INVD2P5BWP7T
IOA21D0BWP7T
... (and more)

If I proceed with the LVS run anyway, it shows lots of errors, as shown in the following image:

Why doesn't Genus include the definitions of those subcircuits in the generated netlist? Is this related to flat versus hierarchical netlisting?

I have included my Genus scripts as well as the generated netlist in the attachments (and here, in case the attachments don't work).

Many thanks,

Anas





Request information on Tools

We are looking for suitable tools that could be used for RTL design, IP-XACT-based integration (third-party IP), and RTL design verification (SV/UVM-based methodology).

Please share details on the Cadence tools that are most suitable for these activities.





Conformal LEC can't finish at analyze abort step. How do I proceed?

Hi Cadence & forumers, 

I am running Conformal LEC with a flattened netlist against RTL.

The run has been hanging for 5 days at the "analyze abort" step, which is launched automatically by the compare command.

The netlist is flattened at some levels, so the hierarchical flow I tried didn't help much. The flattened, highly optimized netlist is from the customer and is the ultimate target. How shall I proceed now?

On a side note, a test run with a hierarchical netlist from a simple DC "compile -map_effort medium" command finished after a day or so.

Thank you! 

// Command: vpx compare -verbose -ABORT_Print -NONEQ_Print -TIMEstamp
// Starting multithreaded comparison ...
Comparing 241112 points in parallel.

// Multithreading Overhead: 38% Gates: 8501606/6168138
// Multithreaded processing completed.
================================================================================
Compared points      PO      DFF    DLAT   BBOX    CUT     Total
--------------------------------------------------------------------------------
Equivalent         1025   241638      30     75     21    242789
--------------------------------------------------------------------------------
Abort                 0      124       0      0      0       124
================================================================================
Compare results of instance/output/pin equivalences and/or sequential merge
================================================================================
Compared points     DFF    Total
--------------------------------------------------------------------------------
Equivalent          204      204
================================================================================
// Warning: 512 DFFs/DLATs have 1 disabled clock port: skipped data cone comparison
// Resolving aborts by analyze abort...





How to generate "Sheet Name" column in a pin report?

Hi everyone, 

Is there any method to generate a "Sheet Name" column for a pin report like the table below? The "Name.Pin" and "Signal" columns can be generated easily, but I have no idea how to generate the "Sheet Name" column.

The software used here is Allegro Design Entry HDL, OrCAD Capture, and Allegro PCB Editor. Can these three tools generate "Sheet Name" data?

Name.Pin   Signal      Sheet Name
C1_1.1     N301321     SITE1_1
C1_1.2     GND_ANA_1   SITE1_1
C1_2.1     N180243     SITE2_1
C1_2.2     GND_ANA_2   SITE2_1

Thank you. 





How to identify old Orcad Schematic entry version


Good morning,
I dug up an old project from 2005, and I need to open the schematic to check a few things.
This is the schematic of a XILINX XC95108-pq160 CPLD, which the XILINX ISE 6.1 software then translated and compiled to generate a JEDEC file for programming the CPLD.

My problem is that I can't open schematics with the versions of Orcad Schematic Entry that I have.
Can anyone help me understand which version of Orcad Schematic Entry I need to install to see these files?

I shared the files on:
drive.google.com/.../view

Thank you very much





copy paste circuit from one schematic design to another

Hi, I have two designs and would like to copy and paste one area of circuitry from the old design into the new design. What is the best way or approach? Any guidance is appreciated.





how to tell conformal to ignore certain combination of input

hi

How can I tell the LEC tool to ignore a particular combination of a primary input bus in both the Golden and Revised designs?

For example, in both Golden and Revised there is

input [3:0] data_in

I want LEC not to check the case that data_in[3:0] == 4'b1000





Want to use Transmission Gate in my design?

I want to use a transmission gate in my design, but it is not available as a standard cell for Genus RTL synthesis. How can I perform an analysis of area, power, and critical path delay that includes the transmission gate alongside standard cells?

Could you provide guidance or a methodology for integrating custom cells, like the transmission gate, into the synthesis flow for accurate analysis?





asking about some functions that we don't know exist

We have a big circuit with 12K gates in total, and we are trying to show it visually on a one-page slide. But it is hard for us to shrink it down from gate level to module level. Do you have any functions like these:

  • Toggle wires on and off
  • “Right click” elements and group them into black boxes
  • Quickly left or right align elements to clean up pictures





5X “Time Warp” in Your Next Verification Cycle Using Xcelium Machine Learning

Artificial intelligence (AI) is everywhere. Machine learning (ML) and its associated inference abilities promise to revolutionize everything from driving your car to making your breakfast. Verification is never truly complete; it is over when you run...(read more)





Data Integrity for JEDEC DRAM Memories

 

With DRAM fabrication advancing from 1x to 1y to 1z and further to 1a, 1b, and 1c nodes, and with DRAM device speeds going up to 8533 for LPDDR5 and 8800 for DDR5, data integrity is becoming a critically important issue. OEMs and other users have to consider it as part of any system that relies on the correctness of the data stored in the DRAMs in order to work as designed.

It’s a complicated problem that requires multiple ways to deal with it.

Traditionally, one of the main approaches to dealing with data errors is to rely on ECC. ECC requires additional memory storage in which the ECC codes are calculated and stored at the time of a memory write to the DRAM. These codes are read back along with the memory data during reads and checked against the data to make sure there are no errors. Typical ECC schemes use a Hamming code that provides single-bit error correction and double-bit error detection per burst. Also, while several previous generations of DRAM required the host to set aside system memory for ECC storage, the latest DRAMs such as LPDDR5 and DDR5 support on-die ECC as part of normal DRAM function, which can be enabled using mode registers. DDR5 further requires the host to run an ECC Error Check and Scrub (ECS) cycle on average every tECSint (Average Periodic ECS Interval) to prevent data errors.
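
As a toy illustration of the single-error-correct/double-error-detect (SECDED) behavior that such Hamming-based schemes provide, the Python sketch below encodes 8 data bits with Hamming parity bits plus an overall parity bit and classifies read-back errors. This is a generic, assumption-level illustration; real on-die ECC codeword widths, codes, and burst handling are vendor-specific, and all names below are made up.

# Toy SECDED (single-error-correct, double-error-detect) model in the spirit of the
# Hamming-based ECC schemes described above. Illustrative only: real DRAM on-die ECC
# uses vendor-specific codeword widths and burst handling.

def hamming_secded_encode(data8):
    # Encode 8 data bits into a 13-bit codeword: Hamming(12,8) plus an overall parity bit.
    assert 0 <= data8 < 256
    data_positions = [3, 5, 6, 7, 9, 10, 11, 12]        # non-power-of-two positions hold data
    word = [0] * 13                                      # indexes 1..12 = Hamming code, 0 = overall parity
    for i, pos in enumerate(data_positions):
        word[pos] = (data8 >> i) & 1
    for p in (1, 2, 4, 8):                               # each parity bit covers positions with bit p set
        parity = 0
        for pos in range(1, 13):
            if (pos & p) and pos != p:
                parity ^= word[pos]
        word[p] = parity
    word[0] = sum(word[1:]) & 1                          # overall parity over bits 1..12
    return word

def hamming_secded_check(word):
    # Return ('ok' | 'corrected' | 'double_error', possibly corrected codeword).
    syndrome = 0
    for p in (1, 2, 4, 8):
        parity = 0
        for pos in range(1, 13):
            if pos & p:
                parity ^= word[pos]
        if parity:
            syndrome |= p
    overall_bad = (sum(word[1:]) & 1) != word[0]
    if syndrome == 0 and not overall_bad:
        return "ok", word
    if overall_bad:                                      # single-bit error: correctable
        fixed = list(word)
        fixed[syndrome] ^= 1                             # syndrome 0 means the overall parity bit itself flipped
        return "corrected", fixed
    return "double_error", word                          # non-zero syndrome, overall parity consistent

codeword = hamming_secded_encode(0xA5)
codeword[6] ^= 1                                         # inject a single-bit error
print(hamming_secded_check(codeword)[0])                 # prints: corrected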

Not meeting the DRAM refresh requirement is a major cause of data loss. This can be challenging because PVT variation can cause the refresh requirement to change over time. Putting the DRAM in Self-Refresh mode can help offload refresh-tracking responsibilities to the DRAM, but it may prevent the host from making other scheduling optimizations and should be considered carefully.

Some of the other things that can affect DRAM data are:

  1. Row hammer, where the same or adjacent rows are activated again and again, leading to loss or corruption of data in rows that have not been addressed. The latest DRAMs, such as LPDDR5 and DDR5, support Refresh Management (including DRFM and ARFM), which allows the host to compensate for these problems by issuing dedicated RFM commands, helping the DRAM deal with potential data loss arising from row hammer attacks.
  2. Device temperature is another important factor that the host needs to be aware of, especially if the application requires the DRAM to operate at elevated temperatures. The user needs to check with the DRAM vendor on the temperature range in which the DRAM can still operate. Data integrity above a certain temperature threshold is not assured regardless of refresh rate unless the DRAM is manufactured to withstand it.
  3. Loss of power to the DRAM will cause it to lose all its contents. If this is a real concern, the system designer should consider using NVDIMM-N devices, which have an on-chip controller and a power source just large enough to allow the DRAM contents to be copied into a backup non-volatile memory before power is lost. When power is restored, the contents stored in the non-volatile memory are written back to the DRAM, and the system can continue to operate as it was before the power-loss event occurred.

For transmission and manufacturing errors, DRAMs support additional features like CRC, DFE, pre-emphasis, and PPR, which will be covered in the next blog.

Cadence MMAV VIPs for DDR5, DDR5 DIMM, and LPDDR5 are comprehensive VIP solutions and support all of the above-listed data integrity features, including ECC error injection and SBE correction/DBE detection, to assist with the verification challenges around data integrity.

More information on Cadence DDR5/LPDDR5 VIP is available at Cadence VIP Memory Models Website.

Shyam 





Cadence in Collaboration with Arm Ensures the Software Just Works

The increase in compute and data-intensive applications and the need for lower power consumption have resulted in a rapidly growing number of Arm-based devices in various market segments; this requires fast time to market (TTM) and support for off-t...(read more)





Jasper C2RTL App for Datapath Verification

Ensuring that the RTL designs correctly implement the C++ algorithmic intent in every circumstance is difficult to achieve with conventional verification. Learn more about how the Jasper C2RTL App helps perform equivalence checking with a 100x performance improvement. (read more)






Coalesce Xcelium Apps to Maximize Performance by 10X and Catch More Bugs

Xcelium Simulator has been in the industry for years and is the leading high-performance simulation platform. As designs are getting more and more complex and verification is taking longer than ever, the need of the hour is plug-and-play apps that ar...(read more)





Achieve 80% Less Late-Stage RTL Changes and Early RTL Bug Detection

It has become challenging to ensure that the designs are complete, correct, and adhere to necessary coding rules before handing them off for RTL verification and implementation. RTL Designer Signoff Solution from Cadence helps the user identify RTL bugs at a very early development stage, saving a lot of effort and cost for the design and verification team. Our reputed customers have confirmed that using RTL signoff for their design IP helped save up to 4 weeks and reduce the late-stage RTL changes by up to 80%.(read more)



  • Jasper RTL Designer Signoff App
  • Jasper
  • Early Bug Detection


Moving Beyond EDA: The Intelligent System Design Strategy

Rising customer expectations, intermingling fields, and high-performance needs can be satisfied with system-based design. An intelligent system design strategy can offer a quicker route to an optimal design and helps increase designers' productivity and analysis efficiency by providing the ability to explore the entire design space. Cadence's Intelligent System Design strategy enables a system design revolution and reduces project schedules with optimized continuous integration. (read more)





IC Packagers: Workflows That Work for You

New IC packaging workflows in Cadence Allegro X layout tools allow you to follow a guided path from starting a design through final manufacturing. The path is there to ensure that you don’t miss steps and perform actions in the optimal order. W...(read more)





OrCAD X – The Anytime Anywhere PCB Design Platform

OrCAD X is the next-generation integrated PCB design platform. It brings to you a powerful cloud-enabled design solution that includes design and library data management integrated with the proven PCB design and analysis product portfolio of Cad...(read more)





The Mechanical Side of Multiphysics System Simulation

Introduction

Multiphysics is an integral part of the concepts around digital twins. In this post, I want to discuss the mechanical aspects of multiphysics in system simulations, which are critical for 3D-IC, multi-die, and chiplet design.

The physical world in which we live is growing ever more electrified. Think of the transformation that the cell phone has brought into our lives, as has the present-day migration to electric vehicles (EVs). These products are not only feats of electronic engineering but of mechanical engineering as well, as the electronics find themselves in new and novel forms such as foldable phones and flying cars (eVTOLs). Here, engineering domains must co-exist and collaborate to bring about the best end products possible.

Start with the electronics—chips, chiplets, IC packaging, PCB, and modules. But now put these into a new form factor that can be dropped or submerged in water or accelerated along a highway. What about drop testing, aerodynamics, and aeroacoustics? These largely computational fluid dynamics (CFD) and/or mechanical multiphysics phenomena must also be accounted for. And then how does the drop testing impact the electrical performance? The world of electronics and its vast array of end products is pushing us beyond pure electrical engineering to be more broadly minded and develop not only heterogeneous products but heterogeneous engineering teams as well.

Cadence's Unique Expertise

It's at this crossroad of complexity and electronic proliferation that Cadence shines. Let's take, for example, the latest push for higher-performing high-bandwidth memory (HBM) devices and AI data center expansion. These technologies are growing from several layers to 12, and I can't emphasize enough the importance of teamwork and integrated solutions in tackling the challenges of advanced packaging technologies and how collaboration is shaping the future of semiconductor innovation and paving the way for cutting-edge developments in the industry.

These layered electronics are powered, and power creates heat. Heat needs to be understood, and thus, the thermal integrity issues uncovered along the way must be addressed. However, electronic thermal issues are just the first domino in a chain of interdependencies. What about the thermal stress and warpage that can be caused by the powering of these stacked devices? How does that then lend to mechanical stress and even material fatigue as the temperature cycles from high to low and back through the use of the electronic device? This is just one example in a long list of many...

Cadence Multiphysics Analysis Offerings

The confluence of electrical, mechanical, and CFD is exactly why Cadence expanded into multiphysics at a significant rate starting in 2019 with the announcement of the Clarity 3D Solver and Celsius Thermal Solver products for electromagnetic (EM) and thermal multiphysics system simulations. Recent acquisitions of Numeca, Pointwise, and Cascade (now branded within Cadence as the Fidelity CFD Platform) as well as Future Facilities (now the Cadence Reality Digital Twin product line) are all adding CFD expertise. The recent addition of Beta CAE brings mechanical multiphysics to the suite of solutions available from Cadence. The full breadth of these multiphysics system analyses, spanning EM, thermal, signal integrity/power integrity (SI/PI), CFD, and now mechanical, creates a platform for digital twinning across a wide array of applications. You can learn more by viewing Cadence's Reality Digital Twin platform launch on the keynote stage at NVIDIA's GTC in March, as well as this Designed with Cadence video: NV5, NVIDIA, and Cadence Collaboration Optimizes Data Centers.

Conclusion

Ever more sophisticated electronic designs are in demand to fulfill the needs of tomorrow's technologies, driving a convergence of electrical and mechanical aspects of multiphysics in system simulations. To successfully produce the exciting new products of the future, both domains must be able to collaborate effectively and efficiently. Cadence is fully committed to developing and providing our customers with the software products they need to enable this electrical/mechanical evolution. From EM, to thermal, to SI/PI, CFD, and mechanical, Cadence is enabling digital twinning across a wide array of applications that are forging pathways to the future.

For more information on Cadence's multiphysics system analysis offerings, visit our webpage and download our brochure.





Accelerate PCB Documentation in OrCAD X Presto with Live Doc

Live Doc is an advanced automated PCB documentation generation tool integrated with OrCAD X Presto designed to streamline the creation of PCB documentation. By automating the generation of PCB fabrication and assembly drawings, Live Doc significantly...(read more)





Relative delay analysis is impacted by pbar

Does anyone know how to exclude a pbar from a Constraint Manager analysis? I have some relative delay constraints applied to a group of differential nets. When I analyze the design, these all show an error. If I delete the plating bar from the design, they all pass. The plating bar is generated on the Substrate Geometry / Plating_Bar class. I understand that I could just delete the plating bar to verify the constraint, but the issue is that when I archive this design, I would like it to be clean, meaning it is in the final state for manufacturing AND passing all constraints according to design reviews.

Anyone have an idea? 

Thank you!





What is Allegro X Advanced Package Designer and why do I not see Allegro Package Designer Plus (APD+) in 23.1?

Starting SPB 23.1, Allegro Package Designer Plus (APD+) has been rebranded as Allegro X Advanced Package Designer (Allegro X APD).

The splash screen for Allegro X APD will appear as shown below, instead of showing APD+ 2023:

For the Windows Start menu in 23.1, it will display as Allegro X APD 2023 instead of APD+ 2023, as shown below

23.1 Start menu 

In the Product Choices window for 23.1, you will see Allegro X Advanced Package Designer in the place of Allegro Package Designer +, as shown below: 

23.1 product title





How to access the Transmission Line Calculator in Allegro X APD

Have you ever wished for a handy utility that lets you specify all the necessary transmission line parameters when deciding on a stackup?

Starting with SPB 23.1, a handy feature, the Transmission Line Calculator, is built into Allegro X Advanced Package Designer (Allegro X APD). This feature requires either an SiP Layout license or can be accessed through the SiP Layout Bundle.

From the Analyze dropdown menu in the 23.1 Allegro X APD toolbar, you can choose Transmission Line Calculator. 

 

You can use this calculator to help decide constraints and stackup for laminate-based PCBs or packages. You can calculate the correct stackup material and width/spacing to meet requirements that may later be entered as a constraint. Note that this is a calculated number, not the result of a true field solver.
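
As a rough illustration of the kind of closed-form estimate such a calculator performs (the exact formulas used by the tool are not documented here, so treat this as an assumption), the widely used IPC-2141 approximation for surface microstrip impedance can be written as a small Python helper:

import math

def microstrip_z0(er, h, w, t):
    # Surface microstrip characteristic impedance (ohms) via the IPC-2141 approximation.
    # er: relative permittivity, h: dielectric height, w: trace width, t: trace thickness
    # (all lengths in the same unit). Reasonable only for roughly 0.1 < w/h < 2.0 and 1 < er < 15.
    return (87.0 / math.sqrt(er + 1.41)) * math.log(5.98 * h / (0.8 * w + t))

# Example: FR-4-like dielectric (er ~ 4.2), 10 mil dielectric, 18 mil trace, 1.4 mil copper
print(round(microstrip_z0(4.2, 10.0, 18.0, 1.4), 1))   # roughly 49 ohms

A dedicated calculator or field solver accounts for effects this simple formula ignores (frequency dependence, coupling, and conductor roughness), which is why the tool also offers the coupled and coplanar variants listed below.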

The different types of calculations that the Transmission Line Calculator can provide are Microstrip, Embedded microstrip, Stripline, CPW (Coplanar), FGCPW (frequency-dependent Coplanar), Asymmetric stripline, Coupled microstrip (Differential Pair), Coupled stripline (Differential Pair), and Dual striplines.

This feature is important for customers who rely on fabricators or spreadsheets to provide this information, or who need to quickly test a spacing/width against a target impedance value.

Let us know your comments on this new feature in 23.1 Allegro X APD. 

 





Creating Power and Ground rings in Allegro X Package Designer Plus

Power and Ground rings are exposed rings of metal surrounding a die that supply power/ground to the die and create a low-impedance path for the current flow. These rings ensure stable power distribution and reduce noise. Allegro X Package Designer Plus has a utility called Power/Ground Ring Generator which lets you define and place one or more shapes in the form of a ring around a die.

 To run the PWR/GND Generator Wizard, go to Route > Power/Ground Ring Generator or type "pring wizard" in the APD command window to invoke the Wizard.

   

The Power/Ground Ring Wizard creates up to 12 rings (shapes) at a time. If you require more rings, you can run the Power/Ground Ring Wizard as many times as needed. This command displays a wizard in which you can specify:

  • The number of rings to be generated
  • The creation of the first ring as a die flag (Die flag is the boundary of the die like the power ring.)
    • If you create a die flag and the first ring is the same net as the flag, you can enter a negative distance to overlap the ring and the die flag.
  • Multiple options for placement of the rings with respect to:
    • Origination point
    • Distance from the edge of the die
    • Distance from the nearest die pin on each die side
  • The reference designator of the die with which the rings will be used
  • The distance between rings
  • The width of each ring
  • The corner types on each ring (arc, chamfer, and right-angle)
  • An assigned net name for each ring
  • A label for each ring

The rings are basic in nature. For other shape geometries or split rings, choose Shape > Polygon or Shape > Compose/Decompose Shape from the menu in the design window.

Depending on the options selected, the Power/Ground Ring Wizard UI changes, representing how the rings will be created. Verify the Wizard settings to ensure that the rings are created as intended.

  1. When the Power/Ground Ring Wizard appears, set the number of rings to 2, accept the other defaults, and click Next. You can set Create first ring as die flag to create a basic die flag.
  2. Define Ring 1 and the net associated with it.
       a) Browse and choose Vss in the Net Names dialog box.
       b) Click OK.
       c) Specify the label as VSS.
       d) Click Next.
     The first ring should appear in your design. It is associated with the proper net; in this case, VSS.
  3. For the second ring, choose the net as Vdd and specify the label as VDD.
  4. Click Next.
  5. Click Finish in the Result Verification screen to complete the process.

The completed rings appear as shown below.

Now, when you click on Power and Ground Die Pin and add wirebonds, you will see that the wirebonds are placed directly on the Power and Ground rings.





Allegro X APD : Tip of the Week: ‘Auto-blank other rats’ feature

When working on a complex design, it is common to have a very large number of ratlines; quantities like 1,000 ratlines are possible. This can result in a cluttered view while routing. Therefore, it is useful to make all other ratlines invisible while routing interactively and to make them visible again when each route action is completed.

You can easily do this by enabling the Auto-blank other rats option during routing. When enabled, all rats other than the primary ones are suppressed during the Add Connect command.





Database Maintenance: DBDoctor

The DBDoctor application checks the database for errors and other problems, and presents a report about them. DBDoctor supports .brd, .mcm, .mdd, .psm, .dra, .pad, .sav, and .scf databases.

DBDoctor can:

  • Analyze and fix database problems.
  • Eliminate duplicate vias.
  • Perform batch design rule checking (DRC).
  • Upgrade databases more than one revision old.

To verify the integrity of a drawing database at any time during the design cycle, run DBDoctor at regular intervals, and always run it after completing a design.

You can run DBDoctor to verify work in progress, or from a terminal window outside the layout editor, perhaps to check multiple input designs in batch mode by using wildcards and various switches. You do not have to run the layout editor to use DBDoctor.

To run this from Allegro X APD and Allegro PCB Editor, go to Tools > Database Check.

  

You can also go to the Start menu and select Cadence PCB Utilities 2023 > PCB DB Doctor 2023.

  

You can also use the following command to run DBDoctor in batch mode in the system command prompt:

dbdoctor [-check_only] [-drc] [-drc_only] [-shapes] [-no_backup] [-outfile <newboardname.brd>]

 

Comment below if you want to know more about this command and its integration with SKILL programming!!





Use Verisium SimAI to Accelerate Verification Closure with Big Compute Savings

Verisium SimAI App harnesses the power of machine learning technology with the Cadence Xcelium Logic Simulator - the ultimate breakthrough in accelerating verification closure. It builds models from regressions run in the Xcelium simulator, enabling the generation of new regressions with specific targets. The Verisium SimAI app also features cousin bug hunting, a unique capability that uses information from difficult-to-hit failures to expose cousin bugs. With these advanced machine learning techniques, Verisium SimAI offers the potential for a significant boost in productivity, promising an exciting future for our users.

Figure 1: Regression compression and coverage maximization with Verisium SimAI 

What can I do with Verisium SimAI?

You can exercise different use cases with Verisium SimAI as per your requirements. For some users, the goal might be regression compression and improving coverage regain. Coverage maximization and hitting new bins could be another goal. Other users may be interested in exposing hard-to-hit failures and bug hunting for difficult-to-find issues. Verisium SimAI allows users to take on any of these challenges to achieve the desired results.

Let's go into some more details of these use cases and scenarios where using SimAI can have a big positive impact.

  1. Using SimAI for Regression Compression and Coverage Regain

Unlock up to 10X compute savings with SimAI!

Verisium SimAI can be used to compress regressions and regain coverage. This flow involves setting up your regression environment for SimAI, running your random regressions with coverage and randomization data, followed by training, and finally synthesizing and running the SimAI-generated compressed regressions. The synthesized regression may prune tests that do not help meet the goal, add more runs for the most relevant tests, and add run-specific constraints. This flow can also be used to target specific areas, such as those with high code churn or high complexity.

You can check out the details of this flow with illustrative examples in the following Rapid Adoption Kits (RAK) available on the Cadence Learning and Support Portal (Cadence customer credentials needed):

 

  2. Using SimAI for Coverage Maximization and Targeting Coverage Holes

Reduce your Functional Coverage Holes by up to 40% using SimAI!

Verisium SimAI can be used for iterative coverage maximization. This is most effective when regressions are largely saturated, and SimAI will explicitly try to hit uncovered bins, which may be hard-to-hit (but not impossible) coverage holes. This is achieved using iterative learning technology where with each iteration, SimAI does some exploration and determines how well it performed. This technique can also be used for bug hunting by using holes as targets of interest.

See more details on the Cadence Learning and Support Portal:

 

  3. Using SimAI for Bug Hunting

Discover and fix bugs faster using SimAI!

Verisium SimAI has a new bug hunting flow which can be used to target the goal of exposing hard-to-hit failure conditions. This is achieved using an iterative framework and by targeting failures or rare bins. The goal to target failures is best exercised when the overall failure rate is typically low (below 5%). Iterative learning can be used to improve the ability to target specific areas. Use the SimAI bug hunting use case to target rare events, low hit coverage bins, and low hit failure signatures.

See more details on the Cadence Learning and Support Portal:

Unlock compute savings, reduce your functional coverage holes, and discover and fix bugs faster with the power of machine learning technology now enabled by Verisium SimAI!

Please keep visiting  https://support.cadence.com/raks to download new RAKs as they become available.

Please note that you will need the Cadence customer credentials to log on to the Cadence Online  Support  https://support.cadence.com/, your 24/7 partner for getting help in resolving issues related to Cadence software or learning Cadence tools and technologies.

Happy Learning!





Flow Control Credit Updates in PCIe 6.1 ECN

As technology continues to evolve at a rapid pace, the importance of robust and efficient interconnect standards cannot be overstated. Peripheral Component Interconnect Express (PCIe) has been a cornerstone in high-speed data transfer, enabling seamless communication between various hardware components.   

With the advent of PCIe 6.1 ECN, a significant advancement in speed and efficiency, ensuring the accuracy and reliability of its operations is paramount. One critical aspect of this is the verification of shared credit updates. For a detailed understanding of shared credit, please refer to Understanding PCIe 6.0 Shared Flow Control.

In this blog, we will discuss why this verification is essential and what it entails.  

Introduction 

PCIe 6.1 ECN brings numerous advancements over earlier versions, such as increased bandwidth and faster data transfer speeds.   

A crucial mechanism for efficient data transmission in PCIe 6.0 is the credit-based flow control system. In this system, devices monitor credits, representing the buffer capacity available for incoming data.   

When a device transmits data, it uses credits, which are replenished or adjusted once the data is received and processed. This system ensures that the sender does not overload the receiver.  
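
The mechanism described above can be pictured with a minimal, purely illustrative model: the receiver advertises credits equal to its free buffer slots, the transmitter spends one credit per packet and stalls at zero, and credits are returned as the receiver drains its buffer. This is an assumption-level sketch for intuition only; real PCIe flow control tracks separate credit types (posted, non-posted, and completion, with header and data credits) and its own unit sizes.

# Minimal model of credit-based flow control: the receiver advertises credits,
# the transmitter consumes one per packet, and credits return as packets drain.
from collections import deque

class Receiver:
    def __init__(self, buffer_slots):
        self.buffer = deque()
        self.free_slots = buffer_slots            # advertised as the initial credits

    def accept(self, pkt):
        assert self.free_slots > 0, "transmitter overran the receiver"
        self.buffer.append(pkt)
        self.free_slots -= 1

    def drain_one(self):
        # Process one packet and return one credit to the transmitter (like an update FC message).
        self.buffer.popleft()
        self.free_slots += 1
        return 1

class Transmitter:
    def __init__(self, initial_credits):
        self.credits = initial_credits

    def try_send(self, pkt, rx):
        if self.credits == 0:
            return False                           # must wait for a credit update
        self.credits -= 1
        rx.accept(pkt)
        return True

rx = Receiver(buffer_slots=2)
tx = Transmitter(initial_credits=2)
print(tx.try_send("TLP0", rx), tx.try_send("TLP1", rx), tx.try_send("TLP2", rx))  # True True False
tx.credits += rx.drain_one()                       # receiver frees a slot and returns a credit
print(tx.try_send("TLP2", rx))                     # True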

Given the critical role of shared credit updates in maintaining the integrity and efficiency of data transfers, verification of these updates is crucial.  Proper management of credit updates is essential to ensure data integrity, as any discrepancies can lead to data loss, corruption, or system crashes.   

Verification also guarantees efficient resource allocation, preventing scenarios where some components are starved of credit while others have an excess, thus avoiding inefficiencies.  Credit inefficiencies pose issues in low power negotiations by preventing devices from entering low power states. Additionally, verification involves checking for proper error handling mechanisms, ensuring that the system can recover gracefully from errors in credit updates and maintain overall stability.   

PCIe 6.1 ECN Flow Control Optimizations Over PCIe 6.0

PCIe 6.1 ECN builds on the FLIT-based architecture introduced in PCIe 6.0, further optimizing flow control mechanisms to handle increased data rates and improved efficiency.  PCIe 6.1 ECN introduced refinements in credit management, making the allocation and advertisement of credits more precise, which helps in reducing bottlenecks and improving data flow efficiency.  Enhancements in flow control protocols ensure better management of buffer spaces and more efficient credit allocation. These enhancements are designed to handle the increased data rates and throughput demands of next-generation applications, ensuring robust and efficient data flow across PCIe devices.  

Below are some major updates: 

  1. There have been improvements in error detection and correction mechanisms in PCIe 6.1 ECN to enhance flow control reliability by ensuring that corrupted data packets are detected and handled appropriately without disrupting the flow of valid packets.  
  2. The merged credit system, which was a key feature introduced in PCIe 6.0 to simplify and optimize credit management, was further enhanced in PCIe 6.1 ECN to improve performance and efficiency.
  3. PCIe 6.1 ECN introduced better algorithms for allocating and reclaiming merged credits to handle high data rates and introduced more robust error detection and correction mechanisms, reducing degradation and system instability.
  4. PCIe 6.1 ECN provided clear guidelines on how to implement the merged credit system correctly, helping developers to implement more reliable systems. For more details, please refer to Specifications section 2.6.1 Flow Control (FC) Rules.

Summary 

In summary, PCIe 6.0 is a complex protocol with many verification challenges. You must understand many new spec changes and think about a robust verification plan for the new features and for the backward-compatible tests impacted by new features. Cadence's PCIe 6.0 Verification IP is fully compliant with the latest PCI Express 6.0 specifications and provides an effective and efficient way to verify the components interfacing with the PCIe 6.0 interface. Cadence VIP for PCIe 6.0 provides exhaustive verification of PCIe-based IP and SoCs, and we are working with early adopter customers to speed up every verification stage.

More Information

For more info on how Cadence PCIe Verification IP and Triple Check VIP enable users to confidently verify PCIe 6.0, see VIP for PCI Express, VIP for Compute Express Link  and TripleCheck for PCI Express  

See the PCI-SIG website for more details on PCIe in general and the different PCI standards.  

For more information on PCIe 6.0 new features, please visit PCIe 6.0 Lane Margin and Demonstrating PCIe 6.0 Equalization Procedure.





Training Insights – Palladium Emulation Course for Beginner and Advanced Users

The Cadence Palladium Emulation Platform is a hardware system that implements the design, accelerating its execution and verification. It offers the highest performance and fastest bring-up times for pre-silicon validation of billion-gate designs, using a custom processor built by Cadence.

This Palladium Introduction course is based on the Palladium 23.03 ISR4 version and covers the following modules:

  • Introduction
  • Palladium flow
  • Running a design on the Palladium system

This course starts with an “Introduction” module that explains Palladium and other verification platforms to show its place in the big picture. It also compares Palladium with Protium and simulation and discusses its usage and limitations.

The “Palladium Flow” module includes two stages at a high level, which are Compile and Run. Then, it covers these stages in detail. First, it covers the ICE compile flow and IXCOM compile flow steps in detail. Then it explains Run, which is common for both ICE and IXCOM modes.

The third module, “Running Design on the Palladium System,” covers all the items required for running your design on the Palladium system, including:

  • Software stack requirements
  • Basic concepts required to understand the flow
  • Compute machine requirements

In addition, this course contains labs for both the ICE and IXCOM flows with detailed steps to exercise the features provided by the Palladium system. The lab explains a practical example of multiple counters and exercising their signals for force, monitor, and deposit features, along with frequency calculation using a real-time clock. The course is available on the Cadence support page:

There is also a Digital Badge available. You will find the Badge exam opportunity when you enroll in the Online training or after you have taken the training as "live" training.

For questions and inquiries, or issues with registration, reach out to us at Cadence Training. Want to stay up to date on webinars and courses? Subscribe to Cadence Training emails. To view our complete training offerings, visit the Cadence Training website.

Related Training Bytes

Related Courses

Related Blogs





Jasper Formal Fundamentals 2403 Course for Starting Formal Verification

The course "Jasper Formal Fundamentals v24.03" introduces formal analysis to those who want to use formal analysis for design or verification. 

To optimally benefit from this course, you must already have sufficient knowledge of SystemVerilog assertions to be capable of writing properties for formal verification. Hence, this training provides a module on formal analysis to help cover this essential background.

In this course, you will learn how to code efficient SVA Properties for formal analysis, understand formal complexity and how to overcome it, and learn the basics of formal coverage.

After completing this course, you will be able to:

  • Define reusable, functionally correct SVA properties that are efficient for formal tools. These shall use abstract auxiliary code to simplify descriptions, make code maintenance easier, reduce debug time, and reduce tool-proof runtime.
  • Set up, run, and analyze results from formal analysis.
  • Identify designs upon which formal is likely to be successful while understanding formal complexity issues and how to identify and overcome them.
  • Use a systematic property development process to approach a completely new verification problem.
  • Understand the basics of formal coverage.

 The most recently updated release includes new modules on:

  • "Basic complexity handling," which discusses complexity in formal and how to identify and handle it.
  • "Complexity reduction methods," which discusses the complexity reduction methods and which method is suitable for which type of complexity problem.
  • "Coverage in formal," which discusses the basics of coverage in formal verification and how coverage can be used in formal.

Take this course to learn the basics of formal verification. 

What's Next? 

You can check out the complete training: Jasper Formal Fundamentals. There is a free online version of the training available 24/7 for all customers with a Cadence Learning and Support Portal account. If you are interested in an instructor-led version of the training, please contact Cadence Training. And don't forget to obtain your digital badge after completing the training!

You can also check Jasper University page for more materials on formal analysis and Jasper apps. 

Related Trainings 

Jasper Formal Expert Training Course | Cadence

Verilog Language and Application Training Course | Cadence

SystemVerilog for Design and Verification Training Course | Cadence

SystemVerilog Assertions Training Course | Cadence

Related Training Bytes 

Jasper Formal Property Verification (FPV) App: Basic Usage Demo (Video)

Jasper Formal Methodology playlist

Related Training Blogs

It’s the Digital Era; Why Not Showcase Your Brand Through a Digital Badge!

Training Insights: Introducing the C++ Course for All Your C++ Learning Needs!

Training Insights: Reaching Your Verification Closure Using Verisium Manager

Training Insights - Free Online Courses on Cadence Learning and Support Portal





Partial Header Encryption in Integrity and Data Encryption for PCIe

Cadence PCIe/CXL VIP support for Partial Header Encryption in Integrity and Data Encryption.(read more)





Unveiling the Capabilities of Verisium Manager for Optimized Operations

In SoC development, the verification cycle is a crucial phase that ensures products meet their specifications and function correctly. However, the complexity of modern SoC projects, with their constant data flow, multiple validation teams working in parallel, and tight schedules, presents significant challenges. This article explores these challenges and introduces Verisium Manager as a solution that embodies the 'One Tool Fits All' concept. This means that Verisium Manager is designed to handle all aspects of the verification process for SoC development, from planning to coverage analysis to regression testing, thereby addressing the complex needs of SoC verification.

The Hurdles in Traditional Validation Cycles

 A typical validation process involves planning, coverage analysis, and regression testing. This complexity is compounded by using separate tools for each activity, leading to multiple control environments, APIs, and databases, not to mention the array of tool owners. Such fragmentation results in constant data transfer and translation between systems, from the planning tool to the coverage analysis tool and then to the regression testing tool. This continuous movement of data causes delays, system instability, poor user experiences, and, ultimately, a dip in the quality of the validation process.

The use of multiple platforms leads to inefficiency and reduced productivity. What's needed is a unified system that can streamline the workflow, simplify the verification process, and enhance its effectiveness.

Envisioning the Ideal Solution: Verisium Manager

 The cornerstone of an efficient validation cycle is integration and simplicity. The ideal solution is a singular platform that consolidates planning, coverage analysis, and regression management into one smooth, unified process. Verisium Manager emerges as this much-needed solution, encompassing all the functionalities necessary to streamline the validation process. Its comprehensive nature instills confidence in its ability to handle all aspects of the verification cycle. It can be fully customized to address and enforce any validation methodology and can facilitate smooth integration into any customer environment.

Features that stand out in Verisium Manager include: 

  • Unified Workflow: It acts as a single cockpit from which all activities are orchestrated, ensuring the validation teams' work is uninterrupted and seamlessly integrated.
  • Customization and Integration: Verisium Manager supports customizing test-plan structures and mapping results per project, ensuring a perfect fit for various project requirements. Its ability to smoothly integrate into the project's environment and compute platforms is unparalleled.
  • Support for Continuous Updates and Migration: The tool accommodates constant updates to project data and supports the migration of legacy data, ensuring that no historical data is lost in the transition to a new system.

Addressing Project-Specific Needs

 Verisium Manager recognizes diversity in different projects and offers project-specific solutions, including:

  • Enforcing Project Test-Plan Structures and Attributes: It supports and enforces each project's unique test-plan structure and mapping guidelines.

  • Unified Data Views and Measurements: Verisium Manager promotes a unified view of data across all teams and enforces unified measurements, ensuring consistency and clarity in the validation process.
  • Enabling Project-Specific Actions and Integrations: The tool is designed to support project-specific actions directly from its graphical user interface and allows for smooth integration with in-house databases, dashboards, and the project execution stack.

Verisium Manager is the epitome of efficiency in software/hardware validation. Its differentiating features, such as support for customization, unified data view, and comprehensive coverage and regression requirements, make it an indispensable tool for any validation team looking to elevate their workflow.





Deferrable Memory Write Usage and Verification Challenges

Real-time data processing and responsiveness are crucial in applications such as high-performance computing, data centers, and other systems requiring low-latency data transfers. Deferrable Memory Write enables efficient use of PCIe bandwidth and resources by intelligently managing memory write operations based on system dynamics and workload priorities. By effectively leveraging Deferrable Memory Write (DMWr), devices can achieve optimized performance and responsiveness, aligning with the evolving demands of modern computing applications.

What Is Deferrable Memory Write?

The Deferrable Memory Write (DMWr) ECN introduced this new memory transaction type, which was later officially incorporated into PCIe 5.0 and CXL 2.0. DMWr flows like the existing read/write memory transactions; the major difference is that the Requester attempts to write to a given location in Memory Space using the non-posted DMWr TLP type, and completion of the memory write can be postponed, or deferred, until other higher-priority tasks complete, improving overall system efficiency and performance.

The Deferrable Memory Write (DMWr) requires the Completer to return an acknowledgment to the Requester and provides a mechanism for the recipient to defer (temporarily refuse to service) the Request.

DMWr provides a mechanism for Endpoints and Hosts to choose to carry out or defer incoming DMWr Requests. This mechanism can be used by Endpoints and Hosts to simplify the design of flow control, reduce latency, and improve throughput. The Deferrable Memory Write TLP format is shown in Figure A.

 

(Fig A) Deferrable Memory writes TLP format.

Example Scenario

Here's how DMWr works with a simplified example: Imagine a system with an endpoint device (Device A) and a host CPU (Device B). Device B wants to write data to Device A's memory, but due to various reasons such as system bus congestion or prioritization of other transactions, Device A can defer the completion of the memory write request. Just follow these steps (a minimal sketch follows the list):

  1. Initiation of Memory Write: Device B initiates a memory write transaction to Device A. This involves sending the memory write request along with the data payload over the PCIe physical layer link.
  2. Acknowledgment and Deferral: Upon receiving the memory write request, Device A acknowledges the transaction but may decide to defer its completion. Device A sends an acknowledgment (ACK) back to Device B, indicating it has received the data and intends to complete the write operation but not immediately.
  3. Deferred Completion: Device A defers the completion of the memory write operation to a later, more opportune time. This deferral allows Device A to prioritize other transactions or optimize the use of system resources, such as memory bandwidth or processor availability.
  4. Completion and Response: At a later point, Device A completes the deferred memory write operation and sends a completion indication back to Device B. This completion typically includes any status updates or additional information related to the transaction.
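
The handshake in steps 1-4 can be sketched as a minimal model. This is illustrative only; the real exchange is carried in non-posted DMWr TLPs and completions, and all names below are made up.

# Minimal sketch of the acknowledge/defer/complete sequence described above.
import collections

class Completer:                                     # plays the role of Device A
    def __init__(self):
        self.memory = {}
        self.deferred = collections.deque()

    def receive_dmwr(self, addr, data, busy):
        if busy:
            self.deferred.append((addr, data))       # acknowledge the request but defer the write
            return "ack_deferred"
        self.memory[addr] = data
        return "completed"

    def service_deferred(self):
        # Later, when resources free up, finish the deferred writes and return completions.
        completions = []
        while self.deferred:
            addr, data = self.deferred.popleft()
            self.memory[addr] = data
            completions.append(("completion", addr))
        return completions

dev_a = Completer()
print(dev_a.receive_dmwr(0x1000, 0xCAFE, busy=True))   # ack_deferred
print(dev_a.service_deferred())                        # [('completion', 4096)]
print(hex(dev_a.memory[0x1000]))                       # 0xcafe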

Usage or Importance of DMWr

Deferrable Memory Write provides improvements in the following aspects:

  • Reduced Latency: By deferring less critical memory write operations, more critical transactions can be processed with lower latency, improving overall system responsiveness.
  • Improved Efficiency: Optimizes the utilization of system resources such as memory bandwidth and CPU cycles, enhancing the efficiency of data transfers within the PCIe architecture.
  • Enhanced Performance: Allows devices to manage and prioritize transactions dynamically, potentially increasing overall system throughput and reducing contention.

Challenges in the Implementation of DMWr Transactions

The implementation of deferrable memory writes (DMWr) introduces several advancements and challenges in terms of usage and verification:

  1. Timing and Synchronization: DMWr allows transactions to be deferred, which complicates meeting timing requirements and completing transactions within acceptable timing windows to avoid protocol violations. Ensuring proper synchronization between devices becomes critical to prevent data loss or corruption.
  2. Protocol Compliance: Verification must ensure compliance with ECN PCIe 6.0 and CXL specifications regarding when and how DMWr transactions can be initiated and completed.
  3. Performance Optimization: While DMWr can improve overall system performance by reducing latency, verifying its impact on system performance and ensuring it meets expected benchmarks is crucial.
  4. Error Handling: Handling errors related to deferred transactions adds complexity. Verifying error detection and recovery mechanisms under various scenarios (e.g., timeout during deferral) is essential.

Verification Challenges of DMWr Transactions

Verifying DMWr transactions involves checks covering function, timing, protocol compliance, performance, error scenarios, and intended security usage, as well as data integrity, at both the PCIe and CXL levels.

  1. Functional Verification: Verifying the correct implementation of DMWr at both ends of the PCIe link (transmitter and receiver) to ensure proper functionality and adherence to specifications.
  2. Timing Verification: Validating timing constraints associated with deferring writes and ensuring transactions are completed within specified windows without violating protocol rules.
  3. Protocol Compliance Verification: Checking that DMWr transactions adhere to PCIe and CXL protocol rules, including ordering rules and any restrictions on deferral based on the transaction type.
  4. Performance Verification: Assessing the impact of DMWr on overall system performance, including latency reduction and bandwidth utilization, through simulation and testing.
  5. Error Scenario Verification: Creating and testing scenarios to verify error handling mechanisms related to DMWr, such as timeouts, retries, and recovery procedures.
  6. Security Considerations: Assessing potential security vulnerabilities related to DMWr, such as data integrity risks during deferred transactions or exposure to timing-based attacks.

The major verification challenge is timing and synchronization verification in the context of implementing deferrable memory writes (DMWr), which is crucial due to the inherent complexities introduced by deferred transactions. Here are the key issues and approaches to address them:

Timing and Synchronization Issues

  1. Transaction Completion Timing:
    • Issue: Ensuring deferred transactions are completed within the specified time window without violating protocol timing constraints.
    • Approach: Design an internal timer and checker to model worst-case scenarios where transactions are deferred and verify that they are complete within allowable latency limits. This involves simulating various traffic loads and conditions to assess timing under different scenarios.
  2. Ordering and Dependencies:
    • Issue: Verifying that transactions deferred using DMWr maintain the correct ordering and dependencies relative to non-deferred transactions.
    • Approach: Implement test scenarios that include mixed traffic of DMWr and non-DMWr transactions. Verify through simulation or emulation that dependencies and ordering requirements are correctly maintained across the PCIe link.
  3. Interrupt Handling and Response Times:
    • Issue: Verify the handling of interrupts and ensure timely responses from devices involved in DMWr transactions.
    • Approach: Implement test cases that simulate interrupt generation during DMWr transactions. Measure and verify the response times to interrupts to ensure they meet system latency requirements.

In conclusion, while deferrable memory writes in PCIe and CXL offer significant performance benefits, their implementation and verification present several challenges related to timing, protocol compliance, performance optimization, and error handling. Addressing these challenges requires rigorous testing with realistic testbench traffic, advanced verification methodologies, and a thorough understanding of the PCIe specifications; the motivation behind introducing Deferrable Writes is exploited further in CXL. The outcome of Deferrable Memory Write verification is confirmation that the performance benefits of DMWr (reduced latency, improved throughput) are achieved without compromising timing integrity or violating protocol specifications.

In summary, PCIe and CXL are complex protocols with many verification challenges. You must understand many new Spec changes and consider the robust verification plan for the new features and backward compatible tests impacted by new features. Cadence's PCIe 6.0 Verification IP is fully compliant with the latest PCIe Express 6.0 specifications and provides an effective and efficient way to verify the components interfacing with the PCIe 6.0 interface. Cadence VIP for PCIe 6.0 provides exhaustive verification of PCIe-based IP and SoCs, and we are working with Early Adopter customers to speed up every verification stage.

More Information





Wild River Collaborates with Cadence on CMP-70 Channel Modeling

Wild River Technology (WRT), the leading supplier of signal integrity measurement and optimization test fixtures for high-speed channels at data rates of up to 224G, has announced the availability of a new advanced channel modeling solution that helps achieve extreme signal integrity design to 70GHz. Read the press release. The CMP-70 program continues the industry-first simulation-to-measurement collaboration with Cadence that was initially established with the CMP-50. Significant resources were dedicated to the development of the CMP-70 by Cadence and WRT over almost three years. The CMP-70 will be on display at DesignCon 2025, January 28-30, in Cadence booth 827 to benchmark the Cadence Clarity 3D Solver.

"I am not a fan of hype-based programs that simply get attention," remarked Alfred P. Neves, WRT's co-founder and chief technical officer. "Both Cadence and Wild River brought substantial skills to the table in this project as we continued our industry-first simulation-to-measurement collaboration. The result is a proven, robust and accurate platform that brings extreme signal integrity to 70GHz designs. This application package has also been instrumental in demonstrating the robust 3D EM simulation capability of the Cadence Clarity solver."

"We're delighted to continue the joint development and validation program with WRT that started with the CMP-50," said Gary Lytle, product management director at Cadence. "The skilled and experienced signal integrity technologists that both companies bring to the program results in a superior signal integrity solution for our mutual customers."

CMP-70 Solution Features

The solution is available both in a standard configuration and as a custom solution for customer-specific stackups and fabrication. The primary target application is to support 3D EM solver analysis modeling versus the time- and frequency-domain measurement methodologies. The solution features include:

  • The CMP-70 platform, assembled and 100% TDR NIST-traceable tested, with custom stands
  • A Material Identification overview web-based meeting, including anisotropic 3D material identification
  • A cross-section PCB report and structures for using as-fabricated geometries
  • Measured S-parameters, pre-tested for quality (passivity/causality and resampled for time-domain simulations)
  • A host of novel crosstalk structures suited for 112G HD-level project analysis
  • PCB layout design files (NDA required)
  • An EDA starter library including loss models with industry-first accurate surface roughness models
  • Comprehensive training available for 3D EM analysis – correspondence, material ID in the X-Y and Z axes for a host of EDA tools

Industry-First Hausdorff Technique

The WRT application package also includes an industry-first modified Hausdorff (MHD) technique, included as MATLAB code. This algorithmic approach provides an accurate way to compare two sets of measurements in multi-dimensional space to determine how well they match. The technique is used to compare the results simulated by the Clarity solver with those measured on the CMP-70 platform. The methodology and initial results are shown in the figure below, where the figure of merit (FOM) is calculated from 10, to 35, and finally to 50GHz. The MHD algorithm requires a MATLAB license, but WRT also accommodates customer data as another option, where WRT provides the comparison between measured and simulated data.
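
As a rough, generic illustration of the Hausdorff idea (this is the classic directed Hausdorff distance, not WRT's modified MHD algorithm, which ships as MATLAB code), two point clouds representing simulated versus measured responses can be compared with SciPy; the data below is synthetic and purely for demonstration:

# Compare two point sets with the classic (unmodified) Hausdorff distance.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

freq = np.linspace(1.0, 50.0, 200)                           # GHz sweep
simulated = np.column_stack([freq, -0.4 * freq])              # idealized insertion loss (dB)
measured = np.column_stack([freq, -0.4 * freq + 0.05 * np.random.randn(200)])

d_sim_to_meas = directed_hausdorff(simulated, measured)[0]
d_meas_to_sim = directed_hausdorff(measured, simulated)[0]
print(f"Hausdorff distance (mismatch figure of merit): {max(d_sim_to_meas, d_meas_to_sim):.3f}")
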
Additional Resources

- If you are attending DesignCon 2025, be sure to stop by Cadence booth 827 to see WRT's CMP-70 advanced channel modeling solution in action with the Clarity 3D Solver.
- Check out our on-demand webinar, "Validating Clarity 3D Solver Accuracy Through Measurement Correlation."
- Learn more about the CMP-70 solution and the Clarity 3D Solver.
- For more information about Cadence's full suite of integrated multiphysics simulation solutions, download our Multiphysics System Analysis Solutions Portfolio.




at

Cadence Fem.AI Summit: A Journey of Inspiration

This year, the Cadence Giving Foundation (CGF) launched Fem.AI to achieve a more inclusive tech sector, and the inaugural Fem.AI Summit that took place on October 1 was a luminary in a world where technology is evolving at an unprecedented pace. The summit not only excelled in its mission to enlighten, empower, and mobilize stakeholders across various industries on the issue of gender disparity in high tech and AI, but was a celebration of innovation, diversity, and empowerment. As we reflect on the moments that made the summit unforgettable, it's clear that the event was more than just a meeting of minds—it was a movement for change! Shaping Tomorrow Together Cadence’s president and CEO, Anirudh Devgan, stated, “Women’s talent and perspectives are crucial to shaping the future of AI.” Devgan’s words epitomized the driving force behind the first-ever Fem.AI Summit which brought together innovators, educators, business leadership, and investors across industries to create an ecosystem that ensures women can fully participate in the AI revolution and burgeoning AI economy. The energy of pioneers ready to collectively disrupt the status quo filled the air, and as the day-long summit began, it became clear that we were part of something truly groundbreaking. The event's lineup of speakers held discussions that went beyond the technical aspects of AI, emphasizing the vital importance of diversity in technology. Such insights were lent by leading voices from MIT, Stanford, and UC Berkeley, who set the stage for inspiring discussions with speakers like Dr. Joy Buolamwini, Founder of the Algorithmic Justice League, and Reshma Saujani, Founder and CEO of Moms First and Girls Who Code. Included in this lineup of leading figures was Dr. Chelsea Clinton, Vice Chair of the Clinton Foundation, who left us with her hopes for the future of women in AI: “I’m hoping because of company-wide commitments like what we’re experiencing here today thanks to Cadence, that the people who will be part of designing [future technologies] will have a different group of people around the proverbial table or the computer screens doing that… and that women will be more integral into the conceptualization and then the actualization of AI-driven enterprises.” The hopes and visions for women in AI cannot manifest in a vacuum, they must be achieved with the support of individuals and systems from education all the way to the upper echelons of leadership. It is with this understanding, that Fem.AI is committed to investing in women at every stage of their STEM journey. Breaking Barriers It is with this ideal that we were honored to hear from women breaking through barriers of gender, race, and class in achieving pinnacles of success in areas of science and technology. Dr. Sarah H. Chen, Postdoctoral Researcher at Stanford and Thriving Stars Scholar at MIT, Niki Karanikola, Machine Learning Engineer and Break Through Tech AI Scholar at MIT, and Katya Echazarreta, NASA’s first Mexican Astronaut, showcased the resilience and determination that drive progress within and beyond our industry. Through their stories of persevering despite all odds, we were reminded that supporting students in STEM can create generational change with impacts beyond the realms of AI and technology. 
The final speaker at the Cadence Fem.AI Summit, the trailblazing Brandi Chastain, Founder of Bay FC, World Cup Champion, and Olympic Gold Medalist, left us with a powerful reminder that when faced with this opportunity: “Our purpose needs to be intentional” especially in building the future of technology and AI where “diversity is not something to be afraid of, but something to be embraced.” Echoing this sentiment, summit attendees left the event reminded of the crucial role we collectively play in ensuring women are part of this tech revolution. Moving Forward While the summit may have concluded, its impact will continue through individuals, companies, and communities aspiring to achieve an equitable tech sector. This is just the start, and we must take collective action now. We hope that you will join Cadence to ensure that we clear the path and catalyze women's role in the AI revolution! Meet Our Partners Our partners are making Fem.AI’s vision a reality through their important work advancing women in technology, including fostering STEM excellence in higher education, launching STEM careers, and achieving gender diversity in leadership. Learn more about the important work of each of our partners by visiting their pages: Break Through Tech Last Mile Education Fund Fast Forward Generation VC Include Global Semiconductor Alliance Join the Fem.AI Alliance Joining the Fem.AI Alliance signals that your company or institution is committed to evolving the AI workforce. By increasing the representation of women in AI, we aim to broaden the talent pool and the perspective so that AI represents us all. Through the Fem.AI Alliance, companies and institutions can share best practices, guidance, and inspiration. Since its launch, companies like the Equinix Foundation, NetApp, NVIDIA, Unity Technologies, and Workday have joined the Alliance in their commitment to Fem.AI’s work and mission. Visit Fem.AI to get involved today or contact Fem.AI@cadence.com .




at

Versatile Use Case for DDR5 DIMM Discrete Component Memory Models

DDR5 DIMM Architectures

The DDR5 generation of Double Data Rate DRAM memories has experienced rapid adoption in recent years. In particular, the JEDEC-defined DDR5 Dual Inline Memory Module (DIMM) cards have become a mainstay for systems looking for high-density, high-bandwidth, off-chip random access memory[1]. Within a short time, the DIMM architecture evolved from an interconnected hierarchy of only SDRAM memory devices (UDIMM[2]) to complex subsystems of interconnected components (RDIMM/LRDIMM/MRDIMM[3]).

DIMM Designs and Popular Verification Use Cases

The growing complexity of the DIMMs presented a challenge for pre-silicon verification engineers who could no longer simply validate against single DDR5 SDRAM memory models. They needed to consider how their designs would perform against DIMMs connected to each channel and operating at gigahertz clock speeds. To address this verification gap, Cadence developed DDR5 DIMM Memory Models that encapsulate all of the architectural complexities presented by real-world DIMMs, based on a robust, easy-to-use, easy-to-debug, and easy-to-reconfigure methodology. This memory-subsystem-in-a-single-instance model has seen explosive adoption among the traditional IP Developer and SoC Integrator customers of Cadence Memory Models.

The Cadence DIMM models act as a single unit with all of the relevant DIMM components instantiated and interconnected within, and with all AC/Timing parameters among the various components fully matched out of the box, based on JEDEC specifications as well as datasheets of actual devices in the market. The typical use case for the DIMM models has been where the DUT is a DDR5 Memory Controller + PHY IP stack, and the validation plan mandated compliance with the JEDEC standards and Memory Device vendor datasheets.

Unique Use Case for the DIMM Discrete Component Models

Although the Cadence DIMM models have enjoyed tremendous proliferation because of their cohesive implementation and unified user API, the actual DIMM models are built on top of powerful, flexible discrete component models, each of which was designed to stand on its own as a complete SystemVerilog UVM-based VIP. All of these discrete component models exist in the Cadence VIP Catalog as standalone VIPs, complete with their own protocol compliance checking capabilities and their own configuration mappings comprehensively modeling individual AC/Timing parameters.

Because of this deliberate design decision, the Cadence DIMM Discrete Component Models can support a unique use-case scenario. Some users seek to develop IC designs for the various DIMM components. Such users need verification environments that can model the individual components of a DIMM and allow them the option to replace one or another component with their Component Design IP. They can then validate that their component design is fully compatible with the rest of the components on the DIMM and meets the integrity of the overall DIMM compliance with JEDEC standards or Memory Vendor datasheets. The Cadence Memory VIP portfolio today includes various examples that demonstrate how customers can create DIMM “wrappers” by selecting from among the available DIMM discrete component models and “stitching” them together to build their own custom testbench around their specific Component Design IP.
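To make the "stitching" idea concrete, here is a minimal structural SystemVerilog sketch of such a wrapper. The module, instance, and port names (my_rcd_dut, ddr5_sdram_model, and so on) are hypothetical placeholders, not actual Cadence VIP component names; in a real testbench the component models would come from the VIP catalog and be configured per their documentation.

// Hypothetical sketch of a DIMM "wrapper" in which one discrete component
// (here the RCD) is replaced by the user's own design IP, while the rest of
// the DIMM is populated with component memory models.
// All module names below are placeholders, not real Cadence VIP names.
module dimm_wrapper (
  input  logic        ck_t, ck_c,     // host clock from the MC/PHY side
  inout  wire  [31:0] dq,             // data bus to/from the host
  input  logic [13:0] ca              // command/address from the host
);

  // Buffered command/address and clock between the RCD and the SDRAM ranks
  logic [13:0] qca;
  logic        qck_t, qck_c;

  // User's RCD design-under-test takes the place of the RCD component model
  my_rcd_dut u_rcd (
    .ck_t (ck_t), .ck_c (ck_c),
    .ca   (ca),
    .qck_t(qck_t), .qck_c(qck_c),
    .qca  (qca)
  );

  // Remaining DIMM components are discrete memory models (placeholders)
  for (genvar r = 0; r < 4; r++) begin : g_sdram
    ddr5_sdram_model u_sdram (
      .ck_t(qck_t), .ck_c(qck_c),
      .ca  (qca),
      .dq  (dq[r*8 +: 8])
    );
  end

endmodule

The point of the sketch is only to show how a single component DUT can be dropped into an otherwise model-populated DIMM; the surrounding AC/Timing configuration and protocol checking come from the discrete component models themselves.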
A Solution for Unique Component Scenarios

The Cadence DDR5 DIMM Memory Models and DIMM Discrete Component Models can provide users with a flexible approach to validating their specific component designs with a fully populated pre-silicon environment.

Augmented Verification Capabilities

When the DIMM “wrapper” model is augmented with the Cadence DFI VIP[4] that can simulate an MC+PHY stack and offers a SystemVerilog UVM test API to the verification engineer, the overall testbench transforms into a formidable pre-silicon validation vehicle. The DFI VIP is designed as a combination of an independent DFI MC VIP and a DFI PHY VIP connected to each other via the DFI Standard Interface and capable of operating seamlessly as a single unit. It presents a UVM Sequence API to the user into the DFI MC VIP, with the Memory Interface of the PHY VIP connected to the DIMM “wrapper” model. With this testbench in hand, the user can then fully take advantage of the UVM Sequence Library that comes with the DFI VIP to enable deep validation of their Component Design inside the DIMM “wrapper” model.

Verification Capabilities Further Enhanced

A possible further enhancement comes with the potential addition of an instance of the Cadence DIMM Memory Model in a Passive Monitor mode at the DRAM Memory Interface. The DIMM Passive Monitor consumes the same configuration describing the DIMM “wrapper” in the testbench, and thus can act as a reference model for the DIMM wrapper. If the DIMM Passive Monitor responds successfully to accesses from the DFI VIP, but the DIMM wrapper does not, then it exposes potential bugs in the DUT Components or in the settings of their AC/Timing parameters inside the DIMM wrapper.

Debuggability, Interface Visibility, and Protocol Compliance

One of the key benefits of the DIMM Discrete Component Models, whether in terms of the unique use-case scenario described here or when working with the wholly unified DDR5 DIMM Memory Models, is the increased debuggability of the protocol functionality. The intentional separation of the discrete components of a DIMM allows the user to have full visibility of the memory traffic at every datapath landmark within a DIMM structure. For example, in modeling an LRDIMM or MRDIMM, the interface between the RCD component and the SDRAM components, the interface between the RCD component and the DB components, and the interface between the SDRAM components and the DB components are all visible and accessible to the user. The user has full access to dump the values and states of the wire interconnects at these interfaces to the waveform viewer and thus can observe and correlate the activity against any protocol violations flagged in the trace logs by any one or more of the DIMM Discrete Component Models.

Access to these interfaces is freely available when using the DIMM Discrete Component Models. On the unified DDR5 DIMM Memory Models, a feature called Debug Ports enables the same level of visibility into the individual interconnects amidst the SDRAM components, RCD components, and DB components. When combined with the Waveform Debugger[5] capability that comes built in with the VIPs and Memory Models offered by Cadence and used with the Cadence Verisium Debug[6] tool, the enhanced debuggability becomes a powerful platform.
With these debug accesses enabled, the user can pull out transaction streams, chip state and bank state streams, mode register streams, and error message streams right next to their RTL signals in the same Verisium Debug waveform viewer window to debug failures all in one place. The Verisium Debug tool also parses all of the log files to probe and extract messages into a fully integrated Smart Log in a tabbed window fully hyperlinked to the waveform viewer, all at your fingertips.

A Solution for Every Scenario

Cadence's DDR5 DIMM Memory Models and DIMM Discrete Component Models, partnered with the Cadence DFI VIP, can provide users with a robust and flexible approach to validating their designs thoroughly and effectively in pre-silicon verification environments ahead of tapeout commitments. The solution offers unparalleled latitude in debuggability when the Debug Ports and Waveform Debugger functions of the Memory Models are switched on and boosted with the use of the Cadence Verisium Debug tool.

[1] Shyam Sharma, DDR5 DIMM Design and Verification Considerations, 13 Jan 2023.
[2] Shyam Sharma, DDR5 UDIMM Evolution to Clock Buffered DIMMs (CUDIMM), 23 Sep 2024.
[3] Kos Gitchev, DDR5 12.8Gbps MRDIMM IP: Powering the Future of AI, HPC, and Data Centers, 26 Aug 2024.
[4] Chetan Shingala and Salehabibi Shaikh, How to Verify JEDEC DRAM Memory Controller, PHY, or Memory Device?, 29 Mar 2022.
[5] Rahul Jha, Cadence Memory Models - The Gold Standard, 15 Apr 2024.
[6] Manisha Pradhan, Accelerate Design Debugging Using Verisium Debug, 11 Jul 2023.




at

Simulating Multiple Cadence DSPs as Multiple x86 Processes

An increasing number of embedded designs are multi-core systems. At the pre-silicon stage, customers use a simulation platform for architectural exploration and software development. Architects want to quantify the impact of the number of cores, local memory size, system memory latency, and interconnect bandwidth. Software teams wish to have a practical development platform that is not excruciatingly slow. This blog shares a recipe for simulating Cadence DSPs in a multi-core design as separate x86 processes. The purpose is to reduce simulation time for customers with simple multi-core models where cores interact only through shared memory. It uses a Vision Q8 multi-core design to share details of the XTSC (Xtensa SystemC) model, software application, commands, and debugging. Note that the details shared are for a simulation run on an Ubuntu Linux machine, Xtensa tools version RI-2023.11, and core configuration XRC_Vision_Q8_AODP.

Complex vs. Simple Model

A complex model (Figure 1) is one in which one core accesses another core's local memory, or there are inter-core interrupts. Simulation runs as a single x86 process.

Figure 1

A simple model (Figure 2) is one in which cores interact only through shared memory. Shared memory is a file on the Linux host.

Figure 2

Multiple x86 Processes – Simple Model

As depicted in Figure 3, each core is simulated using a separate x86 process. Cores use barriers and locks placed in shared memory for synchronization and data sharing. Locks are placed in uncached memory that supports exclusive subordinate access. The XTSC memory component, xtsc_memory, supports exclusive subordinate access. Cadence software tools provide a way to define memory regions as cached or uncached. For more details, please refer to Cadence's Linker Support Packages (LSP) Reference Manual for Xtensa SDK.

Figure 3

Demo Application

The demo application performs a 128x128 matrix multiplication. Work is divided so that each of the 32 cores computes four rows of the 128x128 result matrix. Cores use barriers to synchronize. Cadence tools provide APIs for synchronization and locking; please refer to Cadence's System Software Reference Manual for more details. Note that without a higher-level lock, prints from all cores will get mixed up. Therefore, in the demo application, only core#0 prints.

SystemC Simulation

The following sample command runs the 32-core simulation in such a way that each core is a separate x86 process. It runs the matrix multiplication application in cycle-accurate mode with logging off:

>>for (( N=0; N

The following command generates sc_main.cpp for core#0 without running a simulation:

>>xtsc-run -define=NumCores=32 -define=N=0 -define=LOGGING=0 -define=TURBO=0 --xxdebug=sync -i=coreNN.inc -sc_main=sc_main.cpp -no_sim

Modify the sc_main.cpp generated for core#0 to create a generic sc_main.cpp to build a single simulation executable for all cores. The Xtensa SDK includes Makefile targets to build custom simulations. By default, the simulation runs in cycle-accurate mode. Fast functional (Turbo) mode provides additional improvement over cycle-accurate mode. Note that the fast functional mode has an initialization phase, so gains are visible only when running an application with longer run times.

Simulation Wall Time

The table captures simulation wall time improvements. Note that these are illustrative wall time numbers. Actual wall time numbers and improvements will depend on your host machine's performance and your application.
Simulation Type                              | Wall Time     | Comments
Single process, cycle-accurate mode          | 17500 seconds |
Multiple x86 processes, cycle-accurate mode  | 1385 seconds  | 12X faster than single process
Multiple x86 processes, turbo mode           | 415 seconds   | 3X faster than cycle-accurate mode

Debugging

Attaching a debugger to each of the individual x86 core simulation processes is possible. Synchronous stop/resume and core-specific breakpoints are also supported. Configure the Xplorer launch configuration and attach it to the running simulation processes as follows (Figure 5).

Figure 5

Figure 6 shows 32 debug contexts.

Figure 6

As shown, using the Xtensa SDK, you can create a multi-core simulation that functions as a practical software development platform. Please visit the Cadence support site for information on building and simulating multi-core Xtensa systems.




at

Solutions to Maximize Data Center Performance Featured at OCP Global Summit 2024

The demand for higher compute performance, energy efficiency, and faster time-to-market drove the conversations at this year's Open Compute Project (OCP) Global Summit in San Jose, California. It was the scene for groundbreaking innovations, expert-led sessions, and networking opportunities to drive the future of data center technology. For those who didn't get to attend or stop by our booth, here's a recap of Cadence's comprehensive solutions that enable next-generation compute technology, AI data center design, analysis, and optimization.

Optimized Data Center Design and Operations

As the data center community increasingly faces demands for enhanced efficiency, thermal management, sustainability, and performance optimization, data center operators, IT managers, and executives are looking for solutions to these challenges. At the Cadence booth, attendees explored the Cadence Reality Digital Twin Platform and Celsius EC Solver. These technologies are pivotal in achieving high-performance standards for AI data centers, providing advanced digital twin modeling capabilities that redefine next-generation data center design and operation. The Celsius EC Solver demonstration showed how it solves challenging thermal and electronics cooling management problems with precision and speed.

CadenceCONNECT: Take the Heat Out of Your AI Data Center

Cadence hosted a networking reception on October 16 titled "Take the Heat Out of Your AI Data Center." In today's AI era, managing the heat generated by high-density computing environments is more critical than ever. This reception offered insights into current and emerging data center technologies, digital twin cooling strategies that deliver energy-saving operations, and a chance to engage with industry leaders, Cadence experts, and peers to explore the latest cooling, AI, and GPU acceleration advancements. Here's a recap:

- Researcher, author, and entrepreneur Dr. Jon Koomey highlighted the inefficiency of data centers in his talk "The Rise of Zombie Data Centers," noting that 20-30% of their capacity is stranded and unused. He advocated for organizational changes and technological solutions like digital twins to reduce wasted energy and improve computational effectiveness as AI deployments increase.
- In "A New Millennium in Multiphysics System Analysis," Cadence Corporate VP Ben Gu explained the company's significant strides in multiphysics system analysis, evolving from chip simulation to a broader application of computational software for simulating various physical systems, including entire data centers. He noted that the latest Cadence venture, a digital twin platform for data center optimization, opened the opportunity to use simulation technology to optimize the efficiency of data centers.
- Senior Software Engineering Group Director Albert Zeng highlighted the Cadence Reality DC suite's ability to transform data center operations through simulation, emphasizing its multi-phase engine for optimal thermal performance and the integration of AI capabilities for enhanced design and management.
- A panel discussion titled "Turning AI Factory Blueprints into Reality at the Speed of Light" featured industry experts from NVIDIA, Norman Wright Precision Environmental and Power, NV5, Switch Data Centers, and Cadence, who explored the evolving requirements and multidimensional challenges of AI factories, emphasizing the need for collaboration across the supply chain to achieve high-performing and sustainable data centers. Watch the highlights.
Transforming Designs from Chips to Data Centers

The OCP Global Summit 2024 has reaffirmed its status as a pivotal event for data center professionals seeking to stay at the forefront of technological advancements. Cadence's contributions, from groundbreaking digital twin technologies to innovative cooling strategies, have shed light on the path forward for efficient, sustainable data centers. For data center professionals, IT managers, and engineers, the insights gained at this summit are invaluable in navigating the challenges and opportunities presented by the burgeoning AI era.

Partnering with Arm

Arm Total Design

Cadence is a member of the Arm Total Design program. At an invitation-only special Arm event, Cadence's VP of Research and Development, Lokesh Korlipara, delivered a presentation focusing on data center challenges and design solutions with the Arm Neoverse Compute Subsystem (CSS). The session highlighted:

- Efficient integration of Arm Neoverse CSS into system on chips (SoCs) with pre-integrated connectivity IP
- Performance analysis and verification of the Neoverse CSS integration into the SoC through Cadence's System VIP verification suite and automated testbench creation, enhancing both quality and productivity
- Jumpstarting designs through Cadence's collaboration with Arm for 3D-IC system planning, chiplets, and interposers
- Design Services readiness and global scale to support and/or deliver the most demanding Arm Neoverse CSS-based SoC design projects

Cadence Supports Arm CSS in Arm Booth

During the event, Cadence conducted a demo in the Arm booth that showcased the Cadence System VIP verification suite. The demo highlighted automated testbench creation and performance analysis for integrating the Arm CSS into SoCs while enhancing verification quality and productivity.

Summary

Cadence offers data center solutions for designing everything from the compute and networking chips to the board, racks, data centers, and campuses. Stay connected with Cadence and other industry leaders to continue exploring the innovations set to redefine the future of data centers.

Learn More

- Cadence Joins Arm Total Design
- Cadence Arm-Based Solutions
- Cadence Reality Digital Twin Platform




at

Celebrating Milestones: The Cadence Bangalore Toastmasters Club’s Journey

On November 5, 2024, the Cadence Bangalore Toastmasters Club celebrated a significant milestone by hosting its 50th meeting. Established in December 2020, the club was created to provide a supportive environment for individuals looking to improve their communication and leadership skills. Over the years, the club has evolved into a vibrant community filled with success stories of personal development and newfound confidence. A testament to the club's dedication is its achievement of the "Select Distinguished Club" status during the 2023-2024 program year. By fulfilling 7 out of 10 distinguished goals, the club highlighted its commitment to excellence—a success driven by its vibrant members' relentless focus and perseverance. The strategic insight gained from regular Toastmasters committee meetings and the influential "Moments of Truth" sessions held in 2023 and 2024 are key to this success. Our club members have consistently demonstrated strong performance in various speech contests, with notable achievements across multiple levels. In 2023, members excelled in Evaluation and Table Topics contests, reaching the district level while advancing to the Division Level in the International Speech Contest. Continuing their success into 2024, members again qualified for area-level contests, securing third-place positions in the Evaluation and Table Topics categories, highlighting the club's dedication and competitive spirit. The 50th meeting was based on the theme of serendipity. It was not only a milestone celebration but also a vibrant festival of achievements and growth. The day buzzed with energy as activities like a spirited Treasure Hunt injected enthusiasm and camaraderie among attendees. Distinguished guests, including Kripa Venkitachalam and Madhavi Rao, enriched the occasion with inspiring speeches. Madhavi reignited the club's spirit, while Kripa's discourse on the Growth Mindset and the "Power of Yet" encouraged members to pursue continuous self-improvement. The Cadence Bangalore Toastmasters Club is enthusiastic about its promising future and is committed to creating an environment that promotes personal and professional growth. Many members are close to completing their Toastmasters levels and pathways, and this term, a new group of approximately 30 individuals has joined, bringing the total membership to 52. This vibrant community is just beginning its journey and is eager to reach new milestones together through mutual support and a shared commitment to excellence. The transformations experienced by many club members are truly compelling. They often share how the club has significantly improved their communication skills and boosted their confidence. One member recalls, "Before joining, I found public speaking intimidating. Now, I embrace every opportunity to share my ideas." Another member highlights how the club's supportive environment helped him overcome his fear of public speaking, propelling his career to new heights. This culture of constructive feedback and continuous improvement has inspired countless members to pursue their dreams with renewed determination and optimism. The Cadence Bangalore Toastmasters Club's journey is a living testament to the power of community and the potential within each of us to grow and achieve greatness. As the club continues to evolve and inspire, it serves as a beacon for those aspiring to transform their skills and seize their moment in the spotlight. Learn more about life at Cadence.




at

Cleared to Land: An Interview with Cadence Veterans ERG Lead Johnathan Edmonds

Each November, we are reminded of the bravery and dedication of those who have served our country. At Cadence, we thank our Veteran employees for their patriotism by reaffirming our commitment to honoring their sacrifices and recognizing their contributions to our business success. Our diverse and inclusive culture is strengthened by the unique perspective of our Veteran employees, and we are proud to support the Veterans Inclusion Group as a space for community members and their allies to connect. In celebration of Veterans Day, we were excited to catch up with Johnathan Edmonds, Veterans Inclusion Group Lead and Design Engineering Director, for a heartfelt chat on his journey through military service to leadership within Cadence. Throughout the conversation, he shared the importance of creating space for Veterans, the skills they offer, and his aspirations for what the Veterans Inclusion Group will achieve in the years ahead. Oh yeah, and he flies planes, too! Join us as we dive into what makes this holiday special for so many across the nation and how we can respectfully commemorate it together. Johnathan, you’re a retired Air Force Reservist, pilot, and now a Design Engineering Director. Can you tell us about your journey from the military to your current role at Cadence? I started my military and electronics journey in the Navy. I enlisted at 18 and served for six years as an aviation electronics technician. During this time, I was able to learn about and repair electronics on planes. This set me up for success, and when I was honorably discharged, I attended Virginia Tech to study computer engineering. Once I graduated, I continued my career as an engineer, but I still wanted to be a military pilot. From my past experience, I knew the reserves were an option where I could learn to fly and still have a civilian career. Not only was I lucky enough to get selected to go to pilot training, but after I returned from flight school, my luck grew, and I was hired at Cadence. Cadence has supported me throughout my military career, which has been a great benefit, as many companies don’t support reservists. The best thing about serving and being employed at Cadence is how I could blend my skill sets to further the Air Force’s mission and achieve great things in engineering. As the first lead of Cadence’s Veterans Inclusion Group, you played an integral part in growing our culture and building community at the company since launching the group four years ago. What inspired you to take on the role of Inclusion Group Lead? I was inspired by three things: camaraderie, service, and outreach. I wanted to see if we could achieve a similar sense of community through the Veterans Inclusion Group as we had during our service life. I also wanted to see how we could better serve our Veterans here at Cadence. I wanted to explore any benefits that could be expanded, roles that could be developed by Vets, and, lastly, I wanted to serve a broader community. COVID-19 put a damper on some of the community support, but we are getting back on track with Veteran employment programs and volunteer efforts like Carry the Load and Gold Star Families. Why is it important to have this space dedicated to Veteran employees? There are many reasons! Networking, for one, creates a stronger, more unified Cadence culture. Two, Vets face a variety of issues not generally understood by those who have not served, such as PTSD, where to get help for disabilities, how to get an old medical record, etc. 
As I mentioned, I’m also passionate about connecting Veterans with employment and job opportunities. It is so nice to work for a company that actively recruits Vets. We have our own “language,” if you will, so it’s nice to have a space to talk in the language that we are familiar with. What have been some of your favorite moments leading this group over the past few years? Are there any “wins” that you would like to recognize? We have a lot of wins. Events held during COVID-19 and getting past COVID-19, donating to worthwhile causes, and hosting guest speakers are all fantastic milestones and accomplishments. That said, the biggest win is the hiring of new Veteran employees. Mark Murphy, Corporate VP of Sales Operations, and I have both welcomed Vets to our team during this time, and it is such a joy to watch what someone can do when given the opportunity to succeed in the right environment. As you are set to transition out of the lead role next year, what do you hope to see the Veterans Inclusion Group accomplish next? My hope is that the Veterans Inclusion Group partners with other companies, expanding our reach externally and exploring new opportunities to engage Veterans outside of Cadence. Johnathan (left) speaks on an inclusion group panel, along with David Sallard (center), lead of Cadence's Black Inclusion Group and Sr. Principal Application Engineer; Christina Jamerson (on screen), lead of Cadence's Abilities Inclusion Group and Demand Generation Director; and Dianne Rambke (right), lead of Cadence's Latinx Inclusion Group and Marketing Communications Director. What are the important ways that people can signal inclusion and respectfully honor Veterans at work? What are the most meaningful or impactful actions employees everywhere can take to support Veteran coworkers? I think there is one answer to both questions. I recommend that people engage with their companies’ employee resource groups (ERGs) and have conversations with them. Opening up the lines of communication will lead to new paths in their journeys. What are you looking forward to in 2025, both personally and professionally? In 2025, professionally, I am looking forward to taking mixed-signal systems and verification to another level by including emulation, automatic model generation, and seeing which boundaries we can push in our SerDes and Chiplets products. Personally, I am looking forward to making my SXS street legal so I can drive places without getting a ticket, seeing my children participate in sports, church, and school, and taking my wife on vacation to Europe or somewhere else we can unplug. Learn more about Cadence’s Inclusion Groups, diverse culture, and commitment to belonging.




at

Randomization considerations for PCIe Integrity and Data Encryption Verification Challenges

Peripheral Component Interconnect Express (PCIe) is a high-speed interface standard widely used for connecting processors, memory, and peripherals. With the increasing reliance on PCIe to handle sensitive data and critical high-speed data transfer, ensuring data integrity and encryption during verification is an essential goal. In the field of verification, randomization is a key technique that drives robust PCIe verification: it introduces unpredictability to simulate real-world conditions and uncover hidden bugs in the design. This blog examines the significance of randomization in PCIe IDE verification, focusing on how it ensures data integrity and encryption reliability, while also highlighting the unique challenges it presents. For more details on PCIe IDE, you can refer to Introducing PCIe's Integrity and Data Encryption Feature.

The Importance of Data Integrity and Data Encryption in PCIe Devices

Data Integrity: Ensures that the transmitted data arrives unchanged from source to destination. Even minor corruption in data packets can compromise system reliability, making integrity a critical aspect of PCIe verification.

Data Encryption: Protects sensitive data from unauthorized access during transmission. Encryption in PCIe follows a standard to secure information while operating at high speeds.

Maintaining both data integrity and data encryption at PCIe's high-speed data transfer rates of 64GT/s in PCIe 6.0 and 128GT/s in PCIe 7.0 is essential for all endpoint devices. However, validating these mechanisms requires comprehensive testing and verification methodologies, which is where randomization plays a crucial role. You can refer to Why IDE Security Technology for PCIe and CXL? for more details on this.

Randomization in PCIe Verification

Randomization refers to the generation of test scenarios with unpredictable inputs and conditions to expose corner cases. In PCIe verification, this technique helps ensure that all possible behaviors are tested, including rare or unexpected situations that could cause data corruption or encryption failures and lead to serious problems later. For PCIe IDE verification, we therefore rely on randomization to verify behavior more efficiently.

Randomization for Data Integrity Verification

Here are some randomized verification approaches that mimic real-world traffic conditions, uncovering subtle integrity issues that might not surface with conventional verification methods.

1. Randomized Packet Injection: This technique injects randomized data packets into the communication stream between devices. We inject random, malformed, or out-of-sequence packets into the PCIe link and mix valid and invalid IDE-encrypted packets to check the system's ability to detect and reject unauthorized or invalid packets, and to check that encryption/decryption occurs correctly across packets. During verification, we check that the system logs proper errors or alerts when encountering invalid packets. This ensures coverage of different data paths and robust protocol checking. The technique helps assess the resilience of the PCIe IDE feature in terms of:
(i) Data corruption: Detecting whether the system can maintain data integrity.
(ii) Encryption failures: Testing the robustness of the encryption under random data injection.
(iii) Packet ordering errors: Ensuring reordering does not affect data delivery.
2. Random Errors and Fault Injection: This involves simulating random bit flips, PCRC errors, or protocol violations to help validate the robustness of PCIe's error detection and correction mechanisms. These techniques help assess how well the PCIe IDE implementation:
(i) Detects and responds to unexpected errors.
(ii) Maintains secure communication under stress.
(iii) Follows the PCIe error recovery and reporting mechanisms (AER – Advanced Error Reporting).
(iv) Ensures encryption and decryption states stay synchronized across endpoints.

3. Traffic Pattern Randomization: Randomizing the sequence, size, and timing of data packets helps test how the device maintains data integrity under heavy, unpredictable traffic loads.

Randomization for Data Encryption Verification

Encryption adds complexity to verification, as encrypted data streams are not readable by traditional checks. Randomization becomes essential to test how encryption behaves under different scenarios, and in data encryption verification it ensures that vulnerabilities, such as key reuse or predictable patterns, are identified and mitigated.

1. Random Encryption Keys and Payloads: Randomly varying keys and payloads helps validate the correctness of encryption without hardcoding assumptions. This ensures that the encryption logic behaves correctly across all possible inputs.

2. Randomized Initialization Vectors (IVs): Many encryption protocols require a unique IV for each transaction. Randomized IVs ensure that encryption does not repeat patterns. To understand the IDE key management flow, we can follow the diagram below, which illustrates a detailed example key programming flow using the IDE_KM protocol.

Figure 1: IDE_KM Example

As Figure 1 shows, the functionality of the IDE_KM protocol involves Start of IDE_KM Session, Device Capability Discovery, Key Request from the Host, Key Programming to the PCIe Device, and Key Acknowledgment. First, the host starts the IDE_KM session by detecting the presence of the PCIe devices; if the device supports the IDE protocol, the system continues with the key programming process. Then a query occurs to discover the device's encryption capabilities; it determines whether the device supports dynamic key updates or static keys. The host then sends a request to the Key Management Entity to obtain a key suitable for the device. Once the key is obtained, the host programs the key into the IDE controller on the PCIe endpoint. Both the host and the device now share the same key to encrypt and authenticate traffic. The device acknowledges that it has received and successfully installed the encryption key, and the acknowledgment message is sent back to the host. Once both the host and the PCIe endpoint are configured with the key, a secure communication channel is established. From this point, all data transmitted over the PCIe link is encrypted to maintain confidentiality and integrity. IDE_KM plays a crucial role in distributing keys in a secure manner and maintaining encryption and integrity for PCIe transactions. This key programming flow ensures that a secure communication channel is established between the host and the PCIe device. Hence, the randomized key approach ensures that the encryption does not repeat patterns.
3. Randomization for PHE: Partial Header Encryption (PHE) is an additional mechanism added to Integrity and Data Encryption (IDE) in PCIe 6.0. Validating PHE with a variety of traffic, and incorporating randomization into the APIs provided for validating the PHE feature, adds more robustness to the encryption of the data. Partial Header Encryption in Integrity and Data Encryption for PCIe has more detailed information on this.

Figure 2: High-Level Flow for Partial Header Encryption

4. Randomization of IDE Address Association Register values: IDE Address Association Registers 1/2/3 are supposed to be configured considering the memory address range of the IDE partner ports. The fields of the IDE address registers are split into multiple values such as Memory Base Lower, Memory Limit Lower, Memory Base Upper, and Memory Limit Upper. An IDE implementation can have multiple register blocks covering 32- or 64-bit addresses, different register sizes, 0-255 selective streams, 0-15 address blocks, etc. Randomized verification can help cover all of these corner cases. Please refer to Figure 3.

Figure 3: IDE Address Association Register

5. Random Faults During Encryption: Injecting random faults (e.g., dropped packets or timing mismatches) ensures the system can handle disruptions and prevent data leakage.

Challenges of IDE Randomization and Their Solutions

Randomization introduces a vast number of scenarios, making it computationally intensive to simulate every possibility. Constrained randomization limits random inputs to valid ranges while still covering edge cases, and coverage-driven verification ensures critical scenarios are tested without excessive redundancy. Verifying encrypted data with random inputs increases complexity: encryption masks the data, making it hard to verify outputs without compromising security. Here we can implement various IDE checks on the IDE callback to analyze encrypted traffic without decrypting it. Randomization can also trigger unexpected failures, which are often difficult to reproduce. By using seed-based randomization, a specific seed generates a repeatable random sequence; this helps in reproducing and analyzing the behavior precisely.

Conclusion

Randomization is a powerful technique in PCIe verification, ensuring robust validation of both data integrity and data encryption. It helps us uncover subtle bugs and edge cases that non-randomized testing might miss. In Cadence PCIe VIP, we support full-fledged IDE verification with rigorous randomized verification that ensures data integrity, and robust and reliable encryption mechanisms ensure secure and efficient data communication. However, randomization also brings various challenges, and to overcome them we adopt a combination of constrained randomization, seed-based testing, and coverage-driven verification. As PCIe continues to evolve with higher speeds and a focus on high security demands, our Cadence PCIe VIP keeps pace with industry demand and verifies high-performance systems that safeguard data in real-world environments with excellence. For more information, you can refer to Verification of Integrity and Data Encryption (IDE) for PCIe Devices and Industry's First Adopted VIP for PCIe 7.0.

More Information: For more info on how Cadence PCIe Verification IP and TripleCheck VIP enable users to confidently verify IDE, see our VIP for PCI Express, VIP for Compute Express Link, and TripleCheck for PCI Express. For more information on PCIe in general, and on the various PCI standards, see the PCI-SIG website.
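As a closing illustration of the constrained, seed-based randomization discussed above, here is a minimal SystemVerilog sketch. It is illustrative only: the class, field, and constraint names are hypothetical and do not correspond to the actual Cadence PCIe VIP classes or APIs.

// Illustrative constrained-random descriptor for an IDE-encrypted transaction
// (hypothetical names; not the Cadence PCIe VIP API).
class ide_tlp_item;
  rand bit [255:0] payload;       // randomized payload data
  rand bit [95:0]  iv;            // randomized initialization vector
  rand bit [7:0]   stream_id;     // selective IDE stream (0-255)
  rand bit         inject_error;  // occasionally corrupt the packet on purpose

  // Constrained randomization: keep inputs legal while still covering corner cases
  constraint c_stream { stream_id inside {[0:255]}; }
  constraint c_errors { inject_error dist {0 := 95, 1 := 5}; }  // ~5% fault injection
endclass

module tb;
  initial begin
    ide_tlp_item item = new();
    // Seed-based randomization: the same seed reproduces the same stimulus,
    // which makes randomly found failures easy to replay and debug.
    process::self().srandom(32'hDEAD_BEEF);
    repeat (10) begin
      if (!item.randomize()) $error("randomization failed");
      $display("stream=%0d err=%0b iv=%h", item.stream_id, item.inject_error, item.iv);
    end
  end
endmodule

Seeding the stimulus generation in this way is what allows a failing random run to be replayed exactly, which addresses the reproducibility challenge mentioned above.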




at

TensorFlow Optimization in DSVM: Azure and Cadence

Hello Folks,

Problem statement first: How does one properly set up TensorFlow for running on a DSVM using a remote Docker environment? Can this be done in aml_config/*.runconfig?

I receive the following message and I would like to be able to utilize the increased speeds of the extended FMA operations.

tensorflow/core/platform/cpu_feature_guard.cc:140] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA

Background: I utilize a local docker environment managed through Azure ML Workbench for initial testing and code validation so that I'm not running an expensive DSVM constantly. Once I assess that my code is to my liking, I then run it on a remote docker instance on an Azure DSVM.

I want a consistent conda environment across my compute environments, so this works out extremely well. However, I cannot figure out how to control the TensorFlow build to optimize for the hardware at hand (i.e., my local Docker on macOS vs. the remote Docker on an Ubuntu DSVM).




at

Using oscillograph waveform file CSV as the Pspice simulation signal source

hi,

I saved the oscilloscope waveform as a CSV file.

Now, I need to use this waveform file as the signal source for a low-pass filter.

I searched and read the PSpice help documents and did not find a method.

How can this be done?

Are there any reference documents or examples?

Thanks!




at

Path mapping for C Firmware source files when debugging

Hi,

I am compiling firmware under Windows and then transfer the binaries and the sources to Linux to simulate/debug there. The problem is that the paths in the DWARF debug info of the .elf file are the absolute Windows paths set by the compiler, so they are useless under Linux. Is it possible to configure mappings of these paths to the Linux paths when simulating/debugging, like with e.g. GDB's set substitute-path (https://sourceware.org/gdb/current/onlinedocs/gdb/Source-Path.html#index-set-substitute_002dpath)?

thx,

Peter




at

Matlab cannot open PSpice; it reports that orCEFSimpleUI.exe has stopped working!

Cadence_SPB_17.4-2019 + Matlab R2019a

Please follow the steps in this document:

1. Open BJT_AMP.opj
2. Set the Matlab path
3. Open BJT_AMP_SLPS.slx
4. After opening it, configure the PSpiceBlock; a message appears saying orCEFSimpleUI.exe has stopped working
5. Add the block
6. Same result
7. Open pspsim.slx
8. Same result
9. Open C:\Cadence\Cadence_SPB_17.4-2019\tools\bin
   (orCEFSimpleUI.exe and orCEFSimple.exe)
10. Same result

I would like to ask how to solve this. Many thanks!




at

Formal Verification Approach for I2C Slave

Hello,

I am new to formal verification and I have a conceptual question about how to verify an I2C slave block.

I think the answer should be valid for any serial interface that needs to receive information over several clocks before taking an action.

The protocol description is the following:

I have a serial clock (SCL), a serial data input (SDI), and a serial data output (SDO); all are ports of the I2C slave block.

The protocol looks like this:

The first byte received by the slave consists of 7 bits of sensor address, and the 8th bit is the command: 0/1 for Write/Read.

After the first 8 bits, the slave sends an ACK (SDO = 1 for 1 clock) if the sensor address is correct.

Let's consider only this case, where I want to verify that the slave responds with an ACK if the sensor address is correct.

The only solution I have found so far is to use the block's internal buffer, which stores the received bits during the 8 clocks. The signal is called shift_s.

I also needed to use the internal chip state (state_s) and an internal counter (shift_count_s).

Instead of doing a direct check of the SDO (sdo_o) depending on SDI (sdi_d_i), I used the internal shift_s register.

My question is whether my approach is the correct one, or whether it is possible to write the verification at a black-box level.

Below are the two properties: the first checks the connection from SDI to the internal buffer, the second checks the connection between the internal buffer and the output.

property prop_i2c_sdi_store;
  @(posedge sclk_n_i)
  $past(i2c_bl.state_s == `STATE_RECEIVE_I2C_ADDR)
    |-> i2c_bl.shift_s == byte'({ $past(i2c_bl.shift_s), $past(sdi_d_i)});
endproperty
APF_I2C_CHECK_SDI_STORE: assert property(prop_i2c_sdi_store);

property prop_i2c_sensor_addr(sens_addr_sel, sens_addr);
@(posedge sclk_n_i) (i2c_bl.state_s == `STATE_RECEIVE_I2C_ADDR) && (i2c_addr_i == sens_addr_sel) && (i2c_bl.shift_count_s == 7)
  ##1 (i2c_bl.shift_s inside {sens_addr, sens_addr+1}) |-> sdo_o;
endproperty
APF_I2C_CHECK_SENSOR_ADDR0: assert property(prop_i2c_sensor_addr(0, `I2C_SENSOR_ADDRESS_A0));
APF_I2C_CHECK_SENSOR_ADDR1: assert property(prop_i2c_sensor_addr(1, `I2C_SENSOR_ADDRESS_A1));
APF_I2C_CHECK_SENSOR_ADDR2: assert property(prop_i2c_sensor_addr(2, `I2C_SENSOR_ADDRESS_A2));
APF_I2C_CHECK_SENSOR_ADDR3: assert property(prop_i2c_sensor_addr(3, `I2C_SENSOR_ADDRESS_A3));

PS: i2c_addr_i is address selection for the slave (there are 4 configurable sensor addresses, but this is not important for the case).
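For reference, this is roughly the kind of black-box check I was hoping to write, sampling only the SDI port and reconstructing the byte in an auxiliary register instead of using the internal shift_s (untested sketch; byte_start_s is an auxiliary signal I would have to derive from the START condition, and the exact delay alignment is the part I am unsure about):

// Untested black-box sketch: rebuild the received byte from sdi_d_i only.
// byte_start_s is an auxiliary modeling signal (derived from the I2C START
// condition), not a design-internal signal; the ##8/##1 alignment may need
// adjusting depending on when it is asserted relative to the first data bit.
logic [7:0] sdi_history;
always @(posedge sclk_n_i)
  sdi_history <= {sdi_history[6:0], sdi_d_i};

property prop_i2c_sensor_addr_bb(sens_addr_sel, sens_addr);
  @(posedge sclk_n_i)
  byte_start_s && (i2c_addr_i == sens_addr_sel)
    ##8 (sdi_history inside {sens_addr, sens_addr + 1}) |-> ##1 sdo_o;
endproperty
APF_I2C_CHECK_SENSOR_ADDR0_BB: assert property(prop_i2c_sensor_addr_bb(0, `I2C_SENSOR_ADDRESS_A0));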

Thank you!




at

Issue With Loudness Normalization

Hello everyone. In recent days, I've been having a weird problem with the sound output on my Windows 10 PC: I can't control its loudness. Is there any possibility that the sound card's PCB is damaged?




at

How to turn vavlog IO width mismatch error to warning?

Hi, all.

When I use vavlog to compile Verilog RTL, it reports an IO width mismatch as a fatal error.

How can I turn the error into a warning?

VCS can use -error=noIOPCWM to ignore the error.

Does vavlog have a similar argument?