
How to add custom indicators to Dynamic Display measuring HUD

I am attempting to use the dbGetNeighbor() function inside the dynamic display HUD so that the distance to the nearest metal shape on the same layer can be viewed. Think of it as another line in this dynamic table here...

My SKILL code is essentially the following:

procedure( getNearestNeighborOnMetal(cv)    ; note: the cv argument is currently unused; geGetEditCellView() is called below
    let( (direction tmpBoundingBox)
        direction = internal_function()    ; placeholder for the logic that picks the search direction
        ; zero-area rectangle at the current command point, used as the probe shape
        tmpBoundingBox = dbCreateRect(geGetEditCellView() "tmp" list(hiGetCommandPoint() hiGetCommandPoint()))
        ; distance to the nearest neighboring shape in the given direction
        car(dbGetNeighbor(geGetEditCellView() tmpBoundingBox direction))
    )
)

Based on some tests, this returns the distance to the closest metal.

Next, I try to register this function with the Dynamic Display / Info Balloon mechanism by executing odcRegisterCustomFunc() for each and every object type (I know, absurd, but I am trying to debug).

In the Dynamic Display menu in layoutXL, I toggle the "Custom SKILL Function" check box, then hit Apply, then OK.

After this, I am unable to see any changes reflected in the info balloons or in the drawing HUD (above) for this wire. I have tried replacing my function with the sample "customFunc" from the odcRegisterCustomFunc() documentation and was still unable to produce any new output.

Any help diagnosing the use of this feature would be very much appreciated.





Disappearing toolbar or docked menu

Is there a way to keep the toolbar or floating menu from disappearing when a cell tab is added to a window?

I have created a SKILL toolbar, and it disappears when I add another cell or tab to a window.

The only toolbars that stay are the ones I have defined in the Layout.toolbar file.

Do I have to add a trigger to keep the toolbars visible so they do not disappear from the window?

Cadence version IC23.1-64b.ISR7.27

Paul





Destructive form of "cons" - efficiently prepending an item to a procedure's argument which is a list

Hello,

I was looking to destructively and efficiently modify a list that was passed in as an argument to a procedure, by prepending an item to the list.

I noticed that cons lets you do this efficiently, but the operation is non-destructive. Hence this wouldn't work if you are trying to modify a function's list parameter in place.

Here is an example of trying to add 0 to the front of a list:

procedure( attempt_to_prepend_list(l elem)
    l = cons(elem l)
)
a = list(1 2 3)
==> (1 2 3)
attempt_to_prepend_list(a 0)
==> (0 1 2 3)
a
==> (1 2 3)
As we can see, the original list is not prepended.
Here is a function, though, that achieves the desired result efficiently. It allocates only a single new cons cell (via cons) and otherwise modifies the list in place using the fast primitives rplacd and rplaca.
procedure( prepend_list(l elem)
    ; cons(car(l) cdr(l)) results in a new list with the car(l) duplicated
    ; we then replace the cdr of l so that we are now pointing to this new list
    rplacd(l cons(car(l) cdr(l)))

    ; we replace the previously duplicated car(l) with the element we want
    rplaca(l elem)
)
a = list(1 2 3)
==> (1 2 3)
prepend_list(a 0)
==> (0 1 2 3)
a
==> (0 1 2 3)
This works for me, but I find it surprising there is no built-in function to do this. Am I perhaps overlooking something in the documentation? I know that tconc is an efficient and destructive way to append items to the end of a list, but is there no equivalent for the front of the list?
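
One caveat I should mention: prepend_list only works when the list already has at least one element. rplaca and rplacd need an existing cons cell to modify, so if the caller passes nil there is nothing to mutate in place, and the caller's variable cannot be updated (I would expect an error rather than a silent no-op):

a = nil
==> nil
prepend_list(a 0)     ; rplacd/rplaca receive nil instead of a cons cell, so this should fail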





μWaveRiders: Thermal Analysis for RF Power Applications

Thermal analysis with the Cadence Celsius Thermal Solver integrated within the AWR Microwave Office circuit simulator gives designers an understanding of device operating temperatures related to power dissipation. That temperature information can be introduced into an electrothermal model to predict the impact on RF performance.(read more)





In Simvision, how do I change the waveform font size of the signal names?

Hi Cadence, 

I use SimVision 20.09-s007, but my computer screen resolution is very high. As a result, the text is too small.

In ~/.simvision/Xdefaults I changed the font size below from 12 to 16, but the signal names in the waveform traces don't reflect the change.

Simvision*Font: -adobe-helvetica-medium-r-normal--16-*-*-*-*-*-*-*

Other font changes are reflected in SimVision correctly, except for the signal names.

How do I fix that? I don't mind a single variable that changes all the text fonts to 16.

Thank you!

PS: I found the answer in another post. I changed Preference/Waveform/Display/Signal Height.





Stream in GDS to Virtuoso from a directory other than where cds.lib exists

I am scripting GDS stream-in using 'strmin', which works fine so far.

But as it apparently doesn't have an option to specify where the cds.lib file is, I have to run it from the directory containing cds.lib, or I guess I could create a dummy cds.lib that sources the real one.

Is there a way to tell strmin where the cds.lib file is?





Genus: Generated netlist doesn't define subckts

Dear all, 

I'm trying to perform an LVS check using Calibre between a layout that was generated by Innovus and the initial netlist generated by Genus. However, once I hit Run LVS in Calibre, it reports the following warnings and recommends stopping the process:

Source netlist references but does not define more than 10 subckts:
DFD1BWP7T
DFKCND1BWP7T
DFKCNQD1BWP7T
DFKSND1BWP7T
DFQD1BWP7T
IND2D0BWP7T
INR2D0BWP7T
INVD0BWP7T
INVD2P5BWP7T
IOA21D0BWP7T
... (and more)

If I proceed with the LVS run, it shows lots of errors, as shown in the following image:

Why doesn't Genus include the definitions of those subcircuits in the generated netlist? Is this related to flat/hierarchical netlisting?

I have included my Genus scripts as well as the generated netlist in the attachments (and here, if the attachments don't work).

Many thanks,

Anas





Conformal LEC can't finish at analyze abort step. How do I proceed?

Hi Cadence & forumers, 

I am running Conformal LEC with a flattened netlist against RTL.

The run has hung for 5 days at the "analyze abort" step, which is automatically launched by the compare.

The netlist is flattened at some levels, so the hierarchical flow I tried didn't help much. The flattened/highly optimized netlist is from the customer and is the ultimate goal. How should I proceed now?

On a side note, a test run with a hierarchical netlist from a simple DC "compile -map_effort medium" command finished after a day or so.

Thank you! 

// Command: vpx compare -verbose -ABORT_Print -NONEQ_Print -TIMEstamp
// Starting multithreaded comparison ...
Comparing 241112 points in parallel.

// Multithreading Overhead: 38% Gates: 8501606/6168138
// Multithreaded processing completed.
================================================================================
Compared points        PO      DFF     DLAT    BBOX    CUT     Total
--------------------------------------------------------------------------------
Equivalent             1025    241638  30      75      21      242789
--------------------------------------------------------------------------------
Abort                  0       124     0       0       0       124
================================================================================
Compare results of instance/output/pin equivalences and/or sequential merge
================================================================================
Compared points        DFF     Total
--------------------------------------------------------------------------------
Equivalent             204     204
================================================================================
// Warning: 512 DFFs/DLATs have 1 disabled clock port: skipped data cone comparison
// Resolving aborts by analyze abort...





Unable to open 64-bit version of SimVision

I am not able to open the 64-bit version of SimVision using the following:

simvision -64 -wav "path to wav"

This throws the error "  /lib64/libc.so.6: version `GLIBC_2.14' not found"

I am only able to open it without the -64 option.

As a result, I am not able to use the source browser feature, since the simulation was run in 64-bit mode.

I need a suggestion on how to resolve this. Thanks.





removing cdn_loop_breakers from netlist

I was trying to remove the cdn_loop_breaker cells from the netlist.
When I tried the two things below, the cdn_loop_breaker cells were removed, but when reconnecting each cdn_loop_breaker cell's input to its proper connection, the tool somehow gets the connections wrong.

Things I tried:
1. remove_cdn_loop_breaker -instances *cdn_loop_breaker*
2. remove_cdn_loop_breaker (without the -instances switch)

Neither of the above approaches restores the proper connections after the loop breaker cells are removed.






Want to use Transmission Gate in my design?

I want to use a transmission gate in my design, but it is not available as a standard cell for Genus RTL synthesis. How can I perform an analysis of area, power, and critical path delay that includes the transmission gate alongside standard cells?

Could you provide guidance or a methodology for integrating custom cells, like the transmission gate, into the synthesis flow for accurate analysis?





Asking about some functions that we don't know exist

We have a big circuit with 12K gates in total and are trying to show it visually on a one-page slide. But it is hard for us to shrink it down from gate level to module level. Do you have any functions like these:

  • Toggle wires on and off
  • “Right click” elements and group them into black boxes
  • Quickly left or right align elements to clean up pictures





Xcelium PowerPlayBack App and Dynamic Power Analysis

Learn how Xcelium PowerPlayback App enables the massively parallel Xcelium replay of waveforms for glitch-accurate power estimation of multi-billion gate SoC designs.(read more)





BoardSurfers: Some Wisdom from Designing for a High-Volume Production OEM

At what stage in the design cycle do you start to think about the PCB material costs? What about the costs to assemble the PCB? Once a design becomes successful, should you then redesign it to achieve a scalable product? Placing components and routi...(read more)





Cadence OrCAD X and Allegro X 24.1 is Now Available

The OrCAD X and Allegro X 24.1 release is now available at Cadence Downloads. This blog post provides links to access the release and describes some major changes and new features.   OrCAD X /Allegro X 24.1 (SPB241) Here is a representative li...(read more)





Modern Thermal Analysis Overcomes Complex Design Issues

Melika Roshandell, Cadence product marketing director for the Celsius Thermal Solver, recently published an article in Designing Electronics discussing how the use of modern thermal analysis techniques can help engineers meet the challenges of today’s complex electronic designs, which require ever more functionality and performance to meet consumer demand.

Today’s modern electronic designs require ever more functionality and performance to meet consumer demand. These requirements make scaling traditional, flat, 2D-ICs very challenging. With the recent introduction of 3D-ICs into the electronic design industry, IC vendors need to optimize the performance and cost of their devices while also taking advantage of the ability to combine heterogeneous technologies and nodes into a single package. While this greatly advances IC technology, 3D-IC design brings about its own unique challenges and complexities, a major one of which is thermal management.

To overcome thermal management issues, a thermal solution that can handle the complexity of the entire design efficiently and without any simplification is necessary. However, because of the nature of 3D-ICs, the typical point tool approach that dissects the design space into subsections cannot adequately address this need. This approach also creates a longer turnaround time, which can impact critical decision-making to optimize design performance. A more effective solution is to utilize a solver that not only can import the entire package, PCB, and chiplets but also offers high performance to run the entire analysis in a timely manner.

Celsius Thermal Management Solutions

Cadence offers the Celsius Thermal Solver, a unique technology integrated with both IC and package design tools such as the Cadence Innovus Implementation System, Allegro PCB Designer, and Voltus IC Power Integrity Solution. The Celsius Thermal Solver is the first complete electrothermal co-simulation solution for the full hierarchy of electronic systems from ICs to physical enclosures. Based on a production-proven, massively parallel architecture, the Celsius Thermal Solver also provides end-to-end capabilities for both in-design and signoff methodologies and delivers up to 10X faster performance than legacy solutions without sacrificing accuracy.

By combining finite element analysis (FEA) for solid structures with computational fluid dynamics (CFD) for fluids (both liquid and gas, as well as airflow), designers can perform complete system analysis in a single tool. For PCB and IC packaging, engineering teams can combine electrical and thermal analysis and simulate the flow of both current and heat for a more accurate system-level thermal simulation than can be achieved using legacy tools. In addition, both static (steady-state) and dynamic (transient) electrical-thermal co-simulations can be performed based on the actual flow of electrical power in advanced 3D structures, providing visibility into real-world system behavior.

Designers are already co-simulating the Celsius Thermal Solver with Celsius EC Solver (formerly Future Facilities’ 6SigmaET electronics thermal simulation software), which provides state-of-the-art intelligence, automation, and accuracy. The combined workflow that ties Celsius FEA thermal analysis with Celsius EC Solver CFD results in even higher-accuracy models of electronics equipment, allowing engineers to test their designs through thermal simulations and mitigate thermal design risks.

Conclusion

As systems become more densely populated with heat-dissipating electronics, the operating temperatures of those devices impact reliability (device lifetime) and performance. Thermal analysis gives designers an understanding of device operating temperatures related to power dissipation, and that temperature information can be introduced into an electrothermal model to predict the impact on device performance. The robust capabilities in modern thermal management software enable new system analyses and design insights. This empowers electrical design teams to detect and mitigate thermal issues early in the design process—reducing electronic system development iterations and costs and shortening time to market.

To learn more about Cadence thermal analysis products, visit the Celsius Thermal Solver product page and download the Cadence Multiphysics Systems Analysis Product Portfolio.





Relative delay analysis is impacted by pbar

Does anyone know how to exclude a pbar from a Constraint Manager analysis? I have some relative delay constraints applied to a group of differential nets. When I analyze the design, these all show an error. If I delete the plating bar from the design, they all pass. The plating bar is generated on the Substrate Geometry / Plating_Bar class. I understand that I could just delete the plating bar to verify the constraint, but when I archive this design I would like it to be clean, meaning it is in its final state for manufacturing AND passing all constraints according to design reviews.

Anyone have an idea? 

Thank you!





What is Allegro X Advanced Package Designer and why do I not see Allegro Package Designer Plus (APD+) in 23.1?

Starting with SPB 23.1, Allegro Package Designer Plus (APD+) has been rebranded as Allegro X Advanced Package Designer (Allegro X APD).

The splash screen for Allegro X APD will appear as shown below, instead of showing APD+ 2023:

For the Windows Start menu in 23.1, it will display as Allegro X APD 2023 instead of APD+ 2023, as shown below

23.1 Start menu 

In the Product Choices window for 23.1, you will see Allegro X Advanced Package Designer in the place of Allegro Package Designer +, as shown below: 

23.1 product title





How to reuse device files for existing components

Have you ever encountered ERROR(SPMHNI-67) while importing logic? If yes, you might already know that you had to export libraries of the design and make sure that paths (devpath, padpath, and psmpath) include the location of exported files.  

Starting in SPB23.1, if you go to File > Import > Logic/Netlist and click on the Other tab, you will see an option, Reuse device files for existing components. 

After selecting this option, ERROR(SPMHNI-67) no longer appears in the log file, because the tool automatically extracts device files and seamlessly uses them for newly imported data. In other words, SPB 23.1 lets you reuse the device/component definitions already in the design without first having to dump libraries manually. An excellent improvement, don’t you think?





How to access the Transmission Line Calculator in Allegro X APD

Have you ever wished for a handy utility that helps you specify all the necessary transmission line parameters when deciding on the stackup?

Starting with SPB 23.1, a handy feature, the Transmission Line Calculator, is built into Allegro X Advanced Package Designer (Allegro X APD). This feature requires either an SiP Layout license or can be accessed through the SiP Layout Bundle.

From the Analyze dropdown menu in the 23.1 Allegro X APD toolbar, you can choose Transmission Line Calculator. 

 

You can use this calculator to help decide constraints and stackup for laminate-based PCBs or packages. You can calculate the correct stackup material and width/spacing to meet any requirements that may later be entered in a constraint. Note that this is truly a calculated number, not the result of a true field solver.
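
To give a sense of the kind of closed-form calculation a utility like this performs, here is a rough illustration of my own using the classic IPC-2141 surface microstrip approximation (not necessarily the exact formula the tool implements): Z0 ≈ 87 / sqrt(er + 1.41) × ln(5.98·h / (0.8·w + t)). With er = 4.2, dielectric height h = 0.1 mm, trace width w = 0.18 mm, and trace thickness t = 0.035 mm, this works out to roughly 44 ohms; to reach a 50-ohm target you would iterate on the width or the dielectric height, which is exactly the kind of what-if loop the calculator is meant to shortcut.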

The different types of calculations that the Transmission Line Calculator can provide are Microstrip, Embedded microstrip, Stripline, CPW (Coplanar), FGCPW (frequency-dependent Coplanar), Asymmetric stripline, Coupled microstrip (Differential Pair), Coupled stripline (Differential Pair), and Dual striplines.

This feature is important for customers who rely on fabricators or spreadsheets to provide this information, or who need to quickly test a spacing/width against a target impedance value.

Let us know your comments on this new feature in 23.1 Allegro X APD. 

 





Finding routing problems (Route Vision) and quickly fixing them

The Vision Manager is a good tool for routing checks, but there is no quick or effective tool to fix or optimize the problems it finds.

For example: parallel gap less than preferred, min seg/arc length, uncoupled diff-pair segs, and so on.

I only know how to use Spread Between Voids to fix the non-optimized segs; in fact, it is inefficient.

For the parallel gap less than preferred, the only option is to slice every trace, which is inefficient.

If I set the parallel gap to less than 50um, is there any tool to quickly fix these problems (gap less than 50um)?

For the other problems, I can use the tool to quickly fix the min seg/arc length and uncoupled diff-pair segs by selecting by polygon or by window.





Allegro X APD - Tip of the week: Wondering how to set two adjacent layers as conductor layers? Then this post should help you.

By default, a dielectric must separate each pair of conductor layers in the cross-section of a design. In rare cases, this does not represent the real, manufactured substrate.

If your design requires you to have conductor layers that are not separated by a dielectric (such as for half-etch designs), there is a variable that needs to be set in Allegro X APD: you must enable the variable icp_allow_adjacent_conductors. This entry, and its location in the User Preferences Editor, are shown in the following image.

Objects on adjacent conductor layers do not electrically connect together automatically. A via must be used to establish the inter-layer connections.

When enabling this option, it is recommended to exercise caution because excluding dielectric layers from your cross-section can lead to inaccurate calculations, including the calculations for signal integrity and via heights. It is important that your cross-section accurately reflect the finished product to ensure the most accurate results possible. Any dielectric layers present in the manufactured part need to be in the cross-section for accurate extraction, 3D viewing, and so on.

Let us know your comments on the various designs that would require adjacent conductor layers.





Slide with "hug only" is wrong?

Hi,

Can you tell me which setting is causing this?

In General Edit mode, I try to slide a via to another position, but the slide result is wrong.

In the CM, I set pad-pad connect to "all allowed", and I turn off via-to-pad spacing in the same-net spacing. I only turn on via-to-via spacing in the same-net spacing, with via-to-via spacing set to 0.

By default, the via ends up closer to the pad edge; I think the correct location is shown in pic2.

 





Maximizing Display Performance with Display Stream Compression (DSC)

Display Stream Compression (DSC) is a visually lossless (near-lossless) image compression standard developed by the Video Electronics Standards Association (VESA) for reducing the bandwidth required to transmit high-resolution video and images. DSC compresses video streams in real-time, allowing for higher resolutions, refresh rates, and color depths while minimizing the data load on transmission interfaces such as DisplayPort, HDMI, and embedded display interfaces.

Why Is DSC Needed?

In the ever-evolving landscape of display technology, the pursuit of higher resolutions and better visual quality is relentless. As display capabilities advance, so do the challenges of managing the immense amounts of data required to drive these high-performance screens. This is where DSC steps in. DSC is designed to address the challenges of transmitting ultra-high-definition content without sacrificing quality or performance. As displays grow in resolution and capability, the amount of data they need to transmit increases exponentially. DSC addresses these issues by compressing video streams in real-time, significantly reducing the bandwidth needed while preserving image quality.
 

DSC Use in End-to-end System

DSC Key Features

  • Encoding tools:
    • Modified Median-Adaptive Prediction (MMAP)
    • Block Prediction (BP)
    • Midpoint Prediction (MPP)
    • Indexed color history (ICH)
    • Entropy coding using delta size unit-variable length coding (DSU-VLC)
  • The DSC bitstream and decoding process are designed to facilitate the decoding of 3 pixels/clock in practical hardware decoder implementations. Hardware encoder implementations are possible at 1 pixel/clock.
  • DSC uses an intra-frame, line-based coding algorithm, which results in very low latency for encoding and decoding.

DSC encoding algorithm
 

  • Compression can be done to a fractional bpp. The compressed bits per pixel ranges from 6 to 63.9375.
  • For validation/compliance certification of DSC compression and decompression engines, cyclic redundancy checks (CRCs) are used to verify the correctness of the bitstream and the reconstructed image.
  • DSC supports more color bit depths, including 8, 10, 12, 14, and 16 bpc.
  • DSC supports RGB and YCbCr input format, supporting 4:4:4, 4:2:2, and 4:2:0 sampling.
  • Maximum decompressor-supported bits/pixel values are as listed in the Maximum Allowed Bit Rate column in the table below

  • The DP DSC Source device shall program the bit rate within the range given by the Minimum Allowed Bit Rate column in the table:

          


Summary

Display Stream Compression (DSC) is a technology used in DisplayPort to enable higher resolutions and refresh rates while maintaining high image quality. It works by compressing the video data transmitted from the source to the display, effectively reducing the bandwidth required. DSC uses a visually lossless algorithm, meaning that the compression is designed to be imperceptible to the human eye, preserving the fidelity of the image. This technology allows for smoother, more detailed visuals at higher resolutions, such as 4K or 8K, without requiring a significant increase in data bandwidth.
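
To put the savings in perspective, here is a rough, illustrative calculation of my own that ignores blanking intervals and protocol overhead: a 3840 x 2160 stream at 60 Hz with 10 bpc RGB carries 30 bits per pixel, or about 3840 × 2160 × 60 × 30 ≈ 14.9 Gbps of raw pixel data. Compressing the same stream to a 12 bpp DSC target brings that down to about 6.0 Gbps, a 2.5:1 reduction, which is what makes 4K and higher formats practical on bandwidth-limited links.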

More Information

  • Cadence has a very mature Verification IP solution that can be used, across many different configurations, with DisplayPort 2.1 and DisplayPort 1.4 designs, so you can choose the best version for your specific needs.
  • The DisplayPort VIP provides a full-stack solution for Sink and Source devices with a comprehensive coverage model, protocol checkers, and an extensive test suite.
  • More details are available on the DisplayPort Verification IP product page and the Simulation VIP pages.
  • If you have any queries, feel free to contact us at talk_to_vip_expert@cadence.com





Use Verisium SimAI to Accelerate Verification Closure with Big Compute Savings

Verisium SimAI App harnesses the power of machine learning technology with the Cadence Xcelium Logic Simulator - the ultimate breakthrough in accelerating verification closure. It builds models from regressions run in the Xcelium simulator, enabling the generation of new regressions with specific targets. The Verisium SimAI app also features cousin bug hunting, a unique capability that uses information from difficult-to-hit failures to expose cousin bugs. With these advanced machine learning techniques, Verisium SimAI offers the potential for a significant boost in productivity, promising an exciting future for our users.

Figure 1: Regression compression and coverage maximization with Verisium SimAI 

What can I do with Verisium SimAI?

You can exercise different use cases with Verisium SimAI as per your requirements. For some users, the goal might be regression compression and improving coverage regain. Coverage maximization and hitting new bins could be another goal. Other users may be interested in exposing hard-to-hit failures and bug hunting for difficult-to-find issues. Verisium SimAI allows users to take on any of these challenges to achieve the desired results.

Let's go into some more details of these use cases and scenarios where using SimAI can have a big positive impact.

  1. Using SimAI for Regression Compression and Coverage Regain

Unlock up to 10X compute savings with SimAI!

Verisium SimAI can be used to compress regressions and regain coverage. This flow involves setting up your regression environment for SimAI, running your random regressions with coverage and randomization data followed by training, and finally, synthesizing and running the SimAI-generated compressed regressions. The synthesized regression may prune tests that do not help meet the goal and add more runs for the most relevant tests, as well as add run-specific constraints. This flow can also be used to target specific areas like areas involving a high code churn or high complexity.

You can check out the details of this flow with illustrative examples in the following Rapid Adoption Kits (RAK) available on the Cadence Learning and Support Portal (Cadence customer credentials needed):

 

  2. Using SimAI for Coverage Maximization and Targeting Coverage Holes

Reduce your Functional Coverage Holes by up to 40% using SimAI!

Verisium SimAI can be used for iterative coverage maximization. This is most effective when regressions are largely saturated, and SimAI will explicitly try to hit uncovered bins, which may be hard-to-hit (but not impossible) coverage holes. This is achieved using iterative learning technology where with each iteration, SimAI does some exploration and determines how well it performed. This technique can also be used for bug hunting by using holes as targets of interest.

See more details on the Cadence Learning and Support Portal:

 

  3. Using SimAI for Bug Hunting

Discover and fix bugs faster using SimAI!

Verisium SimAI has a new bug hunting flow which can be used to target the goal of exposing hard-to-hit failure conditions. This is achieved using an iterative framework and by targeting failures or rare bins. The goal to target failures is best exercised when the overall failure rate is typically low (below 5%). Iterative learning can be used to improve the ability to target specific areas. Use the SimAI bug hunting use case to target rare events, low hit coverage bins, and low hit failure signatures.

See more details on the Cadence Learning and Support Portal:

Unlock compute savings, reduce your functional coverage holes, and discover and fix bugs faster with the power of machine learning technology now enabled by Verisium SimAI!

Please keep visiting  https://support.cadence.com/raks to download new RAKs as they become available.

Please note that you will need the Cadence customer credentials to log on to the Cadence Online  Support  https://support.cadence.com/, your 24/7 partner for getting help in resolving issues related to Cadence software or learning Cadence tools and technologies.

Happy Learning!





Cadence Verisium Debug Introduces Verisium Debug App Store

Verisium Debug, the Cadence unified debug platform, offers a variety of debugging capabilities, including RTL debug, UVM testbench debug, UPF debug, and DMS debug. From IP- to SoC-level debug, users can take advantage of the rich debugging features to reduce debug time.

Beyond the common and advanced debug features, Verisium Debug also provides a Python-based API, which lets users write custom functions that access the design and waveform databases and add functions to Verisium Debug’s GUI for visualization purposes. With Verisium Debug’s Python API, users can turn repetitive work into automated programs or reduce the effort of creating in-house utilities by building on Verisium Debug’s well-established infrastructure.

Here is an example of how a user can apply the Python API to create a customized function. Users can write a Python program that extracts the signals in a specific design scope and reports the values of the extracted signals. Fig 1 shows the procedure of the traversal steps; a rough illustrative sketch follows the figure.

  1. Import Python library in Verisium Debug package.
  2. Set up the database for traversal.
  3. Search the scope with the hierarchy information in the design DB.
  4. Query the signal list and the values of the signals.
  5. Print out the results.

Fig 1. Procedure of Verisium Debug Python Program
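
As a rough illustration of those five steps, here is a minimal Python sketch. Note that the module, class, and method names used below (verisium_debug, open_db, find_scope, signals, value) are hypothetical placeholders for this example only; the real names should be taken from the Verisium Debug Python API documentation.

# Hypothetical sketch of the five-step traversal above; all API names are placeholders.
import verisium_debug as vd                                  # 1. import the Python library shipped with Verisium Debug

db = vd.open_db(design="top.design.db", waves="waves.shm")   # 2. set up the design and waveform databases for traversal

scope = db.find_scope("top.u_core.u_alu")                    # 3. search for the scope by its hierarchy path

for sig in scope.signals():                                  # 4. query the signal list and the value of each signal
    print(sig.full_name, sig.value(time="100ns"))            # 5. print out the results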

The result from the Verisium Debug Python App can be used for post-process design checking or fed into other utilities in the design flow.

The concept is very straightforward. With Verisium Debug and the Python API environment enabled, you can easily query any information that is stored in the databases of Verisium Debug. The results can be output in text format, or you can use the API to display them back in Verisium Debug’s GUI.

The Verisium Debug Python API is an important capability and resource for Verisium Debug users. To make the Verisium Debug Python API easier to access, starting with the Verisium Debug 24.10 release, Verisium Debug introduces the new Verisium Debug Python App Store.

Fig 2. Verisium Debug App Store

The Python App Store includes ready-to-use Python app examples along with their original source code and documentation, which help users understand how to start writing an app that fits their use case.

Fig 3. Example apps in Verisium Debug App Store

The Verisium Debug Python App Store can also be used by a team as an app management system. App creators can share the developed apps across teams within their companies. In-house apps become easy to manage, and engineers can access them from a central location, which lets users see the latest available Verisium Debug apps in the Verisium Debug App Store.

Check the following videos for more information about Verisium Debug Python API:

Customize Verisium Debug with Python API

Verisium Debug Customized Apps with Python API





Unveiling the Capabilities of Verisium Manager for Optimized Operations

In SoC development, the verification cycle is a crucial phase that ensures products meet their specifications and function correctly. However, the complexity of modern SoC projects, with their constant data flow, multiple validation teams working in parallel, and tight schedules, presents significant challenges. This article explores these challenges and introduces Verisium Manager as a solution that embodies the 'One Tool Fits All' concept. This means that Verisium Manager is designed to handle all aspects of the verification process for SoC development, from planning to coverage analysis to regression testing, thereby addressing the complex needs of SoC verification.

The Hurdles in Traditional Validation Cycles

 A typical validation process involves planning, coverage analysis, and regression testing. This complexity is compounded by using separate tools for each activity, leading to multiple control environments, APIs, and databases, not to mention the array of tool owners. Such fragmentation results in constant data transfer and translation between systems, from the planning tool to the coverage analysis tool and then to the regression testing tool. This continuous movement of data causes delays, system instability, poor user experiences, and, ultimately, a dip in the quality of the validation process.

The use of multiple platforms leads to inefficiency and reduced productivity. What's needed is a unified system that can streamline the workflow, simplify the verification process, and enhance its effectiveness.

Envisioning the Ideal Solution: Verisium Manager

 The cornerstone of an efficient validation cycle is integration and simplicity. The ideal solution is a singular platform that consolidates planning, coverage analysis, and regression management into one smooth, unified process. Verisium Manager emerges as this much-needed solution, encompassing all the functionalities necessary to streamline the validation process. Its comprehensive nature instills confidence in its ability to handle all aspects of the verification cycle. It can be fully customized to address and enforce any validation methodology and can facilitate smooth integration into any customer environment.

Features that stand out in Verisium Manager include: 

  • Unified Workflow: It acts as a single cockpit from which all activities are orchestrated, ensuring the validation teams' work is uninterrupted and seamlessly integrated.
  • Customization and Integration: Verisium Manager supports customizing test-plan structures and mapping results per project, ensuring a perfect fit for various project requirements. Its ability to smoothly integrate into the project's environment and compute platforms is unparalleled.
  • Support for Continuous Updates and Migration: The tool accommodates constant updates to project data and supports the migration of legacy data, ensuring that no historical data is lost in the transition to a new system.

Addressing Project-Specific Needs

 Verisium Manager recognizes diversity in different projects and offers project-specific solutions, including:

  • Enforcing Project Test-Plan Structures and Attributes: It supports and enforces each project's unique test-plan structure and mapping guidelines.

  • Unified Data Views and Measurements: Verisium Manager promotes a unified view of data across all teams and enforces unified measurements, ensuring consistency and clarity in the validation process.
  • Enabling Project-Specific Actions and Integrations: The tool is designed to support project-specific actions directly from its graphical user interface and allows for smooth integration with in-house databases, dashboards, and the project execution stack.

Verisium Manager is the epitome of efficiency in software/hardware validation. Its differentiating features, such as support for customization, unified data view, and comprehensive coverage and regression requirements, make it an indispensable tool for any validation team looking to elevate their workflow.





Sigrity and Systems Analysis 2024.1 Release Now Available

The Sigrity and Systems Analysis (SIGRITY/SYSANLS) 2024.1 release is now available for download at Cadence Downloads. For the list of CCRs fixed in this release, see the README.txt file in the installation hierarchy.

SIGRITY/SYSANLS 2024.1

Here is a list of some of the key updates in the SIGRITY/SYSANLS 2024.1 release. For more details about these and all the other new and enhanced features introduced in this release, refer to the following document: Sigrity Release Overview and Common Tools What's New.

Supported Platforms and Operating Systems

Platform and Architecture: X86_64 (lnx86) | Windows (64 bit)
Development OS: RHEL 8.4 | Windows Server 2022
Supported OS: RHEL 8.4 and above, RHEL 9, SLES 15 (SP3 and above) | Windows 10, Windows 11, Windows Server 2019, Windows Server 2022

Systems Analysis 2024.1

Clarity 3D Solver

  • Clarity 3D Layout Structure Optimization Workflow: A new workflow, Clarity 3D Layout Structure Optimization Workflow, has been added to Clarity 3D Layout. This workflow integrates Allegro PCB Designer with Clarity 3D Layout for high-speed structure optimization.
  • Component Geometry Model Editor: The new Clarity 3D Layout editor lets you set up ports, solder bumps/balls/extrusions, and two-terminal and multi-terminal circuits using a single GUI.
  • Coaxial Open Port Option Added to Port Setup Wizard: The Coaxial Open Port option lets you create ports for each target net pin and reference net pin in Clarity 3D Layout. The nearby reference net pins are then used as a reference for each target net pin, reducing the number of ports needed. In addition, the ports of unused reference net pins are shorted to the ground.
  • Parametric Import Option Added: Two new options, Parametric Import and Default Import, have been added to the Tools – Launch Clarity3DWorkbench menu. The Parametric Import option lets you import the design along with its parameters into Clarity 3D Workbench. The Default Import option lets you ignore the parameters when importing the design into Clarity 3D Workbench.
  • Component Library Added to Generate 3D Components: Clarity 3D Workbench now includes a new component library that lets you use predefined 3D component templates or add existing 3D components to create 3D designs and simulation models.
  • AI-Powered Content Search Capability: Clarity 3D Workbench and Clarity 3D Transient Solver now support an AI-powered capability for searching the content and displaying relevant information.
  • Expression Parser to Handle Undefined Parameters: Clarity 3D Workbench and Clarity 3D Transient Solver support writing expressions or equations containing undefined parameters in the Property window to describe a simulation variable. The improved expression parser automatically detects any undefined parameter in an expression and prompts users to specify their values. This capability lets you define a model or a simulation variable as a function instead of specifying static values.

For detailed information, refer to Clarity 3D Layout User Guide and Clarity 3D Workbench User Guide on the Cadence Support portal.

Clarity 3D Transient Solver

  • Mesh Processing Improved to Simulate Large Use Cases: Clarity 3D Transient Solver leverages a new meshing algorithm that enhances overall mesh processing, specifically for large designs and use cases. The new algorithm dramatically improves the mesh quality, minimum mesh size, number of mesh key points, total mesh number, and memory usage.
  • Advanced Material Processing Engine: The material processing capability has been enhanced to handle thin outer metal, which previously resulted in open and short issues in some designs. In addition, the material processing engine offers improved mode extraction for particular use cases, including waveguide and coaxial designs.
  • Characteristic Impedance Calculation Improved: The solver engine now uses a new analytical calculation method to calculate the characteristic impedance of coaxial designs with improved accuracy.

For detailed information, refer to Clarity 3D Transient Solver User Guide on the Cadence Support portal.

Celsius Studio

  • Celsius Interchange Model Introduced: Celsius Studio now supports Celsius Interchange Model generation, which is a 3D model derived from detailed physical designs for multi-physics and multi-scale analysis. This Celsius Interchange Model file (.cim) serves as a design information carrier across Celsius Studio tools, enabling a variety of simulation and analysis tasks.
  • Celsius 3DIC Thermal Workflow Improvements: The Thermal Simulation workflows in Celsius 3DIC have been significantly enhanced. Key improvements include: Advanced Power Setup with Transient Power Function and Multi Mode options; enhanced GUI for the Mesh Control and Simulation Control tabs; improved meshing capabilities; Celsius Interchange Model (.cim) generation; material library support for blocks and connections; import of Heat Transfer Coefficients (HTCs) from a CFD file; bump creation through the Bump Array Wizard; and Layer Stackup CSV file generation.
  • Celsius 3DIC Warpage and Stress Workflow Enhancements: The Warpage and Stress workflow in Celsius 3DIC has undergone significant improvements, such as: improved multi-stage warpage simulation flow for the 3DIC packaging process; enhanced GUI for the Mesh Control, Simulation Control, and Stress Boundary Conditions tabs; support for large deformations and temperature profiles; bump creation through the Bump Array Wizard; new constraint types; and enhanced meshing capabilities.
  • Geometric Nonlinearity Support in Warpage and Stress Analysis: Large deformation analysis is now supported in warpage and stress studies. This study uses the Total Lagrangian approach to model geometric nonlinearities in simulation, which allows accurate prediction of final deformations.
  • Thermal Network Extraction and Simulation: In the solid extraction flow in Celsius 3D Workbench, you can now import area-based power map files to create terminals. For designs with multiple blocks, this capability allows automatic terminal creation, eliminating the need to manually create and set up 2D sheets individually. Additionally, the thermal throttling feature is now supported in Celsius Thermal Network. This makes it ideal for preliminary analyses or when a quick estimation is required. It runs significantly faster than 3D models, allowing for quicker iterations and more efficient decision-making.

For detailed information, refer to the Celsius 3DIC User Guide, Celsius Layout User Guide, and Celsius 3D Workbench User Guide on the Cadence Support portal.

Sigrity 2024.1

Layout Workbench

  • Improved Graphical User Interface: A new option, Use Improved User Interface, has been added in the Themes page of the Options dialog box in the Layout Workbench GUI. In the new GUI, the toolbar icons and menu options have been enhanced and rearranged.

For detailed information, refer to Layout Workbench User Guide on the Cadence Support portal.

Broadband SPICE

  • Python Script Integration with Command Line for Simulation Tasks: Broadband SPICE lets you run Python scripts directly from the command line for performing simulation and analysis. The new -py and *.py options make it easier to integrate Python scripts with the command-line operations. This update streamlines the process of automating and customizing simulations from the command line, which makes your simulation tasks faster and easier.

For detailed information, refer to Broadband SPICE User Guide on the Cadence Support portal.

Celsius PowerDC

  • Block Power Assignment (BPA) File Format Support: PowerDC now supports the BPA file format. Similar to the Pin Location (PLOC) file, the BPA file is a current assignment file that defines the total current of a power grid cell, which is then equally distributed across the power pins within the cell. This provides better control over the power distribution.
  • Ability to Run Multiple IR Drop Cases Sequentially: You can now select multiple result sinks from the Current-Limited IR Drop flow and run IR Drop analysis for them sequentially. PowerDC automatically runs the simulations in sequence after you select multiple result sinks. This saves time by automating the process.
  • Enhanced Support for Mixed Conversion Devices: PowerDC now supports mixing different conversion devices, such as switching regulators and linear regulators, within a single DC-DC/LDO instance. This enhancement offers added flexibility by letting you configure each instance in your design according to your specific needs.

For detailed information, refer to PowerDC User Guide on the Cadence Support portal.

PowerSI

  • Monte Carlo Method Added: A new option, Monte Carlo Method, has been added in the Optimality dialog box. This option lets you create multiple random samples to depict variations in the input parameters and assess the output.
  • Channel Check Optimization Added: The S-Parameter Assessment workflow in PowerSI now supports Channel Check Optimization. It uses the AI-driven Multidisciplinary Analysis and Optimization (MDAO) technology that lets you optimize your design quickly and efficiently with no accuracy loss.

For detailed information, refer to PowerSI User Guide on the Cadence Support portal.

SPEEDEM

  • Multi-threaded Matrix Solver Support Added: The Enable Multi-threaded Matrix Solver check box has been added that lets you accelerate the simulation speed for high-performance computing. This check box provides two options, Automatic and Always, to include the -lhpc4 or -lhpc5 parameter, respectively, in the SPEEDEM Simulator (SPDSIM) before running the simulation.

For detailed information, refer to the SPEEDEM User Guide on the Cadence Support portal.

XtractIM

  • Options to Skip or Calculate Special DC-R Simulation Results: The Skip DC_R of Each Path and Only DC_R of Each Path options have been added to the Setup menu. Skip DC_R of Each Path lets you skip the calculation of the DC-R result during the simulation; other results, such as SPICE T-model, RL_C of Each Path, Coupling of Each Path, etc., are still calculated. Only DC_R of Each Path lets you calculate only the DC-R result during the simulation; other results, such as SPICE T-model, RL_C of Each Path, Coupling of Each Path, etc., are not calculated.
  • Color Assignment for Pin Matching: The MCP Auto Connection window includes the Display Color Editor, which lets you assign a color for pin matching. It helps you easily identify the matching pins in the left and right sections of the MCP Auto Connection window.
  • Ability to Save Simulations Individually: The Save each simulation individually check box has been added to the Tools - Options - Edit Options - Simulation (Basic) - General form. Select this check box and run the simulation to generate a simulation results folder containing files and logs with a timestamp for each simulation.
  • Reuse of SPD File Settings: The XtractIM setup check box lets you import an existing package setup to reuse the configurations and settings from one .spd file to another.

For detailed information, refer to XtractIM User Guide on the Cadence Support portal.

Documentation Enhancements

  • Cloud-Based Help System Upgraded: The cloud-based help system, Doc Assistant, has been upgraded to version 24.10, which contains several new features and enhancements over the previous 2.03 version.

Sigrity Release Team
Please send your questions and feedback to sigrity_rmt@cadence.com.





BETA CAE Systems Is Now Cadence: Join Our 2024 China Open Meeting

This November, the engineering and simulation community is set to converge in China for an event that promises to be nothing short of revolutionary. The 2024 BETA CAE Systems China Open Meeting, taking place in the vibrant cities of Beijing and Shanghai on November 5 and 7, respectively, is a must-attend for anyone looking to stay at the forefront of technological innovation in simulation solutions.

Prepare to be inspired by Ben Gu, the visionary Corporate VP of Research and Development at Cadence. He will lead both meetings in Beijing and Shanghai with his keynote on "A New Millennium in Multiphysics System Analysis." This thought-provoking keynote is expected to provide attendees with a glimpse into the future of engineering simulation and analysis.

What sets the BETA CAE Systems Open Meetings apart is not just the high caliber of speakers but also the hands-on training sessions designed to enhance your technical expertise with the BETA CAE software suite. Whether you are an inexperienced individual seeking to acquire fundamental knowledge or an accomplished professional endeavoring to hone your expertise, these training sessions following the open meetings are meticulously tailored to meet your needs.

Join Us at the BETA CAE Systems Open Meeting in Beijing

The BETA CAE Systems Open Meeting in Beijing will feature a keynote speech by Peng Qiao, Senior Engineer at Great Wall Motors Co., Ltd, on Multidisciplinary Optimization Techniques for Automotive Control Arms. (View detailed agenda for Beijing.)

When: November 5, 2024
Where: Grand Metropark Hotel Beijing

If this sounds interesting, register today for the BETA CAE Systems Beijing Open Meeting by clicking the button below.

Don't Miss Out on the BETA CAE Systems Open Meeting in Shanghai

After the BETA CAE Systems Open Meeting in Beijing, the next meeting in China will be in Shanghai. During this event, Liu Deping, CAE Engineer from Zhejiang Geely Automobile Research Institute Co., Ltd, will deliver a keynote speech on the Application of ANSA in the Simulation Development Cycle. (View detailed agenda for Shanghai.)

When: November 7, 2024
Where: InterContinental Shanghai Jing'an

Following the open meeting on November 7 will be an exclusive training day on November 8. This session will provide attendees with practical experience using the BETA CAE software to improve their technical skills and provide hands-on knowledge of the software. If you find this intriguing, register now for the BETA CAE Systems Shanghai Open Meeting by clicking the button below.

Why Attend?

  • Gain firsthand insights into the latest developments in simulation technology
  • Learn from real-world applications and success stories from various industries
  • Connect and exchange ideas with experts in a collaborative environment

Mark your calendars for this unparalleled opportunity to explore the forefront of simulation technology. Whether you're aiming to broaden your knowledge, enhance your technical skills, or connect with industry leaders, the BETA CAE Systems Open Meetings are your gateway to the future of engineering. Join us and be part of shaping the next wave of innovation in the simulation world.





Training Webinar: Fast Track RTL Debug with the Verisium Debug Python App Store

As a verification engineer, you're surely looking for ways to automate the debugging process. Have you developed your own scripts to ease specific debugging steps that tools don't offer? Working with scripts locally and manually is challenging, and so is reusing and organizing them. What if there was a way to create your own app with the required functionality and register it with the tool? The answer to that question is "Yes!" The Verisium Debug Python App Store lets you instantly add features and capabilities to your Verisium Debug application using Python apps that interact with Verisium Debug via the Python API.

Join me, Principal Education Application Engineer Bhairava Prasad, for this Training Webinar and discover the Verisium Debug Python App Store. The App Store allows you to search for existing apps, learn about them, install or uninstall them, and even customize existing apps.

Date and Time
Wednesday, November 20, 2024
07:00 PST San Jose / 10:00 EST New York / 15:00 GMT London / 16:00 CET Munich / 17:00 IST Jerusalem / 20:30 IST Bangalore / 23:00 CST Beijing

REGISTER
To register for this webinar, sign in with your Cadence Support account (email ID and password) to log in to the Learning and Support System*. Then select Enroll to register for the session. Once registered, you'll receive a confirmation email containing all login details.

A quick reminder: If you haven't received a registration confirmation within one hour of registering, please check your spam folder and ensure your pop-up blockers are off and cookies are enabled. For issues with registration or other inquiries, reach out to eur_training_webinars@cadence.com.

Like this topic? Take this opportunity and register for the free online course related to this webinar topic: Verisium Debug Training. To view our complete training offerings, visit the Cadence Training website. Want to share this and other great Cadence learning opportunities with someone else? Tell them to subscribe. Hungry for training? Choose the Cadence Training Menu that's right for you.

Related Courses
Xcelium Simulator Training Course | Cadence

Related Blogs
Unveiling the Capabilities of Verisium Manager for Optimized Operations - Verification - Cadence Blogs - Cadence Community
Verisium SimAI: SoC Verification with Unprecedented Coverage Maximization - Corporate News - Cadence Blogs - Cadence Community
Verisium SimAI: Maximizing Coverage, Minimizing Bugs, Unlocking Peak Throughput - Verification - Cadence Blogs - Cadence Community

Related Training Bytes
Introducing Verisium Debug (Video) (cadence.com)
Introduction to UVM Debug of Verisium Debug (Video) (cadence.com)
Verisium Debug Customized Apps with Python API

Please see the course learning maps for a visual representation of courses and course relationships. Regional course catalogs may be viewed here.

*If you don't have a Cadence Support account, go to Cadence User Registration and complete the requested information. Or visit Registration Help.





Versatile Use Case for DDR5 DIMM Discrete Component Memory Models

DDR5 DIMM Architectures The DDR5 generation of Double Data Rate DRAM memories has experienced rapid adoption in recent years. In particular, the JEDEC-defined DDR5 Dual Inline Memory Module (DIMM) cards have become a mainstay for systems looking for high-density, high-bandwidth, off-chip random access memory[1]. Within a short time, the DIMM architecture evolved from an interconnected hierarchy of only SDRAM memory devices (UDIMM[2]) to complex subsystems of interconnected components (RDIMM/LRDIMM/MRDIMM[3]). DIMM Designs and Popular Verification Use Cases The growing complexity of the DIMMs presented a challenge for pre-silicon verification engineers who could no longer simply validate against single DDR5 SDRAM memory models. They needed to consider how their designs would perform against DIMMs connected to each channel and operating at gigahertz clock speeds. To address this verification gap, Cadence developed DDR5 DIMM Memory Models that encapsulated all of the architectural complexities presented by real-world DIMMs based on a robust, easy-to-use, easy-to-debug, and easy-to-reconfigure methodology. This memory-subsystem-in-a-single-instance model has seen explosive adoption among the traditional IP Developer and SOC Integrator customers of Cadence Memory Models. The Cadence DIMM models act as a single unit with all of the relevant DIMM components instantiated and interconnected within, and with all AC/Timing parameters among the various components fully matched out-of-the-box, based on JEDEC specifications as well as datasheets of actual devices in the market. The typical use-case for the DIMM models has been where the DUT is a DDR5 Memory Controller + PHY IP stack, and the validation plan mandated compliance with the JEDEC standards and Memory Device vendor datasheets. Unique Use Case for the DIMM Discrete Component Models Although the Cadence DIMM models have enjoyed tremendous proliferation because of their cohesive implementation and unified user API, the actual DIMM Models are built on top of powerful, flexible discrete component models, each of which was designed to stand on its own as a complete SystemVerilog UVM-based VIP. All of these discrete component models exist in the Cadence VIP Catalog as standalone VIPs, complete with their own protocol compliance checking capabilities and their own configuration mappings comprehensively modeling individual AC/Timing parameters. Because of this deliberate design decision, the Cadence DIMM Discrete Component Models can support a unique use-case scenario. Some users seek to develop IC Designs for the various DIMM components. Such users need verification environments that can model the individual components of a DIMM and allow them the option to replace one or another component with their Component Design IP. They can then validate that their component design is fully compatible with the rest of the components on the DIMM and meets the integrity of the overall DIMM compliance with JEDEC standards or Memory Vendor datasheets. The Cadence Memory VIP portfolio today includes various examples that demonstrate how customers can create DIMM “wrappers” by selecting from among the available DIMM discrete component models and “stitching” them together to build their own custom testbench around their specific Component Design IP. 
A Solution for Unique Component Scenarios

The Cadence DDR5 DIMM Memory Models and DIMM Discrete Component Models can provide users with a flexible approach to validating their specific component designs within a fully populated pre-silicon environment.

Augmented Verification Capabilities

When the DIMM “wrapper” model is augmented with the Cadence DFI VIP[4], which can simulate an MC+PHY stack and offers a SystemVerilog UVM test API to the verification engineer, the overall testbench transforms into a formidable pre-silicon validation vehicle. The DFI VIP is designed as a combination of an independent DFI MC VIP and a DFI PHY VIP, connected to each other via the DFI Standard Interface and capable of operating seamlessly as a single unit. It presents a UVM Sequence API to the user through the DFI MC VIP, with the Memory Interface of the PHY VIP connected to the DIMM “wrapper” model. With this testbench in hand, the user can take full advantage of the UVM Sequence Library that comes with the DFI VIP to enable deep validation of their Component Design inside the DIMM “wrapper” model.

Verification Capabilities Further Enhanced

A further enhancement comes with the addition of an instance of the Cadence DIMM Memory Model in a Passive Monitor mode at the DRAM Memory Interface. The DIMM Passive Monitor consumes the same configuration describing the DIMM “wrapper” in the testbench, and thus can act as a reference model for the DIMM wrapper. If the DIMM Passive Monitor responds successfully to accesses from the DFI VIP but the DIMM wrapper does not, this exposes potential bugs in the DUT Components or in the settings of their AC/Timing parameters inside the DIMM wrapper.

Debuggability, Interface Visibility, and Protocol Compliance

One of the key benefits of the DIMM Discrete Component Models, whether in the unique use-case scenario described here or when working with the wholly unified DDR5 DIMM Memory Models, is the increased debuggability of the protocol functionality. The intentional separation of the discrete components of a DIMM allows the user to have full visibility of the memory traffic at every datapath landmark within a DIMM structure. For example, in modeling an LRDIMM or MRDIMM, the interface between the RCD component and the SDRAM components, the interface between the RCD component and the DB components, and the interface between the SDRAM components and the DB components are all visible and accessible to the user. The user has full access to dump the values and states of the wire interconnects at these interfaces to the waveform viewer, and can thus observe and correlate the activity against any protocol violations flagged in the trace logs by any one or more of the DIMM Discrete Component Models.

Access to these interfaces is freely available when using the DIMM Discrete Component Models. On the unified DDR5 DIMM Memory Models, a feature called Debug Ports enables the same level of visibility into the individual interconnects amidst the SDRAM components, RCD components, and DB components. When combined with the Waveform Debugger[5] capability that comes built in with the VIPs and Memory Models offered by Cadence, and used with the Cadence Verisium Debug[6] tool, this enhanced debuggability becomes a powerful platform.
With these debug accesses enabled, the user can pull out transaction streams, chip state and bank state streams, mode register streams, and error message streams, all right next to their RTL signals in the same Verisium Debug waveform viewer window, to debug failures in one place. The Verisium Debug tool also parses all of the log files to probe and extract messages into a fully integrated Smart Log in a tabbed window, fully hyperlinked to the waveform viewer, all at your fingertips.

A Solution for Every Scenario

Cadence's DDR5 DIMM Memory Models and DIMM Discrete Component Models, partnered with the Cadence DFI VIP, can provide users with a robust and flexible approach to validating their designs thoroughly and effectively in pre-silicon verification environments ahead of tapeout commitments. The solution offers unparalleled latitude in debuggability when the Debug Ports and Waveform Debugger functions of the Memory Models are switched on and boosted with the use of the Cadence Verisium Debug tool.

References

[1] Shyam Sharma, DDR5 DIMM Design and Verification Considerations, 13 Jan 2023.
[2] Shyam Sharma, DDR5 UDIMM Evolution to Clock Buffered DIMMs (CUDIMM), 23 Sep 2024.
[3] Kos Gitchev, DDR5 12.8Gbps MRDIMM IP: Powering the Future of AI, HPC, and Data Centers, 26 Aug 2024.
[4] Chetan Shingala and Salehabibi Shaikh, How to Verify JEDEC DRAM Memory Controller, PHY, or Memory Device?, 29 Mar 2022.
[5] Rahul Jha, Cadence Memory Models - The Gold Standard, 15 Apr 2024.
[6] Manisha Pradhan, Accelerate Design Debugging Using Verisium Debug, 11 Jul 2023.




is

USB crash issue in Linux 4.14.62

Hi,

First of all, I hope I have posted my query in the right place. I am looking for software support/suggestions for the issue below.

I am working on an LTE modem that uses a USB interface, and the host controller is USB 2.0. The BSP is from NXP and supports the Cadence USB 3.0 host controller with the USB 3.0 Cadence driver; NXP used this USB 3.0 host controller for a USB Type-C based device.

The Cadence USB 3.0 based driver seems to be backward compatible with the USB 2.0 host controller. Since the basic LTE functionality seems to work fine, I continued to use the same driver on Linux 4.14.62.

But I am facing a kernel warning about an unhandled interrupt, and the log points to the cdns3_irq handler, as shown below. The warning is very random and does not occur every time.

 

[ 1.691533] irq 36: nobody cared (try booting with the "irqpoll" option)

[ 1.698242] CPU: 0 PID: 87 Comm: kworker/0:1 Not tainted 4.9.88 #24

[ 1.704509] Hardware name: Freescale i.MX8QXP MEK (DT)

[ 1.709659] Workqueue: pm pm_runtime_work

[ 1.713675] Call trace:

[ 1.716123] [<ffff0000080897d0>] dump_backtrace+0x0/0x1b0

[ 1.721523] [<ffff000008089994>] show_stack+0x14/0x20

[ 1.726582] [<ffff0000083daff0>] dump_stack+0x94/0xb4

[ 1.731638] [<ffff00000810f064>] __report_bad_irq+0x34/0xf0

[ 1.737212] [<ffff00000810f4ec>] note_interrupt+0x2e4/0x330

[ 1.742790] [<ffff00000810c594>] handle_irq_event_percpu+0x44/0x58

[ 1.748974] [<ffff00000810c5f0>] handle_irq_event+0x48/0x78

[ 1.754553] [<ffff0000081100a8>] handle_fasteoi_irq+0xc0/0x1b0

[ 1.760390] [<ffff00000810b584>] generic_handle_irq+0x24/0x38

[ 1.766141] [<ffff00000810bbe4>] __handle_domain_irq+0x5c/0xb8

[ 1.771979] [<ffff000008081798>] gic_handle_irq+0x70/0x15c

[ 1.807416] 7a40: 00000000000002ba ffff80002645bf00 00000000fa83b2da 0000000001fe116e

[ 1.815252] 7a60: ffff000088bf7c47 ffffffffffffffff 00000000000003f8 ffff0000085c47b8

[ 1.823088] 7a80: 0000000000000010 ffff800026484600 0000000000000001 ffff8000266e9718

[ 1.830925] 7aa0: ffff00000b8b0008 ffff800026784280 ffff00000b8b000c ffff00000b8d8018

[ 1.838760] 7ac0: 0000000000000001 ffff000008b76000 0000000000000000 ffff800026497b20

[ 1.846596] 7ae0: ffff00000810bd24 ffff800026497b20 ffff000008851d18 0000000000000145

[ 1.854433] 7b00: ffff000008b8d6c0 ffff0000081102d8 ffffffffffffffff ffff00000810dda8

[ 1.862268] [<ffff000008082eec>] el1_irq+0xac/0x120

[ 1.867155] [<ffff000008851d18>] _raw_spin_unlock_irqrestore+0x18/0x48

[ 1.873684] [<ffff00000810bd24>] __irq_put_desc_unlock+0x1c/0x48

[ 1.879695] [<ffff00000810de10>] enable_irq+0x48/0x70

[ 1.884756] [<ffff0000085ba8f8>] cdns3_enter_suspend+0x1f0/0x440

[ 1.890764] [<ffff0000085baca0>] cdns3_runtime_suspend+0x48/0x88

[ 1.896776] [<ffff0000084cf398>] pm_generic_runtime_suspend+0x28/0x40

[ 1.903223] [<ffff0000084dc3e8>] genpd_runtime_suspend+0x88/0x1d8

[ 1.909320] [<ffff0000084d0e08>] __rpm_callback+0x70/0x98

[ 1.914724] [<ffff0000084d0e50>] rpm_callback+0x20/0x88

[ 1.919954] [<ffff0000084d1b2c>] rpm_suspend+0xf4/0x4c8

[ 1.925184] [<ffff0000084d20fc>] rpm_idle+0x124/0x168

[ 1.930240] [<ffff0000084d26c0>] pm_runtime_work+0xa0/0xb8

[ 1.935732] [<ffff0000080dc1dc>] process_one_work+0x1dc/0x380

[ 1.941481] [<ffff0000080dc3c8>] worker_thread+0x48/0x4d0

[ 1.946885] [<ffff0000080e2408>] kthread+0xf8/0x100
[ 1.957080] handlers:

[ 1.959350] [<ffff0000085ba668>] cdns3_irq

[ 1.963449] Disabling IRQ #36
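
From what I understand (please correct me if I'm wrong), the "irq 36: nobody cared" message is the kernel's generic response when an interrupt line keeps asserting but every registered handler reports that the interrupt was not its own; after a large number of such unhandled events the kernel gives up and disables the line, which is why "Disabling IRQ #36" follows. Below is a minimal C sketch of the handler contract involved, just to illustrate my understanding; the device structure and register offset are hypothetical, and this is not the actual cdns3 driver code.

#include <linux/interrupt.h>
#include <linux/io.h>

/* Hypothetical device context; the field names are illustrative only. */
struct my_usb_ctx {
    void __iomem *regs;
};

#define MY_IRQ_STATUS_REG  0x00   /* hypothetical status register offset */

static irqreturn_t my_usb_irq(int irq, void *dev_id)
{
    struct my_usb_ctx *ctx = dev_id;
    u32 status = readl(ctx->regs + MY_IRQ_STATUS_REG);

    /*
     * When the hardware shows no pending cause, the handler must return
     * IRQ_NONE. If this keeps happening while the line stays asserted
     * (for example, an interrupt arriving while the controller is being
     * put into runtime suspend and its status bits are already cleared
     * or the block is clock-gated), the kernel eventually prints
     * "irq N: nobody cared" and disables the line, as in the log above.
     */
    if (!status)
        return IRQ_NONE;

    writel(status, ctx->regs + MY_IRQ_STATUS_REG);   /* acknowledge the cause */
    return IRQ_HANDLED;
}

/*
 * Registration, typically done in probe():
 *     ret = devm_request_irq(dev, irq, my_usb_irq, IRQF_SHARED,
 *                            "my-usb", ctx);
 */

Since the backtrace shows the warning firing from cdns3_enter_suspend() during runtime suspend, my guess is that the controller raises (or leaves pending) an interrupt at a point where the driver can no longer acknowledge it, but I have not been able to confirm this.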

Kindly provide a solution for this issue.

Thanks & Regards,

Anjali




is

Issue With Loudness Normalization

Hello everyone. Over the past few days, I've been having a weird problem with the sound output on my Windows 10 PC: I can't control its loudness. Is there any possibility that the sound card's PCB is damaged?




is

Xtensa compiler issue

Hi 

I have an Xtensa compiler issue: the compilation of switch-case statements is optimized in certain patterns and leads to unexpected results. I cross-checked the assembly code and found that this compiler optimization appears to be similar to the tree-switch-conversion feature in the GCC compiler.

Unfortunately, I cannot find any similar compiler option (like -fno-tree-switch-conversion) in the Xtensa compiler (XCC) to enable/disable this feature, and it seems to be enabled in XCC by default, even when I use -O0 for the lowest optimization level.

I'm wondering if there is any way to permanently disable this feature in XCC?
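
In the meantime, the only workaround I could think of is to stop the compiler from seeing a plain switch on the selector at all, either by rewriting the switch as an explicit if/else chain or by forcing the selector through a volatile temporary. Whether this reliably blocks the conversion in XCC is an assumption on my part (the function and case values below are just an illustration), but it might help narrow the problem down:

#include <stdint.h>

/* Original form: the compiler may convert this switch into a lookup/bit-test
 * sequence, similar to GCC's tree-switch-conversion. */
int decode(uint32_t op)
{
    switch (op) {
    case 0:  return 10;
    case 1:  return 11;
    case 2:  return 12;
    case 7:  return 17;
    default: return -1;
    }
}

/* Workaround sketch (assumption: an explicit if/else chain on a volatile
 * temporary keeps the compiler from recognizing the switch pattern). */
int decode_no_switch(uint32_t op)
{
    volatile uint32_t v = op;   /* defeats value-range analysis on the selector */

    if (v == 0) return 10;
    if (v == 1) return 11;
    if (v == 2) return 12;
    if (v == 7) return 17;
    return -1;
}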

PS: The release version of XCC compiler I'm using is RD-2012.5

Thanks!




is

How to turn a vavlog IO width mismatch error into a warning?

Hi, all.

When I use vavlog to compile Verilog RTL, it reports an IO width mismatch as a fatal error.

How can I turn this error into a warning?

VCS can use -error=noIOPCWM to ignore the error.

Does vavlog have a similar argument?




is

Here Is Why the Indian Voter Is Saddled With Bad Economics

This is the 15th installment of The Rationalist, my column for the Times of India.

It’s election season, and promises are raining down on voters like rose petals on naïve newlyweds. Earlier this week, the Congress party announced a minimum income guarantee for the poor. This Friday, the Modi government released a budget full of sops. As the days go by, the promises will get bolder, and you might feel important that so much attention is being given to you. Well, the joke is on you.

Every election, HL Mencken once said, is “an advance auction sale of stolen goods.” A bunch of competing mafias fight to rule over you for the next five years. You decide who wins, on the basis of who can bribe you better with your own money. This is an absurd situation, which I tried to express in a limerick I wrote for this page a couple of years ago:

POLITICS: A neta who loves currency notes/ Told me what his line of work denotes./ ‘It is kind of funny./ We steal people’s money/And use some of it to buy their votes.’

We’re the dupes here, and we pay far more to keep this circus going than this circus costs. It would be okay if the parties, once they came to power, provided good governance. But voters have given up on that, and now only want patronage and handouts. That leads to one of the biggest problems in Indian politics: We are stuck in an equilibrium where all good politics is bad economics, and vice versa.

For example, the minimum guarantee for the poor is good politics, because the optics are great. It’s basically Garibi Hatao: that slogan made Indira Gandhi a political juggernaut in the 1970s, at the same time that she unleashed a series of economic policies that kept millions of people in garibi for decades longer than they should have been.

This time, the Congress has released no details, and keeping it vague makes sense because I find it hard to see how it can make economic sense. Depending on how they define ‘poor’, how much income they offer and what the cost is, the plan will either be ineffective or unworkable.

The Modi government’s interim budget announced a handout for poor farmers that seemed rather pointless. Given our agricultural distress, offering a poor farmer 500 bucks a month seems almost like mockery.

Such condescending handouts solve nothing. The poor want jobs and opportunities. Those come with growth, which requires structural reforms. Structural reforms don’t sound sexy as election promises. Handouts do.

A classic example is farm loan waivers. We have reached a stage in our politics where every party has to promise them to assuage farmers, who are a strong vote bank everywhere. You can’t blame farmers for wanting them – they are a necessary anaesthetic. But no government has yet made a serious attempt at tackling the root causes of our agricultural crisis.

Why is it that Good Politics in India is always Bad Economics? Let me put forth some possible reasons. One, voters tend to think in zero-sum ways, as if the pie is fixed, and the only way to bring people out of poverty is to redistribute. The truth is that trade is a positive-sum game, and nations can only be lifted out of poverty when the whole pie grows. But this is unintuitive.

Two, Indian politics revolves around identity and patronage. The spoils of power are limited – that is indeed a zero-sum game – so you’re likely to vote for whoever can look after the interests of your in-group rather than care about the economy as a whole.

Three, voters tend to stay uninformed for good reasons, because of what Public Choice economists call Rational Ignorance. A single vote is unlikely to make a difference in an election, so why put in the effort to understand the nuances of economics and governance? Just ask, what is in it for me, and go with whatever seems to be the best answer.

Four, politicians have a short-term horizon, geared towards winning the next election. A good policy that may take years to play out is unattractive. A policy that will win them votes in the short term is preferable.

Sadly, no Indian party has shown a willingness to aim for the long term. The Congress has produced new Gandhis, but not new ideas. And while the BJP did make some solid promises in 2014, they did not walk that talk, and have proved to be, as Arun Shourie once called them, UPA + Cow. Even the Congress is adopting the cow, in fact, so maybe the BJP will add Temple to that mix?

Benjamin Franklin once said, “Democracy is two wolves and a lamb voting on what to have for lunch.” This election season, my friends, the people of India are on the menu. You have been deveined and deboned, marinated with rhetoric, seasoned with narrative – now enter the oven and vote.

The India Uncut Blog © 2010 Amit Varma. All rights reserved.
Follow me on Twitter.




is

India’s Problem is Poverty, Not Inequality

This is the 16th installment of The Rationalist, my column for the Times of India.

Steven Pinker, in his book Enlightenment Now, relates an old Russian joke about two peasants named Boris and Igor. They are both poor. Boris has a goat. Igor does not. One day, Igor is granted a wish by a visiting fairy. What will he wish for?

“I wish,” he says, “that Boris’s goat should die.”

The joke ends there, revealing as much about human nature as about economics. Consider the three things that happen if the fairy grants the wish. One, Boris becomes poorer. Two, Igor stays poor. Three, inequality reduces. Is any of them a good outcome?

I feel exasperated when I hear intellectuals and columnists talking about economic inequality. It is my contention that India’s problem is poverty – and that poverty and inequality are two very different things that often do not coincide.

To illustrate this, I sometimes ask this question: In which of the following countries would you rather be poor: USA or Bangladesh? The obvious answer is USA, where the poor are much better off than the poor of Bangladesh. And yet, while Bangladesh has greater poverty, the USA has higher inequality.

Indeed, take a look at the countries of the world measured by the Gini Index, which is the standard metric used to measure inequality, and you will find that USA, Hong Kong, Singapore and the United Kingdom all have greater inequality than Bangladesh, Liberia, Pakistan and Sierra Leone, which are much poorer. And yet, while the poor of Bangladesh would love to migrate to unequal USA, I don’t hear of too many people wishing to go in the opposite direction.

Indeed, people vote with their feet when it comes to choosing between poverty and inequality. All of human history is a story of migration from rural areas to cities – which have greater inequality.

If poverty and inequality are so different, why do people conflate the two? A key reason is that we tend to think of the world in zero-sum ways. For someone to win, someone else must lose. If the rich get richer, the poor must be getting poorer, and the presence of poverty must be proof of inequality.

But that’s not how the world works. The pie is not fixed. Economic growth is a positive-sum game and leads to an expansion of the pie, and everybody benefits. In absolute terms, the rich get richer, and so do the poor, often enough to come out of poverty. And so, in any growing economy, as poverty reduces, inequality tends to increase. (This is counter-intuitive, I know, so used are we to zero-sum thinking.) This is exactly what has happened in India since we liberalised parts of our economy in 1991.

Most people who complain about inequality in India are using the wrong word, and are really worried about poverty. Put a millionaire in a room with a billionaire, and no one will complain about the inequality in that room. But put a starving beggar in there, and the situation is morally objectionable. It is the poverty that makes it a problem, not the inequality.

You might think that this is just semantics, but words matter. Poverty and inequality are different phenomena with opposite solutions. You can solve for inequality by making everyone equally poor. Or you could solve for it by redistributing from the rich to the poor, as if the pie was fixed. The problem with this, as any economist will tell you, is that there is a trade-off between redistribution and growth. All redistribution comes at the cost of growing the pie – and only growth can solve the problem of poverty in a country like ours.

It has been estimated that in India, for every one percent rise in GDP, two million people come out of poverty. That is a stunning statistic. When millions of Indians don’t have enough money to eat properly or sleep with a roof over their heads, it is our moral imperative to help them rise out of poverty. The policies that will make this possible – allowing free markets, incentivising investment and job creation, removing state oppression – are likely to lead to greater inequality. So what? It is more urgent to make sure that every Indian has enough to fulfil his basic needs – what the philosopher Harry Frankfurt, in his fine book On Inequality, called the Doctrine of Sufficiency.

The elite in their airconditioned drawing rooms, and those who live in rich countries, can follow the fashions of the West and talk compassionately about inequality. India does not have that luxury.

The India Uncut Blog © 2010 Amit Varma. All rights reserved.
Follow me on Twitter.




is

To Escalate or Not? This Is Modi’s Zugzwang Moment

This is the 17th installment of The Rationalist, my column for the Times of India.

One of my favourite English words comes from chess. If it is your turn to move, but any move you make makes your position worse, you are in ‘Zugzwang’. Narendra Modi was in zugzwang after the Pulwama attacks a few days ago—as any Indian prime minister in his place would have been.

An Indian PM, after an attack for which Pakistan is held responsible, has only unsavoury choices in front of him. He is pulled in two opposite directions. One, strategy dictates that he must not escalate. Two, politics dictates that he must.

Let’s unpack that. First, consider the strategic imperatives. Ever since both India and Pakistan became nuclear powers, a conventional war has become next to impossible because of the threat of a nuclear war. If India escalates beyond a point, Pakistan might bring their nuclear weapons into play. Even a limited nuclear war could cause millions of casualties and devastate our economy. Thus, no matter what the provocation, India needs to calibrate its response so that Pakistan doesn’t take it all the way.

It’s impossible to predict what actions Pakistan might view as sufficient provocation, so India has tended to play it safe. Don’t capture territory, don’t attack military assets, don’t kill civilians. In other words, surgical strikes on alleged terrorist camps are the most we can do.

Given that Pakistan knows that it is irrational for India to react, and our leaders tend to be rational, they can ‘bleed us with a thousand cuts’, as their doctrine states, with impunity. Both in 2001, when our parliament was attacked and the BJP’s Atal Bihari Vajpayee was PM, and in 2008, when Mumbai was attacked and the Congress’s Manmohan Singh was PM, our leaders considered all the options on the table—but were forced to do nothing.

But is doing nothing an option in an election year?

Leave strategy aside and turn to politics. India has been attacked. Forty soldiers have been killed, and the nation is traumatised and baying for blood. It is now politically impossible to not retaliate—especially for a PM who has criticized his predecessor for being weak, and portrayed himself as a 56-inch-chested man of action.

I have no doubt that Modi is a rational man, and knows the possible consequences of escalation. But he also knows the possible consequences of not escalating—he could dilute his brand and lose the elections. Thus, he is forced to act. And after he acts, his Pakistan counterpart will face the same domestic pressure to retaliate, and will have to attack back. And so on till my home in Versova is swallowed up by a nuclear crater, right?

Well, not exactly. There is a way to resolve this paradox. India and Pakistan can both escalate, not via military actions, but via optics.

Modi and Imran Khan, who you’d expect to feel like the loneliest men on earth right now, can find sweet company in each other. Their incentives are aligned. Neither man wants this to turn into a full-fledged war. Both men want to appear macho in front of their domestic constituencies. Both men are masters at building narratives, and have a pliant media that will help them.

Thus, India can carry out a surgical strike and claim it destroyed a camp, killed terrorists, and forced Pakistan to return a braveheart prisoner of war. Pakistan can say India merely destroyed two trees plus a rock, and claim the high moral ground by returning the prisoner after giving him good masala tea. A benign military equilibrium is maintained, and both men come out looking like strong leaders: a win-win game for the PMs that avoids a lose-lose game for their nations. They can give themselves a high-five in private when they meet next, and Imran can whisper to Modi, “You’re a good spinner, bro.”

There is one problem here, though: what if the optics don’t work?

If Modi feels that his public is too sceptical and he needs to do more, he might feel forced to resort to actual military escalation. The fog of politics might obscure the possible consequences. If the resultant Indian military action causes serious damage, Pakistan will have to respond in kind. In the chain of events that then begins, with body bags piling up, neither man may be able to back down. They could end up as prisoners of circumstance—and so could we.

***

Also check out:

Why Modi Must Learn to Play the Game of Chicken With Pakistan—Amit Varma
The Two Pakistans—Episode 79 of The Seen and the Unseen
India in the Nuclear Age—Episode 80 of The Seen and the Unseen

The India Uncut Blog © 2010 Amit Varma. All rights reserved.
Follow me on Twitter.




is

We Must Reclaim Nationalism From the BJP

This is the 18th installment of The Rationalist, my column for the Times of India.

The man who gave us our national anthem, Rabindranath Tagore, once wrote that nationalism was “a great menace.” He went on to say, “It is the particular thing which for years has been at the bottom of India’s troubles.”

Not just India’s, but the world’s: In his book The Open Society and its Enemies, published in 1945 as Adolf Hitler was defeated, Karl Popper ripped into nationalism, with all its “appeals to our tribal instincts, to passion and to prejudice, and to our nostalgic desire to be relieved from the strain of individual responsibility which it attempts to replace by a collective or group responsibility.”

Nationalism is resurgent today, stomping across the globe hand-in-hand with populism. In India, too, it is tearing us apart. But must nationalism always be a bad thing? A provocative new book by the Israeli thinker Yael Tamir argues otherwise.

In her book Why Nationalism, Tamir makes the following arguments. One, nation-states are here to stay. Two, the state needs the nation to be viable. Three, people need nationalism for the sense of community and belonging it gives them. Four, therefore, we need to build a better nationalism, which brings people together instead of driving them apart.

The first point needs no elaboration. We are a globalised world, but we are also trapped by geography and circumstance. “Only 3.3 percent of the world’s population,” Tamir points out, “lives outside their country of birth.” Nutopia, the borderless state dreamed up by John Lennon and Yoko Ono, is not happening anytime soon.

If the only thing that citizens of a state have in common is geographical circumstance, it is not enough. If the state is a necessary construct, a nation is its necessary justification. “Political institutions crave to form long-term political bonding,” writes Tamir, “and for that matter they must create a community that is neither momentary nor meaningless.” Nationalism, she says, “endows the state with intimate feelings linking the past, the present, and the future.”

More pertinently, Tamir argues, people need nationalism. I am a humanist with a belief in individual rights, but Tamir says that this is not enough. “The term ‘human’ is a far too thin mode of delineation,” she writes. “Individuals need to rely on ‘thick identities’ to make their lives meaningful.” This involves a shared past, a common culture and distinctive values.

Tamir also points out that there is a “strong correlation between social class and political preferences.” The privileged elites can afford to be globalists, but those less well off are inevitably drawn to other narratives that enrich their lives. “Rather than seeing nationalism as the last refuge of the scoundrel,” writes Tamir, “we should start thinking of nationalism as the last hope of the needy.”

Tamir’s book bases its arguments on the West, but the argument holds in India as well. In a country with so much poverty, is it any wonder that nationalism is on the rise? The cosmopolitan, globe-trotting elites don’t have daily realities to escape, but how are those less fortunate to find meaning in their lives?

I have one question, though. Why is our nationalism so exclusionary when our nation is so inclusive?

In the nationalism that our ruling party promotes, there are some communities who belong here, and others who don’t. (And even among those who ‘belong’, they exploit divisions.) In their us-vs-them vision of the world, some religions are foreign, some values are foreign, even some culinary traditions are foreign – and therefore frowned upon. But the India I know and love is just the opposite of that.

We embrace influences from all over. Our language, our food, our clothes, our music, our cinema have absorbed so many diverse influences that to pretend they come from a single legit source is absurd. (Even the elegant churidar-kurtas our prime minister wears have an Islamic origin.) As an example, take the recent film Gully Boy: its style of music, the clothes its protagonists wear, even the attitudes in the film would have seemed alien to us a few decades ago. And yet, could there be a truer portrait of young India?

This inclusiveness, this joyous khichdi that we are, is what makes our nation a model for the rest of the world. No nation embraces all other nations as ours does. My India celebrates differences, and I do as well. I wear my kurta with jeans, I listen to ghazals, I eat dhansak and kababs, and I dream in the Indian language called English. This is my nationalism.

Those who try to divide us, therefore, are the true anti-nationals. We must reclaim nationalism from them.

The India Uncut Blog © 2010 Amit Varma. All rights reserved.
Follow me on Twitter.




is

Lessons from an Ankhon Dekhi Prime Minister

This is the 19th installment of The Rationalist, my column for the Times of India.

A friend of mine was very impressed by the interview Narendra Modi granted last week to Akshay Kumar. ‘Such a charming man, such great work ethic,’ he gushed. ‘He is the kind of uncle I would want my kids to have.’ And then, in the same breath, he asked, ‘How can such a good man be such a bad prime minister?’

I don’t want to be uncharitable and suggest that Modi’s image is entirely manufactured, so let’s take the interview at face value. Let’s also grant Modi his claims about the purity of his neeyat (intentions), and reframe the question this way: when it comes to public policy, why do good intentions often lead to bad outcomes? To attempt an answer, I’ll refer to a story a friend of mine, who knows Modi well, once told me about him. 

Modi was chilling with his friends at home more than a decade ago, and told them an incident from his childhood. His mother was ill once, and the young Narendra was tending to her. The heat was enervating, so the boy went to the switchboard to switch on the fan. But there was no electricity. My friend said that as he told this story, Modi’s eyes filled with tears. Even after all these years, he was moved by the memory.

My friend used this story to make the point that Modi’s vision of the world is experiential. If he experiences something, he understands it. When he became chief minister of Gujarat, he made it his stated mission to get reliable electricity to every part of Gujarat. No doubt this was shaped by the time he flicked a switch as a young boy and the fan did not budge. Similarly, he has given importance to things like roads and cleanliness, since he would have experienced the impact of those as a young man.

My term for him, inspired by Rajat Kapoor’s 2014 film, is ‘the ankhon dekhi prime minister’. At one level, this is a good thing. He sees a problem and works for the rest of his life to solve it. But what of things he cannot experience?

The economy is a complex beast, as is society itself, and beyond a certain level, you need to grasp abstract concepts to understand how the world works. You cannot experience them. For example, spontaneous order, or the idea that society and markets, like language, cannot be centrally directed or planned. Or the positive-sum nature of things, which is the engine of our prosperity: the idea that every transaction is a win-win game, and that for one person to win, another does not have to lose. Or, indeed, respect for individual rights and free speech.

One understands abstract concepts by reading about them, understanding them, applying them to the real world. Modi is not known to be a reader, and this is not his fault. Given his background, it is a near-miracle that he has made it this far. He wasn’t born into a home with a reading culture, and did not have either the resources or the time when he was young to devote to reading. The only way he could learn about the world, thus, was by experiencing it.

There are two lessons here, one for Modi himself and others in his position, and another for everyone.

The lesson in this for Modi is a lesson for anyone who rises to such an important position, even if he is the smartest person in the world. That lesson is to have humility about the bounds of your knowledge, and to surround yourself with experts who can advise you well. Be driven by values and not confidence in your own knowledge. Gather intellectual giants around you, and stand on their shoulders.

Modi did not do this in the case of demonetisation, which he carried out against the advice of every expert he consulted. We all know the damage it caused to the economy.

The other learning from this is for all of us. How do we make sense of the world? By connecting dots. An ankhon-dekhi approach will get us very few dots, and our view of the world will be blurred and incomplete. The best way to gather more dots is reading. The more we read, the better we understand the world, and the better the decisions we take. When we can experience a thousand lives through books, why restrict ourselves to one?

A good man with noble intentions can make bad decisions with horrible consequences. The only way to hedge against this is by staying humble and reading more. So when you finish reading this piece, think of an unread book that you’d like to read today – and read it!

The India Uncut Blog © 2010 Amit Varma. All rights reserved.
Follow me on Twitter.




is

Population Is Not a Problem, but Our Greatest Strength

This is the 21st installment of The Rationalist, my column for the Times of India.

When all political parties agree on something, you know you might have a problem. Giriraj Singh, a minister in Narendra Modi’s new cabinet, tweeted this week that our population control law should become a “movement.” This is something that would find bipartisan support – we are taught from school onwards that India’s population is a big problem, and we need to control it.

This is wrong. Contrary to popular belief, our population is not a problem. It is our greatest strength.

The notion that we should worry about a growing population is an intuitive one. The world has limited resources. People keep increasing. Something’s gotta give.

Robert Malthus made just this point in his 1798 book, An Essay on the Principle of Population. He was worried that our population would grow exponentially while resources would grow arithmetically. As more people entered the workforce, wages would fall and goods would become scarce. Calamity was inevitable.

Malthus’s rationale was so influential that this mode of thinking was soon called ‘Malthusian.’ (It is a pejorative today.) A 20th-century follower of his, Harrison Brown, came up with one of my favourite images on this subject, arguing that a growing population would lead to the earth being “covered completely and to a considerable depth with a writhing mass of human beings, much as a dead cow is covered with a pulsating mass of maggots.”

Another Malthusian, Paul Ehrlich, published a book called The Population Bomb in 1968, which began with the stirring lines, “The battle to feed all of humanity is over. In the 1970s hundreds of millions of people will starve to death in spite of any crash programs embarked upon now.” Ehrlich was, as you’d guess, a big supporter of India’s coercive family planning programs. “I don’t see,” he wrote, “how India could possibly feed two hundred million more people by 1980.”

None of these fears have come true. A 2007 study by Nicholas Eberstadt called ‘Too Many People?’ found no correlation between population density and poverty. The greater the density of people, the more you’d expect them to fight for resources – and yet, Monaco, which has 40 times the population density of Bangladesh, is doing well for itself. So is Bahrain, which has three times the population density of India.

Not only does population not cause poverty, it makes us more prosperous. The economist Julian Simon pointed out in a 1981 book that through history, whenever there has been a spurt in population, it has coincided with a spurt in productivity. Such as, for example, between Malthus’s time and now. There were around a billion people on earth in 1798, and there are around 7.7 billion today. As you read these words, consider that you are better off than the richest person on the planet then.

Why is this? The answer lies in the title of Simon’s book: The Ultimate Resource. When we speak of resources, we forget that human beings are the finest resource of all. There is no limit to our ingenuity. And we interact with each other in positive-sum ways – every voluntary interaction leaves both people better off, and the amount of value in the world goes up. This is why we want to be part of economic networks that are as large, and as dense, as possible. This is why most people migrate to cities rather than away from them – and why cities are so much richer than towns or villages.

If Malthusians were right, essential commodities like wheat, maize and rice would become relatively scarcer over time, and thus more expensive – but they have actually become much cheaper in real terms. This is thanks to the productivity and creativity of humans, who, in Eberstadt’s words, are “in practice always renewable and in theory entirely inexhaustible.”

The error made by Malthus, Brown and Ehrlich is the same error that our politicians make today, and not just in the context of population: zero-sum thinking. If our population grows and resources stay the same, of course there will be scarcity. But this is never the case. All we need to do to learn this lesson is look at our cities!

This mistaken thinking has had savage humanitarian consequences in India. Think of the unborn millions over the decades because of our brutal family planning policies. How many Tendulkars, Rahmans and Satyajit Rays have we lost? Think of the immoral coercion still carried out on poor people across the country. And finally, think of the condescension of our politicians, asserting that people are India’s problem – but always other people, never themselves.

This arrogance is India’s greatest problem, not our people.

The India Uncut Blog © 2010 Amit Varma. All rights reserved.
Follow me on Twitter.




is

For this Brave New World of cricket, we have IPL and England to thank

This is the 24th installment of The Rationalist, my column for the Times of India.

Back in the last decade, I was a cricket journalist for a few years. Then, around 12 years ago, I quit. I was jaded as hell. Every game seemed like déjà vu, nothing new, just another round on the treadmill. Although I would remember her fondly, I thought me and cricket were done.

And then I fell in love again. Cricket has changed in the last few years in glorious ways. There have been new ways of thinking about the game. There have been new ways of playing the game. Every season, new kinds of drama form, new nuances spring up into sight. This is true even of what had once seemed the dullest form of the game, one-day cricket. We are entering into a brave new world, and the team leading us there is England. No matter what happens in the World Cup final today – a single game involves a huge amount of luck – this England side are extraordinary. They are the bridge between eras, leading us into a Golden Age of Cricket.

I know that sounds hyperbolic, so let me stun you further by saying that I give the IPL credit for this. And now, having woken you up with such a jolt on this lovely Sunday morning, let me explain.

Twenty20 cricket changed the game in two fundamental ways. Both ended up changing one-day cricket. The first was strategy.

When the first T20 games took place, teams applied an ODI template to innings-building: pinch-hit, build, slog. But this was not an optimal approach. In ODIs, teams have 11 players over 50 overs. In T20s, they have 11 players over 20 overs. The equation between resources and constraints is different. This means that the cost of a wicket goes down, and the cost of a dot ball goes up. Critically, it means that the value of aggression rises. A team need not follow the ODI template. In some instances, attacking for all 20 overs – or as I call it, ‘frontloading’ – may be optimal.

West Indies won the T20 World Cup in 2016 by doing just this, and England played similarly. And some sides began to realise that they had been underestimating the value of aggression in one-day cricket as well.

The second fundamental way in which T20 cricket changed cricket was in terms of skills. The IPL and other leagues brought big money into the game. This changed incentives for budding cricketers. Relatively few people break into Test or ODI cricket, and play for their countries. A much wider pool can aspire to play T20 cricket – which also provides much more money. So it makes sense to spend the hundreds of hours you are in the nets honing T20 skills rather than Test match skills. Go to any nets practice, and you will find many more kids practising innovative aggressive strokes than playing the forward defensive.

As a result, batsmen today have a wider array of attacking strokes than earlier generations. Because every run counts more in T20 cricket, the standard of fielding has also shot up. And bowlers have also reacted to this by expanding their arsenal of tricks. Everyone has had to lift their game.

In one-day cricket, thus, two things have happened. One, there is better strategic understanding about the value of aggression. Two, batsmen are better equipped to act on the aggressive imperative. The game has continued to evolve.

Bowlers have reacted to this with greater aggression on their part, and this ongoing dialogue has been fascinating. The cricket writer Gideon Haigh once told me on my podcast that the 2015 World Cup featured a battle between T20 batting and Test match bowling.

This England team is the high watermark so far. Their aggression does not come from slogging. They bat with a combination of intent and skills that allows them to coast at 6-an-over, without needing to take too many risks. In normal conditions, thus, they can coast to 300 – any hitting they do beyond that is the bonus that takes them to 350 or 400. It’s a whole new level, illustrated by the fact that at one point a few days ago, they had seven consecutive scores of 300 to their name. Look at their scores over the last few years, in fact, and it is clear that this is the greatest batting side in the history of one-day cricket – by a margin.

There have been stumbles in this World Cup, but in the bigger picture, those are outliers. If England have a bad day in the final and New Zealand play their A-game, England might even lose today. But if Captain Morgan’s men play their A-game, they will coast to victory. New Zealand does not have those gears. No other team in the world does – for now.

But one day, they will all have to learn to play like this.

The India Uncut Blog © 2010 Amit Varma. All rights reserved.
Follow me on Twitter.





is

Virtuoso Studio: How Do You Name Simulation Histories in Virtuoso ADE Assembler?

This blog describes an efficient way to name the histories saved by the simulation runs in Virtuoso ADE Assembler.(read more)




is

Doc Assistant A-Z: Making the Most of the Cadence Cloud-Based Help Viewer: Pt. 2

At a bustling Cadence event, we met Adrian, an intern at a startup who immerses himself in Cadence tools for his research and work.

Adrian was enthusiastic about the innovative technologies at his disposal but faced a significant challenge: internet access was limited to a single machine for new joiners, forcing interns to wait in line for their turn to use online resources.

Adrian's excitement soared when he discovered a game-changing solution: Doc Assistant. The cloud-based help viewer, Doc Assistant, ships with all Cadence tools, enabling Adrian to access help resources offline from any machine equipped with the software. This meant Adrian could continue his research and work seamlessly, irrespective of internet availability!

Meeting Cadence users and customers at such events has given us the opportunity to showcase how they can benefit from the diverse features that Doc Assistant offers.

With that note, welcome back to our Doc Assistant A-Z blog series! In Part 1, we explored key features and benefits that our innovative viewer brings to the table. Today, in Part 2, we'll dive deeper into the advanced functionalities and customization options that make Doc Assistant indispensable for its users.

Whether you're looking to streamline your workflow or enhance your user experience, this blog will provide the insights you need to fully leverage the capabilities of our documentation viewer. Let’s get started!

What Makes Doc Assistant Stand Out?

Here are a few (more) cool features of Doc Assistant!

History and Bookmarks: Want to refer to the topic you read last week? Of course, you can! Doc Assistant stores your browsing activity as History. You can also bookmark topics and revisit them later.

Indexing Capabilities: Looking for seamless search capabilities? The advanced indexing capabilities of Doc Assistant enhance the accessibility and manageability of documents. Doc Assistant automatically creates a search index if it is missing or broken.

Jump Links: Worried about scrolling through lengthy topics? Fret no more! Use the jump links in each topic to quickly navigate to different sections within the same topic or across topics. Jump links reduce the need for excessive scrolling and let you access relevant content swiftly.

Just-in-Time Notifications: Looking for alerts and messages? That’s supported. Doc Assistant displays notifications about important events, including errors, warnings, information, and success messages.

Keyword-Based Search Suggestions: You somewhat know your search keyword, but not quite sure? No worries. Just start typing what you know. Keyword and page suggestions are displayed dynamically as you type, providing a more sophisticated and intuitive search experience.

Library-Switch Support: Want to view documents from other libraries? Doc Assistant, by default, displays documents for the currently active release in your machine. You can access documents from other releases by configuring the associated documentation libraries.

Multimedia Support: Want to view product demos? Multimedia support in Doc Assistant lets you play videos, listen to audio, and view images without opening any external application.

Navigation Made Easy: Worried that you’ll get lost in an infinite doc loop? Not at all. The intuitive navigation controls in Doc Assistant are designed to provide you with a fluid and efficient experience. The Doc Assistant user interface is clean and logically organized, with easy-to-access documentation links.

That's not all. We have more coming your way. Until next time, take care and stay tuned for our next edition!

Want to Know More?

Here's a video about Doc Assistant
Visit the Doc Assistant web page
Read the Doc Assistant FAQ document

For any questions or general feedback, write to docassistant.support@cadence.com.

Subscribe to receive email notifications about our latest Custom IC Design blog posts.

Happy reading!

-Priya Sriram, on behalf of the Doc Assistant Team






is

Doc Assistant A-Z: Making the Most of the Cadence Cloud-Based Help Viewer Part 3

Welcome back to the Doc Assistant A-Z blog series!

Since the launch of Doc Assistant, we've been gathering feedback and input from our customers regarding their experiences with our latest documentation viewer. My interaction with Ralf was particularly useful and interesting.

Ralf is a design engineer who works on complex schematics and intricate layouts. For each release, he is challenged with the task of verifying the tool and feature changes across multiple releases. He shared with me that he has been using Doc Assistant’s capabilities to help him achieve this.

Ralf explained that he utilizes Doc Assistant to open and compare documents from different releases side-by-side, seamlessly tracking updates across multiple releases and verifying those updates in his Cadence tools. Additionally, in Doc Assistant’s online mode, he compares documents across previous tool versions, ensuring a thorough review of any changes. Finally, he was happy to share with me that Doc Assistant features have helped him significantly reduce the time he spends on identifying such changes.

You, of course, can also achieve such productivity gains using several Doc Assistant features designed to help simplify such tasks!

In previous editions of this blog series, we looked at some key features and benefits of Doc Assistant. If you've missed those editions, I would highly recommend that you read them.

In this third installment, we're diving into some more of Doc Assistant's key capabilities.

Open Multiple Documents

Want to refer to multiple docs at the same time? That’s easy!

Open each doc on a separate tab in Doc Assistant. 

Personalized Content Recommendations

Is it a hassle to navigate through all docs each time? You don’t have to.

You can tailor your Doc Assistant preferences to match your content requirements.

PDF Support

Do you prefer downloading and reading a PDF instead of an HTML?

That’s also supported.

Quick Access to Relevant Search Results

Are you pressed for time, and yet want to run a comprehensive doc search? You’re covered.

In online mode, search runs on all available product documentation, and the results are listed from multiple sources.

Resource Links

Looking for more information about a topic you’ve just read? That’s handy.

Look out for content recommendations!

Share Content

Want to share a useful doc with the rest of your team? That’s easy.

With a single click, Doc Assistant lets you share content with one or more readers.

Submit Feedback

Your feedback is important to us. Use the Submit Feedback feature to share your comments and inputs.

To learn more about how to use the above features, check out the Doc Assistant User Guide.

These are just a few of the productivity gain features in Doc Assistant. We’ll cover more in the next blog in the series.

Want to Know More?

Here's a video about Doc Assistant
Visit the Doc Assistant web page
Read the Doc Assistant FAQ document

If you have any feedback on Doc Assistant or would like to request more information or a demo, please contact docassistant.support@cadence.com.

Subscribe to receive email notifications about our latest Custom IC Design blog posts.

Happy reading!

Priya Sriram, on behalf of the Doc Assistant Team





is

CIS Standard BOM to Excel 365

I'm not able to export a CIS Standard BOM to a Microsoft 365 Excel (business subscription, version 2111).
Selecting the "Export BOM report to Excel" option opens a new Excel window, but OrCAD (17.4-2019 S023) won't fill it with any data...

I tried it on a different PC with Microsoft Office Professional Plus 2019 Excel (strangely the version number is the same: 2111) and with OrCAD 17.4-2019 S016 and it worked flawlessly.

Is anybody else experiencing the same issue?
Is the Excel variant, the OrCAD version, or the PC itself causing this?
Thanks for any help!