
Is the Role of Test Chips Changing at Advanced Foundry Nodes?

Test chips are becoming more widespread and more complex at advanced process nodes as design teams use early silicon to diagnose problems before production. But the practice is also raising questions about whether it remains viable at 7nm and 5nm, given the rising cost of prototyping advanced technology, including mask tooling and wafers.

Semiconductor designers have long been making test chips to validate test structures, memory bit cells, larger memory blocks, and precision analog circuits like current mirrors, PLLs, temperature sensors, and high-speed I/Os. This has been done at 90nm, 65nm, 40nm, 32nm, 28nm, etc., so having test chips at 16nm, 7nm, or finer geometries should not be a surprise. Still, as costs rise, there is debate about whether those chips are over-used given advancements in tooling, or whether they should be utilized even more, with more advanced diagnostics built into them.

Modern EDA tools are very good. You can simulate and validate almost anything with a certain degree of accuracy and correctness. The key to accurate simulation results is the quality of the foundry data provided. The key to good layouts is a high-quality, accurate DRC deck that catches everything you are not supposed to do in the layout. Most of the challenges at advanced nodes are in the FEOL, where semiconductor physics and lithography play outsize roles. Effects that were negligible at mature nodes can manifest themselves as big problems at 7nm or 5nm. Process variation across the wafer, and across a large die, also presents problems that were of no consequence at more mature nodes.

The real questions to be asked are as follows:

What is the role of test chips in SoC designs?

  1. Do all hard IP require test chips for validation?
  2. Are test chips more important at advanced nodes compared to more mature nodes?
  3. Does the importance of test-chip validation depend on the type of IP protocol?
  4. What are the risks if I do not validate in silicon?

In complex SoC designs, there are many high-performance protocols, such as LPDDR4/4x PHY, PCIe4 PHY, USB3.0 PHY, and 56G/112G SerDes. Each of these IP is highly complex in its own right. If a failure goes undetected before SoC integration and tapeout, the cost of a retrofit is huge. This is why the common practice is to validate each of these complex IP in silicon before committing to use it in chip integration.

Test chips are used to validate that the IP is properly designed and meets the functional specifications of the protocols. They are also used to validate that sufficient margins are designed into the IP to mitigate variances due to process tolerances. All high-performance hard IP go through this test chip/silicon validation process, and marginality is often detected at this stage. At advanced nodes, it is also important to build the test chips at different process corners. This simulates process variation in production wafers so as to maximize yields. Advanced protocols such as 112G, GDDR6, HBM2, and PCIe4 are incredibly complex and sensitive to process variation. It is almost impossible to design these circuits and guarantee their performance without going through the test-chip route.

Besides validating the performance of the IP protocols, test silicon is also used to validate the robustness of ESD structures, sensitivity to latch-up, and performance degradation over wide temperature ranges. All of these items are more critical at advanced nodes than at more mature nodes. Test chips are vehicles for guaranteeing design integrity in bite-size chunks. It is better to deal with potential issues in smaller blocks than to try to fix them in the final integrated SoC.

Test chips will continue to play a vital role in helping IP and SoC teams lower the risk of their designs and assure optimal quality and performance for the foreseeable future. They are not going away!

To read more, please visit https://semiengineering.com/test-chips-play-larger-role-at-advanced-nodes/





Dimensions to Verifying a USB4 Design

Verification of a USB4 router design is not just about USB4 but also about the three other major protocols it carries, namely USB3, DisplayPort (DP), and PCI Express (PCIe). These protocols can be tunneled simultaneously through a USB4 router. Put simply, tunneling converts native USB3, DP, or PCIe protocol traffic into USB4 transport layer packets, carries those packets through the USB4 fabric, and converts them back into the original native protocol traffic.
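To make that round trip concrete, here is a minimal Python sketch of the encapsulate/route/decapsulate idea. All class and field names are hypothetical, invented for illustration; they are not the actual packet formats defined in the USB4 specification.

    # Toy model of protocol tunneling: wrap native traffic in a transport
    # packet, carry it through the fabric, and unwrap it at the far end.
    from dataclasses import dataclass

    @dataclass
    class NativePacket:
        protocol: str   # "USB3", "DP", or "PCIe"
        payload: bytes

    @dataclass
    class TransportPacket:
        hop_id: int     # identifies the path through the USB4 fabric
        protocol: str   # records which adapter must decapsulate it
        payload: bytes

    def encapsulate(pkt: NativePacket, hop_id: int) -> TransportPacket:
        # Protocol adapter ingress: wrap native traffic for transport.
        return TransportPacket(hop_id, pkt.protocol, pkt.payload)

    def decapsulate(tp: TransportPacket) -> NativePacket:
        # Protocol adapter egress: restore the original native traffic.
        return NativePacket(tp.protocol, tp.payload)

    # Three protocols tunneled simultaneously through the same fabric:
    for proto in ("USB3", "DP", "PCIe"):
        tp = encapsulate(NativePacket(proto, b"data"), hop_id=42)
        assert decapsulate(tp) == NativePacket(proto, b"data")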

It may sound simple, but perhaps it is not.

Several parts of a router come into the picture to convert native protocol traffic, route it to its intended destination, and convert it back to its original form. Among them are the USB3, DP, and PCIe protocol adapters; the transport mechanism with its routing, flow control, paths, and path set-up and teardown; and control and configuration, including the configuration spaces.

That is not all. There are core USB4-specific logical layer intricacies as well, which ensure that all the USB4 ports and links work as intended to provide up to 40Gbps of speed and that USB4 traffic flows throughout the fabric in the intended way. These bring to the table features such as the high-speed link, ordered sets, lane initialization, the lane adapter state machine, low power states, lane bonding, RS-FEC, the sideband channel, sleep and wake, and error checking.

All of this adds up to a very large verification space against which a USB4 router design should be verified. Broken down, this space falls into the following major dimensions:

  • Protocol Adapter Layer
    • USB3 tunneling
    • DP tunneling
    • PCIe tunneling
  • Host Interface Adapter Layer
  • Transport Layer
    • Flow control
    • Routing
    • Paths
  • Configuration layer and control packet protocol
  • Configuration spaces
  • Logical Layer

Verifying each of these dimensions independently is not enough to qualify the design as verified; they must also be verified in various combinations with one another, as sketched below. Overall, all the parts of a USB4 router system need to work together coherently.
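As a rough illustration of how quickly these combinations multiply, the toy Python sketch below enumerates pairwise scenarios from the dimension list above. The pairing strategy is our own illustration, not something mandated by the USB4 specification.

    # Enumerate cross-dimension scenarios for a test plan: every pair of
    # dimensions must be exercised together at least once; triples (or
    # deeper) can be added where interactions are known to be risky.
    from itertools import combinations

    dimensions = [
        "USB3 tunneling", "DP tunneling", "PCIe tunneling",
        "Host interface adapter layer", "Flow control", "Routing",
        "Paths", "Control packet protocol", "Configuration spaces",
        "Logical layer",
    ]

    for pair in combinations(dimensions, 2):
        print(" x ".join(pair))

    print(len(list(combinations(dimensions, 2))), "pairwise scenarios")  # 45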

For example, the following diagram depicts the various layers that a USB4 router may comprise.

A USB4 router, or a domain of routers, does not work on its own. Each domain has a Connection Manager, a software-based entity that manages the domain. A router provides the capabilities a Connection Manager needs to carry out its responsibilities of managing a domain.

It would not be an exaggeration to say that the spectrum of USB4 router verification ranges from the minute details of the logical layer to system-level dependencies, as the whole USB4 system is brought up layer by layer, step by step.

Cadence has a mature Verification IP solution that can help in the verification of USB4 designs. Cadence took an active part in the working group that defined the USB4 specification and has created a comprehensive Verification IP that multiple member companies have been using for the last two years.

If you plan to build a USB4-compatible design, you can reduce the risk of adopting a new technology by using our proven and mature USB4 Verification IP. Please contact your local Cadence account team for more details and to get connected.





PCIe 3.0 Still Shines While PCIe Keeps Evolving

PCIe has been widely adopted in the electronics industry since its debut in 2003 (the PCIe 1.0 standard release), across a wide breadth of applications from data center servers and networking to mobile, AI/ML, automotive, IoT, and many others. It is a versatile, high-performance, robust, and mature interconnect standard with full backward compatibility (e.g., a PCIe 3.0 device can still function well in a PCIe 4.0 system), which has enabled a solid and strong PCIe ecosystem in the industry. While users have been enjoying systems such as desktops and laptops powered (or, more specifically, "bridged") by PCIe 3.0 since 2010, the industry is pushing hard to enable the PCIe 4.0 ecosystem. Earlier this year, AMD announced that its X570 chipset would support the PCIe 4.0 interface, and Phison introduced the world's first PCIe 4.0 SSD.

On the standards front, the official PCIe 5.0 specification came out in May 2019, doubling the data rate to 32GT/s from 16GT/s in PCIe 4.0. Based on the announcement PCI-SIG made in June 2019, the PCIe 6.0 standard will be released in 2021, with the goal of further doubling the data rate to 64GT/s by incorporating PAM4 signaling.

PCIe Protocol Evolution
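As a quick back-of-the-envelope illustration of this evolution, the Python sketch below computes approximate raw per-lane bandwidth from the publicly known transfer rates and line encodings (8b/10b for Gen 1/2, 128b/130b for Gen 3 through 5, PAM4 with FLIT for Gen 6). It deliberately ignores protocol overhead above the physical coding layer, so treat the numbers as approximations.

    # Approximate one-direction, per-lane bandwidth by PCIe generation.
    gens = {
        # generation: (transfer rate in GT/s, coding efficiency)
        "PCIe 1.0": (2.5, 8 / 10),
        "PCIe 2.0": (5.0, 8 / 10),
        "PCIe 3.0": (8.0, 128 / 130),
        "PCIe 4.0": (16.0, 128 / 130),
        "PCIe 5.0": (32.0, 128 / 130),
        "PCIe 6.0": (64.0, 1.0),  # PAM4/FLIT; near-unity coding efficiency
    }

    for name, (gtps, eff) in gens.items():
        gbytes = gtps * eff / 8  # GB/s per lane, one direction
        print(f"{name}: ~{gbytes:.2f} GB/s per lane")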

Having said that, is the latest generation of PCIe always desired?  

My answer would be positive. Just as carmakers and enthusiasts have always pursued faster cars, there is no doubt that these speed upgrades in the electronics world provide a tremendous benefit, especially for applications craving the most throughput, such as data center, HPC, networking, cloud, and AI applications.

But does every application have to opt for the fastest speed (bandwidth)? My view leans toward "not really." We don't need a three-second sports car (meaning 0-60mph acceleration < 3s) for the daily commute; it would certainly add some driving fun on the road, but it may not be the best fit for most commuters.

Some applications are still well served by PCIe 3.0 (or even the older PCIe 2.0) for its balance of performance and cost. Those applications include, but are not limited to, IoT/consumer, edge AI, and non-enterprise SSDs. They typically need to trade off cost, power consumption (especially when battery powered), flexibility in changing product features, and time-to-market (TTM). To address this type of market need, Cadence also offers a PPA (performance, power, area) optimized PCIe 3.0 solution in addition to its high-performance PCIe 4.0 product line.

Cadence PCIe 3.0 PHY Solution (with Multi-Protocol Multi-Link feature)

By leveraging a multi-protocol SerDes implementation, the same Cadence PHY IP supports multi-protocol and multi-link operation. Such a multi-protocol PHY gives SoC developers optimum flexibility to integrate multiple commonly used interface protocols (e.g., PCIe 3.0 + USB 3.0) using only a single PHY design. This largely saves product development time (faster TTM), reduces the risk of integrating multiple different PHY instances for different protocol needs, and provides the configurability to enable different product features and protocols.

Some people might say the PCIe 3.0 era has passed. I am not yet convinced, as I still see its potential to shine in many market use cases. What do you think?

More Information

For more information on Cadence's PCIe IP offerings, see our PCI Express page.

For more information on PCIe in general, and on the various PCI standards, see the PCI-SIG website.






PCI-SIG DevCon 2019 APAC Tour: All Around Latest Spec Updates and Solution Offering

The PCI-SIG DevCon 2019 APAC tour came to Tokyo and Taipei this year. The focus was predominantly on the latest updates to PCIe Gen 5, whose version 1.0 specification was released this May. On day 1, PCI-SIG provided a series of presentations comprehensively covering all aspects of the Gen 5 specification, including protocol, logical, electrical, and compliance updates. On day 2 (Taipei only), several member companies shared their views on testing, PCB analysis, and signal integrity. The exhibit was another spotlight of the event, where member companies showcased their latest PCIe solutions.

Presentation Track (Taipei), Exhibit (Tokyo), Exhibit (Taipei) 

Cadence, as the market-leading PCIe IP vendor, participated in the APAC tour this year, bringing its latest PCIe IP solution offerings (Gen 5/4) to the region and showcasing two live demo setups on the exhibit floor. One setup was the PCIe software development kit (SDK); the other was an interop/compliance/debug platform. Both come with the Cadence PCIe Gen 4 hardware setup and its corresponding software kit.

The SDK can be used for device driver development, firmware development, and pre-silicon emulation. It supports Xtensa and Arm processors running Linux, and it is equipped with an Ethernet interface that can be used for remote debugging. It also supports PCIe stress tests for speed change, link enable/disable, entry/exit of low-power states, etc.

Cadence PCIe 4.0 Software Development Kit

The system interop/compliance/debug platform was set up to test against multiple endpoint and system platforms. It comes with integrated Cadence software for basic system debug without the need for an analyzer, covering such analyses as LTSSM history, TS1/TS2 transmitted/received with timestamps, link training phases, capture of packet error details, capture of PHY TX/RX internal state machine details, etc.

Cadence PCIe System Interop/Compliance/Debug Platform

 

The year 2019 has certainly been a fruitful one for PCIe: more Gen 4 products are now available in the market, the Gen 5 v1.0 specification was officially ratified, and PCI-SIG revealed its Gen 6 specification development. We were glad to be part of this APAC tour and to have the chance to further introduce Cadence's complete and comprehensive PCIe IP solution.

See you all next year in APAC again!

More Information

For more information on Cadence's PCIe IP offerings, see our PCI Express page.

For more information on PCIe in general, and on the various PCI standards, see the PCI-SIG website.






Verification of the Lane Adapter FSM of a USB4 Router Design Is Not Simple

Verifying the lane adapter state machine in a router design is quite an involved task and requires verification from several aspects, including its link training functionality.

The diagram below shows two lane adapters connected to each other, each going through the link training process. Each training sub-state transition is contingent on conditions for both transmission and reception of the relevant ordered sets. Until both conditions are satisfied, an adapter cannot transition to the next training sub-state.

As can be deduced from the lane adapter state machine section of the USB4 specification, the reception condition for a transition to the next training sub-state is less strict than the transmission condition. For example, for the LOCK1 to LOCK2 transition, the reception condition requires only two SLOS symbols in a row to be detected, while the transmission condition requires at least four complete SLOS1 ordered sets to be sent.

Given these conditions, lane adapter A may detect the two SLOS or TS ordered sets sent by lane adapter B at the other end right at the start, as soon as it begins transmitting its own SLOS or TS ordered sets. It is equally possible that lane adapter A has not yet detected them even after meeting the condition of sending the minimum number of SLOS or TS ordered sets.

In the latter case, even though lane adapter A has satisfied the transmission condition, it cannot transition to the next sub-state because the reception condition is not yet met. Lane adapter A must therefore wait until it has detected the required number of ordered sets before it can go to the next sub-state. But this wait cannot be endless: the specification defines timeouts, after which the training process may be re-attempted.

This interlocked way of operating also ensures that the state machine of one lane adapter does not go out of sync with that of the other, as the sketch below illustrates. Such scenarios can occur whenever the lane adapter state machine transitions to the training state from other states.
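The toy Python sketch below models this interlock in a drastically simplified form. The ordered-set counts and the timeout value are illustrative only; the real conditions and timeouts live in the USB4 specification.

    # Toy model of an interlocked sub-state transition: advance only when
    # both the TX and RX conditions hold; otherwise wait, or retry training
    # once the (spec-defined) timeout expires.
    def try_advance(sent_os: int, received_os: int,
                    min_tx: int = 4, min_rx: int = 2,
                    elapsed_ms: float = 0.0, timeout_ms: float = 100.0) -> str:
        tx_ok = sent_os >= min_tx      # e.g., >= 4 complete SLOS1 ordered sets sent
        rx_ok = received_os >= min_rx  # e.g., >= 2 SLOS symbols detected in a row
        if tx_ok and rx_ok:
            return "ADVANCE"           # both conditions met: next sub-state
        if elapsed_ms >= timeout_ms:
            return "RETRY_TRAINING"    # timeout: re-attempt the training process
        return "WAIT"                  # e.g., TX done but RX not yet satisfied

    assert try_advance(sent_os=4, received_os=0) == "WAIT"
    assert try_advance(sent_os=4, received_os=2) == "ADVANCE"
    assert try_advance(sent_os=4, received_os=0, elapsed_ms=150) == "RETRY_TRAINING"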

Cadence has a mature Verification IP solution for comprehensively verifying these and other aspects of the logical layer of a USB4 router design.





Vintage Vega

Over ten years ago, Suzanne Vega hit a terribly sexy groove with an album called Nine Objects of Desire that made me seek out every CD she has done since then. She’s kept us waiting for six years for her new studio effort, but it’s such vintage Vega that the reward is well worth the wait.

The first thing to note about Beauty & Crime is that producer Jimmy Hogarth and mixer Tchad Blake have tuned the album's tracks entirely to suit Vega's rather inflexible, breathy voice. With that sonic help, Vega is freed up to focus on enunciating the layers behind her lyrics. Yet Hogarth and Blake also manage to seed each song with finely crafted arrangements and subtle hooks that make them musically interesting.

Although Vega uses a large canvas to record her ruminations, her most touching songs are those that are personal. On “Ludlow Street” she quietly mourns the passing of her brother: “I find each stoop and doorway’s incomplete/without you there”.

On the superbly produced “Bound”, she seems to be confirming her longtime friend Paul Mills’s continuing interest in her after her divorce from Mitchell Froom in 2001. On “As You Are Now” she manages – against all odds – to fit a parent’s love for her child into four sweet verses.






One Chai and a Wills Navy Cut

Pablo Bartholomew’s beautiful photo-show “Outside In” opened in Manhattan a few evenings ago. The exhibition is being held at Bodhi Art in Chelsea. Black-and-white photographs from the seventies and the eighties—reflecting Bartholomew’s engagement with people and places in Delhi, Bombay, and Calcutta.

These are not the pictures that made Bartholomew famous. The undying image of the father brushing the dust from the face of the child he is burying—that was the iconic photograph from the Bhopal tragedy in 1984. It also won for Bartholomew, still in his twenties, the World Press Photo’s Picture of the Year Award.

The images in “Outside In” do not commemorate grim tragedies or celebrate well-publicised public events. Instead, they are documents that offer intimate recall of a period and a milieu. Please click here to look at these photographs.

People who share a context with the photographer will have their own private reading of the scenes. For me, they evoke days when happiness seemed only one chai and a Wills Navy Cut away. There is charm and candor in these scenes. And because the young believe they will live forever, there is nothing defensive or stuck-up or overly self-conscious about their faces and postures.

Even the language of the captions is true to this spirit: “Self-portrait after a trippy night…”; “Nona writing and Alok zonked out…”; “Hanging out with the Maharani Bagh gang….” The exhibition catalogue has a fine essay by Aveek Sen that has also been published in the latest issue of Biblio.






The Hard Edges of Modern Lives

This new film is the latest remake of Devdas, but what is equally interesting is the fact that it is in conversation with films made in the West. Unlike Bhansali’s more spectacular version of the older story, Anurag Kashyap’s Dev.D is a genuine rewriting of Sarat Chandra’s novel. Kashyap doesn’t flinch from depicting the individual’s downward spiral, but he also gives women their own strength. He has set out to right a wrong—or, at least, tell a more realistic, even redemptive, story. If these characters have lost some of the affective depth of the original creations, they have also gained the hard edges of modern lives.

We don’t always feel the pain of Kashyap’s characters, but we are able to recognize them more readily. Take Chandramukhi, or Chanda, a schoolgirl humiliated by an MMS sex scandal. Her father, protective and patriarchal, says that he has seen the tape and thinks she knew what she was doing. “How could you watch it?” the girl asks angrily. And then, “Did you get off on it?” When was the last time a father was asked such a question on the Hindi screen? With its frankness toward sex and masturbation, Dev.D takes a huge step toward honesty. More than the obvious tributes to Danny Boyle’s Trainspotting, more than the over-extended psychedelic adventure on screen, as much as the moody style of film-making, it is the candour of such questions that makes Dev.D a film truly part of world cinema.






This Video Hurts the Sentiments of Hindu’s [sic] Across the World

I loved Nina Paley’s brilliant animated film Sita Sings the Blues. If you’re reading this, stop right now—and watch the film here.

Paley has set the story of the Ramayana to the 1920s jazz vocals of Annette Hanshaw. The epic tale is interwoven with Paley’s account of her husband’s move to India from where he dumps her by e-mail. The Ramayana is presented with the tagline: “The Greatest Break-Up Story Ever Told.”

All of this should make us curious. But there are other reasons for admiring this film:

The film returns us to the message that is made clear by every village-performance of the Ramlila: the epics are for everyone. Also, there is no authoritative narration of an epic. This film is aided by three shadow puppets who, drawing upon memory and unabashedly incomplete knowledge, boldly go where only pundits and philosophers have gone before. The result is a rendition of the epic that is gloriously a part of the everyday.

This idea is taken even further. Paley says that the work came from a shared culture, and it is to a shared culture that it must return: she has put the film on Creative Commons—viewers are invited to distribute, copy, remix the film.

Of course, such art drives the purists and fundamentalists crazy. On the Channel 13 website, “Durgadevi” and “Shridhar” rant about the evil done to Hinduism. It is as if Paley had lit her tail (tale!) and set our houses on fire!






Here Is Why the Indian Voter Is Saddled With Bad Economics

This is the 15th installment of The Rationalist, my column for the Times of India.

It’s election season, and promises are raining down on voters like rose petals on naïve newlyweds. Earlier this week, the Congress party announced a minimum income guarantee for the poor. This Friday, the Modi government released a budget full of sops. As the days go by, the promises will get bolder, and you might feel important that so much attention is being given to you. Well, the joke is on you.

Every election, HL Mencken once said, is “an advance auction sale of stolen goods.” A bunch of competing mafias fight to rule over you for the next five years. You decide who wins, on the basis of who can bribe you better with your own money. This is an absurd situation, which I tried to express in a limerick I wrote for this page a couple of years ago:

POLITICS: A neta who loves currency notes/ Told me what his line of work denotes./ ‘It is kind of funny./ We steal people’s money/And use some of it to buy their votes.’

We’re the dupes here, and we pay far more to keep this circus going than this circus costs. It would be okay if the parties, once they came to power, provided good governance. But voters have given up on that, and now only want patronage and handouts. That leads to one of the biggest problems in Indian politics: We are stuck in an equilibrium where all good politics is bad economics, and vice versa.

For example, the minimum guarantee for the poor is good politics, because the optics are great. It’s basically Garibi Hatao: that slogan made Indira Gandhi a political juggernaut in the 1970s, at the same time that she unleashed a series of economic policies that kept millions of people in garibi for decades longer than they should have been.

This time, the Congress has released no details, and keeping it vague makes political sense, because it is hard to see how the plan can make economic sense. Depending on how ‘poor’ is defined, how much income is offered and what it costs, the plan will either be ineffective or unworkable.

The Modi government’s interim budget announced a handout for poor farmers that seemed rather pointless. Given our agricultural distress, offering a poor farmer 500 bucks a month seems almost like mockery.

Such condescending handouts solve nothing. The poor want jobs and opportunities. Those come with growth, which requires structural reforms. Structural reforms don’t sound sexy as election promises. Handouts do.

A classic example is farm loan waivers. We have reached a stage in our politics where every party has to promise them to assuage farmers, who are a strong vote bank everywhere. You can’t blame farmers for wanting them – they are a necessary anaesthetic. But no government has yet made a serious attempt at tackling the root causes of our agricultural crisis.

Why is it that Good Politics in India is always Bad Economics? Let me put forth some possible reasons. One, voters tend to think in zero-sum ways, as if the pie is fixed, and the only way to bring people out of poverty is to redistribute. The truth is that trade is a positive-sum game, and nations can only be lifted out of poverty when the whole pie grows. But this is unintuitive.

Two, Indian politics revolves around identity and patronage. The spoils of power are limited – that is indeed a zero-sum game – so you’re likely to vote for whoever can look after the interests of your in-group rather than care about the economy as a whole.

Three, voters tend to stay uninformed for good reasons, because of what Public Choice economists call Rational Ignorance. A single vote is unlikely to make a difference in an election, so why put in the effort to understand the nuances of economics and governance? Just ask, what is in it for me, and go with whatever seems to be the best answer.

Four, politicians have a short-term horizon, geared towards winning the next election. A good policy that may take years to play out is unattractive. A policy that will win them votes in the short term is preferable.

Sadly, no Indian party has shown a willingness to aim for the long term. The Congress has produced new Gandhis, but not new ideas. And while the BJP did make some solid promises in 2014, they did not walk that talk, and have proved to be, as Arun Shourie once called them, UPA + Cow. Even the Congress is adopting the cow, in fact, so maybe the BJP will add Temple to that mix?

Benjamin Franklin once said, “Democracy is two wolves and a lamb voting on what to have for lunch.” This election season, my friends, the people of India are on the menu. You have been deveined and deboned, marinated with rhetoric, seasoned with narrative – now enter the oven and vote.








India’s Problem is Poverty, Not Inequality

This is the 16th installment of The Rationalist, my column for the Times of India.

Steven Pinker, in his book Enlightenment Now, relates an old Russian joke about two peasants named Boris and Igor. They are both poor. Boris has a goat. Igor does not. One day, Igor is granted a wish by a visiting fairy. What will he wish for?

“I wish,” he says, “that Boris’s goat should die.”

The joke ends there, revealing as much about human nature as about economics. Consider the three things that happen if the fairy grants the wish. One, Boris becomes poorer. Two, Igor stays poor. Three, inequality reduces. Is any of them a good outcome?

I feel exasperated when I hear intellectuals and columnists talking about economic inequality. It is my contention that India’s problem is poverty – and that poverty and inequality are two very different things that often do not coincide.

To illustrate this, I sometimes ask this question: In which of the following countries would you rather be poor: USA or Bangladesh? The obvious answer is USA, where the poor are much better off than the poor of Bangladesh. And yet, while Bangladesh has greater poverty, the USA has higher inequality.

Indeed, take a look at the countries of the world as measured by the Gini Index, the standard metric used to measure inequality, and you will find that the USA, Hong Kong, Singapore and the United Kingdom all have greater inequality than Bangladesh, Liberia, Pakistan and Sierra Leone, which are much poorer. And yet, while the poor of Bangladesh would love to migrate to the unequal USA, I don’t hear of too many people wishing to go in the opposite direction.

Indeed, people vote with their feet when it comes to choosing between poverty and inequality. All of human history is a story of migration from rural areas to cities – which have greater inequality.

If poverty and inequality are so different, why do people conflate the two? A key reason is that we tend to think of the world in zero-sum ways. For someone to win, someone else must lose. If the rich get richer, the poor must be getting poorer, and the presence of poverty must be proof of inequality.

But that’s not how the world works. The pie is not fixed. Economic growth is a positive-sum game and leads to an expansion of the pie, and everybody benefits. In absolute terms, the rich get richer, and so do the poor, often enough to come out of poverty. And so, in any growing economy, as poverty reduces, inequality tends to increase. (This is counter-intuitive, I know, so used are we to zero-sum thinking.) This is exactly what has happened in India since we liberalised parts of our economy in 1991.

Most people who complain about inequality in India are using the wrong word, and are really worried about poverty. Put a millionaire in a room with a billionaire, and no one will complain about the inequality in that room. But put a starving beggar in there, and the situation is morally objectionable. It is the poverty that makes it a problem, not the inequality.

You might think that this is just semantics, but words matter. Poverty and inequality are different phenomena with opposite solutions. You can solve for inequality by making everyone equally poor. Or you could solve for it by redistributing from the rich to the poor, as if the pie was fixed. The problem with this, as any economist will tell you, is that there is a trade-off between redistribution and growth. All redistribution comes at the cost of growing the pie – and only growth can solve the problem of poverty in a country like ours.

It has been estimated that in India, for every one percent rise in GDP, two million people come out of poverty. That is a stunning statistic. When millions of Indians don’t have enough money to eat properly or sleep with a roof over their heads, it is our moral imperative to help them rise out of poverty. The policies that will make this possible – allowing free markets, incentivising investment and job creation, removing state oppression – are likely to lead to greater inequality. So what? It is more urgent to make sure that every Indian has enough to fulfil his basic needs – what the philosopher Harry Frankfurt, in his fine book On Inequality, called the Doctrine of Sufficiency.

The elite in their airconditioned drawing rooms, and those who live in rich countries, can follow the fashions of the West and talk compassionately about inequality. India does not have that luxury.








For this Brave New World of cricket, we have IPL and England to thank

This is the 24th installment of The Rationalist, my column for the Times of India.

Back in the last decade, I was a cricket journalist for a few years. Then, around 12 years ago, I quit. I was jaded as hell. Every game seemed like déjà vu, nothing new, just another round on the treadmill. Although I would remember her fondly, I thought me and cricket were done.

And then I fell in love again. Cricket has changed in the last few years in glorious ways. There have been new ways of thinking about the game. There have been new ways of playing the game. Every season, new kinds of drama form, new nuances spring into sight. This is true even of what had once seemed the dullest form of the game, one-day cricket. We are entering a brave new world, and the team leading us there is England. No matter what happens in the World Cup final today – a single game involves a huge amount of luck – this England side are extraordinary. They are the bridge between eras, leading us into a Golden Age of Cricket.

I know that sounds hyperbolic, so let me stun you further by saying that I give the IPL credit for this. And now, having woken you up with such a jolt on this lovely Sunday morning, let me explain.

Twenty20 cricket changed the game in two fundamental ways. Both ended up changing one-day cricket. The first was strategy.

When the first T20 games took place, teams applied an ODI template to innings-building: pinch-hit, build, slog. But this was not an optimal approach. In ODIs, teams have 11 players over 50 overs. In T20s, they have 11 players over 20 overs. The equation between resources and constraints is different. This means that the cost of a wicket goes down, and the cost of a dot ball goes up. Critically, it means that the value of aggression rises. A team need not follow the ODI template. In some instances, attacking for all 20 overs – or as I call it, ‘frontloading’ – may be optimal.

West Indies won the T20 World Cup in 2016 by doing just this, and England played similarly. And some sides began to realise that they had been underestimating the value of aggression in one-day cricket as well.

The second fundamental way in which T20 cricket changed cricket was in terms of skills. The IPL and other leagues brought big money into the game. This changed incentives for budding cricketers. Relatively few people break into Test or ODI cricket, and play for their countries. A much wider pool can aspire to play T20 cricket – which also provides much more money. So it makes sense to spend the hundreds of hours you are in the nets honing T20 skills rather than Test match skills. Go to any nets practice, and you will find many more kids practising innovative aggressive strokes than playing the forward defensive.

As a result, batsmen today have a wider array of attacking strokes than earlier generations. Because every run counts more in T20 cricket, the standard of fielding has also shot up. And bowlers have also reacted to this by expanding their arsenal of tricks. Everyone has had to lift their game.

In one-day cricket, thus, two things have happened. One, there is better strategic understanding about the value of aggression. Two, batsmen are better equipped to act on the aggressive imperative. The game has continued to evolve.

Bowlers have reacted to this with greater aggression on their part, and this ongoing dialogue has been fascinating. The cricket writer Gideon Haigh once told me on my podcast that the 2015 World Cup featured a battle between T20 batting and Test match bowling.

This England team is the high watermark so far. Their aggression does not come from slogging. They bat with a combination of intent and skills that allows them to coast at 6-an-over, without needing to take too many risks. In normal conditions, thus, they can coast to 300 – any hitting they do beyond that is the bonus that takes them to 350 or 400. It’s a whole new level, illustrated by the fact that at one point a few days ago, they had seven consecutive scores of 300 to their name. Look at their scores over the last few years, in fact, and it is clear that this is the greatest batting side in the history of one-day cricket – by a margin.

There have been stumbles in this World Cup, but in the bigger picture, those are outliers. If England have a bad day in the final and New Zealand play their A-game, England might even lose today. But if Captain Morgan’s men play their A-game, they will coast to victory. New Zealand does not have those gears. No other team in the world does – for now.

But one day, they will all have to learn to play like this.








Cadence JasperGold Brings Formal Verification into Mainstream IC Verification Flows

Formal verification is a complex technology that has traditionally required experts or specialized teams who stood apart from the IC design and verification flow. Taking a different approach, a new release of the Cadence JasperGold formal verification platform (June 8, 2015) provides formal techniques that complement simulation, emulation, and debugging in the form of “Apps” or under-the-hood solutions that any design or verification engineer can use.

JasperGold was the initial (in fact, the only) product of Jasper Design Automation, which Cadence acquired in 2014. Jasper pioneered the formal Apps concept several years ago. While the company had previously sold JasperGold as a one-size-fits-all solution, it began selling semi-automated JasperGold Apps that solved specific problems using formal analysis technology.

The new release is the next generation of JasperGold and will be available later this month. It includes three major improvements over previous Cadence and Jasper formal analysis offerings:

  • A unified Cadence Incisive and JasperGold formal verification platform delivers up to 15X performance gain over previous solutions.
  • JasperGold is integrated into the Cadence System Development Suite, where it provides formal-assisted simulation, emulation, and coverage. As a result, System Development Suite users can find bugs three months earlier than with existing verification methods.
  • JasperGold’s formal analysis engines are integrated with the recently announced Indago debug platform, automating root cause analysis and on-the-fly, what-if exploration.

Best of Both Formal Verification Worlds

Taking advantage of technologies from both Cadence and Jasper, the new JasperGold represents a “best of both worlds” solution, according to Pete Hardee, product management director at Cadence. This solution combines technologies from the Cadence Incisive Enterprise Verifier and Incisive Formal Verifier with JasperGold formal analysis engines.

For example, to ease migration from Incisive formal tools, Cadence has integrated an Incisive common front end into the JasperGold apps platform. Jasper formal engines can run within the Incisive run-time environment. Cadence has also brought some selected Incisive formal engines into JasperGold.

As shown to the right, the JasperGold platform supports both the existing JasperGold front-end parser and the Incisive front-end parser. Hardee observed that this dual parser arrangement simplifies migration from Incisive formal tools to JasperGold, and provides a common compilation environment for people who want to use JasperGold with Incisive simulation. Further, the common run-time environment enables formal-assisted simulation.

The combination of JasperGold engines and Incisive engines supports two use models for formal analysis: formal proofs and bug hunting. In the first case, formal engines try all combinations of inputs without a testbench. The test is driven by formal properties written in languages such as SVA (SystemVerilog Assertions) or PSL (Property Specification Language). Completing a proof of a property establishes exhaustively that something can or cannot happen. This provides a “much stronger result” than simulation, Hardee said.

He also noted that formal analysis doesn’t necessarily require that all properties are completed. “You can get a lot of value even if proofs don’t complete,” he said. “Proofs that run deep enough to find bugs are just fine.”
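To illustrate the flavor of this, here is a toy Python sketch that exhaustively checks a property on a small invented design for every input sequence up to a fixed depth, in the spirit of a bounded proof. It is an analogy for what formal engines do, not a description of how JasperGold's engines are implemented.

    # Exhaustive bounded checking: unlike a testbench, which drives some
    # input sequences, this tries *all* of them up to a depth bound.
    from itertools import product

    def step(state: int, inp: int) -> int:
        # A 3-bit saturating counter: counts up on inp=1, resets on inp=0.
        return 0 if inp == 0 else min(state + 1, 7)

    def check_all(depth: int) -> bool:
        # Prove "state never exceeds 7" for every input sequence up to depth.
        for seq in product((0, 1), repeat=depth):  # all 2**depth sequences
            state = 0
            for inp in seq:
                state = step(state, inp)
                if state > 7:                      # property violated
                    print("counterexample:", seq)
                    return False
        return True

    print(check_all(depth=10))  # True: exhaustive proof up to depth 10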

Bug hunting involves random searches, and the JasperGold bug hunting engines are very fast. However, these engines don’t necessarily take the most direct path to a bug. So Cadence engineers brought a constraint solver over from Incisive and integrated it into JasperGold. “It looks at the constraints in the environment and gives you a better starting point,” Hardee said. “It takes more up-front time, but once you’ve done that, the bug hunting engines can actually take a shorter path and find a bug a lot quicker.”

Another new JasperGold capability, from the Incisive Formal Verifier, is called “search pointing.” It uses simulation to penetrate deep into the state space, and then kicks off a random formal search from a given point reached in simulation. This technique makes it possible to find bugs that lie very deep in the design, as in the sketch below.
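Here is a toy Python sketch of the search-pointing idea, using an invented one-dimensional "state space": a cheap directed simulation reaches a deep waypoint, and a random search then probes from there. It is an analogy only, not Cadence's implementation.

    # Simulate deep, then search: random exploration from reset would almost
    # never reach depth 1000, but from a simulated waypoint it is easy.
    import random

    def sim_to_waypoint(steps: int) -> int:
        # Directed simulation: follow a known path (e.g., a boot sequence).
        state = 0
        for _ in range(steps):
            state += 1
        return state

    def random_search(start: int, target: int, tries: int = 10_000) -> bool:
        # Random formal-style search launched from the simulated waypoint.
        for _ in range(tries):
            state = start
            for _ in range(20):  # short random excursions around the waypoint
                state += random.choice((-1, 1))
                if state == target:
                    return True
        return False

    waypoint = sim_to_waypoint(990)       # deep state, cheap via simulation
    print(random_search(waypoint, 1000))  # "bug" at depth 1000 is now nearby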

It is probably clear by now that a number of different formal “engines” may be required to solve a given verification problem. Traditionally, a formal tool (or user) will farm a problem out to many engines and see which one works best. To put more intelligence into that process, Cadence launched the Trident “multi-cooperating engine” a couple of years ago. That has now been brought into JasperGold, where it helps “orchestrate” the engines according to what will work best for the design. This is a big part of the reason for the 15X speedup noted earlier in this post.

Integration with System Development Suite

The Cadence System Development Suite is an integrated set of hardware/software development and verification engines, including virtual prototyping, Incisive simulation, emulation, and FPGA-based prototyping. As shown below, JasperGold technology is integrated into the System Development Suite in several places, including formal-assisted debug, formal-assisted verification closure, formal-assisted simulation, formal-assisted emulation, and the Incisive vManager verification planning tool.

Formal-assisted emulation sounds like it should be easy, especially since Cadence has both accelerated verification IP (VIP) and assertion-based VIP. However, there’s a complication. Accelerated VIP represents less verification content than simulation VIP, because you have to remove many checkers to get VIP to compile on a Palladium emulator. That’s because the Palladium requires synthesizable code.

What you can do, however, is use assertion-based VIP in “snoop mode” as shown below. Assertion-based VIP coded in synthesizable SystemVerilog can replace the missing checkers in accelerated VIP. In this diagram, everything in the green box is running in the emulator and is thus completely accelerated.

 

Another example of formal-assisted emulation has to do with deep traces. As Hardee noted, emulation will produce very long traces, and it can be very difficult to find a point of interest in the trace and determine what caused an error. With formal-assisted emulation, users can find interesting events within the traces and create properties that mark them, so a debugger can find these events and trace back to the root cause.

Formal-assisted verification closure is available with the new JasperGold release. This is possible because you can use the vManager product to determine which tasks were completed by formal engines. It’s important information for verification managers who are not used to formal tools, Hardee noted.

Another aspect of formal-assisted verification closure is the JasperGold Unreachability Analysis (UNR) App, which can save simulation users weeks of time and effort. This App takes in the simulation coverage database and the RTL, automatically generates properties to explore coverage holes, and determines whether each hole is reachable or unreachable. It then generates a database of unreachable coverage points. If unreachable code does something useful, there’s a bug in the design or the testbench; if not, you don’t have to worry about it. The diagram below shows how it works, and the toy sketch that follows illustrates the underlying reachability question.
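The Python sketch below illustrates just the reachability question at the heart of UNR, using an invented state graph. The actual App works on RTL and coverage databases, not on a hand-written graph.

    # Classify coverage holes by reachability from reset in a toy graph.
    from collections import deque

    edges = {                   # hypothetical control/state graph
        "reset": ["idle"],
        "idle":  ["run", "error"],
        "run":   ["idle", "done"],
        "error": [],
        "done":  [],
        "legacy_mode": ["idle"],  # note: nothing leads *to* legacy_mode
    }

    def reachable_from(start: str) -> set:
        seen, todo = {start}, deque([start])
        while todo:
            for nxt in edges.get(todo.popleft(), []):
                if nxt not in seen:
                    seen.add(nxt)
                    todo.append(nxt)
        return seen

    coverage_holes = ["done", "legacy_mode"]
    live = reachable_from("reset")
    for hole in coverage_holes:
        print(hole, "reachable" if hole in live else "unreachable -> waive or fix")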

Formal-Assisted Debugging

The third major component of the JasperGold announcement is the integration of formal analysis into the Indago debugging platform. As shown below, this platform has several apps, including the Indago Debug Analyzer. Two formal debug capabilities from the Jasper Visualize environment have been added to the Indago Debug Analyzer:

  • Highlight Relevant Logic: This highlights the “cone of influence,” or the logic that is involved in reaching a given point (see the sketch after this list).
  • Why: This button highlights the immediate causes of a given event, and allows users to trace backwards in time.
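A cone of influence is essentially a backward traversal of fan-in. The Python sketch below shows the idea on an invented netlist; it is an illustration of the concept, not Indago's implementation.

    # Compute the cone of influence of a signal by tracing fan-in backwards.
    fanin = {                     # signal -> signals that drive it (made up)
        "fail_flag": ["cmp_out"],
        "cmp_out":   ["counter_q", "limit_reg"],
        "counter_q": ["counter_d"],
        "counter_d": ["counter_q", "enable"],
        "limit_reg": ["cfg_bus"],
        "debug_led": ["heartbeat"],  # unrelated logic: not in the cone
    }

    def cone_of_influence(sig: str) -> set:
        cone, todo = set(), [sig]
        while todo:
            s = todo.pop()
            for drv in fanin.get(s, []):
                if drv not in cone:
                    cone.add(drv)
                    todo.append(drv)
        return cone

    print(sorted(cone_of_influence("fail_flag")))
    # ['cfg_bus', 'cmp_out', 'counter_d', 'counter_q', 'enable', 'limit_reg']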

 

More formal capabilities will come with the Indago Advanced Debug Analyzer app, scheduled for release toward the end of 2015. It includes Quiet Trace, a Jasper capability that reduces trace activity to the transactions relevant to an event. A what-if analysis also allows on-the-fly trace editing and recalculation to explore effects and sensitivities, without having to re-compile and re-execute the simulation.

Finally, Cadence has a Superlint flow that is now fully integrated with the JasperGold Visualize debugger. This two-tiered flow includes a basic lint capability as well as automated formal analysis based on the JasperGold Structural Property Synthesis app. “This could be a very good entry point for designers to start using formal,” Hardee said.

“Formal is taking off,” Hardee concluded. “People are no longer talking about return on investment for formal—they have established that. Now they’re supporting a proliferation of formal in their companies such that a wider set of people experience the benefit from that proven return on investment.”

Further information is available at the JasperGold Formal Verification Platform (Apps) page.

Richard Goering

Related Blog Posts

JUG Keynote—How Jasper Formal Verification Technology Fits into the Cadence Flow

Why Cadence Bought Jasper—A New Era in Formal Analysis

Q&A: An R&D Perspective on Formal Verification—Past, Present and Future





DAC 2015: How Academia and Industry Collaboration Can Revitalize EDA

Let’s face it – the EDA industry needs new people and new ideas. One of the best places to find both is academia, and a presentation at the Cadence Theater at the recent Design Automation Conference (DAC 2015) described collaboration models that are working today.

The presentation was titled “Industry/Academia Engagement Models – From PhD Contests to R&D Collaborations.” It included these speakers, shown from left to right in the photo below:

  • Prof. Xin Li, Electrical and Computer Engineering, Carnegie Mellon University (CMU)
  • Chuck Alpert, Senior Software Architect, Cadence
  • Prof. Laleh Behjat, Department of Electrical and Computer Engineering, University of Calgary

 

Alpert, who was filling in for Zhuo Li, Software Architect at Cadence, was the vice chair of DAC 2015 and will be the general chair of DAC 2016 in Austin, Texas. “My team at Cadence really likes to collaborate with universities,” he said. “We’re a big proponent of education because we really need the best and brightest students in our industry.”

Contests Boost EDA Research

One way that Cadence collaborates with academia is participation in contests. “It’s a great way to formulate problems to academia,” Alpert said. “We can have the universities work on these problems and get some strategic direction.”

For example, Cadence has been involved with the annual CAD contest at the International Conference on Computer-Aided Design (ICCAD) since the contest was launched in 2012. This is the largest worldwide EDA R&D contest, and it is sponsored by the IEEE Council on EDA (CEDA) and the Taiwan Ministry of Education. Its goals are to boost EDA research in advanced real-world problems and to foster industry-academia collaboration.

Contestants can participate in one or more problems in the three areas of system design, logic synthesis and verification, and physical design. The 2015 contest attracted 112 teams from 12 regions. Cadence contributes one problem per year in the logic synthesis area. Zhuo Li was the 2012 co-chair and the 2013 chair. The awards will be given at ICCAD in November 2015.

Another step that Cadence has taken, Alpert said, is to “hire lots of interns.” His own team has four interns at the moment. One advantage to interning at Cadence, he said, is that students get to see real-world designs and understand how the tools work. “It helps you drive your research in a more practical and useful direction,” he said.

The Cadence Academic Network co-sponsors the ACM SIGDA PhD Forum at DAC, and Xin Li and Zhuo Li are on the organizing committee. This event is a poster session for PhD students to present and discuss their dissertation research with people in the EDA community. This year’s forum was “packed,” Alpert said, and it’s clear that the event needs a bigger room.

Finally, Alpert noted, Cadence researchers write and publish technical papers at DAC and other conferences, and Cadence people serve on the DAC technical program committee. “We try to be involved with the academic community on a regular basis,” Alpert said. “We want the best and the brightest people to go into EDA because there is still so much innovation that’s needed. It’s a really cool place to be.”

Research Collaboration Exposes Failure Rates

Xin Li presented an example of a successful research collaboration between CMU and Cadence. The challenge was to find a better way to estimate potential failure rates in memory. As noted in a previous blog post, PhD student Shupeng Sun met this challenge with a new statistical methodology that won a Best Poster award at the ACM SIGDA PhD Forum at DAC 2014.

The new methodology is called Scaled-Sigma Sampling (SSS). It calculates the failure rate and accounts for variability in the manufacturing process while only requiring a few hundred, or a few thousand, sample circuit blocks. Previously, millions of samples were required for an accurate validation of a new design, and each sample could take minutes or hours to simulate. It could take a few weeks or months to run one validation.

The SSS methodology greatly reduces simulation time. It makes it possible, Li noted, to run simulations overnight and see the results in the morning.
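To see why so many samples were needed before, consider the toy Python illustration below. It shows only the motivating rare-event problem, not Sun's actual SSS algorithm; the closing comment paraphrases the general idea of sigma scaling under our own simplifying assumptions.

    # Plain Monte Carlo needs on the order of 1/p samples to see a failure
    # at all, which is hopeless when the failure probability p is tiny.
    import random

    def mc_failure_rate(p_true: float, n_samples: int) -> float:
        fails = sum(random.random() < p_true for _ in range(n_samples))
        return fails / n_samples

    p_true = 1e-6                               # a rare bit-cell failure rate
    print(mc_failure_rate(p_true, 1_000))       # almost surely 0.0: useless
    print(mc_failure_rate(p_true, 10_000_000))  # ~1e-6, but 10M simulations

    # SSS's trick, roughly: deliberately inflate the process-variation sigma
    # so failures become common, measure the now-estimable failure rate at a
    # few scaling factors, and extrapolate analytically back to the true sigma.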

Li shared his secret for success in collaborations. “I want to emphasize that before the collaboration, you have to understand the goal. If you don’t have a clear goal, don’t collaborate. Once you define the goal, stick to it and make it happen.”

Contest Provides Learning Experience

Last year Laleh Behjat handed two of her new PhD students a challenge. “I told them there is an ISPD [International Symposium for Physical Design] contest on placement, and I expect you to participate and I expect you to win. Not knowing anything about placement, I don’t think they realized what I was asking them.”

The 2015 contest was called the Blockage-Aware Detailed Routing-Driven Placement Contest. Results were announced at the end of March at ISPD. And the University of Calgary team, despite its lack of placement experience, took second place.

Such contests provide a good learning tool, according to Behjat. Graduate students in EDA, she said, “have to be good programmers. They have to work in teams and be collaborative, be able to innovate, and solve the hardest problems I have seen in engineering and science. And they have to think outside the box.” A contest can bring out all these attributes, she said.

Further, Behjat noted, contest participants had access to benchmarks and to a placement tool. They didn’t have to write tools to find out if their results were good. Industry sponsors, meanwhile, got access to good students and new approaches for solving problems.

“You can see Cadence putting a big amount of time, effort and money to get students here and get them excited about doing contests,” she said. She advised students in the theater audience to “talk to people in the Cadence booth and see if you can have more ideas for collaboration.”

Richard Goering

Related Blog Posts

EDA Plus Academia: A Perfect Game, Set and Match

Cadence Aims to Strengthen Academic Partnerships

BSIM-CMG FinFET Model – How Academia and Industry Empowered the Next Transistor





DAC 2015: Jim Hogan Warns of “Looming Crisis” in Automotive Electronics

EDA investor and former executive Jim Hogan is optimistic about automotive electronics, but he has some concerns as well. At the recent Design Automation Conference (DAC 2015), he delivered a speech titled “The Looming Quality, Reliability, and Safety Crisis in Automotive Electronics...Why is it and what can we do to avoid it?"

Hogan gave the keynote speech for IP Talks!, a series of over 30 half-hour presentations located at the ChipEstimate.com booth. Presenters included ARM, Cadence, eSilicon, Kilopass, Sidense, SilabTech, Sonics, Synopsys, True Circuits, and TSMC. Held in an informal setting, the talks addressed the challenges faced by SoC design teams and showed how the latest developments in semiconductor IP can contribute to design success.

Jim Hogan delivers keynote speech at DAC 2015 IP Talks!

Hogan talked about several phases of automotive electronics. These include assisted driving to avoid collisions, controlled automation of isolated tasks such as parallel parking, and, finally, fully autonomous vehicles, which Hogan expects to see in 15 to 20 years. The top immediate priorities for automotive electronics designers, he said, will be government regulation, fuel economy, advanced safety, and infotainment.

More Code than a Boeing 777

According to Hogan, today’s automobiles use 50-100 microcontrollers per car, resulting in a worldwide automotive semiconductor market of around $40 billion. The global market for advanced automotive electronics is expected to reach $240 billion by 2020. Software is growing faster in the automotive market than it is in smartphones. Hogan quoted a Ford vice president who observed that there are more lines of code in a Ford Fusion car than a Boeing 777 airplane.

One unique challenge for automotive electronics designers is long-term reliability. This is because a typical U.S. car stays on the road for 15 years, Hogan said. Americans are holding onto new vehicles for a record 71.4 months.

Another challenge is regulatory compliance. Aeronautics is highly regulated from manufacturing to air traffic control, and the same will probably be true of automated cars. Hogan speculated that the Department of Transportation will be the regulatory authority for autonomous cars. Today, automotive electronics providers must comply with the ISO 26262 automotive functional safety specification.

So where do we go from here? “We’ve got to change our mindset,” Hogan said. “We’ve got to focus on safety and reliability and demand a different kind of engineering discipline.” You can watch Hogan’s entire presentation by clicking on the video icon below, or clicking here. You can also watch other IP Talks! videos from DAC 2015 here.

https://youtu.be/qL4kAEu-PNw

 

Richard Goering

Related Blog Posts

DAC 2015: See the Latest in Semiconductor IP at “IP Talks!”

Automotive Functional Safety Drives New Chapter in IC Verification





EDA Retrospective: 30+ Years of Highlights and Lowlights, and What Comes Next

In 1985, as a relatively new editor at Computer Design magazine, I was asked to go forth and cover a new business called CAE (computer-aided engineering). I knew nothing about it, but I had been writing about design for test, so there seemed to be somewhat of a connection. Little did I know that “CAE” would turn into “EDA” and that I’d write about it for the next 30 years, for Computer Design, EE Times, Cadence, and a few others.

Now that I’m about to retire, I’m looking back over those 30 years. What a ride it has been! By the numbers, I covered 31 Design Automation Conferences (DACs), hundreds of new products, dozens of acquisitions and startups, dozens of lawsuits, and some blind alleys that didn’t work out (like “silicon compilation”). Chip design went from gate arrays and PLDs with a few thousand gates to processors and SoCs with billions of transistors.

In 1985 there were three big CAE vendors – Daisy Systems, Mentor Graphics, and Valid Logic. All sold bundled packages that included workstations and CAE software; in fact, Daisy and Valid designed and manufactured their own workstations. In the early 1980s a workstation with schematic capture and gate-level logic simulation might have set you back $120,000. In 1985 OrCAD, now part of Cadence, came out with a $500 schematic capture package running on IBM PCs.

Cadence and Synopsys emerged in the late 1980s, and by the 1990s the EDA industry was pretty much a software-only business (apart from specialized machines like simulation accelerators). Since the early 1990s the “big three” EDA vendors have been Cadence, Synopsys, and Mentor, giving the industry stability but allowing for competition and innovation.

Here, in my view, are some of the highlights that occurred during the past 30 years of EDA.

EDA is a Highlight

The biggest highlight in EDA is the existence of a commercial EDA industry! Marching hand in hand with the fabless semiconductor revolution, commercial EDA made it possible for hundreds of companies to design semiconductors, as opposed to a small handful that could afford large internal CAD operations and fabs. With hundreds of semiconductor companies as opposed to a half-dozen, there’s a lot more creativity, and you get the level of sophistication and intelligence that you see in your smartphone, video camera, tablet, gaming console, and car today.

CAE + CAD = EDA. This is not just a terminology issue. By the mid-1980s it became clear that front-end design (CAE) and physical design (CAD) belonged together. The big CAE vendors got involved in IC and PCB CAD, and presented increasingly integrated solutions. People got tired of writing “CAE/CAD” and “EDA” was born.

The move from gate-level design to RTL. This move happened around 1990, and in my view this is EDA’s primary technology success story during the past 30 years. Moving up in abstraction made the design and verification of much larger chips possible. Going from gate-level schematics to a hardware description language (HDL) revolutionized logic design and verification. Which would you rather do – draw all the gates that form an adder, or write a few lines of code and let a synthesis tool find an adder in your chosen technology?

Two developments made this shift in design possible. One was the emergence of commercial RTL synthesis (or “logic synthesis”) tools from Synopsys and other companies, which happened around 1990. Another was the availability of Verilog, developed by Gateway Design Automation and purchased by Cadence in 1989, as a standard RTL HDL. Although most EDA vendors at the time were pushing VHDL, designers wanted Verilog and that’s what most still use (with SystemVerilog coming on strong in the verification space).

IC functional verification underwent huge changes in the late 1990s and early 2000s, largely due to new technology developed by Verisity, which was acquired by Cadence in 2005. Before Verisity, verification engineers were writing and running directed tests in an ad-hoc manner. Verisity introduced or improved technologies such as pseudo-random test generation, coverage metrics, reusable verification IP, and semi-automated verification planning. The Verisity “e” language became a widely used hardware verification language (HVL).

The biggest way that EDA has expanded its focus has been through semiconductor IP. Today Synopsys and Cadence are leading providers in this area. Thanks to the availability of design and verification IP, many SoC designs today reuse as much as 80% of previous content. This makes it much, much faster to design the remaining portion. While IP began with fairly simple elements, today commercially available IP can include whole subsystems along with the software that runs on them. With IP, EDA vendors are providing not only design tools but design content.

Finally, the EDA industry has done an amazing job of keeping up with SoC complexity and with advanced process nodes. Thanks to intense and early collaboration between foundries, IP, and EDA providers, tools and IP have been ready for process nodes going down to 10nm.

Where Does ESL Fit?

In some ways, electronic system level (ESL) design is both a lowlight and a highlight. It’s a lowlight because people have been talking about it for 30 years and the acceptance and adoption have come very slowly. ESL is a highlight because it’s finally starting to happen, and its impact on design and verification flows could be dramatic. Still, ESL is vaguely defined and can be used to describe almost anything that happens at a higher abstraction level than RTL.

High-level synthesis (HLS) is an ESL technology that is seeing increasing use in production environments. Current HLS tools are not restricted to datapaths, and they produce RTL code that gives better quality of results than hand-written RTL. Another ESL methodology that’s catching on is virtual prototyping, which lets software developers write software pre-silicon using SystemC models. Both HLS and virtual prototyping are made possible by the standardization of SystemC and transaction-level modeling (TLM). However, it’s still not easy to use the same SystemC code for HLS and virtual prototyping.

And Now, Some Lowlights

Every new industry has some twists and turns, and EDA is no exception. For example, the EDA industry in the 1980s and 1990s sparked a lot of lawsuits. At EE Times my colleagues and I wrote a number of articles about EDA legal disputes, mostly about intellectual property, trade secrets, or patent issues. Over the past decade, fortunately, there have been far fewer EDA lawsuits than we had before the turn of the century.

Another issue that was troublesome in the 1980s and 1990s was so-called “standards wars.” These would occur as EDA vendors picked one side or the other in a standards dispute. For example, power intent formats were a point of conflict in the early 2000s, but the Common Power Format (CPF) and the Unified Power Format (UPF) are on the road to convergence today with the IEEE 1801 effort. As mentioned previously, Verilog and VHDL were competing for adoption in the early 1990s. For the most part, Verilog won, showing that the designer community makes the final decision about which standards will be used.

How on earth did there get to be something like 30 DFM (design for manufacturability) companies 10-12 years ago? To my knowledge, none of these companies are around today. A few were acquired, but most simply faded away. A lot of investors lost money. Today, VCs and angel investors are funding very few EDA or IP startups. There are fewer EDA startups than there used to be, and that’s too bad, because that’s where a lot of the innovation comes from.

Here’s another current lowlight -- not enough bright engineering or computer science students are joining EDA companies. They’re going to Google, Apple, Facebook, and the like. EDA is perceived as a mature industry that is still technically very difficult. We need to bring some excitement back into EDA.

Where Is EDA Headed?

Now we come to what you might call “headlights” and look at what’s coming. My list includes:

  • System Design Enablement. This term has been coined by Cadence to describe a focus on whole systems or end products including chips, packages, boards, embedded software, and mechanical components. There are far more systems companies than semiconductor companies, leaving a large untapped market that’s looking for solutions.
  • New frontiers for EDA. At a 2015 Design Automation Conference speech, analyst Gary Smith suggested that EDA can move into markets such as embedded software, mechanical CAD, biomedical, optics, and more.
  • Vertical markets. EDA has until now been “horizontal,” providing the same solution for all market segments. Going forward, markets like consumer, automotive, and industrial will have differing needs and will need optimized tools and IP.
  • Internet of Things. This is a current buzzword, but the impact on EDA remains uncertain. Many IoT devices will be heavily analog, use mature process nodes, and be dirt cheap. Lip-Bu Tan, Cadence CEO, recently pointed out that the silicon percentage of IoT revenue will be small and that a lot of the profits will be on the service side.

Moving On

For the past six years I’ve been writing the Industry Insights blog at Cadence.com. All things change, and with this post comes a farewell – I am retiring in late June and will be pursuing a variety of interests other than EDA. I’ll be watching, though, to see what happens next in this small but vital industry. Thanks for reading!

Richard Goering

 




v

Varying a digital IIR filter's poles&zeros over time

Is there a better approach to varying the coefficients of a digital IIR filter over time, to adjust the values of its poles and zeros, than just recalculating the whole thing every time it changes? For example, lots of synth programs can apply an LFO to the cutoff frequency of a low/high-pass filter. I can do some polynomial multiplication to get the coefficients for an IIR filter given its poles and zeros, but I am wondering if there is a better way to adjust them over time than simply redoing all the calculations for the new poles/zeros. In particular, I'm curious whether there is a method that will more or less work for an arbitrary number of poles and zeros.

You could use a filter implementation (state space) that directly uses the pole/zero values instead of a polynomial one. That might be computationally more expensive, though (as you are taking a trip through the domain of complex numbers even though your inputs and outputs are real), and possibly numerically iffy.

As far as I am aware, modifying filter behavior while introducing as few artefacts as possible is still an area of research. You might get away with just adjusting the filter coefficients if you do it slowly, but this does not mean it is the best method.

In an audio application, I assume they do not switch filter coefficients abruptly, but instead cross-fade between the (settled) first filter and the (mostly or completely settled) target filter to avoid audible artefacts.
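For concreteness, here is a minimal Python sketch of the two paths discussed above: expanding poles/zeros into coefficients by polynomial multiplication (the "recalculate everything" route from the question) and cross-fading from the settled old filter to the new one instead of switching coefficients abruptly. Everything here (function names, the direct-form II structure, the linear fade) is illustrative rather than canonical, and it assumes equal numbers of poles and zeros:

import numpy as np

def coeffs_from_poles_zeros(zeros, poles, gain=1.0):
    # Polynomial multiplication: np.poly expands roots into coefficients.
    b = gain * np.poly(zeros)   # numerator from the zeros
    a = np.poly(poles)          # denominator from the poles
    return b.real, a.real       # conjugate pairs give real coefficients

def df2_step(x, b, a, state):
    # One sample through a direct-form II IIR filter (a[0] assumed 1).
    w = x - np.dot(a[1:], state)
    y = b[0] * w + np.dot(b[1:], state)
    state[1:] = state[:-1].copy()   # shift the delay line
    state[0] = w
    return y

def crossfade_filters(x, b_old, a_old, b_new, a_new, fade_len):
    # Run both filters in parallel and linearly fade between their outputs.
    s_old = np.zeros(len(a_old) - 1)
    s_new = np.zeros(len(a_new) - 1)
    y = np.empty(len(x))
    for n, xn in enumerate(x):
        g = min(n / fade_len, 1.0)   # 0 = all old filter, 1 = all new filter
        y[n] = (1 - g) * df2_step(xn, b_old, a_old, s_old) + g * df2_step(xn, b_new, a_new, s_new)
    return y

The cross-fade costs two filter evaluations per sample during the fade, which is the price of avoiding the transient you would get from jumping the coefficients (and filter state) directly.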




v

LVS Error

Hi, I am new to Cadence. I started out designing an inverter and ran LVS. I made sure that the labels match in both schematic and layout. But when I run LVS I get the following error: "No matching sub-ckt found for NFET and PFET". Can someone provide insight into this?




v

Voltus power analysis

Hi,

I was wondering if it is possible to save the coordinates of each stripe and row of the power grid, and if it is possible to find out the effective resistance between two given points using Voltus.

My goal is to build a resistance model of the power grid.

Thanks




v

Innovus Stylus Common UI

How can I make Innovus start with the common UI instead of the legacy UI? When I launch Innovus with the command "innovus", I get the legacy UI. I have Innovus version 17.11 installed.

Thanks in advance.
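Not an authoritative answer, but my understanding (worth checking against your 17.11 release notes) is that Innovus can be launched directly into the Stylus common UI with a command-line switch:

innovus -stylus

If your release behaves differently, the Stylus common UI migration documentation should list the supported launch options.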




v

How do I write the LEF view of a power pad

I have a set of pads for use in a design and I was wondering which attributes should I put on each pin.

Let's say it has the following pins:

   - inh_vdd, inh_vss, CORE, PAD, where the first two are for the pad rings, the CORE pin is for use in the die, and the PAD pin is the bonding pad.

I guess CORE would need:

   CLASS CORE

   USE POWER  (or GROUND if this happened to be a ground pad)

What about inh_vdd and inh_vss? They would not have CLASS CORE, but would I use USE POWER/GROUND on them too?

   USE POWER (or GROUND)

   SHAPE ABUTMENT

And the bonding pad? Should I put it in the LEF? Or would that cause confusion for Innovus or Voltus? And what attributes would it use? USE POWER/GROUND only?

Do I need anything in the LEF to indicate that the pin CORE and the pin PAD are essentially the same thing, just different places on the same power pad?
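Not an authoritative answer, but for discussion, here is a rough sketch of what such a macro could look like in LEF. The pin names come from the post above; the CLASS PAD choice, SIZE, layer names, and coordinates are all placeholders to adapt to your library:

MACRO PVDD_PAD
  CLASS PAD ;
  SIZE 60 BY 120 ;
  PIN CORE
    DIRECTION INOUT ;
    USE POWER ;
    PORT
      LAYER M3 ;
        RECT 10.0 10.0 20.0 20.0 ;
    END
  END CORE
  PIN inh_vdd
    DIRECTION INOUT ;
    USE POWER ;
    SHAPE ABUTMENT ;
    PORT
      LAYER M3 ;
        RECT 0.0 50.0 60.0 60.0 ;
    END
  END inh_vdd
END PVDD_PAD

On tying CORE and PAD together: LEF does have a MUSTJOIN construct for pins that must be connected, which may be the relevant mechanism here, but check how your foundry's own pad LEF handles it before relying on that.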




v

Can Voltus do an IR drop analysis on a negative supply?

I have been using Voltus to do IR drop analysis but I got caught on one signal. It is negative. When I use:

set_pg_nets -net negsupply -voltage -5 -threshold -4.5 -package_net_name NEGSUP -force

Voltus dies with a backtrace. The beginning of the trace suggests that the problem is that it set the maximum to -5 and the minimum to 0. Is there another way to express a negative voltage supply for IR drop analysis?




v

Verilog Code to Custom IC Layout generation

Hello everyone,

I am Vinay and I am currently developing some digital circuits for my chip design for my master's thesis at University at Buffalo.

I am fairly new to Verilog and I don't seem to follow some of the things others find very easy.

These are the things I want to do, but have no clue how:

1. Develop certain arithmetic functionality in Verilog

2. Generate netlist for the verilog code

3. Feed the netlist file to Cadence Encounter to generate the digital circuits' layout for my chip

I can use Cadence Virtuoso and Encounter for this but I don't know the exact procedure to get this done.

Could someone please describe the detailed process for doing the things mentioned above?

Thank you.
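As a starting point, here is an untested sketch of steps 1 and 2. Step 1 can be as small as a file add8.v containing: module add8(input [7:0] a, input [7:0] b, output [8:0] s); assign s = a + b; endmodule. Step 2 below uses RTL Compiler (rc) commands as I remember them; the library path and name are placeholders, and the netlist/SDC it writes are what Encounter then imports for step 3:

# Untested RTL Compiler (rc) sketch -- paths and library names are placeholders
set_attribute lib_search_path ./libs     ;# directory holding your .lib files
set_attribute library slow.lib           ;# standard-cell timing library
read_hdl add8.v                          ;# step 1: the Verilog source
elaborate add8
synthesize -to_mapped -effort medium     ;# step 2: map the RTL to gates
write_hdl > add8_netlist.v               ;# gate-level netlist for Encounter
write_sdc > add8.sdc                     ;# timing constraints for P&R

From there, Encounter's design import flow reads the netlist, the SDC, and your LEF libraries; the exact import steps depend on the Encounter version, so follow the flow guide for your release.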




v

Interaction between Innovus and Virtuoso through OA database

Hello,

I created a floorplan view in Virtuoso (it contains pins and blockages). I am trying to run PnR in Innovus on the floorplan created in Virtuoso. I used set vars(oa_fp) "Library_name cell_name view_name" to read the view from Virtuoso. I am able to see the pins in Innovus but not the blockages. How do I get the blockages created in Virtuoso into Innovus?

Regards,
Amuu 




v

How to write Innovus GUI commands to a cmd/log file?

Hi, I have been using the Innovus GUI for several things and wonder if those commands can be written to a log or cmd file so I can use them in my flow script. Is there such an option that we can set?

Thanks




v

How to place pins inside of the edge in Innovus

Hi,

I am doing layout for a mixed-signal circuit in Innovus. I want to create a digital "donut" style of layout (i.e., put the analog circuit in the middle and surround it with digital circuits).

To do that, I need to place some pins inside the edge to connect to the analog circuit (as shown in my attachment), but the problem is that I cannot place pins inside the edge using the "pin editor" within Innovus. Any suggestions for placing pins inside?

Thank you so much for your time and effort.




v

checkRoute or VerifyConnectivity

Hello Everyone,

I was finishing the layout via Innovus and ran verifyConnectivity followed by checkRoute.

verifyConnectivity was okay, showing no errors and no warnings, whereas checkRoute showed 3 unrouted nets.

When I ran the checkRoute command again immediately, it showed no unrouted/unconnected nets.

Which of these commands should we trust, or is this really an unrouted-nets issue?

Looking forward to a response; thanks in advance.

Regards,

Vijay




v

Viewing RTL Code Coverage reports with XCELIUM

Hi,

There was a tool available with INCISIV called imc to view coverage reports.

The question is: how can we view the code coverage reports generated with XCELIUM? Is imc not available with XCELIUM?

Thanks in advance.




v

SystemVerilog package used inside VHDL-2008 design?

Hi,

Is it possible to compile a SystemVerilog package into a library and then use it in a VHDL-2008 design file? Is such a mixed-language flow supported?

I'm considering the latest versions of Incisive / Xcelium available today (Oct 2019).

Thank you,

Michal




v

Force cell equivalence between same-footprint and same-functionality hard-macros in Conformal LEC

For a netlist vs. netlist LEC flow we have to solve the following problem:

- in the RTL code we replicate a large array of N x M all-identical hard macros; let's call them MACRO_A

- MACRO_A is pre-assembled in Innovus and contains digital parts and analog parts (bottom-up hierarchical flow)

- at top-level (full-chip) we instantiate this array of all-identical macros

- in the top-level place-and-route flow we perform ecoChangeCell to remaster the top row of this array with MACRO_B

- MACRO_B is just a copy of the original MACRO_A cell with the same pin positions, the same internal digital functionality, and the same digital layout; the only slight differences are in one analog block inside the macro

- MACRO_A and MACRO_B have the same .lib file, generated with the do_extract_model command at the end of the Innovus flow; they differ only in the name of the macro

- when comparing the post-synthesis netlist vs. the post-place-and-route netlist, we load the .lib files of both macros in Conformal LEC

- the LEC flow fails because Conformal LEC sees only MACRO_A instantiated in the post-synthesis netlist but both MACRO_A and MACRO_B in the post-place-and-route netlist

Since both the digital functionality and the standard-cell layout are the same between MACRO_A and MACRO_B, we don't want to track this difference as early as the RTL stage; we just want to perform this ECO change in place-and-route and force Conformal to assume equivalence between MACRO_A and MACRO_B.

Basically, what I'm searching for is something similar to the add_instance_equivalences Conformal command, but one that works between the Golden and Revised designs on cell primitives/black boxes.

Is this flow supported?

Thanks in advance

Luca




v

How to dump waveform, fsdb in SimVision?

As the title says: how do I dump an FSDB waveform in SimVision?
(Simulation Analysis Environment  SimVision(64) 18.09-s001)
Please help.

Thanks.




v

Simvision

Unable to open the Source Browser. It goes into an infinite loop of loading, with the hourglass displayed and the message "loading snapshot".

The design was compiled with "15.20.s028", and that is the same version of SimVision I am using to open the code.

The waveform viewer is up and so is the Design Browser. I can take signals from the Design Browser and view them in the waveform viewer,

but the Source Browser hangs.

What should I be looking at to resolve this situation? Many of us at the company are suffering from it. It's a colossal waste of time.




v

SpectreRF Tutorials and Appnotes... Shhhh... We Have a NEW Best Kept Secret!

It's been a while since you've heard from me...it has been a busy year for sure. One of the reasons I've been so quiet is that I was part of a team working diligently on our latest best kept secret: The MMSIM 12.1.1/MMSIM 13.1 Documentation has...(read more)




v

Have You Tried the New Transmission Line Library (rfTlineLib)?

Happy New Year! Have you tried the new Transmission Line Library (rfTlineLib) yet? In case you missed it, rfTlineLib was introduced in IC 6.1.6 ISR1 plus MMSIM 12.1.1 -or- MMSIM13.1. You may wonder....Why should I use the new rfTlineLib ? Well...(read more)




v

See Cadence RF Technologies at IEEE International Microwave Symposium 2014

RF Enthusiasts, Come connect with Cadence RF experts and discover the latest advances in Cadence RF technologies, including Spectre RF at the IEEE International Microwave Symposium (IMS) 2014. This year, IMS will be held in Tampa, Florida. Cadence...(read more)




v

How to Specify Phase Noise as an Instance Parameter in Spectre Sources (e.g. vsource, isource, Port)

Last year, I wrote a blog post entitled Modeling Oscillators with Arbitrary Phase Noise Profiles . We now have an easier way to do this. Starting in MMSIM 13.1 , you can specify the phase noise as an instance parameter in Spectre sources, including...(read more)




v

Distortion Summary in New CDNLive YouTube Video and at IEEE IMS2014 Next Week!

Hi Folks, Check out this great new video on YouTube: CDNLive SV 2014: PMC Improves Visibility and Performance with Spectre APS In this video from CDNLive Silicon Valley 2014, Jurgen Hissen, principal engineer, MSCAD, at PMC, discusses an aggressive...(read more)




v

Noise Simulation in Spectre RF Using Improved Pnoise/Hbnoise and Direct Plot Form Options

Did you check out the new Pnoise and Hbnoise Choosing Analyses forms in the MMSIM 15.1 and IC6.1.7/ICADV12.2 releases? These forms have been significantly improved and simplified. The Direct Plot Form has also been enhanced and is much easier to use....(read more)




v

SKILL to Identify a LABEL over an Instance

Hello,

I am in need of a SKILL program to find all instances of a specific cell (including mosaics) throughout the hierarchy. The program should print each instance's name and xy coordinates at the top level, and extract the name of a label that is dropped on top of it. In case there is no label on top of the found instance, the program should print "No Label Found" in the report text file. This program aims to map pad cells within the top level.

I am using the Cadence solution below to find instances, and it works well. The missing feature is identifying labels that are on top of the found instances.

I tried to use the dbGetOverlaps() function within the code below in a few setups, but it seems to fail to identify the existence of labels on top of the found instances.

For example: 


overlapLabel=dbGetTrueOverlaps(cv cadr(instBox) list("M1" "text"))

I am interested to add to the Cadence's solution below some code in order to identify labels on top of the found instances.

Any tip would be greatly appreciated.

Thanks,

Danny


--------------------------------------------------------

procedure(HilightCellByArea(lib cell level)
let((cv instList rect instBox)
;; Disable any previous highlight set (hset is global). To keep old
;; highlights, comment out the next line.
when(boundp('hset) hset->enable=nil)
cv=geGetWindowCellView()
rect=enterBox(
         ?prompts list("Enter the first corner of your box."
                       "Enter the last corner of your box.")
        )
instList=dbGetOverlaps(cv rect nil level nil)
;; Uses the hilite layer packet. You can change it to a y0-y9 layer or any other hilite lpp.
;;hset = geCreateHilightSet(cv list("y0" "drawing") nil)
;;hset = geCreateHilightSet(cv list("hilite" "drawing1") nil)
hset = geCreateHilightSet(cv list("hilite" "drawing") nil)
hset->enable = t
;; myFileId must be a port opened earlier with outfile(); foundFlag is a global flag.
foreach(instId instList
    if(listp(instId)
        then
        ;; Hierarchical hit: transform its bBox to top-level coordinates.
        instBox=CCSTransformBBox(instId)
        instId=car(instBox)
        when(instId~>libName==lib && instId~>cellName==cell
            geAddHilightRectangle(hset cadr(instBox))
            fprintf(myFileId "Highlighted the %L instance %L of hierarchy at: %L\n"
                cell buildString(append1(caddr(instBox)~>name instId~>name) "/") cadr(instBox))
            foundFlag=t
        )
        else
        ;; Top-level hit: highlight the instance figure directly.
        when(instId~>libName==lib && instId~>cellName==cell
            geAddHilightFig(hset instId)
            fprintf(myFileId "Highlighted the %L instance %L of top cell at: %L\n"
                cell instId~>name instId~>bBox)
            foundFlag=t
        )
    ) ;if listp
) ;foreach
t
) ;let
) ;procedure

procedure(CCSTransformBBox(inst)
let((flatList y location)
;; Walk down the nested list collecting the instance path.
while(listp(inst)
    y = car(inst)
    flatList = append(flatList list(y))
    inst = cadr(inst) ; next level down
) ;while
;; Transform the leaf instance bBox into top-level coordinates.
location=dbTransformBBox(inst~>bBox dbGetHierPathTransform(list(flatList inst)))
list(inst location flatList)
) ;let
) ;procedure
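For the missing label lookup, here is an untested sketch of one way it might be done: query the shapes overlapping the transformed instance bBox (e.g. cadr(instBox)) at the top level and keep only label objects. The list("M1" "text") layer/purpose pair is a placeholder; use whatever lpp your pad labels are actually drawn on:

;; Untested sketch: return the text of labels overlapping bBox in cv,
;; or "No Label Found" if there are none. The lpp below is a placeholder.
procedure(CCSLabelsOverBox(cv bBox)
let((shapes labels)
    shapes = dbGetOverlaps(cv bBox list(list("M1" "text")) 0 nil)
    labels = setof(sh shapes sh~>objType == "label")
    if(labels
        then labels~>theLabel   ;; list of label strings
        else "No Label Found"
    )
) ;let
) ;procedure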




v

convert ircx to ict or emDataFile for Voltus-fi

Hi,

I want to convert an ircx file (which is from TSMC and includes EM information) to ict or emDataFile format for Voltus-Fi.

I have tried many ways, but I cannot make it work. Can anyone give me some advice?

Also, I do not have QRC installed.

Below are the tools installed on my server.

IC617-64b.500.21 is used.




v

VIVA Calculator function to get all the outputs and apply a procedure to all of them

Hi,

I am running simulations in ADE XL and need a custom function for ViVA to apply the same procedure to all signals saved in the output. For instance, I have clock nets, and I want to get all of them and look at the duty cycle, edge rate, etc.

It is a little more involved than the above since I have some regex and setof filtering to do before processing, but if I can get all signals for the current history, I can post-process them later.

In OCEAN, I am just doing outputs() and getting all saved signals, but I was not able to do this in the ViVA calculator due to the difficulties in getting the current history and test name and opening the result directory.
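For reference, an untested OCEAN sketch of the fallback path described above; the "clk" pattern and the ./psf results directory are placeholders, and the selectResults() call mirrors the usage elsewhere in this thread:

;; Untested OCEAN sketch: list the saved signals whose names match "clk".
openResults("./psf")
selectResults('tran)
clkNames = setof(name outputs() rexMatchp("clk" name))
foreach(name clkNames
    printf("%s\n" name)   ;; fetch v(name) here for duty-cycle post-processing
)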

thanks

yayla

Version Info:

ICADV12.3 64b 500.21

spectre -W =>

Tool 'cadenceMMSIM' Current project version '16.10.479'
sub-version  16.1.0.479.isr9




v

Default param values not saved in OA cell property.

When I place a pcell and do not change the W parameter (the default is used), the value is not saved in the OA cell properties.

When I now change the default value of the supermaster, the old pcell gets the new default value automatically because nothing is saved inside the OA cell for this parameter.

Do you have any idea how we can save the default values in the OA cell properties so that the value doesn't get updated when the defaults are changed in a new PDK?




v

Create the title & frame for a schematic view

Hi all,

I want to write a SKILL script to create the title and frame for a schematic view. My question is whether SKILL provides any functions for doing this.

Best regards,

Huy Hoang
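Not a definitive answer, but as an untested sketch: the generic shape-creation functions can draw a frame and a title directly into the open cellview. The list("annotate" "drawing") lpp, the coordinates, and the text height are placeholders to adapt to your library:

;; Untested sketch: draw a frame rectangle plus a title label
;; in the current schematic cellview.
cv = geGetEditCellView()
dbCreateRect(cv list("annotate" "drawing") list(0:0 297:210))
dbCreateLabel(cv list("annotate" "drawing") 10:10 "Title: myDesign"
              "lowerLeft" "R0" "stick" 5)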




v

How to save the cellview of all instances in a top cell faster?

I have a top cell and need to revise all the instances' cellviews and export the top cell as a new GDS file.

So I wrote SKILL code to do so, and I found that using dbSave to save the cellview of each instance is a little bit slow.

Code as below:

let( (topCV subCV)
    topCV = dbOpenCellViewByType(newLibName topCellName "layout" "maskLayout" "a")
    ;; Open, revise, save, and close the layout of every instance master.
    foreach(inst topCV->instances
        subCV = dbOpenCellViewByType(newLibName inst->cellName "layout" "maskLayout" "a")
        ;;;revise code content
        ;;;...
        ;;;revise code content
        dbSave(subCV)
        dbClose(subCV)
    )
    dbSave(topCV)
    dbClose(topCV)
    ;; Stream the revised top cell out to GDS.
    system(strcat( "strmout -library " newLibName " -topCell " topCellName " -view layout -strmFile " resultFolder "/" topCellName ".gds -techLib " srcLibName " -enableColoring -logFile " topCellName "_strmOut.log" ) )
)

Even if the cell content is not revised, the total dbSave runtime is about 2 minutes when there are ~1000 instances in the top cell. The exported GDS file size is ~2MB.

And dbSave becomes the bottleneck of the code's runtime...

Is there any better way to do such a thing? 
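One thing worth trying -- an untested sketch: if many of the ~1000 instances share the same master, open, revise, and save each unique master only once instead of once per instance. This removes most of the redundant dbSave calls:

;; Untested sketch: process each unique master cell once.
let( (topCV subCV doneCells)
    topCV = dbOpenCellViewByType(newLibName topCellName "layout" "maskLayout" "a")
    foreach(inst topCV->instances
        unless(member(inst->cellName doneCells)
            doneCells = cons(inst->cellName doneCells)
            subCV = dbOpenCellViewByType(newLibName inst->cellName "layout" "maskLayout" "a")
            ;; ... revise cell content here ...
            dbSave(subCV)
            dbClose(subCV)
        )
    )
    dbSave(topCV)
    dbClose(topCV)
)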




v

SKILL/OCEAN: how to get instances of type hisim_hv from simulation results?

Hi there,

I'm running a transient simulation, and I want to get all instances whose model implementation is hisim_hv, because after that I want to process the data and adjust some parameters for this kind of device before dumping the values.

What is the easiest/fastest way to get those instances in SKILL/OCEAN?

What I have done so far:

- save the final OP of the simulation, and then in SKILL:

openResults()
selectResults('tranOp)
report(?type "hisim_hv" ?param "vgs")

The output seems promising, and it looks like I can redirect it to a file and then parse the file.

Is there a simpler way, i.e., without saving the data to a file and parsing it?

Alternatively, given an instance name, is it possible to get the model implementation (hisim_hv, bsim4, etc.)?

Best Regards,

Marcel




v

When Arm meets Intel – Overcoming the Challenges of Merging Architectures on an SoC to Enable Machine Learning

As the stakes for winning server segment market share grow ever higher, an increasing number of companies are seeking to grasp the latest Holy Grail of multi-chip coherence. The approach promises to better enable applications such as machine learning...(read more)




v

Celebrating Five Years of Performance-Optimized Arm-Based SoCs: Now including AMBA5

It’s been quite a long 5-year journey building and deploying Performance Analysis, Verification, and Debug capabilities for Arm-based SoCs. We worked with some of the smartest engineers on the planet. First with the engineers at Arm, with whom we...(read more)




v

Mediatek Deploys Perspec for SoC Verification of Low Power Management (part 3 of 3)

Here we conclude the blog series and highlight the results of Mediatek's use of Cadence Perspec™ System Verifier for their SoC-level verification. In case you missed it, Part 1 of the blog is here, and Part 2 of the blog is here. One of their key...(read more)




v

Perspec System Verifier is #1 in Portable Stimulus in 2017 User Survey

It’s now official: Perspec System Verifier is rated the #1 product in the #1 category of Portable Stimulus, according to the 2017 EDA User Survey published on Deepchip.com. There were 33 user responses in favor of Perspec as the #1 tool, and dr...(read more)




v

Portable Stimulus User Gives Perspec PSS Technology Nearly Perfect Review

It’s always good to hear what real users think of products. Here is a very detailed review (~4000 words) by an Anonymous user, nick named Ant-Man (from the movie). Overall it’s a very strong endorsement of Perspec, and summarize...(read more)