
The Desperate Passion of Ben Foster

I could barely recognize Ben Foster in 3:10 to Yuma, but I was blown away by him just the same, as I was by his star-making turn in Hostage. What makes Foster so special in Yuma?

Yuma contains two of Hollywood’s finest: Russell Crowe and Christian Bale. Bale is excellent; Crowe is a little too relaxed to be cocksure-dangerous. Together, they are unable to provide the powder-keg relationship that the movie demands.

Into this void steps Ben Foster. He plays Charlie Prince, sidekick to Crowe’s dangerous and celebrated outlaw Ben Wade. When Wade is captured, Prince is infuriated. He initiates an effort suffused with desperate passion to rescue his boss.

Playing Prince with a mildly effeminate gait, Foster quickly becomes the movie’s beating heart. What struck me in particular was that Foster was able to balance method acting with just plain good acting. He plays his character organically but isn’t above drawing attention with controlled staginess.

Gradually, Foster’s willingness to control a scene blends into Prince’s. Is the character manipulating his circumstances in the movie, or is it the actor playing a fine hand? Foster is so entertaining that the answer is immaterial.






We Must Reclaim Nationalism From the BJP

This is the 18th installment of The Rationalist, my column for the Times of India.

The man who gave us our national anthem, Rabindranath Tagore, once wrote that nationalism was “a great menace.” He went on to say, “It is the particular thing which for years has been at the bottom of India’s troubles.”

Not just India’s, but the world’s: In his book The Open Society and its Enemies, published in 1945 as Adolf Hitler was defeated, Karl Popper ripped into nationalism, with all its “appeals to our tribal instincts, to passion and to prejudice, and to our nostalgic desire to be relieved from the strain of individual responsibility which it attempts to replace by a collective or group responsibility.”

Nationalism is resurgent today, stomping across the globe hand-in-hand with populism. In India, too, it is tearing us apart. But must nationalism always be a bad thing? A provocative new book by the Israeli thinker Yael Tamir argues otherwise.

In her book Why Nationalism, Tamir makes the following arguments. One, nation-states are here to stay. Two, the state needs the nation to be viable. Three, people need nationalism for the sense of community and belonging it gives them. Four, therefore, we need to build a better nationalism, which brings people together instead of driving them apart.

The first point needs no elaboration. We are a globalised world, but we are also trapped by geography and circumstance. “Only 3.3 percent of the world’s population,” Tamir points out, “lives outside their country of birth.” Nutopia, the borderless state dreamed up by John Lennon and Yoko Ono, is not happening anytime soon.

If the only thing that citizens of a state have in common is geographical circumstance, it is not enough. If the state is a necessary construct, a nation is its necessary justification. “Political institutions crave to form long-term political bonding,” writes Tamir, “and for that matter they must create a community that is neither momentary nor meaningless.” Nationalism, she says, “endows the state with intimate feelings linking the past, the present, and the future.”

More pertinently, Tamir argues, people need nationalism. I am a humanist with a belief in individual rights, but Tamir says that this is not enough. “The term ‘human’ is a far too thin mode of delineation,” she writes. “Individuals need to rely on ‘thick identities’ to make their lives meaningful.” This involves a shared past, a common culture and distinctive values.

Tamir also points out that there is a “strong correlation between social class and political preferences.” The privileged elites can afford to be globalists, but those less well off are inevitably drawn to other narratives that enrich their lives. “Rather than seeing nationalism as the last refuge of the scoundrel,” writes Tamir, “we should start thinking of nationalism as the last hope of the needy.”

Tamir bases her arguments on the West, but they hold in India as well. In a country with so much poverty, is it any wonder that nationalism is on the rise? The cosmopolitan, globe-trotting elites have no harsh daily realities to escape, but how are those less fortunate to find meaning in their lives?

I have one question, though. Why is our nationalism so exclusionary when our nation is so inclusive?

In the nationalism that our ruling party promotes, there are some communities who belong here, and others who don’t. (And even among those who ‘belong’, they exploit divisions.) In their us-vs-them vision of the world, some religions are foreign, some values are foreign, even some culinary traditions are foreign – and therefore frowned upon. But the India I know and love is just the opposite of that.

We embrace influences from all over. Our language, our food, our clothes, our music, our cinema have absorbed so many diverse influences that to pretend they come from a single legitimate source is absurd. (Even the elegant churidar-kurtas our prime minister wears have an Islamic origin.) As an example, take the recent film Gully Boy: its style of music, the clothes its protagonists wear, even the attitudes in the film would have seemed alien to us a few decades ago. And yet, could there be a truer portrait of young India?

This inclusiveness, this joyous khichdi that we are, is what makes our nation a model for the rest of the world. No nation embraces all other nations as ours does. My India celebrates differences, and I do as well. I wear my kurta with jeans, I listen to ghazals, I eat dhansak and kababs, and I dream in the Indian language called English. This is my nationalism.

Those who try to divide us, therefore, are the true anti-nationals. We must reclaim nationalism from them.








Population Is Not a Problem, but Our Greatest Strength

This is the 21st installment of The Rationalist, my column for the Times of India.

When all political parties agree on something, you know you might have a problem. Giriraj Singh, a minister in Narendra Modi’s new cabinet, tweeted this week that our population control law should become a “movement.” This is something that would find bipartisan support – we are taught from school onwards that India’s population is a big problem, and we need to control it.

This is wrong. Contrary to popular belief, our population is not a problem. It is our greatest strength.

The notion that we should worry about a growing population is an intuitive one. The world has limited resources. People keep increasing. Something’s gotta give.

Robert Malthus made just this point in his 1798 book, An Essay on the Principle of Population. He was worried that our population would grow exponentially while resources would grow arithmetically. As more people entered the workforce, wages would fall and goods would become scarce. Calamity was inevitable.

Malthus’s rationale was so influential that this mode of thinking was soon called ‘Malthusian.’ (It is a pejorative today.) A 20th-century follower of his, Harrison Brown, came up with one of my favourite images on this subject, arguing that a growing population would lead to the earth being “covered completely and to a considerable depth with a writhing mass of human beings, much as a dead cow is covered with a pulsating mass of maggots.”

Another Malthusian, Paul Ehrlich, published a book called The Population Bomb in 1968, which began with the stirring lines, “The battle to feed all of humanity is over. In the 1970s hundreds of millions of people will starve to death in spite of any crash programs embarked upon now.” Ehrlich was, as you’d guess, a big supporter of India’s coercive family planning programs. “I don’t see,” he wrote, “how India could possibly feed two hundred million more people by 1980.”

None of these fears have come true. A 2007 study by Nicholas Eberstadt called ‘Too Many People?’ found no correlation between population density and poverty. The greater the density of people, the more you’d expect them to fight for resources – and yet, Monaco, which has 40 times the population density of Bangladesh, is doing well for itself. So is Bahrain, which has three times the population density of India.

Not only does population not cause poverty, it makes us more prosperous. The economist Julian Simon pointed out in a 1981 book that through history, whenever there has been a spurt in population, it has coincided with a spurt in productivity. Take, for example, the period between Malthus’s time and now. There were around a billion people on earth in 1798, and there are around 7.7 billion today. As you read these words, consider that you are better off than the richest person on the planet was then.

Why is this? The answer lies in the title of Simon’s book: The Ultimate Resource. When we speak of resources, we forget that human beings are the finest resource of all. There is no limit to our ingenuity. And we interact with each other in positive-sum ways – every voluntary interaction leaves both people better off, and the amount of value in the world goes up. This is why we want to be part of economic networks that are as large, and as dense, as possible. This is why most people migrate to cities rather than away from them – and why cities are so much richer than towns or villages.

If Malthusians were right, essential commodities like wheat, maize and rice would become relatively scarcer over time, and thus more expensive – but they have actually become much cheaper in real terms. This is thanks to the productivity and creativity of humans, who, in Eberstadt’s words, are “in practice always renewable and in theory entirely inexhaustible.”

The error made by Malthus, Brown and Ehrlich is the same error that our politicians make today, and not just in the context of population: zero-sum thinking. If our population grows and resources stay the same, of course there will be scarcity. But this is never the case. All we need to do to learn this lesson is look at our cities!

This mistaken thinking has had savage humanitarian consequences in India. Think of the millions who went unborn over the decades because of our brutal family planning policies. How many Tendulkars, Rahmans and Satyajit Rays have we lost? Think of the immoral coercion still carried out on poor people across the country. And finally, think of the condescension of our politicians, asserting that people are India’s problem – but always other people, never themselves.

This arrogance is India’s greatest problem, not our people.








Cadence Genus Synthesis Solution – the Next Generation of RTL Synthesis

Physical synthesis has been around in various forms for many years. The basic idea is to bring some awareness of physical layout into synthesis. This week (June 3, 2015) Cadence is rolling out the Genus™ Synthesis Solution, a next-generation RTL synthesis tool that takes physical awareness in some new directions.

Here are four important things to know about Genus technology:

  • A massively parallel architecture improves turnaround time by up to 5X while maintaining quality of results
  • The Genus solution synthesizes up to 10M+ instances flat without impacting power, performance and area (PPA)
  • The Genus solution provides tight correlation with the Innovus Implementation System, using the same placement and routing algorithms
  • Globally focused PPA optimization saves up to 20% datapath area and power

Compared to previous-generation products such as the Cadence Encounter RTL Compiler Advanced Physical Option, the Genus solution approaches physical synthesis in a different way. The Encounter solution applied physical optimization “at the tail end of synthesis,” said David Stratman, senior principal product manager at Cadence. “We were doing a final incremental push, but we could only do so much, since we had locked in a lot of the earlier steps from a logical-only synthesis perspective.”

Genus Synthesis Solution supports the physical synthesis features in the previous Encounter solution, but it also brings the full physical scope upstream to RTL logic designers. “It’s going to enable the unit-level RTL designer to gain the benefits of physical synthesis without having to understand it,” Stratman said. As an example, users can apply generic (unmapped) placement at the earliest stages of synthesis, using a lightweight version of the Innovus placement engine. The bottom line: “Genus is a full solution where every step of synthesis can be done physically.”

Getting Massively Parallel

If you bring physical data into synthesis, you need a way to improve capacity and runtimes, especially with today’s gigantic advanced-node SoCs. That’s why a massively parallel architecture is the cornerstone of the Genus solution. In this way, the Genus solution is following in the footsteps of the Innovus Implementation System, which also provides a massively parallel architecture.

Both the Innovus and Genus solutions can handle blocks of 10M instances flat. Given that SoCs today may have up to 100M instances, and often up to 50-100 top-level blocks, this is an important capability. Many tools today will only handle blocks of 1M instances. As a result, design teams often have to constrain block sizes.

Genus technology offers timing-driven, multi-level design partitioning across multiple threads and machines. It enables a near-linear runtime scaling without impacting PPA. According to Stratman, the Genus solution will scale well beyond 64 CPUs for a large design, with a “sweet spot” around 8-20 CPUs for today’s typical block sizes. Runs that used to take days, he noted, can now be done in hours.

As shown below, Genus technology leverages parallelism at three levels. The Genus solution can distribute design partitions to multiple threads or CPUs, and also supports local algorithm-level multithreading on each machine with shared memory. An adaptive scheduler ensures the best use of the available CPUs.


Fig. 1 – Genus Synthesis Solution provides three levels of parallelism

With its massive parallelism, Stratman said, Genus technology can obtain production-level quality of results (QoR) in runtimes typically seen in “prototype-level” synthesis runs. The “secret sauce,” he said, is in the partitioning. Cadence has found a way to generate partitions in a way that “slices the design more intelligently, and takes advantage of the Genus database to merge partitions without losing timing, power, or area,” Stratman said.

Playing in the Sandbox

In the Genus Synthesis Solution, a process called “sandboxing” allows any subset or partition of a design to be extracted along with full timing and a physical context. Optimization algorithms will treat a sandbox as a complete design.

The “Clipper” flow clips out or extracts the context of the larger SoC blocks. “It’s kind of a skeleton floorplan but it has all the timing information,” Stratman said. These extracted contexts include all the critical physical information to make the right RTL synthesis choices at the unit level. This information is used to streamline the handoffs between unit-level RTL designers, integration engineers, and implementation engineers. It’s a way for logic designers to gain some physical knowledge without having to be a physical synthesis expert, or without having to run a full top-level synthesis.

Fig. 2 – Clipper flow provides context for unit-level blocks

Correlation with Innovus Implementation System

Although Genus technology can work with third-party IC implementation systems, it shares algorithms and engines with the Innovus Implementation System, as well as a common user interface. As shown below, both the Genus and Innovus solutions use table-based Quantus QRC parasitic extraction, effective current source model (ECSM) and composite current source (CCS) delay calculations, and a unified global routing engine. Timing and wire length are claimed to correlate to within 5%.

Fig. 3 – Genus Synthesis Solution offers tight correlation with Innovus Implementation System

Genus technology doesn’t model everything to the same level of accuracy as the Innovus solution, however. “We chose to be lighter weight and more nimble to get expected runtimes,” Stratman said. A tight correlation is possible because the Genus and Innovus solutions use a similar code base. This correlation will be tighter than that between Encounter RTL Compiler Advanced Physical Option and the Encounter Digital Implementation System today.

Genus Synthesis Solution uses a new Hybrid Global Router that provides the ability to resolve congestion and construct layer-aware, timing-driven wire topologies. This accelerates analysis and debug, and reduces iterations. Users can avoid blockages and see a full Manhattan route as opposed to “flight lines.” Layer awareness is particularly important, given the large RC variations within the metal stack at advanced process nodes.

A version of the Innovus GigaPlace engine is available within the Genus solution. Here, users can do an RTL-level generic gate placement early in the synthesis flow (“generic gate” means there is no mapping into standard cell libraries, but there’s still an area estimate). This helps designers understand PPA tradeoffs earlier.

While users can go all the way to a design-rule “legal” placement with Genus Synthesis Solution, this isn’t generally recommended. “You can do a placement and use the same algorithms as GigaPlace and get a nice correlation without all the runtimes and additional steps of doing a fully legal placement,” Stratman said.

So where does Genus technology end and Innovus technology begin? That’s up to the user. You could use the Genus solution for logical synthesis and run all physical implementation in the Innovus system. If you run physical synthesis within the Genus solution, there’s more work earlier in the flow, but you get better insights into downstream problems and reduce iterations.

“Physical synthesis should be no more than 2X [runtime] of logic synthesis,” Stratman said. “All of the runtime that moves up should be shaved off of the place-and-route stages, because now you can do lightweight incremental optimization and incremental placement. The overall flow should be runtime neutral or better.”

Be Globally Aware

Finally, Genus Synthesis Solution offers a globally focused early PPA optimization across the whole datapath, delivering up to a 20% area reduction in the datapath. Stratman noted that this capability is a follow-on to an RTL Compiler (RCP) feature called “globally focused mapping” that can determine the best cells to use in a library. What’s new with the Genus solution is that this concept has been applied at the arithmetic level.

For example, there are many ways to configure a multiplier – you may want to prioritize speed, power, or size. In the past, Stratman noted, synthesis tools have not been very good at globally optimizing the architecture selection for PPA optimization. “We can [now] find the most efficient global datapath implementation for a given region,” he said.

For further information about the Cadence Genus Synthesis Solution, including a datasheet and technical product brief, see this landing page.

Richard Goering

Related Blog Posts

Designer View – RTL Synthesis Success Strategies at 28nm and Below

Front-End Design Summit: The Future of RTL Synthesis and Design for Test

Physically-Aware Synthesis Helps Design a New Computer Architecture

 





Cadence JasperGold Brings Formal Verification into Mainstream IC Verification Flows

Formal verification is a complex technology that has traditionally required experts or specialized teams who stood apart from the IC design and verification flow. Taking a different approach, a new release of the Cadence JasperGold formal verification platform (June 8, 2015) provides formal techniques that complement simulation, emulation, and debugging in the form of “Apps” or under-the-hood solutions that any design or verification engineer can use.

JasperGold was the initial (in fact, only) product of Jasper Design Automation, acquired by Cadence in 2014. Jasper pioneered the formal Apps concept several years ago: where the company had previously sold JasperGold as a one-size-fits-all solution, it began selling semi-automated JasperGold Apps that solve specific problems using formal analysis technology.

The new release is the next generation of JasperGold and will be available later this month. It includes three major improvements over previous Cadence and Jasper formal analysis offerings:

  • A unified Cadence Incisive and JasperGold formal verification platform delivers up to 15X performance gain over previous solutions.
  • JasperGold is integrated into the Cadence System Development Suite, where it provides formal-assisted simulation, emulation, and coverage. As a result, System Development Suite users can find bugs three months earlier than with existing verification methods.
  • JasperGold’s formal analysis engines are integrated with the recently announced Indago debug platform, automating root cause analysis and on-the-fly, what-if exploration.

Best of Both Formal Verification Worlds

Taking advantage of technologies from both Cadence and Jasper, the new JasperGold represents a “best of both worlds” solution, according to Pete Hardee, product management director at Cadence. This solution combines technologies from the Cadence Incisive Enterprise Verifier and Incisive Formal Verifier with JasperGold formal analysis engines.

For example, to ease migration from Incisive formal tools, Cadence has integrated an Incisive common front end into the JasperGold apps platform. Jasper formal engines can run within the Incisive run-time environment. Cadence has also brought some selected Incisive formal engines into JasperGold.

As shown to the right, the JasperGold platform supports both the existing JasperGold front-end parser and the Incisive front-end parser. Hardee observed that this dual parser arrangement simplifies migration from Incisive formal tools to JasperGold, and provides a common compilation environment for people who want to use JasperGold with Incisive simulation. Further, the common run-time environment enables formal-assisted simulation.

The combination of JasperGold engines and Incisive engines supports two use models for formal analysis: formal proofs and bug hunting. In the first case, formal engines try all combinations of inputs without a testbench. The test is driven by formal properties written in languages such as SVA (SystemVerilog assertions) or PSL (Property Specification Language). Completion of a property is exhaustive proof that something can or cannot happen. This provides a “much stronger result” than simulation, Hardee said.
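As a rough sketch of the kind of property a formal proof might target (the module, signal names, and timing below are invented for illustration, not taken from the article), a simple SVA assertion could look like this:

  // Hypothetical example: every request must be granted within four cycles,
  // and a grant must never appear without an active request.
  module arb_props (
    input logic clk,
    input logic rst_n,
    input logic req,
    input logic gnt
  );

    // A formal engine explores all legal input sequences, so a completed
    // proof of this property is exhaustive rather than limited to the
    // stimulus a testbench happens to generate.
    a_grant_within_4: assert property (
      @(posedge clk) disable iff (!rst_n) req |-> ##[1:4] gnt
    );

    a_no_spurious_gnt: assert property (
      @(posedge clk) disable iff (!rst_n) gnt |-> req
    );

  endmodule

If a property cannot be proven, the tool returns a counterexample trace showing how it is violated, which is the starting point for debug.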

He also noted that formal analysis doesn’t necessarily require that all properties are completed. “You can get a lot of value even if proofs don’t complete,” he said. “Proofs that run deep enough to find bugs are just fine.”

Bug hunting involves random searches, and JasperGold bug hunting engines are very fast. However, these engines don’t necessarily use the most optimal path to get to a bug. So, Cadence engineers brought a constraint solver from Incisive and integrated it into JasperGold. “It looks at the constraints in the environment and gives you a better starting point,” Hardee said. “It takes more up-front time, but once you’ve done that the bug hunting engines can actually take a shorter path and find a bug a lot quicker.”

Another new JasperGold capability from the Incisive Formal Verifier is called “search pointing.” This uses simulation to penetrate deeply into the state space, and then kicks off a random formal search from a given point that you’ve reached in simulation. This technique makes it possible to find bugs that are very deep in the design.

It is probably clear by now that a number of different formal “engines” may be required to solve a given verification problem. Traditionally, a formal tool (or user) will farm a problem out to many engines and see which one works best. To put more intelligence into that process, Cadence launched the Trident “multi-cooperating engine” a couple of years ago. That has now been brought into JasperGold, where it helps “orchestrate” the engines according to what will work best for the design. This is a big part of the reason for the 15X speedup noted earlier in this post.

Integration with System Development Suite

The Cadence System Development Suite is an integrated set of hardware/software development and verification engines, including virtual prototyping, Incisive simulation, emulation, and FPGA-based prototyping. As shown below, JasperGold technology is integrated into the System Development Suite in several places, including formal-assisted debug, formal-assisted verification closure, formal-assisted simulation, formal-assisted emulation, and the Incisive vManager verification planning tool.

Formal-assisted emulation sounds like it should be easy, especially since Cadence has both accelerated verification IP (VIP) and assertion-based VIP. However, there’s a complication. Accelerated VIP represents less verification content than simulation VIP, because you have to remove many checkers to get VIP to compile on a Palladium emulator. That’s because the Palladium requires synthesizable code.

What you can do, however, is use assertion-based VIP in “snoop mode” as shown below. Assertion-based VIP coded in synthesizable SystemVerilog can replace the missing checkers in accelerated VIP. In this diagram, everything in the green box is running in the emulator and is thus completely accelerated.
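As a minimal sketch of such a checker (the handshake signals below are hypothetical and not part of any Cadence VIP), a snoop-mode monitor might look like this: it only observes the bus and, because it sticks to a simple synthesizable assertion and drives nothing, it can sit alongside the accelerated VIP.

  // Hypothetical synthesizable snoop-mode checker: it passively observes a
  // valid/ready handshake and checks that 'valid' is held until 'ready'.
  module handshake_snoop (
    input logic clk,
    input logic rst_n,
    input logic valid,
    input logic ready
  );

    a_hold_valid_until_ready: assert property (
      @(posedge clk) disable iff (!rst_n) (valid && !ready) |=> valid
    );

  endmodule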

 

Another example of formal-assisted emulation has to do with deep traces. As Hardee noted, emulation will produce very long traces, and it can be very difficult to find a point of interest in the trace and determine what caused an error. With formal-assisted emulation, users can find interesting events within the traces and create properties that mark them, so a debugger can find these events and trace back to the root cause.

Formal-assisted verification closure is available with the new JasperGold release. This is possible because you can use the vManager product to determine which tasks were completed by formal engines. It’s important information for verification managers who are not used to formal tools, Hardee noted.

Another aspect of formal-assisted verification closure is the JasperGold Unreachability Analysis (UNR) App, which can save simulation users weeks of time and effort. This App takes in the simulation coverage database and RTL, and automatically generates properties to explore coverage holes and determine if holes are reachable or unreachable. The App then generates an unreachable coverage point database. If the unreachable code does something useful, there’s a bug in the design or the testbench; if not, you don’t have to worry about it. The diagram below shows how it works.
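As a toy illustration (the code below is invented, not from the announcement), here is the kind of coverage hole the UNR App could classify as unreachable:

  // Hypothetical example of an unreachable coverage point: 'state' never takes
  // the value 2'b11, so the default branch of the case statement can never be
  // exercised in simulation. A formal unreachability proof lets the team waive
  // this hole; if the branch were meant to be used, its unreachability would
  // instead point to a design or testbench bug.
  module fsm_example (
    input  logic       clk,
    input  logic       rst_n,
    input  logic       go,
    output logic [1:0] state
  );

    always_ff @(posedge clk or negedge rst_n) begin
      if (!rst_n)
        state <= 2'b00;
      else
        case (state)
          2'b00:   state <= go ? 2'b01 : 2'b00;
          2'b01:   state <= 2'b10;
          2'b10:   state <= 2'b00;
          default: state <= 2'b00;  // unreachable: state is never 2'b11
        endcase
    end

  endmodule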

Formal-Assisted Debugging

The third major component of the JasperGold announcement is the integration of formal analysis into the Indago debugging platform. As shown below, this platform has several apps, including the Indago Debug Analyzer. Two formal debug capabilities from the Jasper Visualize environment have been added to the Indago Debug Analyzer:

  • Highlight Relevant Logic: This highlights the “cone of influence,” or the logic that is involved in reaching a given point
  • Why: This button highlights the immediate causes for a given event, and allows users to trace backwards in time

 

More formal capabilities will come with the Indago Advanced Debug Analyzer app, scheduled for release towards the end of 2015. This includes Quiet Trace, a Jasper capability that reduces trace activity to transactions relevant to an event. Also, a what-if analysis allows on-the-fly trace editing and recalculation to explore effects and sensitivities, without having to re-compile and re-execute the simulation.

Finally, Cadence has a Superlint flow that is now fully integrated with the JasperGold Visualize debugger. This two-tiered flow includes a basic lint capability as well as automated formal analysis based on the JasperGold Structural Property Synthesis app. “This could be a very good entry point for designers to start using formal,” Hardee said.

“Formal is taking off,” Hardee concluded. “People are no longer talking about return on investment for formal—they have established that. Now they’re supporting a proliferation of formal in their companies such that a wider set of people experience the benefit from that proven return on investment.”

Further information is available at the JasperGold Formal Verification Platform (Apps) page.

Richard Goering

Related Blog Posts

JUG Keynote—How Jasper Formal Verification Technology Fits into the Cadence Flow

Why Cadence Bought Jasper—A New Era in Formal Analysis

Q&A: An R&D Perspective on Formal Verification—Past, Present and Future





Gary Smith at DAC 2015: How EDA Can Expand Into New Directions

First, the good news. The EDA industry will grow from $6.2 billion in 2015 to $9.0 billion in 2019, according to Gary Smith, chief analyst at Gary Smith EDA. Year-to-year growth rates will range from +4% to +11.2%.

But in his annual presentation on the eve of the Design Automation Conference (DAC 2015), Smith noted that Wall Street is unimpressed. “The people I talk to want long-term steady growth, no sharp up-turns, no sharp downturns,” Smith said. “To the rest of Wall Street, we’re boring.”

Smith spent the rest of his talk noting how EDA can be a lot less boring and, potentially, a whole lot bigger. For starters, what if we add semiconductor IP to EDA revenues? Now we’re looking at $12.2 billion in revenue by 2019, Smith said. (He acknowledged, however, that the IP market itself is going to take a “dip” due to the move towards platform-based IP and away from conventional piecemeal IP).

This still is not enough to get Wall Street’s attention. Another possibility is to bring embedded software development into the EDA industry. This is not a huge market – about $2.6 billion today – but it is an “easy growth market for us,” according to Smith.

Chasing the Big Bucks

But the “big bucks” are in mechanical CAD (MCAD), Smith said. In the past the MCAD market has always been bigger than EDA, but now EDA is catching up. The MCAD market is about $6.6 billion now. Synopsys and Cadence are larger than PTC and Siemens, two of the main players in MCAD.

There may be some good acquisition possibilities coming up for EDA vendors, Smith said – and if we don’t buy MCAD companies, they might buy EDA companies. Consider, for example, that ANSYS bought Apache and Dassault bought Synchronicity. (Note: Siemens PLM Software is a first-time exhibitor at DAC 2015).

What about other domains? Smith said that EDA companies could conceivably move into optical design, applications development software, biomedical design, and chemical design. The last of these is probably the most tenuous; Smith noted that EDA vendors have yet to look into chemical design.

Applications development software is the biggest market on the above list, but that means competing with Microsoft, IBM, and Oracle. “You’re in with the big boys – is that a good idea?” Smith asked.

Perhaps there’s an opening for a “big play” for an MCAD provider. Smith noted that mechanical vendors are focusing on product data management (PDM). This “is really the IT of design,” Smith said. “They have a lot of hope that the IoT [Internet of things] market is going to give them an opportunity to capture the software that goes from the ground to the cloud. Maybe we can let them have PDM and see if we can take the tool market away from them, or acquire it away from them.”

In conclusion, Smith asked, should the EDA industry accelerate its growth? “The mechanical vendors have already shown interest in acquiring EDA vendors,” he said. “We may not have a choice.”

Richard Goering

NOTE: Catch our live blog from DAC 2015, beginning Monday morning, June 8! Click here

 

 

 





DAC 2015 Accellera Panel: Why Standards are Needed for Internet of Things (IoT)

Design and verification standards are critical if we want to get a new generation of Internet of Things (IoT) devices into the market, according to panelists at an Accellera Systems Initiative breakfast at the Design Automation Conference (DAC 2015) June 9. However, IoT devices for different vertical markets pose very different challenges and requirements, making the standards picture extremely complicated.

The panel was titled “Design and Verification Standards in the Era of IoT.” It was moderated by industry editor John Blyler, CEO of JB Systems Media and Technology. Panelists were as follows, shown left to right in the photo below:

  • Lu Dai, director of engineering, Qualcomm
  • Wael William Diab, senior director for strategy marketing, industry development and standardization, Huawei
  • Chris Rowen, CTO, IP Group, Cadence Design Systems, Inc.

 

In opening remarks, Blyler recalled a conversation from the recent IEEE International Microwave Symposium in which a panelist pointed to the networking and application layers as the key problem areas for RF and wireless standardization. Similarly, in the IoT space, we need to look “higher up” at the systems level and consider both software and hardware development, Blyler said.

Rowen helped set some context for the discussion by noting three important points about IoT:

  • IoT is not a product segment. Vertical product segments such as automotive, medical devices, and home automation all have very different characteristics.
  • IoT “devices” are components within a hierarchy of systems that includes sensors, applications, user interface, gateway application (such as cell phone), and finally the cloud, where all data is aggregated.
  • A bifurcation is taking place in design. We are going from extreme scale SoCs to “extreme fit” SoCs that are specialized, low energy, and very low cost.

Here are some of the questions and answers that were addressed during the panel discussion.

Q: The claim was recently made that given the level of interaction between sensors and gateways, 50X more verification nodes would have to be checked for IoT. What standards need to be enhanced or changed to accomplish that?

Rowen: That’s a huge number of design dimensions, and the way you attack a problem of that scale is by modularization. You define areas that are protected and encapsulated by standards, and you prove that individual elements will be compliant with that interface. We will see that many interesting problems will be in the software layers.

Q: Why is standardization so important for IoT?

Dai: A company that is trying to make a lot of chips has to deal with a variety of standards. If you have to deal with hundreds of standards, it’s a big bottleneck for bringing your products to market. If you have good standardization within the development process of the IC, that helps time to market.

When I first joined Qualcomm a few years ago, there was no internal verification methodology. When we had a new hire, it took months to ramp up on our internal methodology to become effective. Then came UVM [Universal Verification Methodology], and as UVM became standard, we reduced our ramp-up time tremendously. We’ve seen good engineers ramp up within days.

Diab: When we start to look at standards, we have to do a better job of understanding how they’re all going to play with each other. I don’t think one set of standards can solve the IoT problem. Some standards can grow vertically in markets like industrial, and other standards are getting more horizontal. Security is very important and is probably one thing that goes horizontally.

Requirements for verticals may be different, but processing capability, latency, bandwidth, and messaging capability are common [horizontal] concerns. I think a lot of standards organizations this year will work on horizontal slices [of IoT].

Q: IoT interoperability is important. Any suggestions for getting that done and moving forward?

Rowen: The interoperability problem is that many of these [IoT] devices are wireless. Wireless is interesting because it is really hard – it’s not like a USB plug. Wireless lacks the infrastructure that exists today around wired standards. If we do things in a heavily wireless way, there will be major barriers to overcome.

Dai: There are different standards for 4G LTE technology for different [geographical] markets. We have to make a chip that can work for 20 or 30 wireless technologies, and the cost for that is tremendous. The U.S., Europe, and China all have different tweaks. A good standard that works across the globe would reduce the cost a lot.

Q: If we’re talking about the need to define requirements, a good example to look at is power. Certainly you have UPF [Unified Power Format] for the chip, board, and module.

Rowen: There is certainly a big role for standards about power management. But there is also a domain in which we’re woefully under-equipped, and that is the ability to accurately model the different power usage scenarios at the applications level. Too often power devolves into something that runs over thousands of cycles to confirm that you can switch between power management levels successfully. That’s important, but it tells you very little about how much power your system is going to dissipate.

Dai: There are products that claim to be UPF compliant, but my biggest problem with my most recent chip was still with UPF. These tools are not necessarily 100% UPF compliant.

One other concern I have is that I can get my Verilog code to pass on one simulator, and then it fails on another. Even though we have a lot of tools, there is no certification process for a language standard.

Q: When we create a standard, does there need to be a companion compliance test?

Rowen: I think compliance is important. Compliance is being able to prove that you followed what you said you would follow. It also plays into functional safety requirements, where you need to prove you adhered to the flow.

Dai: When we [Qualcomm] sell our 4G chips, we have to go through a lot of certifications. It’s often a differentiating factor.

Q: For IoT you need power management and verification that includes analog. Comments?

Rowen: Small, cheap sensor nodes tend to be very analog-rich, lower scale in terms of digital content, and have lots of software. Part of understanding what’s different about standardization is built on understanding what’s different about the design process, and what does it mean to have a software-rich and analog-rich world.

Dai: Analog is important in this era of IoT. Analog needs to come into the standards community.

Richard Goering

Cadence Blog Posts About DAC 2015

Gary Smith at DAC 2015: How EDA Can Expand Into New Directions

DAC 2015: Google Smart Contact Lens Project Stretches Limits of IC Design

DAC 2015: Lip-Bu Tan, Cadence CEO, Sees Profound Changes in Semiconductors and EDA

DAC 2015: “Level of Compute in Vision Processing Extraordinary” – Chris Rowen

DAC 2015: Can We Build a Virtual Silicon Valley?

DAC 2015: Cadence Vision-Design Presentation Wins Best Paper Honors

 

 

 





DAC 2015: How Academia and Industry Collaboration Can Revitalize EDA

Let’s face it – the EDA industry needs new people and new ideas. One of the best places to find both is academia, and a presentation at the Cadence Theater at the recent Design Automation Conference (DAC 2015) described collaboration models that are working today.

The presentation was titled “Industry/Academia Engagement Models – From PhD Contests to R&D Collaborations.” It included these speakers, shown from left to right in the photo below:

  • Prof. Xin Li, Electrical and Computer Engineering, Carnegie-Mellon University (CMU)
  • Chuck Alpert, Senior Software Architect, Cadence
  • Prof. Laleh Behjat, Department of Electrical and Computer Engineering, University of Calgary

 

Alpert, who was filling in for Zhuo Li, Software Architect at Cadence, was the vice chair of DAC 2015 and will be the general chair of DAC 2016 in Austin, Texas. “My team at Cadence really likes to collaborate with universities,” he said. “We’re a big proponent of education because we really need the best and brightest students in our industry.”

Contests Boost EDA Research

One way that Cadence collaborates with academia is participation in contests. “It’s a great way to formulate problems to academia,” Alpert said. “We can have the universities work on these problems and get some strategic direction.”

For example, Cadence has been involved with the annual CAD contest at the International Conference on Computer-Aided Design (ICCAD) since the contest was launched in 2012. This is the largest worldwide EDA R&D contest, and it is sponsored by the IEEE Council on EDA (CEDA) and the Taiwan Ministry of Education. Its goals are to boost EDA research in advanced real-world problems and to foster industry-academia collaboration.

Contestants can participate in one or more problems in the three areas of system design, logic synthesis and verification, and physical design. The 2015 contest has attracted 112 teams from 12 regions. Cadence contributes one problem per year in the logic synthesis area. Zhuo Li was the 2012 co-chair and the 2013 chair. The awards will be given at ICCAD in November 2015.

Another step that Cadence has taken, Alpert said, is to “hire lots of interns.” His own team has four interns at the moment. One advantage to interning at Cadence, he said, is that students get to see real-world designs and understand how the tools work. “It helps you drive your research in a more practical and useful direction,” he said.

The Cadence Academic Network co-sponsors the ACM SIGDA PhD Forum at DAC, and Xin Li and Zhuo Li are on the organizing committee. This event is a poster session for PhD students to present and discuss their dissertation research with people in the EDA community. This year’s forum was “packed,” Alpert said, and it’s clear that the event needs a bigger room.

Finally, Alpert noted, Cadence researchers write and publish technical papers at DAC and other conferences, and Cadence people serve on the DAC technical program committee. “We try to be involved with the academic community on a regular basis,” Alpert said. “We want the best and the brightest people to go into EDA because there is still so much innovation that’s needed. It’s a really cool place to be.”

Research Collaboration Exposes Failure Rates

Xin Li presented an example of a successful research collaboration between CMU and Cadence. The challenge was to find a better way to estimate potential failure rates in memory. As noted in a previous blog post, PhD student Shupeng Sun met this challenge with a new statistical methodology that won a Best Poster award at the ACM SIGDA PhD Forum at DAC 2014.

The new methodology is called Scaled-Sigma Sampling (SSS). It calculates the failure rate and accounts for variability in the manufacturing process while only requiring a few hundred, or a few thousand, sample circuit blocks. Previously, millions of samples were required for an accurate validation of a new design, and each sample could take minutes or hours to simulate. It could take a few weeks or months to run one validation.

The SSS methodology greatly reduces simulation times. It makes it possible, Li noted, to run simulations overnight and see the results in the morning.

Li shared his secret for success in collaborations. “I want to emphasize that before the collaboration, you have to understand the goal. If you don’t have a clear goal, don’t collaborate. Once you define the goal, stick to it and make it happen.”

Contest Provides Learning Experience

Last year Laleh Behjat handed two of her new PhD students a challenge. “I told them there is an ISPD [International Symposium on Physical Design] contest on placement, and I expect you to participate and I expect you to win. Not knowing anything about placement, I don’t think they realized what I was asking them.”

The 2015 contest was called the Blockage-Aware Detailed Routing-Driven Placement Contest. Results were announced at the end of March at ISPD. And the University of Calgary team, despite its lack of placement experience, took second place.

Such contests provide a good learning tool, according to Behjat. Graduate students in EDA, she said, “have to be good programmers. They have to work in teams and be collaborative, be able to innovate, and solve the hardest problems I have seen in engineering and science. And they have to think outside the box.” A contest can bring out all these attributes, she said.

Further, Behjat noted, contest participants had access to benchmarks and to a placement tool. They didn’t have to write tools to find out if their results were good. Industry sponsors, meanwhile, got access to good students and new approaches for solving problems.

“You can see Cadence putting a big amount of time, effort and money to get students here and get them excited about doing contests,” she said. She advised students in the theater audience to “talk to people in the Cadence booth and see if you can have more ideas for collaboration.”

Richard Goering

Related Blog Posts

EDA Plus Academia: A Perfect Game, Set and Match

Cadence Aims to Strengthen Academic Partnerships

BSIM-CMG FinFET Model – How Academia and Industry Empowered the Next Transistor





Verilog Code to Custom IC Layout generation

Hello everyone,

I am Vinay, and I am currently developing some digital circuits for my master's thesis chip design at the University at Buffalo.

I am fairly new to Verilog and I don't seem to follow some of the things others find very easy.

Following are the things that I want to do but have no clue how to:

1. Develop certain arithmetic functionality in Verilog

2. Generate a netlist from the Verilog code

3. Feed the netlist file to Cadence Encounter to generate the digital circuits' layout for my chip

I can use Cadence Virtuoso and Encounter for this but I don't know the exact procedure to get this done.

Could someone please describe the detailed process for doing the things mentioned above?

Thank you.





Quantus Qrc Extraction of a block

I have completed the physical design of a block in Innovus. I want to extract the RC of that block using Quantus. It would be very helpful if you could give a step-by-step procedure and the commands to run Quantus to extract the RC of that block.





Interaction between Innovus and Virtuoso through OA database

Hello,

I created a floorplan view in Virtuoso (it contains pins and blockages). I am trying to run PnR in Innovus on the floorplan created in Virtuoso. I used set vars(oa_fp) "Library_name cell_name view_name" to read the view from Virtuoso. I am able to see the pins in Innovus but not the blockages. How do I get the blockages created in Virtuoso into Innovus?

Regards,
Amuu 





Post-synthesis Simulation Failing when lp_insert_clock_gating true

When I enable clock gating in my synthesis flow (using Genus 18.15), my simulation (using Xcelium) on the post-synthesis netlist fails. The simulation succeeds pre-synthesis and also if I remove clock-gating in the design. I use set_db  lp_insert_clock_gating true to enable clock gating during synthesis. I printed out some of the signals from the netlist and can see where it fails (it incorrectly writes a register). However, I am not sure how to solve this issue or what I should be looking for. Any help would be appreciated. Thanks.





Force cell equivalence between same-footprint and same-functionality hard-macros in Conformal LEC

For a netlist vs. netlist LEC flow we have to solve the following problem:

- in the RTL code we replicate a large array of N x M identical hard macros; let's call them MACRO_A

- MACRO_A is pre-assembled in Innovus and contains digital parts and analog parts (bottom-up hierarchical flow)

- at top-level (full-chip) we instantiate this array of all-identical macros

- in the top-level place-and-route flow we perform ecoChangeCell to remaster the top row of this array with MACRO_B

- MACRO_B is just a copy of the original MACRO_A cell with the same pin positions, the same internal digital functionality, and the same digital layout; the only slight differences are in one analog block inside the macro

- MACRO_A and MACRO_B have the same .lib file generated with the do_extract_model command at the end of the Innovus flow; they differ only in the name of the macro

- when performing post-synthesis netlist vs post-place-and-route we load .lib files of both macros in Conformal LEC

- the LEC flow fails because Conformal LEC sees only MACRO_A instantiated in the post-synthesis netlist, and both MACRO_A and MACRO_B in the post-place-and-route netlist

Since both the digital functionality and the standard-cell layout are the same between MACRO_A and MACRO_B, we don't want to track this difference at the RTL stage; we just want to perform this ECO change in place-and-route and force Conformal to assume equivalence between MACRO_A and MACRO_B.

Basically what I'm searching for is something similar to the add_instance_equivalences Conformal command, but one that works between Golden and Revised designs on cell primitives/black-boxes.

Is this flow supported?

Thanks in advance

Luca





How to dump waveform, fsdb in SimVision?

As per the title:

How do I dump a waveform in FSDB format in SimVision?
(Simulation Analysis Environment  SimVision(64) 18.09-s001)
Please help.

Thanks.





Simvision

Unable to open the Source Browser. It goes into an infinite loop of loading, with the hourglass displayed and the message "loading snapshot".

The design was compiled with "15.20.s028", and that is the same version of SimVision I am using to open the code.

The waveform viewer is up and so is the Design Browser. I can take signals from the Design Browser and view them in the waveform viewer, but the Source Browser hangs.

What should I be looking at to resolve this situation? Many of us at the company are suffering from it. It is a colossal waste of time.





Have You Tried the New Transmission Line Library (rfTlineLib)?

Happy New Year! Have you tried the new Transmission Line Library (rfTlineLib) yet? In case you missed it, rfTlineLib was introduced in IC 6.1.6 ISR1 plus MMSIM 12.1.1 -or- MMSIM13.1. You may wonder....Why should I use the new rfTlineLib ? Well...(read more)





New Memory Estimator Helps Determine Amount of Memory Required for Large Harmonic Balance Simulations

Hi Folks, A question that I've often received from designers, "Is there a method to determine the amount of memory required before I submit a job? I use distributed processing and need to provide an estimate before submitting jobs." The answer...(read more)





Broadband SPICE -- New Tool for S-Parameter Simulation in Spectre RF

Hi All, Here's another great new feature that I've found very helpful... Broadband SPICE is a new tool for S-parameter simulation in Spectre RF. In the MMSIM13.1.1 ( MMSIM13.1 USR1) release (now available on http://downloads.cadence.com), a...(read more)





See Cadence RF Technologies at IEEE International Microwave Symposium 2014

RF Enthusiasts, Come connect with Cadence RF experts and discover the latest advances in Cadence RF technologies, including Spectre RF at the IEEE International Microwave Symposium (IMS) 2014. This year, IMS will be held in Tampa, Florida. Cadence...(read more)





Distortion Summary in New CDNLive YouTube Video and at IEEE IMS2014 Next Week!

Hi Folks, Check out this great new video on YouTube: CDNLive SV 2014: PMC Improves Visibility and Performance with Spectre APS In this video from CDNLive Silicon Valley 2014, Jurgen Hissen, principal engineer, MSCAD, at PMC, discusses an aggressive...(read more)





Noise Simulation in Spectre RF Using Improved Pnoise/Hbnoise and Direct Plot Form Options

Did you check out the new Pnoise and Hbnoise Choosing Analyses forms in the MMSIM 15.1 and IC6.1.7 /ICADV12.2 releases? These forms have been significantly improved and simplified. The Direct Plot Form has also been enhanced and is much easier to use....(read more)





How to get test name from test session object?

Hi,

I have a test session object that I am getting like this:

maeTstSession=maeGetTestSession(test ?session session)

Is it possible to get the test name from this object? I am asking because this object is passed through several levels of functions, and I don't want to pass an additional argument with the test name.





post-execution on an interrupted SKILL routine

I have a SKILL script that executes the callback of a menu item, and depends on first redefining an environment variable. 

When a user interrupts the script with Ctrl-C, the script cannot finish, so it never sets the environment variable back to its default value.

How can I write the script in a way that handles a user interrupt to reset the changed environment variable after the interrupt?





VIVA Calculator function to get the all outputs and apply a procedure to all of them

Hi,

I am running simulations in ADE XL and need a custom function for VIVA to apply the same procedure to all signals saved in the output. For instance, I have clock nets and I want to get all of them and look at the duty cycle, edge rate, etc.

It is a little more involved than the above since I have some regex and setof filtering to do before processing, but if I can get all signals for the current history, I can post-process them later.

In OCEAN, I just do outputs() and get all saved signals, but I was not able to do this in the VIVA calculator due to the difficulties in getting the current history and test name and opening the results directory.

thanks

yayla

Version Info:

ICADV12.3 64b 500.21

spectre -W =>

Tool 'cadenceMMSIM' Current project version '16.10.479'
sub-version  16.1.0.479.isr9





Displaying contents of a modeless dialog box during execution of a SKILL script

I have a modeless informational dialog box defined at the beginning of a SKILL script, but its contents don't display until the script finishes.

How do you get a modeless dialog box contents to display while a SKILL script is running?

procedure(myproc()

   prog((myvars)

     hiDisplayAppDBox()    ; opens blank dialog box - no dboxText contents show until script completes!

     ....rest of SKILL code in script...launches child processes

   );prog

);proc





Choices in radio field to be displayed in two rows

Hi,

I am trying to add multiple choices to my radio field in the CDF parameters. When I select the instance and try editing the instance properties, I cannot view them in a single window; instead I get a vertical scroll bar. Is there a way to display them in multiple rows?

-Haareeth





skill ocean: how to get instances of type hisim_hv from simulation results?

Hi there,

I'm running a transient simulation, and I want to get all instances with the model implementation hisim_hv, because after that I want to process the data and adjust some parameters for this kind of device before dumping the values.

What is the easiest/fastest way to get those instances in skill/ocean?

What I did until now: 

- save the final OP of the simulation and then in skill

openResults()
selectResults('tranOp)
report(?type "hisim_hv" ?param "vgs")

The output seems promising; it looks like I can redirect it to a file and then parse the file.

Is there another, simpler way? I mean without saving the data to a file and parsing it.

Also, given an instance name, is it possible to get the model implementation (hisim_hv, bsim4, etc.)?

Best Regards,

Marcel





Mediatek Deploys Perspec for SoC Verification of Low Power Management (part 3 of 3)

Here we conclude the blog series and highlight the results of Mediatek's use of Cadence Perspec™ System Verifier for their SoC-level verification. In case you missed it, Part 1 of the blog is here, and Part 2 of the blog is here. One of their key...(read more)





Cadence Collaborates with Test & Verification Solutions on Portable Stimulus

The Cadence® Connections® Verification Program brings together a worldwide network of services, training, and IP development experts that support Cadence verification solutions. The program members help customers accelerate the adoption of new...(read more)




io

Preparing Accellera Portable Stimulus Standard for Ratification

The Accellera Portable Stimulus Working Group met at DVCon 2018 to move the process forward towards ratification. While we can't predict exactly when it will be ratified, the goal is now more clearly in sight! The Cadence booth was busy with a lo...(read more)




io

What’s Hot in Verification at this Year’s CDNLive? It’s Portable Stimulus Again!

CDNLive is a user conference, and verification is one of the largest categories of content with multiple tracks covering multiple days. Portable stimulus is one of the hottest new areas in verification, and continues to be popular in all venues. At l...(read more)




io

Integration and Verification of PCIe Gen4 Root Complex IP into an Arm-Based Server SoC Application

Learn about the challenges and solutions for integrating and verifying PCIe® Gen4 into an Arm-based server SoC. Listen to this relatively short webinar by Arm and Cadence, as they describe the collaboration and results, including methodology and...(read more)




io

Verification Reflections on 2018

In my predictions for 2018 I had identified five key trends driving verification in 2018 – Security, Safety, Application Specificity, Processor Ecosystems and System Design Enablement, all centered around ecosystems. Looking back now as the yea...(read more)




io

DAC 2019 Preview – Multi-MHz Prototyping for Billion Gate Designs, AI, ML, 5G, Safety, Security and More

Vegas, here we come. All of us fun EDA engineers at once. Be prepared, next week’s Design Automation Conference will be busy! The trends I had outlined after last DAC in 2018—system design, cloud, and machine learning—have...(read more)




io

My Journey - From a Layout Designer to an Application Engineer

Today, we are living in the era where whatever we think of as an idea is not far from being implemented…thanks to machine learning (ML) and artificial intelligence (AI) entering into the...

[[ Click on the title to access the full blog on the Cadence Community site. ]]




io

1G Mobile: AMPS, TOPS, C-450, Radiocom 2000, and All Those Japanese Ones

You can't read anything about technology these days without reading about 5G. But before there was 5G, there was 4G. And before that 3G, 2G, and 1G. A 0G even. For the next few Thursdays,...

[[ Click on the title to access the full blog on the Cadence Community site. ]]




io

Library Characterization Tidbits: Recharacterize What Matters - Save Time!

Recently, I read an article about how failure is the stepping stone to success in life. It instantly struck a chord and a thought came zinging from nowhere about what happens to the failed arcs of a...

[[ Click on the title to access the full blog on the Cadence Community site. ]]




io

Computational Software: A New Paradigm for EDA Tools

Cadence has a new white paper out on Computational Software. I've written on these topics in Breakfast Bytes, most recently in the posts: Computational Software System Analysis: Computational...

[[ Click on the title to access the full blog on the Cadence Community site. ]]




io

RAK Attack: Better Driver Tracing, Faster Palladium Build Time, UVM Register Map Automation

Looking to learn? There's a bunch of new RAKs (Rapid Adoption Kits) available online now!

1) Indago 19.09 Better Driver Tracing and More

Are you new to Indago and not sure where to start? Luckily, there’s a new Rapid Adoption Kit for you: the Indago 19.09 Overview RAK! This neat package contains everything you need to get your debugging started through Indago. In four short labs, plus a brief introductory lab, you’ll have all the basics of Indago 19.09 down—the Indago working environment, the SmartLog, how Indago interacts with the rest of the Cadence Verification Suite, and how Indago uses HDL driver tracing.

Lab 1 discusses the various debugging tools included in Indago and teaches you how to customize your Indago windows and environment settings. Lab 2 covers the SmartLog feature and talks about analyzing and filtering its messages to suit your needs, as well as how to interact with the waveform marker. Lab 3 is an interactive Indago debugging experience—it’ll walk you through how to use Indago and its features in an actual working environment: setting breakpoints, using simulator commands in the Indago console, toolbars, switches, and more. Lab 4 is all things HDL tracing—recording debug data, an introduction to debug assertions, waveform visualizations, driving expression analysis, and single-step driver tracing, among other things.

Interested? Check out the RAK here.

2) IXCOM MSIE: Faster Palladium Build Time

Got several testbenches you want to compile with the same DUT and tests, and you want to do it fast? With IXCOM, all you have to do to compile those different testbenches is use the xrun command for each after compiling your DUT. But what exactly is IXCOM, and how does one start using it? This quick RAK can help—here, you’ll learn the basics of using MSIE features with IXCOM, complete with an example to get you started. Using MSIE can vastly improve your build times with Palladium, and using IXCOM is the best way to shrink that tedious rebuild time as much as possible. Check out this RAK here.

3)  JasperGold Control and Status Register Verification App Automates UVM Register Map Verification

New to the JasperGold Control and Status Register (CSR) Verification App for your UVM testbenches? Don’t worry; there’s a RAK for that! This eponymous RAK can get you up and running in no time, helping you automate your checks from UVM register map specs. With this RAK, you’ll learn the basics of the JasperGold CSR app, how to use its Proof Accelerator, and more. CSR features a model-based approach to predicting a register’s expected value, supports pipelined interfaces and all IP-XACT access policies, and can fully model any expected register value. It also supports register aliases, read and write semantics, and separate read/write data latencies in any given field.

If this functionality sounds up your alley, you can take a look at this RAK here.




io

Metamorphic Testing: The Future of Verification?

Curious about what’s going on behind the scenes with verification? Bernard Murphy, Jim Hogan, and our own Paul Cunningham are on the case with the “Innovation in Verification” blog stream over at semiwiki.com. Every month, this trio reviews a newly-published paper in academia that pertains to verification and discusses its implications. Be sure to stop by—it’s a great place to see what might be coming down the pipeline someday.

This month, they discuss the implications of metamorphic testing. The purpose of metamorphic testing is to define a verification approach where there is no “golden reference.” This situation comes up a lot now as designs grow in complexity, and it raises the question: “how does one know the design is verified if there is no standard to compare to?” Metamorphic testing addresses the problem of not having a “gold standard” to compare to by comparing the results of related tests instead. The paper reviewed by this team used metamorphic testing to study methods of managing JavaScript tags.
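As a toy illustration (not from the reviewed paper), a metamorphic check never compares against a golden answer; it only checks a relation that must hold between two related runs, sketched here in SKILL for a sort routine:

procedure( checkSortMetamorphic(lst)
    ; relation: sorting a permuted copy of the input must give the same result
    let( ( (a sort(copy(lst) 'lessp))
           (b sort(reverse(copy(lst)) 'lessp)) )
        unless( equal(a b)
            error("metamorphic relation violated")
        )
        t
    )
)
; e.g. checkSortMetamorphic(list(3 1 2)) => t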

Paul saw this as a valuable new class of coverage. Metamorphic testing represents a way to create better distribution analyses through understanding the relationships among tests. This can reveal critical-but-complex issues that traditional verification methods may overlook. He saw this as an emerging class of coverage that new verification tools could be built around. Paul asserted that a future metamorphic-testing-based tool’s main contribution to the field of verification would be to better analyze noisy performance results where the noise is multi-modal. It could be useful in detecting race conditions and similar hard-to-debug anomalies. Paul also sees metamorphic testing as ripe for ML techniques. Overall—Paul sees a bright future for metamorphic testing in verification.

Jim is reminded of Solido and Spice—these metamorphic testing capabilities are “more than just a feature”—they might be a product. Maybe even a whole new class of verification tools, as Paul said.

Bernard says that this topic is “too rich to address in one blog”, so be sure to head over to the post to see more of what the future has in store for verification.




io

New Rapid Adoption Kit (RAK) Enables Productive Mixed-Signal, Low Power Structural Verification

All engineers can enhance their mixed-signal low-power structural verification productivity by learning while doing with a PIEA RAK (Power Intent Export Assistant Rapid Adoption Kit). They can verify the mixed-signal chip by generating a macromodel for their analog block automatically and running it through Conformal Low Power (CLP) to perform a low-power structural check.

The power structure integrity of a mixed-signal, low-power block is verified via Conformal Low Power integrated into the Virtuoso Schematic Editor Power Intent Export Assistant (VSE-PIEA). Here is the flow.

 

Applying the flow iteratively from lower to higher levels can verify the power structure.

Cadence customers can learn more in a Rapid Adoption Kit (RAK) titled IC 6.1.5 Virtuoso Schematic Editor XL PIEA, Conformal Low Power: Mixed-Signal Low Power Structural Verification.

The RAK includes a demo design (instructions are provided on how to set up the user environment). It introduces the Power Intent Export Assistant (PIEA) feature that has been implemented in the Virtuoso IC615 release. The extracted power intent is then verified by calling Conformal Low Power (CLP) inside the Virtuoso environment.

  • Last Update: 11/15/2012.
  • Validated with IC 6.1.5 and CLP 11.1

The RAK uses a sample test case to go through the PIEA + CLP flow as follows:

  • Setup for PIEA
  • Perform power intent extraction
  • CPF import: it is recommended to import macro CPF, as opposed to design CPF, for sub-blocks. If you choose to import design CPF files, please make sure the design CPF file has power domain information for all the top-level boundary ports
  • Generate macro CPF and design CPF
  • Perform low power verification by running CLP

It is also recommended to go through older RAKs as prerequisites.

  • Conformal Low Power, RTL Compiler and Incisive: Low Power Verification for Beginners
  • Conformal Low Power: CPF Macro Models
  • Conformal Low Power and RTL Compiler: Low Power Verification for Advanced Users

To access all these RAKs, visit our RAK Home Page and go to the Synthesis, Test and Verification flow.

Note: To access the above documents, use your Cadence credentials to log on to the Cadence Online Support (COS) website. The Cadence Online Support website https://support.cadence.com/ is your 24/7 partner for getting help and resolving issues related to Cadence software. If you are signed up for e-mail notifications, you can receive new solutions, Application Notes (Technical Papers), Videos, Manuals, and more.

You can send us your feedback by adding a comment below or using the feedback box on Cadence Online Support.

Sumeet Aggarwal




io

New Incisive Low-Power Verification for CPF and IEEE 1801 / UPF

On May 7, 2013 Cadence announced a 30% productivity gain in the June 2013 Incisive Enterprise Simulator 13.1 release.  Advanced debug visualization, faster turn-around time, and the extension of eight years of low-power verification innovation to IEEE 1801/UPF are the key capabilities in the release.

When we talk about low-power verification, it's easy to equate it with simulation.  For certain, simulation is the heart of a low-power verification solution. Simulation enables engineers to run their design in the context of power intent.  The challenge is that a simulation-only approach is inadequate. For example, if engineers could achieve SoC quality by verifying the individual function of each power control module (PCM), then simulation could be enough.  For a single power domain, simulation can be enough.

However, when the SoC has multiple power domains -- and we have seen SoCs with hundreds of them -- engineers have to check the PCMs and all of the arcs between the power modes.  These SoCs often synchronize some of the domain switching to reduce overall complexity, creating the potential for signal skew errors on the control signals for the connected domains.  Managing these complexities requires verification methodologies including advanced debug, verification planning, assertion-based verification, Universal Verification Methodology - Low Power (UVM-LP), and more (see Figure 1).

 

Figure 1:  Comprehensive Low-Power Verification 

But even advanced verification methodologies on top of simulation aren't enough.  For example, the state machine that defines the legal and illegal power mode transitions is often written in software. The speed and capacity of the Palladium emulation platform are ideal for verifying in this context, and it is integrated with simulation, sharing debug, UVM acceleration, and static checks for low power. And it reports verification progress into a holistic plan for the SoC.  Another example is the ability to compare the design in the implementation flow with the design running in simulation to make sure that what we verify is what we intend to build.

Taken together, verification across multiple engines provides the comprehensive low-power verification needed for today's advanced node SoCs.  That's the heart of this low-power verification announcement. 

Another point you may have noticed is the extension of the Common Power Format (CPF) based power-aware support in the Incisive Enterprise Simulator to IEEE 1801.  We chose to bring IEEE 1801 to simulation first because users like you sometimes need to mix vendors for regression flows.  Over time, Cadence will extend the low-power capabilities throughout its product suite to IEEE 1801.

If you are using CPF today, you already have the best low-power solution. The evidence is clear:  the upcoming IEEE 1801-2013 update includes many of the CPF features contributed to 1801/UPF to enable methodology convergence.  Since you already have those features in the CPF flow, any migration before you have a mature IEEE 1801-2013 tool flow would reduce the functionality you have today.

If you are using Unified Power Format (UPF) 1.0 today, you want to start planning your move toward the IEEE 1801-2013 standard.  A good first step would be to move to the IEEE 1801-2009 standard.  It fills holes in the earlier UPF 1.0 definition.  While it does lack key features in -2013, it is an improvement that will make the migration to -2013 easier. The Incisive 13.1 release will run both UPF 1.0 and IEEE 1801-2009 power intent today.

Over the next few weeks you'll see more technical blogs about the low-power capabilities coming in the Incisive 13.1 release.  You can also join us on June 19 for a webinar that will introduce those capabilities using the reference design supplied with the Incisive Enterprise Simulator release.

=Adam "The Jouler" Sherer

(Yes, "Sherilog" is still here.  :-) )




io

Low-Power IEEE 1801 / UPF Simulation Rapid Adoption Kit Now Available

There is no better way than a self-help training kit -- a rapid adoption kit, or RAK -- to demonstrate the Incisive Enterprise Simulator's IEEE 1801 / UPF low-power features and their usage. The features include:

  • Unique SimVision debugging 
  • Patent-pending power supply network visualization and debugging
  • Tcl extensions for LP debugging
  • Support for Liberty file power description
  • Standby mode support
  • Support for Verilog, VHDL, and mixed language
  • Automatic understanding of complex feedthroughs
  • Replay of initial blocks
  • 'x' corruption for integers and enumerated types
  • Automatic understanding of loop variables
  • Automatic support for analog interconnections

 

Mickey Rodriguez, AVS Staff Solutions Engineer, has developed a low-power UPF-based RAK, which is now available on Cadence Online Support for you to download.

  • This rapid adoption kit illustrates Incisive Enterprise Simulator (IES) support for the IEEE 1801 power intent standard.
  • Patent-pending Power Supply Network browser (only available with the LP option to IES).
  • In addition to an overview of IES features and the SimVision and Tcl debug features, a lab is provided to give the user an opportunity to try these out.

The complete RAK and associated overview presentation can be downloaded from our SoC and Functional Verification RAK page:

Rapid Adoption Kits | Overview | RAK Database
Introduction to IEEE-1801 Low Power Simulation | View | Download (2.3 MB)

 

We are covering the following technologies through our RAKs at this moment:

  • Synthesis, Test and Verification flow
  • Encounter Digital Implementation (EDI) System and Sign-off Flow
  • Virtuoso Custom IC and Sign-off Flow
  • Silicon-Package-Board Design
  • Verification IP
  • SoC and IP level Functional Verification
  • System level verification and validation with Palladium XP

Please visit https://support.cadence.com/raks to download your copy of the RAK.

We will continue to provide self-help content on Cadence Online Support, your 24/7 partner for learning more about Cadence tools, technologies, and methodologies as well as getting help in resolving issues related to Cadence software. If you are signed up for e-mail notifications, you're likely to notice new solutions, application notes (technical papers), videos, manuals, etc.

Note: To access the above documents, click a link and use your Cadence credentials to log on to the Cadence Online Support https://support.cadence.com/ website.

Happy Learning!

Sumeet Aggarwal and Adam Sherer




io

ST Microelectronics Success with IEEE 1801 / UPF Incisive Simulation - Video

ST Microelectronics reported their success with IEEE 1801 / UPF low-power simulation using Incisive Enterprise Simulator at CDNLive India in November 2013. We were able to meet with Mohit Jain just after his presentation and recorded this video that explains the key points in his paper.

With eight years of experience and pioneering technology in native low-power simulation, Mohit was able to apply Incisive Enterprise Simulator to a low-power demonstrator in preparation for use with a production set-top box chip.  Mohit was impressed with the ease with which he was able to reuse his existing IEEE 1801 / UPF code, including the power format files and the macro models coded in his Liberty files. Mohit also discusses how he used the power-aware Cadence SimVision debugger.

The Cadence low-power verification solution for IEEE 1801 / UPF also incorporates the patent-pending Power Supply Network visualization in the SimVision debugger.  You can learn more about that in the Incisive low-power verification Rapid Adoption Kit for IEEE 1801 / UPF here in Cadence Online Support.

Just another happy Cadence low-power verification user!

Regards,

 Adam "The Jouler" Sherer 




io

Freescale Success Stepping Up to Low-Power Verification - Video

Freescale was a successful Incisive® simulation CPF low-power user when they decided to step up their game. In November 2013, at CDNLive India, they presented a paper explaining how they improved their ability to find power-related bugs using a more sophisticated verification flow.  We were able to catch up with Abhinav Nawal just after his presentation to capture this video explaining the key points in his paper.

Abhinav had already established a low-power simulation process using directed tests for a design with power intent captured in CPF. While that is a sound approach, it tends to focus on the states associated with each power control module and at least some of the critical power mode changes.  Since the full system can potentially exercise unforeseen combinations of power states, the directed test approach may be insufficient. Abhinav built a more complete low-power verification approach rooted in a low-power verification plan captured in Cadence® Incisive Enterprise Manager.  He still used Incisive Enterprise Simulator and the SimVision debugger to execute and debug his design, but he also added Incisive Metric Center to analyze coverage from his low-power tests and connect that data back to the low-power verification plan.  As a result, he was able to find many critical system-level corner case issues, which, left undetected, would have been catastrophic for his SoC.  In the paper, Abhinav presents some of the key problems this approach was able to find.

You can achieve results similar to Abhinav's. Incisive Enterprise Simulator can generate a low-power verification plan from the power format and power-aware assertions, and it can collect power-aware coverage.  To get started, you can use the Incisive Low-Power Simulation Rapid Adoption Kit (RAK) for CPF available on Cadence Online Support.

Just another happy Cadence low-power verification user!

Regards,

Adam "The Jouler" Sherer  

 

 




io

netlist extraction from assembler in cadence virtuoso

Hello, I am trying to extract a netlist from a circuit in Assembler.

I have found the manual shown below; however, there is no such option in the Tools menu in Assembler.

How do I view the netlist of this circuit?
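As a scripted fallback I can netlist from OCEAN roughly like this (a sketch; the library/cell names are placeholders and the ?display option is an assumption), but I would still like to find the menu option in Assembler:

simulator('spectre)
design("mylib" "mycell" "schematic")       ; placeholder library/cell/view
createNetlist(?recreateAll t ?display t)   ; ?display is assumed to open the netlist in a window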

Thanks.



ASSEMBLER VIEW menu




io

mixer pxf simulation error(IC5141,Cadence workshop document)

Hello

The document I referenced is https://filebox.ece.vt.edu/~symort/rfworkshop/Mixer_workshop_instruction.pdf (this is a Cadence workshop document).

While following the pxf simulation in the above document, my results are different and I have a question.

My result picture is shown below.

<my result error>

<document result>

<my direct plot>

<document direct plot>

The difference with the documentation is that in the direct plot screen after the pxf simulation,

1. Output harmonics -> Input sideband

2. Frequency axis: out -> Frequency axis: absin

3. The results for port0 (the RF port) are also different (see the photo below).

4. The frequency values in the box are different.

My screen shows 5G, 10G, and 1K ~ 10M, but the document shows only 1K ~ 10M.

Please advise on a solution. Thank you.




io

Noise figure optimization of LNA

Hello, I am looking to reach the minimal noise figure by picking a combination of the RLC on the output load, as shown below.
I tried to do it with pnoise, but it gives me an error on the |f(in)|; I am not sure about the sidebands in my case (it's not a mixer).

How can I show the noise figure and optimize it using my output RLC?
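Since it is not a mixer, I expect a plain noise analysis is the starting point, something along these lines (a rough sketch; the port names, sweep range, and the "NF" output name are assumptions to check):

simulator('spectre)
design("mylib" "LNA" "schematic")        ; placeholder library/cell names
analysis('noise ?start "100k" ?stop "10G"
                ?oprobe "/PORT1"         ; output port instance - assumed name
                ?iprobe "/PORT0")        ; input port instance - assumed name
run()
selectResult('noise)
plot(getData("NF"))                      ; "NF" output name - to be verified in the results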

Thanks





io

Power gain circle interpretation question

Hello, I have made a power gain circle for 30 dB. To set a gain, we need to design a matching network for the input and output impedance.

But these gain circles show me only one complex number instead of two (as shown below).

Where did I go wrong in using the circle to find the input and output impedances that need to be matched in order to have 30 dB gain?
Thanks.




io

cadence simulation error

Hi, all

Recently, I met the simulation error shown in the picture when I ran a transient simulation of my circuit. How can I solve this problem?

thank you~