
Cleared to Land: An Interview with Cadence Veterans ERG Lead Johnathan Edmonds

Each November, we are reminded of the bravery and dedication of those who have served our country. At Cadence, we thank our Veteran employees for their patriotism by reaffirming our commitment to honoring their sacrifices and recognizing their contributions to our business success. Our diverse and inclusive culture is strengthened by the unique perspective of our Veteran employees, and we are proud to support the Veterans Inclusion Group as a space for community members and their allies to connect. In celebration of Veterans Day, we were excited to catch up with Johnathan Edmonds, Veterans Inclusion Group Lead and Design Engineering Director, for a heartfelt chat on his journey through military service to leadership within Cadence. Throughout the conversation, he shared the importance of creating space for Veterans, the skills they offer, and his aspirations for what the Veterans Inclusion Group will achieve in the years ahead. Oh yeah, and he flies planes, too! Join us as we dive into what makes this holiday special for so many across the nation and how we can respectfully commemorate it together.

Johnathan, you're a retired Air Force Reservist, pilot, and now a Design Engineering Director. Can you tell us about your journey from the military to your current role at Cadence?

I started my military and electronics journey in the Navy. I enlisted at 18 and served for six years as an aviation electronics technician. During this time, I was able to learn about and repair electronics on planes. This set me up for success, and when I was honorably discharged, I attended Virginia Tech to study computer engineering. Once I graduated, I continued my career as an engineer, but I still wanted to be a military pilot. From my past experience, I knew the reserves were an option where I could learn to fly and still have a civilian career. Not only was I lucky enough to get selected to go to pilot training, but after I returned from flight school, my luck grew, and I was hired at Cadence. Cadence has supported me throughout my military career, which has been a great benefit, as many companies don't support reservists. The best thing about serving and being employed at Cadence is how I could blend my skill sets to further the Air Force's mission and achieve great things in engineering.

As the first lead of Cadence's Veterans Inclusion Group, you have played an integral part in growing our culture and building community at the company since launching the group four years ago. What inspired you to take on the role of Inclusion Group Lead?

I was inspired by three things: camaraderie, service, and outreach. I wanted to see if we could achieve a similar sense of community through the Veterans Inclusion Group as we had during our service life. I also wanted to see how we could better serve our Veterans here at Cadence. I wanted to explore any benefits that could be expanded, roles that could be developed by Vets, and, lastly, I wanted to serve a broader community. COVID-19 put a damper on some of the community support, but we are getting back on track with Veteran employment programs and volunteer efforts like Carry the Load and Gold Star Families.

Why is it important to have this space dedicated to Veteran employees?

There are many reasons! Networking, for one, creates a stronger, more unified Cadence culture. Two, Vets face a variety of issues not generally understood by those who have not served, such as PTSD, where to get help for disabilities, how to get an old medical record, etc.
As I mentioned, I'm also passionate about connecting Veterans with employment and job opportunities. It is so nice to work for a company that actively recruits Vets. We have our own "language," if you will, so it's nice to have a space to talk in the language that we are familiar with.

What have been some of your favorite moments leading this group over the past few years? Are there any "wins" that you would like to recognize?

We have a lot of wins. Events held during COVID-19 and getting past COVID-19, donating to worthwhile causes, and hosting guest speakers are all fantastic milestones and accomplishments. That said, the biggest win is the hiring of new Veteran employees. Mark Murphy, Corporate VP of Sales Operations, and I have both welcomed Vets to our team during this time, and it is such a joy to watch what someone can do when given the opportunity to succeed in the right environment.

As you are set to transition out of the lead role next year, what do you hope to see the Veterans Inclusion Group accomplish next?

My hope is that the Veterans Inclusion Group partners with other companies, expanding our reach externally and exploring new opportunities to engage Veterans outside of Cadence.

Johnathan (left) speaks on an inclusion group panel, along with David Sallard (center), lead of Cadence's Black Inclusion Group and Sr. Principal Application Engineer; Christina Jamerson (on screen), lead of Cadence's Abilities Inclusion Group and Demand Generation Director; and Dianne Rambke (right), lead of Cadence's Latinx Inclusion Group and Marketing Communications Director.

What are the important ways that people can signal inclusion and respectfully honor Veterans at work? What are the most meaningful or impactful actions employees everywhere can take to support Veteran coworkers?

I think there is one answer to both questions. I recommend that people engage with their companies' employee resource groups (ERGs) and have conversations with them. Opening up the lines of communication will lead to new paths in their journeys.

What are you looking forward to in 2025, both personally and professionally?

In 2025, professionally, I am looking forward to taking mixed-signal systems and verification to another level by including emulation and automatic model generation, and by seeing which boundaries we can push in our SerDes and Chiplets products. Personally, I am looking forward to making my SXS street legal so I can drive places without getting a ticket, seeing my children participate in sports, church, and school, and taking my wife on vacation to Europe or somewhere else we can unplug.

Learn more about Cadence's Inclusion Groups, diverse culture, and commitment to belonging.





Randomization considerations for PCIe Integrity and Data Encryption Verification Challenges

Peripheral Component Interconnect Express (PCIe) is a high-speed interface standard widely used for connecting processors, memory, and peripherals. With the increasing reliance on PCIe to handle sensitive data and critical high-speed data transfer, ensuring data integrity and encryption during verification is an essential goal. In the field of verification, randomization is a key technique that drives robust PCIe verification: it introduces unpredictability to simulate real-world conditions and uncover hidden bugs in the design. This blog examines the significance of randomization in PCIe IDE verification, focusing on how it ensures data integrity and encryption reliability, while also highlighting the unique challenges it presents. For more background on PCIe IDE, you can refer to Introducing PCIe's Integrity and Data Encryption Feature.

The Importance of Data Integrity and Data Encryption in PCIe Devices

Data Integrity: Ensures that the transmitted data arrives unchanged from source to destination. Even minor corruption in data packets can compromise system reliability, making integrity a critical aspect of PCIe verification.

Data Encryption: Protects sensitive data from unauthorized access during transmission. Encryption in PCIe follows a standard to secure information while operating at high speeds.

Maintaining both data integrity and data encryption at PCIe's high-speed data transfer rates of 64GT/s in PCIe 6.0 and 128GT/s in PCIe 7.0 is essential for all endpoint devices. However, validating these mechanisms requires comprehensive testing and verification methodologies, which is where randomization plays a crucial role. You can refer to Why IDE Security Technology for PCIe and CXL? for more details on this.

Randomization in PCIe Verification

Randomization refers to the generation of test scenarios with unpredictable inputs and conditions to expose corner cases. In PCIe verification, this technique helps ensure that all possible behaviors are tested, including rare or unexpected situations that could cause data corruption or encryption failures and become serious hindrances later. For PCIe IDE verification, we consider the forms of randomization that help us verify behavior most efficiently.

Randomization for Data Integrity Verification

Here are some randomized verification approaches that mimic real-world traffic conditions, uncovering subtle integrity issues that might not surface in conventional verification methods.

1. Randomized Packet Injection: This technique randomizes data packets and injects them into the communication stream between devices. We inject random, malformed, or out-of-sequence packets into the PCIe link, and mix valid and invalid IDE-encrypted packets, to check the system's ability to detect and reject unauthorized or invalid packets. We verify that encryption/decryption occurs correctly across packets and that the system logs proper errors or alerts when it encounters invalid packets. This ensures coverage of different data paths and robust protocol checking. The technique helps assess the resilience of the IDE feature in PCIe in terms of: (i) Data corruption: detecting whether the system can maintain data integrity. (ii) Encryption failures: testing the robustness of the encryption under random data injection. (iii) Packet ordering errors: ensuring reordering does not affect data delivery.
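
As a rough illustration of what such constrained-random injection can look like in a SystemVerilog testbench, consider the minimal sketch below. The class and field names are hypothetical (not taken from any particular VIP); only the constrained-random pattern itself is the point.

    module ide_rand_demo;
      // Hypothetical randomized IDE packet: length, payload, and error knobs.
      class ide_packet;
        rand bit [9:0]  payload_len;
        rand bit [31:0] payload [];
        rand bit        corrupt_mac;   // inject an integrity error into this packet
        rand bit        out_of_order;  // deliver this packet out of sequence
        constraint c_len { payload_len inside {[1:512]}; payload.size() == payload_len; }
        constraint c_mix { corrupt_mac dist {0 := 90, 1 := 10}; } // mostly valid traffic
      endclass

      initial begin
        ide_packet pkt = new();
        repeat (1000) begin
          if (!pkt.randomize()) $fatal(1, "randomization failed");
          // Drive pkt onto the link here; the scoreboard should expect the DUT
          // to flag every packet where corrupt_mac or out_of_order is set.
        end
      end
    endmodule
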
2. Random Errors and Fault Injection: This involves simulating random bit flips, PCRC errors, or protocol violations to help validate the robustness of PCIe's error detection and correction mechanisms. These techniques help assess how well the PCIe IDE implementation: (i) detects and responds to unexpected errors; (ii) maintains secure communication under stress; (iii) follows the PCIe error recovery and reporting mechanisms (AER, Advanced Error Reporting); and (iv) ensures encryption and decryption states stay synchronized across endpoints.

3. Traffic Pattern Randomization: Randomizing the sequence, size, and timing of data packets helps test how the device maintains data integrity under heavy, unpredictable traffic loads.

Randomization for Data Encryption Verification

Encryption adds complexity to verification, as encrypted data streams are not readable by traditional checks. Randomization becomes essential to test how encryption behaves under different scenarios, ensuring that vulnerabilities such as key reuse or predictable patterns are identified and mitigated.

1. Random Encryption Keys and Payloads: Randomly varying keys and payloads helps validate the correctness of encryption without hardcoding assumptions. This ensures that the encryption logic behaves correctly across all possible inputs.

2. Randomized Initialization Vectors (IVs): Many encryption protocols require a unique IV for each transaction. Randomized IVs ensure that encryption does not repeat patterns. To understand the IDE key management flow, we can follow the diagram below, which illustrates a detailed example key programming flow using the IDE_KM protocol.

Figure 1: IDE_KM Example

As Figure 1 shows, the IDE_KM protocol involves the start of the IDE_KM session, device capability discovery, a key request from the host, key programming to the PCIe device, and key acknowledgment. First, the host starts the IDE_KM session by detecting the presence of the PCIe devices; if a device supports the IDE protocol, the system continues with the key programming process. Then a query occurs to discover the device's encryption capabilities, establishing whether the device supports dynamic key updates or static keys. The host then sends a request to the Key Management Entity to obtain a key suitable for the device. Once the key is obtained, the host programs it into the IDE controller on the PCIe endpoint, so that the host and the device share the same key to encrypt and authenticate traffic. The device acknowledges that it has received and successfully installed the encryption key, and the acknowledgment message is sent back to the host. Once both the host and the PCIe endpoint are configured with the key, a secure communication channel is established, and from this point all data transmitted over the PCIe link is encrypted to maintain confidentiality and integrity. IDE_KM thus plays a crucial role in distributing keys securely and maintaining encryption and integrity for PCIe transactions. Hence, the randomized key approach ensures that encryption does not repeat patterns.
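
To make the key and IV randomization concrete, here is a minimal SystemVerilog sketch. The 256-bit key and 96-bit IV widths are assumptions in line with common AES-GCM usage, and all names are illustrative:

    module ide_key_demo;
      class ide_key_cfg;
        rand bit [255:0] key;  // key material (width assumed)
        rand bit [95:0]  iv;   // per-transaction initialization vector (width assumed)
      endclass

      // Associative arrays used as sets to catch key/IV reuse across iterations.
      bit seen_key [bit [255:0]];
      bit seen_iv  [bit [95:0]];

      initial begin
        ide_key_cfg cfg = new();
        repeat (100) begin
          if (!cfg.randomize()) $fatal(1, "randomization failed");
          // A repeated key or IV is exactly the predictable pattern we must avoid.
          if (seen_key.exists(cfg.key) || seen_iv.exists(cfg.iv))
            $error("key/IV reuse detected");
          seen_key[cfg.key] = 1'b1;
          seen_iv[cfg.iv]   = 1'b1;
          // Program cfg.key into both partner ports here, per the IDE_KM flow above.
        end
      end
    endmodule
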
3. Randomized Partial Header Encryption (PHE): Partial Header Encryption (PHE) is an additional mechanism added to Integrity and Data Encryption (IDE) in PCIe 6.0. Validating PHE with a variety of traffic, and incorporating randomization in the APIs provided for validating the PHE feature, makes the data encryption more robust. Partial Header Encryption in Integrity and Data Encryption for PCIe has more detailed information on this.

Figure 2: High-Level Flow for Partial Header Encryption

4. Randomization of IDE Address Association Register values: IDE Address Association Registers 1/2/3 are supposed to be configured according to the memory address range of the IDE partner ports. The fields of the IDE address registers are split into multiple values, such as Memory Base Lower, Memory Limit Lower, Memory Base Upper, and Memory Limit Upper. An IDE implementation can have multiple register blocks covering 32- or 64-bit addresses, different register sizes, 0-255 selective streams, 0-15 address blocks, etc. Randomized verification here helps cover all the corner cases. Please refer to Figure 3.

Figure 3: IDE Address Association Register

5. Random Faults During Encryption: Injecting random faults (e.g., dropped packets or timing mismatches) ensures the system can handle disruptions and prevent data leakage.

Challenges of IDE Randomization and Their Solutions

Randomization introduces a vast number of scenarios, making it computationally intensive to simulate every possibility. Constrained randomization limits random inputs to valid ranges while still covering edge cases, and coverage-driven verification ensures critical scenarios are tested without excessive redundancy. Verifying encrypted data with random inputs adds further complexity: encryption masks the data, making it hard to verify outputs without compromising security. Here we can implement various IDE checks in the IDE callback to analyze encrypted traffic without decrypting it. Finally, randomization can trigger unexpected failures that are often difficult to reproduce. With seed-based randomization, a specific seed generates a repeatable random sequence, which helps in reproducing and analyzing the behavior precisely.

Conclusion

Randomization is a powerful technique in PCIe verification, ensuring robust validation of both data integrity and data encryption. It helps us uncover subtle bugs and edge cases that non-randomized testing might miss. In Cadence PCIe VIP, we support full-fledged IDE verification with rigorous randomized verification that ensures data integrity, and robust, reliable encryption mechanisms that ensure secure and efficient data communication. Randomization also brings various challenges, and to overcome them we adopt a combination of constrained randomization, seed-based testing, and coverage-driven verification. As PCIe continues to evolve toward higher speeds and stronger security demands, our Cadence PCIe VIP keeps pace with industry demand, verifying high-performance systems that safeguard data in real-world environments. For more information, you can refer to Verification of Integrity and Data Encryption (IDE) for PCIe Devices and Industry's First Adopted VIP for PCIe 7.0.

More Information:

For more info on how Cadence PCIe Verification IP and TripleCheck VIP enable users to confidently verify IDE, see our VIP for PCI Express, VIP for Compute Express Link, and TripleCheck for PCI Express.

For more information on PCIe in general, and on the various PCI standards, see the PCI-SIG website.





A Guide to Building a Mini Guitar/Audio Amplifier Based on the LM386

Hey, is it suitable to post here? I wanted a small yet robust amp for practicing while I travel. I wanted something that would fit in my pocket yet still be loud enough to hear.

Presented here is an amplifier based on the LM386 audio amplifier IC.

There is a standard circuit in the data sheet that is an excellent place to start.

Materials needed:
1 - HM359 project box
1 - 668-1237 speaker
1 - BS6I battery conn
1 - CP1-3515 stereo jack
1 - SC1316 stereo jack
2 - 450-1742 knob
1 - 679-1856 switch
1- 3mm LED
1 - 10 ohm 1/4W resistor
1 - 10uF ceramic cap
1 - 0.05 uF ceramic cap
1 - 420 uF electrolytic cap
1 - 8 ohm resistor
2 - 51AADB24 10K pot
1 - HM1252 circuit board
1 - LM386N-4 amplifier

Wire and Solder
Step 1: Prep the enclosure

Careful planning is required the first time you free-build a circuit. The circuit board has solder pads but no traces. You will have to use thin wire to make the connections for the circuit to work.

Begin by laying out the components on the circuit board that will need to pass through the enclosure. This enclosure has a removable top panel which will be used for the volume, gain and 1/4 inch stereo jack.

Space is limited, so check for fit before drilling.

All drilling of the plastic should be done with a step drill bit. This will make the cleanest holes without breaking the plastic.

Lay out the pots a few spaces back but still in line with the desired position. Mark the center of each pot shaft, then drill with a step bit to the tightest-fitting hole size. Make a center mark between the pot holes, then drill for the stereo jack.

On the inside of the top cover position and mark where the speaker will go.

Make a template on grid paper the same size as the speaker.

Tape the template to the inside of the cover as shown, then use a step bit to drill holes at the center of every square in the grid. This will form the speaker grille. Clean up the holes.

Step 2: Place the major components

Solder the pots to the circuit board as shown, then place the stereo jack. (Note: in order to get the final fit, I had to trim and modify the stereo jack housing a little.)

Next, position and solder the switch on the circuit board and mark a space on the top cover that will need to be cut for the switch opening. Use a small file to cut the opening.

Use a sharp knife to bevel the edges of the switch hole to allow for easier operation.

Drill a hole in the side of the upper case for the headphone jack and fasten it in place. (I had to recess the hole a bit for the retaining nut to grab.)

Step 3: Build the circuit

The speaker is held in place using two small brackets that come with a serial cable connector hood. (I had a bunch around that would never be used.)

Refer to the circuit shown here and to the LM386 datasheet. The basic circuit has only the volume control, while the datasheet shows how to add a gain control across pins 1 and 8 of the amplifier.

The speaker is wired in series with the headphone jack. The headphone jack has internal switches that shut the speaker off when the phones are plugged in.

I chose to use a chip socket for the amplifier, which makes prototyping easier since you do not have to worry as much about heat from soldering.

Carefully lay the circuit out on the board and begin wiring components together. I added a second pot and cap in series between pins 1 and 8 of the amp to be able to manually set the gain in addition to volume.
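
For reference, the LM386's default voltage gain is 20; the datasheet's bypass capacitor between pins 1 and 8 raises it to 200, and putting resistance in series with that capacitor (which is what the second pot does here) lets you set anything in between.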

Check your connections with a multimeter before adding the amplifier.

I chose to add an LED indicator for power. This was done by using one side of the switch contacts from the battery. The LED is in series with a 220 ohm resistor.
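
If you're curious about the numbers: assuming the usual 9V battery and a forward drop of around 2V across the LED, the 220 ohm resistor sets the LED current at roughly (9 - 2) / 220 ≈ 32 mA. That's plenty bright; something like 470 ohms to 1k would still light a modern LED clearly while stretching battery life.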

Assemble the case and insert the battery.

Step 4: Final notes

If the speaker is noisy while the headphones work normally, try reversing the speaker connections. If that does not correct the issue, connect an 8 ohm resistor across the speaker contacts.

You may have to place an insulating layer between the speaker and the place where the stereo jack comes through to prevent contact; if they touch, you'll hear a loud buzz.

You may have to add some foam in the battery compartment to stop the battery from banging around.

For reference, I've also read an article about amplifiers: http://www.apogeeweb.net/article/60.html

Thanks for reading!





Matlab cannot open PSpice; it prompts that orCEFSimpleUI.exe has stopped working!

Cadence_SPB_17.4-2019 + Matlab R2019a

Please follow the steps in this document.

1. Open BJT_AMP.opj.

2. Set the Matlab path.

3. Open BJT_AMP_SLPS.slx.

4. After opening it, configure the PSpiceBlock; the "orCEFSimpleUI.exe has stopped working" error appears.

5. Add the block.

6. Same error.

7. Open C:\Cadence\Cadence_SPB_17.4-2019\tools\bin: both orCEFSimpleUI.exe and orCEFSimple.exe are there.

8. Same error.

9. Open pspsim.slx.

10. Same error.

I would like to ask how to solve this. Thank you very much!





Issue With Loudness Normalization

Hello everyone. In recent days, I've been having a weird problem with the sound output on my Windows 10 PC: I can't control its loudness. Is there any possibility that the sound card's PCB is damaged?





How to design an enhancement-mode eGaN (EPC8002) switch in Cadence

Hi,

I need to design an EPC8002 eGaN switch in Cadence. Can someone provide me with a step-by-step guide on how to add the EPC8002 to my Cadence setup? I am working on BCD180.

Thank you 

Ihsan





Here Is Why the Indian Voter Is Saddled With Bad Economics

This is the 15th installment of The Rationalist, my column for the Times of India.

It’s election season, and promises are raining down on voters like rose petals on naïve newlyweds. Earlier this week, the Congress party announced a minimum income guarantee for the poor. This Friday, the Modi government released a budget full of sops. As the days go by, the promises will get bolder, and you might feel important that so much attention is being given to you. Well, the joke is on you.

Every election, HL Mencken once said, is “an advance auction sale of stolen goods.” A bunch of competing mafias fight to rule over you for the next five years. You decide who wins, on the basis of who can bribe you better with your own money. This is an absurd situation, which I tried to express in a limerick I wrote for this page a couple of years ago:

POLITICS: A neta who loves currency notes/ Told me what his line of work denotes./ ‘It is kind of funny./ We steal people’s money/And use some of it to buy their votes.’

We’re the dupes here, and we pay far more to keep this circus going than this circus costs. It would be okay if the parties, once they came to power, provided good governance. But voters have given up on that, and now only want patronage and handouts. That leads to one of the biggest problems in Indian politics: We are stuck in an equilibrium where all good politics is bad economics, and vice versa.

For example, the minimum guarantee for the poor is good politics, because the optics are great. It’s basically Garibi Hatao: that slogan made Indira Gandhi a political juggernaut in the 1970s, at the same time that she unleashed a series of economic policies that kept millions of people in garibi for decades longer than they should have been.

This time, the Congress has released no details, and keeping it vague makes sense because I find it hard to see how it can make economic sense. Depending on how they define ‘poor’, how much income they offer and what the cost is, the plan will either be ineffective or unworkable.

The Modi government’s interim budget announced a handout for poor farmers that seemed rather pointless. Given our agricultural distress, offering a poor farmer 500 bucks a month seems almost like mockery.

Such condescending handouts solve nothing. The poor want jobs and opportunities. Those come with growth, which requires structural reforms. Structural reforms don’t sound sexy as election promises. Handouts do.

A classic example is farm loan waivers. We have reached a stage in our politics where every party has to promise them to assuage farmers, who are a strong vote bank everywhere. You can’t blame farmers for wanting them – they are a necessary anaesthetic. But no government has yet made a serious attempt at tackling the root causes of our agricultural crisis.

Why is it that Good Politics in India is always Bad Economics? Let me put forth some possible reasons. One, voters tend to think in zero-sum ways, as if the pie is fixed, and the only way to bring people out of poverty is to redistribute. The truth is that trade is a positive-sum game, and nations can only be lifted out of poverty when the whole pie grows. But this is unintuitive.

Two, Indian politics revolves around identity and patronage. The spoils of power are limited – that is indeed a zero-sum game – so you’re likely to vote for whoever can look after the interests of your in-group rather than care about the economy as a whole.

Three, voters tend to stay uninformed for good reasons, because of what Public Choice economists call Rational Ignorance. A single vote is unlikely to make a difference in an election, so why put in the effort to understand the nuances of economics and governance? Just ask, what is in it for me, and go with whatever seems to be the best answer.

Four, Politicians have a short-term horizon, geared towards winning the next election. A good policy that may take years to play out is unattractive. A policy that will win them votes in the short term is preferable.

Sadly, no Indian party has shown a willingness to aim for the long term. The Congress has produced new Gandhis, but not new ideas. And while the BJP did make some solid promises in 2014, they did not walk that talk, and have proved to be, as Arun Shourie once called them, UPA + Cow. Even the Congress is adopting the cow, in fact, so maybe the BJP will add Temple to that mix?

Benjamin Franklin once said, “Democracy is two wolves and a lamb voting on what to have for lunch.” This election season, my friends, the people of India are on the menu. You have been deveined and deboned, marinated with rhetoric, seasoned with narrative – now enter the oven and vote.

The India Uncut Blog © 2010 Amit Varma. All rights reserved.
Follow me on Twitter.





India’s Problem is Poverty, Not Inequality

This is the 16th installment of The Rationalist, my column for the Times of India.

Steven Pinker, in his book Enlightenment Now, relates an old Russian joke about two peasants named Boris and Igor. They are both poor. Boris has a goat. Igor does not. One day, Igor is granted a wish by a visiting fairy. What will he wish for?

“I wish,” he says, “that Boris’s goat should die.”

The joke ends there, revealing as much about human nature as about economics. Consider the three things that happen if the fairy grants the wish. One, Boris becomes poorer. Two, Igor stays poor. Three, inequality reduces. Is any of them a good outcome?

I feel exasperated when I hear intellectuals and columnists talking about economic inequality. It is my contention that India’s problem is poverty – and that poverty and inequality are two very different things that often do not coincide.

To illustrate this, I sometimes ask this question: In which of the following countries would you rather be poor: USA or Bangladesh? The obvious answer is USA, where the poor are much better off than the poor of Bangladesh. And yet, while Bangladesh has greater poverty, the USA has higher inequality.

Indeed, take a look at the countries of the world measured by the Gini Index, which is the standard metric used to measure inequality, and you will find that USA, Hong Kong, Singapore and the United Kingdom all have greater inequality than Bangladesh, Liberia, Pakistan and Sierra Leone, which are much poorer. And yet, while the poor of Bangladesh would love to migrate to unequal USA, I don't hear of too many people wishing to go in the opposite direction.

Indeed, people vote with their feet when it comes to choosing between poverty and inequality. All of human history is a story of migration from rural areas to cities – which have greater inequality.

If poverty and inequality are so different, why do people conflate the two? A key reason is that we tend to think of the world in zero-sum ways. For someone to win, someone else must lose. If the rich get richer, the poor must be getting poorer, and the presence of poverty must be proof of inequality.

But that’s not how the world works. The pie is not fixed. Economic growth is a positive-sum game and leads to an expansion of the pie, and everybody benefits. In absolute terms, the rich get richer, and so do the poor, often enough to come out of poverty. And so, in any growing economy, as poverty reduces, inequality tends to increase. (This is counter-intuitive, I know, so used are we to zero-sum thinking.) This is exactly what has happened in India since we liberalised parts of our economy in 1991.

Most people who complain about inequality in India are using the wrong word, and are really worried about poverty. Put a millionaire in a room with a billionaire, and no one will complain about the inequality in that room. But put a starving beggar in there, and the situation is morally objectionable. It is the poverty that makes it a problem, not the inequality.

You might think that this is just semantics, but words matter. Poverty and inequality are different phenomena with opposite solutions. You can solve for inequality by making everyone equally poor. Or you could solve for it by redistributing from the rich to the poor, as if the pie was fixed. The problem with this, as any economist will tell you, is that there is a trade-off between redistribution and growth. All redistribution comes at the cost of growing the pie – and only growth can solve the problem of poverty in a country like ours.

It has been estimated that in India, for every one percent rise in GDP, two million people come out of poverty. That is a stunning statistic. When millions of Indians don’t have enough money to eat properly or sleep with a roof over their heads, it is our moral imperative to help them rise out of poverty. The policies that will make this possible – allowing free markets, incentivising investment and job creation, removing state oppression – are likely to lead to greater inequality. So what? It is more urgent to make sure that every Indian has enough to fulfil his basic needs – what the philosopher Harry Frankfurt, in his fine book On Inequality, called the Doctrine of Sufficiency.

The elite in their airconditioned drawing rooms, and those who live in rich countries, can follow the fashions of the West and talk compassionately about inequality. India does not have that luxury.

The India Uncut Blog © 2010 Amit Varma. All rights reserved.
Follow me on Twitter.





Can Amit Shah do for India what he did for the BJP?

This is the 20th installment of The Rationalist, my column for the Times of India.

Amit Shah’s induction into the union cabinet is such an interesting moment. Even partisans who oppose the BJP, as I do, would admit that Shah is a political genius. Under his leadership, the BJP has become an electoral behemoth in the most complicated political landscape in the world. The big question that now arises is this: can Shah do for India what he did for the BJP?

This raises a perplexing question: in the last five years, as the BJP has flourished, India has languished. And yet the leadership of both the party and the nation is more or less the same. Why, then, hasn't the ability to manage the party translated into governing the country?

I would argue that there are two reasons for this. One, the skills required in those two tasks are different. Two, so are the incentives in play.

Let’s look at the skills first. Managing a party like the BJP is, in some ways, like managing a large multinational company. Shah is a master at top-down planning and micro-management. How he went about winning the 2014 elections, described in detail in Prashant Jha’s book How the BJP Wins, should be a Harvard Business School case study. The book describes how he fixed the BJP’s ground game in Uttar Pradesh, picking teams for 147,000 booths in Uttar Pradesh, monitoring them, and keeping them accountable.

Shah looked at the market segmentation in UP, and hit upon his now famous “60% formula”. He realised he could not deliver the votes of Muslims, Yadavs and Jatavs, who were 40% of the population. So he focussed on wooing the other 60%, including non-Yadav OBCs and non-Jatav Dalits. He carried out versions of these caste reconfigurations across states, and according to Jha, covered “over 5 lakh kilometres” between 2014 and 2017, consolidating market share in every state in this country. He nurtured “a pool of a thousand new OBC and Dalit leaders”, going well beyond the posturing of other parties.

That so many Dalits and OBCs voted for the BJP in 2019 is astonishing. Shah went past Mandal politics, managing to subsume previously antagonistic castes and sub-castes into a broad Hindutva identity. And as the BJP increased its depth, it expanded its breadth as well. What it has done in West Bengal, wiping out the Left and weakening Mamata Banerjee, is jaw-dropping. With hindsight, it may one day seem inevitable, but only a madman could have conceived it, and only a genius could have executed it.

Good man to be Home Minister then, eh? Not quite. A country is not like a large company or even a political party. It is much too complex to be managed from the top down, and a control freak is bound to flounder. The approach needed is very different.

Some tasks of governance, it is true, are tailor-made for efficient managers. Building infrastructure, taking care of roads and power, building toilets (even without an underlying drainage system) and PR campaigns can all be executed by good managers. But the deeper tasks of making an economy flourish require a different approach. They need a light touch, not a heavy hand.

The 20th century is full of cautionary tales that show that economies cannot be centrally planned from the top down. Examples of that ‘fatal conceit’, to use my hero Friedrich Hayek’s term, include the Soviet Union, Mao’s China, and even the lady Modi most reminds me of, Indira Gandhi.

The task of the state, when it comes to the economy, is to administer a strong rule of law, and to make sure it is applied equally. No special favours to cronies or special interest groups. Just unleash the natural creativity of the people, and don’t try to micro-manage.

Sadly, the BJP’s impulse, like that of most governments of the past, is a statist one. India should have a small state that does a few things well. Instead, we have a large state that does many things badly, and acts as a parasite on its people.

As it happens, the few things that we should do well are all right up Shah’s managerial alley. For example, the rule of law is effectively absent in India today, especially for the poor. As Home Minister, Shah could fix this if he applied the same zeal to governing India as he did to growing the BJP. But will he?

And here we come to the question of incentives. What drives Amit Shah: maximising power, or serving the nation? What is good for the country will often coincide with what is good for the party – but not always. When they diverge, which path will Shah choose? So much rests on that.

The India Uncut Blog © 2010 Amit Varma. All rights reserved.
Follow me on Twitter.





How Do You Ensure the Reliability of Your Design in Virtuoso Studio?

Designers have long recognized the need to analyze the reliability of ICs. Two commonly used approaches for performing reliability analysis are calculating the change in device degradation and relying on safe-operating checks in circuit simulators.

With the ever-increasing use of ICs in mission-critical applications, accurate reliability analysis has become of paramount importance. Over the years, you have been using reliability analysis in Virtuoso ADE Assembler and Virtuoso ADE Explorer to measure and review aging effects, such as device characteristic degradation, model parameter changes, self-heating effects, and so on.

Reliability analysis can be performed in two modes: Spectre native and RelXpert. The analysis measures the effect of time on circuit performance drift and predicts the reliability of designs in terms of performance. In ADE Assembler, you can run the reliability simulation as a fresh test (when time is zero), a stress test (to generate degradation data), and an aged test (at specific intervals, such as one year, three years, or 10 years). In the stress test, extreme environmental conditions are used to stress devices before aging analysis.

The following figure shows the reliability simulation flow.

 

 

The Reliability Options form has the following four tabs: 

  • Basic: Enables you to specify analysis type, aging options, start and stop time of reliability simulation, and options related to device masking, degradation ratio, and lifetime calculation. 
  • Modeling: Enables you to choose the modeling type you want to use during reliability simulation. 
  • Degradation: Enables you to specify the options to print device and subcircuit degradation information into a .bt0 file. 
  • Output: Enables you to specify the degradation reports to be generated and methods to filter degradation results in the reports.

While the Basic and the Output tabs are used by design engineers, the Modeling and the Degradation tabs are primarily used by model developers.

 

Reviewing degradation reports in text or XML format can be a tiresome exercise because degradation data can be large and can contain a huge number of instances due to advanced technology nodes and post-layout simulations. To let you work effectively and interactively with these reports, the new reliability report is based on an SQLite database, which brings improved performance and the ability to sort and filter reliability data using SQLite operators.

 

As they say, watching this in action might help you more than reading about it, so please take a look at our Training Bytes video channel, which offers many helpful videos on how to run Reliability Analysis in Virtuoso Studio.

All the related videos are linked together in a channel so that you can easily access and watch as many as you like.

Reliability Analysis in Virtuoso Studio

 

Want to Learn More?

For lab instructions and a downloadable design, enroll for the online training courses of your interest on

Reliability Analysis in Virtuoso Studio vIC23.1 (Online)

Training is also available as a "Blended" or "Live" class.

Digital Badge Available

You can become Cadence Certified once you complete the course(s), and you can share your knowledge and certifications on social media channels. Go straight to the course exam at the Learning and Support Portal.

Note: Some of the above links are accessible only to Cadence customers who have a valid login ID for the Cadence Learning and Support Portal.

Do You Have Access to the Cadence Support Portal?

If not, follow the steps below to create your account.

  • On the Cadence Support portal, select Register Now and provide the requested information on the Registration page.
  • You will need an email address and host ID in order to sign up.
  • If you need help with registration, contact support@cadence.com.

To stay up-to-date with the latest news and information about Cadence training and webinars, subscribe to the Cadence Training emails.

If you have questions about courses, schedules, online, public, or live onsite training, reach out to us at Cadence Training.

Related Resources

  Training Bytes (Videos)

Virtuoso ADE Explorer Graphical User Interface

What is the need for Reliability Analysis? (Video)

  Blogs

Come Join Us and Learn from the Cadence Training Offerings

It’s the Digital Era; Why Not Showcase Your Brand Through a Digital Badge!

  Online Course

Reliability Analysis in Virtuoso Studio vIC23.1 (Online)

 

About Knowledge Booster Training Bytes

Knowledge Booster Training Bytes is an online journal that relays information about Cadence Training videos, online courses, and upcoming webinars that are available in the Learning section of the Cadence Learning and Support portal. This blog category brings you direct links to these videos, courses, and other related material on a regular basis.

Niyati Singh

On behalf of the Cadence Training team





Knowledge Booster Training Bytes - Writing Physical Verification Language Rules

Have you ever wanted to write a DRC rule deck to check for space or width constraints on polygons? Or have you wondered how the multiple lines of an LVS rule deck extract and conduct a comparison between the schematic and layout? Maybe you've been curious about the role of rule deck writers in creating high-quality designs ready for tape-out.

If any of these questions interest you, there is good news: the latest version (v23.1) of the Physical Verification Language Rules Writer (PVLRW) course is designed to teach you rule deck writing. This free 16-hour online course includes audio and labs designed to make your learning experience comfortable and flexible. Whether you are new to the concept or an experienced CAD/PDK engineer, the course is structured to enhance your rule deck writing skills.

The PVLRW course covers six core modules: Layer Processing, DRC Rules, Layout Extraction, ERC and LVS Rules, Schematic Netlisting, and Coloring Rules. There are also three optional appendix sections. Each module explains relevant rules with syntax, concepts, graphics, examples, and case studies.

This course is based on tool versions PEGASUS231 and Virtuoso Studio IC231.

Pegasus Input and Output

Pegasus is a cloud-ready physical verification signoff solution that enables engineers to support faster delivery of advanced-node integrated circuits (ICs) to market.

Pegasus requires input data in the form of layout geometry, schematic netlists, and rules that direct the tool operation. The rules fall into two categories: those that describe the fabrication process and those that control the job-specific operation.

Pegasus provides log and report files, netlists, databases, and error databases as output.

Overview of Pegasus Rule File

The rule decks written in Physical Verification Language (PVL) work for the Cadence PV signoff tools Pegasus and PVS (Physical Verification System).   

The PVL rules are placed in a file that gets selected in a run from the GUI or the command line, as the user directs. PVL rules may be on separate lines within the file and can also be contained in named rule blocks.

Each line of code starts with a PVL rule that uses prefix type notation. It consists of a keyword followed by options, input layer or variable names, and output layer or variable names.

A rule block has the format of the keyword rule, followed by a rule name of your choice and an opening curly brace. You then enter the rules you wish to perform, followed by a closing curly brace on its own final line, as in the sketch below.
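
As a purely illustrative sketch of that shape, a fragment of a rule file might look like the following. The command and layer names here are placeholders rather than real PVL keywords; only the prefix notation and the rule-block bracing reflect the description above.

    check_width -min 0.10 metal1 m1_width_err

    rule m1_checks {
        derive_and metal1 via1 m1_via
        check_space -min 0.12 m1_via m1_space_err
    }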

  Sample Rule deck with individual lines of code and rule blocks.

DRC Rules

The first step in a typical Pegasus flow is a Design Rule Check (DRC), which verifies that layout geometries conform to the minimum width, spacing, and other fabrication process rules required by an IC foundry. Each foundry specifies its own process-dependent rules that must be met by the layout design.

There are three types of DRC rules: layer definition rules, layer derivation rules, and DRC design check rules. Layer definition rules identify the layers contained in the input layout database, and layer derivation rules derive additional layers from the original input layers, allowing the tool to test the design against specific foundry requirements using the design check rules.

A sample DRC Rule deck

A layout view displaying the DRC violations

LVS Rules

The Pegasus Layout Versus Schematic (LVS) tool compares the layout netlist with the schematic netlist to check for discrepancies.

There are two essential LVS rule sets: LVS extraction rules and comparison rules. LVS extraction rules extract drawn devices and connectivity information from the input layout geometry data and output them into a layout netlist. The LVS extraction rule set also includes the layer definition, derivation, extraction, connectivity, and netlisting rules.

LVS comparison rules are associated with comparing the extracted layout netlist to a schematic netlist.

A sample LVS Rule deck. 

TCL, Macros, and Conditional commands

Tcl is supported and used in various Pegasus functionalities, such as Pegasus rule files and Pegasus configurator. Macros are functional templates that are defined once and can be used multiple times in a rule file. Conditional Commands are used to process or skip specific commands in the rule file.

Do You Have Access to the Cadence Support Portal?

If not, follow the steps below to create your account.

  • On the Cadence Support portal, select Register Now and provide the requested information on the Registration page.
  • You will need an email address and host ID to sign up.
  • If you need help with registration, contact support@cadence.com.

To stay up to date with the latest news and information about Cadence training and webinars, subscribe to the Cadence Training emails.

If you have questions about courses, schedules, online, public, or live onsite training, reach out to us at Cadence Training.

For any questions, general feedback, or future blog topic suggestions, please leave a comment.

Related Resources

Product Manuals

Cadence Pegasus Developers Guide

Rapid Adoption Kits

Running Pegasus DRC/LVS/FILL in Batch Mode

Training Byte Videos

What Is the Run Command File?

How to Run PVS-Pegasus Jobs in GUI and Batch modes?

PVS DRC Run From - Setup Rules

What Is PVS/Pegasus Layer Viewer?

PVL Coloring Ruledecks with Docolor and Stitchcolor 

PVL Commands: dfm_property with Primary & Secondary Layer

PVS Quantus QRC Overview 

Online Courses

Pegasus Verification System

PVS (Physical Verification System)

Virtuoso Layout Design Basics

About Knowledge Booster Training Bytes

Knowledge Booster Training Bytes is an online journal that relays information about Cadence Training videos, online courses, and upcoming webinars in the Learning section of the Cadence Learning and Support portal. This blog category brings you direct links to these videos, courses, and other related material on a regular basis. Subscribe to receive email notifications about our latest Custom IC Design blog posts.





How to transfer custom title block from Orcad Capture to PCB Editor

Hi,

So I was trying to update the title block of a schematic that I have. The title block that was on there was out of date. I clicked on Place --> Title Block and was able to find the title block that I need. I also have a .OLB file that contains that title block. Then I created a netlist with the old BRD file as the input file (to keep it as is but bring in the changes), but when I do that, I still do not see / cannot place the title block that I need. Under Place --> Format Symbols in Allegro, I do see a title block that is coming from the database (but it's the old one). I don't know what to do at this point and would appreciate any tips. I did make sure that the path to where the library is, is defined in the user preferences.
I also tried copying the title block into the library folder in Capture before creating my netlist, and that did not work either.

Thank you all.





Purging duplicate vias in PCB Editor

How do we purge/remove duplicated vias at the same location in PCB Editor? These vias are not stacked ones; they are just blind vias running through internal layers 12-14. I find an additional copy of the blind via at the same location. Not sure what caused this issue.





Launch footprint editor from Capture or PCB Editor?

I'd like to be able to edit a footprint for a part in my design without needing to find the footprint filepath and directly open that file in PCB Editor. I see that I can view footprints from Capture, and that doing so shows me the footprint path, but I can't find any way to launch the editor. Is there any way to go directly from a part in a Capture schematic or a placed part in a PCB Editor board design to editing that part's footprint?





Can I align pin numbers in the edit part window in OrCAD Capture?

Hello..

I'm updating a part in the part editor in OrCAD Capture, and I wonder how to align pin numbers using a menu or Tcl/Tk command.

Please, let me know. Thank you.





17.4 Design Sync Fails without providing errors

As the title suggests, I am unable to perform a design sync between OrCAD Capture and Allegro. When I add a layout and try to sync to it, I am given: ERROR(ORCAP-2426): Cannot run Design Sync because of errors. See session log for error details.

Session Log

[ORPCBFLOW] : Invoking ECO dialog.
INFO(ORNET-1176): Netlisting the design
INFO(ORNET-1178): Design Name:
C:\USERS\DDOYLE\DOCUMENTS\CADENCE\BOARDS\REMOTE POWER DEVICE\CAPTURE\REMOTE_POWER_DEVICE.DSN
Netlist Directory:
c:\users\ddoyle\documents\cadence\boards\remote power device\layout\allegro
Configuration File:
C:\Cadence\SPB_17.4\tools/capture/allegro.cfg
pstswp.exe -pst -d "C:\USERS\DDOYLE\DOCUMENTS\CADENCE\BOARDS\REMOTE POWER DEVICE\CAPTURE\REMOTE_POWER_DEVICE.DSN" -n "c:\users\ddoyle\documents\cadence\boards\remote power device\layout\allegro" -c "C:\Cadence\SPB_17.4\tools/capture/allegro.cfg" -v 3 -l 31 -s "" -j "PCB Footprint" -hpath "HPathForCollision"
Spawning... pstswp.exe -pst -d "C:\USERS\DDOYLE\DOCUMENTS\CADENCE\BOARDS\REMOTE POWER DEVICE\CAPTURE\REMOTE_POWER_DEVICE.DSN" -n "c:\users\ddoyle\documents\cadence\boards\remote power device\layout\allegro" -c "C:\Cadence\SPB_17.4\tools/capture/allegro.cfg" -v 3 -l 31 -s "" -j "PCB Footprint" -hpath "HPathForCollision"
{ Using PSTWRITER 17.4.0 d001Dec-14-2021 at 09:00:49 }

INFO(ORCAP-36080): Scanning netlist files ...

Loading... c:\users\ddoyle\documents\cadence\boards\remote power device\layout\allegro\pstchip.dat

Loading... c:\users\ddoyle\documents\cadence\boards\remote power device\layout\allegro\pstchip.dat

Loading... c:\users\ddoyle\documents\cadence\boards\remote power device\layout\allegro\pstxprt.dat

Loading... c:\users\ddoyle\documents\cadence\boards\remote power device\layout\allegro\pstxnet.dat
packaging the design view...
Exiting... pstswp.exe -pst -d "C:\USERS\DDOYLE\DOCUMENTS\CADENCE\BOARDS\REMOTE POWER DEVICE\CAPTURE\REMOTE_POWER_DEVICE.DSN" -n "c:\users\ddoyle\documents\cadence\boards\remote power device\layout\allegro" -c "C:\Cadence\SPB_17.4\tools/capture/allegro.cfg" -v 3 -l 31 -s "" -j "PCB Footprint" -hpath "HPathForCollision"
INFO(ORNET-1179): *** Done ***

This issue started to occur after I changed parts that exist on previously created PCBs. I changed the following leading up to this:

1. Added height in Allegro to many of my components using the Setup->Area->Package Height tool.

2. Changed the reference designator category in OrCAD Capture to TP for several components on board.

Any advice here would be most welcome. Thanks!





Sense line and decoupling capacitors

Hello,

A maybe silly question came to my mind: when routing sense lines, is it better to have them as close as possible to the DUT, or after the decoupling capacitors?

Force in red, sense in purple.

Is the best way 1 or 2?

Thanks in advance, and Merry Christmas to everybody!





UI issues of PCB Environment Editor 17.4

Hi,

I found that under the Dark Theme of PCB Environment Editor 17.4, the window background is not all dark, resulting in unclear text display.

As shown in the figure below:





Display Resource Editor: Different Colors for Schematic and Layout Axis

Hi

In the environment I'm currently working in, axes are shown for schematic, symbol, and layout views. For schematics and symbols, I'd prefer a dim gray, such that the axes are just visible but not dominant. For the layout, I'd prefer a brighter color. Is there a way to realize this? So far, when I change the color of the 'axis' layer in the Display Resource Editor, the axes in all three views get changed together:

Thanks very much for your input!





A problem with setup when Monte Carlo simulation starts

Hi, 

When I try to run Monte Carlo, it gives me a message with three possible causes of failure:

1. It says the machine selected in the current job setup policy is not reachable.

2. The Cadence hierarchy is not detected or is not installed properly; or

3. The job start script (with a path and a name like swiftNetlistService#) is not found on the remote machines.

Any recommendation on how to fix this?





How can I place stacked vias with exactly the cut width, without metal around them?

How can I place stacked vias sized to exactly the via cut width, with no metal around them, as shown in the red part of the image below?





Xcelium/Simvision/xrun running very slow (waiting for SimVision/Verisium Debug to connect...)

Hello,


I would like to use the simulation software xrun/SimVision that comes with Xcelium. We are currently using classroom licenses and want to block all IP addresses on the student PCs except the license server IP. We want to make sure that students cannot copy confidential data from the Cadence tools.


Problem:

When I launch the xrun simulation while all IP addresses are blocked, it starts, but the performance is very slow: the GUI takes 5 minutes to come up, and the simulation is ready after 10 minutes. The interesting thing is that when I re-enable the blocked IP addresses, everything works at a reasonable speed.

Terminal Output (execution without internet connection):

xrun -gui design.vhd

waiting for SimVision/Verisium Debug to connect...


Is there a way to run the simulation tools without an Internet connection? Or can you give me the IP addresses that are used by the simulation tools, so that I can enable only those specific IPs?


Regards,

Max





How to Set Up a Config View to Easily Switch Between Schematic and Calibre of DUT for Multiple Testbenches?

Hello everyone,

I hope you're all doing well. I’ve set up two testbenches (TB1 and TB2) for my Design Under Test (DUT) using Cadence IC6.1.8-64b.500.21 tools, as shown in the attached figure. The DUT has multiple views available: schematic, Calibre, Maestro, and Symbol, and each testbench uses the same DUT in different scenarios. Currently, I have to manually switch between these views, but I would like to streamline this process.

My goal is to use a single config view that allows me to switch between the schematic and the extracted (Calibre) views. Ideally, I would like to have a configuration file where making changes once would update both testbenches (TB1 and TB2) automatically. In other words, when I modify one config, both testbenches should reflect this update for a single simulation run.

I would really appreciate it if you could guide me on the following:

  1. How to create a config view for my DUT that can be used to easily switch between the schematic and extracted views, impacting both TB1 and TB2.
  2. Where to specify view priorities or other settings to control which view is used during simulation.
  3. Best practices for using a config file in this scenario, so that it ensures consistency across multiple testbenches.

Please refer to the attached figure to get a better understanding of the setup I’m using, where both TB1 and TB2 include the same DUT with multiple available views.

Thank you so much for your time and assistance!





vManager crashes when analyzing multiple sessions simultaneously with a fatal error detected by the Java Runtime Environment

When analyzing multiple sessions simultaneously, Verisium Manager crashed and reported the error messages below:

# A fatal error has been detected by the Java Runtime Environment:
#
#  SIGSEGV (0xb) at pc=0x00007efc52861b74, pid=14182, tid=18380
#
# JRE version: OpenJDK Runtime Environment Temurin-17.0.3+7 (17.0.3+7) (build 17.0.3+7)
# Java VM: OpenJDK 64-Bit Server VM Temurin-17.0.3+7 (17.0.3+7, mixed mode, sharing, tiered, compressed oops, compressed class ptrs, g1 gc, linux-amd64)
# Problematic frame:
# C  [libucis.so+0x238b74]

......

For more details please refer to the attached log file "hs_err_pid21143.log".

Two approaches were tried to solve this problem, but neither has worked.

Method 1:

Setting a larger Java heap size with the "-memlimit" option, for example "vmanager -memlimit 8G".

Method 2:

Enlarging the stack memory size limit of the coverage engine by setting the "IMC_NATIVE_STACKSIZE" environment variable to a larger value, for example "setenv IMC_NATIVE_STACKSIZE 1024000".

According to "hs_err_pid*.log" it is almost certain that the memory overflow triggered Java's CrashOnOutOfMemoryError and caused Verisium Manager to crash. There are some arguments about memory management of Java like "Xms, Xmx, ThreadStackSize, Xss5048k etc" and maybe this problem can be fixed by setting these arguments during analysis. However, how exactly does Verisium Manager specify these arguments during analysis? I tried to set them by the form of setting environment variables before analysis but it didn't work in analysis and their values didn't change.

Is there something wrong with my operation or is there a better solution?

Thank you very much.





How do I create a basic connectivity csv?

First-time user of JasperGold here, doing chip-level verification. I want to prove that an arbiter and a buffer are connected, and I want to use the Connectivity app to do that.

I see from the user guide that I should provide a connectivity map, but I have no idea how to construct one.
The training videos said to use this command: check_conn -generate_template jasper_template.xlsm -xlsm
But that did nothing, or at least it did not produce a file that I could find:


[<embedded>] % check_conn -generate_template jasper_template.xlsm -xlsm
ERROR (ESW104): Invalid command formation.
Problem occurs with "-generate_template -xlsm".

And even if it had, I don't know what's in the file, or whether it would contain the two IPs I'm trying to check.
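
Conceptually, the map is just a list of source/destination signal pairs. As a sketch only, here is Python that writes such a list as a CSV; the header names and hierarchical paths are made up for illustration and are not necessarily the column layout the Connectivity app expects (the generated template and the user guide are the authority on that):

# Illustrative only: dump source -> destination connection pairs as a CSV.
# Column names and signal paths are hypothetical; check the app's
# generated template for the real expected format.
import csv

pairs = [
    ("top.arbiter_inst.req_out", "top.buffer_inst.req_in"),
    ("top.arbiter_inst.gnt_out", "top.buffer_inst.gnt_in"),
]

with open("conn_map.csv", "w", newline="") as fh:
    writer = csv.writer(fh)
    writer.writerow(["source", "destination"])  # hypothetical headers
    writer.writerows(pairs)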

I'm hoping someone can give me a bit of a boost here with some knowledge.





[Xcelium][xrun] Simulate with multiple builds

I want to do a 2-step build->simulate flow as follows:

1. Make multiple builds using xrun -elaborate [other options]. The purpose is to create multiple builds with different compile-time macros (+define+MACROA +define+MACROB=ABC). Each build is located in a different directory.

2. Run the simulation with xrun -r. This is where I need help: how do I specify which build to simulate? Also, I need the simulation directory (with log files, …) to be different from the build directory.
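
For illustration, here is a minimal Python sketch of one way such a flow is often wired up, under two assumptions worth checking against the xrun documentation: that -xmlibdirname relocates the build library directory, and that -R reruns the snapshot found there (all directory and file names are made up):

# Sketch: build several variants with xrun -elaborate, each in its own
# library directory, then simulate a chosen one with its own log location.
# Assumptions: -xmlibdirname selects the library directory used for the
# build, and -R re-runs the existing snapshot found in that library.
import subprocess

BUILDS = {
    "build_a": ["+define+MACROA"],
    "build_b": ["+define+MACROB=ABC"],
}

# Step 1: one elaboration per macro set.
for name, defines in BUILDS.items():
    subprocess.run(
        ["xrun", "-elaborate",
         "-xmlibdirname", f"{name}/xcelium.d",
         *defines, "design.sv"],
        check=True,
    )

# Step 2: simulate one chosen build; -l keeps the log out of the build area.
subprocess.run(
    ["xrun", "-R",
     "-xmlibdirname", "build_a/xcelium.d",
     "-l", "run_a/xrun.log"],
    check=True,
)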

Has anyone been able to achieve this or a similar solution?





Is it possible to automatically exclude registers or wires that are not used from toggle coverage?

Hello,

I have a question about toggle coverage.

In my case, there are many unused registers and wires that negatively affect the toggle coverage score.

Is it possible to automatically exclude registers or wires that are not used from toggle coverage?

My RTL code is as follows. Is it possible to automatically exclude tb.top1.b and tb.top1.c without using an exclude file?

module top1;
  reg a;
  reg b;
  reg [31:0] c;

  initial begin
    #1 a = 1'b0;
    #1 a = 1'b1;
    #1 a = 1'b0;
  end
endmodule

module tb;
  top1 top1();
endmodule





C interface with Specman

Hi,

I need to call a C function from Specman. I followed the steps below.


File vb_pattern.e
---------------------------------

 
struct vb_pattern_s
{
  %data_in_ch0 : uint (bits : 4);   // data on channel 0
  %data_in_ch1 : uint (bits : 4);   // data on channel 1
  %data_in_ch2 : uint (bits : 4);   // data on channel 2
  %mode : uint (bits : 1);          // mode 
  %enable : uint (bits : 1);        // enable

};

C export vb_pattern_s;

------

file  x_output_bfm.e
--------------------------------------------

check_patterns()@clk_e is
{

  ...
 exp_viterbi_op();

}

routine exp_viterbi_op() is C routine viterbi_c_func;

---- EOF------

X.c
#include "vb_pattern.h"

void viterbi_c_func ()
{
 SN_TYPE(vb_pattern_s) vb_packet;
 SN_TYPE(mode)   mode;
 vb_packet = SN_SYS->ops
 mode = vb_packet->mode;
 printf(" Printing from C environment MODE = %h ", mode);

}


------------------- EOF----

x_top.e
------------
import  tb/vb_pattern.e;
import  tb/x_input_bfm.e;
import  tb/x_output_bfm.e;
import  tb/x_cover_dut.e;
import  tb/x_env.e;




I ran the following commands:

>> sn_compile.sh -h_only x_top.e -o vb_pattern.h
>> gcc -c viterbi.c -o viterbi.o

I am getting the following error:


viterbi.c: In function `viterbi_c_func':
viterbi.c:6: error: `t__mode' undeclared (first use in this function)
viterbi.c:6: error: (Each undeclared identifier is reported only once
viterbi.c:6: error: for each function it appears in.)
viterbi.c:6: error: syntax error before "mode"
viterbi.c:7: error: `mode' undeclared (first use in this function)


Please help me resolve this.

Kesav


Originally posted in cdnusers.org by kesava





Cisco's utilities library donation

Dear users,
Cisco has graciously agreed to donate a library of several utility packages to the e community. Please refer to LIBRARY_README.txt for general information, and to each package's PACKAGE_README.txt file for more information on that package. The tar file containing the utilities library is attached to this message.

The zip file containing informational slides on Cisco's utility library packages is 9 MB, so it may take a while to download. Since it is too big to attach to this post, the unzipped files are posted in three separate entries below.

For your convenience, we have also extracted the document “Directory Structure.doc” from the csco_base_env/docs location.
 
Note: The library contains the csco_testflow package, which adds phases to e's run phase. Cadence strongly encourages customers to adopt the testflow phases feature that Cadence is releasing in Specman 6.2. The new phases in e will be similar to the phases defined in the csco_testflow package but will be a formal part of the e language. For more information, please contact IPCM@cadence.com.


Originally posted in cdnusers.org by meirav





Macro for multiple-value when definitions

The two macros below introduce new syntax for adding definitions to more than one 'when' determinant value at the same time. The first macro overloads the 'extend' keyword, and the second is the equivalent for the 'when' keyword.

A use example:

extend [HUGE, BIG] packet {
    // definitions that pertain to these subtypes
};


The above code would be expanded in the following (naive) way:

extend HUGE packet {
    // definitions that pertain to these subtypes
};
extend BIG packet {
    // definitions that pertain to these subtypes
};



The macros code:

define <mv_extend'statement>
       "extend [<detvals'name>,...] <type'name> ({<sms>;...})" as computed {
    for each in <detvals'names> do {
        result = appendf("%sextend %s %s %s;", result, it, <type'name>, <sms>);
    };
    result = appendf("{%s}", result); // required only for versions 6.1.1 or earlier
};

define <mv_when'struct_member>
       "when [<detvals'name>,...] <type'name> ({<sms>;...})" as computed {
    for each in <detvals'names> do {
        result = appendf("%swhen %s %s %s;", result, it, <type'name>, <sms>);
    };
    result = appendf("{%s}", result); // required only for versions 6.1.1 or earlier
};


Originally posted in cdnusers.org by matanvax





Item constraint macro

The following macros encapsulate a design pattern that enables constraining data item fields from the 'do' actions of a high-level sequence. This can be done without presupposing anything about the sequence tree generated under the do-ing sequence.

The tar file consists of 4 files:
- item_constraint_macro.e - the 'item_constraint' and 'sequence_export' macro definitions
- item_constraint_example.e and sequence_export_example.e - usage examples (one per new construct)
- packet_seq.e - an auxiliary definition file for the examples

This topic was discussed in a Club-T presentation (Israel, Sophia-Antipolis, Munich). The presentation is also publicly available.


Originally posted in cdnusers.org by matanvax





ce_tools directory no longer shipped with Specman

Hello All,

Starting with version 8.1, the contents of the ce_tools directory will no longer be shipped with Specman. The directory contains some unsupported AE/R&D software and has not been updated for several releases (i.e., most of those old packages don't work with the latest release).
 
Attached are the contents of this directory. Please read the README before using any of the packages.


Regards,
-hannes


Originally posted in cdnusers.org by hannes





Specman Makefile generator utility

I've heard lots of people asking for a way to generate Makefiles for Specman code, and it seems there are some who don't use "irun", which takes care of this automatically. So I wrote this little utility to build a basic Makefile from the compiled and loaded e code.

It's really easy to use: at any time, load snmakedeps.e into Specman and run "write makefile <name> [-ignore_test]".
This dumps a Makefile with a set of variables corresponding to the loaded packages, and targets to build any compiled modules.
Using -ignore_test avoids having the test file in the Makefile, in case you switch tests often (who doesn't?).

It also writes a stub target so you can do "make stub_ncvlog" or "make stub_vhdl", etc.

The targets are pretty basic; I thought it was more useful to include this in the main Makefile and define your own more complex targets/dependencies as required.

The package uses the "reflection" facility of the e language, which is documented as of Specman 8.1, so you can extend this utility if you want (please share any enhancements you make).

 Enjoy! :-)

Steve.





vr_ad register definition utility

Hi All.

I put together a small Perl script to generate vr_ad register definitions from SPIRIT (IP-XACT) XML.
If you've got no idea what IP-XACT is, have a look at www.spiritconsortium.org/, then start pestering your design manager to use it :-)

The script can filter out registers and override R/W access types if needed.

An example XML file is included with the package so that you can play with it, and there's a detailed README.txt as well.

Here's an example of the generated e code:

// Automatically generated from xdmac.xml
// DO NOT EDIT, or your changes may be lost
<'

import vr_ad/e/vr_ad_top;

// Component = XDMAC
// memoryMap = xdmac
extend vr_ad_map_kind : [XDMAC];

// addressBlock = dma_eth
extend vr_ad_reg_file_kind : [DMA_ETH];

extend DMA_ETH vr_ad_reg_file {
keep size == 20;
keep addressing_width_in_bytes == 4;
};

// Register = command
// Reset = 0x00
reg_def COMMAND DMA_ETH 0x0 {
// Field resv3 = command[31:29]
reg_fld resv3 : uint(bits:3) : R : 0 : cov ;
// Field transfer_size = command[28:19]
reg_fld transfer_size : uint(bits:10) : RW : 0 : cov ;
// Field dma_transfer_target = command[18:14]
reg_fld dma_transfer_target : uint(bits:5) : RW : 0 : cov ;
// Field resv2 = command[13:10]
reg_fld resv2 : uint(bits:4) : R : 0 : cov ;
// Field transmit_receive = command[9:9]
reg_fld transmit_receive : uint(bits:1) : RW : 0 : cov ;
// Field resv1 = command[8:5]
reg_fld resv1 : uint(bits:4) : R : 0 : cov ;
// Field dest_address_enable = command[4:4]
reg_fld dest_address_enable : uint(bits:1) : RW : 0 : cov ;
// Field source_address_enable = command[3:3]
reg_fld source_address_enable : uint(bits:1) : RW : 0 : cov ;
// Field word_size = command[2:0]
reg_fld word_size : uint(bits:3) : R : 0 : cov ;
};

// Register = queue_exec
// Reset = 0x00
reg_def QUEUE_EXEC DMA_ETH 0x10 {
// Field resv = queue_exec[31:1]
reg_fld resv : uint(bits:31) : R : 0 : cov ;
// Field exec = queue_exec[0:0]
reg_fld exec : uint(bits:1) : RW : 0 : cov ;
};

extend XDMAC vr_ad_map {
dma_eth : DMA_ETH vr_ad_reg_file;

post_generate() is also {
add_with_offset(0x00, dma_eth);
dma_eth.reset();
};
}
'>
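
The utility itself is Perl, but as a rough sketch of the idea in Python, this is roughly what pulling registers and fields out of SPIRIT XML looks like. The namespace URI, element names, and the hard-coded DMA_ETH register file are assumptions based on the SPIRIT 1.4 schema and the example above; the real script does much more (filtering, access-type overrides, etc.):

# Rough sketch (the posted utility is Perl): walk SPIRIT/IP-XACT XML and
# print vr_ad-style register definitions. Schema details are assumed.
import xml.etree.ElementTree as ET

NS = {"spirit": "http://www.spiritconsortium.org/XMLSchema/SPIRIT/1.4"}

def dump_registers(xml_file):
    root = ET.parse(xml_file).getroot()
    for reg in root.iter("{%s}register" % NS["spirit"]):
        name = reg.findtext("spirit:name", namespaces=NS)
        offset = reg.findtext("spirit:addressOffset", namespaces=NS)
        # The register file name is hard-coded here; the real script
        # derives it from the enclosing addressBlock element.
        print("reg_def %s DMA_ETH %s {" % (name.upper(), offset))
        for fld in reg.findall("spirit:field", NS):
            fname = fld.findtext("spirit:name", namespaces=NS)
            width = fld.findtext("spirit:bitWidth", namespaces=NS)
            access = fld.findtext("spirit:access", namespaces=NS)
            mode = "RW" if access == "read-write" else "R"
            print("reg_fld %s : uint(bits:%s) : %s : 0 : cov ;"
                  % (fname, width, mode))
        print("};")

dump_registers("xdmac.xml")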

 

Any comments, please feed them back to me so I can enhance the script.
Note that this forum forces me to post a .zip file rather than a .tgz. Please be careful to unpack the file under Linux, not Windows, else the DOS line endings will corrupt the Perl and XML files.

Steve





IntelliGen Statistics Metrics Collection Utility

As noted in white papers, posts on the Team Specman Blog, and the Specman documentation, IntelliGen is a completely new stimulus generator compared with the original "Pgen", so some effort is needed to migrate an existing verification environment to fully leverage the power of IntelliGen. One of the main steps in migrating code is running the linters on your code and addressing the issues highlighted.

Included below is a simple utility you can include in your environment to collect some valuable statistics about your code base, so you can better gauge the amount of work that might be required to migrate from Pgen to IntelliGen. The ICFS statistics are of particular benefit: the utility not only identifies the approximate number of ICFSs in the environment, it also breaks the total down by generation context (structs/units and gen-on-the-fly statements), allowing you to better focus your migration efforts.

IMPORTANT: Sometimes a given environment triggers a large number of IntelliGen linting messages right off the bat. Don't let this freak you out! It does not mean that migration will be a long effort; quite often a few slight changes to an environment remove a large number of the identified issues. I recently encountered a situation where a simple change to three locations in the environment removed 500+ ICFSs!

The methods included in the utility can be used to report information on the following:
- Number of e modules
- Number of lines in the environment (including blanks and comments)
- Number and type of IntelliGen Guidelines linting messages
- Number of Inconsistently Connected Field Sets (ICFSs)
- Number of ICFS contexts and how many ICFSs per context
- Number of soft..select overlays found in the environment
- Number of Laces identified in the environment


To use the code below, simply load it before/after loading e-code and then
you can execute any of the following methods:

- sys.print_file_stats()             : prints # of lines and files
- sys.print_constraint_stats()   : prints # of constraints in the environment
- sys.print_guideline_stats()    : prints # of each type of linting message
- sys.print_icfs_stats()            : prints # of ICFSs, contexts and #ICFS/context
- sys.print_soft_select_stats() : prints # of soft select overlay issues
- sys.print_lace_stats()           : *Only works for SPMNv6.2s4 and later* prints # of laces identified in the environment

Each of the above method calls produces its own log file (stored in the current working directory) containing relevant information for more detailed analysis.
- file_stats_log.elog : Output of "show modules" command
- constraint_log.elog : Output of the "show constraint" command
- guidelines_log.elog : Output of "gen lint -g" (with notification set to MAX_INT in order to get all warnings)
- icfs_log.elog       : Output of "gen lint -i" command
- soft_select_log.elog: Output of the "gen lint -s" command
- lace_log.elog       : Output of the "show lace" command


Happy generating!

Corey Goss





help with automating adding CLP files to DRA files

Question for forum:
I'm currently working on code to automatically add CLP files to DRA files and then add two classes called "APPROVED" and "CLP". To do this manually, you have to open a DRA file, click File > Import > Subdrawing, and choose the CLP file with the same name as the DRA (the path is already set). You then place the CLP at position x 0 0, and then click Set Up > Subclasses > Package Geometry and type in "Approved" and "Clp".
So far we've recorded the macros in Allegro for all of these actions. The macros correspond to one specific file name, and we want to apply this to numerous files. To do this, we created a Python program that locates all of the specified CLP and DRA files and, for each pair with a matching name, stores the file name in a variable and processes it in a loop (see the sketch below). We converted this script to batch and then added a function that we thought would run Allegro macros from batch.
To get the script working, we need an Allegro batch command that will run the script without opening the Allegro start popup, or that closes the popup when it appears. We need this to be able to run any script when starting Allegro.
I've done another similar program in batch where I made a for loop for each DRA file, and within the loop a batch a2dxf command converted all DRA files to DXF files. Is there a similar batch command for adding CLP files at position 0 0 and/or adding classes? If anyone has done something similar, please let me know!
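
Here is the sketch of the matching step mentioned above, in Python; the directory names are assumptions, and the emitted script body is a placeholder rather than real Allegro commands:

# Sketch: find every .dra that has a same-named .clp and emit one script
# per pair. The script body is a placeholder: substitute the recorded
# macro that imports the subdrawing at x 0 0 and adds the APPROVED and
# CLP subclasses.
from pathlib import Path

DRA_DIR = Path("designs")   # assumed locations
CLP_DIR = Path("clp")

for dra in sorted(DRA_DIR.glob("*.dra")):
    clp = CLP_DIR / (dra.stem + ".clp")
    if not clp.exists():
        continue
    scr = dra.with_suffix(".scr")
    scr.write_text(
        "placeholder: recorded macro commands for %s go here\n" % clp.name
    )
    print("wrote", scr)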
Thank you very much for the help.
Jen





Creating cover items for sparse values/queue or define in Specman

Hello,

I have a question: I want to create a cover item covering a set of sparse, pre-computed values (a list or a define), for example l = {1; 4; 7; 9; 2048; 700}. I'd like to cover that data (a uint(bits:16)) took those values. Any suggestions on how to achieve this? I'd prefer to stay away from macros and avoid writing a lot of code.

struct inst {
    data   : uint(bits:16);
    opcode : uint(bits:16);
    !valid_data : list of uint(bits:16) = {0; 12; 10; 700; 890; 293};

    event data_e;
    event opcode_e;

    cover data_e is {
        item data using radix = HEX, ranges = {
            // I don't want to write all of this
            range([0], "My range1");
            range([10], "My range2");
            // ... many values in between
            range([700], "My rangen");
        };
        item opcode;
        cross data, opcode;
    };

    post_generate() is also {
        emit data_e;
    };
};







Clarity Encrypted Connectors!

Cadence Clarity 3D Solver supports encrypted component models! Using this functionality, vendors can supply their 3D components, such as connectors, to end customers without revealing the physical IP of these designs. The first connector vendor to take advantage of this functionality is Japan Aviation Electronics (JAE). (read more)





Sigrity and Systems Analysis 2022.1 HF2 Release Now Available

The Sigrity and Systems Analysis (SIGRITY/SYSANLS) 2022.1 HF2 release is now available for download at Cadence Downloads. For the list of CCRs fixed in the 2022.1 HF2 release, see the README.txt file in the installation hierarchy. (read more)





Quickchat Video Interview: Introducing Cadence Optimality and OnCloud for Systems Analysis and Signoff

Microwaves & RF's David Maliniak interviews Sherry Hess of Cadence about the recently announced Optimality and OnCloud products. (read more)





BoardSurfers: Training Insights: What’s New in the Allegro PCB Editor Basic Techniques Course

The Allegro PCB Editor Basic Techniques course provides all the essential training required to start working with Allegro® PCB Editor. The course covers all the design tasks, including padstack and symbol creation, logic import, constraints setup... (read more)





Plot on Smith Chart from HB Simulation

Dear All,

To design an outphasing combiner, I need to extract the input impedances when the circuit is driven by two sources concurrently with a varying phase shift, and plot them on a Smith chart. However, I can't find a way to display the simulated S-parameters on a Smith chart.

The testbench, shown below, consists of two sources set to 50 Ohm with variable phase (PORT0: theta, PORT1: -theta) swept from -90° to 90°. The NPORTs are couplers used to isolate the forward and reflected power at each source, i.e. the S-parameters are:

  • S13 = S31 = 1; through path
  • S21 = S41 = 1; forward and reflected power
  • All others are zero

The testbench is simulated with an "hb" analysis instead of "sp", as the two sources have to be excited simultaneously with a varying phase shift to see their load-modulation effect. The sweep is set up in the "Choosing Analyses" window. The powers of the forward (pXa) and reflected (pXb) waves are obtained through the "Direct Plot" window, e.g. pvr('hb "/p1a" 0 50.0 '(1)) as the power for p1a, named P1A_Watt. The S-parameters are then calculated as the reflected power relative to the forward power, P1B_Watt/P1A_Watt. This approach is based on a hot S-parameter testbench from ADS.
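
One detail worth noting (standard power-wave algebra, not something from the testbench itself): the power ratio fixes only the squared magnitude of the reflection coefficient,

\Gamma_1 = b_1 / a_1,   so   |\Gamma_1|^2 = P_{1b} / P_{1a}

so placing a point on the Smith chart still requires the phase of \Gamma_1; the magnitude alone only determines a circle on the chart.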

At this point, I would like to display these S-parameters on a Smith chart. However, this turned out to be more challenging than expected. One straightforward method would seem to be creating an empty Smith chart window in the display window and dragging the S-parameters from the rectangular plot onto it, but this just deletes them from the display window. Hence my question:

How can I display these S-parameters on a Smith Chart?





Error with launching Python Script Via AWRDE VBA Script

Hello Team,
I am currently following the AWR script below, posted on their knowledgebase, to run a Python script directly from inside AWR using a VBA script.

Launching Python from AWRDE VBA Script - AWR Scripts - AWR Knowledgebase

I followed all the basic steps with no changes to the code. I have VS Code and python-3.12.2 installed on my system.

This is the error I am getting while running this code. 

 

Thank you for your help 

Best Regards

SID





VAR("") does not work within some expressions

Hi,

My Virtuoso and Spectre Version: ICADVM20.1-64b.NYISR30.2

I have an expression whose EvalType is "sweeps". Here is the expression (I also attached a snapshot):

(peakToPeak(leafValue(swapSweep(delay(?wf1 clip((VT("/clk0") - VT("/clk180")) (VAR("mt_stop") - (4.0 / VAR("datarate"))) VAR("mt_stop")) ?value1 0 ?edge1 "rising" ?nth1 1 ?td1 0 ?tol1 nil ?wf2 clip((VT("/tx_padp") - VT("/tx_padn")) (VAR("mt_stop") - (4.0 / VAR("datarate"))) VAR("mt_stop")) ?value2 0 ?edge2 "rising" ?nth2 1 ?tol2 nil ?td2 nil ?stop nil ?multiple nil) "VDD_FIXED_NOISE") "VREGLN_cmode" 0.85 "VREGDRV_novn" 0.4 "datarate" 1.658e+10) ?overall t) / 10.0)

What this expression does is compare the delay between the output data and a reference clock. I then get this information for two conditions (VDD_FIXED_NOISE = 0 or 10mV) to measure the effect of supply-induced jitter. In the expression, I need to give the value of each parameter in the different modes to distinguish them from each other. Now I want to sweep the base supply values and see the effect of supply variation. For example, I want to change VREGLN_cmode from 0.85 to 0.81 and see how my supply-induced jitter changes. The hard way is to copy the expression and change that value accordingly (e.g. "VREGLN_cmode" 0.81). I'm looking for an easier way, using a variable in the expression, something like VAR("VREGLN_Sweep"), but that doesn't work in my expression and gives an eval error. I tested this before in other (non-sweep) expressions and it always worked. I have only one test, and these variables are all design variables, not global variables.
I want to know what mistake I am making here and whether there is a way to make this work. Sorry if I could not explain my inquiry better. Thank you.









Measuring DDJ (data dependent jitter). Cross function on eye-diagram

Hi,
My Virtuoso and Spectre Version: ICADVM20.1-64b.NYISR30.2
I plot an eye diagram using a built-in function. I want to see the data-dependent jitter (DDJ). I want to measure the eye-diagram edges at the zero crossing (the width of that diamond part), shown in the pic by vertical and horizontal markers. I can put a marker, read the numbers there, and get what I want. But now I want to run Monte Carlo, and I can't do this for all samples. I wish I could write an expression for this. Unfortunately, the "cross" function does not work on the eye diagram. Basically, when I send the eye-diagram data to a table, I see that it is actually just the PRBS data, not the eye-diagram data. Is there a hack that can help me achieve my goal, which is having an expression to measure the edges of the eye diagram at the zero crossing?
There is a script that Andrew wrote (https://support.cadence.com/wps/mypoc/cos?uri=deeplinkmin%3AViewSolution%3BsolutionNumber%3D11395772). It is a good script, but it puts all edges on top of each other. I want to distinguish the two edges. In the attached pic (a two-period eye diagram) you can see what I mean by the two edges (diamond shapes). I want to measure each of the two and take the maximum. Having all the edges on top of each other won't give me what I want; all edges together will also include DCD. I purely want to measure DDJ; DCD is measured separately. I have very little experience with writing scripts and could not modify Andrew's script.
Your help is much appreciated. Thank you.





Colpitts Oscillator output power simulation

Hello everybody,

As you can see in the attached image, I am trying to simulate a Colpitts oscillator. However, the pss analysis shows an unexpectedly high output power.

My question is: where is the problem in my structure or simulation setup?

Best,






Virtuosity: Reliability Analysis Report - Reliable Results Made Interactive

Read through this blog to know more about the new Reliability Report view in Virtuoso ADE Assembler and Virtuoso ADE Explorer. (read more)