Cadence Collaborates with Test & Verification Solutions on Portable Stimulus By feedproxy.google.com Published On :: Thu, 18 Jan 2018 15:01:00 GMT The Cadence® Connections® Verification Program brings together a worldwide network of services, training, and IP development experts that support Cadence verification solutions. The program members help customers accelerate the adoption of new...(read more) Full Article CDNLive Test DVcon pss verification
What’s Hot in Verification at this Year’s CDNLive? It’s Portable Stimulus Again! By feedproxy.google.com Published On :: Tue, 27 Mar 2018 21:23:00 GMT CDNLive is a user conference, and verification is one of the largest categories of content with multiple tracks covering multiple days. Portable stimulus is one of the hottest new areas in verification, and continues to be popular in all venues. At l...(read more) Full Article CDNLive Perspec pss portable stimulus
AMIQ and Cadence demonstrate Accellera PSS v1.0 interoperability By feedproxy.google.com Published On :: Thu, 12 Jul 2018 00:04:00 GMT There’s nothing like the heat of a DAC demo to stress new technology and the engineers behind it! Such was the case at DAC 2018 at the new locale of Moscone Center West, San Francisco. Cadence and AMIQ were two of several vendors who announced ...(read more) Full Article Perspec perspec system verifier AMIQ Accellera pss portable stimulus
Integration and Verification of PCIe Gen4 Root Complex IP into an Arm-Based Server SoC Application By feedproxy.google.com Published On :: Thu, 16 Aug 2018 22:17:00 GMT Learn about the challenges and solutions for integrating and verifying PCIe® Gen4 in an Arm-based server SoC. Listen to this relatively short webinar by Arm and Cadence, as they describe the collaboration and results, including methodology and...(read more) Full Article
Willamette HDL and Cadence Develop the Industry's First PSS Training Course for Perspec System Verifier By feedproxy.google.com Published On :: Sat, 01 Dec 2018 01:20:00 GMT Cadence continues to be a leader in SoC verification and has expanded its industry investment in Accellera portable stimulus language standardization. Some customers have expressed reservations that portable stimulus requires the effort of learn...(read more) Full Article whdl Perspec perspec system verifier willamette hdl Accellera pss portable stimulus Accellera PSS
Verification Reflections on 2018 By feedproxy.google.com Published On :: Thu, 20 Dec 2018 15:57:00 GMT In my predictions for 2018 I had identified five key trends driving verification in 2018 – Security, Safety, Application Specificity, Processor Ecosystems and System Design Enablement, all centered around ecosystems. Looking back now as the yea...(read more) Full Article security functional safety verification
DAC 2019 Preview – Multi-MHz Prototyping for Billion Gate Designs, AI, ML, 5G, Safety, Security and More By feedproxy.google.com Published On :: Wed, 29 May 2019 23:45:00 GMT Vegas, here we come. All of us fun EDA engineers at once. Be prepared, next week’s Design Automation Conference will be busy! The trends I had outlined after last DAC in 2018—system design, cloud, and machine learning—have...(read more) Full Article security 5G DAC DAC2019 prototyping palladium z1 Safety tortuga logic Protium Emulation ARM AI
Generating IBIS models in Cadence Virtuoso By feedproxy.google.com Published On :: Wed, 04 Sep 2019 20:25:36 GMT I'm trying to generate IBIS models for the parts that I'm designing. I'm designing using Cadence Virtuoso. I'm wondering if there is a tutorial for generating IBIS models in Cadence Virtuoso. Please pardon me if my question is broad. Full Article
Visibility to "component value" property in Edit/Properties dialog? By feedproxy.google.com Published On :: Thu, 12 Sep 2019 18:59:09 GMT Hi, I want to add values to components in my SiP design, such as 1nF or 15nH. There is already a COMP_VALUE property reserved for this, as shown during BOM generation. This property is not visible under the Edit/Properties dialog for component or symbol find filters. We have already created user properties called COMP_MFG and COMP_MFG_PN that are editable at the component level. When we try to add COMP_VALUE, it is reported as a reserved name in Cadence, but this name is not listed in the properties dialog. Is there a way to turn on the visibility and editability of this or other hidden reserved Cadence property names? How can I assign a string value to the COMP_VALUE property? Thanks Full Article
SiP to Allegro PCB Designer 17.2 By feedproxy.google.com Published On :: Tue, 28 Jan 2020 13:25:18 GMT I am new to the SiP package design tool. I have created a die package using SiP. Kindly give me directions on how to map the created die package into Allegro PCB Editor 17.2. We created the schematic file in the Allegro Design Capture CIS tool. The die we are using has 100 pins; we created the die in the SiP tool. Out of the 100 die pins, only 90 pins get connected; the others are NC pins. We mapped bond fingers only for the 90 connected die pins in the SiP package, but in the schematic we created the die logic symbol for all 100 pins. Please advise whether we can import the die package into the Allegro tool. In this scenario, while importing the 100-pin die package into Allegro PCB Editor, will the net connectivity be shown from the die pads to the bond fingers, and from the bond fingers to the respective components? Please suggest whether we are going down the right path, or advise how we should proceed. Thanks in Advance, Rajesh Full Article
How to check spacing in a cluster of same-net vias with no shape or cline coverage By feedproxy.google.com Published On :: Fri, 14 Feb 2020 04:12:15 GMT Hi all, I have a question regarding manufacturing: how can I check the spacing within a cluster of same-net vias that are not covered by a shape or cline? Full Article
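If the goal is just to sanity-check via-to-via spacing outside the tool, one generic approach is to export the via centers (for example, from a via or pin report) and compute the minimum pairwise distance in a small script. This is a hedged sketch, not an Allegro command; the coordinate export step is assumed:

from itertools import combinations
from math import hypot

# vias: list of (x, y) centers for the same-net via cluster, in design units,
# assumed exported from a via/pin report
def min_via_spacing(vias):
    # smallest center-to-center distance over all via pairs
    return min(hypot(x1 - x2, y1 - y2)
               for (x1, y1), (x2, y2) in combinations(vias, 2))

print(min_via_spacing([(0, 0), (0.5, 0.0), (0.2, 0.9)]))  # -> 0.5

Comparing the reported minimum against the required clearance (minus via pad size, if the rule is edge-to-edge) flags any pair that is too close.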
IC Packagers: Shape Connectivity in the Allegro Data Model By community.cadence.com Published On :: Tue, 28 Apr 2020 13:14:00 GMT Those who work in the IC Packaging design space have some unique challenges. We bridge between the IC design world (90/45-degree traces with rectangular and octagonal pins) and the PCB domain... [[ Click on the title to access the full blog on the Cadence Community site. ]] Full Article
RAMAC Park and the Origin of the Disk Drive By community.cadence.com Published On :: Wed, 29 Apr 2020 12:00:00 GMT Did you know that there is a park in San Jose named after a disk drive? Actually, technically it is named after the first computer that used disk drives. You couldn't just go and buy a disk drive... [[ Click on the title to access the full blog on the Cadence Community site. ]] Full Article
Whiteboard Wednesdays - Low Power SoC Design with High-Level Synthesis By community.cadence.com Published On :: Wed, 29 Apr 2020 15:00:00 GMT In this week’s Whiteboard Wednesdays video, Dave Apte discusses how to create the lowest power design possible by using architectural exploration and Cadence’s Stratus HLS solution.... [[ Click on the title to access the full blog on the Cadence Community site. ]] Full Article
Library Characterization Tidbits: Recharacterize What Matters - Save Time! By community.cadence.com Published On :: Thu, 30 Apr 2020 14:50:00 GMT Recently, I read an article about how failure is the stepping stone to success in life. It instantly struck a chord and a thought came zinging from nowhere about what happens to the failed arcs of a... [[ Click on the title to access the full blog on the Cadence Community site. ]] Full Article
2019 HF1 Release for Clarity, Celsius, and Sigrity Tools Now Available By community.cadence.com Published On :: Fri, 01 May 2020 21:20:00 GMT The 2019 HF1 production release for Clarity, Celsius, and Sigrity Tools is now available for download at Cadence Downloads . SIGRITY2019 HF1 For information about supported platforms, compatibility... [[ Click on the title to access the full blog on the Cadence Community site. ]] Full Article
Sunday Brunch Video for 3rd May 2020 By community.cadence.com Published On :: Sun, 03 May 2020 12:00:00 GMT www.youtube.com/watch Made on my balcony (camera Carey Guo) Monday: EDA101 Video Tuesday: Weekend Update Wednesday: RAMAC Park and the Origin of the Disk Drive Thursday: 1G Mobile: AMPS, TOPS, C-450,... [[ Click on the title to access the full blog on the Cadence Community site. ]] Full Article
IC Packagers: Advanced In-Design Symbol Editing By community.cadence.com Published On :: Wed, 06 May 2020 14:09:00 GMT We have talked about aspects of the in-design symbol edit application mode in the past. This is the environment specific to the Allegro® Package Designer Plus layout tools allowing you to work... [[ Click on the title to access the full blog on the Cadence Community site. ]] Full Article
Automotive Security in the World of Tomorrow - Part 1 of 2 By feedproxy.google.com Published On :: Wed, 21 Aug 2019 18:41:00 GMT

Autonomous vehicles are coming. In a statistic from the U.S. Department of Transportation, about 37,000 people died in car accidents in the United States in 2018. Having safe, fully autonomous vehicles could drastically reduce that number—but the trick is figuring out how to make an autonomous vehicle safe. Internet-enabled systems in cars are more common than ever, and it’s unlikely that their use will slow or stop—and while they provide many conveniences to a driver, they also represent another attack surface that a potential criminal could use to disable a vehicle while it is driving.

So—what’s being done to combat this? Green Hills Software is on the case, and they explained the landscape of security in automotive systems in a presentation given by Max Hinson in the Cadence Theater at DAC 2019. They have software embedded in most parts of a car, and all the major OEMs use their tech. The challenge they’ve taken on is far from a simple one—between the sheer complexity of modern automotive computer systems, safety requirements like the ISO 26262 standard, and the cost to develop and deploy software, they’ve got their work cut out for them.

It’s the complexity of the systems that represents the biggest challenge, though. The autonomous cars of the future have dynamic behaviors and cognitive networks, require safety certification to at least ASIL-D, and require cybersecurity like you’d have on an important conventional computer system to cover for the internet-enabled features—and all of this comes with a caveat: with current verification abilities, it is not possible to exercise every test case for the autonomous system. You’d be looking at trillions of test cases to reach full coverage—not even the strongest emulation units can cover that today.

With regular cars, you could do testing with crash-test dummies, ramming the car into walls at high speeds in a lab and studying the results. Today, though, that won’t cut it. Testing like that doesn’t show whether a car has side-channel vulnerabilities in its infotainment system, or whether it can tell the difference between a stop sign and a yield sign. While driving might seem simple enough to those of us who have been doing it for a long time, to a computer, the sheer number of variables is astounding. A regular person can easily filter what’s important and what’s not, but a machine learning system has to learn all of that from scratch. Green Hills Software posits that it would take nine billion miles of driving for a machine learning system of today’s caliber to reach an average driver’s level—and for an autonomous car, “average” isn’t good enough. It has to be perfect.

A certifier for autonomous vehicles has a herculean task, then. And if that doesn’t sound hard enough, consider this: in modern machine-vision systems, something called the “single-pixel hack” can be exploited to confuse them. Let’s say you have a stop sign, and a system designed to recognize that object as a stop sign. Randomly, you change one pixel of the image to a different color, and then check to see if the system still recognizes the stop sign.
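As a toy illustration of that probe—a hedged sketch, not Green Hills' methodology; classify() below is a hypothetical stand-in for a real vision model—the loop recolors one pixel at a time and records every location where the prediction flips:

import numpy as np

def classify(image):
    # hypothetical classifier stub -- stands in for a real vision model
    return "stop_sign"

def single_pixel_probe(image, label="stop_sign"):
    # flip one pixel at a time and record every location where the
    # predicted label no longer survives the perturbation
    failures = []
    h, w, _ = image.shape
    for y in range(h):
        for x in range(w):
            probe = image.copy()
            probe[y, x] = (0, 0, 255)  # recolor a single pixel
            if classify(probe) != label:
                failures.append((x, y))
    return failures

sign = np.zeros((32, 32, 3), dtype=np.uint8)  # placeholder "image"
print(len(single_pixel_probe(sign)))          # 0 for the stub classifier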
To a human, who knows that a stop sign is octagonal, red, and has “STOP” written in white block letters, a stop sign that’s half blue and maybe bent a bit out of shape is still, obviously, a stop sign—plus, we can use context clues to ascertain that a sign at an intersection, where there’s a white line on the pavement in front of our vehicle, probably means we should stop. We can do this because we process the factors that identify a stop sign “softly”—it’s okay if it’s not quite right; we know what it’s supposed to be.

Having a computer do the same is much more difficult. What if the stop sign has graffiti on it? Will the system still recognize it as a stop sign? How big an aberration needs to be present before the system no longer acknowledges the mostly-red, mostly-octagonal object that might at one point have had “stop” written on it as a stop sign? To us, a stop sign is a stop sign, even with one pixel changed—but change it in the right spot, and the computer might disagree.

The National Institute of Standards and Technology (NIST) tracks vulnerabilities along those lines in all sorts of systems; by their database, a major vulnerability is found in Linux every three days. And despite all our efforts to promote security, this isn’t a battle we’re winning right now—the number of vulnerabilities is increasing all the time.

Check back next time to see the other side: what does Green Hills Software propose we do about these problems? Read part 2 now. Full Article security automotive Functional Verification Green Hills Software
Automotive Security in the World of Tomorrow - Part 2 of 2 By feedproxy.google.com Published On :: Thu, 22 Aug 2019 21:37:00 GMT

If you missed the first part of this series, you can find it here. So: what does Green Hills Software propose we do? The issue of “solving security” is, at its core, impossible—security can never be 100% assured. What we can do is make it as difficult as possible for security holes to develop. This can be done in a couple of ways; one is to have small code in small packs executed by a “safing plan”—having each individual component be easier to verify goes a long way toward ensuring the security of the system. Don’t have sensors connect directly to objects—instead, have them output to the safing plan first, which can establish control and ensure that nothing can be used incorrectly or in unintended ways. Make sure individual software components are sufficiently isolated to minimize the chances of a side-channel attack being viable.

What all of these practices mean, however, is that a system needs to be architected with security in mind from the very beginning. Managers need to emphasize and reward secure development right from the planning stages, or the comprehensive approach required to ensure that a system is as secure as it can be won’t come together. When something in someone else’s software breaks, pay attention—mistakes are costly, but only one person has to make a mistake before others can learn from it and ensure it doesn’t happen again. Experts are experts for a reason—when an independent expert tells you something in your design is not secure, don’t brush them off because the fix is expensive. This is what Green Hills Software does, and it’s how they ensure that their software is secure.

Now, where does Cadence fit into all of this? Cadence has a number of certified secure offerings a user can take advantage of when planning their new designs. The Tensilica portfolio of IP is a great way to ensure basic components of your design are foolproof. As always, the Cadence Verification Suite is great for security verification in both simulation and emulation, and the JasperGold platform’s formal apps are a part of that suite as well.

We are entering a new age of autonomous technology, and with that new age we have to update our security measures to match. It’s not good enough to “patch up” security at the end—security needs to be at the forefront of a verification engineer’s or hardware designer’s mind at all stages of development. For a lot of applications, quite literally, lives are at stake. It’s uncharted territory out there, but with Green Hills Software and Cadence’s tools and secure IP, we can ensure the safety of tomorrow. Full Article security automotive Functional Verification Green Hills Software
Specman: Analyze Your Coverage with Python By feedproxy.google.com Published On :: Wed, 06 Nov 2019 13:31:00 GMT

In the previous blog about Python and Specman, Specman: Python Is here!, we described the technical information around the Specman-Python integration. Since Python provides so many easy-to-use libraries in various fields, it is very tempting to leverage these cool Python apps. Coverage has always been at the center of the verification methodology, and in the last few years it has received even more focus as people develop advanced utilities, usually with Machine Learning aids. In any case, any attempt to leverage your coverage usually starts with some analysis of the behavior and trends of some typical tests. Visualizing the data makes it easier to understand, analyze, and communicate. Fortunately, Python has many visualization libraries. In this blog, we show an example of how you can use the Python plotting library (matplotlib) to easily display coverage information during a run. We use the Specman Coverage API to extract coverage data and a Python module to display coverage grades interactively during a single run, and we show how to connect the two.

Before we get to the example: if you read the previous blog about Specman and Python and were concerned that Python 3 was not supported, we are glad to report that as of Specman 19.09, Python 3 is supported (in addition to Python 2).

The Testcase

Let’s say I have a stable verification environment and I want to make it more efficient. For example: I want to check whether I can make the tests shorter while hardly harming the coverage. I am not sure exactly how to attack this task, so a good place to start is to visually analyze the behavior of the coverage on some typical test I chose. The first thing we need to do is to extract the coverage information of the interesting entities. This can be done using the old Coverage API.

Coverage API (or “Scanning the Coverage Model”)

The Coverage API is a simple interface for extracting coverage information at a certain point. It is implemented through a predefined struct type named user_cover_struct. To use it, you need to do the following:
1. Define a child of user_cover_struct using like inheritance (my_cover_struct below).
2. Extend its relevant methods (in our example we extend only the end_group() method) and access the relevant members (you can read about the other available methods and members in cdnshelp).
3. Create an instance of the user_cover_struct child and call the predefined scan_cover() method whenever you want to query the data (even in every cycle). Calling this method results in calling the methods you extended in step 2.

The code example below demonstrates these three steps. We chose to extend the end_group() method, and we keep the group grade in a local variable. Note that we divide the grade by 100,000,000 to get a number between 0 and 1, since the grade in this API is an integer from 0 to 100,000,000.
struct my_cover_struct like user_cover_struct {
    !cur_group_grade : real;

    // Here we extend the user_cover_struct methods
    end_group() is also {
        cur_group_grade = group_grade / 100000000;
    };
};

extend sys {
    !cover_info : my_cover_struct;

    run() is also {
        start monitor_cover();
    };

    monitor_cover() @any is {
        cover_info = new;
        while (TRUE) {
            // wait some delay, for example --
            wait [10000] * cycles;
            // scan the packet.packet_cover cover group
            compute cover_info.scan_cover("packet.packet_cover");
        }; // while
    }; // monitor_cover
}; // sys

Pass the Data to a Python Module

After we have extracted the group grade, we need to pass the grade, along with the cycle and the coverage group name (assuming there are a few), to a Python module. We will take a look at the Python module itself later. For now, let's look at how to pass the information from the e code to Python. Note that in addition to passing the grade at certain points (the addVal method), we need an initialization method (init_plot) with the number of cycles, so that the X axis can be drawn at the beginning, and an end_plot() method to mark interesting points on the plot at the end. But to begin with, let’s have empty methods on the Python side and make sure we can just call them from the e code.

# plot_i.py
def init_plot(numCycles):
    print(numCycles)

def addVal(groupName, cycle, grade):
    print(groupName, cycle, grade)

def end_plot():
    print("end_plot")

And add the calls from e code:

struct my_cover_struct like user_cover_struct {
    @import_python(module_name="plot_i", python_name="addVal")
    addVal(groupName : string, cycle : int, grade : real) is imported;

    !cur_group_grade : real;

    // Here we extend the user_cover_struct methods
    end_group() is also {
        cur_group_grade = group_grade / 100000000;
        // Pass the values to the Python module
        addVal(group_name, sys.time, cur_group_grade);
    }; // end_group
}; // user_cover_struct

extend sys {
    @import_python(module_name="plot_i", python_name="init_plot")
    init_plot(numCycles : int) is imported;

    @import_python(module_name="plot_i", python_name="end_plot")
    end_plot() is imported;

    !cover_info : my_cover_struct;

    run() is also {
        start scenario();
    };

    scenario() @any is {
        // initialize the plot in Python
        init_plot(numCycles);
        while (sys.time < numCycles) {
            // Here you add your logic
            // get the current coverage information for packet
            cover_info = new;
            var num_items := cover_info.scan_cover("packet.packet_cover");
            // Here you add your logic
        }; // while
        // Finish the plot in Python
        end_plot();
    }; // scenario
}; // sys

The green lines define the methods as they are called from the e code. The blue lines are predefined annotations that state that the method in the following line is imported from Python, and they define the Python module and the name of the method in it. The red lines are the calls to the Python methods.

Before running this, note that you need to ensure that Specman finds the Python include and lib directories, and that Python finds our Python module. To do this, you need to define a few environment variables: SPECMAN_PYTHON_INCLUDE_DIR, SPECMAN_PYTHON_LIB_DIR, and PYTHONPATH.

The Python Module to Draw the Plot

After we extracted the coverage information and ensured that we can pass it to a Python module, we need to display this data in the Python module. There are many code examples out there for drawing a graph with Python, especially with matplotlib. You can either accumulate the data and draw a graph at the end of the run, or draw a graph interactively during the run itself—which is very useful, especially for long runs.
Below is code that draws the coverage grade of multiple groups interactively during the run; at the end of the run it prints circles around the maximum points and adds some text to them. I am new to Python, so there might be better or simpler ways to do this, but it does the work. The cool thing is that there are so many examples to rely on that you can produce this kind of code very fast.

# plot_i.py
import matplotlib
import matplotlib.pyplot as plt

plt.style.use('bmh')

# set interactive mode
plt.ion()

fig = plt.figure(1)
ax = fig.add_subplot(111)

# Holds a specific cover group
class CGroup:
    def __init__(self, name, cycle, grade):
        self.name = name
        self.XCycles = []
        self.XCycles.append(cycle)
        self.YGrades = []
        self.YGrades.append(grade)
        self.line_Object = ax.plot(self.XCycles, self.YGrades, label=name)[-1]
        self.firstMaxCycle = cycle
        self.firstMaxGrade = grade

    def add(self, cycle, grade):
        self.XCycles.append(cycle)
        self.YGrades.append(grade)
        if grade > self.firstMaxGrade:
            self.firstMaxGrade = grade
            self.firstMaxCycle = cycle
        self.line_Object.set_xdata(self.XCycles)
        self.line_Object.set_ydata(self.YGrades)
        plt.legend(shadow=True)
        fig.canvas.draw()

# Holds all the data of all cover groups
class CData:
    groupsList = []

    def add(self, groupName, cycle, grade):
        found = 0
        for group in self.groupsList:
            if groupName in group.name:
                group.add(cycle, grade)
                found = 1
                break
        if found == 0:
            obj = CGroup(groupName, cycle, grade)
            self.groupsList.append(obj)

    def drawFirstMaxGrade(self):
        for group in self.groupsList:
            left, right = plt.xlim()
            x = group.firstMaxCycle
            y = group.firstMaxGrade
            # draw arrow
            # ax.annotate("first maximum grade", xy=(x, y),
            #             xytext=(right - 50, 0.4),
            #             arrowprops=dict(facecolor='blue', shrink=0.05),)
            # mark the points on the plot
            plt.scatter(group.firstMaxCycle, group.firstMaxGrade,
                        color=group.line_Object.get_color())
            # Add text next to the point
            text = 'cycle:' + str(x) + ' grade:' + str(y)
            plt.text(x + 3, y - 0.1, text, fontsize=9,
                     bbox=dict(boxstyle='round4',
                               color=group.line_Object.get_color()))

# Global data
myData = CData()

# Initialize the plot, should be called once
def init_plot(numCycles):
    plt.xlabel('cycles')
    plt.ylabel('grade')
    plt.title('Grade over time')
    plt.ylim(0, 1)
    plt.xlim(0, numCycles)

# Add values to the plot
def addVal(groupName, cycle, grade):
    myData.add(groupName, cycle, grade)

# Mark interesting points on the plot and keep it shown
def end_plot():
    plt.ioff()
    myData.drawFirstMaxGrade()
    # Make sure the plot is being shown
    plt.show()

# uncomment the following lines to run this script with a simple example to make sure
# it runs properly regardless of the Specman interaction
# init_plot(300)
# addVal("xx", 1, 0)
# addVal("yy", 1, 0)
# addVal("xx", 50, 0.3)
# addVal("yy", 60, 0.4)
# addVal("xx", 100, 0.8)
# addVal("xx", 120, 0.8)
# addVal("xx", 180, 0.8)
# addVal("yy", 200, 0.9)
# addVal("yy", 210, 0.9)
# addVal("yy", 290, 0.9)
# end_plot()

In the example we used, we had two interesting entities, packet and state_machine, so we had two corresponding coverage groups. When running our example connected to the Python module, we get a graph that is displayed interactively during the run. When analyzing this specific example, we can see two things. First, packet reaches high coverage quite fast, and a significant part of the run does not contribute to its coverage. On the other hand, something interesting happens relating to state_machine around cycle 700 which suddenly boosts its coverage. The next step would be to try to dump graphic information relating to other entities and see if something noticeable happens around cycle 700.
To run a complete example, you can download the files from: https://github.com/okirsh/Specman-Python/

Do you feel like analyzing the coverage behavior in your environment? We will be happy to hear about your outcomes and other usages of the Python interface.

Orit Kirshenberg, Specman team Full Article Specman Specman coverage engine coverage Python Functional Verification Specman e e e language specman elite functional coverage
RAK Attack: Better Driver Tracing, Faster Palladium Build Time, UVM Register Map Automation By feedproxy.google.com Published On :: Sun, 15 Mar 2020 00:52:00 GMT

Looking to learn? There's a bunch of new RAKs (Rapid Adoption Kits) available online now!

1) Indago 19.09: Better Driver Tracing and More

Are you new to Indago and not sure where to start? Luckily, there’s a new Rapid Adoption Kit for you: the Indago 19.09 Overview RAK! This neat package contains everything you need to get your debugging started through Indago. In four short labs, plus a brief introductory lab, you’ll have all the basics of Indago 19.09 down—the Indago working environment, the SmartLog, how Indago interacts with the rest of the Cadence Verification Suite, and how Indago uses HDL driver tracing. Lab 1 discusses the various debugging tools included in Indago and teaches you how to customize your Indago windows and environment settings. Lab 2 covers the SmartLog feature and talks about analyzing and filtering its messages to suit your needs, as well as how to interact with the waveform marker. Lab 3 is an interactive Indago debugging experience—it’ll walk you through how to use Indago and its features in an actual working environment: setting breakpoints, using simulator commands in the Indago console, toolbars, switches, and more. Lab 4 is all things HDL tracing—recording debug data, an introduction to debug assertions, waveform visualizations, driving expression analysis, and single-step driver tracing, among other things. Interested? Check out the RAK here.

2) IXCOM MSIE: Faster Palladium Build Time

Got several testbenches you want to compile with the same DUT and tests, and you want to do it fast? With IXCOM, all you have to do to compile those different testbenches is use the xrun command for each after compiling your DUT. But what exactly is IXCOM, and how does one start using it? This quick RAK can help—here, you’ll learn the basics of using MSIE features with IXCOM, complete with an example to get you started. Using MSIE can vastly improve your build times with Palladium, and using IXCOM is the best way to shrink that tedious rebuild time as small as it can get. Check out this RAK here.

3) JasperGold Control and Status Register Verification App Automates UVM Register Map Verification

New to the JasperGold Control and Status Register (CSR) Verification App for your UVM testbenches? Don’t worry; there’s a RAK for that! This eponymous RAK can get you up and running with it in no time, helping you automate your checks from UVM register map specs. With this RAK, you’ll learn the basics of the JasperGold CSR app, how to use JasperGold CSR’s Proof Accelerator, and more. CSR features a model-based approach to predicting a register’s expected value, supports pipelined interfaces and all IP-XACT access policies, and it can fully model any expected register value. It also supports register aliases, read and write semantics, and separate read/write data latencies in any given field. If this functionality sounds up your alley, you can take a look at this RAK here. Full Article Rapid Adoption Kit IXCOM RAK Indago JasperGold
Metamorphic Testing: The Future of Verification? By feedproxy.google.com Published On :: Thu, 16 Apr 2020 22:00:00 GMT

Curious about what’s going on behind the scenes with verification? Bernard Murphy, Jim Hogan, and our own Paul Cunningham are on the case with the “Innovation in Verification” blog stream over at semiwiki.com. Every month, this trio reviews a newly published paper in academia that pertains to verification and discusses its implications. Be sure to stop by—it’s a great place to see what might be coming down the pipeline someday. This month, they discuss the implications of metamorphic testing.

The purpose of metamorphic testing is to define a verification approach for cases where there is no “golden reference.” This situation comes up a lot now as designs grow in complexity, and it begs the question: how does one know the design is verified if there is no standard to compare it to? Metamorphic testing addresses the problem of not having a “gold standard” by comparing the results of related tests instead. The paper reviewed by this team used metamorphic testing to study methods of managing JavaScript tags.

Paul saw this as a valuable new class of coverage. Metamorphic testing represents a way to create better distribution analyses through understanding the relationships among tests. This can reveal critical-but-complex issues that traditional verification methods may overlook. He saw this as an emerging class of coverage that new verification tools could be built around. Paul asserted that a future metamorphic-testing-based tool’s main contribution to the field of verification would be to better analyze noisy performance results where the noise is multi-modal. It could be useful in detecting race conditions and similar hard-to-debug anomalies. Paul also sees metamorphic testing as ripe for ML techniques. Overall, Paul sees a bright future for metamorphic testing in verification.

Jim is reminded of Solido and Spice—these metamorphic testing capabilities are “more than just a feature”—they might be a product. Maybe even a whole new class of verification tools, as Paul said. Bernard says that this topic is “too rich to address in one blog”, so be sure to head over to the post to see more of what the future has in store for verification. Full Article Functional Verification Semiwiki metamorphic testing
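To make the comparing-related-tests idea above concrete, here is a minimal sketch of a metamorphic relation (illustrative only, not from the reviewed paper): with no golden reference for f, we instead check a property that must hold between outputs of related inputs—a sort's output must be invariant under permutation of its input:

import random

def metamorphic_sort_check(f, xs, trials=100):
    baseline = f(xs)
    for _ in range(trials):
        follow_up = list(xs)
        random.shuffle(follow_up)        # metamorphic transformation of the input
        if f(follow_up) != baseline:     # relation that must hold between runs
            return False
    return True

print(metamorphic_sort_check(sorted, [3, 1, 2, 2, 9]))  # True

The same pattern scales to verification: pairs of stimuli related by a known transformation should produce results related in a known way, and any violation is a bug even though no single run has an expected "answer."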
Specman’s Callback Coverage API By feedproxy.google.com Published On :: Thu, 30 Apr 2020 14:30:00 GMT

Our customers’ tests have become more complex and longer, and they consume more resources than before. This increases the need to optimize the regression while not compromising on coverage. Some advanced customers of Specman use Machine Learning based solutions to optimize their regressions, while some use simpler solutions. Based on a request from an advanced customer, we added a new coverage API in Specman 19.09 called the Coverage Callback. In 20.03, we further enhanced this API by adding more options. Now there are two coverage APIs that provide coverage information during the run (the old scan_cover API and this new Callback API). This blog presents these two APIs and compares them, focusing on the newer one.

Before we get into the specifics of each API, let’s discuss what is common between them and why we need them. Typically, people observe the coverage model after the test ends, and get to know the overall contribution of the test to the coverage. With these two APIs, you can observe the coverage model during the test, and hence get more insight into the test's progress. Are you wondering what you can do with this information? Let’s look at some examples:
- Recognize cases when the test continues to run long after it has already reached its coverage goal.
- View the performance of the coverage curve. If a test is “stuck” at the same grade for a long time, this might indicate that the test is not very good and is just a waste of resources.
These analyses can be performed in the test itself, and then a test can decide to either stop the run or change something in its configuration; or they can be done post-run. You can also present the results visually for analysis, as shown in the blog: Analyze Your Coverage with Python.

scan_cover API (or “Scanning the Coverage Model”)

With this API you can get the current status of any cover group or item you are interested in at any point in time during the test (by calling scan_cover()). It is very simple to use; however, it has a performance penalty. To get the coverage grade of any cover group during the test, you should:
1. Trigger scan_cover at any time when you want the coverage model to be scanned.
2. Implement the scan_cover related methods, such as start_item() and end_bucket(). In these methods, you can query the current grade of the group/item/bucket.
The blog mentioned earlier, Analyze Your Coverage with Python, describes the details and provides an example.

Callback API

The Callback API enables you to get a callback for the desired cover group(s) whenever they are sampled. This API also provides various query methods for getting coverage related information, such as what the current sampled value is. So, in essence, it is similar to the scan_cover API, but as the phrase goes, “same same but different”:
- The Callback API has almost no performance penalty, while the scan_cover API does.
- The Callback API contains a richer set of query methods that provide a lot of information about the current sampled value (vs. just the grade with scan_cover).
- With the scan_cover API, you decide when you want to query the coverage information (you call scan_cover), while with the Callback API you query the coverage information when the coverage is sampled (from do_callback). So, scan_cover gives you more flexibility, but you do need to find the right timing for this call.
There is no absolute advantage to either of these APIs; it only depends on what you want to do.
Callback API details

The Callback API is based on a predefined struct called cover_sampling_callback. In order to use this API, you need to:
1. Define a struct inheriting cover_sampling_callback (cover_cb_save_data below).
2. Extend the predefined do_callback() method. This method is a hook that is called whenever any of the cover groups registered to the cover_sampling_callback instance is sampled. From do_callback() you can access coverage data by using queries such as is_currently_per_type(), get_current_group_grade() and get_current_cover_group() (as in the example below), and many more, such as get_relevant_group_layers() and get_simple_cross_sampled_bucket_name().
3. Register the desired cover group(s) to this struct instance using the register() method.

Take a look at the following code:

// Define a coverage callback.
// Its behavior -- print the current grade to the screen.
struct cover_cb_save_data like cover_sampling_callback {
    do_callback() is only {
        // In this example, we care only about the per_type grade, and not per_instance
        if is_currently_per_type() {
            var cur_grade : real = get_current_group_grade();
            sys.save_data(get_current_cover_group().get_name(), cur_grade);
        }; // if
    }; // do_callback()
}; // cover_cb_save_data

extend sys {
    !cb : cover_cb_save_data;

    // Instantiate the coverage callback, and register two of my coverage groups to it
    run() is also {
        cb = new with {
            var gr1 := rf_manager.get_struct_by_name("packet").get_cover_group("packet_cover");
            .register(gr1);
            var gr2 := rf_manager.get_struct_by_name("sys").get_cover_group("mem_cover");
            .register(gr2);
        }; // new
    }; // run()

    save_data(group_name : string, group_grade : real) is {
        // here you either print the values to the screen, update a graph you show, or save to a db
    }; // save_data
}; // sys

In the blog Analyze Your Coverage with Python mentioned above, we show an example of how you can use the scan_cover API to extract coverage information during the run, and then use the Specman-Python API to display the coverage interactively during the run (using the Python plotting library matplotlib). If you find this usage interesting and want to use the same example implemented with the Callback API instead of the scan_cover API, you can download the full example from GIT here: https://github.com/efratcdn/cover_callback.

Specman Team Full Article Specman/e Specman coverage engine coverage Specman e specman elite Coverage Driven Verification
BoardSurfers: Allegro In-Design IR Drop Analysis: Essential for Optimal Power Delivery Design By feedproxy.google.com Published On :: Wed, 01 Apr 2020 15:12:00 GMT All PCB designers know the importance of proper power delivery for successful board design. Integrated circuits need power to turn on, and ICs with marginal power delivery will not operate reliably. Since power planes can...(read more) Full Article PCB PI PCB design power
BoardSurfers: Five Easy Steps to Create Footprints Using Packages in Library Creator By feedproxy.google.com Published On :: Thu, 16 Apr 2020 14:19:00 GMT In my previous blog, I talked about creating a footprint using an existing template in Allegro ECAD-MCAD Library Creator and explained how easily you can access an existing template and create a package from it by just clicking a button. In this blog...(read more) Full Article Library Creator PCB Editor 17.4-2019 ECAD-MCAD Library Creator PCB design
New Rapid Adoption Kit (RAK) Enables Productive Mixed-Signal, Low Power Structural Verification By feedproxy.google.com Published On :: Mon, 10 Dec 2012 13:32:00 GMT

All engineers can enhance their mixed-signal low-power structural verification productivity by learning while doing with a PIEA RAK (Power Intent Export Assistant Rapid Adoption Kit). They can verify a mixed-signal chip by generating a macromodel for their analog block automatically and running it through Conformal Low Power (CLP) to perform a low power structural check. The power structure integrity of a mixed-signal, low-power block is verified via Conformal Low Power integrated into the Virtuoso Schematic Editor Power Intent Export Assistant (VSE-PIEA). Here is the flow. Applying the flow iteratively from lower to higher levels can verify the power structure.

Cadence customers can learn more in a Rapid Adoption Kit (RAK) titled IC 6.1.5 Virtuoso Schematic Editor XL PIEA, Conformal Low Power: Mixed-Signal Low Power Structural Verification.

To read the overview presentation, click on the following link: PIEA Overview
To download the PIEA RAK, click on the following link: PIEA RAK Download

The RAK includes a demo design (instructions are provided on how to set up the user environment). It introduces the Power Intent Export Assistant (PIEA) feature that was implemented in the Virtuoso IC615 release. The power intent extracted is then verified by calling Conformal Low Power (CLP) inside the Virtuoso environment. Last update: 11/15/2012. Validated with IC 6.1.5 and CLP 11.1.

The RAK uses a sample test case to go through the PIEA + CLP flow as follows:
- Set up PIEA
- Perform power intent extraction
- CPF import: it is recommended to import macro CPF, as opposed to designing CPF for sub-blocks. If you choose to import design CPF files, please make sure the design CPF file has power domain information for all the top-level boundary ports
- Generate macro CPF and design CPF
- Perform low power verification by running CLP

It is also recommended to go through older RAKs as prerequisites:
- Conformal Low Power, RTL Compiler and Incisive: Low Power Verification for Beginners
- Conformal Low Power: CPF Macro Models
- Conformal Low Power and RTL Compiler: Low Power Verification for Advanced Users

To access all these RAKs, visit our RAK Home Page for the Synthesis, Test and Verification flow.

Note: To access the above docs, use your Cadence credentials to log on to the Cadence Online Support (COS) web site. The Cadence Online Support website https://support.cadence.com/ is your 24/7 partner for getting help and resolving issues related to Cadence software. If you are signed up for e-mail notifications, you can receive new solutions, Application Notes (Technical Papers), Videos, Manuals, and more. You can send us your feedback by adding a comment below or using the feedback box on Cadence Online Support.

Sumeet Aggarwal Full Article COS conformal VSE Virtuoso Schematic Editor Low Power clp Conformal Low Power Cadence Online Support Mixed Signal Verification mixed-signal low-power Mixed-Signal Virtuoso Power Intent Export Assistant PIEA mixed signal design CPF CPF Macro Modelling Digital Front-End Design
New Incisive Low-Power Verification for CPF and IEEE 1801 / UPF By feedproxy.google.com Published On :: Tue, 07 May 2013 17:41:00 GMT

On May 7, 2013 Cadence announced a 30% productivity gain in the June 2013 Incisive Enterprise Simulator 13.1 release. Advanced debug visualization, faster turnaround time, and the extension of eight years of low-power verification innovation to IEEE 1801/UPF are the key capabilities in the release.

When we talk about low-power verification, it's easy to equate it with simulation. For certain, simulation is the heart of a low-power verification solution. Simulation enables engineers to run their design in the context of power intent. The challenge is that a simulation-only approach is inadequate. For example, if engineers could achieve SoC quality by verifying the individual function of each power control module (PCM), then simulation could be enough. For a single power domain, simulation can be enough. However, when the SoC has multiple power domains—and we have seen SoCs with hundreds of them—engineers have to check the PCMs and all of the arcs between the power modes. These SoCs often synchronize some of the domain switching to reduce overall complexity, creating the potential for signal skew errors on the control signals for the connected domains. Managing these complexities requires verification methodologies including advanced debug, verification planning, assertion-based verification, Universal Verification Methodology - Low Power (UVM-LP), and more (see Figure 1).

Figure 1: Comprehensive Low-Power Verification

But even advanced verification methodologies on top of simulation aren't enough. For example, the state machine that defines the legal and illegal power mode transitions is often written in software. The speed and capacity of the Palladium emulation platform are ideal for verifying in this context, and it is integrated with simulation, sharing debug, UVM acceleration, and static checks for low power. And it reports verification progress into a holistic plan for the SoC. Another example is the ability to compare the design in the implementation flow with the design running in simulation to make sure that what we verify is what we intend to build. Taken together, verification across multiple engines provides the comprehensive low-power verification needed for today's advanced-node SoCs. That's the heart of this low-power verification announcement.

Another point you may have noticed is the extension of the Common Power Format (CPF) based power-aware support in the Incisive Enterprise Simulator to IEEE 1801. We chose to bring IEEE 1801 to simulation first because users like you sometimes need to mix vendors for regression flows. Over time, Cadence will extend the low-power capabilities throughout its product suite to IEEE 1801.

If you are using CPF today, you already have the best low-power solution. The evidence is clear: the upcoming IEEE 1801-2013 update includes many of the CPF features contributed to 1801/UPF to enable methodology convergence. Since you already have those features in the CPF flow, any migration before you have a mature IEEE 1801-2013 tool flow would reduce the functionality you have today. If you are using Unified Power Format (UPF) 1.0 today, you want to start planning your move toward the IEEE 1801-2013 standard. A good first step would be to move to the IEEE 1801-2009 standard. It fills holes in the earlier UPF 1.0 definition.
While it does lack key features in -2013, it is an improvement that will make the migration to -2013 easier. The Incisive 13.1 release will run both UPF 1.0 and IEEE 1801-2009 power intent today. Over the next few weeks you'll see more technical blogs about the low-power capabilities coming in the Incisive 13.1 release. You can also join us on June 19 for a webinar that will introduce those capabilities using the reference design supplied with the Incisive Enterprise Simulator release. =Adam "The Jouler" Sherer (Yes, "Sherilog" is still here. :-) ) Full Article CPF 2.0 uvm Low Power IEEE 1801 PSO CDNLive CPF Incisive Enterprise Simulator IEEE 1801-2009 power shutoff Incisive Adam Sherer dpa low-power design UPF power IES verification
Low-Power IEEE 1801 / UPF Simulation Rapid Adoption Kit Now Available By feedproxy.google.com Published On :: Fri, 22 Nov 2013 03:59:00 GMT

There is no better way than a self-help training kit—a rapid adoption kit, or RAK—to demonstrate the Incisive Enterprise Simulator's IEEE 1801 / UPF low-power features and their usage. The features include:
- Unique SimVision debugging
- Patent-pending power supply network visualization and debugging
- Tcl extensions for LP debugging
- Support for Liberty file power description
- Standby mode support
- Support for Verilog, VHDL, and mixed language
- Automatic understanding of complex feedthroughs
- Replay of initial blocks
- ‘x' corruption for integers and enumerated types
- Automatic understanding of loop variables
- Automatic support for analog interconnections

Mickey Rodriguez, AVS Staff Solutions Engineer, has developed a low power UPF-based RAK, which is now available on Cadence Online Support for you to download. This rapid adoption kit illustrates Incisive Enterprise Simulator (IES) support for the IEEE 1801 power intent standard.

Patent-pending Power Supply Network Browser (only available with the LP option to IES)

In addition to an overview of IES, SimVision, and Tcl debug features, a lab is provided to give the user an opportunity to try these out.

The complete RAK and associated overview presentation can be downloaded from our SoC and Functional Verification RAK page:
Rapid Adoption Kit: Introduction to IEEE-1801 Low Power Simulation — Overview, RAK Database, View, Download (2.3 MB)

We are covering the following technologies through our RAKs at this moment:
- Synthesis, Test and Verification flow
- Encounter Digital Implementation (EDI) System and Sign-off Flow
- Virtuoso Custom IC and Sign-off Flow
- Silicon-Package-Board Design
- Verification IP
- SoC and IP level Functional Verification
- System level verification and validation with Palladium XP

Please visit https://support.cadence.com/raks to download your copy of the RAK.

We will continue to provide self-help content on Cadence Online Support, your 24/7 partner for learning more about Cadence tools, technologies, and methodologies, as well as getting help in resolving issues related to Cadence software. If you are signed up for e-mail notifications, you're likely to notice new solutions, application notes (technical papers), videos, manuals, etc.

Note: To access the above documents, click a link and use your Cadence credentials to log on to the Cadence Online Support https://support.cadence.com/ website.

Happy Learning!

Sumeet Aggarwal and Adam Sherer Full Article Low Power IEEE 1801 Functional Verification Incisive Enterprise Simulator IEEE 1801-2013 IEEE 1801-2009 RAK Incisive 1801 UPF 2.1 UPF RAKs simulation IES
ST Microelectronics Success with IEEE 1801 / UPF Incisive Simulation - Video By feedproxy.google.com Published On :: Thu, 16 Jan 2014 06:45:00 GMT ST Microelectronics reported their success with IEEE 1801 / UPF low-power simulation using Incisive Enterprise Simulator at CDNLive India in November 2013. We were able to meet with Mohit Jain just after his presentation and recorded this video that explains the key points in his paper. With eight years of experience and pioneering technology in native low-power simulation, Mohit was able to apply Incisive Enterprise Simulator to a low-power demonstrator in preparation for use with a production set-top box chip. Mohit was impressed with the ease in which he was able to reuse his existing IEEE 1801 / UPF code successfully, including the power format files and the macro models coded in his Liberty files. Mohit also discusses how he used the power-aware Cadence SimVision debugger. The Cadence low-power verification solution for IEEE 1801 / UPF also incorporates the patent-pending Power Supply Network visualization in the SimVision debugger. You can learn more about that in the Incisive low-power verification Rapid Adoption Kit for IEEE 1801 / UPF here in Cadence Online Support. Just another happy Cadence low-power verification user! Regards, Adam "The Jouler" Sherer Full Article IEEE 1801 simvision Incisive Enterprise Simulator UPF simulation verification
Freescale Success Stepping Up to Low-Power Verification - Video By feedproxy.google.com Published On :: Fri, 17 Jan 2014 12:18:00 GMT Freescale was a successful Incisive® simulation CPF low-power user when they decided to step up their game. In November 2013, at CDNLive India, they presented a paper explaining how they improved their ability to find power-related bugs using a more sophisticated verification flow. We were able to catch up with Abhinav Nawal just after his presentation to capture this video explaining the key points in his paper. Abhinav had already established a low-power simulation process using directed tests for a design with power intent captured in CPF. While that is a sound approach, it tends to focus on the states associated with each power control module and at least some of the critical power mode changes. Since the full system can potentially exercise unforeseen combinations of power states, the directed test approach may be insufficient. Abhinav built a more complete low-power verification approach rooted in a low-power verification plan captured in Cadence® Incisive Enterprise Manager. He still used Incisive Enterprise Simulator and the SimVision debugger to execute and debug his design, but he also added Incisive Metric Center to analyze coverage from his low-power tests and connect that data back to the low-power verification plan. As a result, he was able to find many critical system-level corner-case issues which, left undetected, would have been catastrophic for his SoC. In the paper, Abhinav presents some of the key problems this approach was able to find. You can achieve results similar to Abhinav's. Incisive Enterprise Simulator can generate a low-power verification plan from the power format along with power-aware assertions, and it can collect power-aware coverage. To get started, you can use the Incisive Low-Power Simulation Rapid Adoption Kit (RAK) for CPF available on Cadence Online Support. Just another happy Cadence low-power verification user! Regards, Adam "The Jouler" Sherer Full Article simvision CPF Incisive Enterprise Simulator Incisive Enterprise Manager MDV simulation verification
Cadence ADE Explorer vs Maestro By feedproxy.google.com Published On :: Fri, 21 Feb 2020 13:58:41 GMT Hello, I saw that Maestro is a plotting add-on. Is it part of ADE Explorer? I can't see the relation between the two. I started to read the manual, and regarding Maestro I only see code. Are there some simple examples? Thanks. Full Article
Copying read-only problem in Cadence Virtuoso By feedproxy.google.com Published On :: Sun, 23 Feb 2020 15:45:24 GMT Hello, I have a really mysterious thing going on with copying libraries in Cadence Virtuoso. When I copy the whole library straightforwardly, it gives me a warning that access was denied, but when I go into the library and copy it file by file, it goes fine. Another problem is that the message console doesn't show ALL the files which could not be copied (which is the much bigger problem, because I would have to go through all the subdirectories to verify that all the files are there). Is there a way to see which files could not be copied? Thanks. Full Article
Netlist extraction from Assembler in Cadence Virtuoso By feedproxy.google.com Published On :: Thu, 27 Feb 2020 10:23:03 GMT Hello, I am trying to extract a netlist from a circuit in Assembler. I have found the manual shown below; however, there is no such option under Tools in Assembler. How do I view the netlist of this circuit? Thanks. (Screenshot: Assembler View menu) Full Article
zpm can't be evaluated By feedproxy.google.com Published On :: Fri, 28 Feb 2020 10:12:24 GMT Virtuoso Version -- IC6.1.7-64b.500.23. Cadence Spectre Version -- 17.10.515. I have a very simple circuit (please find it attached): it is basically a resistor across a port. I run an S-parameter simulation and can plot the S-parameters, but unfortunately not the Z-parameters or Y-parameters. (Attachments: Capture_Sch.JPG and Capture_Error.JPG — schematic and error screenshots.) Can anyone point me in the correct direction to sort out this problem? The zpm function does work in another design environment, but not in the new design environment (a new project). The Virtuoso and Cadence Spectre versions match in both project environments. I am at a loss as to what to look for. Full Article
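As a workaround while the environment issue is being sorted out, the Z-parameters of a one-port can always be computed from the S-parameter data directly, since Z = Z0(1 + S11)/(1 - S11). A minimal sketch (the example S11 value is illustrative, not from the post):

def z_from_s11(s11, z0=50.0):
    # one-port S-to-Z conversion: Z = Z0 * (1 + S11) / (1 - S11)
    return z0 * (1 + s11) / (1 - s11)

# e.g. a 100-ohm resistor across a 50-ohm port gives S11 = (100-50)/(100+50) = 1/3
print(z_from_s11(1.0 / 3.0))  # -> 100.0

The same expression can be entered in the results calculator on the swept S11 data, which is a quick way to cross-check whatever zpm eventually reports.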
QPSS with non-50% duty-cycle square wave clocks (for sample and hold) By feedproxy.google.com Published On :: Sat, 29 Feb 2020 11:07:00 GMT Hello, would anyone know how to set up a PSS or QPSS simulation with 25% duty-cycle clock sources, or if such a thing is even possible with QPSS? Fig. 1 (below) is a snapshot of the circuit I am trying to characterize. It has 4 clock ports, each with a 25% duty cycle in the ON state. Fig. 2 shows two of these clocks. Each path in the circuit consists of two switches with a low-pass RC sandwiched in between. The input is a 50-ohm port sine wave and the output is a 1k resistor. The output nets of all paths are connected together. I am trying to determine the swept frequency response from input to output (voltage) when the input is from 500 MHz to 510 MHz. The period (T = 1/Fp) of each of the pulses is such that Fp = 500 MHz. The first pulse source has delay = 0, the second has delay = T/4, the third delay = 2T/4, etc. I currently have it working and see the correct result (bandpass response) with transient analysis, but the problem is that doing a DFT at 500 MHz with 10 kHz spacing needs at least 100 us and takes up a lot of time and disk space. Many thanks, Chris. (Fig. 1 and Fig. 2: circuit snapshot and clock waveforms — images not reproduced.) Full Article
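For what it's worth, the 100 us figure follows directly from the DFT's frequency resolution rather than from any simulator setting; a quick check:

# The DFT bin spacing is fixed by the record length: delta_f = 1 / T.
# A 10 kHz spacing therefore forces a 100 us transient record no matter
# which simulator options are used -- it is a property of the DFT itself.
delta_f = 10e3        # desired bin spacing, Hz
T = 1.0 / delta_f     # required record length, seconds
print(T * 1e6, "us")  # -> 100.0 us

So any time-domain approach needs at least that span; only a periodic/quasi-periodic analysis that solves for the steady state directly can avoid it.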
Searching for a transistor inside the hierarchy in Cadence Virtuoso By feedproxy.google.com Published On :: Sat, 29 Feb 2020 14:00:41 GMT Hello, I have a problem with a certain type of transistor. My hierarchy has a lot of components and sub-components, and visually inspecting them is very hard. Is there a way, as in other Cadence layout viewer tools, to enter the name of the component or a net somewhere and have it focused visually, or get the hierarchy path to it? Thanks. Full Article
v gm of an active mixer By feedproxy.google.com Published On :: Wed, 18 Mar 2020 11:36:34 GMT Hi all, What is the most accurate way to simulate the gm of the RF transistors (RF stage) of an active mixer (single-balanced or Gilbert cell)? I have tried to simulate it in several ways, such as: 1. DC annotation (but of course that is incorrect, due to the switching operation of the mixer). 2. d(i_ds)/d(v_gs) using HB analysis and then taking the value at zero (since it is a DC characteristic); for this I chose, in the HB simulator results: Voltage, spectrum, rms, magnitude. 3. Using the OP and OPT buttons in the calculator and then extracting the gm of the transistor. The problem is that each way gives a different value, which makes the procedure of designing an active mixer very difficult. In addition, when I simulate the voltage conversion gain of the active mixer and try to compare it to the formula (2/pi)*gm*RL (either in linear terms or in dB), I get numbers that are way off from the simulation. I understand that I would not get exactly the same result, but it should not differ by hundreds of percent. I see in many publications that people plot graphs of a mixer's gm vs. various parameters, and I am starting to doubt whether those results are correct. I would appreciate any help, Thanks in advance
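For concreteness, a quick worked number for that formula (a textbook idealization assuming hard switching, with illustrative values not taken from the post): with gm = 10 mS and RL = 500 Ohm, Av = (2/pi)*gm*RL = (2/pi)*0.01*500 ≈ 3.18, i.e. about 10 dB of voltage conversion gain. Deviations of a few dB from this are normal; a discrepancy of hundreds of percent usually points at the gm value being plugged in, since the effective transconductance under real LO drive differs from the quiescent small-signal gm.

Full Article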
v Producing gain circles in Cadence Virtuoso By feedproxy.google.com Published On :: Fri, 27 Mar 2020 20:20:32 GMT Hello, I am trying to produce gain circles for a simple transistor, as shown below. I have defined the range from 1 to 30 dB, but I don't get any circles, just dots at infinity. Where did I go wrong? Thanks. Full Article
v Matching network problem in Cadence Virtuoso By feedproxy.google.com Published On :: Sat, 28 Mar 2020 14:24:42 GMT Hello, I have built a matching network for 13 dB gain and a target NF, step by step as shown below (including all the plots and MATLAB). It is just not working at all. I am following the theory exactly: take a point inside the circle -> convert its gamma to Z_source -> convert gamma_s into gamma_L with the formula below, as shown in the MATLAB -> convert gamma_L into Z_L -> build the matching network for the conjugates of Z_L and Z_s. It is just not working. Where did I go wrong? Thanks.

gamma_s = 75.8966*exp(deg2rad(280.88)*i);
z_s = gamma2z(gamma_s,50);
s11 = 0.99875 - 0.03202*i;
s12 = 721.33*10^(-6) + 8.622*10^(-3)*i;
s21 = -188.37*10^(-3) + 30.611*10^(-3)*i;
s22 = 875.51*10^(-3) - 100.72*10^(-3)*i;
gamma_L = conj((s22 + (s12*s21*gamma_s)/(1 - s11*gamma_s)))
z_L = gamma2z(gamma_L,50)
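One observation on the numbers as posted, offered tentatively since the referenced plots are not reproduced here: a reflection coefficient looking into a passive termination must satisfy |gamma_s| <= 1, so a magnitude of 75.8966 in the first MATLAB line (as opposed to, say, 0.758966) would by itself make every subsequent conversion meaningless.

Full Article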
v LNA output noise floor at the receiver front end By feedproxy.google.com Published On :: Thu, 02 Apr 2020 07:30:37 GMT Hi, I am designing a broadband (100 MHz - 6 GHz) receiver chain for a radar/RCS measurement tester. I will put a low-noise amplifier after the antenna input, followed by a mixer (10 MHz IF bandwidth) and a digitizer. I am facing a problem regarding the LNA. The bandwidth of the LNA is approximately 6 GHz (100 MHz - 6 GHz), the gain is 25-35 dB, and the NF is less than 2. I am uncertain about the noise floor at the output of the LNA. I don't know the exact SNR at the input of the LNA, but it should be good. The system will operate on a stepped-CW waveform, so the receiver input signal will sweep over the bandwidth with some step size. So what will the LNA output noise floor be? I assume we can neglect the role of the input noise, because it will be less than the internal noise of the LNA. Will it be the LNA internal noise (thermal noise over the bandwidth) only? Or will it be the LNA internal noise (thermal noise over the bandwidth) plus the LNA gain, i.e. -78 + 25 = -53 dBm? The internal noise should be lower than that, because the NF is less than 3. I have practically observed that the output noise floor is much lower than even the thermal noise (over the LNA bandwidth). I have gone through some tutorials where the formula says (internal noise + input noise) + gain; in my case the input noise should be much less than the theoretical internal noise. Thanks
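For reference, the standard bookkeeping for this question (generic cascade math, with the post's numbers plugged in only as an illustration): the output noise floor of an amplifier driven by a matched room-temperature source is

N_out = -174 dBm/Hz + 10*log10(B) + NF + G

With B = 6 GHz, NF = 2 dB, and G = 25 dB this gives -174 + 97.8 + 2 + 25 ≈ -49 dBm; with the 10 MHz IF bandwidth that actually limits the measurement after the mixer, it gives -174 + 70 + 2 + 25 ≈ -77 dBm. The bandwidth that matters is the narrowest one ahead of the detector, which is one reason a measured floor can sit far below kTB computed over the full LNA bandwidth.

Full Article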
v Input/output circle equivalent in Cadence Virtuoso By feedproxy.google.com Published On :: Thu, 23 Apr 2020 11:07:36 GMT Hello, There is a MATLAB tutorial on matching an LNA, in the link below. In it, as shown in its plots, they mention input and output circle plots. Is there such an option for input and output circles in Cadence Virtuoso? https://www.mathworks.com/help/rf/examples/designing-matching-networks-part-1-networks-with-an-lna-and-lumped-elements.html Full Article
v Equivalent SKILL for Create Detail By feedproxy.google.com Published On :: Tue, 11 Feb 2020 01:54:04 GMT Hi Guys, Does anyone know the equivalent SKILL call for Create Detail? Eugene Full Article
v axlShapeAutoVoid not voiding Backdrill shapes By feedproxy.google.com Published On :: Fri, 13 Mar 2020 22:49:44 GMT Hi all, I am creating shapes on plane layers for a coupon and want to void them using axlShapeAutoVoid(). The shapes are attached to a symbol. I've tried using axlShapeAutoVoid, but it only voids the pins, not the route keepouts created by nc_backdrill. I also tried selecting the shape individually and then running axlShapeAutoVoid; that was unsuccessful as well. planeShapes is a list of shapes I created. The code for voiding:

; run backdrill to get route keepouts
axlShell("setwindow pcb;backdrill setup ;setwindow form.nc_backdrill;FORM nc_backdrill apply ;FORM nc_backdrill close")
; auto-void each plane shape in the list (each element carries the shape as its first item)
foreach( sHape planeShapes
  axlShapeAutoVoid(car(sHape))
)

Full Article
v Looking for ADVFC32 SPICE Model By feedproxy.google.com Published On :: Mon, 16 Mar 2020 13:56:51 GMT I'm working on a circuit that requires the input voltage to be converted to a frequency, transmitted over an optical cable, and then converted back to a voltage. I am attempting to simulate this circuit using Eagle ngspice simulations. The voltage-to-frequency converters I am using are ADVFC32 parts made by Analog Devices. However, I can't seem to find a SPICE model for this component; Analog Devices does not provide one on their website. Can anyone find a SPICE model for this part? I'm new to working with electronics, so any help/advice you can provide would be appreciated. Full Article
v Inconsistent behaviour of warn() between Virtuoso and Allegro By feedproxy.google.com Published On :: Thu, 23 Apr 2020 09:27:22 GMT For a project, we depend on capturing warnings. This works fine in Virtuoso but behaves differently in Allegro. Here is what we observe.

Virtuoso:
>>> warn("Hello")
*WARNING* Hello

Allegro:
>>> warn("Hello")
*WARNING* Hello

But when we capture the warning:

Virtuoso:
>>> warn("Hello")
getWarn()
"Hello"

Allegro:
>>> warn("Hello")
getWarn()
"*WARNING* Hello"

This is a problem for us, because we pass an empty string to warn() and depend on the fact that no warning results in an empty string, but on Allegro the captured output always begins with *WARNING*. Is there a way to make the behaviour consistent in both versions?
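A minimal SKILL sketch of one way to paper over the difference, assuming the Allegro prefix is always the literal "*WARNING* " string shown above; myGetWarn is a hypothetical wrapper name:

;; Return the last captured warning with any "*WARNING* " prefix removed,
;; so Virtuoso and Allegro callers see the same bare message text.
procedure( myGetWarn()
  let( (w pfx n)
    w = getWarn()
    pfx = "*WARNING* "
    n = strlen(pfx)
    if( w && strlen(w) >= n && strcmp(substring(w 1 n) pfx) == 0
      substring(w n+1 strlen(w)-n)   ;; strip the Allegro-added prefix
      w                              ;; Virtuoso (or empty-string) case: pass through
    )
  )
)
;; Usage: warn("Hello") then myGetWarn() -> "Hello" on both tools

Full Article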
v Here Is Why the Indian Voter Is Saddled With Bad Economics By feedproxy.google.com Published On :: 2019-02-03T03:54:17+00:00 This is the 15th installment of The Rationalist, my column for the Times of India. It’s election season, and promises are raining down on voters like rose petals on naïve newlyweds. Earlier this week, the Congress party announced a minimum income guarantee for the poor. This Friday, the Modi government released a budget full of sops. As the days go by, the promises will get bolder, and you might feel important that so much attention is being given to you. Well, the joke is on you. Every election, HL Mencken once said, is “an advance auction sale of stolen goods.” A bunch of competing mafias fight to rule over you for the next five years. You decide who wins, on the basis of who can bribe you better with your own money. This is an absurd situation, which I tried to express in a limerick I wrote for this page a couple of years ago: POLITICS: A neta who loves currency notes/ Told me what his line of work denotes./ ‘It is kind of funny./ We steal people’s money/And use some of it to buy their votes.’ We’re the dupes here, and we pay far more to keep this circus going than this circus costs. It would be okay if the parties, once they came to power, provided good governance. But voters have given up on that, and now only want patronage and handouts. That leads to one of the biggest problems in Indian politics: We are stuck in an equilibrium where all good politics is bad economics, and vice versa. For example, the minimum guarantee for the poor is good politics, because the optics are great. It’s basically Garibi Hatao: that slogan made Indira Gandhi a political juggernaut in the 1970s, at the same time that she unleashed a series of economic policies that kept millions of people in garibi for decades longer than they should have been. This time, the Congress has released no details, and keeping it vague makes sense because I find it hard to see how it can make economic sense. Depending on how they define ‘poor’, how much income they offer and what the cost is, the plan will either be ineffective or unworkable. The Modi government’s interim budget announced a handout for poor farmers that seemed rather pointless. Given our agricultural distress, offering a poor farmer 500 bucks a month seems almost like mockery. Such condescending handouts solve nothing. The poor want jobs and opportunities. Those come with growth, which requires structural reforms. Structural reforms don’t sound sexy as election promises. Handouts do. A classic example is farm loan waivers. We have reached a stage in our politics where every party has to promise them to assuage farmers, who are a strong vote bank everywhere. You can’t blame farmers for wanting them – they are a necessary anaesthetic. But no government has yet made a serious attempt at tackling the root causes of our agricultural crisis. Why is it that Good Politics in India is always Bad Economics? Let me put forth some possible reasons. One, voters tend to think in zero-sum ways, as if the pie is fixed, and the only way to bring people out of poverty is to redistribute. The truth is that trade is a positive-sum game, and nations can only be lifted out of poverty when the whole pie grows. But this is unintuitive. Two, Indian politics revolves around identity and patronage. 
The spoils of power are limited – that is indeed a zero-sum game – so you’re likely to vote for whoever can look after the interests of your in-group rather than care about the economy as a whole. Three, voters tend to stay uninformed for good reasons, because of what Public Choice economists call Rational Ignorance. A single vote is unlikely to make a difference in an election, so why put in the effort to understand the nuances of economics and governance? Just ask, what is in it for me, and go with whatever seems to be the best answer. Four, politicians have a short-term horizon, geared towards winning the next election. A good policy that may take years to play out is unattractive. A policy that will win them votes in the short term is preferable. Sadly, no Indian party has shown a willingness to aim for the long term. The Congress has produced new Gandhis, but not new ideas. And while the BJP did make some solid promises in 2014, they did not walk that talk, and have proved to be, as Arun Shourie once called them, UPA + Cow. Even the Congress is adopting the cow, in fact, so maybe the BJP will add Temple to that mix? Benjamin Franklin once said, “Democracy is two wolves and a lamb voting on what to have for lunch.” This election season, my friends, the people of India are on the menu. You have been deveined and deboned, marinated with rhetoric, seasoned with narrative – now enter the oven and vote. The India Uncut Blog © 2010 Amit Varma. All rights reserved. Follow me on Twitter. Full Article
v India’s Problem is Poverty, Not Inequality By feedproxy.google.com Published On :: 2019-02-17T04:23:30+00:00 This is the 16th installment of The Rationalist, my column for the Times of India. Steven Pinker, in his book Enlightenment Now, relates an old Russian joke about two peasants named Boris and Igor. They are both poor. Boris has a goat. Igor does not. One day, Igor is granted a wish by a visiting fairy. What will he wish for? “I wish,” he says, “that Boris’s goat should die.” The joke ends there, revealing as much about human nature as about economics. Consider the three things that happen if the fairy grants the wish. One, Boris becomes poorer. Two, Igor stays poor. Three, inequality reduces. Is any of them a good outcome? I feel exasperated when I hear intellectuals and columnists talking about economic inequality. It is my contention that India’s problem is poverty – and that poverty and inequality are two very different things that often do not coincide. To illustrate this, I sometimes ask this question: In which of the following countries would you rather be poor: USA or Bangladesh? The obvious answer is USA, where the poor are much better off than the poor of Bangladesh. And yet, while Bangladesh has greater poverty, the USA has higher inequality. Indeed, take a look at the countries of the world measured by the Gini Index, which is the standard metric used to measure inequality, and you will find that USA, Hong Kong, Singapore and the United Kingdom all have greater inequality than Bangladesh, Liberia, Pakistan and Sierra Leone, which are much poorer. And yet, while the poor of Bangladesh would love to migrate to unequal USA, I don’t hear of too many people wishing to go in the opposite direction. Indeed, people vote with their feet when it comes to choosing between poverty and inequality. All of human history is a story of migration from rural areas to cities – which have greater inequality. If poverty and inequality are so different, why do people conflate the two? A key reason is that we tend to think of the world in zero-sum ways. For someone to win, someone else must lose. If the rich get richer, the poor must be getting poorer, and the presence of poverty must be proof of inequality. But that’s not how the world works. The pie is not fixed. Economic growth is a positive-sum game and leads to an expansion of the pie, and everybody benefits. In absolute terms, the rich get richer, and so do the poor, often enough to come out of poverty. And so, in any growing economy, as poverty reduces, inequality tends to increase. (This is counter-intuitive, I know, so used are we to zero-sum thinking.) This is exactly what has happened in India since we liberalised parts of our economy in 1991. Most people who complain about inequality in India are using the wrong word, and are really worried about poverty. Put a millionaire in a room with a billionaire, and no one will complain about the inequality in that room. But put a starving beggar in there, and the situation is morally objectionable. It is the poverty that makes it a problem, not the inequality. You might think that this is just semantics, but words matter. Poverty and inequality are different phenomena with opposite solutions. You can solve for inequality by making everyone equally poor. Or you could solve for it by redistributing from the rich to the poor, as if the pie was fixed. The problem with this, as any economist will tell you, is that there is a trade-off between redistribution and growth. 
All redistribution comes at the cost of growing the pie – and only growth can solve the problem of poverty in a country like ours. It has been estimated that in India, for every one percent rise in GDP, two million people come out of poverty. That is a stunning statistic. When millions of Indians don’t have enough money to eat properly or sleep with a roof over their heads, it is our moral imperative to help them rise out of poverty. The policies that will make this possible – allowing free markets, incentivising investment and job creation, removing state oppression – are likely to lead to greater inequality. So what? It is more urgent to make sure that every Indian has enough to fulfil his basic needs – what the philosopher Harry Frankfurt, in his fine book On Inequality, called the Doctrine of Sufficiency. The elite in their airconditioned drawing rooms, and those who live in rich countries, can follow the fashions of the West and talk compassionately about inequality. India does not have that luxury. The India Uncut Blog © 2010 Amit Varma. All rights reserved. Follow me on Twitter. Full Article
v For this Brave New World of cricket, we have IPL and England to thank By feedproxy.google.com Published On :: 2019-07-13T23:50:53+00:00 This is the 24th installment of The Rationalist, my column for the Times of India. Back in the last decade, I was a cricket journalist for a few years. Then, around 12 years ago, I quit. I was jaded as hell. Every game seemed like déjà vu, nothing new, just another round on the treadmill. Although I would remember her fondly, I thought me and cricket were done. And then I fell in love again. Cricket has changed in the last few years in glorious ways. There have been new ways of thinking about the game. There have been new ways of playing the game. Every season, new kinds of drama form, new nuances spring up into sight. This is true even of what had once seemed the dullest form of the game, one-day cricket. We are entering into a brave new world, and the team leading us there is England. No matter what happens in the World Cup final today – a single game involves a huge amount of luck – this England side are extraordinary. They are the bridge between eras, leading us into a Golden Age of Cricket. I know that sounds hyperbolic, so let me stun you further by saying that I give the IPL credit for this. And now, having woken you up with such a jolt on this lovely Sunday morning, let me explain. Twenty20 cricket changed the game in two fundamental ways. Both ended up changing one-day cricket. The first was strategy. When the first T20 games took place, teams applied an ODI template to innings-building: pinch-hit, build, slog. But this was not an optimal approach. In ODIs, teams have 11 players over 50 overs. In T20s, they have 11 players over 20 overs. The equation between resources and constraints is different. This means that the cost of a wicket goes down, and the cost of a dot ball goes up. Critically, it means that the value of aggression rises. A team need not follow the ODI template. In some instances, attacking for all 20 overs – or as I call it, ‘frontloading’ – may be optimal. West Indies won the T20 World Cup in 2016 by doing just this, and England played similarly. And some sides began to realise that they had been underestimating the value of aggression in one-day cricket as well. The second fundamental way in which T20 cricket changed cricket was in terms of skills. The IPL and other leagues brought big money into the game. This changed incentives for budding cricketers. Relatively few people break into Test or ODI cricket, and play for their countries. A much wider pool can aspire to play T20 cricket – which also provides much more money. So it makes sense to spend the hundreds of hours you are in the nets honing T20 skills rather than Test match skills. Go to any nets practice, and you will find many more kids practising innovative aggressive strokes than playing the forward defensive. As a result, batsmen today have a wider array of attacking strokes than earlier generations. Because every run counts more in T20 cricket, the standard of fielding has also shot up. And bowlers have also reacted to this by expanding their arsenal of tricks. Everyone has had to lift their game. In one-day cricket, thus, two things have happened. One, there is better strategic understanding about the value of aggression. Two, batsmen are better equipped to act on the aggressive imperative. The game has continued to evolve. Bowlers have reacted to this with greater aggression on their part, and this ongoing dialogue has been fascinating. 
The cricket writer Gideon Haigh once told me on my podcast that the 2015 World Cup featured a battle between T20 batting and Test match bowling. This England team is the high watermark so far. Their aggression does not come from slogging. They bat with a combination of intent and skills that allows them to coast at 6-an-over, without needing to take too many risks. In normal conditions, thus, they can coast to 300 – any hitting they do beyond that is the bonus that takes them to 350 or 400. It’s a whole new level, illustrated by the fact that at one point a few days ago, they had seven consecutive scores of 300 to their name. Look at their scores over the last few years, in fact, and it is clear that this is the greatest batting side in the history of one-day cricket – by a margin. There have been stumbles in this World Cup, but in the bigger picture, those are outliers. If England have a bad day in the final and New Zealand play their A-game, England might even lose today. But if Captain Morgan’s men play their A-game, they will coast to victory. New Zealand does not have those gears. No other team in the world does – for now. But one day, they will all have to learn to play like this. The India Uncut Blog © 2010 Amit Varma. All rights reserved. Follow me on Twitter. Full Article