Relative delay analysis is impacted by pbar By community.cadence.com Published On :: Thu, 23 Nov 2023 21:32:03 GMT Does anyone know how to exclude a plating bar (pbar) from a Constraint Manager analysis? I have relative delay constraints applied to a group of differential nets. When I analyze the design, they all show an error; if I delete the plating bar from the design, they all pass. The plating bar is generated on the Substrate Geometry / Plating_Bar class. I understand that I could simply delete the plating bar to verify the constraints, but when I archive this design I would like it to be clean, meaning it is in its final state for manufacturing AND passing all constraints according to design reviews. Does anyone have an idea? Thank you!
What is Allegro X Advanced Package Designer and why do I not see Allegro Package Designer Plus (APD+) in 23.1? By community.cadence.com Published On :: Fri, 01 Dec 2023 09:46:22 GMT Starting with SPB 23.1, Allegro Package Designer Plus (APD+) has been rebranded as Allegro X Advanced Package Designer (Allegro X APD). The splash screen now shows Allegro X APD instead of APD+ 2023 (image: Allegro X APD splash screen). The Windows Start menu in 23.1 displays Allegro X APD 2023 instead of APD+ 2023 (image: 23.1 Start menu). In the Product Choices window for 23.1, you will see Allegro X Advanced Package Designer in place of Allegro Package Designer + (image: 23.1 product title).
an Introducing new 3DX Canvas in Allegro X Advanced Package Designer By community.cadence.com Published On :: Tue, 05 Dec 2023 12:50:25 GMT Have you heard that starting SPB 23.1, Allegro Package Designer Plus (APD+) will be renamed as Allegro X Advanced Package Designer (Allegro X APD)? Allegro X APD offers multiple new features and enhancements on topics like Via Structures, Wirebond, Etchback, Text Wizards, 3D Canvas, and more. This post presents the new 3DX Canvas introduced in SPB 23.1. This can be invoked from Allegro X APD (from the menu item View > 3DX Canvas). Some of the key benefits of the new canvas: This canvas addresses the scale and complexity in large modern package designs. It provides highly efficient visual representation and implementation of packages. The new architecture enables high-performance 3D incremental updates by utilizing GPU for fast rendering. Real-time 3D incremental updates are supported, which means that the 3D view is in sync with all changes to the database. The new canvas provides 3D visualization support for packaging objects such as wire bonds, ball, die bump/pillar geometries, die stacks, etch back, and plating bar. This release also introduces the interactive measurement tool for a 3D view of packages. Once you open 3DX Canvas, press the Alt key and you can select the objects you want to measure. 3DX Canvas provides new 3D DRC Bond Wire Clearances with Real 3D DRC Checks. True 3D DRC in Constraint Manager has been introduced. If you open Constraint Manager, there will be a new worksheet added. Following DRC checks are supported: Wire to Wire Wire to Finger Wire to Shape Wire to Cline Wire to Component Full Article
How to access the Transmission Line Calculator in Allegro X APD By community.cadence.com Published On :: Tue, 02 Jan 2024 17:05:21 GMT Have you ever wanted a handy utility for specifying all the transmission line parameters needed to decide on a stackup? Starting with SPB 23.1, a Transmission Line Calculator is built into Allegro X Advanced Package Designer (Allegro X APD). The feature requires either an SiP Layout license or access through the SiP Layout Bundle. From the Analyze dropdown menu in the 23.1 Allegro X APD toolbar, choose Transmission Line Calculator. You can use this calculator to help decide constraints and stackup for laminate-based PCBs or packages, calculating the stackup material and width/spacing needed to meet requirements that may later be entered as constraints. Note that the results come from closed-form calculations, not from a true field solver. The calculation types the Transmission Line Calculator provides are Microstrip, Embedded microstrip, Stripline, CPW (Coplanar), FGCPW (frequency-dependent Coplanar), Asymmetric stripline, Coupled microstrip (differential pair), Coupled stripline (differential pair), and Dual striplines. This feature is useful for customers who currently rely on fabricators or spreadsheets for this information, or who need to quickly test a spacing/width against a target impedance value. Let us know your comments on this new feature in 23.1 Allegro X APD.
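To give a sense of the kind of calculation such a tool performs (the exact equations used by the Transmission Line Calculator are not documented in this post, so the following is only an illustrative sketch using the widely quoted IPC-2141-style closed-form approximation for surface microstrip): Z0 ≈ 87 / √(εr + 1.41) × ln(5.98h / (0.8w + t)). For example, with εr = 4.2, dielectric height h = 100 µm, trace width w = 180 µm, and copper thickness t = 18 µm, this gives Z0 ≈ 87/√5.61 × ln(598/162) ≈ 36.7 × 1.31 ≈ 48 Ω. Having a calculator of this kind built into the layout tool lets you iterate such width/height/εr trade-offs quickly before committing them to constraints.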
an How to export and import symbols and component properties through Die Text wizards By community.cadence.com Published On :: Thu, 04 Jan 2024 15:50:39 GMT Starting SPB 23.1, Allegro X APD lets you import/export the symbol and component properties by using Die Text-In/Out wizards. Exporting the symbol You can export the symbol by using File > Export > Die Text-Out Wizard. In the Die Text-Out Wizard window, you can see the newly added options, that is, Component Properties and Symbol Properties. This entire information including the properties will be saved in a text file. Importing the symbol You can import the same text file in Allegro X APD by using Die Text-In Wizard. Choose the text file you want to import. Symbol properties added in the text file will be visible in the Die Text-In Wizard window. Full Article
SKILL to delete vias by selected net and padstack By community.cadence.com Published On :: Thu, 01 Feb 2024 09:57:23 GMT Hi, I want to delete vias using SKILL, but I don't know how to write this script. Can you help me? The script should have an interactive interface where I can enter a padstack name and select a net; I can use a temporary group to select the vias. For example, if I want to delete the vias whose padstack is L1:L3 on net VSS, I would enter padstack L1:L3 and select net VSS. Note: in the attached image, the green net is VSS, and the padstacks are L1:L3 and L3:L5. Thanks
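For reference, a minimal SKILL sketch of the kind of utility being asked about is shown below. It is only an illustration, not a verified script: it assumes the standard Allegro/APD selection API (axlClearSelSet, axlSetFindFilter, axlAddSelectAll, axlGetSelSet, axlDeleteObject) and the via attribute names via->net->name and via->padstack->name behave as described in the SKILL reference, and it uses the current selection set rather than a temporary group. The function name and the example net/padstack values are placeholders; a production version would add an interactive form for entering the net and padstack.

```skill
; Illustrative sketch only (unverified): delete vias on a given net that use a given padstack.
; The axl* calls and the via->net / via->padstack attribute names are assumptions taken from
; the Allegro SKILL reference; verify them in your release and test on a copy of the design.
procedure( MyDeleteViasByNetAndPadstack(netName padstackName)
  let( (toDelete)
    axlClearSelSet()
    ; Restrict the find filter to vias only, then select every via in the design
    axlSetFindFilter(?enabled list("noall" "vias") ?onButtons list("vias"))
    axlAddSelectAll()
    ; Keep only the vias whose net name and padstack name match the arguments
    foreach( via axlGetSelSet()
      when( via->net && equal(via->net->name netName)
                     && equal(via->padstack->name padstackName)
        toDelete = cons(via toDelete)
      )
    )
    axlClearSelSet()
    when( toDelete axlDeleteObject(toDelete) )
    printf("Deleted %d via(s) on net %s with padstack %s\n"
           length(toDelete) netName padstackName)
  )
)
; Example usage from the command window: MyDeleteViasByNetAndPadstack("VSS" "L1:L3")
```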
Modify bumps and export the modified bump pattern By community.cadence.com Published On :: Fri, 23 Feb 2024 13:23:01 GMT Hello, please help me. There are many changes in the bump design, and I want to design the bumps in APD. The die bumps are staggered and were created with the die generator. Because the pins are not evenly spaced (the bump pitch is not uniform, to allow for RDL routing), I moved the symbol pins in the APD symbol editor (as shown in the picture), selected the symbol, used the right-mouse-button menu to run Write Device File and Write Library Symbol, and then exported the BGA text (BGA Text Out). But the bump pattern is not modified; the bumps are still in the original staggered positions. Can you help me? (pitch2 > pitch1) Thanks
Finding routing problems (Route Vision) and quickly fixing them By community.cadence.com Published On :: Mon, 18 Mar 2024 03:45:55 GMT The Vision Manager is a good tool for checking routing, but there is no quick or effective tool to fix or optimize the problems it finds, for example, parallel gap less than preferred, minimum segment/arc length, uncoupled diff-pair segments, and so on. The only method I know is to use 'spread between voids' to fix the non-optimized segments, which is inefficient. For the parallel-gap-less-than-preferred violations, my only option is to slice every trace, which is also inefficient. If I set the preferred parallel gap to 50 um, is there any tool to quickly fix all gaps smaller than 50 um? For the other problems, can I use a tool to quickly fix the minimum segment/arc length and uncoupled diff-pair segments, for example by selecting with a polygon or a window?
an Creating Power and Ground rings in Allegro X Package Designer Plus By community.cadence.com Published On :: Fri, 31 May 2024 13:19:12 GMT Power and Ground rings are exposed rings of metal surrounding a die that supply power/ground to the die and create a low-impedance path for the current flow. These rings ensure stable power distribution and reduce noise. Allegro X Package Designer Plus has a utility called Power/Ground Ring Generator which lets you define and place one or more shapes in the form of a ring around a die. To run the PWR/GND Generator Wizard, go to Route > Power/Ground Ring Generator or type "pring wizard" in the APD command window to invoke the Wizard. This Wizard lets you define and place one or more shapes in the form of a ring around a die. The Power/Ground Ring Wizard creates up to 12 rings (shapes) at a time. If you require more rings, you can run the Power/Ground Ring Wizard as many times as needed. This command displays a wizard in which you can specify: The number of rings to be generated The creation of the first ring as a die flag (Die flag is the boundary of the die like the power ring.) If you create a die flag and the first ring is the same net as the flag, you can enter a negative distance to overlap the ring and the die flag. Multiple options for placement of the rings with respect to: Origination point Distance from the edge of the die Distance from the nearest die pin on each die side The reference designator of the die with which the rings will be used The distance between rings The width of each ring The corner types on each ring (arc, chamfer, and right-angle) An assigned net name for each ring A label for each ring The rings are basic in nature. For other shape geometries or split rings, choose Shape > Polygon or Shape > Compose/Decompose Shape from the menu in the design window. Depending on the options selected, the Power/Ground Ring Wizard UI changes, representing how the rings will be created. Verify the Wizard settings to ensure that the rings are created as intended. When the Power/Ground Ring Wizard appears, set the number of rings to 2, accept the other defaults, and click Next. You can set Create first ring as die flag to create a basic die flag. 2. Define Ring 1 and the net associated with it. a) Browse and choose Vss in the Net Names dialog box. b) Click OK. c) Specify the label as VSS. d) Click Next. The first ring should appear in your design. It is associated with the proper net; in this case, VSS. For the second ring, choose the net as Vdd and specify the label as VDD. Click Next. Click Finish in the Result Verification screen to complete the process. The completed rings appear as shown below. Now, when you click on Power and Ground Die Pin and add wirebonds, you will see that the wirebonds are placed directly on the Power and Ground rings. Full Article
Allegro X APD: Tip of the Week: 'Auto-blank other rats' feature By community.cadence.com Published On :: Wed, 12 Jun 2024 09:25:34 GMT When working on a complex design, it is common to have a very large number of net ratlines, 1000 or more, which can result in a cluttered view while routing. It is therefore useful to make all other ratlines invisible while routing interactively and to make them visible again when each route action is completed. You can do this easily by enabling the Auto-blank other rats option during routing. When enabled, all rats other than the primary ones are suppressed during the Add Connect command.
Database Maintenance: DBDoctor By community.cadence.com Published On :: Wed, 21 Aug 2024 11:12:28 GMT The DBDoctor application checks the database for errors and other problems and presents a report about them. DBDoctor supports .brd, .mcm, .mdd, .psm, .dra, .pad, .sav, and .scf databases. DBDoctor can: analyze and fix database problems; eliminate duplicate vias; perform batch design rule checking (DRC); and upgrade databases that are more than one revision old. To verify the integrity of a drawing database at any time during the design cycle, run DBDoctor at regular intervals, and always run it after completing a design. You can run DBDoctor to verify work in progress, or from a terminal window outside the layout editor, for example to check multiple input designs in batch mode by using wildcards and various switches; you do not have to run the layout editor to use DBDoctor. To run it from Allegro X APD or Allegro PCB Editor, go to Tools > Database Check. You can also go to the Start menu and select Cadence PCB Utilities 2023 > PCB DB Doctor 2023. You can also use the following command to run DBDoctor in batch mode at the system command prompt: dbdoctor [-check_only] [-drc] [-drc_only] [-shapes] [-no_backup] [-outfile <newboardname.brd>] Comment below if you want to know more about this command and its integration with SKILL programming!
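As an illustration of the batch usage described above (the post does not spell out the argument order, so the placement of the design name and the file names here are assumptions): dbdoctor -drc my_design.brd would check and DRC a single design, dbdoctor -check_only *.brd would report problems across all boards in the working directory without modifying them, and adding -outfile, as in dbdoctor -drc -outfile my_design_fixed.brd my_design.brd, would write the repaired database to a new name instead of overwriting the original.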
How to transfer etch/conductor delays from Allegro Package Designer (APD) to pin delays in Allegro PCB Editor By community.cadence.com Published On :: Sun, 10 Nov 2024 23:39:10 GMT The packaging group has finished their design in Allegro Package Designer (APD) and I want to use the etch/conductor delay information from the mcm file in the board design in Allegro PCB Designer. Is there a method to do this? This can be done by exporting the etch/conductor data from APD and importing it as PIN_DELAY information into Allegro PCB Editor. If you are generating a length report for use in Allegro pin delay, consider changing the APD units to mils and unchecking the Time Delay Report option.
In Allegro Package Designer:
1. Select File > Export > Board Level Component.
2. Select HDL for the Output format and select OK.
3. Choose a padstack for use when generating the component and select OK.
This creates a file, package_pin_delay.rpt, in the component subdirectory of the current working directory. This file contains the etch/conductor delay information that can be imported into Allegro.
In Allegro PCB Editor:
1. Make sure that the device you want to import delays to is placed in your board design and is visible.
2. Select File > Import > Pin delay.
3. Browse to the component directory and select package_pin_delay.rpt. The browser defaults to *.csv files, so change Files of type to *.* to select the file.
4. You may be prompted with an error message stating that the component cannot be found and that you should select one. If so, select the appropriate component.
5. Select Import.
6. Once the import is completed, select Close.
Note: It is important that all non-trace shapes have a VOLTAGE property so they will not be processed by the 2D field solver. You should run Reports > Net Delay Report in APD prior to generating the board-level component. This will display the net name of each net as it is processed. If you miss a VOLTAGE property on a net, the net name will show in the report processing window, and you will know which net needs the property.
Maximizing Display Performance with Display Stream Compression (DSC) By community.cadence.com Published On :: Wed, 11 Sep 2024 12:50:00 GMT Display Stream Compression (DSC) is a lossless or near-lossless image compression standard developed by the Video Electronics Standards Association (VESA) for reducing the bandwidth required to transmit high-resolution video and images. DSC compresses video streams in real time, allowing for higher resolutions, refresh rates, and color depths while minimizing the data load on transmission interfaces such as DisplayPort, HDMI, and embedded display interfaces.
Why Is DSC Needed? In the ever-evolving landscape of display technology, the pursuit of higher resolutions and better visual quality is relentless. As display capabilities advance, so do the challenges of managing the immense amounts of data required to drive these high-performance screens. This is where DSC steps in. DSC is designed to address the challenges of transmitting ultra-high-definition content without sacrificing quality or performance. As displays grow in resolution and capability, the amount of data they need to transmit increases exponentially. DSC addresses these issues by compressing video streams in real time, significantly reducing the bandwidth needed while preserving image quality. (Figure: DSC use in an end-to-end system)
DSC Key Features
- Encoding tools: Modified Median-Adaptive Prediction (MMAP), Block Prediction (BP), Midpoint Prediction (MPP), Indexed Color History (ICH), and entropy coding using delta size unit-variable length coding (DSU-VLC). (Figure: DSC encoding algorithm)
- The DSC bitstream and decoding process are designed to facilitate decoding of 3 pixels/clock in practical hardware decoder implementations. Hardware encoder implementations are possible at 1 pixel/clock.
- DSC uses an intra-frame, line-based coding algorithm, which results in very low latency for encoding and decoding.
- Compression can be done to a fractional bpp; the compressed bits per pixel range from 6 to 63.9375.
- For validation/compliance certification of DSC compression and decompression engines, cyclic redundancy checks (CRCs) are used to verify the correctness of the bitstream and the reconstructed image.
- DSC supports color bit depths of 8, 10, 12, 14, and 16 bpc.
- DSC supports RGB and YCbCr input formats with 4:4:4, 4:2:2, and 4:2:0 sampling.
- The maximum decompressor-supported bits/pixel values are those listed in the Maximum Allowed Bit Rate column of the specification table (not reproduced here); a DP DSC source device shall program the bit rate within the range given in the Minimum Allowed Bit Rate column of that table.
Summary Display Stream Compression (DSC) is a technology used in DisplayPort to enable higher resolutions and refresh rates while maintaining high image quality. It works by compressing the video data transmitted from the source to the display, effectively reducing the bandwidth required. DSC uses a visually lossless algorithm, meaning that the compression is designed to be imperceptible to the human eye, preserving the fidelity of the image. This technology allows for smoother, more detailed visuals at higher resolutions, such as 4K or 8K, without requiring a significant increase in data bandwidth.
More Information Cadence has a very mature Verification IP solution that can be used with DisplayPort 2.1 and DisplayPort 1.4 designs in many different configurations, so you can choose the best version for your specific needs. The DisplayPort VIP provides a full-stack solution for Sink and Source devices with a comprehensive coverage model, protocol checkers, and an extensive test suite. More details are available on the DisplayPort Verification IP product page and the Simulation VIP pages. If you have any queries, feel free to contact us at talk_to_vip_expert@cadence.com
an Training Insights – Palladium Emulation Course for Beginner and Advanced Users By community.cadence.com Published On :: Fri, 13 Sep 2024 23:00:00 GMT The Cadence Palladium Emulation Platform is a hardware system that implements the design, accelerating its execution and verification. Itoffers the highest performance and fastest bring-up times for pre-silicon validation of billion-gate designs, using a custom processor built by Cadence. This Palladium Introduction course is based on the Palladium 23.03 ISR4 version and covers the following modules: Introduction Palladium flow Running a design on the Palladium system This course starts with an “Introduction” module that explains Palladium and other verification platforms to show its place in the big picture. It also compares Palladium with Protium and simulation and discusses its usage and limitations. The “Palladium Flow” module includes two stages at a high level, which are Compile and Run. Then, it covers these stages in detail. First, it covers the ICE compile flow and IXCOM compile flow steps in detail. Then it explains Run, which is common for both ICE and IXCOM modes. The third module, “Running Design on the Palladium System,” covers all the items required for running your design on the Palladium system, including: Software stack requirements Basic concepts required to understand the flow Compute machine requirements In addition, this course contains labs for both the ICE and IXCOM flows with detailed steps to exercise the features provided by the Palladium system. The lab explains a practical example of multiple counters and exercising their signals for force, monitor, and deposit features, along with frequency calculation using a real-time clock. The course is available on the Cadence support page: There is also a Digital Badge available. You will find the Badge exam opportunity when you enroll in the Online training or after you have taken the training as "live" training. For questions and inquiries, or issues with registration, reach out to us at Cadence Training. Want to stay up to date on webinars and courses? Subscribe to Cadence Training emails. To view our complete training offerings, visit the Cadence Training website. Related Training Bytes Palladium: What Are Verification Platforms Palladium: What Is Processor Based Emulation Palladium: Comparing Emulation (Z2) and Prototyping (X2) Palladium: What Are ICE and IXCOM Compile Flow Palladium: How to Process a Design to Run on Palladium Palladium: XCOM Compile Flow (TB+RTL to Palladium Database) Palladium: ICE Compile Flow (RTL to Palladium Database) Palladium: Legacy ICE Compile Flow Palladium: Cadence Software Releases for Palladium and Protium Flow Palladium: Setting of PATHs for Using Palladium Palladium: Z2 Hardware Structure (Blade and Boards) Palladium: What Is Sourceless and Loadless nets Palladium: Design Clocks Palladium: Step Count and Step Clock Palladium: Steps for Running the Design on Palladium Z2 Related Courses Verilog Language and Application Training SystemVerilog for Design and Verification Xcelium Simulator Related Blogs Training Insights – A New Free Online Course on the Protium System for Beginner and Advanced Users It’s the Digital Era; Why Not Showcase Your Brand Through a Digital Badge! Training Insights - Free Online Courses on Cadence Learning and Support Portal Full Article digital badge live training blended training Palladium Training Insights online training
an Partial Header Encryption in Integrity and Data Encryption for PCIe By community.cadence.com Published On :: Mon, 07 Oct 2024 02:25:00 GMT Cadence PCIe/CXL VIP support for Partial Header Encryption in Integrity and Data Encryption.(read more) Full Article CXL Verification IP PCIe IDE
an Unveiling the Capabilities of Verisium Manager for Optimized Operations By community.cadence.com Published On :: Thu, 17 Oct 2024 06:13:06 GMT In SoC development, the verification cycle is a crucial phase that ensures products meet their specifications and function correctly. However, the complexity of modern SoC projects, with their constant data flow, multiple validation teams working in parallel, and tight schedules, presents significant challenges. This article explores these challenges and introduces Verisium Manager as a solution that embodies the 'One Tool Fits All' concept. This means that Verisium Manager is designed to handle all aspects of the verification process for SoC development, from planning to coverage analysis to regression testing, thereby addressing the complex needs of SoC verification. The Hurdles in Traditional Validation Cycles A typical validation process involves planning, coverage analysis, and regression testing. This complexity is compounded by using separate tools for each activity, leading to multiple control environments, APIs, and databases, not to mention the array of tool owners. Such fragmentation results in constant data transfer and translation between systems, from the planning tool to the coverage analysis tool and then to the regression testing tool. This continuous movement of data causes delays, system instability, poor user experiences, and, ultimately, a dip in the quality of the validation process. The use of multiple platforms leads to inefficiency and reduced productivity. What's needed is a unified system that can streamline the workflow, simplify the verification process, and enhance its effectiveness. Envisioning the Ideal Solution: Verisium Manager The cornerstone of an efficient validation cycle is integration and simplicity. The ideal solution is a singular platform that consolidates planning, coverage analysis, and regression management into one smooth, unified process. Verisium Manager emerges as this much-needed solution, encompassing all the functionalities necessary to streamline the validation process. Its comprehensive nature instills confidence in its ability to handle all aspects of the verification cycle. It can be fully customized to address and enforce any validation methodology and can facilitate smooth integration into any customer environment. Features that stand out in Verisium Manager include: Unified Workflow: It acts as a single cockpit from which all activities are orchestrated, ensuring the validation teams' work is uninterrupted and seamlessly integrated. Customization and Integration: Verisium Manager supports customizing test-plan structures and mapping results per project, ensuring a perfect fit for various project requirements. Its ability to smoothly integrate into the project's environment and compute platforms is unparalleled. Support for Continuous Updates and Migration: The tool accommodates constant updates to project data and supports the migration of legacy data, ensuring that no historical data is lost in the transition to a new system. Addressing Project-Specific Needs Verisium Manager recognizes diversity in different projects and offers project-specific solutions, including: Enforcing Project Test-Plan Structures and Attributes: It supports and enforces each project's unique test-plan structure and mapping guidelines. 
Unified Data Views and Measurements: Verisium Manager promotes a unified view of data across all teams and enforces unified measurements, ensuring consistency and clarity in the validation process. Enabling Project-Specific Actions and Integrations: The tool is designed to support project-specific actions directly from its graphical user interface and allows for smooth integration with in-house databases, dashboards, and the project execution stack. Verisium Manager is the epitome of efficiency in software/hardware validation. Its differentiating features, such as support for customization, unified data view, and comprehensive coverage and regression requirements, make it an indispensable tool for any validation team looking to elevate their workflow. Full Article validation vPlan verisium Verisium Manager vManager verification
Deferrable Memory Write Usage and Verification Challenges By community.cadence.com Published On :: Thu, 17 Oct 2024 21:00:00 GMT Real-time data processing and responsiveness are crucial in high-performance computing, data centers, and other applications requiring low-latency data transfers. Deferrable Memory Write (DMWr) enables efficient use of PCIe bandwidth and resources by intelligently managing memory write operations based on system dynamics and workload priorities. By effectively leveraging DMWr, devices can achieve optimized performance and responsiveness, aligning with the evolving demands of modern computing applications.
What Is Deferrable Memory Write? The DMWr ECN introduced this new memory transaction type as an ECN to the PCIe 5.0 specification, and it is carried forward into CXL 2.0. DMWr flows like the existing memory read/write transactions; the major difference is that the Requester attempts to write to a given location in Memory Space using the non-posted DMWr TLP type, so completion of the memory write can be delayed or deferred until higher-priority work completes, improving overall system efficiency and performance. DMWr requires the Completer to return an acknowledgment to the Requester and provides a mechanism for the recipient to defer (temporarily refuse to service) the Request. This gives Endpoints and Hosts the choice to carry out or defer incoming DMWr Requests, which can be used to simplify the design of flow control, reduce latency, and improve throughput. The DMWr TLP format is shown in Figure A. (Fig A: Deferrable Memory Write TLP format)
Example Scenario Here is how DMWr works in a simplified example. Imagine a system with an endpoint device (Device A) and a host CPU (Device B). Device B wants to write data to Device A's memory, but for various reasons, such as system bus congestion or prioritization of other transactions, Device A can defer completion of the memory write request:
1. Initiation of Memory Write: Device B initiates a memory write transaction to Device A, sending the memory write request along with the data payload over the PCIe link.
2. Acknowledgment and Deferral: Upon receiving the memory write request, Device A acknowledges the transaction but may decide to defer its completion. Device A sends an acknowledgment (ACK) back to Device B, indicating that it has received the data and intends to complete the write operation, but not immediately.
3. Deferred Completion: Device A defers the completion of the memory write operation to a later, more opportune time. This deferral allows Device A to prioritize other transactions or optimize the use of system resources, such as memory bandwidth or processor availability.
4. Completion and Response: At a later point, Device A completes the deferred memory write operation and sends a completion indication back to Device B. This completion typically includes any status updates or additional information related to the transaction.
Usage or Importance of DMWr Deferrable Memory Write usage provides the improvement in the following aspects: Reduced Latency: By deferring less critical memory write operations, more critical transactions can be processed with lower latency, improving overall system responsiveness. Improved Efficiency: Optimizes the utilization of system resources such as memory bandwidth and CPU cycles, enhancing the efficiency of data transfers within the PCIe architecture. Enhanced Performance: Allows devices to manage and prioritize transactions dynamically, potentially increasing overall system throughput and reducing contention. Challenges in the Implementation of DMWr Transactions The implementation of deferrable memory writes (DMWr) introduces several advancements and challenges in terms of usage and verification: Timing and Synchronization: DMWr allows transactions to be deferred, complicating timing requirements or completing them within acceptable timing windows to avoid protocol violations. Ensuring proper synchronization between devices becomes critical to prevent data loss or corruption. Protocol Compliance: Verification must ensure compliance with ECN PCIe 6.0 and CXL specifications regarding when and how DMWr transactions can be initiated and completed. Performance Optimization: While DMWr can improve overall system performance by reducing latency, verifying its impact on system performance and ensuring it meets expected benchmarks is crucial. Error Handling: Handling errors related to deferred transactions adds complexity. Verifying error detection and recovery mechanisms under various scenarios (e.g., timeout during deferral) is essential. Verification Challenges of DMWr Transactions The challenges to verifying the DMWr transaction consist of all checks with respect to Function, Timing, Protocol compliance, improvement, Error scenario, and security usage on purpose, as well as Data integrity at the PCIe and CXL. Functional Verification: Verifying the correct implementation of DMWr at both ends of the PCIe link (transmitter and receiver) to ensure proper functionality and adherence to specifications. Timing Verification: Validating timing constraints associated with deferring writes and ensuring transactions are completed within specified windows without violating protocol rules. Protocol Compliance Verification: Checking that DMWr transactions adhere to PCIe and CXL protocol rules, including ordering rules and any restrictions on deferral based on the transaction type. Performance Verification: Assessing the impact of DMWr on overall system performance, including latency reduction and bandwidth utilization, through simulation and testing. Error Scenario Verification: Creating and testing scenarios to verify error handling mechanisms related to DMWr, such as timeouts, retries, and recovery procedures. Security Considerations: Assessing potential security vulnerabilities related to DMWr, such as data integrity risks during deferred transactions or exposure to timing-based attacks. Major verification challenges and approaches are timing and synchronization verification in the context of implementing deferrable memory writes (DMWr), which is crucial due to the inherent complexities introduced by deferred transactions. Here are the key issues and approaches to address them: Timing and Synchronization Issues Transaction Completion Timing: Issue: Ensuring deferred transactions are completed within the specified time window without violating protocol timing constraints. 
Approach: Design an internal timer and checker to model worst-case scenarios where transactions are deferred and verify that they are complete within allowable latency limits. This involves simulating various traffic loads and conditions to assess timing under different scenarios. Ordering and Dependencies: Issue: Verifying that transactions deferred using DMWr maintain the correct ordering and dependencies relative to non-deferred transactions. Approach: Implement test scenarios that include mixed traffic of DMWr and non-DMWr transactions. Verify through simulation or emulation that dependencies and ordering requirements are correctly maintained across the PCIe link. Interrupt Handling and Response Times: Issue: Verify the handling of interrupts and ensure timely responses from devices involved in DMWr transactions. Approach: Implement test cases that simulate interrupt generation during DMWr transactions. Measure and verify the response times to interrupts to ensure they meet system latency requirements. In conclusion, while deferrable memory writes in PCIe and CXL offer significant performance benefits, their implementation and verification present several challenges related to timing, protocol compliance, performance optimization, and error handling. Addressing these challenges requires rigorous testing and testbench of traffic, advanced verification methodologies, and a thorough understanding of PCIe specifications and also the motivation behind introducing this Deferrable Write is effectively used in the CXL further. Outcomes of Deferrable Memory Write verify that the performance benefits of DMWr (reduced latency, improved throughput) are achieved without compromising timing integrity or violating protocol specifications. In summary, PCIe and CXL are complex protocols with many verification challenges. You must understand many new Spec changes and consider the robust verification plan for the new features and backward compatible tests impacted by new features. Cadence's PCIe 6.0 Verification IP is fully compliant with the latest PCIe Express 6.0 specifications and provides an effective and efficient way to verify the components interfacing with the PCIe 6.0 interface. Cadence VIP for PCIe 6.0 provides exhaustive verification of PCIe-based IP and SoCs, and we are working with Early Adopter customers to speed up every verification stage. More Information For more info on how Cadence PCIe Verification IP and TripleCheck VIP enable users to confidently verify PCIe 6.0, see our VIP for PCI Express, VIP for Compute Express Link, and TripleCheck for PCI Express See the PCI-SIG website for more details on PCIe in general and the different PCI standards. Full Article CXL PCIe PCIe Gen5 Deferrable memory write transaction
an Sigrity and Systems Analysis 2024.1 Release Now Available By community.cadence.com Published On :: Wed, 23 Oct 2024 11:16:00 GMT The Sigrity and Systems Analysis (SIGRITY/SYSANLS) 2024.1 release is now available for download at Cadence Downloads . For the list of CCRs fixed in this release, see the README.txt file in the installation hierarchy. SIGRITY/SYSANLS 2024.1 Here is a list of some of the key updates in the SIGRITY/SYSANLS 2024.1 release: For more details about these and all the other new and enhanced features introduced in this release , refer to the following document: Sigrity Release Overview and Common Tools What's New . Supported Platforms and Operating Systems Platform and Architecture X86_64 (lnx86) Windows (64 bit) Development OS RHEL 8.4 Windows Server 2022 Supported OS RHEL 8.4 and above RHEL 9 SLES 15 (SP3 and above) Windows 10 Windows 11 Windows Server 2019 Windows Server 2022 Systems Analysis 2024.1 Clarity 3D Solver Clarity 3D Layout Structure Optimization Workflow : A new workflow, Clarity 3D Layout Structure Optimization Workflow, has been added to Clarity 3D Layout. This workflow integrates Allegro PCB Designer with Clarity 3D Layout for high-speed structure optimization. Component Geometry Model Editor : The new Clarity 3D Layout editor lets you set up ports, solder bumps/balls/extrusions, and two-terminal and multi-terminal circuits using a single GUI. Coaxial Open Port Option Added to Port Setup Wizard : The Coaxial Open Port option lets you create ports for each target net pin and reference net pin in Clarity 3D Layout. The nearby reference net pins are then used as a reference for each target net pin, reducing the number of ports needed. In addition, the ports of unused reference net pins are shorted to the ground. Parametric Import Option Added : Two new options, Parametric Import and Default Import , have been added to the Tools – Launch Clarity3DWorkbench menu. The Parametric Import option lets you import the design along with its parameters into Clarity 3D Workbench. The Default Import option lets you ignore the parameters when importing the design into Clarity 3D Workbench. Component Library Added to Generate 3D Components : Clarity 3D Workbench now includes a new component library that lets you use predefined 3D component templates or add existing 3D components to create 3D designs and simulation models. AI-Powered Content Search Capability : Clarity 3D Workbench and Clarity 3D Transient Solver now support an AI-powered capability for searching the content and displaying relevant information. Expression Parser to Handle Undefined Parameters : Clarity 3D Workbench and Clarity 3D Transient Solver support writing expressions or equations containing undefined parameters in the Property window to describe a simulation variable. The improved expression parser automatically detects any undefined parameter in an expression and prompts users to specify their values. This capability lets you define a model or a simulation variable as a function instead of specifying static values. For detailed information, refer to Clarity 3D Layout User Guide and Clarity 3D Workbench User Guide on the Cadence Support portal. Clarity 3D Transient Solver Mesh Processing Improved to Simulate Large Use Cases : Clarity 3D Transient Solver leverages a new meshing algorithm that enhances overall mesh processing, specifically for large designs and use cases. 
The new algorithm dramatically improves the mesh quality, minimum mesh size, number of mesh key points, total mesh number, and memory usage. Advanced Material Processing Engine : The material processing capability has been enhanced to handle thin outer metal, which previously resulted in open and short issues in some designs. In addition, the material processing engine offers improved mode extraction for particular use cases, including waveguide and coaxial designs. Characteristic Impedance Calculation Improved : The solver engine now uses a new analytical calculation method to calculate the characteristic impedance of coaxial designs with improved accuracy. For detailed information, refer to Clarity 3D Transient Solver User Guide on the Cadence Support portal. Celsius Studio Celsius Interchange Model Introduced : Celsius Studio now supports Celsius Interchange Model generation, which is a 3D model derived from detailed physical designs for multi-physics and multi-scale analysis. This Celsius Interchange Model file ( .cim ) serves as a design information carrier across Celsius Studio tools, enabling a variety of simulation and analysis tasks . Celsius 3DIC Thermal Workflow Improvements : The Thermal Simulation workflows in Celsius 3DIC have been significantly enhanced. Key improvements include: Advanced Power Setup with Transient Power Function and Multi Mode options Enhanced GUI for the Mesh Control and Simulation Control tabs Improved meshing capabilities Celsius Interchange Model ( .cim ) generation Material library support for block and connections Import of Heat Transfer Coefficients (HTCs) from a CFD file Bump creation through the Bump Array Wizard Layer Stackup CSV file generation Celsius 3DIC Warpage and Stress Workflow Enhancements : The Warpage and Stress workflow in Celsius 3DIC has undergone significant improvements, such as: Improved multi-stage warpage simulation flow for 3DIC packaging process Enhanced GUI for the Mesh Control , Simulation Control , and Stress Boundary Conditions tabs Support for large deformations and temperature profiles Bump creation through the Bump Array Wizard New constraint types Enhanced meshing capabilities Geometric Nonlinearity Support in Warpage and Stress Analysis : Large deformation analysis is now supported in warpage and stress studies. This study uses the Total Lagrangian approach to model geometric nonlinearities in simulation, which allows accurate prediction of final deformations. Thermal Network Extraction and Simulation : In the solid extraction flow in Celsius 3D Workbench, you can now import area-based power map files to create terminals. For designs with multiple blocks, this capability allows automatic terminal creation, eliminating the need to manually create and set up 2D sheets individually. Additionally, thermal throttling feature is now supported in Celsius Thermal Network. This makes it ideal for preliminary analyses or when a quick estimation is required. It runs significantly faster than 3D models, allowing for quicker iterations and more efficient decision-making. For detailed information, refer to the Celsius 3DIC User Guide , Celsius Layout User Guide and Celsius 3D Workbench User Guide on the Cadence Support portal. Sigrity 2024.1 Layout Workbench Improved Graphical User Interface : A new option, Use Improved User Interface , has been added in the Themes page of the Options dialog box in the Layout Workbench GUI. In the new GUI, the toolbar icons and menu options have been enhanced and rearranged. 
For detailed information, refer to Layout Workbench User Guide on the Cadence Support portal. Broadband SPICE Python Script Integration with Command Line for Simulation Tasks : Broadband SPICE lets you run Python scripts directly from the command line for performing simulation and analysis. The new -py and *.py options make it easier to integrate Python scripts with the command-line operations. This update streamlines the process of automating and customizing simulations from the command line, which makes your simulation tasks faster and easier. For detailed information, refer to Broadband SPICE User Guide on the Cadence Support portal. Celsius PowerDC Block Power Assignment (BPA) File Format Support : PowerDC now supports the BPA file format. Similar to the Pin Location (PLOC) file, the BPA file is a current assignment file that defines the total current of a power grid cell, which is then equally distributed across the power pins within the cell. This provides better control over the power distribution. Ability to Run Multiple IR Drop Cases Sequentially : You can now select multiple result sinks from the Current-Limited IR Drop flow and run IR Drop analysis for them sequentially. PowerDC automatically runs the simulations in sequence after you select multiple result sinks. This saves time by automating the process. Enhanced Support for Mixed Conversion Devices : PowerDC now supports mixing different conversion devices, such as switching regulators and linear regulators within a single DC-DC/LDO instance. This enhancement offers added flexibility by letting you configure each instance in your design according to your specific needs. For detailed information, refer to PowerDC User Guide on the Cadence Support portal. PowerSI Monte Carlo Method Added : A new option, Monte Carlo Method, has been added in the Optimality dialog box. This option lets you create multiple random samples to depict variations in the input parameters and assess the output. Channel Check Optimization Added : The S-Parameter Assessment workflow in PowerSI now supports Channel Check Optimization . It uses the AI-driven Multidisciplinary Analysis and Optimization (MDAO) technology that lets you optimize your design quickly and efficiently with no accuracy loss. For detailed information, refer to PowerSI User Guide on the Cadence Support portal. SPEEDEM Multi-threaded Matrix Solver Support Added : The Enable Multi-threaded Matrix Solver check box has been added that lets you accelerate the simulation speed for high-performance computing. This check box provides two options, Automatic and Always, to include the -lhpc4 or -lhpc5 parameter, respectively, in the SPEEDEM Simulator (SPDSIM) before running the simulation. For detailed information, refer to the SPEEDEM User Guide on the Cadence Support portal. XtractIM Options to Skip or Calculate Special DC-R Simulation Results : The Skip DC_R of Each Path and Only DC_R of Each Path options have been added to the Setup menu. Skip DC_R of Each Path : This option lets you skip the calculation of the DC-R result during the simulation. Other results, such as SPICE T-model , RL_C of Each Path , Coupling of Each Path , etc., are still calculated. Only DC_R of Each Path : This option lets you calculate the DC-R result only during the simulation. Other results, such as SPICE T-model , RL_C of Each Path , Coupling of Each Path , etc., are not calculated. 
Color Assignment for Pin Matching : The MCP Auto Connection window includes the Display Color Editor , which lets you assign a color for pin matching. It helps you easily identify the matching pins in the left and right sections of the MCP Auto Connection window . Ability to Save Simulations Individually : The Save each simulation individually check box has been added to the Tools - Options - Edit Options - Simulation (Basic) - General form. Select this check box and run the simulation to generate a simulation results folder containing files and logs with a timestamp for each simulation. Reuse of SPD File Settings : The XtractIM setup check box lets you import an existing package setup to reuse the configurations and settings from one .spd file to another. For detailed information, refer to XtractIM User Guide on the Cadence Support portal. Documentation Enhancements Cloud-Based Help System Upgraded The cloud-based help system, Doc Assistant, has been upgraded to version 24.10, which contains several new features and enhancements over the previous 2.03 version. Sigrity Release Team Please send your questions and feedback to sigrity_rmt@cadence.com . Full Article
an Wild River Collaborates with Cadence on CMP-70 Channel Modeling By community.cadence.com Published On :: Wed, 23 Oct 2024 23:00:00 GMT Wild River Technology (WRT), the leading supplier of signal integrity measurement and optimization test fixtures for high-speed channels at data rates of up to 224G, has announced the availability of a new advanced channel modeling solution that helps achieve extreme signal integrity design to 70GHz. Read the press release. The CMP-70 program continues the industry-first simulation-to-measurement collaboration with Cadence that was initially established with the CMP-50. Significant resources were dedicated to the development of the CMP-70 by Cadence and WRT over almost three years. The CMP-70 will be on display at DesignCon 2025 , January 28-30, in Cadence booth 827 to benchmark the Cadence Clarity 3D Solver . “I am not a fan of hype-based programs that simply get attention,” remarked Alfred P. Neves, WRT’s co-founder and chief technical officer. “Both Cadence and Wild River brought substantial skills to the table in this project as we continued our industry-first simulation-to-measurement collaboration. The result is a proven, robust and accurate platform that brings extreme signal integrity to 70GHz designs. This application package has also been instrumental in demonstrating the robust 3D EM simulation capability of the Cadence Clarity solver.” “We’re delighted to continue the joint development and validation program with WRT that started with the CMP-50,” said Gary Lytle, product management director at Cadence. “The skilled and experienced signal integrity technologists that both companies bring to the program results in a superior signal integrity solution for our mutual customers.” CMP-70 Solution Features The solution is available both in a standard configuration and as a custom solution for customer-specific stackups and fabrication. The primary target application is to support a 3D EM solver analysis modeling versus the time- and frequency-domain measurement methodologies. The solution features include: The CMP-70 platform, assembled and 100% TDR NIST traceable tested, with custom stands Material Identification overview web-based meeting including anisotropic 3D material identification A cross-section PCB report and structures for using as-fabricated geometries Measured S-parameters, pre-tested for quality (passivity/causality and resampled for time domain simulations) A host of novel crosstalk structures suited for 112G HD level project analysis PCB layout design files (NDA required) An EDA starter library including loss models with industry-first accurate surface roughness models Comprehensive training available for 3D EM analysis – correspondence, material ID in X-Y and Z axis for a host of EDA tools Industry-First Hausdorff Technique The WRT application package also includes an industry-first modified Hausdorff (MHD) technique , included as MATLAB code. This algorithmic approach provides an accurate way to compare two sets of measurements in multi-dimensional space to determine how well they match. The technique is used to compare the results simulated by the Clarity solver with those measured on the CMP-70 platform. The methodology and initial results are shown in the figure below, where the figure of merit (FOM) is calculated from 10, 35, and finally to 50GHz. The MHD algorithm requires a MATLAB license, but WRT also accommodates customer data as another option, where WRT provides the comparison between measured and simulated data. 
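For readers unfamiliar with the metric, the Hausdorff family of distances is a standard way to compare two point sets, such as a measured and a simulated S-parameter curve: the directed Hausdorff distance from set A to set B is the largest of the distances from each point of A to its nearest point in B, and the classical modified Hausdorff distance (Dubuisson and Jain) replaces that maximum with an average over the points of A, making the figure less sensitive to a single outlier. WRT's exact formulation and weighting are not described in this announcement, so treat this only as general background; the specific figure of merit reported for the CMP-70 is defined by the MATLAB code shipped with the application package.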
Additional Resources If you are attending DesignCon 2025 , be sure to stop by Cadence booth 827 to see WRT’s CMP-70 advanced channel modeling solution in action with the Clarity 3D Solver. Check out our on-demand webinar, " Validating Clarity 3D Solver Accuracy Through Measurement Correlation ." Learn more about the CMP-70 solution and the Clarity 3D Solver . For more information about Cadence’s full suite of integrated multiphysics simulation solutions, download our Multiphysics System Analysis Solutions Portfolio . Full Article
an McLaren and Cadence Are Engineering Success By community.cadence.com Published On :: Thu, 31 Oct 2024 14:00:00 GMT Celebrated for their unparalleled engineering expertise and pioneering mindset, McLaren stands at the forefront of innovation. Theirs is a story of engineering excellence, a symphony of speed driven by the relentless pursuit of aerodynamic perfection. In 2022, Cadence was named an Official Technology Partner of the McLaren Formula 1 Team. The multi-year partnership between McLaren and Cadence has helped redefine the boundaries of what’s possible in Formula 1 aerodynamics. Shaving off a fraction of a second per lap can make all the difference in a podium finish, and track conditions bring layers of complexity to the design process. That’s where Cadence steps in with Fidelity CFD Software. The Cadence Fidelity CFD software is a comprehensive suite of computational fluid dynamics (CFD) solutions. Access to this solution allows the McLaren F1 team to accelerate their CFD workflow, enabling them to assess designs faster and more precisely. It also allows them to investigate airflows and tackle design projects that require advanced compute power and precision. With Fidelity Flow’s solver capabilities and Python-driven automation, Cadence’s CFD software aids the advancement of aerodynamic simulations that go into McLaren’s F1 cars. With a customized, high-quality, multi-block meshing strategy and optimized workflow, Fidelity CFD makes design exploration more automated, thereby helping establish a strong foundation for McLaren’s future success on the track. Lando Norris, F1 driver for McLaren, said, “As a driver, I saw the impact of every decision made in the design room in every simulation run. The work on aerodynamics directly translates to the confidence I have on track, the grip in every turn, and the speed on every straight. This partnership, this technology, is what will give us the edge. It's not just about battling opponents; it's about mastering the airflow around the car in every driving condition on every track.” If you’re interested in learning more about the importance of CFD in McLaren’s racing success, be sure to attend our upcoming webinar, “CFD and Experimental Aerodynamics in McLaren F1 Engineering.” Christian Schramm, McLaren’s director of advanced projects, and Cadence’s Benjamin Leroy will be the main speakers for the event. Register today to secure your spot! For more insights on the Formula 1 car design process, take a look at the case study, “ McLaren Formula 1 Car Aerodynamics Simulation with Cadence Fidelity CFD Software .” Learn more about how McLaren and Cadence are engineering success . “Designed with Cadence” is a series of videos that showcases creative products and technologies that are accelerating industry innovation using Cadence tools and solutions. For more Designed with Cadence videos, check out the Cadence website and YouTube channel . Full Article
an Solutions to Maximize Data Center Performance Featured at OCP Global Summit 2024 By community.cadence.com Published On :: Wed, 06 Nov 2024 00:21:00 GMT The demand for higher compute performance, energy efficiency, and faster time-to-market drove the conversations at this year's Open Compute Project (OCP) Global Summit in San Jose, California. It was the scene of showcasing groundbreaking innovations, expert-led sessions, and networking opportunities to drive the future of data center technology. For those who didn't get to attend or stop by our booth, here's a recap of Cadence's comprehensive solutions that enable next-generation compute technology, AI data center design, analysis, and optimization. Optimized Data Center Design and Operations As the data center community increasingly faces demands for enhanced efficiency, thermal management, sustainability, and performance optimization, data center operators, IT managers, and executives are looking for solutions to these challenges. At the Cadence booth, attendees explored the Cadence Reality Digital Twin Platform and Celsius EC Solver. These technologies are pivotal in achieving high-performance standards for AI data centers, providing advanced digital twin modeling capabilities that redefine next-generation data center design and operation. The Celsius EC Solver demonstration showed how it solves challenging thermal and electronics cooling management problems with precision and speed. CadenceCONNECT: Take the Heat Out of Your AI Data Center Cadence hosted a networking reception on October 16 titled "Take the Heat Out of Your AI Data Center." In today's AI era, managing the heat generated by high-density computing environments is more critical than ever. This reception offered insights into current and emerging data center technologies, digital twin cooling strategies that deliver energy-saving operations, and a chance to engage with industry leaders, Cadence experts, and peers to explore the latest cooling, AI, and GPU acceleration advancements. Here's a recap: Researcher, author, and entrepreneur Dr. Jon Koomey highlighted the inefficiency of data centers in his talk "The Rise of Zombie Data Centers," noting that 20-30% of their capacity is stranded and unused. He advocated for organizational changes and technological solutions like digital twins to reduce wasted energy and improve computational effectiveness as AI deployments increase. In "A New Millennium in Multiphysics System Analysis," Cadence Corporate VP Ben Gu explained the company's significant strides in multiphysics system analysis, evolving from chip simulation to a broader application of computational software for simulating various physical systems, including entire data centers. He noted that the latest Cadence venture, a digital twin platform for data center optimization, opened the opportunity to use simulation technology to optimize the efficiency of data centers. Senior Software Engineering Group Director Albert Zeng highlighted the Cadence Reality DC suite's ability to transform data center operations through simulation, emphasizing its multi-phase engine for optimal thermal performance and the integration of AI capabilities for enhanced design and management. 
A panel discussion titled "Turning AI Factory Blueprints into Reality at the Speed of Light" featured industry experts from NVIDIA, Norman Wright Precision Environmental and Power, NV5, Switch Data Centers, and Cadence, who explored the evolving requirements and multidimensional challenges of AI factories, emphasizing the need for collaboration across the supply chain to achieve high-performing and sustainable data centers. Watch the highlights. Transforming Designs from Chips to Data Centers The OCP Global Summit 2024 has reaffirmed its status as a pivotal event for data center professionals seeking to stay at the forefront of technological advancements. Cadence's contributions, from groundbreaking digital twin technologies to innovative cooling strategies, have shed light on the path forward for efficient, sustainable data centers. For data center professionals, IT managers, and engineers, the insights gained at this summit are invaluable in navigating the challenges and opportunities presented by the burgeoning AI era. Partnering with Arm Arm Total Design Cadence is a member of the Arm Total Design program. At an invitation-only special Arm event, Cadence's VP of Research and Development, Lokesh Korlipara, delivered a presentation focusing on data center challenges and design solutions with Arm Neoverse Compute Subsystem (CSS). The session highlighted: efficient integration of Arm Neoverse CSS into systems on chip (SoCs) with pre-integrated connectivity IP; performance analysis and verification of the Neoverse CSS integration into the SoC through Cadence's System VIP verification suite and automated testbench creation, enhancing both quality and productivity; jumpstarting designs through Cadence's collaboration with Arm for 3D-IC system planning, chiplets, and interposers; and Design Services readiness and global scale to support and/or deliver the most demanding Arm Neoverse CSS-based SoC design projects. Cadence Supports Arm CSS in Arm Booth During the event, Cadence conducted a demo in the Arm booth that showcased the Cadence System VIP verification suite. The demo highlighted automated testbench creation and performance analysis for integrating the Arm CSS into SoCs while enhancing verification quality and productivity. Summary Cadence offers data center solutions for designing everything from the compute and networking chips to the board, racks, data centers, and campuses. Stay connected with Cadence and other industry leaders to continue exploring the innovations set to redefine the future of data centers. Learn More: Cadence Joins Arm Total Design; Cadence Arm-Based Solutions; Cadence Reality Digital Twin Platform. Full Article
an Celebrating Milestones: The Cadence Bangalore Toastmasters Club’s Journey By community.cadence.com Published On :: Wed, 06 Nov 2024 18:00:00 GMT On November 5, 2024, the Cadence Bangalore Toastmasters Club celebrated a significant milestone by hosting its 50th meeting. Established in December 2020, the club was created to provide a supportive environment for individuals looking to improve their communication and leadership skills. Over the years, the club has evolved into a vibrant community filled with success stories of personal development and newfound confidence. A testament to the club's dedication is its achievement of the "Select Distinguished Club" status during the 2023-2024 program year. By fulfilling 7 out of 10 distinguished goals, the club highlighted its commitment to excellence—a success driven by its vibrant members' relentless focus and perseverance. The strategic insight gained from regular Toastmasters committee meetings and the influential "Moments of Truth" sessions held in 2023 and 2024 are key to this success. Our club members have consistently demonstrated strong performance in various speech contests, with notable achievements across multiple levels. In 2023, members excelled in Evaluation and Table Topics contests, reaching the district level while advancing to the Division Level in the International Speech Contest. Continuing their success into 2024, members again qualified for area-level contests, securing third-place positions in the Evaluation and Table Topics categories, highlighting the club's dedication and competitive spirit. The 50th meeting was based on the theme of serendipity. It was not only a milestone celebration but also a vibrant festival of achievements and growth. The day buzzed with energy as activities like a spirited Treasure Hunt injected enthusiasm and camaraderie among attendees. Distinguished guests, including Kripa Venkitachalam and Madhavi Rao, enriched the occasion with inspiring speeches. Madhavi reignited the club's spirit, while Kripa's discourse on the Growth Mindset and the "Power of Yet" encouraged members to pursue continuous self-improvement. The Cadence Bangalore Toastmasters Club is enthusiastic about its promising future and is committed to creating an environment that promotes personal and professional growth. Many members are close to completing their Toastmasters levels and pathways, and this term, a new group of approximately 30 individuals has joined, bringing the total membership to 52. This vibrant community is just beginning its journey and is eager to reach new milestones together through mutual support and a shared commitment to excellence. The transformations experienced by many club members are truly compelling. They often share how the club has significantly improved their communication skills and boosted their confidence. One member recalls, "Before joining, I found public speaking intimidating. Now, I embrace every opportunity to share my ideas." Another member highlights how the club's supportive environment helped him overcome his fear of public speaking, propelling his career to new heights. This culture of constructive feedback and continuous improvement has inspired countless members to pursue their dreams with renewed determination and optimism. The Cadence Bangalore Toastmasters Club's journey is a living testament to the power of community and the potential within each of us to grow and achieve greatness. 
As the club continues to evolve and inspire, it serves as a beacon for those aspiring to transform their skills and seize their moment in the spotlight. Learn more about life at Cadence. Full Article
an Cleared to Land: An Interview with Cadence Veterans ERG Lead Johnathan Edmonds By community.cadence.com Published On :: Thu, 07 Nov 2024 18:30:00 GMT Each November, we are reminded of the bravery and dedication of those who have served our country. At Cadence, we thank our Veteran employees for their patriotism by reaffirming our commitment to honoring their sacrifices and recognizing their contributions to our business success. Our diverse and inclusive culture is strengthened by the unique perspective of our Veteran employees, and we are proud to support the Veterans Inclusion Group as a space for community members and their allies to connect. In celebration of Veterans Day, we were excited to catch up with Johnathan Edmonds, Veterans Inclusion Group Lead and Design Engineering Director, for a heartfelt chat on his journey through military service to leadership within Cadence. Throughout the conversation, he shared the importance of creating space for Veterans, the skills they offer, and his aspirations for what the Veterans Inclusion Group will achieve in the years ahead. Oh yeah, and he flies planes, too! Join us as we dive into what makes this holiday special for so many across the nation and how we can respectfully commemorate it together. Johnathan, you’re a retired Air Force Reservist, pilot, and now a Design Engineering Director. Can you tell us about your journey from the military to your current role at Cadence? I started my military and electronics journey in the Navy. I enlisted at 18 and served for six years as an aviation electronics technician. During this time, I was able to learn about and repair electronics on planes. This set me up for success, and when I was honorably discharged, I attended Virginia Tech to study computer engineering. Once I graduated, I continued my career as an engineer, but I still wanted to be a military pilot. From my past experience, I knew the reserves were an option where I could learn to fly and still have a civilian career. Not only was I lucky enough to get selected to go to pilot training, but after I returned from flight school, my luck grew, and I was hired at Cadence. Cadence has supported me throughout my military career, which has been a great benefit, as many companies don’t support reservists. The best thing about serving and being employed at Cadence is how I could blend my skill sets to further the Air Force’s mission and achieve great things in engineering. As the first lead of Cadence’s Veterans Inclusion Group, you played an integral part in growing our culture and building community at the company since launching the group four years ago. What inspired you to take on the role of Inclusion Group Lead? I was inspired by three things: camaraderie, service, and outreach. I wanted to see if we could achieve a similar sense of community through the Veterans Inclusion Group as we had during our service life. I also wanted to see how we could better serve our Veterans here at Cadence. I wanted to explore any benefits that could be expanded, roles that could be developed by Vets, and, lastly, I wanted to serve a broader community. COVID-19 put a damper on some of the community support, but we are getting back on track with Veteran employment programs and volunteer efforts like Carry the Load and Gold Star Families. Why is it important to have this space dedicated to Veteran employees? There are many reasons! Networking, for one, creates a stronger, more unified Cadence culture. 
Two, Vets face a variety of issues not generally understood by those who have not served, such as PTSD, where to get help for disabilities, how to get an old medical record, etc. As I mentioned, I’m also passionate about connecting Veterans with employment and job opportunities. It is so nice to work for a company that actively recruits Vets. We have our own “language,” if you will, so it’s nice to have a space to talk in the language that we are familiar with. What have been some of your favorite moments leading this group over the past few years? Are there any “wins” that you would like to recognize? We have a lot of wins. Events held during COVID-19 and getting past COVID-19, donating to worthwhile causes, and hosting guest speakers are all fantastic milestones and accomplishments. That said, the biggest win is the hiring of new Veteran employees. Mark Murphy, Corporate VP of Sales Operations, and I have both welcomed Vets to our team during this time, and it is such a joy to watch what someone can do when given the opportunity to succeed in the right environment. As you are set to transition out of the lead role next year, what do you hope to see the Veterans Inclusion Group accomplish next? My hope is that the Veterans Inclusion Group partners with other companies, expanding our reach externally and exploring new opportunities to engage Veterans outside of Cadence. Johnathan (left) speaks on an inclusion group panel, along with David Sallard (center), lead of Cadence's Black Inclusion Group and Sr. Principal Application Engineer; Christina Jamerson (on screen), lead of Cadence's Abilities Inclusion Group and Demand Generation Director; and Dianne Rambke (right), lead of Cadence's Latinx Inclusion Group and Marketing Communications Director. What are the important ways that people can signal inclusion and respectfully honor Veterans at work? What are the most meaningful or impactful actions employees everywhere can take to support Veteran coworkers? I think there is one answer to both questions. I recommend that people engage with their companies’ employee resource groups (ERGs) and have conversations with them. Opening up the lines of communication will lead to new paths in their journeys. What are you looking forward to in 2025, both personally and professionally? In 2025, professionally, I am looking forward to taking mixed-signal systems and verification to another level by including emulation, automatic model generation, and seeing which boundaries we can push in our SerDes and Chiplets products. Personally, I am looking forward to making my SXS street legal so I can drive places without getting a ticket, seeing my children participate in sports, church, and school, and taking my wife on vacation to Europe or somewhere else we can unplug. Learn more about Cadence’s Inclusion Groups, diverse culture, and commitment to belonging. Full Article
an Randomization considerations for PCIe Integrity and Data Encryption Verification Challenges By community.cadence.com Published On :: Fri, 08 Nov 2024 05:00:00 GMT Peripheral Component Interconnect Express (PCIe) is a high-speed interface standard widely used for connecting processors, memory, and peripherals. With the increasing reliance on PCIe to handle sensitive data and critical high-speed data transfer, ensuring data integrity and encryption during verification is an essential goal. In the field of verification, randomization is a key technique that drives robust PCIe verification. It introduces unpredictability to simulate real-world conditions and uncover hidden bugs in the design. This blog examines the significance of randomization in PCIe IDE verification, focusing on how it ensures data integrity and encryption reliability, while also highlighting the unique challenges it presents. For more details on PCIe IDE, you can refer to Introducing PCIe's Integrity and Data Encryption Feature. The Importance of Data Integrity and Data Encryption in PCIe Devices Data Integrity: Ensures that the transmitted data arrives unchanged from source to destination. Even minor corruption in data packets can compromise system reliability, making integrity a critical aspect of PCIe verification. Data Encryption: Protects sensitive data from unauthorized access during transmission. Encryption in PCIe follows a standard to secure information while operating at high speeds. Maintaining both data integrity and data encryption at PCIe's high-speed data transfer rates of 64GT/s in PCIe 6.0 and 128GT/s in PCIe 7.0 is essential for all endpoint devices. However, validating these mechanisms requires comprehensive testing and verification methodologies, which is where randomization plays a crucial role. You can refer to Why IDE Security Technology for PCIe and CXL? for more details on this. Randomization in PCIe Verification Randomization refers to the generation of test scenarios with unpredictable inputs and conditions to expose corner cases. In PCIe verification, this technique helps ensure that all possible behaviors are tested, including rare or unexpected situations that could cause data corruption or encryption failures. For PCIe IDE verification, randomization helps us verify behavior more efficiently. Randomization for Data Integrity Verification Here are some approaches to randomized verification that mimic real-world traffic conditions, uncovering subtle integrity issues that might not surface with conventional verification methods. 1. Randomized Packet Injection: In this technique, randomized data packets are injected into the communication stream between devices. We inject random, malformed, or out-of-sequence packets into the PCIe link and mix valid and invalid IDE-encrypted packets to check the system's ability to detect and reject unauthorized or invalid packets, and to confirm that encryption and decryption occur correctly across packets. During verification, we check whether the system logs proper errors or alerts when encountering invalid packets. This ensures coverage of different data paths and robust protocol checking. The technique helps assess the resilience of the IDE feature in PCIe in the following areas: (i) Data corruption: Detecting whether the system can maintain data integrity. (ii) Encryption failures: Testing the robustness of the encryption under random data injection. (iii) Packet ordering errors: Ensuring reordering does not affect data delivery.
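To make the randomized packet-injection idea concrete, here is a minimal, self-contained sketch in plain Python rather than a real UVM/SystemVerilog environment; the packet fields, corruption types, and weights are illustrative assumptions, not the actual PCIe IDE TLP format.

```python
# Minimal sketch of seeded, constrained-random packet injection (illustrative only).
# The packet structure and the mix weights are hypothetical, not the IDE TLP layout.
import random

SEED = 0xC0FFEE          # fixed seed so any failure can be reproduced
rng = random.Random(SEED)

def make_packet(seq):
    """Build one packet; most are valid, some are deliberately corrupted."""
    kind = rng.choices(
        ["valid", "malformed", "out_of_sequence", "bad_mac"],
        weights=[70, 10, 10, 10],
    )[0]
    payload = bytes(rng.getrandbits(8) for _ in range(rng.randint(4, 64)))
    pkt = {"seq": seq, "kind": kind, "payload": payload}
    if kind == "out_of_sequence":
        pkt["seq"] = seq + rng.randint(2, 5)   # skip ahead to break ordering
    if kind == "malformed":
        pkt["payload"] = payload[:1]           # truncate below a legal length
    return pkt

def dut_accepts(pkt):
    """Stand-in for the design under test: should reject anything invalid."""
    return pkt["kind"] == "valid"

# Drive a short randomized stream and check the expected accept/reject behavior.
for i in range(1000):
    pkt = make_packet(i)
    expected = pkt["kind"] == "valid"
    assert dut_accepts(pkt) == expected, f"DUT mishandled packet {i}: {pkt['kind']}"
print("randomized packet-injection sweep passed with seed", hex(SEED))
```

In a real flow, the `dut_accepts` stand-in would be replaced by the response observed from the device under test, while the fixed seed keeps any failing stimulus reproducible.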
2. Random Errors and Fault Injection: This involves simulating random bit flips, PCRC errors, or protocol violations to help validate the robustness of PCIe's error detection and correction mechanisms. These techniques help assess how well the PCIe IDE implementation: (i) Detects and responds to unexpected errors. (ii) Maintains secure communication under stress. (iii) Follows the PCIe error recovery and reporting mechanisms (AER – Advanced Error Reporting). (iv) Ensures encryption and decryption states stay synchronized across endpoints. 3. Traffic Pattern Randomization: Randomizing the sequence, size, and timing of data packets helps test how the device maintains data integrity under heavy, unpredictable traffic loads. Randomization for Data Encryption Verification Encryption adds complexity to verification, as encrypted data streams are not readable by traditional checks. Randomization becomes essential to test how encryption behaves under different scenarios, and it ensures that vulnerabilities, such as key reuse or predictable patterns, are identified and mitigated. 1. Random Encryption Keys and Payloads: Randomly varying keys and payloads helps validate the correctness of encryption without hardcoded assumptions. This ensures that the encryption logic behaves correctly across all possible inputs. 2. Randomized Initialization Vectors (IVs): Many encryption protocols require a unique IV for each transaction. Randomized IVs ensure that encryption does not repeat patterns. To understand the IDE key management flow, we can follow the diagram below, which illustrates a detailed example key programming flow using the IDE_KM protocol. Figure 1: IDE_KM Example As Figure 1 shows, the IDE_KM protocol involves the start of an IDE_KM session, device capability discovery, a key request from the host, key programming to the PCIe device, and key acknowledgment. First, the host starts the IDE_KM session by detecting the presence of the PCIe devices; if a device supports the IDE protocol, the system continues with the key programming process. A query then discovers the device's encryption capabilities and establishes whether the device supports dynamic key updates or static keys. The host then sends a request to the Key Management Entity to obtain a key suitable for the device. Once the key is obtained, the host programs it into the IDE controller on the PCIe endpoint, so both the host and the device share the same key to encrypt and authenticate traffic. The device acknowledges that it has received and successfully installed the encryption key, and the acknowledgment message is sent back to the host. Once both the host and the PCIe endpoint are configured with the key, a secure communication channel is established, and from this point all data transmitted over the PCIe link is encrypted to maintain confidentiality and integrity. IDE_KM thus plays a crucial role in distributing keys securely and maintaining encryption and integrity for PCIe transactions, and the randomized key approach ensures that encryption does not repeat patterns.
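As an illustration of the random-key and unique-IV points above, here is a minimal Python sketch; it is not the IDE_KM protocol itself, and the stream count, key size, and IV size are assumptions borrowed from AES-256-GCM conventions.

```python
# Illustrative sketch: per-stream random keys plus an IV-uniqueness check.
# Key/IV sizes follow AES-256-GCM conventions; the stream IDs are made up.
import random
import secrets

class StreamCrypto:
    def __init__(self, stream_id):
        self.stream_id = stream_id
        self.key = secrets.token_bytes(32)   # fresh 256-bit key per stream
        self.used_ivs = set()

    def next_iv(self):
        """Draw a random 96-bit IV and verify it was never used before."""
        iv = secrets.token_bytes(12)
        assert iv not in self.used_ivs, f"IV reuse detected on stream {self.stream_id}"
        self.used_ivs.add(iv)
        return iv

# Exercise a few selective streams with randomized payload sizes.
rng = random.Random(2024)
streams = [StreamCrypto(sid) for sid in range(4)]
for _ in range(10000):
    s = rng.choice(streams)
    iv = s.next_iv()
    payload = secrets.token_bytes(rng.randint(16, 256))
    # A real environment would encrypt 'payload' with (s.key, iv) and compare
    # the DUT's ciphertext and MAC against this reference model.
print("no key or IV reuse observed across", len(streams), "streams")
```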
3. Randomization PHE: Partial Header Encryption (PHE) is an additional mechanism added to Integrity and Data Encryption (IDE) in PCIe 6.0. PHE should be validated using a variety of traffic, and incorporating randomization into the APIs provided for validating the PHE feature makes the encryption coverage more robust. Partial Header Encryption in Integrity and Data Encryption for PCIe has more detailed information on this. Figure 2: High-Level Flow for Partial Header Encryption 4. Randomization on IDE Address Association Register values: IDE Address Association Registers 1/2/3 are supposed to be configured according to the memory address range of the IDE partner ports. The fields of the IDE address registers are split into multiple values, such as Memory Base Lower, Memory Limit Lower, Memory Base Upper, and Memory Limit Upper. An IDE implementation can have multiple register blocks covering 32-bit or 64-bit addresses, different register sizes, 0-255 selective streams, 0-15 address blocks, and so on. Randomized verification can help cover all of these corner cases. Please refer to Figure 3. Figure 3: IDE Address Association Register 5. Random Faults During Encryption: Injecting random faults (e.g., dropped packets or timing mismatches) ensures the system can handle disruptions and prevent data leakage. Challenges of IDE Randomization and Their Solutions Randomization introduces a vast number of scenarios, making it computationally intensive to simulate every possibility. Constrained randomization limits random inputs to valid ranges while still covering edge cases, and coverage-driven verification ensures critical scenarios are tested without excessive redundancy. Verifying encrypted data with random inputs increases complexity: encryption masks the data, making it hard to verify outputs without compromising security. Here we can implement various IDE checks on the IDE callback to analyze encrypted traffic without decrypting it. Randomization can also trigger unexpected failures that are often difficult to reproduce. With seed-based randomization, a specific seed generates a repeatable random sequence, which helps in reproducing and analyzing the behavior precisely. Conclusion Randomization is a powerful technique in PCIe verification, ensuring robust validation of both data integrity and data encryption. It helps uncover subtle bugs and edge cases that non-randomized testing might miss. Cadence PCIe VIP supports full-fledged IDE verification with rigorous randomization that ensures data integrity, and robust, reliable encryption mechanisms ensure secure and efficient data communication. Randomization also brings various challenges, and to overcome them we adopt a combination of constrained randomization, seed-based testing, and coverage-driven verification. As PCIe continues to evolve toward higher speeds and stronger security demands, Cadence PCIe VIP keeps pace with industry requirements and verifies high-performance systems that safeguard data in real-world environments. For more information, you can refer to Verification of Integrity and Data Encryption (IDE) for PCIe Devices and Industry's First Adopted VIP for PCIe 7.0. More Information: For more information on how Cadence PCIe Verification IP and TripleCheck VIP enable users to confidently verify IDE, see our VIP for PCI Express, VIP for Compute Express Link, and TripleCheck for PCI Express. For more information on PCIe in general, and on the various PCI standards, see the PCI-SIG website. Full Article
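Tying together the constrained and seed-based randomization ideas from the post above, here is a small Python sketch of how address-association-style ranges might be randomized within legal bounds and replayed from a single seed; the field widths, alignment rule, and block count are simplified assumptions, not the exact register layout from the specification.

```python
# Illustrative constrained randomization of address-association-style ranges.
# Field packing is simplified; real IDE Address Association Registers 1/2/3
# split base/limit across lower/upper fields per the PCIe specification.
import random

def randomize_address_blocks(seed, num_blocks=16, addr_bits=64, align=0x100000):
    """Return reproducible (base, limit) pairs, aligned and non-decreasing."""
    rng = random.Random(seed)          # same seed -> same register programming
    max_addr = (1 << addr_bits) - 1
    blocks = []
    for _ in range(num_blocks):
        base = rng.randrange(0, max_addr, align)
        span = rng.randrange(align, 64 * align, align)
        limit = min(base + span - 1, max_addr)
        assert base % align == 0 and base <= limit   # legal-range constraint
        blocks.append((base, limit))
    return blocks

# Reproducibility check: the failing seed from a nightly run can be replayed.
run_a = randomize_address_blocks(seed=0x1DE)
run_b = randomize_address_blocks(seed=0x1DE)
assert run_a == run_b
for base, limit in run_a[:3]:
    print(f"base=0x{base:016x} limit=0x{limit:016x}")
```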
an Replace Cache using TCL command By community.cadence.com Published On :: Wed, 21 Mar 2018 09:30:10 GMT Hello, I'm using OrCAD 17.2, and at the company I'm working at there was a change in the database folder (from drive F to G, for example), which affects the synchronize option in the Part Manager. Manually changing each part in the Design Cache can be a pain. Is there any way I can write a TCL script that will replace one part's cache with another? Better still if I can read from a table and write from another column. I would really appreciate an example. Thanks for the help. Full Article
an TensorFlow Optimization in DSVM: Azure and Cadence By community.cadence.com Published On :: Mon, 22 Oct 2018 12:41:39 GMT Hello Folks, Problem statement first: How does one properly set up TensorFlow to run on a DSVM using a remote Docker environment? Can this be done in aml_config/*.runconfig? I receive the following message and would like to be able to utilize the increased speed of the extended FMA operations: tensorflow/core/platform/cpu_feature_guard.cc:140] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA Background: I use a local Docker environment managed through Azure ML Workbench for initial testing and code validation so that I'm not running an expensive DSVM constantly. Once I assess that my code is to my liking, I then run it on a remote Docker instance on an Azure DSVM. I want a consistent conda environment across my compute environments, so this works out extremely well. However, I cannot figure out how to control the TensorFlow build to optimize for the hardware at hand (i.e., my local Docker on macOS vs. the remote Docker on an Ubuntu DSVM). Full Article
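As a first sanity check related to the question above, before touching any runconfig settings, a minimal sketch can confirm whether the target machine's CPU actually exposes the AVX2/FMA features the warning refers to; it assumes the py-cpuinfo package is installed in the conda environment, which is not something shipped with TensorFlow or the DSVM image. If both features are present, the stock pip wheel will still print the warning, since it is built for broad compatibility; taking advantage of AVX2/FMA would mean installing or building a TensorFlow wheel compiled with those instruction sets enabled.

```python
# Quick, environment-agnostic check of CPU features on the local or remote box.
# Requires the py-cpuinfo package (pip install py-cpuinfo), an assumption here.
import cpuinfo

flags = set(cpuinfo.get_cpu_info().get("flags", []))
for feature in ("avx2", "fma"):
    status = "available" if feature in flags else "missing"
    print(f"{feature.upper():5s}: {status}")
```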
an Matlab cannot open PSpice: a prompt says orCEFSimpleUI.exe has stopped working! By community.cadence.com Published On :: Thu, 09 Apr 2020 12:08:58 GMT Cadence_SPB_17.4-2019 + Matlab R2019a. Please refer to the steps in this document: 1. Open BJT_AMP.opj. 2. Set the Matlab path. 3. Open BJT_AMP_SLPS.slx. 4. After opening it, configure the PSpiceBlock; a prompt appears saying orCEFSimpleUI.exe has stopped working. 5. Add the block. 6. Same problem. 7. Open pspsim.slx. 8. Same problem. 9. Open C:\Cadence\Cadence_SPB_17.4-2019\tools\bin\orCEFSimpleUI.exe and orCEFSimple.exe. 10. Same problem. I would like to ask how to solve this, thank you very much! Full Article
an The code used to Replace Cache using TCL command By community.cadence.com Published On :: Fri, 19 Apr 2024 10:16:17 GMT Use the DBO function DboLib_ReplaceCache to do the job of "Replace Cache". To make the job easier, type the code below; it is a wrapper around the function mentioned above.

set lStatus [DboState]
set lSession $::DboSession_s_pDboSession
DboSession -this $lSession
set lDesignsIter [$lSession NewDesignsIter $lStatus]
set lDesign [$lDesignsIter NextDesign $lStatus]
set lNullObj NULL

# Use forward slashes (or brace-quoted paths) so Tcl does not swallow the
# backslashes in Windows paths
set oldLibName [DboTclHelper_sMakeCString "E:/PROJECT_WORKLIB.OLB"]
set newLibName [DboTclHelper_sMakeCString "E:/MCU_PARTS_LIB.OLB"]

# DboLib_ReplaceCache wrapper: replaces the cached part from the old library
# with the same-named part from the new library
proc ReplaceCacheByName {partName} {
    global oldLibName
    global newLibName
    global lDesign
    set lPartStr [DboTclHelper_sMakeCString $partName]
    $lDesign ReplaceCache $lPartStr $oldLibName $lPartStr $newLibName 0 1
}

Then use the Tcl command like below to do the real job:

ReplaceCacheByName "CL10B104KB8NNNC_C12"

Full Article
an How to design an enhancement-mode eGaN (EPC8002) switch in Cadence By community.cadence.com Published On :: Tue, 06 Aug 2024 08:44:04 GMT Hi, I need to design an EPC8002 eGaN switch in Cadence. Can someone provide a step-by-step guide on how to add the EPC8002 to my Cadence setup? I am working on BCD180. Thank you, Ihsan Full Article
an Can't request Tensilica SDK - Error 500 By community.cadence.com Published On :: Tue, 22 Oct 2024 14:25:12 GMT Hi, I'm looking to download the Tensilica SDK for evaluation, but I can't get past the registration form: it fails with an Error 500. Full Article
an Here Is Why the Indian Voter Is Saddled With Bad Economics By indiauncut.com Published On :: 2019-02-03T03:54:17+00:00 This is the 15th installment of The Rationalist, my column for the Times of India. It’s election season, and promises are raining down on voters like rose petals on naïve newlyweds. Earlier this week, the Congress party announced a minimum income guarantee for the poor. This Friday, the Modi government released a budget full of sops. As the days go by, the promises will get bolder, and you might feel important that so much attention is being given to you. Well, the joke is on you. Every election, HL Mencken once said, is “an advance auction sale of stolen goods.” A bunch of competing mafias fight to rule over you for the next five years. You decide who wins, on the basis of who can bribe you better with your own money. This is an absurd situation, which I tried to express in a limerick I wrote for this page a couple of years ago: POLITICS: A neta who loves currency notes/ Told me what his line of work denotes./ ‘It is kind of funny./ We steal people’s money/And use some of it to buy their votes.’ We’re the dupes here, and we pay far more to keep this circus going than this circus costs. It would be okay if the parties, once they came to power, provided good governance. But voters have given up on that, and now only want patronage and handouts. That leads to one of the biggest problems in Indian politics: We are stuck in an equilibrium where all good politics is bad economics, and vice versa. For example, the minimum guarantee for the poor is good politics, because the optics are great. It’s basically Garibi Hatao: that slogan made Indira Gandhi a political juggernaut in the 1970s, at the same time that she unleashed a series of economic policies that kept millions of people in garibi for decades longer than they should have been. This time, the Congress has released no details, and keeping it vague makes sense because I find it hard to see how it can make economic sense. Depending on how they define ‘poor’, how much income they offer and what the cost is, the plan will either be ineffective or unworkable. The Modi government’s interim budget announced a handout for poor farmers that seemed rather pointless. Given our agricultural distress, offering a poor farmer 500 bucks a month seems almost like mockery. Such condescending handouts solve nothing. The poor want jobs and opportunities. Those come with growth, which requires structural reforms. Structural reforms don’t sound sexy as election promises. Handouts do. A classic example is farm loan waivers. We have reached a stage in our politics where every party has to promise them to assuage farmers, who are a strong vote bank everywhere. You can’t blame farmers for wanting them – they are a necessary anaesthetic. But no government has yet made a serious attempt at tackling the root causes of our agricultural crisis. Why is it that Good Politics in India is always Bad Economics? Let me put forth some possible reasons. One, voters tend to think in zero-sum ways, as if the pie is fixed, and the only way to bring people out of poverty is to redistribute. The truth is that trade is a positive-sum game, and nations can only be lifted out of poverty when the whole pie grows. But this is unintuitive. Two, Indian politics revolves around identity and patronage. 
The spoils of power are limited – that is indeed a zero-sum game – so you’re likely to vote for whoever can look after the interests of your in-group rather than care about the economy as a whole. Three, voters tend to stay uninformed for good reasons, because of what Public Choice economists call Rational Ignorance. A single vote is unlikely to make a difference in an election, so why put in the effort to understand the nuances of economics and governance? Just ask, what is in it for me, and go with whatever seems to be the best answer. Four, Politicians have a short-term horizon, geared towards winning the next election. A good policy that may take years to play out is unattractive. A policy that will win them votes in the short term is preferable. Sadly, no Indian party has shown a willingness to aim for the long term. The Congress has produced new Gandhis, but not new ideas. And while the BJP did make some solid promises in 2014, they did not walk that talk, and have proved to be, as Arun Shourie once called them, UPA + Cow. Even the Congress is adopting the cow, in fact, so maybe the BJP will add Temple to that mix? Benjamin Franklin once said, “Democracy is two wolves and a lamb voting on what to have for lunch.” This election season, my friends, the people of India are on the menu. You have been deveined and deboned, marinated with rhetoric, seasoned with narrative – now enter the oven and vote. The India Uncut Blog © 2010 Amit Varma. All rights reserved. Follow me on Twitter. Full Article
an To Escalate or Not? This Is Modi’s Zugzwang Moment By indiauncut.com Published On :: 2019-03-03T03:19:05+00:00 This is the 17th installment of The Rationalist, my column for the Times of India. One of my favourite English words comes from chess. If it is your turn to move, but any move you make makes your position worse, you are in ‘Zugzwang’. Narendra Modi was in zugzwang after the Pulwama attacks a few days ago—as any Indian prime minister in his place would have been. An Indian PM, after an attack for which Pakistan is held responsible, has only unsavoury choices in front of him. He is pulled in two opposite directions. One, strategy dictates that he must not escalate. Two, politics dictates that he must. Let’s unpack that. First, consider the strategic imperatives. Ever since both India and Pakistan became nuclear powers, a conventional war has become next to impossible because of the threat of a nuclear war. If India escalates beyond a point, Pakistan might bring their nuclear weapons into play. Even a limited nuclear war could cause millions of casualties and devastate our economy. Thus, no matter what the provocation, India needs to calibrate its response so that the Pakistan doesn’t take it all the way. It’s impossible to predict what actions Pakistan might view as sufficient provocation, so India has tended to play it safe. Don’t capture territory, don’t attack military assets, don’t kill civilians. In other words, surgical strikes on alleged terrorist camps is the most we can do. Given that Pakistan knows that it is irrational for India to react, and our leaders tend to be rational, they can ‘bleed us with a thousand cuts’, as their doctrine states, with impunity. Both in 2001, when our parliament was attacked and the BJP’s Atal Bihari Vajpayee was PM, and in 2008, when Mumbai was attacked and the Congress’s Manmohan Singh was PM, our leaders considered all the options on the table—but were forced to do nothing. But is doing nothing an option in an election year? Leave strategy aside and turn to politics. India has been attacked. Forty soldiers have been killed, and the nation is traumatised and baying for blood. It is now politically impossible to not retaliate—especially for a PM who has criticized his predecessor for being weak, and portrayed himself as a 56-inch-chested man of action. I have no doubt that Modi is a rational man, and knows the possible consequences of escalation. But he also knows the possible consequences of not escalating—he could dilute his brand and lose the elections. Thus, he is forced to act. And after he acts, his Pakistan counterpart will face the same domestic pressure to retaliate, and will have to attack back. And so on till my home in Versova is swallowed up by a nuclear crater, right? Well, not exactly. There is a way to resolve this paradox. India and Pakistan can both escalate, not via military actions, but via optics. Modi and Imran Khan, who you’d expect to feel like the loneliest men on earth right now, can find sweet company in each other. Their incentives are aligned. Neither man wants this to turn into a full-fledged war. Both men want to appear macho in front of their domestic constituencies. Both men are masters at building narratives, and have a pliant media that will help them. Thus, India can carry out a surgical strike and claim it destroyed a camp, killed terrorists, and forced Pakistan to return a braveheart prisoner of war. 
Pakistan can say India merely destroyed two trees plus a rock, and claim the high moral ground by returning the prisoner after giving him good masala tea. A benign military equilibrium is maintained, and both men come out looking like strong leaders: a win-win game for the PMs that avoids a lose-lose game for their nations. They can give themselves a high-five in private when they meet next, and Imran can whisper to Modi, “You’re a good spinner, bro.” There is one problem here, though: what if the optics don’t work? If Modi feels that his public is too sceptical and he needs to do more, he might feel forced to resort to actual military escalation. The fog of politics might obscure the possible consequences. If the resultant Indian military action causes serious damage, Pakistan will have to respond in kind. In the chain of events that then begins, with body bags piling up, neither man may be able to back down. They could end up as prisoners of circumstance—and so could we. *** Also check out: Why Modi Must Learn to Play the Game of Chicken With Pakistan—Amit Varma The Two Pakistans—Episode 79 of The Seen and the Unseen India in the Nuclear Age—Episode 80 of The Seen and the Unseen The India Uncut Blog © 2010 Amit Varma. All rights reserved. Follow me on Twitter. Full Article
an Lessons from an Ankhon Dekhi Prime Minister By indiauncut.com Published On :: 2019-05-05T03:17:51+00:00 This is the 19th installment of The Rationalist, my column for the Times of India. A friend of mine was very impressed by the interview Narendra Modi granted last week to Akshay Kumar. ‘Such a charming man, such great work ethic,’ he gushed. ‘He is the kind of uncle I would want my kids to have.’ And then, in the same breath, he asked, ‘How can such a good man be such a bad prime minister?” I don’t want to be uncharitable and suggest that Modi’s image is entirely manufactured, so let’s take the interview at face value. Let’s also grant Modi his claims about the purity of his neeyat (intentions), and reframe the question this way: when it comes to public policy, why do good intentions often lead to bad outcomes? To attempt an answer, I’ll refer to a story a friend of mine, who knows Modi well, once told me about him. Modi was chilling with his friends at home more than a decade ago, and told them an incident from his childhood. His mother was ill once, and the young Narendra was tending to her. The heat was enervating, so the boy went to the switchboard to switch on the fan. But there was no electricity. My friend said that as he told this story, Modi’s eyes filled with tears. Even after all these years, he was moved by the memory. My friend used this story to make the point that Modi’s vision of the world is experiential. If he experiences something, he understands it. When he became chief minister of Gujarat, he made it his stated mission to get reliable electricity to every part of Gujarat. No doubt this was shaped by the time he flicked a switch as a young boy and the fan did not budge. Similarly, he has given importance to things like roads and cleanliness, since he would have experienced the impact of those as a young man. My term for him, inspired by Rajat Kapoor’s 2014 film, is ‘the ankhon dekhi prime minister’. At one level, this is a good thing. He sees a problem and works for the rest of his life to solve it. But what of things he cannot experience? The economy is a complex beast, as is society itself, and beyond a certain level, you need to grasp abstract concepts to understand how the world works. You cannot experience them. For example, spontaneous order, or the idea that society and markets, like language, cannot be centrally directed or planned. Or the positive-sum nature of things, which is the engine of our prosperity: the idea that every transaction is a win-win game, and that for one person to win, another does not have to lose. Or, indeed, respect for individual rights and free speech. One understands abstract concepts by reading about them, understanding them, applying them to the real world. Modi is not known to be a reader, and this is not his fault. Given his background, it is a near-miracle that he has made it this far. He wasn’t born into a home with a reading culture, and did not have either the resources or the time when he was young to devote to reading. The only way he could learn about the world, thus, was by experiencing it. There are two lessons here, one for Modi himself and others in his position, and another for everyone. The lesson in this for Modi is a lesson for anyone who rises to such an important position, even if he is the smartest person in the world. That lesson is to have humility about the bounds of your knowledge, and to surround yourself with experts who can advise you well. Be driven by values and not confidence in your own knowledge. 
Gather intellectual giants around you, and stand on their shoulders. Modi did not do this in the case of demonetisation, which he carried out against the advice of every expert he consulted. We all know the damage it caused to the economy. The other learning from this is for all of us. How do we make sense of the world? By connecting dots. An ankhon-dekhi approach will get us very few dots, and our view of the world will be blurred and incomplete. The best way to gather more dots is reading. The more we read, the better we understand the world, and the better the decisions we take. When we can experience a thousand lives through books, why restrict ourselves to one? A good man with noble intentions can make bad decisions with horrible consequences. The only way to hedge against this is by staying humble and reading more. So when you finish reading this piece, think of an unread book that you’d like to read today – and read it! The India Uncut Blog © 2010 Amit Varma. All rights reserved. Follow me on Twitter. Full Article
an Can Amit Shah do for India what he did for the BJP? By indiauncut.com Published On :: 2019-06-02T02:07:40+00:00 This is the 20th installment of The Rationalist, my column for the Times of India. Amit Shah’s induction into the union cabinet is such an interesting moment. Even partisans who oppose the BJP, as I do, would admit that Shah is a political genius. Under his leadership, the BJP has become an electoral behemoth in the most complicated political landscape in the world. The big question that now arises is this: can Shah do for India what he did for the BJP? This raises a perplexing question: in the last five years, as the BJP has flourished, India has languished. And yet, the leadership of both the party and the nation are more or less the same. Then why hasn’t the ability to manage the party translated to governing the country? I would argue that there are two reasons for this. One, the skills required in those two tasks are different. Two, so are the incentives in play. Let’s look at the skills first. Managing a party like the BJP is, in some ways, like managing a large multinational company. Shah is a master at top-down planning and micro-management. How he went about winning the 2014 elections, described in detail in Prashant Jha’s book How the BJP Wins, should be a Harvard Business School case study. The book describes how he fixed the BJP’s ground game in Uttar Pradesh, picking teams for 147,000 booths in Uttar Pradesh, monitoring them, and keeping them accountable. Shah looked at the market segmentation in UP, and hit upon his now famous “60% formula”. He realised he could not deliver the votes of Muslims, Yadavs and Jatavs, who were 40% of the population. So he focussed on wooing the other 60%, including non-Yadav OBCs and non-Jatav Dalits. He carried out versions of these caste reconfigurations across states, and according to Jha, covered “over 5 lakh kilometres” between 2014 and 2017, consolidating market share in every state in this country. He nurtured “a pool of a thousand new OBC and Dalit leaders”, going well beyond the posturing of other parties. That so many Dalits and OBCs voted for the BJP in 2019 is astonishing. Shah went past Mandal politics, managing to subsume previously antagonistic castes and sub-castes into a broad Hindutva identity. And as the BJP increased its depth, it expanded its breadth as well. What it has done in West Bengal, wiping out the Left and weakening Mamata Banerjee, is jaw-dropping. With hindsight, it may one day seem inevitable, but only a madman could have conceived it, and only a genius could have executed it. Good man to be Home Minister then, eh? Not quite. A country is not like a large company or even a political party. It is much too complex to be managed from the top down, and a control freak is bound to flounder. The approach needed is very different. Some tasks of governance, it is true, are tailor-made for efficient managers. Building infrastructure, taking care of roads and power, building toilets (even without an underlying drainage system) and PR campaigns can all be executed by good managers. But the deeper tasks of making an economy flourish require a different approach. They need a light touch, not a heavy hand. The 20th century is full of cautionary tales that show that economies cannot be centrally planned from the top down. Examples of that ‘fatal conceit’, to use my hero Friedrich Hayek’s term, include the Soviet Union, Mao’s China, and even the lady Modi most reminds me of, Indira Gandhi. 
The task of the state, when it comes to the economy, is to administer a strong rule of law, and to make sure it is applied equally. No special favours to cronies or special interest groups. Just unleash the natural creativity of the people, and don’t try to micro-manage. Sadly, the BJP’s impulse, like that of most governments of the past, is a statist one. India should have a small state that does a few things well. Instead, we have a large state that does many things badly, and acts as a parasite on its people. As it happens, the few things that we should do well are all right up Shah’s managerial alley. For example, the rule of law is effectively absent in India today, especially for the poor. As Home Minister, Shah could fix this if he applied the same zeal to governing India as he did to growing the BJP. But will he? And here we come to the question of incentives. What drives Amit Shah: maximising power, or serving the nation? What is good for the country will often coincide with what is good for the party – but not always. When they diverge, which path will Shah choose? So much rests on that. The India Uncut Blog © 2010 Amit Varma. All rights reserved. Follow me on Twitter. Full Article
an Trump and Modi are playing a Lose-Lose game By indiauncut.com Published On :: 2019-06-23T03:26:43+00:00 This is the 22nd installment of The Rationalist, my column for the Times of India. Trade wars are on the rise, and it’s enough to get any nationalist all het up and excited. Earlier this week, Narendra Modi’s government announced that it would start imposing tariffs on 28 US products starting today. This is a response to similar treatment towards us from the US. There is one thing I would invite you to consider: Trump and Modi are not engaged in a war with each other. Instead, they are waging war on their own people. Let’s unpack that a bit. Part of the reason Trump came to power is that he provided simple and wrong answers for people’s problems. He responded to the growing jobs crisis in middle America with two explanations: one, foreigners are coming and taking your jobs; two, your jobs are being shipped overseas. Both explanations are wrong but intuitive, and they worked for Trump. (He is stupid enough that he probably did not create these narratives for votes but actually believes them.) The first of those leads to the demonising of immigrants. The second leads to a demonising of trade. Trump has acted on his rhetoric after becoming president, and a modern US version of our old ‘Indira is India’ slogan might well be, “Trump is Tariff. Tariff is Trump.” Contrary to the fulminations of the economically illiterate, all tariffs are bad, without exception. Let me illustrate this with an example. Say there is a fictional product called Brump. A local Brump costs Rs 100. Foreign manufacturers appear and offer better Brumps at a cheaper price, say Rs 90. Consumers shift to foreign Brumps. Manufacturers of local Brumps get angry, and form an interest group. They lobby the government – or bribe it with campaign contributions – to impose a tariff on import of Brumps. The government puts a 20-rupee tariff. The foreign Brumps now cost Rs 110, and people start buying local Brumps again. This is a good thing, right? Local businesses have been helped, and local jobs have been saved. But this is only the seen effect. The unseen effect of this tariff is that millions of Brump buyers would have saved Rs 10-per-Brump if there were no tariffs. This money would have gone out into the economy, been part of new demand, generated more jobs. Everyone would have been better off, and the overall standard of living would have been higher. That brings me to an essential truth about tariffs. Every tariff is a tax on your own people. And every intervention in markets amounts to a distribution of wealth from the people at large to specific interest groups. (In other words, from the poor to the rich.) The costs of this are dispersed and invisible – what is Rs 10 to any of us? – and the benefits are large and worth fighting for: Local manufacturers of Brumps can make crores extra. Much modern politics amounts to manufacturers of Brumps buying politicians to redistribute money from us to them. There are second-order effects of protectionism as well. When the US imposes tariffs on other countries, those countries may respond by imposing tariffs back. Raw materials for many goods made locally are imported, and as these become expensive, so do those goods. That quintessential American product, the iPhone, uses parts from 43 countries. As local products rise in price because of expensive foreign parts, prices rise, demand goes down, jobs are lost, and everyone is worse off. 
Trump keeps talking about how he wants to ‘win’ at trade, but trade is not a zero-sum game. The most misunderstood term in our times is probably ‘trade-deficit’. A country has a trade deficit when it imports more than what it exports, and Trump thinks of that as a bad thing. It is not. I run a trade deficit with my domestic help and my local grocery store. I buy more from them than they do from me. That is fine, because we all benefit. It is a win-win game. Similarly, trade between countries is really trade between the people of both countries – and people trade with each other because they are both better off. To interfere in that process is to reduce the value created in their lives. It is immoral. To modify a slogan often identified with libertarians like me, ‘Tariffs are Theft.’ These trade wars, thus, carry a touch of the absurd. Any leader who imposes tariffs is imposing a tax on his own people. Just see the chain of events: Trump taxes the American people. In retaliation, Modi taxes the Indian people. Trump raises taxes. Modi raises taxes. Nationalists in both countries cheer. Interests groups in both countries laugh their way to the bank. What kind of idiocy is this? How long will this lose-lose game continue? The India Uncut Blog © 2010 Amit Varma. All rights reserved. Follow me on Twitter. Full Article
an Farmers, Technology and Freedom of Choice: A Tale of Two Satyagrahas By indiauncut.com Published On :: 2019-06-30T03:29:02+00:00 This is the 23rd installment of The Rationalist, my column for the Times of India. I had a strange dream last night. I dreamt that the government had passed a law that made using laptops illegal. I would have to write this column by hand. I would also have to leave my home in Mumbai to deliver it in person to my editor in Delhi. I woke up trembling and angry – and realised how Indian farmers feel every single day of their lives. My column today is a tale of two satyagrahas. Both involve farmers, technology and the freedom of choice. One of them began this month – but first, let us go back to the turn of the millennium. As the 1990s came to an end, cotton farmers across India were in distress. Pests known as bollworms were ravaging crops across the country. Farmers had to use increasing amounts of pesticide to keep them at bay. The costs of the pesticide and the amount of labour involved made it unviable – and often, the crops would fail anyway. Then, technology came to the rescue. The farmers heard of Bt Cotton, a genetically modified type of cotton that kept these pests away, and was being used around the world. But they were illegal in India, even though no bad effects had ever been recorded. Well, who cares about ‘illegal’ when it is a matter of life and death? Farmers in Gujarat got hold of Bt Cotton seeds from the black market and planted them. You’ll never guess what happened next. As 2002 began, all cotton crops in Gujarat failed – except the 10,000 hectares that had Bt Cotton. The government did not care about the failed crops. They cared about the ‘illegal’ ones. They ordered all the Bt Cotton crops to be destroyed. It was time for a satyagraha – and not just in Gujarat. The late Sharad Joshi, leader of the Shetkari Sanghatana in Maharashtra, took around 10,000 farmers to Gujarat to stand with their fellows there. They sat in the fields of Bt Cotton and basically said, ‘Over our dead bodies.’ Joshi’s point was simple: all other citizens of India have access to the latest technology from all over. They are all empowered with choice. Why should farmers be held back? The satyagraha was successful. The ban on Bt Cotton was lifted. There are three things I would like to point out here. One, the lifting of the ban transformed cotton farming in India. Over 90% of Indian farmers now use Bt Cotton. India has become the world’s largest producer of cotton, moving ahead of China. According to agriculture expert Ashok Gulati, India has gained US$ 67 billion in the years since from higher exports and import savings because of Bt Cotton. Most importantly, cotton farmers’ incomes have doubled. Two, GMO crops have become standard across the world. Around 190 million hectares of GMO crops have been planted worldwide, and GMO foods are accepted in 67 countries. The humanitarian benefits have been massive: Golden Rice, a variety of rice packed with minerals and vitamins, has prevented blindness in countless new-born kids since it was introduced in the Philippines. Three, despite the fear-mongering of some NGOs, whose existence depends on alarmism, the science behind GMO is settled. No harmful side effects have been noted in all these years, and millions of lives impacted positively. A couple of years ago, over 100 Nobel Laureates signed a petition asserting that GMO foods were safe, and blasting anti-science NGOs that stood in the way of progress. 
There is scientific consensus on this. The science may be settled, but the politics is not. The government still bans some types of GMO seeds, such as Bt Brinjal, which was developed by an Indian company called Mahyco, and used successfully in Bangladesh. More crucially, a variety called HT Bt Cotton, which fights weeds, is also banned. Weeding takes up to 15% of a farmer’s time, and often makes farming unviable. Farmers across the world use this variant – 60% of global cotton crops are HT Bt. Indian farmers are so desperate for it that they choose to break the law and buy expensive seeds from the black market – but the government is cracking down. A farmer in Haryana had his crop destroyed by the government in May. On June 10 this year, a farmer named Lalit Bahale in the Akola District of Maharashtra kicked off a satyagraha by planting banned seeds of HT Bt Cotton and Bt Brinjal. He was soon joined by thousands of farmers. Far from our urban eyes, a heroic fight has begun. Our farmers, already victimised and oppressed by a predatory government in countless ways, are fighting for their right to take charge of their lives. As this brave struggle unfolds, I am left with a troubling question: All those satyagrahas of the past by our great freedom fighters, what were they for, if all they got us was independence and not freedom? The India Uncut Blog © 2010 Amit Varma. All rights reserved. Follow me on Twitter. Full Article
an For this Brave New World of cricket, we have IPL and England to thank By indiauncut.com Published On :: 2019-07-13T23:50:53+00:00 This is the 24th installment of The Rationalist, my column for the Times of India. Back in the last decade, I was a cricket journalist for a few years. Then, around 12 years ago, I quit. I was jaded as hell. Every game seemed like déjà vu, nothing new, just another round on the treadmill. Although I would remember her fondly, I thought me and cricket were done. And then I fell in love again. Cricket has changed in the last few years in glorious ways. There have been new ways of thinking about the game. There have been new ways of playing the game. Every season, new kinds of drama form, new nuances spring up into sight. This is true even of what had once seemed the dullest form of the game, one-day cricket. We are entering into a brave new world, and the team leading us there is England. No matter what happens in the World Cup final today – a single game involves a huge amount of luck – this England side are extraordinary. They are the bridge between eras, leading us into a Golden Age of Cricket. I know that sounds hyperbolic, so let me stun you further by saying that I give the IPL credit for this. And now, having woken up you up with such a jolt on this lovely Sunday morning, let me explain. Twenty20 cricket changed the game in two fundamental ways. Both ended up changing one-day cricket. The first was strategy. When the first T20 games took place, teams applied an ODI template to innings-building: pinch-hit, build, slog. But this was not an optimal approach. In ODIs, teams have 11 players over 50 overs. In T20s, they have 11 players over 20 overs. The equation between resources and constraints is different. This means that the cost of a wicket goes down, and the cost of a dot ball goes up. Critically, it means that the value of aggression rises. A team need not follow the ODI template. In some instances, attacking for all 20 overs – or as I call it, ‘frontloading’ – may be optimal. West Indies won the T20 World Cup in 2016 by doing just this, and England played similarly. And some sides began to realise was that they had been underestimating the value of aggression in one-day cricket as well. The second fundamental way in which T20 cricket changed cricket was in terms of skills. The IPL and other leagues brought big money into the game. This changed incentives for budding cricketers. Relatively few people break into Test or ODI cricket, and play for their countries. A much wider pool can aspire to play T20 cricket – which also provides much more money. So it makes sense to spend the hundreds of hours you are in the nets honing T20 skills rather than Test match skills. Go to any nets practice, and you will find many more kids practising innovative aggressive strokes than playing the forward defensive. As a result, batsmen today have a wider array of attacking strokes than earlier generations. Because every run counts more in T20 cricket, the standard of fielding has also shot up. And bowlers have also reacted to this by expanding their arsenal of tricks. Everyone has had to lift their game. In one-day cricket, thus, two things have happened. One, there is better strategic understanding about the value of aggression. Two, batsmen are better equipped to act on the aggressive imperative. The game has continued to evolve. Bowlers have reacted to this with greater aggression on their part, and this ongoing dialogue has been fascinating. 
The cricket writer Gideon Haigh once told me on my podcast that the 2015 World Cup featured a battle between T20 batting and Test match bowling. This England team is the high watermark so far. Their aggression does not come from slogging. They bat with a combination of intent and skills that allows them to coast at 6-an-over, without needing to take too many risks. In normal conditions, thus, they can coast to 300 – any hitting they do beyond that is the bonus that takes them to 350 or 400. It’s a whole new level, illustrated by the fact that at one point a few days ago, they had seven consecutive scores of 300 to their name. Look at their scores over the last few years, in fact, and it is clear that this is the greatest batting side in the history of one-day cricket – by a margin. There have been stumbles in this World Cup, but in the bigger picture, those are outliers. If England have a bad day in the final and New Zealand play their A-game, England might even lose today. But if Captain Morgan’s men play their A-game, they will coast to victory. New Zealand does not have those gears. No other team in the world does – for now. But one day, they will all have to learn to play like this. The India Uncut Blog © 2010 Amit Varma. All rights reserved. Follow me on Twitter. Full Article
an Start Your Engines: Create and Insert Connect Modules for Mixed-Signal Verification By community.cadence.com Published On :: Tue, 11 Jun 2024 16:17:00 GMT Read this blog to learn how you can easily create and insert connect modules using Spectre AMS Designer with the Verilog-AMS standard language defined by Accellera. (read more) Full Article AMS AMS Designer Mixed-Signal AMS simulation mixed-signal design AMS Verification mixed-signal verification
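For readers who have not written one before, a connect module is an ordinary Verilog-AMS module declared with the connectmodule keyword that bridges a discrete discipline to an analog one. The lines below are a minimal hand-written sketch of a logic-to-electrical converter based on the Accellera Verilog-AMS standard; the module name, supply level, transition time, and simplified X/Z handling are illustrative assumptions, not the connect modules shipped with AMS Designer.

`include "disciplines.vams"

// l2e_sketch: hypothetical logic-to-electrical connect module (not a shipped library cell)
connectmodule l2e_sketch (d, a);
  input  d;        // discrete-side port
  output a;        // analog-side port
  ddiscrete  d;    // discrete discipline from the standard disciplines.vams
  electrical a;    // analog discipline

  parameter real vhi = 1.8;      // assumed logic-high level
  parameter real vlo = 0.0;      // assumed logic-low level
  parameter real tt  = 200e-12;  // assumed transition time

  real vtarget;                  // level the analog side should move toward

  // Map the digital value to a target voltage (X and Z are simply treated as low here)
  always @(d)
    vtarget = (d === 1'b1) ? vhi : vlo;

  // Drive the electrical port with a smoothed version of that target
  analog
    V(a) <+ transition(vtarget, 0.0, tt);
endmodule

In an AMS Designer flow, a module like this would normally be selected automatically through connect rules rather than instantiated by hand, which is the insertion step the blog walks through.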
an Doc Assistant A-Z: Making the Most of the Cadence Cloud-Based Help Viewer: Pt. 2 By community.cadence.com Published On :: Wed, 26 Jun 2024 20:00:00 GMT At a bustling Cadence event, we met Adrian, an intern at a startup who immerses himself in Cadence tools for his research and work. Adrian was enthusiastic about the innovative technologies at his disposal but faced a significant challenge: internet access was limited to a single machine for new joiners, forcing interns to wait in line for their turn to use online resources. Adrian's excitement soared when he discovered a game-changing solution: Doc Assistant. The cloud-based help viewer, Doc Assistant, ships with all Cadence tools, enabling Adrian to access help resources offline from any machine equipped with the software. This meant Adrian could continue his research and work seamlessly, irrespective of internet availability! Meeting Cadence users and customers at such events has given us the opportunity to showcase how they can benefit from the diverse features that Doc Assistant offers. With that note, welcome back to our Doc Assistant A-Z blog series! In Part 1, we explored key features and benefits that our innovative viewer brings to the table. Today, in Part 2, we'll dive deeper into the advanced functionalities and customization options that make Doc Assistant indispensable for its users. Whether you're looking to streamline your workflow or enhance your user experience, this blog will provide the insights you need to fully leverage the capabilities of our documentation viewer. Let’s get started! What Makes Doc Assistant Stand Out? Here are a few (more) cool features of Doc Assistant! History and Bookmarks: Want to refer to the topic you read last week? Of course, you can! Doc Assistant stores your browsing activity as History. You can also bookmark topics and revisit them later. Indexing Capabilities: Looking for seamless search capabilities? The advanced indexing capabilities of Doc Assistant enhance the accessibility and manageability of documents. Doc Assistant automatically creates a search index if it is missing or broken. Jump Links: Worried about scrolling through lengthy topics? Fret no more! Use the jump links in each topic to quickly navigate to different sections within the same topic or across topics. Jump links reduce the need for excessive scrolling and let you access relevant content swiftly. Just-in-Time Notifications: Looking for alerts and messages? That’s supported. Doc Assistant displays notifications about important events, including errors, warnings, information, and success messages. Keyword-Based Search Suggestions: You somewhat know your search keyword, but not quite sure? No worries. Just start typing what you know. Keyword and page suggestions are displayed dynamically as you type, providing a more sophisticated and intuitive search experience. Library-Switch Support: Want to view documents from other libraries? Doc Assistant, by default, displays documents for the currently active release in your machine. You can access documents from other releases by configuring the associated documentation libraries. Multimedia Support: Want to view product demos? Multimedia support in Doc Assistant lets you play videos, listen to audio, and view images without opening any external application. Navigation Made Easy: Worried that you’ll get lost in an infinite doc loop? Not at all. The intuitive navigation controls in Doc Assistant are designed to provide you with a fluid and efficient experience. 
The Doc Assistant user interface is clean and logically organized, with easy-to-access documentation links. That's not all. We have more coming your way. Until next time, take care and stay tuned for our next edition! Want to Know More? Here's a video about Doc Assistant Visit the Doc Assistant web page Read the Doc Assistant FAQ document For any questions or general feedback, write to docassistant.support@cadence.com. Subscribe to receive email notifications about our latest Custom IC Design blog posts. Happy reading! -Priya Sriram, on behalf of the Doc Assistant Team Full Article In-Tool Help user documentation in-built help Cloud-Based Help Doc Assistant
an Knowledge Booster Training Bytes - Writing Physical Verification Language Rules By community.cadence.com Published On :: Wed, 03 Jul 2024 08:56:00 GMT Have you ever wanted to write a DRC rule deck to check for space or width constraints on polygons? Or have you wondered how the multiple lines of an LVS rule deck extract and conduct a comparison between the schematic and layout? Maybe you've been curious about the role of rule deck writers in creating high-quality designs ready for tape-out. If any of these questions interest you, there is good news: the latest version (v23.1) of the Physical Verification Rules Writer (PVLRW) course is designed to teach you rule deck writing. This free 16-hour online course includes audio and labs designed to make your learning experience comfortable and flexible. Whether you are new to the concept or an experienced CAD/PDK engineer, the course is structured to enhance your rule deck writing skills. The PVLRW course covers six core modules: Layer Processing, DRC Rules, Layout Extraction, ERC and LVS Rules, Schematic Netlisting, and Coloring Rules. There are also three optional appendix sections. Each module explains relevant rules with syntax, concepts, graphics, examples, and case studies. This course is based on tool versions PEGASUS231 and Virtuoso Studio IC231. Pegasus Input and Output Pegasus is a cloud-ready physical verification signoff solution that enables engineers to support faster delivery of advanced-node integrated circuits (ICs) to market. Pegasus requires input data in the form of layout geometry, schematic netlists, and rules that direct the tool operation. The rules fall into two categories: those that describe the fabrication process and those that control the job-specific operation. Pegasus provides log and report files, netlists, databases, and error databases as output. Overview of Pegasus Rule File The rule decks written in Physical Verification Language (PVL) work for the Cadence PV signoff tools Pegasus and PVS (Physical Verification System). The PVL rules are placed in a file that gets selected in a run from the GUI or the command line, as the user directs. PVL rules may be on separate lines within the file and can also be contained in named rule blocks. Each line of code starts with a PVL rule that uses prefix type notation. It consists of a keyword followed by options, input layer or variable names, and output layer or variable names. A rule block has the format of the keyword rule, followed by a rule name you wish to give it, followed by an opening curly brace. You enter the rules you wish to perform, followed by a closing curly brace on the last separate line. Sample Rule deck with individual lines of code and rule blocks. DRC Rules The first step in a typical Pegasus flow is a Design Rule Check (DRC), which verifies that layout geometries conform to the minimum width, spacing, and other fabrication process rules required by an IC foundry. Each foundry specifies its own process-dependent rules that must be met by the layout design. There are three types of DRC rules: layer definition rules, layer derivation rules, and DRC design check rules. Layer definition rules identify the layers contained in the input layout database, and layer derivation rules derive additional layers from the original input layers, allowing the tool to test the design against specific foundry requirements using the design check rules. 
A sample DRC Rule deck A layout view displaying the DRC violations LVS Rules The Pegasus Layout Versus Schematic (LVS) tool compares the layout netlist with the schematic netlist to check for discrepancies. There are two essential LVS rule sets: LVS extraction rules and comparison rules. LVS extraction rules help extract drawn devices and connectivity information from the input layout geometry data and write them out as a layout netlist. The LVS extraction rule set also includes the layer definition, derivation, extraction, connectivity, and netlisting rules. LVS comparison rules are associated with comparing the extracted layout netlist to a schematic netlist. A sample LVS Rule deck. TCL, Macros, and Conditional commands Tcl is supported and used in various Pegasus functionalities, such as Pegasus rule files and Pegasus configurator. Macros are functional templates that are defined once and can be used multiple times in a rule file. Conditional Commands are used to process or skip specific commands in the rule file. Do You Have Access to the Cadence Support Portal? If not, follow the steps below to create your account. On the Cadence Support portal, select Register Now and provide the requested information on the Registration page. You will need an email address and host ID to sign up. If you need help with registration, contact support@cadence.com. To stay up to date with the latest news and information about Cadence training and webinars, subscribe to the Cadence Training emails. If you have questions about courses, schedules, online, public, or live onsite training, reach out to us at Cadence Training. For any questions, general feedback, or future blog topic suggestions, please leave a comment. Related Resources Product Manuals Cadence Pegasus Developers Guide Rapid Adoption Kits Running Pegasus DRC/LVS/FILL in Batch Mode Training Byte Videos What Is the Run Command File? How to Run PVS-Pegasus Jobs in GUI and Batch modes? PVS DRC Run From - Setup Rules What Is PVS/Pegasus Layer Viewer? PVL Coloring Ruledecks with Docolor and Stitchcolor PVL Commands: dfm_property with Primary & Secondary Layer PVS Quantus QRC Overview Online Courses Pegasus Verification System PVS (Physical Verification System) Virtuoso Layout Design Basics About Knowledge Booster Training Bytes Knowledge Booster Training Bytes is an online journal that relays information about Cadence Training videos, online courses, and upcoming webinars in the Learning section of the Cadence Learning and Support portal. This blog category brings you direct links to these videos, courses, and other related material on a regular basis. Subscribe to receive email notifications about our latest Custom IC Design blog posts. Full Article Virtuoso Studio Routing Layout Suite Cadence training training bytes Circuit Design Cadence Education Services Custom IC Design online training
an Doc Assistant A-Z: Making the Most of the Cadence Cloud-Based Help Viewer Part 3 By community.cadence.com Published On :: Tue, 01 Oct 2024 05:16:00 GMT Welcome back to the Doc Assistant A-Z blog series! Since the launch of Doc Assistant, we've been gathering feedback and input from our customers regarding their experiences with our latest documentation viewer. My interaction with Ralf was particularly useful and interesting. Ralf is a design engineer who works on complex schematics and intricate layouts. For each release, he is challenged with the task of verifying the tool and feature changes across multiple releases. He shared with me that he has been using Doc Assistant’s capabilities to help him achieve this. Ralf explained that he utilizes Doc Assistant to open and compare documents from different releases side-by-side, seamlessly tracking updates across multiple releases and verifying those updates in his Cadence tools. Additionally, in Doc Assistant’s online mode, he compares documents across previous tool versions, ensuring a thorough review of any changes. Finally, he was happy to share with me that Doc Assistant features have helped him significantly reduce the time he spends on identifying such changes. You, of course, can also achieve such productivity gains using several Doc Assistant features designed to help simplify such tasks! In previous editions of this blog series, we looked at some key features and benefits of Doc Assistant. If you've missed these editions, I would highly recommend that you read them: Doc Assistant A-Z: Making the Most of the Cadence Cloud-Based Help Viewer: Part 1 Doc Assistant A-Z: Making the Most of the Cadence Cloud-Based Help Viewer: Part 2 In this third installment, we're diving into some more of Doc Assistant's key capabilities. Open Multiple Documents Want to refer to multiple docs at the same time? That’s easy! Open each doc on a separate tab in Doc Assistant. Personalized Content Recommendations Is it a hassle to navigate through all docs each time? You don’t have to. You can tailor your Doc Assistant preferences to match your content requirements. PDF Support Do you prefer downloading and reading a PDF instead of an HTML? That’s also supported. Quick Access to Relevant Search Results Are you pressed for time, and yet want to run a comprehensive doc search? You’re covered. In online mode, search runs on all available product documentation, and the results are listed from multiple sources. Resource Links Looking for more information about a topic you’ve just read? That’s handy. Look out for content recommendations! Share Content Want to share a useful doc with the rest of your team? That’s easy. With a single click, Doc Assistant lets you share content with one or more readers. Submit Feedback Your feedback is important to us. Use the Submit Feedback feature to share your comments and inputs. To learn more about how to use the above features, check out the Doc Assistant User Guide. These are just a few of the productivity gain features in Doc Assistant. We’ll cover more in the next blog in the series. Want to Know More? Here's a video about Doc Assistant Visit the Doc Assistant web page Read the Doc Assistant FAQ document If you have any feedback on Doc Assistant or would like to request more information or a demo, please contact docassistant.support@cadence.com. Subscribe to receive email notifications about our latest Custom IC Design blog posts. Happy reading! 
- Priya Sriram, on behalf of the Doc Assistant Team Full Article In-Tool Help user documentation in-built help Cloud-Based Help Doc Assistant
an How to transfer custom title block from Orcad Capture to PCB Editor By community.cadence.com Published On :: Thu, 09 Dec 2021 21:37:59 GMT Hi, I was trying to update the title block of a schematic that I have. The title block that was on there was out of date. I clicked on Place --> Title Block and was able to find the title block that I need. I also have a .OLB file that contains that title block. Then I created a netlist with the old BRD file as the input file (to keep it as is and only apply the changes), but when I do that, I still do not see / cannot place the title block that I need. Under Place --> Format Symbols in Allegro, I do see a title block coming from the database (but it's the old one). I don't know what to do at this point and would appreciate any tips. I did make sure that the path to where the library is located is defined in the user preferences. I also tried copying the title block into the library folder in Capture before creating my netlist, and that did not work either. Thank you all. Full Article
an Can I align pin numbers in edit part windows in Orcad Capture? By community.cadence.com Published On :: Tue, 14 Dec 2021 01:55:10 GMT Hello. I'm updating a part in the part editor in OrCAD Capture, and I wonder how to align pin numbers using a menu or a Tcl/Tk command. Please let me know. Thank you. Full Article
an datasheets for difference of Allegro PCB and OrCAD Professional By community.cadence.com Published On :: Tue, 14 Dec 2021 09:08:17 GMT Hi All, I am looking for the functional differences between OrCAD Professional and the Allegro tiers. Is there any resource? Regards Full Article
an CIS Standard BOM to Excel 365 By community.cadence.com Published On :: Tue, 14 Dec 2021 14:49:39 GMT I'm not able to export a CIS Standard BOM to Microsoft 365 Excel (business subscription, version 2111). Selecting the "Export BOM report to Excel" option opens a new Excel window, but OrCAD (17.4-2019 S023) won't fill it with any data... I tried it on a different PC with Microsoft Office Professional Plus 2019 Excel (strangely, the version number is the same: 2111) and with OrCAD 17.4-2019 S016, and it worked flawlessly. Is anybody experiencing the same issue? Is the Excel variant, the OrCAD version, or the PC itself causing this? Thanks for any help! Full Article
an Version upgrade 17.2 to 17.4 - Cadence OrCAD Capture By community.cadence.com Published On :: Tue, 21 Dec 2021 10:49:08 GMT Hello, We have a number of workstations with version 17.2 that work on a floating license server. We want to know if they can be upgraded to version 17.4. If so, should the floating license server be upgraded as well? In addition, how can you find out where the license was purchased from? Thanks! Full Article
an Sense line and decoupling capacitors By community.cadence.com Published On :: Thu, 23 Dec 2021 08:16:10 GMT Hello, A maybe silly question came to my mind: when routing sense lines, is it better to have them as close as possible to the DUT or after the decoupling capacitors? Force in red, sense in purple. Is the best way 1 or 2? Thanks in advance, and Merry Christmas to everybody! Full Article
an Display Resource Editor: Different Colors for Schematic and Layout Axis By community.cadence.com Published On :: Wed, 23 Oct 2024 06:30:07 GMT Hi, In the environment I'm currently working in, axes are shown for schematic, symbol, and layout views. For schematics and symbols, I'd prefer a dim gray, such that the axes are just visible but not dominant. For the layout, I'd prefer a brighter color. Is there a way to realize this? So far, when I change the color of the 'axis' layer in the Display Resource Editor, the axes in all three views get changed together. Thanks very much for your input! Full Article
an Netlisting error when doing parametric sweep on transient simulation By community.cadence.com Published On :: Wed, 23 Oct 2024 10:13:32 GMT Dear all, I defined two design variables in ADE Assembler, say V1 and V2, that define the voltage 1 and voltage 2 of a "vpulse" voltage source in my schematic. Then, I define V1 = 1.0, and V2 = 2.0, run a transient simulation, and everything is as expected. The source provides pulses between 1.0 V and 2.0 V. Next, I set V1 = 1.0:0.5:1.5, thereby creating a parametric sweep with 1.0 V and 1.5 V for V1. I keep V2 at 2.0 V. Then the simulation fails, and all I get is "netl err" in my Output Expressions and an error message that the results directory does not exist and nothing can be plotted: This is reasonable, as the results directory is deleted on starting a new simulation, and as there is no simulation result, none of my output expressions can be plotted. WARNING (OCN-6040): The specified directory does not exist, or the directory does not contain valid PSF results. Ensure that the path to the directory is correct and the directory has a logFile and PSF result files. WARNING (ADE-1065): No simulation results are available. ERROR (WIA-1175): Cannot plot waveform signals because no waveform data is available for plotting. One of the possible reasons can be that 'Save' check box for these signals are not selected in the Outputs Setup pane. Ensure that these check boxes are selected before you run the simulation. Normally, this kind of parametric sweep is not a problem; I have done this many times before. There must be something special in THIS PARTICULAR test bench or simulator setup. The trouble is, I don't get any useful error messages. Does anyone know what might be the problem here OR where to find useful information to investigate further (log files stored somewhere)? Thank you! Regards, Volker P.S. Using Corners instead does not help either. Running it through all values by hand works, though. Full Article
an New CDF creation and callback By community.cadence.com Published On :: Sun, 27 Oct 2024 11:35:08 GMT I need to add a new CDF parameter called "mag" to the symbols in a given library using a SKILL script, so that the symbol size can be controlled. The parameter should also have a callback that is triggered each time this library is used, so that all the symbols are updated. Full Article
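In case it helps frame the question, here is a rough SKILL sketch of one way such a parameter could be added to every cell in a library; the procedure name, prompt text, default value, and the MyMagCallback() callback are hypothetical placeholders rather than a verified flow, so it should be tried on a copy of the library first.

; Hypothetical sketch: add a "mag" CDF parameter to every cell of a library.
; MyAddMagParam and MyMagCallback are placeholder names, not standard functions.
procedure( MyAddMagParam( libName "t" )
  let( ( lib cdfId )
    lib = ddGetObj( libName )
    unless( lib  error( "MyAddMagParam: library %s not found" libName ) )
    foreach( cell lib~>cells
      ; Reuse the base cell CDF if it exists, otherwise create it
      cdfId = cdfGetBaseCellCDF( cell ) || cdfCreateBaseCellCDF( cell )
      ; Only add "mag" if the parameter is not already defined
      unless( cdfFindParamByName( cdfId "mag" )
        cdfCreateParam( cdfId
          ?name     "mag"
          ?prompt   "Magnification"
          ?type     "float"
          ?defValue 1.0
          ?callback "MyMagCallback()"   ; placeholder callback procedure
        )
      )
      cdfSaveCDF( cdfId )
    )
    t
  )
)

The callback string is only registered here; the actual resizing logic would live in a separate MyMagCallback procedure, and the script would have to be rerun (or hooked into a library trigger) for cells added later.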