on Flow Control Credit Updates in PCIe 6.1 ECN By community.cadence.com Published On :: Fri, 13 Sep 2024 21:25:20 GMT

As technology continues to evolve at a rapid pace, the importance of robust and efficient interconnect standards cannot be overstated. Peripheral Component Interconnect Express (PCIe) has been a cornerstone of high-speed data transfer, enabling seamless communication between various hardware components. With the advent of PCIe 6.1 ECN, a significant advancement in speed and efficiency, ensuring the accuracy and reliability of its operations is paramount. One critical aspect of this is the verification of shared credit updates. For a detailed understanding of shared credit, please refer to Understanding PCIe 6.0 Shared Flow Control. In this blog, we will discuss why this verification is essential and what it entails.

Introduction

PCIe 6.1 ECN brings numerous advancements over earlier versions, such as increased bandwidth and faster data transfer speeds. A crucial mechanism for efficient data transmission in PCIe 6.0 is the credit-based flow control system. In this system, devices monitor credits, which represent the buffer capacity available for incoming data. When a device transmits data, it consumes credits, which are replenished or adjusted once the data is received and processed. This system ensures that the sender does not overload the receiver.

Given the critical role of shared credit updates in maintaining the integrity and efficiency of data transfers, verification of these updates is crucial. Proper management of credit updates is essential to ensure data integrity, as any discrepancy can lead to data loss, corruption, or system crashes. Verification also guarantees efficient resource allocation, preventing scenarios where some components are starved of credits while others have an excess, thus avoiding inefficiencies. Credit inefficiencies also cause problems in low-power negotiation by preventing devices from entering low-power states. Additionally, verification involves checking for proper error handling mechanisms, ensuring that the system can recover gracefully from errors in credit updates and maintain overall stability.
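To make the credit mechanism concrete, here is a minimal Python sketch of credit accounting between a transmitter and a receiver. It is an illustrative model only: the class and method names are assumptions for this example, and real PCIe flow control tracks separate credit types (posted, non-posted, completion; header and data) per virtual channel and exchanges updates over the link.

```python
# Minimal sketch of PCIe-style credit-based flow control (illustrative only).
# Real PCIe tracks separate credit types (P/NP/Cpl, header/data) per virtual
# channel; this model shows only the basic invariant: a sender never
# transmits more than the receiver has advertised buffer space for.

class CreditChannel:
    def __init__(self, advertised_credits: int):
        self.limit = advertised_credits   # credits advertised by the receiver
        self.consumed = 0                 # credits consumed by the sender

    def available(self) -> int:
        return self.limit - self.consumed

    def send(self, units: int) -> bool:
        """Transmit a packet costing `units` credits, if credits allow."""
        if units > self.available():
            return False                  # must stall: receiver buffer full
        self.consumed += units
        return True

    def credit_update(self, units: int) -> None:
        """Receiver frees buffer space and returns credits to the sender."""
        self.limit += units

ch = CreditChannel(advertised_credits=8)
assert ch.send(6)          # OK: 6 of 8 credits used
assert not ch.send(4)      # stalled: only 2 credits remain
ch.credit_update(4)        # receiver drains its buffer, returns 4 credits
assert ch.send(4)          # now succeeds
```

A shared (merged) credit scheme layers on top of this idea by letting multiple traffic types draw from a common pool, which is precisely why discrepancies in credit updates are worth dedicated verification attention.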
PCIe 6.1 ECN Flow Control Optimizations Over PCIe 6.0

PCIe 6.1 ECN builds on the FLIT-based architecture introduced in PCIe 6.0, further optimizing flow control mechanisms to handle increased data rates with improved efficiency. It refines credit management, making the allocation and advertisement of credits more precise, which helps reduce bottlenecks and improve data flow efficiency. Enhancements in the flow control protocol ensure better management of buffer space and more efficient credit allocation, designed to handle the increased data rates and throughput demands of next-generation applications and to ensure robust, efficient data flow across PCIe devices. Below are some major updates:

- Improvements in error detection and correction mechanisms in PCIe 6.1 ECN enhance flow control reliability by ensuring that corrupted data packets are detected and handled appropriately without disrupting the flow of valid packets.
- The merged credit system, a key feature introduced in PCIe 6.0 to simplify and optimize credit management, was further enhanced in PCIe 6.1 ECN to improve performance and efficiency. PCIe 6.1 ECN introduced better algorithms for allocating and reclaiming merged credits at high data rates, along with more robust error detection and correction mechanisms that reduce degradation and system instability.
- PCIe 6.1 ECN provides clear guidelines on how to implement the merged credit system correctly, helping developers build more reliable systems.

For more details, please refer to the Specification, section 2.6.1 Flow Control (FC) Rules.

Summary

In summary, PCIe 6.0 is a complex protocol with many verification challenges. You must understand many new specification changes and build a robust verification plan for the new features as well as for backward-compatible tests impacted by those features. Cadence's PCIe 6.0 Verification IP is fully compliant with the latest PCI Express 6.0 specification and provides an effective and efficient way to verify components interfacing with the PCIe 6.0 interface. Cadence VIP for PCIe 6.0 provides exhaustive verification of PCIe-based IP and SoCs, and we are working with early-adopter customers to speed up every verification stage.

More Information

For more info on how Cadence PCIe Verification IP and TripleCheck VIP enable users to confidently verify PCIe 6.0, see VIP for PCI Express, VIP for Compute Express Link, and TripleCheck for PCI Express. See the PCI-SIG website for more details on PCIe in general and the different PCI standards. For more information on PCIe 6.0 new features, please visit PCIeLaneMargin, PCIe6.0LaneMargin, and Demonstrating PCIe 6.0 Equalization Procedure.

Full Article Verification IP PCIExpress PCIe pcie gen6 PCIe 6.0 verification
on Training Insights – Palladium Emulation Course for Beginner and Advanced Users By community.cadence.com Published On :: Fri, 13 Sep 2024 23:00:00 GMT

The Cadence Palladium Emulation Platform is a hardware system that implements the design, accelerating its execution and verification. It offers the highest performance and fastest bring-up times for pre-silicon validation of billion-gate designs, using a custom processor built by Cadence. This Palladium Introduction course is based on the Palladium 23.03 ISR4 version and covers the following modules:

- Introduction
- Palladium flow
- Running a design on the Palladium system

This course starts with an "Introduction" module that explains Palladium and other verification platforms to show its place in the big picture. It also compares Palladium with Protium and simulation and discusses its usage and limitations. The "Palladium Flow" module covers the two high-level stages, Compile and Run, in detail. First, it covers the ICE compile flow and IXCOM compile flow steps; then it explains Run, which is common to both ICE and IXCOM modes. The third module, "Running a Design on the Palladium System," covers all the items required for running your design on the Palladium system, including:

- Software stack requirements
- Basic concepts required to understand the flow
- Compute machine requirements

In addition, this course contains labs for both the ICE and IXCOM flows with detailed steps to exercise the features provided by the Palladium system. The lab explains a practical example of multiple counters and exercising their signals for force, monitor, and deposit features, along with frequency calculation using a real-time clock.

The course is available on the Cadence support page. There is also a Digital Badge available. You will find the Badge exam opportunity when you enroll in the online training or after you have taken the training as "live" training. For questions, inquiries, or issues with registration, reach out to us at Cadence Training. Want to stay up to date on webinars and courses? Subscribe to Cadence Training emails. To view our complete training offerings, visit the Cadence Training website.

Related Training Bytes Palladium: What Are Verification Platforms Palladium: What Is Processor Based Emulation Palladium: Comparing Emulation (Z2) and Prototyping (X2) Palladium: What Are ICE and IXCOM Compile Flow Palladium: How to Process a Design to Run on Palladium Palladium: XCOM Compile Flow (TB+RTL to Palladium Database) Palladium: ICE Compile Flow (RTL to Palladium Database) Palladium: Legacy ICE Compile Flow Palladium: Cadence Software Releases for Palladium and Protium Flow Palladium: Setting of PATHs for Using Palladium Palladium: Z2 Hardware Structure (Blade and Boards) Palladium: What Are Sourceless and Loadless Nets Palladium: Design Clocks Palladium: Step Count and Step Clock Palladium: Steps for Running the Design on Palladium Z2

Related Courses Verilog Language and Application Training SystemVerilog for Design and Verification Xcelium Simulator

Related Blogs Training Insights – A New Free Online Course on the Protium System for Beginner and Advanced Users It’s the Digital Era; Why Not Showcase Your Brand Through a Digital Badge! Training Insights - Free Online Courses on Cadence Learning and Support Portal

Full Article digital badge live training blended training Palladium Training Insights online training
on DDR5 UDIMM Evolution to Clock Buffered DIMMs (CUDIMM) By community.cadence.com Published On :: Mon, 23 Sep 2024 05:52:00 GMT

DDR5 is the latest generation of PC DDR memory, used in a wide range of applications: data centers, laptops and personal computers, autonomous driving systems, servers, cloud computing, and gaming. These applications are now increasingly being used for AI, and advances in memory bandwidth and density allow DDR5 DIMMs (Dual Inline Memory Modules) to support densities higher than 256 GB per DIMM card. The highest-speed DDR5 SDRAM devices can support data rates of up to 8800 MT/s.

DDR5 SO-DIMMs and UDIMMs

One of the most recognized uses of PC DDR is with client devices like laptops and personal computers. These client devices mostly use two types of DDR5 DIMMs: the SO-DIMM (Small Outline Dual Inline Memory Module) and the UDIMM (Unbuffered Dual Inline Memory Module). These types of DIMMs have no signal regeneration or buffering (which, for example, the Registering Clock Driver, or RCD, provides for clock/command/control signals on registered DIMMs). A typical 2-rank UDIMM with x8 DDR5 SDRAM components has 8 or 10 components per rank, depending on whether system ECC (Error Correction Code) memory is part of the DIMM.

Why DDR5 Clock Buffer and CUDIMM?

Clocks are one of the most important signals for synchronous devices, and DDR5 SDRAMs are no exception. For UDIMMs, the host is responsible for the fanout to all the DRAM input ports, including the clocks. Driving all these DRAM clocks can put quite a bit of load on the host output drivers, affecting signal quality, which can result in unexpected memory errors. This issue is amplified at higher clock and data rates, where the clock signals transition from one logic value to the next over a very short time. To solve these signal integrity issues with DRAM clocks, JEDEC has defined a new type of DDR5 DIMM component called the DDR5 clock buffer. Clock buffers can be used on both DDR5 SO-DIMMs and DDR5 UDIMMs. DDR5 UDIMMs that include a clock buffer component as part of the DIMM card are called DDR5 CUDIMMs (Clock Buffered UDIMMs).

DDR5 Clock Buffer Overview

The DDR5 clock buffer is a simple logic device that takes in two sets of input clock pins and drives two sets of output clock pins per channel. The clock buffer device can operate in three clock modes:

- PLL bypass mode: In this mode, the clock buffer just passes the input clocks through to the output without any signal buffering. CUDIMM devices with PLL bypass mode enabled behave like traditional UDIMMs without any buffering of the clocks, which is why this is also referred to as legacy mode. Recommended CUDIMM operating speeds in PLL bypass mode are typically limited to 3000 MHz.
- Single PLL mode: In single PLL mode, the clock buffer device uses a phase-locked loop (PLL) to regenerate the incoming host clock, creating a better-quality clock that is sent to the DRAMs. However, since only one PLL is used in this mode, both sub-channel output clocks are driven from only one set of input clocks, with the other set of input clocks remaining unused.
- Dual PLL mode: In this mode, the clock buffer uses two PLLs to independently generate each sub-channel output clock from each set of incoming host clocks. The second PLL can be turned on or off on the fly if needed to save power.
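As an illustration of the three clock modes, here is a small, hypothetical Python sketch of how a clock buffer routes its two input clock pairs to its two sub-channel output pairs. The enum, function, and "PLL0"/"PLL1" labels are illustrative assumptions for this example, not JEDEC register or pin definitions.

```python
# Illustrative model of DDR5 clock buffer clock modes (not a JEDEC definition).
from enum import Enum

class ClockMode(Enum):
    PLL_BYPASS = 0   # legacy mode: inputs passed through, no regeneration
    SINGLE_PLL = 1   # one PLL regenerates one input pair; drives both outputs
    DUAL_PLL   = 2   # two PLLs regenerate each input pair independently

def route_clocks(mode: ClockMode, clk_in_a: str, clk_in_b: str):
    """Return (clk_out_a, clk_out_b) for the two sub-channel output pairs."""
    if mode is ClockMode.PLL_BYPASS:
        return clk_in_a, clk_in_b                    # unbuffered pass-through
    if mode is ClockMode.SINGLE_PLL:
        regen = f"PLL0({clk_in_a})"                  # clk_in_b remains unused
        return regen, regen
    return f"PLL0({clk_in_a})", f"PLL1({clk_in_b})"  # independent regeneration

print(route_clocks(ClockMode.SINGLE_PLL, "CK_A", "CK_B"))
# ('PLL0(CK_A)', 'PLL0(CK_A)')
```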
Beyond the clock modes, clock buffers provide additional flexibility to system designers with register-controlled additional signal delays, an optional per-bit output clock enable/disable feature, drive strength and termination choices, etc. All DDR5 clock buffer device control word registers are accessible via the DDR5 DIMM sideband.

Cadence VIP offers a comprehensive memory subsystem solution that includes memory models for DDR5 SDRAM, DDR5 RCD, DDR5 DB, DDR5 clock buffer, and all types of DDR5 DIMMs, including DDR5 CUDIMMs, as well as DFI Memory Controller/PHY VIPs and a system VIP compliant with the JEDEC specifications defined for each of those devices along with the latest DFI specification. More information on Cadence DDR5 DIMM VIP is available at the Cadence VIP Memory Models website.

Full Article Verification IP DDR5 SDRAM DDR5 UDIMM VIP JEDEC DRAM DDR5 CUDIMM memory models DDR5 SODIMM DDR5DIMM
on Jasper Formal Fundamentals 2403 Course for Starting Formal Verification By community.cadence.com Published On :: Mon, 30 Sep 2024 09:16:00 GMT

The course "Jasper Formal Fundamentals v24.03" introduces formal analysis to those who want to use formal analysis for design or verification. To benefit fully from this course, you must already have sufficient knowledge of SystemVerilog assertions to write properties for formal verification. Hence, this training provides a module on formal analysis to help cover this essential background.

In this course, you will learn how to code efficient SVA properties for formal analysis, understand formal complexity and how to overcome it, and learn the basics of formal coverage. After completing this course, you will be able to:

- Define reusable, functionally correct SVA properties that are efficient for formal tools. These use abstract auxiliary code to simplify descriptions, make code maintenance easier, reduce debug time, and reduce tool proof runtime.
- Set up, run, and analyze results from formal analysis.
- Identify designs upon which formal is likely to be successful while understanding formal complexity issues and how to identify and overcome them.
- Use a systematic property development process to approach a completely new verification problem.
- Understand the basics of formal coverage.

The most recently updated release includes new modules on:

- "Basic complexity handling," which discusses complexity in formal and how to identify and handle it.
- "Complexity reduction methods," which discusses the complexity reduction methods and which method is suitable for which type of complexity problem.
- "Coverage in formal," which discusses the basics of coverage in formal verification and how coverage can be used in formal.

Take this course to learn the basics of formal verification.

What's Next? You can check out the complete training: Jasper Formal Fundamentals. There is a free online version of the training available 24/7 for all customers with a Cadence Learning and Support Portal account. If you are interested in an instructor-led version of the training, please contact Cadence Training. And don't forget to obtain your digital badge after completing the training! You can also check the Jasper University page for more materials on formal analysis and Jasper apps.

Related Trainings Jasper Formal Expert Training Course | Cadence Verilog Language and Application Training Course | Cadence SystemVerilog for Design and Verification Training Course | Cadence SystemVerilog Assertions Training Course | Cadence

Related Training Bytes Jasper Formal Property Verification (FPV) App: Basic Usage Demo (Video) Jasper Formal Methodology playlist

Related Training Blogs It’s the Digital Era; Why Not Showcase Your Brand Through a Digital Badge! Training Insights: Introducing the C++ Course for All Your C++ Learning Needs! Training Insights: Reaching Your Verification Closure Using Verisium Manager Training Insights - Free Online Courses on Cadence Learning and Support Portal

Full Article Jasper Formal Fundamentals FPV Formal Analysis formal Jasper Jasper Apps Formal verification verification
on Partial Header Encryption in Integrity and Data Encryption for PCIe By community.cadence.com Published On :: Mon, 07 Oct 2024 02:25:00 GMT Cadence PCIe/CXL VIP supports Partial Header Encryption in Integrity and Data Encryption. Full Article CXL Verification IP PCIe IDE
on Unveiling the Capabilities of Verisium Manager for Optimized Operations By community.cadence.com Published On :: Thu, 17 Oct 2024 06:13:06 GMT In SoC development, the verification cycle is a crucial phase that ensures products meet their specifications and function correctly. However, the complexity of modern SoC projects, with their constant data flow, multiple validation teams working in parallel, and tight schedules, presents significant challenges. This article explores these challenges and introduces Verisium Manager as a solution that embodies the 'One Tool Fits All' concept. This means that Verisium Manager is designed to handle all aspects of the verification process for SoC development, from planning to coverage analysis to regression testing, thereby addressing the complex needs of SoC verification. The Hurdles in Traditional Validation Cycles A typical validation process involves planning, coverage analysis, and regression testing. This complexity is compounded by using separate tools for each activity, leading to multiple control environments, APIs, and databases, not to mention the array of tool owners. Such fragmentation results in constant data transfer and translation between systems, from the planning tool to the coverage analysis tool and then to the regression testing tool. This continuous movement of data causes delays, system instability, poor user experiences, and, ultimately, a dip in the quality of the validation process. The use of multiple platforms leads to inefficiency and reduced productivity. What's needed is a unified system that can streamline the workflow, simplify the verification process, and enhance its effectiveness. Envisioning the Ideal Solution: Verisium Manager The cornerstone of an efficient validation cycle is integration and simplicity. The ideal solution is a singular platform that consolidates planning, coverage analysis, and regression management into one smooth, unified process. Verisium Manager emerges as this much-needed solution, encompassing all the functionalities necessary to streamline the validation process. Its comprehensive nature instills confidence in its ability to handle all aspects of the verification cycle. It can be fully customized to address and enforce any validation methodology and can facilitate smooth integration into any customer environment. Features that stand out in Verisium Manager include: Unified Workflow: It acts as a single cockpit from which all activities are orchestrated, ensuring the validation teams' work is uninterrupted and seamlessly integrated. Customization and Integration: Verisium Manager supports customizing test-plan structures and mapping results per project, ensuring a perfect fit for various project requirements. Its ability to smoothly integrate into the project's environment and compute platforms is unparalleled. Support for Continuous Updates and Migration: The tool accommodates constant updates to project data and supports the migration of legacy data, ensuring that no historical data is lost in the transition to a new system. Addressing Project-Specific Needs Verisium Manager recognizes diversity in different projects and offers project-specific solutions, including: Enforcing Project Test-Plan Structures and Attributes: It supports and enforces each project's unique test-plan structure and mapping guidelines. 
Unified Data Views and Measurements: Verisium Manager promotes a unified view of data across all teams and enforces unified measurements, ensuring consistency and clarity in the validation process. Enabling Project-Specific Actions and Integrations: The tool is designed to support project-specific actions directly from its graphical user interface and allows for smooth integration with in-house databases, dashboards, and the project execution stack. Verisium Manager is the epitome of efficiency in software/hardware validation. Its differentiating features, such as support for customization, unified data view, and comprehensive coverage and regression requirements, make it an indispensable tool for any validation team looking to elevate their workflow. Full Article validation vPlan verisium Verisium Manager vManager verification
on A Brief on Message Bus Interface in PIPE By community.cadence.com Published On :: Thu, 17 Oct 2024 12:24:00 GMT

The PHY Interface for the PCI Express (PCIe), SATA, USB, DisplayPort, and USB4 Architectures (PIPE) enables the Physical Layer (PHY) and Media Access Layer (MAC) designs to be developed separately, providing a standard communication interface between these two components in the system. In recent years, the PIPE interface specification has incorporated many enhancements to support new features and advancements in the supported protocols. As the supported features increase, so does the count of signals on the PIPE interface. To address the issue of increasing signal count, the message bus interface was introduced in PIPE 4.4 and utilized for PCIe lane margining at the receiver and elastic buffer depth control. In PIPE 5.0, all the legacy PIPE signals without critical timing requirements were mapped into message bus registers so that their associated functionality could be accessed via the message bus interface instead of dedicated signals. It was decided that any new feature added in a new version of the PIPE specification will be available only via message bus accesses unless it has critical timing requirements that need dedicated signals.

Message Bus Interface

The message bus interface provides a way to initiate and participate in non-latency-sensitive PIPE operations using a small number of wires, and it enables future PIPE operations to be added without adding more wires. Use of this interface requires the device to be in a power state with PCLK running. Control and status bits used for PIPE operations are mapped into 8-bit registers that are hosted in 12-bit address spaces in the PHY and the MAC. The registers are accessed using read and write commands driven over the signals M2P_MessageBus[7:0] and P2M_MessageBus[7:0]. These signals are synchronous with PCLK and are reset with Reset#.

Message Bus Interface Commands

The 4-bit commands are used for accessing the PIPE registers across the message bus. A transaction consists of a command and any associated address and data. All of the following are time-multiplexed over the bus from MAC and PHY:

- Commands (write_uncommitted, write_committed, read, read completion, write_ack)
- A 12-bit address, used for all types of reads and writes
- 8-bit data, either read or written

There can be cases where multiple PIPE interface signals change on the same PCLK cycle. To address such cases, the concept of write_uncommitted and write_committed is introduced. An uncommitted write is saved into a write buffer, and its associated data values are updated into the relevant PIPE registers at a future time when a write_committed is received, taking effect during the same PCLK cycle. Once a write_committed is sent, no new write, whether committed or uncommitted, nor any read command may be sent until a write_ack is received. It is also allowed to send NOP commands between a write_uncommitted and a write_committed. (Figure: a simple timing demonstration of the message bus.)

Message Address Space

The MAC and PHY each implement unique 12-bit address spaces. These address spaces host registers associated with the PIPE operations. The MAC accesses PHY registers using M2P_MessageBus[7:0], and the PHY accesses MAC registers using P2M_MessageBus[7:0]. The MAC and PHY access specific bits in the registers to initiate operations, initiate handshakes, and indicate status.
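To make the commit-and-acknowledge sequencing concrete, here is a small Python model of the write path described above. The command names follow the PIPE description, but the class structure and method names are illustrative assumptions; this is a behavioral sketch, not a signal-accurate model of the PIPE interface.

```python
# Illustrative model of PIPE message bus write sequencing (not signal-accurate).

class MessageBusTarget:
    def __init__(self):
        self.registers = {}        # 12-bit address -> 8-bit register value
        self.write_buffer = []     # uncommitted writes awaiting a commit
        self.awaiting_ack = False  # after write_committed, the bus is blocked

    def write_uncommitted(self, addr: int, data: int) -> None:
        assert not self.awaiting_ack, "no new commands until write_ack"
        self.write_buffer.append((addr, data))

    def write_committed(self, addr: int, data: int) -> None:
        assert not self.awaiting_ack, "no new commands until write_ack"
        self.write_buffer.append((addr, data))
        # All buffered writes take effect in the same PCLK cycle.
        for a, d in self.write_buffer:
            self.registers[a & 0xFFF] = d & 0xFF
        self.write_buffer.clear()
        self.awaiting_ack = True   # blocked until the peer returns write_ack

    def write_ack(self) -> None:
        self.awaiting_ack = False

bus = MessageBusTarget()
bus.write_uncommitted(0x010, 0xA5)   # buffered, not yet visible
bus.write_committed(0x011, 0x3C)     # both writes land in the same cycle
bus.write_ack()                      # peer acknowledges; bus is free again
assert bus.registers == {0x010: 0xA5, 0x011: 0x3C}
```

The write buffer plus single-cycle commit is what lets multiple register fields change "atomically" on one PCLK, mirroring the case where several legacy PIPE signals would have toggled together.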
Each 12-bit address space is divided into four main regions: the receiver address region, the transmitter address region, the common address region, and the vendor-specific address region. Each register field has an attribute description of either level or 1-cycle assertion. When a level field is written, the value written is maintained by the hardware until the next write to that field or until a reset occurs. When a 1-cycle field is written to assert the value high, the hardware maintains the assertion for only a single cycle and then automatically resets the value to zero on the next cycle. Cadence has a mature Verification IP solution for the verification of various aspects and topologies of PIPE PHY design. For more details, you may refer to the Simulation VIP for PIPE PHY | Cadence page, or you may send an email to support@cadence.com. Full Article Verification IP PHY VIP PIPE
on Deferrable Memory Write Usage and Verification Challenges By community.cadence.com Published On :: Thu, 17 Oct 2024 21:00:00 GMT

Real-time data processing and responsiveness are crucial in areas such as high-performance computing, data centers, and applications requiring low-latency data transfers. Deferrable Memory Write enables efficient use of PCIe bandwidth and resources by intelligently managing memory write operations based on system dynamics and workload priorities. By effectively leveraging Deferrable Memory Write (DMWr), devices can achieve optimized performance and responsiveness, aligning with the evolving demands of modern computing applications.

What Is Deferrable Memory Write?

The Deferrable Memory Write (DMWr) ECN introduced this new memory transaction type, which was later officially incorporated into PCIe 5.0 and CXL 2.0. DMWr flows like the existing read/write memory transactions; the major difference is that the Requester attempts to write to a given location in Memory Space using the non-posted DMWr TLP type, and the Completer may postpone completion of the write until other, higher-priority tasks complete, improving overall system efficiency and performance. DMWr requires the Completer to return an acknowledgment to the Requester and provides a mechanism for the recipient to defer (temporarily refuse to service) the Request. DMWr thus gives Endpoints and Hosts a way to choose whether to carry out or defer incoming DMWr Requests, which they can use to simplify the design of flow control, reduce latency, and improve throughput. (Figure A: Deferrable Memory Write TLP format.)

Example Scenario

Here's how DMWr works, with a simplified example. Imagine a system with an endpoint device (Device A) and a host CPU (Device B). Device B wants to write data to Device A's memory, but for varying reasons, such as system bus congestion or prioritization of other transactions, Device A can defer completion of the memory write request. The flow follows these steps:

1. Initiation of Memory Write: Device B initiates a memory write transaction to Device A. This involves sending the memory write request along with the data payload over the PCIe physical layer link.
2. Acknowledgment and Deferral: Upon receiving the memory write request, Device A acknowledges the transaction but may decide to defer its completion. Device A sends an acknowledgment (ACK) back to Device B, indicating it has received the data and intends to complete the write operation, but not immediately.
3. Deferred Completion: Device A defers the completion of the memory write operation to a later, more opportune time. This deferral allows Device A to prioritize other transactions or optimize the use of system resources, such as memory bandwidth or processor availability.
4. Completion and Response: At a later point, Device A completes the deferred memory write operation and sends a completion indication back to Device B. This completion typically includes any status updates or additional information related to the transaction.
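The deferral handshake can be summarized with a small behavioral model. Below is a hypothetical Python sketch of the requester/completer exchange above, including the kind of completion-timeout bound a verification checker would enforce (see the verification challenges below). The structure, names, and timeout value are illustrative assumptions, not TLP formats or a Cadence VIP API.

```python
# Illustrative model of a Deferrable Memory Write exchange (not TLP-accurate).

class DMWrCompleter:
    def __init__(self, busy_until: int):
        self.busy_until = busy_until   # time when higher-priority work finishes
        self.memory = {}
        self.pending = None            # (addr, data) of a deferred write

    def request(self, now: int, addr: int, data: int) -> str:
        if now < self.busy_until:      # higher-priority work: defer the write
            self.pending = (addr, data)
            return "ack_deferred"
        self.memory[addr] = data
        return "completed"

    def tick(self, now: int):
        if self.pending and now >= self.busy_until:
            addr, data = self.pending
            self.memory[addr] = data   # deferred completion happens here
            self.pending = None
            return "completion"
        return None

TIMEOUT = 10                            # checker bound on deferral latency
completer = DMWrCompleter(busy_until=4)
assert completer.request(now=0, addr=0x100, data=0xBEEF) == "ack_deferred"
for t in range(1, TIMEOUT):
    if completer.tick(t) == "completion":
        print(f"deferred write completed at t={t}")  # t=4
        break
else:
    raise TimeoutError("DMWr not completed within the allowed window")
```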
Usage and Importance of DMWr

Deferrable Memory Write provides improvements in the following aspects:

- Reduced Latency: By deferring less critical memory write operations, more critical transactions can be processed with lower latency, improving overall system responsiveness.
- Improved Efficiency: DMWr optimizes the utilization of system resources such as memory bandwidth and CPU cycles, enhancing the efficiency of data transfers within the PCIe architecture.
- Enhanced Performance: It allows devices to manage and prioritize transactions dynamically, potentially increasing overall system throughput and reducing contention.

Challenges in the Implementation of DMWr Transactions

The implementation of deferrable memory writes introduces several advancements and challenges in terms of usage and verification:

- Timing and Synchronization: DMWr allows transactions to be deferred, complicating the timing requirements for completing them within acceptable windows without protocol violations. Ensuring proper synchronization between devices becomes critical to prevent data loss or corruption.
- Protocol Compliance: Verification must ensure compliance with the PCIe 6.0 ECN and CXL specifications regarding when and how DMWr transactions can be initiated and completed.
- Performance Optimization: While DMWr can improve overall system performance by reducing latency, verifying its impact on system performance and ensuring it meets expected benchmarks is crucial.
- Error Handling: Handling errors related to deferred transactions adds complexity. Verifying error detection and recovery mechanisms under various scenarios (e.g., a timeout during deferral) is essential.

Verification Challenges of DMWr Transactions

Verifying DMWr transactions involves checks of functionality, timing, protocol compliance, performance, error scenarios, security, and data integrity at both the PCIe and CXL levels:

- Functional Verification: Verifying the correct implementation of DMWr at both ends of the PCIe link (transmitter and receiver) to ensure proper functionality and adherence to specifications.
- Timing Verification: Validating timing constraints associated with deferring writes and ensuring transactions are completed within specified windows without violating protocol rules.
- Protocol Compliance Verification: Checking that DMWr transactions adhere to PCIe and CXL protocol rules, including ordering rules and any restrictions on deferral based on the transaction type.
- Performance Verification: Assessing the impact of DMWr on overall system performance, including latency reduction and bandwidth utilization, through simulation and testing.
- Error Scenario Verification: Creating and testing scenarios to verify error handling mechanisms related to DMWr, such as timeouts, retries, and recovery procedures.
- Security Considerations: Assessing potential security vulnerabilities related to DMWr, such as data integrity risks during deferred transactions or exposure to timing-based attacks.

Among these, timing and synchronization verification is particularly challenging because of the complexities introduced by deferred transactions. Here are the key issues and approaches to address them:

Timing and Synchronization Issues

Transaction Completion Timing: Issue: Ensuring deferred transactions are completed within the specified time window without violating protocol timing constraints.
Approach: Design an internal timer and checker to model worst-case scenarios where transactions are deferred, and verify that they complete within allowable latency limits. This involves simulating various traffic loads and conditions to assess timing under different scenarios.

Ordering and Dependencies: Issue: Verifying that transactions deferred using DMWr maintain the correct ordering and dependencies relative to non-deferred transactions. Approach: Implement test scenarios that include mixed traffic of DMWr and non-DMWr transactions. Verify through simulation or emulation that dependencies and ordering requirements are correctly maintained across the PCIe link.

Interrupt Handling and Response Times: Issue: Verifying the handling of interrupts and ensuring timely responses from devices involved in DMWr transactions. Approach: Implement test cases that simulate interrupt generation during DMWr transactions. Measure and verify the response times to interrupts to ensure they meet system latency requirements.

In conclusion, while deferrable memory writes in PCIe and CXL offer significant performance benefits, their implementation and verification present several challenges related to timing, protocol compliance, performance optimization, and error handling. Addressing these challenges requires rigorous tests and testbench traffic, advanced verification methodologies, and a thorough understanding of the PCIe specifications; the motivation behind introducing deferrable writes carries forward into their effective use in CXL. The outcome of DMWr verification is confirming that the performance benefits of DMWr (reduced latency, improved throughput) are achieved without compromising timing integrity or violating protocol specifications.

In summary, PCIe and CXL are complex protocols with many verification challenges. You must understand many new specification changes and consider a robust verification plan for the new features and backward-compatible tests impacted by new features. Cadence's PCIe 6.0 Verification IP is fully compliant with the latest PCI Express 6.0 specifications and provides an effective and efficient way to verify the components interfacing with the PCIe 6.0 interface. Cadence VIP for PCIe 6.0 provides exhaustive verification of PCIe-based IP and SoCs, and we are working with early-adopter customers to speed up every verification stage.

More Information

For more info on how Cadence PCIe Verification IP and TripleCheck VIP enable users to confidently verify PCIe 6.0, see our VIP for PCI Express, VIP for Compute Express Link, and TripleCheck for PCI Express. See the PCI-SIG website for more details on PCIe in general and the different PCI standards.

Full Article CXL PCIe PCIe Gen5 Deferrable memory write transaction
on Wild River Collaborates with Cadence on CMP-70 Channel Modeling By community.cadence.com Published On :: Wed, 23 Oct 2024 23:00:00 GMT

Wild River Technology (WRT), the leading supplier of signal integrity measurement and optimization test fixtures for high-speed channels at data rates of up to 224G, has announced the availability of a new advanced channel modeling solution that helps achieve extreme signal integrity design to 70GHz. Read the press release. The CMP-70 program continues the industry-first simulation-to-measurement collaboration with Cadence that was initially established with the CMP-50. Significant resources were dedicated to the development of the CMP-70 by Cadence and WRT over almost three years. The CMP-70 will be on display at DesignCon 2025, January 28-30, in Cadence booth 827 to benchmark the Cadence Clarity 3D Solver.

"I am not a fan of hype-based programs that simply get attention," remarked Alfred P. Neves, WRT's co-founder and chief technical officer. "Both Cadence and Wild River brought substantial skills to the table in this project as we continued our industry-first simulation-to-measurement collaboration. The result is a proven, robust and accurate platform that brings extreme signal integrity to 70GHz designs. This application package has also been instrumental in demonstrating the robust 3D EM simulation capability of the Cadence Clarity solver."

"We're delighted to continue the joint development and validation program with WRT that started with the CMP-50," said Gary Lytle, product management director at Cadence. "The skilled and experienced signal integrity technologists that both companies bring to the program results in a superior signal integrity solution for our mutual customers."

CMP-70 Solution Features

The solution is available both in a standard configuration and as a custom solution for customer-specific stackups and fabrication. The primary target application is to support 3D EM solver analysis modeling versus time- and frequency-domain measurement methodologies. The solution features include:

- The CMP-70 platform, assembled and 100% TDR NIST-traceable tested, with custom stands
- A Material Identification overview web-based meeting, including anisotropic 3D material identification
- A cross-section PCB report and structures for using as-fabricated geometries
- Measured S-parameters, pre-tested for quality (passivity/causality) and resampled for time-domain simulations
- A host of novel crosstalk structures suited for 112G HD-level project analysis
- PCB layout design files (NDA required)
- An EDA starter library including loss models with industry-first accurate surface roughness models
- Comprehensive training available for 3D EM analysis: correspondence, material ID in the X-Y and Z axes for a host of EDA tools

Industry-First Hausdorff Technique

The WRT application package also includes an industry-first modified Hausdorff (MHD) technique, included as MATLAB code. This algorithmic approach provides an accurate way to compare two sets of measurements in multi-dimensional space to determine how well they match. The technique is used to compare the results simulated by the Clarity solver with those measured on the CMP-70 platform. The methodology and initial results are shown in the figure below, where the figure of merit (FOM) is calculated from 10, to 35, and finally to 50GHz. The MHD algorithm requires a MATLAB license, but WRT also accommodates customer data as another option, where WRT provides the comparison between measured and simulated data.
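For readers unfamiliar with the technique, the modified Hausdorff distance replaces the maximum in the classic Hausdorff distance with a directed mean, which makes the comparison robust to outliers. The Python sketch below shows the widely cited Dubuisson-Jain formulation as an illustration; it is a stand-in under that assumption, not WRT's MATLAB implementation, and the sample points are invented.

```python
# Illustrative modified Hausdorff distance (Dubuisson-Jain formulation), e.g.
# for comparing a measured trace against a simulated one as 2D point sets.
import math

def directed_mhd(A, B):
    """Mean, over points a in A, of the distance from a to its nearest b in B."""
    return sum(min(math.dist(a, b) for b in B) for a in A) / len(A)

def modified_hausdorff(A, B):
    """Symmetric MHD: the worse (larger) of the two directed means."""
    return max(directed_mhd(A, B), directed_mhd(B, A))

# Hypothetical (frequency_GHz, |S21|_dB) samples: measurement vs. simulation.
measured  = [(10, -1.2), (35, -3.9), (50, -6.8)]
simulated = [(10, -1.1), (35, -4.1), (50, -6.5)]
print(f"MHD figure of merit: {modified_hausdorff(measured, simulated):.3f}")
```

A smaller value indicates closer agreement between the two data sets, which is the sense in which the FOM quantifies simulation-to-measurement correlation.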
Additional Resources

If you are attending DesignCon 2025, be sure to stop by Cadence booth 827 to see WRT's CMP-70 advanced channel modeling solution in action with the Clarity 3D Solver. Check out our on-demand webinar, "Validating Clarity 3D Solver Accuracy Through Measurement Correlation." Learn more about the CMP-70 solution and the Clarity 3D Solver. For more information about Cadence's full suite of integrated multiphysics simulation solutions, download our Multiphysics System Analysis Solutions Portfolio.

Full Article
on Training Webinar: Fast Track RTL Debug with the Verisium Debug Python App Store By community.cadence.com Published On :: Thu, 24 Oct 2024 09:55:00 GMT

As a verification engineer, you're surely looking for ways to automate the debugging process. Have you developed your own scripts to ease specific debugging steps that tools don't offer? Working with scripts locally and manually is challenging, and so is reusing and organizing them. What if there was a way to create your own app with the required functionality and register it with the tool? The answer to that question is "Yes!" The Verisium Debug Python App Store lets you instantly add features and capabilities to your Verisium Debug application using Python apps that interact with Verisium Debug via the Python API. Join me, Principal Education Application Engineer Bhairava Prasad, for this training webinar and discover the Verisium Debug Python App Store. The app store allows you to search for existing apps, learn about them, install or uninstall them, and even customize existing apps.

Date and Time Wednesday, November 20, 2024 07:00 PST San Jose / 10:00 EST New York / 15:00 GMT London / 16:00 CET Munich / 17:00 IST Jerusalem / 20:30 IST Bangalore / 23:00 CST Beijing

REGISTER To register for this webinar, sign in with your Cadence Support account (email ID and password) to log in to the Learning and Support System*. Then select Enroll to register for the session. Once registered, you'll receive a confirmation email containing all login details. A quick reminder: If you haven't received a registration confirmation within one hour of registering, please check your spam folder and ensure your pop-up blockers are off and cookies are enabled. For issues with registration or other inquiries, reach out to eur_training_webinars@cadence.com.

Like this topic? Take this opportunity and register for the free online course related to this webinar topic: Verisium Debug Training. To view our complete training offerings, visit the Cadence Training website. Want to share this and other great Cadence learning opportunities with someone else? Tell them to subscribe. Hungry for Training? Choose the Cadence Training Menu that's right for you.

Related Courses Xcelium Simulator Training Course | Cadence

Related Blogs Unveiling the Capabilities of Verisium Manager for Optimized Operations - Verification - Cadence Blogs - Cadence Community Verisium SimAI: SoC Verification with Unprecedented Coverage Maximization - Corporate News - Cadence Blogs - Cadence Community Verisium SimAI: Maximizing Coverage, Minimizing Bugs, Unlocking Peak Throughput - Verification - Cadence Blogs - Cadence Community

Related Training Bytes Introducing Verisium Debug (Video) (cadence.com) Introduction to UVM Debug of Verisium Debug (Video) (cadence.com) Verisium Debug Customized Apps with Python API

Please see course learning maps for a visual representation of courses and course relationships. Regional course catalogs may be viewed here.

*If you don't have a Cadence Support account, go to Cadence User Registration and complete the requested information. Or visit Registration Help.

Full Article
on Cadence Fem.AI Summit: A Journey of Inspiration By community.cadence.com Published On :: Thu, 24 Oct 2024 21:00:00 GMT This year, the Cadence Giving Foundation (CGF) launched Fem.AI to achieve a more inclusive tech sector, and the inaugural Fem.AI Summit that took place on October 1 was a luminary in a world where technology is evolving at an unprecedented pace. The summit not only excelled in its mission to enlighten, empower, and mobilize stakeholders across various industries on the issue of gender disparity in high tech and AI, but was a celebration of innovation, diversity, and empowerment. As we reflect on the moments that made the summit unforgettable, it's clear that the event was more than just a meeting of minds—it was a movement for change! Shaping Tomorrow Together Cadence’s president and CEO, Anirudh Devgan, stated, “Women’s talent and perspectives are crucial to shaping the future of AI.” Devgan’s words epitomized the driving force behind the first-ever Fem.AI Summit which brought together innovators, educators, business leadership, and investors across industries to create an ecosystem that ensures women can fully participate in the AI revolution and burgeoning AI economy. The energy of pioneers ready to collectively disrupt the status quo filled the air, and as the day-long summit began, it became clear that we were part of something truly groundbreaking. The event's lineup of speakers held discussions that went beyond the technical aspects of AI, emphasizing the vital importance of diversity in technology. Such insights were lent by leading voices from MIT, Stanford, and UC Berkeley, who set the stage for inspiring discussions with speakers like Dr. Joy Buolamwini, Founder of the Algorithmic Justice League, and Reshma Saujani, Founder and CEO of Moms First and Girls Who Code. Included in this lineup of leading figures was Dr. Chelsea Clinton, Vice Chair of the Clinton Foundation, who left us with her hopes for the future of women in AI: “I’m hoping because of company-wide commitments like what we’re experiencing here today thanks to Cadence, that the people who will be part of designing [future technologies] will have a different group of people around the proverbial table or the computer screens doing that… and that women will be more integral into the conceptualization and then the actualization of AI-driven enterprises.” The hopes and visions for women in AI cannot manifest in a vacuum, they must be achieved with the support of individuals and systems from education all the way to the upper echelons of leadership. It is with this understanding, that Fem.AI is committed to investing in women at every stage of their STEM journey. Breaking Barriers It is with this ideal that we were honored to hear from women breaking through barriers of gender, race, and class in achieving pinnacles of success in areas of science and technology. Dr. Sarah H. Chen, Postdoctoral Researcher at Stanford and Thriving Stars Scholar at MIT, Niki Karanikola, Machine Learning Engineer and Break Through Tech AI Scholar at MIT, and Katya Echazarreta, NASA’s first Mexican Astronaut, showcased the resilience and determination that drive progress within and beyond our industry. Through their stories of persevering despite all odds, we were reminded that supporting students in STEM can create generational change with impacts beyond the realms of AI and technology. 
The final speaker at the Cadence Fem.AI Summit, the trailblazing Brandi Chastain, Founder of Bay FC, World Cup Champion, and Olympic Gold Medalist, left us with a powerful reminder: when faced with this opportunity, "our purpose needs to be intentional," especially in building the future of technology and AI, where "diversity is not something to be afraid of, but something to be embraced." Echoing this sentiment, summit attendees left the event reminded of the crucial role we collectively play in ensuring women are part of this tech revolution.

Moving Forward

While the summit may have concluded, its impact will continue through individuals, companies, and communities aspiring to achieve an equitable tech sector. This is just the start, and we must take collective action now. We hope that you will join Cadence to ensure that we clear the path and catalyze women's role in the AI revolution!

Meet Our Partners

Our partners are making Fem.AI's vision a reality through their important work advancing women in technology, including fostering STEM excellence in higher education, launching STEM careers, and achieving gender diversity in leadership. Learn more about the important work of each of our partners by visiting their pages: Break Through Tech, Last Mile Education Fund, Fast Forward, Generation VC, Include, Global Semiconductor Alliance.

Join the Fem.AI Alliance

Joining the Fem.AI Alliance signals that your company or institution is committed to evolving the AI workforce. By increasing the representation of women in AI, we aim to broaden the talent pool and the perspective so that AI represents us all. Through the Fem.AI Alliance, companies and institutions can share best practices, guidance, and inspiration. Since its launch, companies like the Equinix Foundation, NetApp, NVIDIA, Unity Technologies, and Workday have joined the Alliance in their commitment to Fem.AI's work and mission. Visit Fem.AI to get involved today or contact Fem.AI@cadence.com.

Full Article
on Versatile Use Case for DDR5 DIMM Discrete Component Memory Models By community.cadence.com Published On :: Tue, 29 Oct 2024 19:00:00 GMT

DDR5 DIMM Architectures

The DDR5 generation of Double Data Rate DRAM memories has experienced rapid adoption in recent years. In particular, the JEDEC-defined DDR5 Dual Inline Memory Module (DIMM) cards have become a mainstay for systems looking for high-density, high-bandwidth, off-chip random access memory[1]. Within a short time, the DIMM architecture evolved from an interconnected hierarchy of only SDRAM memory devices (UDIMM[2]) to complex subsystems of interconnected components (RDIMM/LRDIMM/MRDIMM[3]).

DIMM Designs and Popular Verification Use Cases

The growing complexity of the DIMMs presented a challenge for pre-silicon verification engineers, who could no longer simply validate against single DDR5 SDRAM memory models; they needed to consider how their designs would perform against DIMMs connected to each channel and operating at gigahertz clock speeds. To address this verification gap, Cadence developed DDR5 DIMM Memory Models that encapsulate all of the architectural complexities presented by real-world DIMMs, based on a robust methodology that is easy to use, easy to debug, and easy to reconfigure. This memory-subsystem-in-a-single-instance model has seen explosive adoption among the traditional IP developer and SoC integrator customers of Cadence Memory Models. The Cadence DIMM models act as a single unit with all of the relevant DIMM components instantiated and interconnected within, and with all AC/timing parameters among the various components fully matched out of the box, based on JEDEC specifications as well as datasheets of actual devices in the market. The typical use case for the DIMM models has been one where the DUT is a DDR5 Memory Controller + PHY IP stack and the validation plan mandates compliance with the JEDEC standards and memory device vendor datasheets.

Unique Use Case for the DIMM Discrete Component Models

Although the Cadence DIMM models have enjoyed tremendous proliferation because of their cohesive implementation and unified user API, the DIMM models are built on top of powerful, flexible discrete component models, each of which was designed to stand on its own as a complete SystemVerilog UVM-based VIP. All of these discrete component models exist in the Cadence VIP Catalog as standalone VIPs, complete with their own protocol compliance checking capabilities and their own configuration mappings that comprehensively model the individual AC/timing parameters. Because of this deliberate design decision, the Cadence DIMM Discrete Component Models can support a unique use-case scenario: some users seek to develop IC designs for the various DIMM components. Such users need verification environments that can model the individual components of a DIMM and allow them the option to replace one or another component with their component design IP. They can then validate that their component design is fully compatible with the rest of the components on the DIMM and that the overall DIMM remains compliant with JEDEC standards or memory vendor datasheets. The Cadence Memory VIP portfolio today includes various examples that demonstrate how customers can create DIMM "wrappers" by selecting from among the available DIMM discrete component models and "stitching" them together to build their own custom testbench around their specific component design IP.
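Conceptually, the wrapper pattern can be pictured with the following Python sketch: discrete component models are instantiated and interconnected, and any one of them can be swapped out for the user's component design under test. The classes, wiring, and component counts are an illustrative abstraction of the idea, not the Cadence VIP API or actual SystemVerilog testbench code.

```python
# Conceptual sketch of "stitching" DIMM discrete component models together
# (illustrative abstraction only; real wrappers are SystemVerilog/UVM VIPs).

class Component:
    def __init__(self, name: str):
        self.name = name
    def receive(self, item: str):
        print(f"{self.name} <- {item}")  # every internal interface is visible

class RCD(Component):
    """Registering Clock Driver: re-drives command/control to SDRAMs and DBs."""
    def __init__(self, sdrams, data_buffers):
        super().__init__("RCD")
        self.sdrams, self.data_buffers = sdrams, data_buffers
    def receive(self, item: str):
        super().receive(item)
        for dev in self.sdrams + self.data_buffers:
            dev.receive(item)

def build_lrdimm_wrapper(dut_rcd_factory=None):
    """Assemble an LRDIMM-style wrapper; optionally swap in a DUT RCD design."""
    sdrams = [Component(f"SDRAM{i}") for i in range(10)]  # incl. ECC devices
    dbs = [Component(f"DB{i}") for i in range(5)]
    make_rcd = dut_rcd_factory or RCD
    return make_rcd(sdrams, dbs)

dimm = build_lrdimm_wrapper()
dimm.receive("ACT bank=2 row=0x1A2")  # traffic observable at each interface
```

The point of the pattern is the seam: because each component is a standalone model behind a common interface, replacing one with design IP leaves the rest of the pre-verified subsystem, and its compliance checking, intact.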
A Solution for Unique Component Scenarios

The Cadence DDR5 DIMM Memory Models and DIMM Discrete Component Models provide users with a flexible approach to validating their specific component designs within a fully populated pre-silicon environment.

Augmented Verification Capabilities

When the DIMM "wrapper" model is augmented with the Cadence DFI VIP[4], which can simulate an MC+PHY stack and offers a SystemVerilog UVM test API to the verification engineer, the overall testbench transforms into a formidable pre-silicon validation vehicle. The DFI VIP is designed as a combination of an independent DFI MC VIP and a DFI PHY VIP connected to each other via the DFI Standard Interface and capable of operating seamlessly as a single unit. It presents a UVM sequence API to the user into the DFI MC VIP, with the memory interface of the PHY VIP connected to the DIMM "wrapper" model. With this testbench in hand, the user can take full advantage of the UVM sequence library that comes with the DFI VIP to enable deep validation of their component design inside the DIMM "wrapper" model.

Verification Capabilities Further Enhanced

A further enhancement comes with the potential addition of an instance of the Cadence DIMM Memory Model in passive monitor mode at the DRAM memory interface. The DIMM passive monitor consumes the same configuration describing the DIMM "wrapper" in the testbench and thus can act as a reference model for the DIMM wrapper. If the DIMM passive monitor responds successfully to accesses from the DFI VIP but the DIMM wrapper does not, the discrepancy exposes potential bugs in the DUT components or in the settings of their AC/timing parameters inside the DIMM wrapper.

Debuggability, Interface Visibility, and Protocol Compliance

One of the key benefits of the DIMM Discrete Component Models, whether in the unique use-case scenario described here or when working with the wholly unified DDR5 DIMM Memory Models, is the increased debuggability of the protocol functionality. The intentional separation of the discrete components of a DIMM gives the user full visibility of the memory traffic at every datapath landmark within a DIMM structure. For example, in modeling an LRDIMM or MRDIMM, the interface between the RCD component and the SDRAM components, the interface between the RCD component and the DB components, and the interface between the SDRAM components and the DB components are all visible and accessible to the user. The user has full access to dump the values and states of the wire interconnects at these interfaces to the waveform viewer and thus can observe and correlate the activity against any protocol violations flagged in the trace logs by any one or more of the DIMM Discrete Component Models. Access to these interfaces is freely available when using the DIMM Discrete Component Models. On the unified DDR5 DIMM Memory Models, a feature called Debug Ports enables the same level of visibility into the individual interconnects amidst the SDRAM components, RCD components, and DB components. When combined with the Waveform Debugger[5] capability that comes built into the VIPs and Memory Models offered by Cadence, and used with the Cadence Verisium Debug[6] tool, the enhanced debuggability becomes a powerful platform.
With these debug accesses enabled, the user can pull out transaction streams, chip-state and bank-state streams, mode register streams, and error message streams right next to their RTL signals in the same Verisium Debug waveform viewer window to debug failures all in one place. The Verisium Debug tool also parses all of the log files to probe and extract messages into a fully integrated Smart Log in a tabbed window fully hyperlinked to the waveform viewer, all at your fingertips.

A Solution for Every Scenario

Cadence's DDR5 DIMM Memory Models and DIMM Discrete Component Models, partnered with the Cadence DFI VIP, provide users with a robust and flexible approach to validating their designs thoroughly and effectively in pre-silicon verification environments ahead of tapeout commitments. The solution offers unparalleled latitude in debuggability when the Debug Ports and Waveform Debugger functions of the Memory Models are switched on and boosted with the use of the Cadence Verisium Debug tool.

[1] Shyam Sharma, DDR5 DIMM Design and Verification Considerations, 13 Jan 2023.
[2] Shyam Sharma, DDR5 UDIMM Evolution to Clock Buffered DIMMs (CUDIMM), 23 Sep 2024.
[3] Kos Gitchev, DDR5 12.8Gbps MRDIMM IP: Powering the Future of AI, HPC, and Data Centers, 26 Aug 2024.
[4] Chetan Shingala and Salehabibi Shaikh, How to Verify JEDEC DRAM Memory Controller, PHY, or Memory Device?, 29 Mar 2022.
[5] Rahul Jha, Cadence Memory Models - The Gold Standard, 15 Apr 2024.
[6] Manisha Pradhan, Accelerate Design Debugging Using Verisium Debug, 11 Jul 2023.

Full Article
on Lessons from the UMass Lowell Women’s Leadership Conference By community.cadence.com Published On :: Mon, 04 Nov 2024 22:00:00 GMT

This post was contributed by Liliko Uchida, application engineer at Cadence.

Being a "Woman in STEM" is a phrase that has long been used to describe the holistic experience shared by thousands of women globally, yet it still makes us feel isolated. Partially due to the statistics of gender population in the STEM workforce and the remainder due to our own internal obstacles, being a woman in STEM continues to be a challenge. While many of us know the should-do's and should-be's of taking on this unique role objectively, we struggle to implement them. After all, our perseverance as engineers, mathematicians, businesswomen, programmers, and scientists is largely affected by subjectivity. The UMass Lowell Women's Leadership Conference 2024 aimed to tackle this problem by uniting hundreds of women with shared experiences under one roof. Not only did the conference provide us with the knowledge necessary to persevere, but it also gave us the tools that will allow us to thrive and act upon the facts we already know. It is my hope that through this blog post, I can share some of my main takeaways from this special day.

Be Confident

This is one of the most palpable pieces of advice we always hear, yet so many of us struggle to build this confidence because we don't know how. Featured speaker Nicole Kalil defined confidence as "complete trust in oneself." One way to build this self-trust is by getting to know yourself on a deeper level. By creating a true inner connection, we begin to see ourselves as a whole instead of hyper-focusing on our shortcomings, frequently illusioned by imposter syndrome. In one of the sessions, we were asked to introduce ourselves to our neighbors, not by what we do for work, but by who we are as a person. Even if this opportunity does not arise every day, this practice can be done simply by listing characteristics of yourself that define who you are. Who do you care for? How do you show them? What are your life goals oriented towards? How do you observe others' behavior around you, and what does that say about how you make them feel? Getting to know you beneath the surface and allowing yourself to be seen for who you are is critical in building internal confidence. With practice, this self-reassurance will grow independent of external factors.

Take Risks

"Sometimes, you have to put your foot in the elevator" - Barb Vlacich, Keynote Speaker

When opportunities arise, the only thing you can do to have a chance is to try. Without putting your foot in the elevator, the doors will close, becoming a missed opportunity. Similarly, several of the conference's speakers emphasized that the answer to every unasked question will always be a no. Even if you are not ready to full-send a negotiation, ask for a raise, or respectfully disagree with a co-worker's opinion, start by getting comfortable asking uncomfortable questions. Just one discomfort a day will help in building an immunity to the anxiety that comes with taking risks, typically driven by our self-doubt. Another interesting point that stood out from the conference was the statistics of self-assessed qualifications between men and women. During the negotiation panel, it was revealed that men typically feel they only need 60% of the qualifications under a job description to apply, whereas women often feel they need close to 100%.
These numbers alone demonstrate how differences in self-assessment continue to funnel men, and not women, into STEM. The next time you seek a new opportunity, assess yourself based on the 60% and use it as a checklist threshold. The more women pursue STEM careers using this threshold, the more of these roles we will begin to populate. Build Your Genuine Network “The essence of communication lies in the mutual exchange of ideas and emotions. And when the listener isn’t invested, it undermines the entire purpose of the conversation. Why are you having it anyway?” This is a quote from episode 186 of Julie Brown’s podcast This Sh!t Works called “The 5 Steps to Being an Active Listener”. Julie Brown is a networking coach, author, and podcast host who guided an energetic and candid conversation about networking and building a personal brand for women. Networking is often misunderstood as putting your name and qualifications out on the table for as many people as possible to pick up your card. While making these things known is important, they are not what nurtures effective connections. The key to cultivating your genuine network is to activate a sincere interest in the people you meet. Become the proactive receiver of the confidence exercise discussed above. When you meet someone new, what can you take away from them as a person, not an employee? By making people feel heard, even through the little conversations, you can begin to develop more meaningful connections that resonate. And, with practice, the sometimes inherent need to overcompensate by defining yourself with your resume will slowly fade. It was a wonderful opportunity to attend the UML Women’s Leadership Conference with four other inspiring Cadence women. Not only was the conference a motivating learning experience, but it was also a chance for us to bond together as women and feel supported by each other. The most eye-opening part of the day was seeing just how many like-minded women were sitting under the same roof. The conclusion of the event led me to feel proud to be an engineer, proud to be at Cadence, and most importantly, proud to be a woman. Learn more about life at Cadence. Full Article
on Solutions to Maximize Data Center Performance Featured at OCP Global Summit 2024 By community.cadence.com Published On :: Wed, 06 Nov 2024 00:21:00 GMT The demand for higher compute performance, energy efficiency, and faster time-to-market drove the conversations at this year's Open Compute Project (OCP) Global Summit in San Jose, California. The summit was the scene of groundbreaking innovations, expert-led sessions, and networking opportunities to drive the future of data center technology. For those who didn't get to attend or stop by our booth, here's a recap of Cadence's comprehensive solutions that enable next-generation compute technology, AI data center design, analysis, and optimization. Optimized Data Center Design and Operations As the data center community increasingly faces demands for enhanced efficiency, thermal management, sustainability, and performance optimization, data center operators, IT managers, and executives are looking for solutions to these challenges. At the Cadence booth, attendees explored the Cadence Reality Digital Twin Platform and Celsius EC Solver. These technologies are pivotal in achieving high-performance standards for AI data centers, providing advanced digital twin modeling capabilities that redefine next-generation data center design and operation. The Celsius EC Solver demonstration showed how it solves challenging thermal and electronics cooling management problems with precision and speed. CadenceCONNECT: Take the Heat Out of Your AI Data Center Cadence hosted a networking reception on October 16 titled "Take the Heat Out of Your AI Data Center." In today's AI era, managing the heat generated by high-density computing environments is more critical than ever. This reception offered insights into current and emerging data center technologies, digital twin cooling strategies that deliver energy-saving operations, and a chance to engage with industry leaders, Cadence experts, and peers to explore the latest cooling, AI, and GPU acceleration advancements. Here's a recap: Researcher, author, and entrepreneur Dr. Jon Koomey highlighted the inefficiency of data centers in his talk "The Rise of Zombie Data Centers," noting that 20-30% of their capacity is stranded and unused. He advocated for organizational changes and technological solutions like digital twins to reduce wasted energy and improve computational effectiveness as AI deployments increase. In "A New Millennium in Multiphysics System Analysis," Cadence Corporate VP Ben Gu explained the company's significant strides in multiphysics system analysis, evolving from chip simulation to a broader application of computational software for simulating various physical systems, including entire data centers. He noted that the latest Cadence venture, a digital twin platform for data center optimization, opened the opportunity to use simulation technology to optimize the efficiency of data centers. Senior Software Engineering Group Director Albert Zeng highlighted the Cadence Reality DC suite's ability to transform data center operations through simulation, emphasizing its multi-phase engine for optimal thermal performance and the integration of AI capabilities for enhanced design and management.
A panel discussion titled "Turning AI Factory Blueprints into Reality at the Speed of Light" featured industry experts from NVIDIA, Norman Wright Precision Environmental and Power, NV5, Switch Data Centers, and Cadence, who explored the evolving requirements and multidimensional challenges of AI factories, emphasizing the need for collaboration across the supply chain to achieve high-performing and sustainable data centers. Watch the highlights. Transforming Designs from Chips to Data Centers The OCP Global Summit 2024 has reaffirmed its status as a pivotal event for data center professionals seeking to stay at the forefront of technological advancements. Cadence's contributions, from groundbreaking digital twin technologies to innovative cooling strategies, have shed light on the path forward for efficient, sustainable data centers. For data center professionals, IT managers, and engineers, the insights gained at this summit are invaluable in navigating the challenges and opportunities presented by the burgeoning AI era. Partnering with Arm Arm Total Design Cadence is a member of the Arm Total Design program. At an invitation-only special Arm event, Cadence's VP of Research and Development, Lokesh Korlipara, delivered a presentation focusing on data center challenges and design solutions with Arm Neoverse Compute Subsystem (CSS). The session highlighted: Efficient integration of Arm Neoverse CSS into system on chips (SoCs) with pre-integrated connectivity IP Performance analysis and verification of the Neoverse CSS integration into the SoC through Cadence's System VIP verification suite and automated testbench creation, enhancing both quality and productivity Jumpstarting designs through Cadence's collaboration with Arm for 3D-IC system planning, chiplets, and interposers Design Services readiness and global scale to support and/or deliver the most demanding Arm Neoverse CSS-based SoC design projects Cadence Supports Arm CSS in Arm Booth During the event, Cadence conducted a demo in the Arm booth that showcased the Cadence System VIP verification suite. The demo highlighted automated testbench creation and performance analysis for integrating the Arm CSS into SoCs while enhancing verification quality and productivity. Summary Cadence offers data center solutions for designing everything from the compute and networking chips to the board, racks, data centers, and campuses. Stay connected with Cadence and other industry leaders to continue exploring the innovations set to redefine the future of data centers. Learn More Cadence Joins Arm Total Design Cadence Arm-Based Solutions Cadence Reality Digital Twin Platform Full Article
on Celebrating Milestones: The Cadence Bangalore Toastmasters Club’s Journey By community.cadence.com Published On :: Wed, 06 Nov 2024 18:00:00 GMT On November 5, 2024, the Cadence Bangalore Toastmasters Club celebrated a significant milestone by hosting its 50th meeting. Established in December 2020, the club was created to provide a supportive environment for individuals looking to improve their communication and leadership skills. Over the years, the club has evolved into a vibrant community filled with success stories of personal development and newfound confidence. A testament to the club's dedication is its achievement of the "Select Distinguished Club" status during the 2023-2024 program year. By fulfilling 7 out of 10 distinguished goals, the club highlighted its commitment to excellence—a success driven by its vibrant members' relentless focus and perseverance. The strategic insight gained from regular Toastmasters committee meetings and the influential "Moments of Truth" sessions held in 2023 and 2024 are key to this success. Our club members have consistently demonstrated strong performance in various speech contests, with notable achievements across multiple levels. In 2023, members excelled in Evaluation and Table Topics contests, reaching the district level while advancing to the Division Level in the International Speech Contest. Continuing their success into 2024, members again qualified for area-level contests, securing third-place positions in the Evaluation and Table Topics categories, highlighting the club's dedication and competitive spirit. The 50th meeting was based on the theme of serendipity. It was not only a milestone celebration but also a vibrant festival of achievements and growth. The day buzzed with energy as activities like a spirited Treasure Hunt injected enthusiasm and camaraderie among attendees. Distinguished guests, including Kripa Venkitachalam and Madhavi Rao, enriched the occasion with inspiring speeches. Madhavi reignited the club's spirit, while Kripa's discourse on the Growth Mindset and the "Power of Yet" encouraged members to pursue continuous self-improvement. The Cadence Bangalore Toastmasters Club is enthusiastic about its promising future and is committed to creating an environment that promotes personal and professional growth. Many members are close to completing their Toastmasters levels and pathways, and this term, a new group of approximately 30 individuals has joined, bringing the total membership to 52. This vibrant community is just beginning its journey and is eager to reach new milestones together through mutual support and a shared commitment to excellence. The transformations experienced by many club members are truly compelling. They often share how the club has significantly improved their communication skills and boosted their confidence. One member recalls, "Before joining, I found public speaking intimidating. Now, I embrace every opportunity to share my ideas." Another member highlights how the club's supportive environment helped him overcome his fear of public speaking, propelling his career to new heights. This culture of constructive feedback and continuous improvement has inspired countless members to pursue their dreams with renewed determination and optimism. The Cadence Bangalore Toastmasters Club's journey is a living testament to the power of community and the potential within each of us to grow and achieve greatness. 
As the club continues to evolve and inspire, it serves as a beacon for those aspiring to transform their skills and seize their moment in the spotlight. Learn more about life at Cadence. Full Article
on Cleared to Land: An Interview with Cadence Veterans ERG Lead Johnathan Edmonds By community.cadence.com Published On :: Thu, 07 Nov 2024 18:30:00 GMT Each November, we are reminded of the bravery and dedication of those who have served our country. At Cadence, we thank our Veteran employees for their patriotism by reaffirming our commitment to honoring their sacrifices and recognizing their contributions to our business success. Our diverse and inclusive culture is strengthened by the unique perspective of our Veteran employees, and we are proud to support the Veterans Inclusion Group as a space for community members and their allies to connect. In celebration of Veterans Day, we were excited to catch up with Johnathan Edmonds, Veterans Inclusion Group Lead and Design Engineering Director, for a heartfelt chat on his journey through military service to leadership within Cadence. Throughout the conversation, he shared the importance of creating space for Veterans, the skills they offer, and his aspirations for what the Veterans Inclusion Group will achieve in the years ahead. Oh yeah, and he flies planes, too! Join us as we dive into what makes this holiday special for so many across the nation and how we can respectfully commemorate it together. Johnathan, you’re a retired Air Force Reservist, pilot, and now a Design Engineering Director. Can you tell us about your journey from the military to your current role at Cadence? I started my military and electronics journey in the Navy. I enlisted at 18 and served for six years as an aviation electronics technician. During this time, I was able to learn about and repair electronics on planes. This set me up for success, and when I was honorably discharged, I attended Virginia Tech to study computer engineering. Once I graduated, I continued my career as an engineer, but I still wanted to be a military pilot. From my past experience, I knew the reserves were an option where I could learn to fly and still have a civilian career. Not only was I lucky enough to get selected to go to pilot training, but after I returned from flight school, my luck grew, and I was hired at Cadence. Cadence has supported me throughout my military career, which has been a great benefit, as many companies don’t support reservists. The best thing about serving and being employed at Cadence is how I could blend my skill sets to further the Air Force’s mission and achieve great things in engineering. As the first lead of Cadence’s Veterans Inclusion Group, you played an integral part in growing our culture and building community at the company since launching the group four years ago. What inspired you to take on the role of Inclusion Group Lead? I was inspired by three things: camaraderie, service, and outreach. I wanted to see if we could achieve a similar sense of community through the Veterans Inclusion Group as we had during our service life. I also wanted to see how we could better serve our Veterans here at Cadence. I wanted to explore any benefits that could be expanded, roles that could be developed by Vets, and, lastly, I wanted to serve a broader community. COVID-19 put a damper on some of the community support, but we are getting back on track with Veteran employment programs and volunteer efforts like Carry the Load and Gold Star Families. Why is it important to have this space dedicated to Veteran employees? There are many reasons! Networking, for one, creates a stronger, more unified Cadence culture. 
Two, Vets face a variety of issues not generally understood by those who have not served, such as PTSD, where to get help for disabilities, how to get an old medical record, etc. As I mentioned, I’m also passionate about connecting Veterans with employment and job opportunities. It is so nice to work for a company that actively recruits Vets. We have our own “language,” if you will, so it’s nice to have a space to talk in the language that we are familiar with. What have been some of your favorite moments leading this group over the past few years? Are there any “wins” that you would like to recognize? We have a lot of wins. Events held during COVID-19 and getting past COVID-19, donating to worthwhile causes, and hosting guest speakers are all fantastic milestones and accomplishments. That said, the biggest win is the hiring of new Veteran employees. Mark Murphy, Corporate VP of Sales Operations, and I have both welcomed Vets to our team during this time, and it is such a joy to watch what someone can do when given the opportunity to succeed in the right environment. As you are set to transition out of the lead role next year, what do you hope to see the Veterans Inclusion Group accomplish next? My hope is that the Veterans Inclusion Group partners with other companies, expanding our reach externally and exploring new opportunities to engage Veterans outside of Cadence. Johnathan (left) speaks on an inclusion group panel, along with David Sallard (center), lead of Cadence's Black Inclusion Group and Sr. Principal Application Engineer; Christina Jamerson (on screen), lead of Cadence's Abilities Inclusion Group and Demand Generation Director; and Dianne Rambke (right), lead of Cadence's Latinx Inclusion Group and Marketing Communications Director. What are the important ways that people can signal inclusion and respectfully honor Veterans at work? What are the most meaningful or impactful actions employees everywhere can take to support Veteran coworkers? I think there is one answer to both questions. I recommend that people engage with their companies’ employee resource groups (ERGs) and have conversations with them. Opening up the lines of communication will lead to new paths in their journeys. What are you looking forward to in 2025, both personally and professionally? In 2025, professionally, I am looking forward to taking mixed-signal systems and verification to another level by including emulation, automatic model generation, and seeing which boundaries we can push in our SerDes and Chiplets products. Personally, I am looking forward to making my SXS street legal so I can drive places without getting a ticket, seeing my children participate in sports, church, and school, and taking my wife on vacation to Europe or somewhere else we can unplug. Learn more about Cadence’s Inclusion Groups, diverse culture, and commitment to belonging. Full Article
on Randomization considerations for PCIe Integrity and Data Encryption Verification Challenges By community.cadence.com Published On :: Fri, 08 Nov 2024 05:00:00 GMT Peripheral Component Interconnect Express (PCIe) is a high-speed interface standard widely used for connecting processors, memory, and peripherals. With the increasing reliance on PCIe to handle sensitive data and critical high-speed data transfer, ensuring data integrity and encryption during verification is essential. As we know, in the field of verification, randomization is a key technique that drives robust PCIe verification. It introduces unpredictability to simulate real-world conditions and uncover hidden bugs in the design. This blog examines the significance of randomization in PCIe IDE verification, focusing on how it ensures data integrity and encryption reliability, while also highlighting the unique challenges it presents. For more relevant details and understanding on PCIe IDE, you can refer to Introducing PCIe's Integrity and Data Encryption Feature. The Importance of Data Integrity and Data Encryption in PCIe Devices Data Integrity: Ensures that the transmitted data arrives unchanged from source to destination. Even minor corruption in data packets can compromise system reliability, making integrity a critical aspect of PCIe verification. Data Encryption: Protects sensitive data from unauthorized access during transmission. Encryption in PCIe follows a standard to secure information while operating at high speeds. Maintaining both data integrity and data encryption at PCIe’s high-speed data transfer rates of 64GT/s in PCIe 6.0 and 128GT/s in PCIe 7.0 is essential for all endpoint devices. However, validating these mechanisms requires comprehensive testing and verification methodologies, which is where randomization plays a crucial role. You can refer to Why IDE Security Technology for PCIe and CXL? for more details on this. Randomization in PCIe Verification Randomization refers to the generation of test scenarios with unpredictable inputs and conditions to expose corner cases. In PCIe verification, this technique helps ensure that all possible behaviors are tested, including rare or unexpected situations that could cause data corruption, encryption failures, and serious problems later. For PCIe IDE verification, we therefore rely on randomization to verify behavior more efficiently. Randomization for Data Integrity Verification Here are some randomized verification approaches that mimic real-world traffic conditions, uncovering subtle integrity issues that might not surface with normal verification methods. 1. Randomized Packet Injection: This technique injects randomized data packets into the communication stream between devices. We inject random, malformed, or out-of-sequence packets into the PCIe link and mix valid and invalid IDE-encrypted packets to check the system’s ability to detect and reject unauthorized or invalid packets, and to check that encryption/decryption occurs correctly across packets. We also verify that the system logs proper errors or alerts when encountering invalid packets. This ensures coverage of different data paths and robust protocol checking. This technique helps assess the resilience of the IDE feature in PCIe in the following terms: (i) Data corruption: Detecting if the system can maintain data integrity. (ii) Encryption failures: Testing the robustness of the encryption under random data injection.
(iii) Packet ordering errors: Ensuring reordering does not affect data delivery. 2. Random Errors and Fault Injection: This involves simulating random bit flips, PCRC errors, or protocol violations to help validate the robustness of PCIe’s error detection and correction mechanisms. These techniques help assess how well the PCIe IDE implementation: (i) Detects and responds to unexpected errors. (ii) Maintains secure communication under stress. (iii) Follows the PCIe error recovery and reporting mechanisms (AER – Advanced Error Reporting). (iv) Ensures encryption and decryption states stay synchronized across endpoints. 3. Traffic Pattern Randomization: Randomizing the sequence, size, and timing of data packets helps test how the device maintains data integrity under heavy, unpredictable traffic loads. Randomization for Data Encryption Verification Encryption adds complexity to verification, as encrypted data streams are not readable by traditional checks. Randomization becomes essential to test how encryption behaves under different scenarios. Randomization in data encryption verification ensures that vulnerabilities, such as key reuse or predictable patterns, are identified and mitigated. 1. Random Encryption Keys and Payloads: Randomly varying keys and payloads help validate the correctness of encryption without hardcoding assumptions. This ensures that encryption logic behaves correctly across all possible inputs. 2. Randomized Initialization Vectors (IVs): Many encryption protocols require a unique IV for each transaction. Randomized IVs ensure that encryption does not repeat patterns. To understand the IDE key management flow, we can follow the diagram below, which illustrates a detailed example key programming flow using the IDE_KM protocol. Figure 1: IDE_KM Example As Figure 1 shows, the functionality of the IDE_KM protocol involves Start of IDE_KM Session, Device Capability Discovery, Key Request from the Host, Key Programming to PCIe Device, and Key Acknowledgment. First, the host starts the IDE_KM session by detecting the presence of the PCIe devices; if the device supports the IDE protocol, the system continues with the key programming process. Then a query occurs to discover the device’s encryption capabilities; it determines whether the device supports dynamic key updates or static keys. Then the host sends a request to the Key Management Entity to obtain a key suitable for the devices. Once the key is obtained, the host programs the key into the IDE Controller on the PCIe endpoint. Both the host and the device now share the same key to encrypt and authenticate traffic. The device acknowledges that it has received and successfully installed the encryption key by sending an acknowledgment message back to the host. Once both the host and the PCIe endpoint are configured with the key, a secure communication channel is established, and all data transmitted over the PCIe link is encrypted to maintain confidentiality and integrity. IDE_KM thus plays a crucial role in distributing keys securely and maintaining encryption and integrity for PCIe transactions, and the randomized key approach ensures that the encryption does not repeat patterns. 3. PHE Randomization: Partial Header Encryption (PHE) is an additional mechanism added to Integrity and Data Encryption (IDE) in PCIe 6.0.
PHE is validated using a variety of traffic; incorporating randomization in the APIs provided for validating the PHE feature makes the encryption checks more robust. Partial Header Encryption in Integrity and Data Encryption for PCIe has more detailed information on this. Figure 2: High-Level Flow for Partial Header Encryption 4. Randomization of IDE Address Association Register values: IDE Address Association Registers 1/2/3 must be configured according to the memory address range of the IDE partner ports. The fields of the IDE address registers are split into multiple values such as Memory Base Lower, Memory Limit Lower, Memory Base Upper, and Memory Limit Upper. An IDE implementation can have multiple register blocks covering 32- or 64-bit addresses, different register sizes, 0-255 selective streams, 0-15 address blocks, etc. Randomized verification can help cover all of these corner cases. Please refer to Figure 3. Figure 3: IDE Address Association Register 5. Random Faults During Encryption: Injecting random faults (e.g., dropped packets or timing mismatches) ensures the system can handle disruptions and prevent data leakage. Challenges of IDE Randomization and Their Solutions Randomization introduces a vast number of scenarios, making it computationally intensive to simulate every possibility. Constrained randomization limits random inputs to valid ranges while still covering edge cases, and coverage-driven verification ensures critical scenarios are tested without excessive redundancy. Verifying encrypted data with random inputs increases complexity: encryption masks data, making it hard to verify outputs without compromising security. Here we can implement various IDE checks on the IDE callback to analyze encrypted traffic without decrypting it. Randomization can also trigger unexpected failures, which are often difficult to reproduce. With seed-based randomization, a specific seed generates a repeatable random sequence, which helps in reproducing and analyzing the behavior precisely. Conclusion Randomization is a powerful technique in PCIe verification, ensuring robust validation of both data integrity and data encryption. It helps us uncover subtle bugs and edge cases that non-randomized testing might miss. In Cadence PCIe VIP, we support full-fledged IDE verification with rigorous randomized verification that ensures data integrity. Robust and reliable encryption mechanisms ensure secure and efficient data communication. However, randomization also brings various challenges, and to overcome them we adopt a combination of constrained randomization, seed-based testing, and coverage-driven verification. As PCIe continues to evolve toward higher speeds and stronger security demands, our Cadence PCIe VIP stays in line with industry demand and verifies high-performance systems that safeguard data in real-world environments. For more information, you can refer to Verification of Integrity and Data Encryption (IDE) for PCIe Devices and Industry's First Adopted VIP for PCIe 7.0. More Information: For more info on how Cadence PCIe Verification IP and TripleCheck VIP enable users to confidently verify IDE, see our VIP for PCI Express, VIP for Compute Express Link, and TripleCheck for PCI Express. For more information on PCIe in general, and on the various PCI standards, see the PCI-SIG website. Full Article
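To make the constrained-randomization and seed-based reproducibility ideas above concrete, here is a minimal SystemVerilog sketch. It is not taken from the Cadence VIP; the class and field names (ide_packet, pcrc_err) and the literal seed are hypothetical illustrations of the technique.

class ide_packet;
  rand bit [7:0]  payload[];   // randomized payload bytes
  rand bit        encrypted;   // mix of valid (encrypted) and invalid traffic
  rand bit        pcrc_err;    // occasionally inject a PCRC error
  rand bit [95:0] iv;          // per-packet initialization vector

  // Constrained randomization: keep inputs in valid ranges while
  // still biasing the distribution toward interesting corner cases.
  constraint c_len { payload.size() inside {[1:256]}; }
  constraint c_mix { encrypted dist {1 := 9, 0 := 1}; }  // mostly valid packets
  constraint c_err { pcrc_err dist {0 := 95, 1 := 5}; }  // rare fault injection
endclass

module tb;
  initial begin
    ide_packet pkt = new();
    // Seed-based randomization: re-running with the same seed replays
    // the same stimulus sequence, making random failures reproducible.
    pkt.srandom(32'hDEAD_BEEF);
    repeat (10) begin
      if (!pkt.randomize()) $error("randomization failed");
      $display("len=%0d enc=%b pcrc_err=%b",
               pkt.payload.size(), pkt.encrypted, pkt.pcrc_err);
    end
  end
endmodule

In a real regression flow, the seed would come from the command line so that a failing run can be replayed exactly.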
on A Guide to Build A Mini Guitar/Audio Amplifier Based on LM386 By community.cadence.com Published On :: Thu, 29 Mar 2018 10:05:29 GMT Hey, is it suitable to post here? I wanted a small yet robust amp for practicing while I travel. I wanted something that would fit in my pocket yet still be loud enough to hear. Presented here is an amplifier based upon the LM386 audio amplifier. There is a standard circuit in the data sheet that is an excellent place to start. Materials needed:
1 - HM359 project box
1 - 668-1237 speaker
1 - BS6I battery conn
1 - CP1-3515 stereo jack
1 - SC1316 stereo jack
2 - 450-1742 knob
1 - 679-1856 switch
1 - 3mm LED
1 - 10 ohm 1/4W resistor
1 - 10uF ceramic cap
1 - .05 uF ceramic cap
1 - 420 uF electrolytic cap
1 - 8 ohm resistor
2 - 51AADB24 10K pot
1 - HM1252 circuit board
1 - LM386N-4 amplifier
Wire and solder
Step 1: Prep the enclosure. Careful planning is required the first time you free-build a circuit. The circuit board has solder pads but no traces. You will have to use thin wire to make the connections for the circuit to work. Begin by laying out the components on the circuit board that will need to pass through the enclosure. This enclosure has a removable top panel, which will be used for the volume, gain, and 1/4 inch stereo jack. Space is limited, so check for fit before drilling. All drilling of the plastic should be done with a step drill bit. This will make the cleanest holes without breaking the plastic. Lay out the pots a few spaces back but still in line with the desired position. Mark the center of each pot shaft, then drill with a step drill to the tightest-fitting hole size. Make a center mark between the pot holes, then drill for the stereo jack. On the inside of the top cover, position and mark where the speaker will go. Make a template on grid paper the same size as the speaker. Tape the template to the inside of the cover as shown, then use a step bit to drill holes at the center of every square in the grid. This will form the speaker grille. Clean up the holes.
Step 2: Place the major components. Solder the pots to the circuit board as shown, then place the stereo jack. (Note: in order to get the final fit, I had to trim and modify the stereo jack housing a little.) Next, position and solder the switch on the circuit board and mark a space on the top cover that will need to be cut for the switch opening. Use a small file to cut the opening. Use a sharp knife to bevel the edges of the switch hole to allow for easier operation. Drill a hole in the side of the upper case for the headphone jack and fasten it in place. (I had to recess the hole a bit for the retaining nut to grab.)
Step 3: Build the circuit. The speaker is held in place by using two small brackets that come with the serial cable connector hood. (I had a bunch around that would never be used.) Refer to the standard circuit shown in the LM386 datasheet. The basic circuit only has the volume control, while the datasheet shows how to add a gain control across pins 1 and 8 of the amplifier. The speaker is wired in series with the headphone jack. The headphone jack has internal switches that shut the speaker off when the phones are plugged in. I chose to use a chip socket for the amplifier, which makes prototyping easier since you do not have to worry as much about solder heat. Carefully lay the circuit out on the board and begin wiring components together. I added a second pot and cap in series between pins 1 and 8 of the amp to be able to manually set the gain in addition to volume.
Check your connections with a multimeter before adding the amplifier. I chose to add an LED indicator for power. This was done by using one side of the switch contacts from the battery. The LED is in series with a 220 ohm resistor. Assemble the case and insert the battery.
Step 4: Final notes. If the speaker is noisy while the headphones work normally, try reversing the speaker connections. If that does not correct the issue, connect an 8 ohm resistor across the speaker contacts. You may have to place an insulating layer between the speaker and the place where the stereo jack comes through to prevent contact. This will be noted by a loud buzz. You may have to add some foam in the battery compartment to stop the battery from banging around. For reference, I've also read an article about amplifiers: http://www.apogeeweb.net/article/60.html Thanks for reading! Full Article
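A quick back-of-envelope for the gain control described above (a sketch based on values recalled from the LM386 datasheet; verify against your copy): the voltage gain is approximately

\[ A_v \approx \frac{2 \times 15\,\mathrm{k\Omega}}{150\,\Omega + Z_{1\text{-}8}} \]

where \(Z_{1\text{-}8}\) is the impedance placed between pins 1 and 8. With the pins left open, the internal 1.35 kΩ resistor gives \(A_v \approx 30000/1500 = 20\); a bypass capacitor alone gives \(A_v \approx 30000/150 = 200\). The series pot-and-cap used in this build sweeps the gain smoothly between roughly those two extremes.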
on TensorFlow Optimization in DSVM: Azure and Cadence By community.cadence.com Published On :: Mon, 22 Oct 2018 12:41:39 GMT Hello Folks, Problem statement first: How does one properly set up TensorFlow for running on a DSVM using a remote Docker environment? Can this be done in aml_config/*.runconfig? I receive the following message, and I would like to be able to utilize the increased speeds of the extended FMA operations. tensorflow/core/platform/cpu_feature_guard.cc:140] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA Background: I utilize a local Docker environment managed through Azure ML Workbench for initial testing and code validation so that I'm not running an expensive DSVM constantly. Once I assess that my code is to my liking, I then run it on a remote Docker instance on an Azure DSVM. I want a consistent conda environment across my compute environments, so this works out extremely well. However, I cannot figure out how to control the TensorFlow build to optimize for the hardware at hand (i.e., my local Docker on macOS vs. remote Docker on Ubuntu DSVM). Full Article
on Using oscillograph waveform file CSV as the Pspice simulation signal source By community.cadence.com Published On :: Tue, 20 Nov 2018 06:28:04 GMT Hi, I saved the oscilloscope waveform as a CSV file. Now I need to use this waveform file as the signal source for a low-pass filter simulation. I searched and read the PSpice help documents and did not find a method. How can this be done? Are there any reference documents or examples? Thanks! Full Article
on Functional coverage report. By community.cadence.com Published On :: Wed, 13 Feb 2019 23:37:00 GMT Is there a way to generate readable coverage reports, rather than ucd or another database format? I have written a basic covergroup and passed the arguments [-covoverwrite -cov_cgsample -cov_debuglog -coverage u] to the xrun command; however, I don't see anything in the sim directory, nor anything in the logs indicating the covergroups have been hit. How can I confirm that covergroups are getting hit, and essentially observe the bins? In Questa, you essentially get this as part of the log itself. Full Article
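If tool-specific reports aren't showing up, one language-level sanity check is to query the covergroup directly from the testbench (a minimal sketch with made-up module and signal names; the xrun flags are as given in the question). get_inst_coverage() is defined by IEEE 1800, so any compliant simulator can print the hit percentage straight into the log.

module cov_demo;
  bit clk;
  bit [3:0] data;

  covergroup cg @(posedge clk);
    cp_data: coverpoint data {
      bins low  = {[0:7]};
      bins high = {[8:15]};
    }
  endgroup

  cg cg_inst = new();

  initial begin
    repeat (20) begin
      data = $urandom_range(0, 15);
      #5 clk = 1;  // covergroup samples on this rising edge
      #5 clk = 0;
    end
    // Prints per-instance coverage so the log itself confirms the bins
    // were hit, independent of any report database.
    $display("cg coverage = %0.2f%%", cg_inst.get_inst_coverage());
  end
endmodule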
on How do I use TCL to get connections between modules in INNOVUS. By community.cadence.com Published On :: Sun, 20 Sep 2020 04:04:00 GMT Please give me some ideas. Thank you very much. Full Article
on Formal Verification Approach for I2C Slave By community.cadence.com Published On :: Mon, 16 Nov 2020 15:31:30 GMT Hello, I am new to formal verification and I have a concept question about how to verify an I2C slave block. I think the answer should be valid for any serial interface that needs to receive information over several clocks before taking an action. The protocol description is the following: I have a serial clock (SCL), Serial Data Input (SDI), and Serial Data Output (SDO), all of which are ports of the I2C slave block. The protocol looks like this: The first byte received by the slave consists of 7 bits of sensor address, and the 8th bit is the command, 0/1 Write/Read. After the first 8 bits, the slave sends an ACK (SDO = 1 for 1 clock) if the sensor address is correct. Let's consider only this case, where I want to verify that the slave responds with an ACK if the sensor address is correct. The only solution I found so far was to use the internal buffer of the block, which saves the received bits during the 8 clocks. The signal is called shift_s. I also needed to use the internal chip state (state_s) and an internal counter (shift_count_s). Instead of doing a direct check of the SDO (sdo_o) depending on SDI (sdi_d_i), I used the internal shift_s register. My question is whether my approach is the correct one, or whether it is possible to write the verification at a black-box level. Below are the two properties: the first checks the connection from SDI to the internal buffer; the second checks the connection between the internal buffer and the output.

property prop_i2c_sdi_store;
  @(posedge sclk_n_i)
    $past(i2c_bl.state_s == `STATE_RECEIVE_I2C_ADDR) |->
      i2c_bl.shift_s == byte'({$past(i2c_bl.shift_s), $past(sdi_d_i)});
endproperty
APF_I2C_CHECK_SDI_STORE: assert property(prop_i2c_sdi_store);

property prop_i2c_sensor_addr(sens_addr_sel, sens_addr);
  @(posedge sclk_n_i)
    (i2c_bl.state_s == `STATE_RECEIVE_I2C_ADDR) &&
    (i2c_addr_i == sens_addr_sel) &&
    (i2c_bl.shift_count_s == 7)
    ##1 (i2c_bl.shift_s inside {sens_addr, sens_addr + 1}) |-> sdo_o;
endproperty
APF_I2C_CHECK_SENSOR_ADDR0: assert property(prop_i2c_sensor_addr(0, `I2C_SENSOR_ADDRESS_A0));
APF_I2C_CHECK_SENSOR_ADDR1: assert property(prop_i2c_sensor_addr(1, `I2C_SENSOR_ADDRESS_A1));
APF_I2C_CHECK_SENSOR_ADDR2: assert property(prop_i2c_sensor_addr(2, `I2C_SENSOR_ADDRESS_A2));
APF_I2C_CHECK_SENSOR_ADDR3: assert property(prop_i2c_sensor_addr(3, `I2C_SENSOR_ADDRESS_A3));

PS: i2c_addr_i is the address selection for the slave (there are 4 configurable sensor addresses, but this is not important for the case). Thank you! Full Article
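For comparison, the check could also be phrased at the black-box level by shifting SDI into an SVA local variable, so that no internal signals are referenced. This is only a hedged sketch, not a verified answer: frame_start is a hypothetical strobe standing in for however the start condition is detected, and the sampling alignment would need tuning to the real protocol timing.

// Sketch: capture the 7 address bits from SDI in a property-local variable,
// then require an ACK on SDO when the address matches.
property p_addr_ack(bit [6:0] sens_addr);
  bit [6:0] sh;
  @(posedge sclk_n_i)
    (frame_start, sh = '0)                    // hypothetical start-of-frame strobe
    ##1 (1'b1, sh = {sh[5:0], sdi_d_i}) [*7]  // shift in the 7 address bits
    ##1 (sh == sens_addr)                     // 8th bit is R/W, ignored here
    |=> sdo_o;                                // ACK expected on the next clock
endproperty
APF_I2C_BLACKBOX_ADDR0: assert property (p_addr_ack(`I2C_SENSOR_ADDRESS_A0));

The trade-off: black-box properties survive RTL refactoring and check the design from the outside, while white-box properties like the ones above usually converge faster in a formal tool because the solver can anchor on internal state.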
on Issue With Loudness Normalization By community.cadence.com Published On :: Thu, 21 Jan 2021 12:19:15 GMT Hello everyone. In recent days, I've been having a weird problem with sound output on my Windows 10 PC: I can't control the loudness. Is there any possibility that the PCB of the sound card is damaged? Full Article
on Sweep Function Error By community.cadence.com Published On :: Fri, 21 Jun 2024 10:32:44 GMT When I try to implement the sweep inside my Verilog file, I get errors on the following sweep statement (the tran statement is at line 63 of my file):

swp2 sweep param=Ndiscmin values=[0.011 0.5055 1] {
    tran2 tran start=0 stop=1300n errpreset=conservative
}

Full Article
on Here Is Why the Indian Voter Is Saddled With Bad Economics By indiauncut.com Published On :: 2019-02-03T03:54:17+00:00 This is the 15th installment of The Rationalist, my column for the Times of India. It’s election season, and promises are raining down on voters like rose petals on naïve newlyweds. Earlier this week, the Congress party announced a minimum income guarantee for the poor. This Friday, the Modi government released a budget full of sops. As the days go by, the promises will get bolder, and you might feel important that so much attention is being given to you. Well, the joke is on you. Every election, HL Mencken once said, is “an advance auction sale of stolen goods.” A bunch of competing mafias fight to rule over you for the next five years. You decide who wins, on the basis of who can bribe you better with your own money. This is an absurd situation, which I tried to express in a limerick I wrote for this page a couple of years ago: POLITICS: A neta who loves currency notes/ Told me what his line of work denotes./ ‘It is kind of funny./ We steal people’s money/And use some of it to buy their votes.’ We’re the dupes here, and we pay far more to keep this circus going than this circus costs. It would be okay if the parties, once they came to power, provided good governance. But voters have given up on that, and now only want patronage and handouts. That leads to one of the biggest problems in Indian politics: We are stuck in an equilibrium where all good politics is bad economics, and vice versa. For example, the minimum guarantee for the poor is good politics, because the optics are great. It’s basically Garibi Hatao: that slogan made Indira Gandhi a political juggernaut in the 1970s, at the same time that she unleashed a series of economic policies that kept millions of people in garibi for decades longer than they should have been. This time, the Congress has released no details, and keeping it vague makes sense because I find it hard to see how it can make economic sense. Depending on how they define ‘poor’, how much income they offer and what the cost is, the plan will either be ineffective or unworkable. The Modi government’s interim budget announced a handout for poor farmers that seemed rather pointless. Given our agricultural distress, offering a poor farmer 500 bucks a month seems almost like mockery. Such condescending handouts solve nothing. The poor want jobs and opportunities. Those come with growth, which requires structural reforms. Structural reforms don’t sound sexy as election promises. Handouts do. A classic example is farm loan waivers. We have reached a stage in our politics where every party has to promise them to assuage farmers, who are a strong vote bank everywhere. You can’t blame farmers for wanting them – they are a necessary anaesthetic. But no government has yet made a serious attempt at tackling the root causes of our agricultural crisis. Why is it that Good Politics in India is always Bad Economics? Let me put forth some possible reasons. One, voters tend to think in zero-sum ways, as if the pie is fixed, and the only way to bring people out of poverty is to redistribute. The truth is that trade is a positive-sum game, and nations can only be lifted out of poverty when the whole pie grows. But this is unintuitive. Two, Indian politics revolves around identity and patronage. 
The spoils of power are limited – that is indeed a zero-sum game – so you’re likely to vote for whoever can look after the interests of your in-group rather than care about the economy as a whole. Three, voters tend to stay uninformed for good reasons, because of what Public Choice economists call Rational Ignorance. A single vote is unlikely to make a difference in an election, so why put in the effort to understand the nuances of economics and governance? Just ask, what is in it for me, and go with whatever seems to be the best answer. Four, Politicians have a short-term horizon, geared towards winning the next election. A good policy that may take years to play out is unattractive. A policy that will win them votes in the short term is preferable. Sadly, no Indian party has shown a willingness to aim for the long term. The Congress has produced new Gandhis, but not new ideas. And while the BJP did make some solid promises in 2014, they did not walk that talk, and have proved to be, as Arun Shourie once called them, UPA + Cow. Even the Congress is adopting the cow, in fact, so maybe the BJP will add Temple to that mix? Benjamin Franklin once said, “Democracy is two wolves and a lamb voting on what to have for lunch.” This election season, my friends, the people of India are on the menu. You have been deveined and deboned, marinated with rhetoric, seasoned with narrative – now enter the oven and vote. The India Uncut Blog © 2010 Amit Varma. All rights reserved. Follow me on Twitter. Full Article
on We Must Reclaim Nationalism From the BJP By indiauncut.com Published On :: 2019-04-14T03:13:32+00:00 This is the 18th installment of The Rationalist, my column for the Times of India. The man who gave us our national anthem, Rabindranath Tagore, once wrote that nationalism was “a great menace.” He went on to say, “It is the particular thing which for years has been at the bottom of India’s troubles.” Not just India’s, but the world’s: In his book The Open Society and its Enemies, published in 1945 as Adolf Hitler was defeated, Karl Popper ripped into nationalism, with all its “appeals to our tribal instincts, to passion and to prejudice, and to our nostalgic desire to be relieved from the strain of individual responsibility which it attempts to replace by a collective or group responsibility.” Nationalism is resurgent today, stomping across the globe hand-in-hand with populism. In India, too, it is tearing us apart. But must nationalism always be a bad thing? A provocative new book by the Israeli thinker Yael Tamir argues otherwise. In her book Why Nationalism, Tamir makes the following arguments. One, nation-states are here to stay. Two, the state needs the nation to be viable. Three, people need nationalism for the sense of community and belonging it gives them. Four, therefore, we need to build a better nationalism, which brings people together instead of driving them apart. The first point needs no elaboration. We are a globalised world, but we are also trapped by geography and circumstance. “Only 3.3 percent of the world’s population,” Tamir points out, “lives outside their country of birth.” Nutopia, the borderless state dreamed up by John Lennon and Yoko Ono, is not happening anytime soon. If the only thing that citizens of a state have in common is geographical circumstance, it is not enough. If the state is a necessary construct, a nation is its necessary justification. “Political institutions crave to form long-term political bonding,” writes Tamir, “and for that matter they must create a community that is neither momentary nor meaningless.” Nationalism, she says, “endows the state with intimate feelings linking the past, the present, and the future.” More pertinently, Tamir argues, people need nationalism. I am a humanist with a belief in individual rights, but Tamir says that this is not enough. “The term ‘human’ is a far too thin mode of delineation,” she writes. “Individuals need to rely on ‘thick identities’ to make their lives meaningful.” This involves a shared past, a common culture and distinctive values. Tamir also points out that there is a “strong correlation between social class and political preferences.” The privileged elites can afford to be globalists, but those less well off are inevitably drawn to other narratives that enrich their lives. “Rather than seeing nationalism as the last refuge of the scoundrel,” writes Tamir, “we should start thinking of nationalism as the last hope of the needy.” Tamir’s book bases its arguments on the West, but the argument holds in India as well. In a country with so much poverty, is it any wonder that nationalism is on the rise? The cosmopolitan, globe-trotting elites don’t have daily realities to escape, but how are those less fortunate to find meaning in their lives? I have one question, though. Why is our nationalism so exclusionary when our nation is so inclusive? In the nationalism that our ruling party promotes, there are some communities who belong here, and others who don’t. 
(And even among those who ‘belong’, they exploit divisions.) In their us-vs-them vision of the world, some religions are foreign, some values are foreign, even some culinary traditions are foreign – and therefore frowned upon. But the India I know and love is just the opposite of that. We embrace influences from all over. Our language, our food, our clothes, our music, our cinema have absorbed so many diverse influences that to pretend they come from a single legit source is absurd. (Even the elegant churidar-kurtas our prime minister wears have an Islamic origin.) As an example, take the recent film Gully Boy: its style of music, the clothes its protagonists wear, even the attitudes in the film would have seemed alien to us a few decades ago. And yet, could there be a truer portrait of young India? This inclusiveness, this joyous khichdi that we are, is what makes our nation a model for the rest of the world. No nation embraces all other nations as ours does. My India celebrates differences, and I do as well. I wear my kurta with jeans, I listen to ghazals, I eat dhansak and kababs, and I dream in the Indian language called English. This is my nationalism. Those who try to divide us, therefore, are the true anti-nationals. We must reclaim nationalism from them. The India Uncut Blog © 2010 Amit Varma. All rights reserved. Follow me on Twitter. Full Article
on Lessons from an Ankhon Dekhi Prime Minister By indiauncut.com Published On :: 2019-05-05T03:17:51+00:00 This is the 19th installment of The Rationalist, my column for the Times of India. A friend of mine was very impressed by the interview Narendra Modi granted last week to Akshay Kumar. ‘Such a charming man, such great work ethic,’ he gushed. ‘He is the kind of uncle I would want my kids to have.’ And then, in the same breath, he asked, ‘How can such a good man be such a bad prime minister?” I don’t want to be uncharitable and suggest that Modi’s image is entirely manufactured, so let’s take the interview at face value. Let’s also grant Modi his claims about the purity of his neeyat (intentions), and reframe the question this way: when it comes to public policy, why do good intentions often lead to bad outcomes? To attempt an answer, I’ll refer to a story a friend of mine, who knows Modi well, once told me about him. Modi was chilling with his friends at home more than a decade ago, and told them an incident from his childhood. His mother was ill once, and the young Narendra was tending to her. The heat was enervating, so the boy went to the switchboard to switch on the fan. But there was no electricity. My friend said that as he told this story, Modi’s eyes filled with tears. Even after all these years, he was moved by the memory. My friend used this story to make the point that Modi’s vision of the world is experiential. If he experiences something, he understands it. When he became chief minister of Gujarat, he made it his stated mission to get reliable electricity to every part of Gujarat. No doubt this was shaped by the time he flicked a switch as a young boy and the fan did not budge. Similarly, he has given importance to things like roads and cleanliness, since he would have experienced the impact of those as a young man. My term for him, inspired by Rajat Kapoor’s 2014 film, is ‘the ankhon dekhi prime minister’. At one level, this is a good thing. He sees a problem and works for the rest of his life to solve it. But what of things he cannot experience? The economy is a complex beast, as is society itself, and beyond a certain level, you need to grasp abstract concepts to understand how the world works. You cannot experience them. For example, spontaneous order, or the idea that society and markets, like language, cannot be centrally directed or planned. Or the positive-sum nature of things, which is the engine of our prosperity: the idea that every transaction is a win-win game, and that for one person to win, another does not have to lose. Or, indeed, respect for individual rights and free speech. One understands abstract concepts by reading about them, understanding them, applying them to the real world. Modi is not known to be a reader, and this is not his fault. Given his background, it is a near-miracle that he has made it this far. He wasn’t born into a home with a reading culture, and did not have either the resources or the time when he was young to devote to reading. The only way he could learn about the world, thus, was by experiencing it. There are two lessons here, one for Modi himself and others in his position, and another for everyone. The lesson in this for Modi is a lesson for anyone who rises to such an important position, even if he is the smartest person in the world. That lesson is to have humility about the bounds of your knowledge, and to surround yourself with experts who can advise you well. Be driven by values and not confidence in your own knowledge. 
Gather intellectual giants around you, and stand on their shoulders. Modi did not do this in the case of demonetisation, which he carried out against the advice of every expert he consulted. We all know the damage it caused to the economy. The other learning from this is for all of us. How do we make sense of the world? By connecting dots. An ankhon-dekhi approach will get us very few dots, and our view of the world will be blurred and incomplete. The best way to gather more dots is reading. The more we read, the better we understand the world, and the better the decisions we take. When we can experience a thousand lives through books, why restrict ourselves to one? A good man with noble intentions can make bad decisions with horrible consequences. The only way to hedge against this is by staying humble and reading more. So when you finish reading this piece, think of an unread book that you’d like to read today – and read it! The India Uncut Blog © 2010 Amit Varma. All rights reserved. Follow me on Twitter. Full Article
on Population Is Not a Problem, but Our Greatest Strength By indiauncut.com Published On :: 2019-06-09T03:27:29+00:00 This is the 21st installment of The Rationalist, my column for the Times of India. When all political parties agree on something, you know you might have a problem. Giriraj Singh, a minister in Narendra Modi’s new cabinet, tweeted this week that our population control law should become a “movement.” This is something that would find bipartisan support – we are taught from school onwards that India’s population is a big problem, and we need to control it. This is wrong. Contrary to popular belief, our population is not a problem. It is our greatest strength. The notion that we should worry about a growing population is an intuitive one. The world has limited resources. People keep increasing. Something’s gotta give. Robert Malthus made just this point in his 1798 book, An Essay on the Principle of Population. He was worried that our population would grow exponentially while resources would grow arithmetically. As more people entered the workforce, wages would fall and goods would become scarce. Calamity was inevitable. Malthus’s rationale was so influential that this mode of thinking was soon called ‘Malthusian.’ (It is a pejorative today.) A 20th-century follower of his, Harrison Brown, came up with one of my favourite images on this subject, arguing that a growing population would lead to the earth being “covered completely and to a considerable depth with a writhing mass of human beings, much as a dead cow is covered with a pulsating mass of maggots.” Another Malthusian, Paul Ehrlich, published a book called The Population Bomb in 1968, which began with the stirring lines, “The battle to feed all of humanity is over. In the 1970s hundreds of millions of people will starve to death in spite of any crash programs embarked upon now.” Ehrlich was, as you’d guess, a big supporter of India’s coercive family planning programs. “I don’t see,” he wrote, “how India could possibly feed two hundred million more people by 1980.” None of these fears have come true. A 2007 study by Nicholas Eberstadt called ‘Too Many People?’ found no correlation between population density and poverty. The greater the density of people, the more you’d expect them to fight for resources – and yet, Monaco, which has 40 times the population density of Bangladesh, is doing well for itself. So is Bahrain, which has three times the population density of India. Not only does population not cause poverty, it makes us more prosperous. The economist Julian Simon pointed out in a 1981 book that through history, whenever there has been a spurt in population, it has coincided with a spurt in productivity. Such as, for example, between Malthus’s time and now. There were around a billion people on earth in 1798, and there are around 7.7 billion today. As you read these words, consider that you are better off than the richest person on the planet then. Why is this? The answer lies in the title of Simon’s book: The Ultimate Resource. When we speak of resources, we forget that human beings are the finest resource of all. There is no limit to our ingenuity. And we interact with each other in positive-sum ways – every voluntary interaction leaves both people better off, and the amount of value in the world goes up. This is why we want to be part of economic networks that are as large, and as dense, as possible.
This is why most people migrate to cities rather than away from them – and why cities are so much richer than towns or villages. If Malthusians were right, essential commodities like wheat, maize and rice would become relatively scarcer over time, and thus more expensive – but they have actually become much cheaper in real terms. This is thanks to the productivity and creativity of humans, who, in Eberstadt’s words, are “in practice always renewable and in theory entirely inexhaustible.” The error made by Malthus, Brown and Ehrlich is the same error that our politicians make today, and not just in the context of population: zero-sum thinking. If our population grows and resources stays the same, of course there will be scarcity. But this is never the case. All we need to do to learn this lesson is look at our cities! This mistaken thinking has had savage humanitarian consequences in India. Think of the unborn millions over the decades because of our brutal family planning policies. How many Tendulkars, Rahmans and Satyajit Rays have we lost? Think of the immoral coercion still carried out on poor people across the country. And finally, think of the condescension of our politicians, asserting that people are India’s problem – but always other people, never themselves. This arrogance is India’s greatest problem, not our people. The India Uncut Blog © 2010 Amit Varma. All rights reserved. Follow me on Twitter. Full Article
on Start Your Engines: Optimizing Mixed-Signal Simulation Efficiency By community.cadence.com Published On :: Wed, 05 Jun 2024 20:18:00 GMT During a mixed-signal simulation, the analog engine usually dominates the simulation time and resources. If you need to run only the analog engine in several windows, or if you would like to run multiple tests of the same circuit with different stimuli or test patterns, then you need to run the simulation multiple times. View this blog to know more about the two advanced technologies that Spectre AMS Designer provides to help you improve the efficiency of your mixed-signal designs and to increase the simulation speed.(read more) Full Article AMS mixed-signal methodology AMS Designer Start Your Engines AMS simulation
on Virtuoso Studio: How Do You Name Simulation Histories in Virtuoso ADE Assembler? By community.cadence.com Published On :: Fri, 07 Jun 2024 12:16:00 GMT This blog describes an efficient way to name the histories saved by the simulation runs in Virtuoso ADE Assembler.(read more) Full Article Virtuoso Analog Design Environment Custom IC Virtuoso ADE Assembler ADE Assembler IC23.1 Virtuoso IC23.1
on Start Your Engines: Create and Insert Connect Modules for Mixed-Signal Verification By community.cadence.com Published On :: Tue, 11 Jun 2024 16:17:00 GMT Read this blog to know how you can easily create and insert connect modules using Spectre AMS Designer with the Verilog-AMS standard language defined by Accellera. (read more) Full Article AMS AMS Designer Mixed-Signal AMS simulation mixed-signal design AMS Verification mixed-signal verification
on Knowledge Booster Training Bytes - Writing Physical Verification Language Rules By community.cadence.com Published On :: Wed, 03 Jul 2024 08:56:00 GMT Have you ever wanted to write a DRC rule deck to check for space or width constraints on polygons? Or have you wondered how the multiple lines of an LVS rule deck extract and conduct a comparison between the schematic and layout? Maybe you've been curious about the role of rule deck writers in creating high-quality designs ready for tape-out. If any of these questions interest you, there is good news: the latest version (v23.1) of the Physical Verification Rules Writer (PVLRW) course is designed to teach you rule deck writing. This free 16-hour online course includes audio and labs designed to make your learning experience comfortable and flexible. Whether you are new to the concept or an experienced CAD/PDK engineer, the course is structured to enhance your rule deck writing skills. The PVLRW course covers six core modules: Layer Processing, DRC Rules, Layout Extraction, ERC and LVS Rules, Schematic Netlisting, and Coloring Rules. There are also three optional appendix sections. Each module explains the relevant rules with syntax, concepts, graphics, examples, and case studies. This course is based on tool versions PEGASUS231 and Virtuoso Studio IC231. Pegasus Input and Output Pegasus is a cloud-ready physical verification signoff solution that helps engineers deliver advanced-node integrated circuits (ICs) to market faster. Pegasus requires input data in the form of layout geometry, schematic netlists, and rules that direct the tool operation. The rules fall into two categories: those that describe the fabrication process and those that control the job-specific operation. Pegasus provides log and report files, netlists, databases, and error databases as output. Overview of Pegasus Rule File Rule decks written in Physical Verification Language (PVL) work for the Cadence PV signoff tools Pegasus and PVS (Physical Verification System). The PVL rules are placed in a file that is selected for a run from the GUI or the command line, as the user directs. PVL rules may appear on separate lines within the file and can also be contained in named rule blocks. Each line of code starts with a PVL rule that uses prefix-type notation: a keyword followed by options, input layer or variable names, and output layer or variable names. A rule block has the format of the keyword rule, followed by a rule name you wish to give it, followed by an opening curly brace. You enter the rules you wish to perform, followed by a closing curly brace on its own line. Sample rule deck with individual lines of code and rule blocks. DRC Rules The first step in a typical Pegasus flow is a Design Rule Check (DRC), which verifies that layout geometries conform to the minimum width, spacing, and other fabrication process rules required by an IC foundry. Each foundry specifies its own process-dependent rules that must be met by the layout design. There are three types of DRC rules: layer definition rules, layer derivation rules, and DRC design check rules. Layer definition rules identify the layers contained in the input layout database, and layer derivation rules derive additional layers from the original input layers, allowing the tool to test the design against specific foundry requirements using the design check rules.
A sample DRC Rule deck. A layout view displaying the DRC violations. LVS Rules The Pegasus Layout Versus Schematic (LVS) tool compares the layout netlist with the schematic netlist to check for discrepancies. There are two essential LVS rule sets: LVS extraction rules and comparison rules. LVS extraction rules extract drawn devices and connectivity information from the input layout geometry data and write the result out as a layout netlist. The LVS extraction rule set also includes the layer definition, derivation, extraction, connectivity, and netlisting rules. LVS comparison rules are associated with comparing the extracted layout netlist to a schematic netlist. A sample LVS Rule deck. Tcl, Macros, and Conditional Commands Tcl is supported and used in various Pegasus functionalities, such as Pegasus rule files and the Pegasus Configurator. Macros are functional templates that are defined once and can be used multiple times in a rule file. Conditional commands are used to process or skip specific commands in the rule file. Do You Have Access to the Cadence Support Portal? If not, follow the steps below to create your account. On the Cadence Support portal, select Register Now and provide the requested information on the Registration page. You will need an email address and host ID to sign up. If you need help with registration, contact support@cadence.com. To stay up to date with the latest news and information about Cadence training and webinars, subscribe to the Cadence Training emails. If you have questions about courses, schedules, online, public, or live onsite training, reach out to us at Cadence Training. For any questions, general feedback, or future blog topic suggestions, please leave a comment. Related Resources Product Manuals Cadence Pegasus Developers Guide Rapid Adoption Kits Running Pegasus DRC/LVS/FILL in Batch Mode Training Byte Videos What Is the Run Command File? How to Run PVS-Pegasus Jobs in GUI and Batch Modes? PVS DRC Run Form - Setup Rules What Is the PVS/Pegasus Layer Viewer? PVL Coloring Ruledecks with Docolor and Stitchcolor PVL Commands: dfm_property with Primary & Secondary Layer PVS Quantus QRC Overview Online Courses Pegasus Verification System PVS (Physical Verification System) Virtuoso Layout Design Basics About Knowledge Booster Training Bytes Knowledge Booster Training Bytes is an online journal that relays information about Cadence Training videos, online courses, and upcoming webinars in the Learning section of the Cadence Learning and Support portal. This blog category brings you direct links to these videos, courses, and other related material on a regular basis. Subscribe to receive email notifications about our latest Custom IC Design blog posts. Full Article Virtuoso Studio Routing Layout Suite Cadence training training bytes Circuit Design Cadence Education Services Custom IC Design online training
on Start Your Engines: The Innovation Behind Universal Connect Modules (UCM) By community.cadence.com Published On :: Fri, 02 Aug 2024 08:10:00 GMT Read this blog to know more about the innovation behind Universal Connect Modules (UCM).(read more) Full Article SystemVerilog Start Your Engines Spectre AMS Designer Verilog-AMS Mixed-Signal mixed-signal verification
on PCB Chamfering Board edge connectors By community.cadence.com Published On :: Thu, 09 Dec 2021 15:12:48 GMT Hi, I am looking into chamfering the edge of a PCB for board edge connectors. I have used the fillet command before but am new to chamfering. Below is the description: the PCB edges are chamfered in thickness as well as at the corners. Using OrCAD PCB hotfix S023. Full Article
on 10 Layer PCB project won't generate Gerber's completely for middle layers By community.cadence.com Published On :: Thu, 09 Dec 2021 16:29:21 GMT Hello Fellow PCB Designers, We have a 10-layer PCB design that originated in PADS and was converted over to Allegro 17.4. This is an old design, but it is manufacturable and works perfectly fine. When I generate a Gerber for the top or bottom layer, it comes out fine. But most of the middle layers are etch and vias for power and grounds, and their Gerbers come out mostly blank; there might be some details, but in the Gerber view everything is displayed correctly. The design does have many close spacings. I have not changed anything in Constraint Manager yet, and I have turned off a lot of the DRCs, but I am thinking there might be something wrong with the constraints. I find that the CSet is set to 2_18, and I am not sure yet what this means; also, there are many of these definitions (PCS 3, 4, 5, etc.) that are the same as CSet 2_18. Any suggestions would be great; we are currently looking into this and have seen that even a small change in Constraint Manager can cause long processing and even Allegro crashes, and this is a large project. Thanks Much, Mike Pollock. Full Article
on SPB17.4 installation package build defect By community.cadence.com Published On :: Thu, 09 Dec 2021 23:05:50 GMT 1. Some components in the installation package cannot be deselected; even if you do not select them, they are still installed. Only the shortcut icons are omitted; the files are still released to the installation directory. 2. Why is "Catia Application Frame" duplicated? It appears in both "x:\Cadence\SPB_17.4\tools\bin" and "x:\Cadence\SPB_17.4\tools\spatial". Shouldn't "Catia Application Frame" use the latest version? 3. Follow-up update patches should clean up the useless files and extra empty folders!!! The SPB17.4 installation package is currently the worst installation package I have seen for large-scale software packaging. Full Article
on datasheets for difference of Allegro PCB and OrCAD Professional By community.cadence.com Published On :: Tue, 14 Dec 2021 09:08:17 GMT Hi All, I am looking for a resource that describes which functions differ between OrCAD Professional and the Allegro tiers. Is there any such resource? Regards Full Article
on The default location of OrCAD Capture library Pin Number is incorrect By community.cadence.com Published On :: Tue, 14 Dec 2021 21:38:21 GMT The default position of the pin number is incorrect. Full Article
on How to magnify a board on a film view By community.cadence.com Published On :: Thu, 16 Dec 2021 17:36:00 GMT I have a small board that is not readable even though the document is 11" x 17". Is there a way I could expand/magnify the board along with the components on it to make them legible? I have created a new film that displays the bottom and top sides of the board, but the board is too small and the components are not legible. Perhaps there is a way to upscale or expand it? Please note I have other stuff in the document that I am not showing (notes and other things), and I am trying to make just the boards look bigger in some way. I do have a PDF image of the same file where the board appears to be MUCH bigger and is fully legible, and I am trying to match that. Thank you all. Full Article
on Version upgrade 17.2 to 17.4 - Cadence OrCAD Capture By community.cadence.com Published On :: Tue, 21 Dec 2021 10:49:08 GMT Hello, We have a number of workstations with version 17.2 that work with a floating license server. We want to know if they can be upgraded to version 17.4. If so, should the floating license server be upgraded as well? In addition, how can we find out where the license was purchased from? Thanks! Full Article
on "net logic" question By community.cadence.com Published On :: Thu, 23 Dec 2021 22:52:28 GMT hello: i use the command "net logic" to change/assign the net name of the pins but the command can be used for only one pin at a time. is there a way to change/assign the same net name on 100 pins all at once? i have a daisy chain design so i need to assign one net name for 100s of pins. ======== thank you david, i was able to do it. i am writing this section because i can not reply to your comment. Full Article
on Unconnected nets By community.cadence.com Published On :: Fri, 24 Dec 2021 13:55:17 GMT I have a design which says there are 6 unconnected nets, but 'Display All Nets' shows only 4 unconnected. When I try to look at a non-connection, it appears connected and nothing shows. What is happening? Full Article
on UI issues of PCB Environment Editor 17.4 By community.cadence.com Published On :: Sun, 26 Dec 2021 14:04:11 GMT Hi, I found that under the Dark Theme of PCB Environment Editor 17.4, the window background is not all dark, resulting in unclear text display. As shown in the figure below: Full Article
on Noise summary data per sub-block in Maestro output expressions By community.cadence.com Published On :: Tue, 22 Oct 2024 21:56:24 GMT Hi, I have a question about printing the noise summary via Maestro output expressions. How can I print noise data using output expressions for multiple levels of the hierarchy? I have found this article, which describes the procedure using the ocnGenNoiseSummary() function: https://support.cadence.com/apex/ArticleAttachmentPortal?id=a1Od0000007MViHEAW&pageName=ArticleContent I also see Andrew Beckett referring to the above-mentioned article as a solution to a similar question: community.cadence.com/.../noise-summary-per-instance However, this seems to work only if I want to extract noise data from a single level of hierarchy. If I have the output expression "ocnGenNoiseSummary(2 ?result 'hbnoise)", it will generate a "noisesummary" directory under the results directory for a hierarchy level of 2. If I want to extract data from various hierarchy levels, I should be able to generate multiple noise summary directories, such as noisesummary1 and noisesummary2, corresponding to "ocnGenNoiseSummary(1 ?result 'hbnoise)" and "ocnGenNoiseSummary(2 ?result 'hbnoise)", respectively. However, this does not seem to be possible. Can you please advise? Thanks. My Cadence version: IC23.1-64b.ISR7.27 BR, Denizhan Karaca Full Article
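[Editor's note] One possible workaround, as a minimal SKILL sketch rather than a documented flow: since each ocnGenNoiseSummary() call appears to write into the same "noisesummary" directory, one could call the function once per hierarchy level from the CIW or an OCEAN script after the simulation completes, and move the directory aside between calls. The results-directory path, the level list, and the shell rename are assumptions for illustration; the session may also need openResults()/selectResult() first, depending on the setup.

;; Minimal SKILL sketch (assumption, not a documented flow):
;; generate one noise summary per hierarchy level, renaming the
;; "noisesummary" directory between calls so it is not overwritten.
resDir = "./psf"                      ; hypothetical results directory
foreach( level list(1 2 3)
    ocnGenNoiseSummary( level ?result 'hbnoise )
    ;; move the freshly written summary aside before the next call
    csh( sprintf( nil "mv %s/noisesummary %s/noisesummary%d"
                  resDir resDir level ))
)

After the loop, noisesummary1, noisesummary2, and noisesummary3 would each hold the summary for one hierarchy level, which sidesteps the single-directory limitation described above.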
on Netlisting error when doing parametric sweep on transient simulation By community.cadence.com Published On :: Wed, 23 Oct 2024 10:13:32 GMT Dear all, I defined two design variables in ADE Assembler, say V1 and V2, that define voltage 1 and voltage 2 of a "vpulse" voltage source in my schematic. Then, I define V1 = 1.0 and V2 = 2.0, run a transient simulation, and everything is as expected. The source provides pulses between 1.0 V and 2.0 V. Next, I set V1 = 1.0:0.5:1.5, thereby creating a parametric sweep with 1.0 V and 1.5 V for V1. I keep V2 at 2.0 V. Then the simulation fails, and all I get is "netl err" in my Output Expressions and an error message that the results directory does not exist and nothing can be plotted. This is reasonable: the results directory is deleted on starting a new simulation, and as there is no simulation result, none of my output expressions can be plotted. WARNING (OCN-6040): The specified directory does not exist, or the directory does not contain valid PSF results. Ensure that the path to the directory is correct and the directory has a logFile and PSF result files. WARNING (ADE-1065): No simulation results are available. ERROR (WIA-1175): Cannot plot waveform signals because no waveform data is available for plotting. One of the possible reasons can be that 'Save' check box for these signals are not selected in the Outputs Setup pane. Ensure that these check boxes are selected before you run the simulation. Normally, this kind of parametric sweep is not a problem; I have done this many times before. There must be something special in THIS PARTICULAR test bench or simulator setup. The trouble is, I don't get any useful error messages. Does anyone know what might be the problem here OR where to find useful information to investigate further (log files stored somewhere)? Thank you! Regards, Volker P.S. Using Corners instead does not help either. Running it through all values by hand works, though. Full Article
on A problem with setup when Monte Carlo simulation starts By community.cadence.com Published On :: Thu, 24 Oct 2024 17:50:42 GMT Hi, When I try to run Monte Carlo, it gives me a 3-item message for possible failure: 1. It says the machine selected in the current job setup policy is not reachable. 2. The Cadence hierarchy is not detected or not installed properly. 3. The job start script (with a path and a name like swiftNetlistService#) is not found on the remote machines. Any recommendation on how to fix this? Full Article
on New CDF creation and callback By community.cadence.com Published On :: Sun, 27 Oct 2024 11:35:08 GMT I need to add a new CDF parameter called "mag" to the symbols in a given library using a SKILL script, so that the symbol size can be controlled, and I want its callback to run each time this library is used so that all the symbols are updated. Full Article
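[Editor's note] A minimal SKILL sketch of one way to start on this, using the standard CDF API (ddGetObj and the cdf* functions). The library name "myLib", the prompt, the default value, and the myMagCallback() procedure are hypothetical placeholders, not part of the original question.

;; Add a "mag" parameter with a callback to the base CDF of every
;; cell in a library. "myLib" and myMagCallback() are hypothetical.
lib = ddGetObj( "myLib" )
foreach( cell lib~>cells
    ;; reuse the base cell CDF if it exists, otherwise create one
    cdfId = cdfGetBaseCellCdf( cell ) || cdfCreateBaseCellCdf( cell )
    unless( cdfFindParamByName( cdfId "mag" )
        cdfCreateParam( cdfId
            ?name     "mag"
            ?prompt   "Magnification"
            ?type     "float"
            ?defValue 1.0
            ?callback "myMagCallback()"  ; invoked when "mag" is edited
        )
    )
    cdfSaveCDF( cdfId )
)

Note that a CDF callback fires when the parameter is edited on an instance, not automatically "each time the library is used"; resizing the symbol inside myMagCallback() and updating symbols already placed in existing designs would each need their own SKILL pass.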
on Cannot access individual noise contributions using SpectreMDL By community.cadence.com Published On :: Tue, 29 Oct 2024 12:21:23 GMT I have tried replicating the setup described in a previous post (here), with the proposed solution. The MDL measurements return a value of 0 for all exported results but the first. Using Viva I can actually see the correct value for each contribution. I am using:
- Spectre 23.1.0.538.isr10
- Viva IC23.1-64b.ISR8.40
What should I do differently? Thanks!
***** test.scs *****
r1 (1 0) res_model l=10e-6 w=2e-6
r2 (2 1) res_model l=15e-6 w=2e-6
vr (2 0) vsource dc=1.0 mag=1
model res_model resistor rsh=100 kf=1e-20*exp(dkf)
parameters dkf=0
statistics { process { vary dkf dist=gauss std=0.5 } }
noi (1 0) noise freq=1
/***** test.mdl *****/
alias measurement noi_test {
    run noi;
    export real noi_total=noi_test:out;
    export real r1_total=r1:total;
    export real r1_flicker=r1:fn;
    export real r1_thermal=r1:rn;
    export real r2_total=r2:total;
    export real r2_flicker=r2:fn;
    export real r2_thermal=r2:rn;
}
run noi_test
**** test.measure ****
Measurement Name : noi_test
Analysis Type : noise
noi_total = 6.9282e-06
r1_flicker = 0
r1_thermal = 0
r1_total = 0
r2_flicker = 0
r2_thermal = 0
r2_total = 0
Full Article
on Config sweep View in Tests By community.cadence.com Published On :: Wed, 30 Oct 2024 10:28:38 GMT Hi all, I have a question regarding how to sweep the config view without using a global variable. I'd like to set a different config view for each test, and I'm trying to avoid using corners or plan. Any suggestions on how to achieve this? Thanks in advance for your help! Best, MooH Full Article