μWaveRiders: Setting Up a Successful AWR Design Environment - Layout and Component Libraries
By community.cadence.com Published On: Fri, 16 Dec 2022 20:15:00 GMT
When starting a new design, it's important to take the time to consider design recommendations that prevent problems from arising later in the design cycle. This two-part compilation of guidelines for starting a new design is the result of years of Cadence AWR Design Environment platform support experience. Pre-design decisions for user interface, simulation, layout, and library configuration lay the groundwork for a successful and efficient AWR design. This blog, part 2, covers the layout and component library considerations designers should note before starting a design.
Full Article RF Simulation Circuit simulation AWR Design Environment awr Component library Layout microwave office Visual System Simulator (VSS)
Knowledge Booster Training Bytes - The Close Connection Between Schematics and Their Layouts in Microwave Office
By community.cadence.com Published On: Wed, 04 Jan 2023 04:03:00 GMT
Microwave Office is Cadence's tool of choice for RF and microwave designers designing everything from III-V 5G chips to RF systems in board and package technologies. These types of designs require close interaction between the schematic and its layout. A new Training Byte demonstrates how the schematic-layout connection is built into Microwave Office.
Full Article RF RF Simulation RF designer AWR customization RF design microwave office
Training Webinar: Microwave Office - Comprehensive RF and Microwave Design Creation
By community.cadence.com Published On: Tue, 13 Jun 2023 04:56:00 GMT
A training webinar on Microwave Office will be given June 27, 2023. The emphasis will be on EM simulation.
Full Article RF RF Simulation awr EM simulation webinar AWR AXIEM RF design AWR Microwave Office microwave office
Training Insights New Course: Planar EM Simulation in AWR Microwave Office
By community.cadence.com Published On: Mon, 30 Oct 2023 18:44:00 GMT
A new online training course for the AXIEM EM Simulator in AWR Microwave Office is available.
Full Article awr EM simulation AWR AXIEM AWR Microwave Office AXIEM 3D Planar Simulator microwave office
Constraining some nets to route through a specific metal layer, and changing some pin/cell placements and wire directions in Cadence Innovus
By community.cadence.com Published On: Fri, 03 Feb 2023 22:13:10 GMT
Hello All: I am looking for help on the following, as I am new to Cadence tools. (I have to use Cadence Innovus for physical design after logic synthesis with Synopsys Design Compiler, using the Nangate 45nm Open Cell Library.)
First, while using Cadence Innovus, I need to select a few specific nets to be routed through a specific metal layer. How can I do this in Innovus (are there any commands)? Also, would writing and sourcing a .tcl script containing the commands in the Innovus terminal after the placement stage of physical design be fine for this?
Secondly, is there a way in Innovus to manipulate layout objects, such as changing some pin placements or wire directions (for example, changing a wire direction to face east instead of west), or moving specific closely placed cells apart (without violating timing constraints, of course) using commands or a .tcl script? If so, should the pin placement changes and the moving apart of closely placed cells be done after floorplanning/power planning (that is, prior to placement), and the wire direction changes after routing?
While making these changes, can I use the usual Innovus commands to complete physical design on the remaining nets/wires/pins/cells, or would anything need modification for the remaining components as well? I would finally need to dump the entire design into a .def file.
I tried looking this up but could only find material on Virtuoso and SKILL scripting, whereas I'd be using the Innovus GUI/terminal with the Nangate 45nm Open Cell Library. I know this is a lot, but I would greatly appreciate your help. Thanks in advance. Riya
Full Article
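For reference, a hedged sketch of how this is often scripted in Innovus Tcl is shown below. The net, pin, and instance names, coordinates, and layer numbers are placeholders, and the option spellings should be verified against your Innovus version; treat this as a starting point, not a definitive flow.

# Sketch only (Innovus Tcl; adapt names and layers to your design and tech LEF).
# 1) Constrain a net to a layer range (here, a single layer), then (re)route it.
#    These can live in a .tcl file sourced after placement: source my_constraints.tcl
setAttribute -net my_net -bottom_preferred_routing_layer 3 -top_preferred_routing_layer 3
editDelete -net my_net            ;# remove any existing routing on the net
routeDesign                       ;# NanoRoute honors the per-net layer attributes

# 2) Move a pin and fix an instance placement (typically before detailed routing):
editPin -pin my_pin -layer 4 -assign {100.0 200.0}
placeInstance inst_A 120.5 240.0 R0

# 3) Write out the full routed design to DEF:
defOut -floorplan -netlist -routing full_design.def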
Conformal LEC can't finish at analyze abort step. How do I proceed?
By community.cadence.com Published On: Mon, 07 Aug 2023 02:19:35 GMT
Hi Cadence & forumers, I am running a Conformal LEC comparison of a flattened netlist against RTL. The run hung for 5 days at the "analyze abort" step, which is automatically launched by the compare. The netlist is flattened at some levels, so the hierarchical flow, which I tried, didn't help much. The flattened/highly optimized netlist is from the customer and is the ultimate goal. How shall I proceed now? On a side note, a test run with a hierarchical netlist from a simple DC "compile -map_effort medium" command finished after a day or so. Thank you!

// Command: vpx compare -verbose -ABORT_Print -NONEQ_Print -TIMEstamp
// Starting multithreaded comparison ... Comparing 241112 points in parallel.
// Multithreading Overhead: 38% Gates: 8501606/6168138
// Multithreaded processing completed.
================================================================================
Compared points      PO      DFF      DLAT   BBOX   CUT   Total
--------------------------------------------------------------------------------
Equivalent           1025    241638   30     75     21    242789
--------------------------------------------------------------------------------
Abort                0       124      0      0      0     124
================================================================================
Compare results of instance/output/pin equivalences and/or sequential merge
================================================================================
Compared points      DFF     Total
--------------------------------------------------------------------------------
Equivalent           204     204
================================================================================
// Warning: 512 DFFs/DLATs have 1 disabled clock port: skipped data cone comparison
// Resolving aborts by analyze abort...
Full Article
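When abort resolution stalls on a heavily optimized netlist, a common starting point is to help LEC recognize arithmetic/datapath structures (such as the multipliers an optimized netlist often restructures) before re-comparing. A hedged Conformal LEC dofile fragment is below; the exact option names and report classes vary by release, so confirm them against your documentation.

// Sketch only: run before (re)comparing the aborted points.
set compare effort high
analyze datapath -merge        // let LEC merge/solve restructured datapath logic
compare
report compare data -class abort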
How to generate a "Sheet Name" column in a pin report?
By community.cadence.com Published On: Wed, 08 Nov 2023 03:52:26 GMT
Hi everyone, is there any method to generate a "Sheet Name" column for a pin report like the table below? The "Name.Pin" and "Signal" columns can be generated easily, but I have no idea how to generate the "Sheet Name" column. The software used here is Allegro Design Entry HDL, OrCAD Capture, and Allegro PCB Editor. Can these three programs generate "Sheet Name" data?

Name.Pin   Signal      Sheet Name
C1_1.1     N301321     SITE1_1
C1_1.2     GND_ANA_1   SITE1_1
C1_2.1     N180243     SITE2_1
C1_2.2     GND_ANA_2   SITE2_1

Thank you.
Full Article
Copy paste circuit from one schematic design to another
By community.cadence.com Published On: Tue, 30 Jan 2024 08:59:20 GMT
Hi, I have two designs and would like to copy and paste one area of circuitry from the old design to the new design. What is the best way/approach? Guidance, please.
Full Article
Regarding the loading of waveform signals in the waveform window using a Tcl command
By community.cadence.com Published On: Mon, 26 Feb 2024 09:26:52 GMT
Hello, I am trying to load some of the design signals saved in signals.svwf into the waveform window via a Tcl file. I am using the following commands, but nothing works. Can you please help?

-submit waveform loadsignals -using "Waveform 2" FB1.svwf

but it gives me the below error. I also tried:

-submit waveform new -reuse -name Waveforms
Full Article
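For comparison, a minimal SimVision Tcl sketch that creates a waveform window and adds signals explicitly is shown below. The window name and signal paths are placeholders; whether a saved .svwf script can instead be replayed directly (for example, via simvision -input signals.svwf) should be checked for your release.

# Sketch only (SimVision Tcl console; placeholder names).
waveform new -name "Waveforms"
waveform add -using "Waveforms" -signals {top.dut.clk top.dut.data_out}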
Conformal CEC checking
By community.cadence.com Published On: Tue, 19 Mar 2024 21:04:55 GMT
Below is my Master.v:
********************************************************************************
/////// ALU
module ALU (
    input  [31:0] A, B,
    input  [3:0]  alu_control,
    output reg [31:0] alu_result,
    output reg zero_flag
);
    always @(*) begin
        // Operating based on control input
        case (alu_control)
            4'b0001: alu_result = A + B;
            4'b0010: alu_result = A - B;
            4'b0011: alu_result = A * B;
            4'b0100: alu_result = A | B;
            4'b0101: alu_result = A & B;
            4'b0110: alu_result = A ^ B;
            4'b0111: alu_result = ~B;
            4'b1000: alu_result = A << B;
            4'b1001: alu_result = A >> B;
            4'b1010: begin
                if (A < B)
                    alu_result = 1;
                else
                    alu_result = 0;
            end
            default: alu_result = A + B;
        endcase
        // Setting Zero_flag if ALU_result is zero
        if (alu_result)
            zero_flag = 1'b1;
        else
            zero_flag = 1'b0;
    end
endmodule

///// CONTROL UNIT
/* The control unit takes the opcode, funct7, and funct3 of the instruction code to
   determine and control regwrite in the IFU and alu_control in the ALU to execute
   the proper instruction. */
module CONTROL (
    input  [4:0] opcode,
    output reg [3:0] alu_control,
    output reg regwrite_control, memread_control, memwrite_control
);
    always @(opcode) begin
        case (opcode)
            5'b00001: begin alu_control = 4'b0001; // add
                            regwrite_control = 1; memread_control = 0; memwrite_control = 0; end
            5'b00010: begin alu_control = 4'b0010; // sub
                            regwrite_control = 1; memread_control = 0; memwrite_control = 0; end
            5'b00011: begin alu_control = 4'b0011; // mul
                            regwrite_control = 0; memread_control = 0; memwrite_control = 1; end
            5'b00100: begin alu_control = 4'b0100; // OR
                            regwrite_control = 0; memread_control = 0; memwrite_control = 1; end
            5'b00101: begin alu_control = 4'b0101; // AND
                            regwrite_control = 1; memread_control = 0; memwrite_control = 0; end
            5'b00110: begin alu_control = 4'b0110; // XOR
                            regwrite_control = 0; memread_control = 0; memwrite_control = 1; end
            5'b00111: begin alu_control = 4'b0111; // NOT
                            regwrite_control = 0; memread_control = 0; memwrite_control = 1; end
            5'b01000: begin alu_control = 4'b1000; // SL
                            regwrite_control = 1; memread_control = 1; memwrite_control = 0; end
            5'b11001: begin alu_control = 4'b1001; // SR
                            regwrite_control = 1; memread_control = 1; memwrite_control = 0; end
            5'b01010: begin alu_control = 4'b1010; // COMPARE
                            regwrite_control = 1; memread_control = 1; memwrite_control = 0; end
            //5'b11010: begin ALU_control = 4'b0000; // SW
            //                regwrite_control = 1; memread_control = 0; memwrite_control = 0; end
            //5'b01010: begin ALU_control = 4'bxxxx; // LW
            //                regwrite_control = 0; memread_control = 0; memwrite_control = 1; end
            default:  begin alu_control = 4'b0001;
                            regwrite_control = 1; memread_control = 0; memwrite_control = 0; end
        endcase
    end
endmodule

////// DATA MEMORY
module Data_Mem (
    input clock, rd_mem_enable, wr_mem_enable,
    input  [11:0] address,
    input  [31:0] datawrite_to_mem,
    output reg [31:0] dataread_from_mem
);
    reg [31:0] Data_Memory [8:0];
    initial begin
        Data_Memory[0] = 32'hFFFFFFFF;
        Data_Memory[1] = 32'h00000001;
        Data_Memory[2] = 32'h00000005;
        Data_Memory[3] = 32'h00000003;
        Data_Memory[4] = 32'h00000004;
        Data_Memory[5] = 32'h00000000;
        Data_Memory[6] = 32'hFFFFFFFF;
        Data_Memory[7] = 32'h00000000;
        //Data_Memory[8]  = 32'h00000008;  Data_Memory[9]  = 32'h00000009;  Data_Memory[10] = 32'h0000000A;
        //Data_Memory[11] = 32'h0000000B;  Data_Memory[12] = 32'h0000000C;  Data_Memory[13] = 32'h0000000D;
        //Data_Memory[14] = 32'h0000000E;  Data_Memory[15] = 32'h0000000F;  Data_Memory[16] = 32'h00000010;
        //Data_Memory[17] = 32'h00000011;  Data_Memory[18] = 32'h00000012;  Data_Memory[19] = 32'h00000013;
        //Data_Memory[20] = 32'h00000014;  Data_Memory[21] = 32'h00000015;  Data_Memory[22] = 32'h00000016;
        //Data_Memory[23] = 32'h00000017;  Data_Memory[24] = 32'h00000018;  Data_Memory[25] = 32'h00000019;
        //Data_Memory[26] = 32'h0000001A;  Data_Memory[27] = 32'h0000001B;  Data_Memory[28] = 32'h0000001C;
        //Data_Memory[29] = 32'h0000001D;  Data_Memory[30] = 32'h0000001E;
        Data_Memory[31] = 32'h0000001F;
    end
    always @(posedge clock) begin
        if (wr_mem_enable)
            Data_Memory[address] <= datawrite_to_mem;
        else if (rd_mem_enable)
            dataread_from_mem <= Data_Memory[address];
        else
            dataread_from_mem <= 32'h00000000;
    end
endmodule

///// INST MEM
module INST_MEM (
    input  [31:0] PC,
    input  reset,
    output [31:0] Instruction_Code
);
    reg [7:0] Memory [43:0]; // Byte-addressable memory
    assign Instruction_Code = {Memory[PC+3], Memory[PC+2], Memory[PC+1], Memory[PC]};
    initial begin
        // Setting 32-bit instruction: add t1, s0, s1 => 0x00940333
        Memory[3]  = 8'b0000_0000; Memory[2]  = 8'b0000_0001; Memory[1]  = 8'b0111_1100; Memory[0]  = 8'b0000_0001;
        // Setting 32-bit instruction: sub t2, s2, s3 => 0x413903b3
        Memory[7]  = 8'b0000_0000; Memory[6]  = 8'b0000_0110; Memory[5]  = 8'b1000_1111; Memory[4]  = 8'b1110_0010;
        // Setting 32-bit instruction: mul t0, s4, s5 => 0x035a02b3
        Memory[11] = 8'b0000_0000; Memory[10] = 8'b0000_0101; Memory[9]  = 8'b0111_1100; Memory[8]  = 8'b0000_0011;
        // Setting 32-bit instruction: or t3, s6, s7 => 0x017b4e33
        Memory[15] = 8'b1111_1111; Memory[14] = 8'b1111_0100; Memory[13] = 8'b1010_0000; Memory[12] = 8'b1010_0100;
        // Setting 32-bit instruction: and
        Memory[19] = 8'b0000_0000; Memory[18] = 8'b0010_1001; Memory[17] = 8'b0001_1101; Memory[16] = 8'b0010_0101;
        // Setting 32-bit instruction: xor
        Memory[23] = 8'b0000_0000; Memory[22] = 8'b0001_1000; Memory[21] = 8'b0000_1101; Memory[20] = 8'b0110_0110;
        // Setting 32-bit instruction: not
        Memory[27] = 8'b0000_0000; Memory[26] = 8'b0010_1001; Memory[25] = 8'b0011_1101; Memory[24] = 8'b1100_0111;
        // Setting 32-bit instruction: shift left
        Memory[31] = 8'b0000_0000; Memory[30] = 8'b0101_0111; Memory[29] = 8'b1100_0110; Memory[28] = 8'b0000_1000;
        // Setting 32-bit instruction: shift right
        Memory[35] = 8'b0000_0000; Memory[34] = 8'b0110_1010; Memory[33] = 8'b1101_0010; Memory[32] = 8'b0111_1001;
        // Setting 32-bit instruction: compare
        Memory[39] = 8'b0000_0000; Memory[38] = 8'b0111_1010; Memory[37] = 8'b1101_0010; Memory[36] = 8'b0110_1010;
        // Setting 32-bit instruction:
        Memory[43] = 8'b0000_0000; Memory[42] = 8'b0111_0111; Memory[41] = 8'b1101_0010; Memory[40] = 8'b0111_0010;
    end
endmodule

// IFU
/* The instruction fetch unit has clock and reset pins as input and a 32-bit instruction
   code as output. Internally, the block has the instruction memory, a program counter (PC),
   and an adder to increment the counter by 4 on every positive clock edge. */
module IFU (
    input clock, reset,
    output [31:0] Instruction_Code
);
    reg [31:0] PC = 32'b0; // 32-bit program counter, initialized to zero
    always @(posedge clock, posedge reset) begin
        if (reset == 1)
            PC <= 0;        // If reset is one, clear the program counter
        else
            PC <= PC + 4;   // Increment program counter on positive clock edge
    end
    // Initializing the instruction memory block
    INST_MEM instr_mem (.PC(PC), .reset(reset), .Instruction_Code(Instruction_Code));
endmodule

/// MUX
module Mux_2X1 (
    input mem_rd_select, // rd_mem_enable
    input wire [31:0] dataread_from_mem, regdata2,
    output reg [31:0] mux_out
);
    always @(mem_rd_select or dataread_from_mem or regdata2) begin
        if (mem_rd_select == 1)
            mux_out <= dataread_from_mem;
        else
            mux_out <= regdata2;
    end
endmodule

// DFlipFlop
module DFlipFlop (D, clock, Q);
    input D;          // Data input
    input clock;      // Clock input
    output reg Q;     // Output Q
    always @(posedge clock) begin
        Q <= D;
    end
endmodule

/// DATAPATH
module DATAPATH (
    input [4:0]  Read_reg_add1,
    input [4:0]  Read_reg_add2,
    input [4:0]  Reg_write_add,
    input [3:0]  Alu_control,
    input [11:0] Address,
    input Wr_reg_enable, Wr_mem_enable, Rd_mem_enable,
    input clock,
    input reset,
    output OUTPUT
);
    // Declaring internal wires that carry data
    wire zero_flag;
    wire [31:0] Dataread_from_mem;
    wire [31:0] read_data1;
    wire [31:0] read_data2;
    wire [31:0] Mux_out;
    wire [31:0] Alu_result;
    //wire [31:0] datawrite_to_reg;

    // Instantiating the register file
    REG_FILE reg_file_module (.reg_read_add1(Read_reg_add1), .reg_read_add2(Read_reg_add2),
        .reg_write_add(Reg_write_add), .datawrite_to_reg(Alu_result),
        .read_data1(read_data1), .read_data2(read_data2),
        .wr_reg_enable(Wr_reg_enable), .clock(clock), .reset(reset));
    // Instantiating the ALU
    ALU alu_module (.A(read_data1), .B(Mux_out), .alu_control(Alu_control),
        .alu_result(Alu_result), .zero_flag(zero_flag));
    // Mux
    Mux_2X1 mux (.mem_rd_select(Rd_mem_enable), .dataread_from_mem(Dataread_from_mem),
        .regdata2(read_data2), .mux_out(Mux_out));
    // Data memory
    Data_Mem DM (.clock(clock), .rd_mem_enable(Rd_mem_enable), .wr_mem_enable(Wr_mem_enable),
        .address(Address), .datawrite_to_mem(Alu_result), .dataread_from_mem(Dataread_from_mem));
    // D flip-flop
    DFlipFlop DF (.D(zero_flag), .Q(OUTPUT), .clock(clock));
endmodule

/* A register file can read two registers and write into one register. The RISC-V register
   file contains a total of 32 registers, each 32 bits wide; hence 5 bits are used to specify
   the register numbers that are to be read or written.
   Register read: the register file always outputs the contents of the registers corresponding
   to the specified read register numbers; reading a register does not depend on any other signal.
   Register write: register writes are controlled by the RegWrite control signal. Additionally,
   the register file has a clock signal; the write happens if RegWrite is 1 on a positive clock edge. */
module REG_FILE (
    input [4:0] reg_read_add1,
    input [4:0] reg_read_add2,
    input [4:0] reg_write_add,
    input [31:0] datawrite_to_reg,
    output [31:0] read_data1,
    output [31:0] read_data2,
    input wr_reg_enable,
    input clock,
    input reset
);
    reg [31:0] reg_memory [31:0]; // 32 memory locations, each 32 bits wide
    initial begin
        reg_memory[0]  = 32'h00000000;  reg_memory[1]  = 32'hFFFFFFFF;
        reg_memory[2]  = 32'h00000002;  reg_memory[3]  = 32'hFFFFFFFF;
        reg_memory[4]  = 32'h00000004;  reg_memory[5]  = 32'h01010101;
        reg_memory[6]  = 32'h00000006;  reg_memory[7]  = 32'h00000000;
        reg_memory[8]  = 32'h10101010;  reg_memory[9]  = 32'h00000009;
        reg_memory[10] = 32'h0000000A;  reg_memory[11] = 32'h0000000B;
        reg_memory[12] = 32'h0000000C;  reg_memory[13] = 32'h0000000D;
        reg_memory[14] = 32'h0000000E;  reg_memory[15] = 32'h0000000F;
        reg_memory[16] = 32'h00000010;  reg_memory[17] = 32'h00000011;
        reg_memory[18] = 32'h00000012;  reg_memory[19] = 32'h00000013;
        reg_memory[20] = 32'h00000014;  reg_memory[21] = 32'h00000015;
        //reg_memory[22] = 32'h00000016;  reg_memory[23] = 32'h00000017;  reg_memory[24] = 32'h00000018;
        //reg_memory[25] = 32'h00000019;  reg_memory[26] = 32'h0000001A;  reg_memory[27] = 32'h0000001B;
        //reg_memory[28] = 32'h0000001C;  reg_memory[29] = 32'h0000001D;  reg_memory[30] = 32'h0000001E;
        reg_memory[31] = 32'hFFFFFFFF;
    end
    // The register file always outputs the values corresponding to the read register numbers;
    // it is independent of any other signal.
    assign read_data1 = reg_memory[reg_read_add1];
    assign read_data2 = reg_memory[reg_read_add2];
    // If the clock edge is positive and regwrite is 1, we write data to the specified register
    always @(posedge clock) begin
        if (wr_reg_enable)
            reg_memory[reg_write_add] = datawrite_to_reg;
        else
            reg_memory[reg_write_add] = 32'h00000000;
    end
endmodule

///// PROCESSOR
module PROCESSOR (
    input clock,
    input reset,
    output Output
);
    wire [31:0] instruction_Code;
    wire [3:0]  ALu_control;
    wire WR_reg_enable;
    wire WR_mem_enable;
    wire RD_mem_enable;
    IFU IFU_module (.clock(clock), .reset(reset), .Instruction_Code(instruction_Code));
    CONTROL control_module (.opcode(instruction_Code[4:0]), .alu_control(ALu_control),
        .regwrite_control(WR_reg_enable), .memread_control(RD_mem_enable),
        .memwrite_control(WR_mem_enable));
    DATAPATH datapath_module (.Wr_mem_enable(WR_mem_enable), .Rd_mem_enable(RD_mem_enable),
        .Read_reg_add1(instruction_Code[9:5]), .Read_reg_add2(instruction_Code[14:10]),
        .Reg_write_add(instruction_Code[19:15]), .Address(instruction_Code[31:20]),
        .Alu_control(ALu_control), .Wr_reg_enable(WR_reg_enable),
        .clock(clock), .reset(reset), .OUTPUT(Output));
endmodule
********************************************************************************
Below is my synthesis.tcl file for Genus synthesis:
********************************************************************************
set_attribute lib_search_path "/home/sameer23185/Desktop/VDF_PROJECT/lib"
set_attribute hdl_search_path "/home/sameer23185/Desktop/VDF_PROJECT"
set_attribute library "/home/sameer23185/Desktop/VDF_PROJECT/lib/90/fast.lib"
read_hdl Master.v
elaborate
read_sdc Min_area.sdc
set_attribute hdl_preserve_unused_register true
set_attribute delete_unloaded_seqs false
set_attribute optimize_constant_0_flops false
set_attribute optimize_constant_1_flops false
set_attribute optimize_constant_latches false
set_attribute optimize_constant_feedback_seqs false
#set_attribute prune_unsued_logic false
synthesize -to_mapped -effort medium
write_hdl > report/HDL_min_Netlist.v
write_sdc > report/constraints.sdc
write_script > report/synthesis.g
report_timing > report/synthesis_timing_report.rep
report_power > report/synthesis_power_report.rep
report_gates > report/synthesis_cell_report.rep
report_area > report/synthesis_area_report.rep
gui_show
********************************************************************************
When I compare my golden Master.v with HDL_min_Netlist.v in Conformal, I get non-equivalent points for every reg_memory location and every Data_Memory location. I don't know what to do with these non-equivalent points; I've been stuck here for the past four days. Please help me with this and with how I can remove these non-equivalent points. Since I am new to this, I really don't know what to do.
Full Article
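One likely contributor, offered as an assumption rather than a confirmed diagnosis: the initial blocks that preload reg_memory and Data_Memory exist only in the RTL (Genus does not carry initial values into the mapped netlist), and the attributes above deliberately preserve unused and constant flops, so the golden and revised sides can easily disagree at exactly those key points. A minimal Conformal LEC dofile sketch that reproduces the setup and models sequential constants is shown below; command spellings should be checked against your LEC version, and the paths are the poster's own.

// Sketch only; adapt library/design paths and options to your flow.
read library -liberty -both /home/sameer23185/Desktop/VDF_PROJECT/lib/90/fast.lib
read design Master.v -golden -verilog
read design report/HDL_min_Netlist.v -revised -verilog
set flatten model -seq_constant    // propagate sequential constants before mapping
set system mode lec
add compare points -all
compare
report compare data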
How to tell Conformal to ignore a certain combination of inputs
By community.cadence.com Published On: Thu, 04 Apr 2024 10:35:38 GMT
Hi, how can I tell the LEC tool to ignore a combination of a primary input bus in both Golden and Revised? For example, in both Golden and Revised there is input [3:0] data_in. I want LEC not to check the case where data_in[3:0] == 4'b1000.
Full Article
Quest for Bugs – The Constrained-Random Predicament
By community.cadence.com Published On: Tue, 14 Jun 2022 14:54:00 GMT
Optimize your regression suite, accelerate coverage closure, and increase the hit count of rare bins using Xcelium Machine Learning. It is easy to use and has no learning curve for existing Xcelium customers. Xcelium Machine Learning technology helps you discover hidden bugs when used early in your design verification cycle.
Full Article compression throughput machine learning Hard to Hit Bin Coverage Closure Regression simulation
Cadence in Collaboration with Arm Ensures the Software Just Works
By community.cadence.com Published On: Tue, 12 Jul 2022 01:02:00 GMT
The increase in compute and data-intensive applications and the need for lower power consumption have resulted in a rapidly growing number of Arm-based devices in various market segments; this requires fast time to market (TTM) and support for off-t...
Full Article SBSA Emulation Pre Silicon compliance Testing Arm SystemReady
Stay Ahead of Competition with Real-Time Cross-Team Collaborations
By community.cadence.com Published On: Tue, 26 Jul 2022 05:21:00 GMT
To stay ahead of the competition in chip design, real-time collaborations ensure traceability and speedy innovation at reduced cost.
Full Article collaboration Palladium verification management Traceability vManager
Coalesce Xcelium Apps to Maximize Performance by 10X and Catch More Bugs
By community.cadence.com Published On: Tue, 02 Aug 2022 04:30:00 GMT
Xcelium Simulator has been in the industry for years and is the leading high-performance simulation platform. As designs are getting more and more complex and verification is taking longer than ever, the need of the hour is plug-and-play apps that ar...
Full Article performance SoC apps xcelium simulation verification
TSN-PTP: A Real-Time Network Clock Synchronizing Protocol
By community.cadence.com Published On: Mon, 12 Sep 2022 06:45:00 GMT
In a network containing multiple nodes, synchronization between the various nodes is not just instrumental but also a complicated and highly complex process. This process becomes even trickier when we synchronize the clocks between the Manager and the Peripheral. In a real-time network, some nodes behave as Managers while others behave as Peripherals. If the communication process is to be smooth, the local clocks of these nodes must be synchronized. Simply sending the value of the Manager clock to the Peripheral does not achieve synchronization, because the messages have a propagation delay, on top of the propagation delay of the electronic circuits of the Manager and the Peripheral. The cherry on the cake is that these electronic circuit propagation delays are not random but remain constant, so a time offset can be added to match the clocks. To tackle this challenge, IEEE has come up with a protocol named "Precision Time Protocol."

Operation of PTP: To synchronize the clocks, a Sync message is sent by the Manager to the Peripheral, which timestamps its arrival. Following this, a Follow-Up message is issued by the Manager stating the timestamp at which the Sync message was sent. The Peripheral then finds the difference between the two values and adds it to its current time. After this, the time difference between the Manager and the Peripheral narrows down to only the propagation delay of the messages. To overcome this, the Peripheral issues a Delay Request to the Manager, and the Manager, in turn, issues a Delay Response. Both messages carry the timestamp of when they were issued, and the times at which they are received are noted. Since two messages are sent, one from the Peripheral and one from the Manager, the measurement contains two propagation delays, so half of the measured round trip is the one-way propagation delay. The Peripheral then adds this propagation delay to its clock, and hence the clocks get synchronized.

Advantages of PTP:
- It provides accurate timestamping.
- It is a well-known clock synchronization protocol.
- It provides intensified security inside the premises.
- It provides the possibility of setting coordinated actions and synchronized communication.

Various versions of PTP have been developed over time, namely PTPv1, PTPv2, PTPv2_1, and the latest PTP-AS. Cadence Verification IP for Ethernet is available to support the newer versions of PTP, allowing simulation of the device for efficient IP, SoC, and system-level design verification. Semiconductor companies can start using it to fully verify their controller design and achieve functional verification closure on it in no time.
Full Article Verification IP uvm 5G Network Ethernet VIP Functional Verification Cadence VIP portfolio VIP Automotive Ethernet Ethernet TSN PTP precision timing protocol verification
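The arithmetic behind that exchange is worth making explicit. Below is a small illustrative Tcl sketch using the conventional four timestamps (t1 = Sync sent, t2 = Sync received, t3 = Delay Request sent, t4 = Delay Request received); the timestamp values are made up for the example, not taken from the article.

# Sketch only: classic PTP offset/delay computation from four timestamps (ns).
proc ptp_offset_delay {t1 t2 t3 t4} {
    # offset = Peripheral clock minus Manager clock; delay = one-way path delay
    set offset [expr {(($t2 - $t1) - ($t4 - $t3)) / 2.0}]
    set delay  [expr {(($t2 - $t1) + ($t4 - $t3)) / 2.0}]
    return [list $offset $delay]
}
# Example: Sync sent at 1000, received at 1460; Delay_Req sent at 2000, received at 2440.
lassign [ptp_offset_delay 1000 1460 2000 2440] offset delay
puts "offset = $offset ns, one-way delay = $delay ns"   ;# offset = 10.0, delay = 450.0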
DesignCon Best Paper 2024: Addressing Challenges in PDN Design
By community.cadence.com Published On: Tue, 17 Sep 2024 19:40:00 GMT
Explore Impacts of Finite Interconnect Impedance on PDN Characterization. Over the past few decades, many details have been worked out in the power distribution network (PDN) in the frequency and time domains. We have simulation tools that can analyze the physical structure from DC to very high frequencies, including spatial variations of the behavior. We also have frequency- and time-domain test methods to measure the steady-state and transient behavior of the built-up systems. All of these pieces in our current toolbox have their own assumptions, limitations, and artifacts, and they constantly raise the challenging question that designers need to answer: How to select the design process, simulation, measurement tools, and processes so that we get reasonable answers within a reasonable time frame with a reasonable budget. Read this award-winning DesignCon 2024 paper titled "Impact of Finite Interconnect Impedance Including Spatial and Domain Comparison of PDN Characterization." Led by Samtec's Istvan Novak and written with a team of nine authors from Cadence, Amazon, and Samtec, the paper discusses a series of continually evolving challenges with PDN requirements for cutting-edge designs. Read the full paper now: "Impact of Finite Interconnect Impedance Including Spatial and Domain Comparison of PDN Characterization."
Full Article featured DesignCon PDN signal integrity analysis Signal Integrity PDN Analysis Sigrity
10 Most Viewed Posts in Cadence Community Forum
By community.cadence.com Published On: Thu, 26 Sep 2024 05:39:00 GMT
Community engagement is a dynamic concept that does not adhere to a singular, universal approach. Its various forms, methods, and objectives can vary significantly depending on the specific context, goals, and desired outcomes. Whether you seek assis...
Full Article PCB CFD Allegro X AI Community cadence awr community forum PCB Editor OrCAD PCB design OrCAD X allegro x PCB Capture
Using Voltus IC Power Integrity to Overcome 3D-IC Design Challenges
By community.cadence.com Published On: Tue, 08 Oct 2024 06:12:00 GMT
Power network design and analysis of 3D-ICs is a major challenge due to the complex nature and large size of the power network. In addition, designers must deal with the complexity of routing power through the interposer, multiple dies, through-silicon vias (TSVs), and through-dielectric vias (TDVs). Cadence's Integrity 3D-IC Platform and Voltus IC Power Integrity Solution provide a fully integrated solution for early planning and analysis of 3D-IC power networks, 3D-IC chip-centric power integrity signoff, and hierarchical methods that significantly improve the capacity and performance of power integrity (PI) signoff while maintaining a very high level of accuracy. This blog summarizes the typical design challenges faced by today's 3D-IC designers, as discussed in our recent webinar, "Addressing 3D-IC Power Integrity Design Challenges." Please click here to view the full webinar.

Major Trends in Advanced Chip Design
From chips to chiplets, stacked die, 3D-ICs, and more, three major trends are impacting advanced semiconductor packaging design. The first is heterogeneous integration, which we define as a disaggregated approach to designing systems on chip (SoCs) from multiple chiplets. This approach is similar to system-in-package (SiP) design, except that instead of integrating multiple bare die, including 3D stacking, on a single substrate, multiple IPs are integrated in the form of chiplets on a single substrate.
The second major trend is new silicon manufacturing techniques that leverage through-silicon vias (TSVs) and high-density fanout RDL. These advancements mean that silicon is becoming a more attractive material for packaging, especially when high bandwidth and form factor become key attributes in the end design. This brings new design and verification challenges to most packaging engineers, who typically work with organic and ceramic substrate materials.
Finally, on the ecosystem side, all the large semiconductor foundries now offer their own versions of advanced packaging. This brings new ways of supporting design teams with technologies like reference flows and PDKs, concepts that have typically been lacking in the packaging community. Cadence has worked with many of the leading foundries and outsourced semiconductor assembly and test facilities (OSATs) to develop multi-chip(let) packaging reference flows and package assembly design kits. The downside is that, with the time restrictions designers are under today, there isn't enough time to simulate the details of these flows and PDKs further. For those who must make the best electro/thermal/physical decisions to achieve the best power/performance/area/cost (PPAC), factors can include accurate die size estimations, thermal feasibility, die-to-die interconnect planning, interposer planning (silicon/organic), front-to-front and front-to-back (F2F/F2B) planning, layer stack and electromigration/IR drop (EMIR)/TSV planning, IO bandwidth feasibility, and system-level architecture selection.

3D-IC Power Network Design and Analysis
The key to success in 3D-IC design is early power integrity planning and analysis. Cadence's Integrity 3D-IC platform is a high-capacity 3D-IC platform that enables 3D design planning, implementation, and system analysis in a single, unified cockpit. Cadence's Voltus IC Power Integrity Solution is a comprehensive full-chip electromigration, IR drop, and power analysis solution. With its fully distributed architecture and hierarchical analysis capabilities, Voltus provides very fast analysis and has the capacity to handle the largest designs in the industry. Typically, 3D-IC PDN design and analysis is performed in four phases, as shown in Figure 1.
- Phase 1: Perform early power delivery network (PDN) exploration with each fabric's PDN cascaded in system PI with early circuit models.
- Phase 2: Plan 3D-IC PDNs in Cadence's Integrity 3D-IC platform, including microbumps, TSVs, and through-dielectric vias (TDVs), power grid synthesis for dies, and early rail analysis and optimization.
- Phase 3: Perform full chip-centric signoff in Voltus with detailed die, interposer, and package models, including chip die models, while keeping some dies flat.
- Phase 4: Perform full system-level signoff with Cadence's Sigrity SystemPI using detailed extracted package models from Sigrity XtractIM, board models from Sigrity PowerSI or Clarity 3D Solver, interposer models from XtractIM or Voltus, and chip power models from Voltus.
Figure 1. 3D-IC PDN design and analysis phases

3D-IC Chip-Centric Signoff
The integration of Integrity 3D-IC and Voltus enables chip-centric early analysis and signoff. Figure 2 and Figure 3 highlight the chip-centric early PI optimization and signoff flows. In early analysis, the on-chip power networks are synthesized, and the microbumps and TSVs can be placed and optimized. In the signoff stage, all the detailed design data is used for power analysis, and detailed models are extracted and used for the package, interposer, and on-die power networks.
Figure 2. Early chip-centric PI analysis and optimization flow
Figure 3. Chip-centric 3D-IC PI signoff

Hierarchical 3D-IC PI Analysis
To improve the capacity and performance of 3D-IC PI analysis, Voltus enables hierarchical analysis using chiplet models. Chiplet models can be reduced chip models in SPICE format or more accurate xPGV models, which are highly accurate proprietary models generated by Voltus. With xPGV models, hierarchical PI analysis has almost the same accuracy as flat analysis but offers a 10X or higher benefit in runtime and memory requirements.

Conclusion
This blog has highlighted the major design trends enabled by advanced 3D packaging and the design challenges arising from these advancements. The design of power delivery networks is one of these major challenges, and we have discussed Cadence solutions to overcome it. To learn more, view our recent webinar, "Addressing 3D-IC Power Integrity Design Challenges," and visit the Voltus web page.
Full Article PDN 3D-IC Integrity Power Integrity in-design analysis Sigrity Clarity 3D Solver
Modern Thermal Analysis Overcomes Complex Design Issues
By community.cadence.com Published On: Wed, 16 Oct 2024 04:20:00 GMT
Melika Roshandell, Cadence product marketing director for the Celsius Thermal Solver, recently published an article in Designing Electronics discussing how modern thermal analysis techniques can help engineers meet the challenges of today's complex electronic designs, which require ever more functionality and performance to meet consumer demand.
These requirements make scaling traditional, flat, 2D-ICs very challenging. With the recent introduction of 3D-ICs into the electronic design industry, IC vendors need to optimize the performance and cost of their devices while also taking advantage of the ability to combine heterogeneous technologies and nodes in a single package. While this greatly advances IC technology, 3D-IC design brings its own unique challenges and complexities, a major one of which is thermal management.
To overcome thermal management issues, a thermal solution that can handle the complexity of the entire design efficiently and without any simplification is necessary. However, because of the nature of 3D-ICs, the typical point-tool approach that dissects the design space into subsections cannot adequately address this need. This approach also creates a longer turnaround time, which can impact critical decision-making to optimize design performance. A more effective solution is to utilize a solver that not only can import the entire package, PCB, and chiplets but also offers high performance to run the entire analysis in a timely manner.

Celsius Thermal Management Solutions
Cadence offers the Celsius Thermal Solver, a unique technology integrated with both IC and package design tools such as the Cadence Innovus Implementation System, Allegro PCB Designer, and Voltus IC Power Integrity Solution. The Celsius Thermal Solver is the first complete electrothermal co-simulation solution for the full hierarchy of electronic systems, from ICs to physical enclosures. Based on a production-proven, massively parallel architecture, the Celsius Thermal Solver also provides end-to-end capabilities for both in-design and signoff methodologies and delivers up to 10X faster performance than legacy solutions without sacrificing accuracy.
By combining finite element analysis (FEA) for solid structures with computational fluid dynamics (CFD) for fluids (both liquid and gas, as well as airflow), designers can perform complete system analysis in a single tool. For PCB and IC packaging, engineering teams can combine electrical and thermal analysis and simulate the flow of both current and heat for a more accurate system-level thermal simulation than can be achieved using legacy tools. In addition, both static (steady-state) and dynamic (transient) electrical-thermal co-simulations can be performed based on the actual flow of electrical power in advanced 3D structures, providing visibility into real-world system behavior. Designers are already co-simulating the Celsius Thermal Solver with the Celsius EC Solver (formerly Future Facilities' 6SigmaET electronics thermal simulation software), which provides state-of-the-art intelligence, automation, and accuracy. The combined workflow that ties Celsius FEA thermal analysis to Celsius EC Solver CFD results in even higher-accuracy models of electronics equipment, allowing engineers to test their designs through thermal simulations and mitigate thermal design risks.

Conclusion
As systems become more densely populated with heat-dissipating electronics, the operating temperatures of those devices impact reliability (device lifetime) and performance. Thermal analysis gives designers an understanding of device operating temperatures related to power dissipation, and that temperature information can be introduced into an electrothermal model to predict the impact on device performance. The robust capabilities in modern thermal management software enable new system analyses and design insights. This empowers electrical design teams to detect and mitigate thermal issues early in the design process, reducing electronic system development iterations and costs and shortening time to market. To learn more about Cadence thermal analysis products, visit the Celsius Thermal Solver product page and download the Cadence Multiphysics Systems Analysis Product Portfolio.
Full Article Celsius Thermal Solver thermal management 3D-IC Celsius EC Solver Thermal Analysis
Aligning Components using Offset Mode in Allegro X APD
By community.cadence.com Published On: Tue, 28 Nov 2023 12:49:16 GMT
Starting with SPB 23.1, in Allegro X PCB Editor and Allegro X Advanced Package Designer, you can align components by using offset mode. Earlier, only spacing mode was available. Follow these steps to align components using offset mode:
1. Set Application Mode to Placement Edit.
2. Drag over the components that need to be aligned, right-click, and choose Align Components.
3. In the Options tab, you will notice the Spacing section with Equal Offset. You can offset the components equally or individually by using the +/- buttons for increment or decrement.
Full Article
How to reuse device files for existing components
By community.cadence.com Published On: Thu, 07 Dec 2023 11:09:26 GMT
Have you ever encountered ERROR(SPMHNI-67) while importing logic? If yes, you might already know that you had to export libraries of the design and make sure that paths (devpath, padpath, and psmpath) include the location of exported files. Starting in SPB 23.1, if you go to File > Import > Logic/Netlist and click on the Other tab, you will see an option, Reuse device files for existing components. After selecting this option, ERROR(SPMHNI-67) will no longer be there in the log file, because the tool will automatically extract device files and seamlessly use them for newly imported data. In other words, SPB 23.1 lets you reuse the device / component definitions already in the design without first having to dump libraries manually. An excellent improvement, don't you think?
Full Article
How to export and import symbols and component properties through Die Text wizards
By community.cadence.com Published On: Thu, 04 Jan 2024 15:50:39 GMT
Starting with SPB 23.1, Allegro X APD lets you import/export the symbol and component properties by using the Die Text-In/Out wizards.
Exporting the symbol: You can export the symbol by using File > Export > Die Text-Out Wizard. In the Die Text-Out Wizard window, you can see the newly added options, that is, Component Properties and Symbol Properties. This entire information, including the properties, will be saved in a text file.
Importing the symbol: You can import the same text file in Allegro X APD by using the Die Text-In Wizard. Choose the text file you want to import. Symbol properties added in the text file will be visible in the Die Text-In Wizard window.
Full Article
Allegro: Tip of the Week: Push Connectivity
By community.cadence.com Published On: Fri, 09 Feb 2024 11:33:39 GMT
At times, a condition might arise in a design where you need to push the net of selected pins to all of its physically connected objects; for example, when a few pins are updated with a new net and the new net must be pushed to all connected objects. This can happen when you update the die or copy routing to other components and a portion of the routing gets the wrong net. To propagate the net of a pin to all of its physically connected objects, Allegro X APD provides the standalone command Push Connectivity. You can invoke the command through Logic > Push Connectivity, or run push connectivity at the command line. Once the command is active, it lets you select the pins or symbols whose net connectivity will be pushed to all connected objects. Presently, dynamic shapes and filled rectangles are not considered part of connectivity; static shapes are supported.
Full Article
DFA check spacing of component to BGA ball or BGA pad in APD
By community.cadence.com Published On: Fri, 29 Mar 2024 12:37:40 GMT
Hi, there are many components on the BGA ball side of a flip-chip package. Is there a DFA check for the spacing from a component body or pin soldermask to a BGA ball, BGA pad, or BGA soldermask in Allegro APD? I can only find component-to-component spacing in APD DFA.
Full Article
Allegro X APD - Tip of the week: Wondering how to set two adjacent layers as conductor layers? Then this post should help you.
By community.cadence.com Published On: Fri, 10 May 2024 14:01:45 GMT
By default, a dielectric must separate each pair of conductor layers in the cross-section of a design. In rare cases, this does not represent the real, manufactured substrate. If your design requires conductor layers that are not separated by a dielectric (such as for half-etch designs), there is a variable that must be set in Allegro X APD: enable the variable icp_allow_adjacent_conductors. This entry, and its location in the User Preferences Editor, are shown in the following image.
Objects on adjacent conductor layers do not electrically connect together automatically; a via must be used to establish the inter-layer connections. When enabling this option, it is recommended that you exercise caution, because excluding dielectric layers from your cross-section can lead to inaccurate calculations, including the calculations for signal integrity and via heights. It is important that your cross-section accurately reflect the finished product to ensure the most accurate results possible. Any dielectric layers present in the manufactured part need to be in the cross-section for accurate extraction, 3D viewing, and so on.
Let us know your comments on the various designs that would require adjacent conductor layers.
Full Article
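If you prefer to script this instead of using the User Preferences Editor, a hedged sketch follows. Allegro user preferences map to environment variables, which can usually be set for the current session from the APD command window or persisted in your user env file (commonly $HOME/pcbenv/env on Linux); confirm the exact mechanism for your release.

# Sketch only: enable the preference for the current session (APD command window).
set icp_allow_adjacent_conductors
# To persist it across sessions, add the same line to your user env file.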
How to transfer etch/conductor delays from Allegro Package Designer (APD) to pin delays in Allegro PCB Editor
By community.cadence.com Published On: Sun, 10 Nov 2024 23:39:10 GMT
The packaging group has finished their design in Allegro Package Designer (APD), and I want to use the etch/conductor delay information from the .mcm file in the board design in Allegro PCB Designer. Is there a method to do this?
This can be done by exporting the etch/conductor data from APD and importing it as PIN_DELAY information into Allegro PCB Editor. If you are generating a length report for use in Allegro pin delay, you should consider changing the APD units to mils and unchecking the Time Delay Report.
In Allegro Package Designer:
1. Select File > Export > Board Level Component.
2. Select HDL for the Output format and select OK.
3. Choose a padstack for use when generating the component and select OK.
This will create a file, package_pin_delay.rpt, in the component subdirectory of the current working directory. This file will contain the etch/conductor delay information that can be imported into Allegro.
In Allegro PCB Editor:
1. Make sure that the device you want to import delays to is placed in your board design and is visible.
2. Select File > Import > Pin delay.
3. Browse to the component directory and select package_pin_delay.rpt. The browser defaults to looking for *.csv files, so you will need to change Files of type to *.* to select the file.
4. You may be prompted with an error message stating that the component cannot be found and that you should select one. If so, select the appropriate component.
5. Select Import. Once the import is completed, select Close.
Note: It is important that all non-trace shapes have a VOLTAGE property so they will not be processed by the 2D field solver. You should run Reports > Net Delay Report in APD prior to generating the board-level component. This will display the net name of each net as it is processed. If you miss a VOLTAGE property on a net, the net name will show in the report processing window, and you will know which net needs the property.
Full Article
Maximizing Display Performance with Display Stream Compression (DSC)
By community.cadence.com Published On: Wed, 11 Sep 2024 12:50:00 GMT
Display Stream Compression (DSC) is a visually lossless or near-lossless image compression standard developed by the Video Electronics Standards Association (VESA) for reducing the bandwidth required to transmit high-resolution video and images. DSC compresses video streams in real time, allowing for higher resolutions, refresh rates, and color depths while minimizing the data load on transmission interfaces such as DisplayPort, HDMI, and embedded display interfaces.

Why Is DSC Needed?
In the ever-evolving landscape of display technology, the pursuit of higher resolutions and better visual quality is relentless. As display capabilities advance, so do the challenges of managing the immense amounts of data required to drive these high-performance screens. This is where DSC steps in. DSC is designed to address the challenges of transmitting ultra-high-definition content without sacrificing quality or performance. As displays grow in resolution and capability, the amount of data they need to transmit increases exponentially. DSC addresses these issues by compressing video streams in real time, significantly reducing the bandwidth needed while preserving image quality.
[Figure: DSC use in an end-to-end system]

DSC Key Features
- Encoding tools: Modified Median-Adaptive Prediction (MMAP), Block Prediction (BP), Midpoint Prediction (MPP), Indexed Color History (ICH), and entropy coding using delta size unit-variable length coding (DSU-VLC).
- The DSC bitstream and decoding process are designed to facilitate decoding at 3 pixels/clock in practical hardware decoder implementations. Hardware encoder implementations are possible at 1 pixel/clock.
- DSC uses an intra-frame, line-based coding algorithm, which results in very low latency for encoding and decoding.
[Figure: DSC encoding algorithm]
- Compression can be done to a fractional bpp. The compressed bits per pixel ranges from 6 to 63.9375.
- For validation/compliance certification of DSC compression and decompression engines, cyclic redundancy checks (CRCs) are used to verify the correctness of the bitstream and the reconstructed image.
- DSC supports more color bit depths, including 8, 10, 12, 14, and 16 bpc.
- DSC supports RGB and YCbCr input formats, with 4:4:4, 4:2:2, and 4:2:0 sampling.
- Maximum decompressor-supported bits/pixel values are listed in the Maximum Allowed Bit Rate column of the table in the specification; the DP DSC Source device shall program the bit rate within the range given by the Minimum Allowed Bit Rate column. [Table not reproduced here.]

Summary
Display Stream Compression (DSC) is a technology used in DisplayPort to enable higher resolutions and refresh rates while maintaining high image quality. It works by compressing the video data transmitted from the source to the display, effectively reducing the bandwidth required. DSC uses a visually lossless algorithm, meaning that the compression is designed to be imperceptible to the human eye, preserving the fidelity of the image. This technology allows for smoother, more detailed visuals at higher resolutions, such as 4K or 8K, without requiring a significant increase in data bandwidth.

More Information
Cadence has a very mature Verification IP solution covering many different configurations for DisplayPort 2.1 and DisplayPort 1.4 designs, so you can choose the best version for your specific needs. The DisplayPort VIP provides a full-stack solution for Sink and Source devices with a comprehensive coverage model, protocol checkers, and an extensive test suite. More details are available on the DisplayPort Verification IP product page and the Simulation VIP pages. If you have any queries, feel free to contact us at talk_to_vip_expert@cadence.com
Full Article resolution DisplayPort Display Stream Compression lossless
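To make the bandwidth benefit concrete, here is a small illustrative calculation (the numbers are our own example, not from the article): a 3840x2160 stream at 60 Hz with 8 bpc RGB is 24 bpp uncompressed, and compressing to a 12 bpp DSC target halves the link bandwidth. Blanking overhead is ignored for simplicity.

# Sketch only: illustrative DSC bandwidth arithmetic (Tcl).
set pixels_per_sec [expr {3840 * 2160 * 60}]                    ;# 4K @ 60 Hz
set uncompressed_gbps [expr {$pixels_per_sec * 24 / 1.0e9}]     ;# 8 bpc RGB = 24 bpp
set dsc_gbps          [expr {$pixels_per_sec * 12 / 1.0e9}]     ;# DSC target 12 bpp
puts [format "uncompressed: %.2f Gbps | DSC @ 12 bpp: %.2f Gbps" \
    $uncompressed_gbps $dsc_gbps]                               ;# ~11.94 vs ~5.97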
Use Verisium SimAI to Accelerate Verification Closure with Big Compute Savings
By community.cadence.com Published On: Fri, 13 Sep 2024 07:30:00 GMT
The Verisium SimAI App harnesses the power of machine learning technology with the Cadence Xcelium Logic Simulator - the ultimate breakthrough in accelerating verification closure. It builds models from regressions run in the Xcelium simulator, enabling the generation of new regressions with specific targets. The Verisium SimAI app also features cousin bug hunting, a unique capability that uses information from difficult-to-hit failures to expose cousin bugs. With these advanced machine learning techniques, Verisium SimAI offers the potential for a significant boost in productivity, promising an exciting future for our users.
Figure 1: Regression compression and coverage maximization with Verisium SimAI

What can I do with Verisium SimAI?
You can exercise different use cases with Verisium SimAI as per your requirements. For some users, the goal might be regression compression and improving coverage regain. Coverage maximization and hitting new bins could be another goal. Other users may be interested in exposing hard-to-hit failures and bug hunting for difficult-to-find issues. Verisium SimAI allows users to take on any of these challenges to achieve the desired results. Let's go into more detail on these use cases and the scenarios where using SimAI can have a big positive impact.

Using SimAI for Regression Compression and Coverage Regain
Unlock up to 10X compute savings with SimAI! Verisium SimAI can be used to compress regressions and regain coverage. This flow involves setting up your regression environment for SimAI, running your random regressions with coverage and randomization data, followed by training, and finally synthesizing and running the SimAI-generated compressed regressions. The synthesized regression may prune tests that do not help meet the goal, add more runs for the most relevant tests, and add run-specific constraints. This flow can also be used to target specific areas, such as areas involving high code churn or high complexity. You can check out the details of this flow, with illustrative examples, in the following Rapid Adoption Kits (RAKs) available on the Cadence Learning and Support Portal (Cadence customer credentials needed):
- Using SimAI with vManager (For Regression Compression and Coverage Regain) (RAK)
- Using SimAI with a Generic Runner (For Regression Compression and Coverage Regain) (RAK)

Using SimAI for Coverage Maximization and Targeting Coverage Holes
Reduce your functional coverage holes by up to 40% using SimAI! Verisium SimAI can be used for iterative coverage maximization. This is most effective when regressions are largely saturated; SimAI will explicitly try to hit uncovered bins, which may be hard-to-hit (but not impossible) coverage holes. This is achieved using iterative learning technology, where with each iteration SimAI does some exploration and determines how well it performed. This technique can also be used for bug hunting by using holes as targets of interest. See more details on the Cadence Learning and Support Portal:
- Using SimAI for Coverage Maximization - vManager Flow (RAK)
- Using SimAI for Coverage Maximization - Generic Runner Flow (RAK)

Using SimAI for Bug Hunting
Discover and fix bugs faster using SimAI! Verisium SimAI has a new bug hunting flow that can be used to expose hard-to-hit failure conditions. This is achieved using an iterative framework and by targeting failures or rare bins. Targeting failures works best when the overall failure rate is low (typically below 5%). Iterative learning can be used to improve the ability to target specific areas. Use the SimAI bug hunting use case to target rare events, low-hit coverage bins, and low-hit failure signatures. See more details on the Cadence Learning and Support Portal:
- Using SimAI for Bug Hunting with vManager (RAK)
- Using SimAI for Bug Hunting - Generic Runner Flow (RAK)

Unlock compute savings, reduce your functional coverage holes, and discover and fix bugs faster with the power of machine learning technology, now enabled by Verisium SimAI! Please keep visiting https://support.cadence.com/raks to download new RAKs as they become available. Please note that you will need Cadence customer credentials to log on to Cadence Online Support (https://support.cadence.com/), your 24/7 partner for getting help in resolving issues related to Cadence software or learning Cadence tools and technologies. Happy learning!
Full Article Functional Verification verisium machine learning SimAI AI
co Flow Control Credit Updates in PCIe 6.1 ECN By community.cadence.com Published On :: Fri, 13 Sep 2024 21:25:20 GMT As technology continues to evolve at a rapid pace, the importance of robust and efficient interconnect standards cannot be overstated. Peripheral Component Interconnect Express (PCIe) has been a cornerstone in high-speed data transfer, enabling seamless communication between various hardware components. With the advent of PCIe 6.1 ECN, a significant advancement in speed and efficiency, ensuring the accuracy and reliability of its operations is paramount. One critical aspect of this is the verification of shared credit updates. For detailed understanding on Shared Credit, please refer Understanding PCIe 6.0 Shared Flow Control. In this blog, we will discuss why this verification is essential and what it entails. Introduction PCIe 6.1 ECN brings numerous advancements over earlier versions, such as increased bandwidth and faster data transfer speeds. A crucial mechanism for efficient data transmission in PCIe 6.0 is the credit-based flow control system. In this system, devices monitor credits, representing the buffer capacity available for incoming data. When a device transmits data, it uses credits, which are replenished or adjusted once the data is received and processed. This system ensures that the sender does not overload the receiver. Given the critical role of shared credit updates in maintaining the integrity and efficiency of data transfers, verification of these updates is crucial. Proper management of credit updates is essential to ensure data integrity, as any discrepancies can lead to data loss, corruption, or system crashes. Verification also guarantees efficient resource allocation, preventing scenarios where some components are starved of credit while others have an excess, thus avoiding inefficiencies. Credit inefficiencies pose issues in low power negotiations by preventing devices from entering low power states. Additionally, verification involves checking for proper error handling mechanisms, ensuring that the system can recover gracefully from errors in credit updates and maintain overall stability. PCIe 6.1 ECN Flow Control Optimizations Over PCIe 6.0 PCIe 6.1 ECN builds on the FLIT-based architecture introduced in PCIe 6.0, further optimizing flow control mechanisms to handle increased data rates and improved efficiency. PCIe 6.1 ECN introduced refinements in credit management, making the allocation and advertisement of credits more precise, which helps in reducing bottlenecks and improving data flow efficiency. Enhancements in flow control protocols ensure better management of buffer spaces and more efficient credit allocation. These enhancements are designed to handle the increased data rates and throughput demands of next-generation applications, ensuring robust and efficient data flow across PCIe devices. Below are some major updates: There have been improvements in error detection and correction mechanisms in PCIe 6.1 ECN to enhance flow control reliability by ensuring that corrupted data packets are detected and handled appropriately without disrupting the flow of valid packets. The merged credit system, which was a key feature introduced int PCIe 6.0 to simplify and optimize credit management, was further enhanced in PCIe 6.1 ECN to improve performance and efficiency. 
PCIe 6.1 ECN Flow Control Optimizations Over PCIe 6.0 PCIe 6.1 ECN builds on the FLIT-based architecture introduced in PCIe 6.0, further optimizing flow control mechanisms to handle increased data rates with improved efficiency. PCIe 6.1 ECN refines credit management, making the allocation and advertisement of credits more precise, which helps reduce bottlenecks and improve data flow efficiency. Enhancements in the flow control protocols ensure better management of buffer space and more efficient credit allocation. These enhancements are designed to handle the increased data rates and throughput demands of next-generation applications, ensuring robust and efficient data flow across PCIe devices. Below are some major updates: Error detection and correction mechanisms were improved in PCIe 6.1 ECN to enhance flow control reliability, ensuring that corrupted data packets are detected and handled appropriately without disrupting the flow of valid packets. The merged credit system, a key feature introduced in PCIe 6.0 to simplify and optimize credit management, was further enhanced in PCIe 6.1 ECN to improve performance and efficiency. PCIe 6.1 ECN introduced better algorithms for allocating and reclaiming merged credits at high data rates, along with more robust error detection and correction mechanisms that reduce degradation and system instability. PCIe 6.1 ECN also provides clear guidelines on how to implement the merged credit system correctly, helping developers build more reliable systems. For more details, please refer to the specification's section 2.6.1, Flow Control (FC) Rules. Summary In summary, PCIe 6.0 is a complex protocol with many verification challenges. You must understand the many new specification changes and plan robust verification of the new features, as well as of the backward-compatible behaviors they affect. Cadence's PCIe 6.0 Verification IP is fully compliant with the latest PCI Express 6.0 specification and provides an effective and efficient way to verify components interfacing with PCIe 6.0. Cadence VIP for PCIe 6.0 provides exhaustive verification of PCIe-based IP and SoCs, and we are working with early-adopter customers to speed up every verification stage. More Information For more info on how Cadence PCIe Verification IP and TripleCheck VIP enable users to confidently verify PCIe 6.0, see VIP for PCI Express, VIP for Compute Express Link and TripleCheck for PCI Express. See the PCI-SIG website for more details on PCIe in general and the different PCI standards. For more information on PCIe 6.0 new features, please visit PCIeLaneMargin, PCIe6.0LaneMargin, and Demonstrating PCIe 6.0 Equalization Procedure. Full Article Verification IP PCIExpress PCIe pcie gen6 PCIe 6.0 verification
co Training Insights – Palladium Emulation Course for Beginner and Advanced Users By community.cadence.com Published On :: Fri, 13 Sep 2024 23:00:00 GMT The Cadence Palladium Emulation Platform is a hardware system that implements the design, accelerating its execution and verification. It offers the highest performance and fastest bring-up times for pre-silicon validation of billion-gate designs, using a custom processor built by Cadence. This Palladium introduction course is based on the Palladium 23.03 ISR4 version and covers the following modules: Introduction, Palladium flow, and Running a design on the Palladium system. The course starts with an "Introduction" module that explains Palladium and other verification platforms to show its place in the big picture; it also compares Palladium with Protium and simulation and discusses its usage and limitations. The "Palladium Flow" module first presents the two stages, Compile and Run, at a high level and then covers them in detail: first the ICE compile flow and IXCOM compile flow steps, then Run, which is common to both ICE and IXCOM modes. The third module, "Running a Design on the Palladium System," covers all the items required for running your design on the Palladium system, including software stack requirements, basic concepts required to understand the flow, and compute machine requirements. In addition, this course contains labs for both the ICE and IXCOM flows with detailed steps to exercise the features provided by the Palladium system. The lab walks through a practical example of multiple counters, exercising their signals with the force, monitor, and deposit features, along with frequency calculation using a real-time clock. The course is available on the Cadence support page. There is also a Digital Badge available; you will find the badge exam opportunity when you enroll in the online training or after you have taken the training as "live" training. For questions, inquiries, or issues with registration, reach out to us at Cadence Training. Want to stay up to date on webinars and courses? Subscribe to Cadence Training emails. To view our complete training offerings, visit the Cadence Training website. Related Training Bytes Palladium: What Are Verification Platforms Palladium: What Is Processor Based Emulation Palladium: Comparing Emulation (Z2) and Prototyping (X2) Palladium: What Are ICE and IXCOM Compile Flow Palladium: How to Process a Design to Run on Palladium Palladium: XCOM Compile Flow (TB+RTL to Palladium Database) Palladium: ICE Compile Flow (RTL to Palladium Database) Palladium: Legacy ICE Compile Flow Palladium: Cadence Software Releases for Palladium and Protium Flow Palladium: Setting of PATHs for Using Palladium Palladium: Z2 Hardware Structure (Blade and Boards) Palladium: What Is Sourceless and Loadless Nets Palladium: Design Clocks Palladium: Step Count and Step Clock Palladium: Steps for Running the Design on Palladium Z2 Related Courses Verilog Language and Application Training SystemVerilog for Design and Verification Xcelium Simulator Related Blogs Training Insights – A New Free Online Course on the Protium System for Beginner and Advanced Users It's the Digital Era; Why Not Showcase Your Brand Through a Digital Badge! Training Insights - Free Online Courses on Cadence Learning and Support Portal Full Article digital badge live training blended training Palladium Training Insights online training
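For readers new to the force/monitor/deposit terminology the lab uses, the plain-SystemVerilog sketch below shows the simulation-level equivalent on a small counter. This is an assumption-laden illustration: on the emulator these operations are issued through the Palladium run-time tools rather than testbench code, and the module names here are invented.

// A simple counter plus a testbench demonstrating force, monitor, deposit.
module counter (input logic clk, rst_n, output logic [7:0] count);
  always @(posedge clk or negedge rst_n)
    if (!rst_n) count <= '0;
    else        count <= count + 1;
endmodule

module counter_tb;
  logic clk = 0, rst_n;
  counter dut (.clk(clk), .rst_n(rst_n));

  always #5 clk = ~clk;                           // free-running clock

  initial begin
    $monitor("%0t count=%0h", $time, dut.count);  // "monitor": observe the signal
    rst_n = 0; #20 rst_n = 1;
    #100 force dut.count = 8'hA5;                 // "force": pin the signal to a value
    #50  release dut.count;                       // remove the force
    #10  dut.count = 8'h00;                       // "deposit": set once, design may overwrite
    #50  $finish;
  end
endmodule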
co Jasper Formal Fundamentals 2403 Course for Starting Formal Verification By community.cadence.com Published On :: Mon, 30 Sep 2024 09:16:00 GMT The course "Jasper Formal Fundamentals v24.03" introduces formal analysis to those who want to use formal analysis for design or verification. To benefit optimally from this course, you must already have sufficient knowledge of SystemVerilog assertions to be capable of writing properties for formal verification; the training therefore provides a module on formal analysis to help cover this essential background. In this course, you will learn how to code efficient SVA properties for formal analysis, understand formal complexity and how to overcome it, and learn the basics of formal coverage. After completing this course, you will be able to: Define reusable, functionally correct SVA properties that are efficient for formal tools. These use abstract auxiliary code to simplify descriptions, make code maintenance easier, reduce debug time, and reduce the tool's proof runtime. Set up, run, and analyze results from formal analysis. Identify designs upon which formal is likely to be successful, while understanding formal complexity issues and how to identify and overcome them. Use a systematic property development process to approach a completely new verification problem. Understand the basics of formal coverage. The most recently updated release includes new modules on: "Basic complexity handling," which discusses complexity in formal and how to identify and handle it. "Complexity reduction methods," which discusses the complexity reduction methods and which method is suitable for which type of complexity problem. "Coverage in formal," which discusses the basics of coverage in formal verification and how coverage can be used in formal. Take this course to learn the basics of formal verification. What's Next? You can check out the complete training: Jasper Formal Fundamentals. There is a free online version of the training available 24/7 for all customers with a Cadence Learning and Support Portal account. If you are interested in an instructor-led version of the training, please contact Cadence Training. And don't forget to obtain your digital badge after completing the training! You can also check the Jasper University page for more materials on formal analysis and Jasper apps. Related Trainings Jasper Formal Expert Training Course | Cadence Verilog Language and Application Training Course | Cadence SystemVerilog for Design and Verification Training Course | Cadence SystemVerilog Assertions Training Course | Cadence Related Training Bytes Jasper Formal Property Verification (FPV) App: Basic Usage Demo (Video) Jasper Formal Methodology playlist Related Training Blogs It's the Digital Era; Why Not Showcase Your Brand Through a Digital Badge! Training Insights: Introducing the C++ Course for All Your C++ Learning Needs! Training Insights: Reaching Your Verification Closure Using Verisium Manager Training Insights - Free Online Courses on Cadence Learning and Support Portal Full Article Jasper Formal Fundamentals FPV Formal Analysis formal Jasper Jasper Apps Formal verification verification
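As a taste of the property style such a course teaches, here is a small, self-contained checker (a generic sketch, not course material) in which auxiliary modeling code, an outstanding-request counter, keeps the properties themselves short and formal-friendly:

module req_ack_checker (input logic clk, rst_n, input logic req, ack);
  // Auxiliary code: abstract the protocol state into one counter so the
  // properties below stay simple and cheap for the proof engine.
  int unsigned outstanding;
  always_ff @(posedge clk or negedge rst_n)
    if (!rst_n) outstanding <= 0;
    else        outstanding <= outstanding + req - ack;

  // Safety: no ack may arrive without an outstanding request.
  a_no_spurious_ack: assert property (
    @(posedge clk) disable iff (!rst_n) ack |-> (outstanding > 0));

  // Liveness: every request is eventually acknowledged.
  a_req_eventually_ack: assert property (
    @(posedge clk) disable iff (!rst_n) req |-> s_eventually ack);
endmodule

In a real environment this checker would typically be attached to the DUT with a bind statement rather than instantiated directly.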
co Wild River Collaborates with Cadence on CMP-70 Channel Modeling By community.cadence.com Published On :: Wed, 23 Oct 2024 23:00:00 GMT Wild River Technology (WRT), the leading supplier of signal integrity measurement and optimization test fixtures for high-speed channels at data rates of up to 224G, has announced the availability of a new advanced channel modeling solution that helps achieve extreme signal integrity design to 70GHz. Read the press release. The CMP-70 program continues the industry-first simulation-to-measurement collaboration with Cadence that was initially established with the CMP-50. Significant resources were dedicated to the development of the CMP-70 by Cadence and WRT over almost three years. The CMP-70 will be on display at DesignCon 2025, January 28-30, in Cadence booth 827 to benchmark the Cadence Clarity 3D Solver. "I am not a fan of hype-based programs that simply get attention," remarked Alfred P. Neves, WRT's co-founder and chief technical officer. "Both Cadence and Wild River brought substantial skills to the table in this project as we continued our industry-first simulation-to-measurement collaboration. The result is a proven, robust, and accurate platform that brings extreme signal integrity to 70GHz designs. This application package has also been instrumental in demonstrating the robust 3D EM simulation capability of the Cadence Clarity solver." "We're delighted to continue the joint development and validation program with WRT that started with the CMP-50," said Gary Lytle, product management director at Cadence. "The skilled and experienced signal integrity technologists that both companies bring to the program result in a superior signal integrity solution for our mutual customers." CMP-70 Solution Features The solution is available both in a standard configuration and as a custom solution for customer-specific stackups and fabrication. The primary target application is to support 3D EM solver analysis and modeling versus time- and frequency-domain measurement methodologies. The solution features include: The CMP-70 platform, assembled and 100% TDR NIST-traceable tested, with custom stands A material identification overview web-based meeting, including anisotropic 3D material identification A cross-section PCB report and structures for using as-fabricated geometries Measured S-parameters, pre-tested for quality (passivity/causality) and resampled for time-domain simulations A host of novel crosstalk structures suited for 112G HD-level project analysis PCB layout design files (NDA required) An EDA starter library including loss models with industry-first accurate surface roughness models Comprehensive training available for 3D EM analysis – correspondence, material ID in the X-Y and Z axes for a host of EDA tools Industry-First Hausdorff Technique The WRT application package also includes an industry-first modified Hausdorff distance (MHD) technique, included as MATLAB code. This algorithmic approach provides an accurate way to compare two sets of measurements in multi-dimensional space to determine how well they match. The technique is used to compare the results simulated by the Clarity solver with those measured on the CMP-70 platform. The methodology and initial results are shown in the figure below, where the figure of merit (FOM) is calculated at 10, 35, and finally 50GHz. The MHD algorithm requires a MATLAB license, but WRT also accommodates customer data as another option, where WRT provides the comparison between measured and simulated data.
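The press release does not reproduce the formula, but the modified Hausdorff distance in the literature (the Dubuisson-Jain formulation, presumably the basis of WRT's figure of merit) replaces the classical worst-case point distance with a directed average, which makes the comparison far less sensitive to a single outlier sample:

$$ h(A,B) = \frac{1}{|A|}\sum_{a \in A}\min_{b \in B}\lVert a - b\rVert, \qquad d_{\mathrm{MHD}}(A,B) = \max\{\,h(A,B),\ h(B,A)\,\} $$

Here A and B are the two point sets being compared (e.g., sampled simulated and measured responses), and a lower d_MHD indicates closer simulation-to-measurement agreement.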
Additional Resources If you are attending DesignCon 2025, be sure to stop by Cadence booth 827 to see WRT's CMP-70 advanced channel modeling solution in action with the Clarity 3D Solver. Check out our on-demand webinar, "Validating Clarity 3D Solver Accuracy Through Measurement Correlation." Learn more about the CMP-70 solution and the Clarity 3D Solver. For more information about Cadence's full suite of integrated multiphysics simulation solutions, download our Multiphysics System Analysis Solutions Portfolio. Full Article
co Versatile Use Case for DDR5 DIMM Discrete Component Memory Models By community.cadence.com Published On :: Tue, 29 Oct 2024 19:00:00 GMT DDR5 DIMM Architectures The DDR5 generation of Double Data Rate DRAM memories has experienced rapid adoption in recent years. In particular, the JEDEC-defined DDR5 Dual Inline Memory Module (DIMM) cards have become a mainstay for systems looking for high-density, high-bandwidth, off-chip random access memory[1]. Within a short time, the DIMM architecture evolved from an interconnected hierarchy of only SDRAM memory devices (UDIMM[2]) to complex subsystems of interconnected components (RDIMM/LRDIMM/MRDIMM[3]). DIMM Designs and Popular Verification Use Cases The growing complexity of the DIMMs presented a challenge for pre-silicon verification engineers, who could no longer simply validate against single DDR5 SDRAM memory models. They needed to consider how their designs would perform against DIMMs connected to each channel and operating at gigahertz clock speeds. To address this verification gap, Cadence developed DDR5 DIMM Memory Models that encapsulate all of the architectural complexities presented by real-world DIMMs, based on a robust, easy-to-use, easy-to-debug, and easy-to-reconfigure methodology. This memory-subsystem-in-a-single-instance model has seen explosive adoption among the traditional IP developer and SoC integrator customers of Cadence Memory Models. The Cadence DIMM models act as a single unit with all of the relevant DIMM components instantiated and interconnected within, and with all AC/timing parameters among the various components fully matched out of the box, based on JEDEC specifications as well as datasheets of actual devices in the market. The typical use case for the DIMM models has been one where the DUT is a DDR5 Memory Controller + PHY IP stack, and the validation plan mandates compliance with the JEDEC standards and memory device vendor datasheets. Unique Use Case for the DIMM Discrete Component Models Although the Cadence DIMM models have enjoyed tremendous proliferation because of their cohesive implementation and unified user API, the actual DIMM models are built on top of powerful, flexible discrete component models, each of which was designed to stand on its own as a complete SystemVerilog UVM-based VIP. All of these discrete component models exist in the Cadence VIP Catalog as standalone VIPs, complete with their own protocol compliance checking capabilities and their own configuration mappings comprehensively modeling individual AC/timing parameters. Because of this deliberate design decision, the Cadence DIMM Discrete Component Models can support a unique use-case scenario. Some users seek to develop IC designs for the various DIMM components. Such users need verification environments that can model the individual components of a DIMM and allow them the option to replace one or another component with their component design IP. They can then validate that their component design is fully compatible with the rest of the components on the DIMM and meets the integrity of the overall DIMM compliance with JEDEC standards or memory vendor datasheets. The Cadence Memory VIP portfolio today includes various examples that demonstrate how customers can create DIMM "wrappers" by selecting from among the available DIMM discrete component models and "stitching" them together to build their own custom testbench around their specific component design IP.
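Structurally, such a wrapper is just a module that instantiates the discrete component models around the user's component design. The sketch below conveys the idea only: every module and port name is invented, the buses are drastically simplified, and the actual Cadence component models have full DDR5 pin lists and configuration objects.

// RDIMM-style wrapper sketch: the user's RCD design under test drives two
// discrete SDRAM component models (hypothetical names and ports throughout).
module rdimm_wrapper (
  input  logic        ck_t, ck_c,   // clock from the MC/PHY side
  input  logic [13:0] ca,           // simplified command/address
  inout  wire  [31:0] dq            // simplified data bus
);
  logic        qck_t, qck_c;        // RCD re-driven clock
  logic [13:0] qca;                 // RCD re-driven command/address

  my_rcd_dut u_rcd (                // component design under test
    .ck_t(ck_t), .ck_c(ck_c), .ca(ca),
    .qck_t(qck_t), .qck_c(qck_c), .qca(qca));

  ddr5_sdram_model u_sdram0 (       // discrete SDRAM model (stand-in name)
    .ck_t(qck_t), .ck_c(qck_c), .ca(qca), .dq(dq[15:0]));
  ddr5_sdram_model u_sdram1 (
    .ck_t(qck_t), .ck_c(qck_c), .ca(qca), .dq(dq[31:16]));
endmodule

Because the AC/timing parameters live in each component model's configuration, the wrapper's job is purely structural: swap one instance for the design IP and leave the rest of the subsystem intact.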
A Solution for Unique Component Scenarios The Cadence DDR5 DIMM Memory Models and DIMM Discrete Component Models can provide users with a flexible approach to validating their specific component designs in a fully populated pre-silicon environment. Augmented Verification Capabilities When the DIMM "wrapper" model is augmented with the Cadence DFI VIP[4], which can simulate an MC+PHY stack and offers a SystemVerilog UVM test API to the verification engineer, the overall testbench transforms into a formidable pre-silicon validation vehicle. The DFI VIP is designed as a combination of an independent DFI MC VIP and a DFI PHY VIP, connected to each other via the DFI standard interface and capable of operating seamlessly as a single unit. It presents a UVM sequence API into the DFI MC VIP, with the memory interface of the PHY VIP connected to the DIMM "wrapper" model. With this testbench in hand, the user can then take full advantage of the UVM sequence library that comes with the DFI VIP to enable deep validation of their component design inside the DIMM "wrapper" model. Verification Capabilities Further Enhanced A further enhancement comes with the potential addition of an instance of the Cadence DIMM Memory Model in a Passive Monitor mode at the DRAM memory interface. The DIMM Passive Monitor consumes the same configuration describing the DIMM "wrapper" in the testbench, and thus can act as a reference model for the DIMM wrapper. If the DIMM Passive Monitor responds successfully to accesses from the DFI VIP but the DIMM wrapper does not, this exposes potential bugs in the DUT components or in the settings of their AC/timing parameters inside the DIMM wrapper.
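At the testbench level, driving traffic through that stack typically reduces to the standard UVM idiom of starting a sequence on the MC VIP's sequencer. The fragment below is purely illustrative: the sequence class name and hierarchy path are invented placeholders for whatever the DFI VIP's documented sequence library and your environment actually provide.

// Fragment from a hypothetical uvm_test: push write/read traffic from the
// DFI MC VIP, through the PHY VIP, into the DIMM wrapper.
task run_phase(uvm_phase phase);
  dfi_mc_wr_rd_seq seq;                        // placeholder sequence name
  phase.raise_objection(this);
  seq = dfi_mc_wr_rd_seq::type_id::create("seq");
  seq.start(m_env.m_mc_agent.m_sequencer);     // placeholder hierarchy path
  phase.drop_objection(this);
endtask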
Debuggability, Interface Visibility, and Protocol Compliance One of the key benefits of the DIMM Discrete Component Models that becomes manifest, whether in the unique use-case scenario described here or when working with the wholly unified DDR5 DIMM Memory Models, is the increased debuggability of the protocol functionality. The intentional separation of the discrete components of a DIMM gives the user full visibility of the memory traffic at every datapath landmark within a DIMM structure. For example, in modeling an LRDIMM or MRDIMM, the interface between the RCD component and the SDRAM components, the interface between the RCD component and the DB components, and the interface between the SDRAM components and the DB components are all visible and accessible to the user. The user has full access to dump the values and states of the wire interconnects at these interfaces to the waveform viewer, and thus can observe and correlate the activity against any protocol violations flagged in the trace logs by any one or more of the DIMM Discrete Component Models. Access to these interfaces is freely available when using the DIMM Discrete Component Models. On the unified DDR5 DIMM Memory Models, a feature called Debug Ports enables the same level of visibility into the individual interconnects amidst the SDRAM components, RCD components, and DB components. When combined with the Waveform Debugger[5] capability that comes built in with the VIPs and Memory Models offered by Cadence, and used with the Cadence Verisium Debug[6] tool, the enhanced debuggability becomes a powerful platform. With these debug accesses enabled, the user can pull out transaction streams, chip-state and bank-state streams, mode register streams, and error message streams right next to their RTL signals in the same Verisium Debug waveform viewer window, to debug failures all in one place. The Verisium Debug tool also parses all of the log files to probe and extract messages into a fully integrated Smart Log in a tabbed window fully hyperlinked to the waveform viewer, all at your fingertips. A Solution for Every Scenario Cadence's DDR5 DIMM Memory Models and DIMM Discrete Component Models, partnered with the Cadence DFI VIP, provide users with a robust and flexible approach to validating their designs thoroughly and effectively in pre-silicon verification environments ahead of tapeout commitments. The solution offers unparalleled latitude in debuggability when the Debug Ports and Waveform Debugger functions of the Memory Models are switched on and boosted with the use of the Cadence Verisium Debug tool. [1] Shyam Sharma, DDR5 DIMM Design and Verification Considerations, 13 Jan 2023. [2] Shyam Sharma, DDR5 UDIMM Evolution to Clock Buffered DIMMs (CUDIMM), 23 Sep 2024. [3] Kos Gitchev, DDR5 12.8Gbps MRDIMM IP: Powering the Future of AI, HPC, and Data Centers, 26 Aug 2024. [4] Chetan Shingala and Salehabibi Shaikh, How to Verify JEDEC DRAM Memory Controller, PHY, or Memory Device?, 29 Mar 2022. [5] Rahul Jha, Cadence Memory Models - The Gold Standard, 15 Apr 2024. [6] Manisha Pradhan, Accelerate Design Debugging Using Verisium Debug, 11 Jul 2023. Full Article
co Lessons from the UMass Lowell Women's Leadership Conference By community.cadence.com Published On :: Mon, 04 Nov 2024 22:00:00 GMT This post was contributed by Liliko Uchida, application engineer at Cadence. Being a "Woman in STEM" is a phrase that has long been used to describe the holistic experience shared by thousands of women globally, yet it still makes us feel isolated. Partly due to the gender statistics of the STEM workforce, and partly due to our own internal obstacles, being a woman in STEM continues to be a challenge. While many of us objectively know the should-dos and should-bes of taking on this unique role, we struggle to implement them. After all, our perseverance as engineers, mathematicians, businesswomen, programmers, and scientists is largely affected by subjectivity. The UMass Lowell Women's Leadership Conference 2024 aimed to tackle this problem by uniting hundreds of women with shared experiences under one roof. Not only did the conference provide us with the knowledge necessary to persevere, but it also gave us the tools that will allow us to thrive and act upon the facts we already know. It is my hope that through this blog post, I can share some of my main takeaways from this special day. Be Confident This is one of the most palpable pieces of advice we always hear, yet so many of us struggle to build this confidence because we don't know how. Featured speaker Nicole Kalil defined confidence as "complete trust in oneself." One way to build this self-trust is by getting to know yourself on a deeper level. By creating a true inner connection, we begin to see ourselves as a whole instead of hyper-focusing on our shortcomings, which are so often magnified by imposter syndrome. In one of the sessions, we were asked to introduce ourselves to our neighbors, not by what we do for work, but by who we are as people. Even if this opportunity does not arise every day, this practice can be done simply by listing characteristics of yourself that define who you are. Who do you care for? How do you show them? What are your life goals oriented towards? How do you observe others' behavior around you, and what does that say about how you make them feel? Getting to know yourself beneath the surface and allowing yourself to be seen for who you are is critical to building internal confidence. With practice, this self-reassurance will grow independent of external factors. Take Risks "Sometimes, you have to put your foot in the elevator" - Barb Vlacich, Keynote Speaker When opportunities arise, the only thing you can do to have a chance is to try. Without putting your foot in the elevator, the doors will close, and it becomes a missed opportunity. Similarly, several of the conference's speakers emphasized that the answer to every unasked question will always be no. Even if you are not ready to full-send a negotiation, ask for a raise, or respectfully disagree with a co-worker's opinion, start by getting comfortable asking uncomfortable questions. Just one discomfort a day will help build immunity to the anxiety that comes with taking risks, which is typically driven by our self-doubt. Another interesting point that stood out from the conference was the statistics on self-assessed qualifications between men and women. During the negotiation panel, it was revealed that men typically feel they need only 60% of the qualifications in a job description to apply, whereas women often feel they need close to 100%.
These numbers alone demonstrate how mental habits continue to funnel men, and not women, into STEM. The next time you seek a new opportunity, assess yourself against the 60% figure and use it as a checklist threshold. The more women pursue STEM careers using these numbers, the more likely we are to begin populating these roles. Build Your Genuine Network "The essence of communication lies in the mutual exchange of ideas and emotions. And when the listener isn't invested, it undermines the entire purpose of the conversation. Why are you having it anyway?" This is a quote from episode 186 of Julie Brown's podcast This Sh!t Works, called "The 5 Steps to Being an Active Listener." Julie Brown is a networking coach, author, and podcast host who guided an energetic and candid conversation about networking and building a personal brand for women. Networking is often misunderstood as putting your name and qualifications out on the table for as many people as possible to pick up your card. While making these things known is important, they are not what nurtures effective connections. The key to cultivating your genuine network is to take a sincere interest in the people you meet. Become the proactive receiver of the confidence exercise discussed above. When you meet someone new, what can you take away from them as a person, not an employee? By making people feel heard, even through little conversations, you can begin to develop more meaningful connections that resonate. And, with practice, the sometimes inherent need to overcompensate by defining yourself with your resume will slowly fade. It was a wonderful opportunity to attend the UML Women's Leadership Conference with four other inspiring Cadence women. Not only was the conference a motivating learning experience, but it was also a wonderful opportunity for us to bond as women and feel supported by each other. The most eye-opening part of the day was seeing just how many like-minded women were sitting under the same roof. The conclusion of the event left me feeling proud to be an engineer, proud to be at Cadence, and most importantly, proud to be a woman. Learn more about life at Cadence. Full Article
co Randomization considerations for PCIe Integrity and Data Encryption Verification Challenges By community.cadence.com Published On :: Fri, 08 Nov 2024 05:00:00 GMT Peripheral Component Interconnect Express (PCIe) is a high-speed interface standard widely used for connecting processors, memory, and peripherals. With the increasing reliance on PCIe to handle sensitive data and critical high-speed data transfer, ensuring data integrity and encryption during verification is an essential goal. In the field of verification, randomization is a key technique that drives robust PCIe verification: it introduces unpredictability to simulate real-world conditions and uncover hidden bugs in the design. This blog examines the significance of randomization in PCIe IDE verification, focusing on how it ensures data integrity and encryption reliability, while also highlighting the unique challenges it presents. For more details on PCIe IDE, you can refer to Introducing PCIe's Integrity and Data Encryption Feature. The Importance of Data Integrity and Data Encryption in PCIe Devices Data Integrity: Ensures that the transmitted data arrives unchanged from source to destination. Even minor corruption in data packets can compromise system reliability, making integrity a critical aspect of PCIe verification. Data Encryption: Protects sensitive data from unauthorized access during transmission. Encryption in PCIe follows a standard to secure information while operating at high speeds. Maintaining both data integrity and data encryption at PCIe's high-speed data transfer rates of 64GT/s in PCIe 6.0 and 128GT/s in PCIe 7.0 is essential for all endpoint devices. However, validating these mechanisms requires comprehensive testing and verification methodologies, which is where randomization plays a crucial role. You can refer to Why IDE Security Technology for PCIe and CXL? for more details on this. Randomization in PCIe Verification Randomization refers to the generation of test scenarios with unpredictable inputs and conditions to expose corner cases. In PCIe verification, this technique helps ensure that all possible behaviors are tested, including rare or unexpected situations that could cause data corruption or encryption failures. For PCIe IDE verification, randomization lets us verify behavior more efficiently. Randomization for Data Integrity Verification Here are some randomized verification approaches that mimic real-world traffic conditions, uncovering subtle integrity issues that might not surface with directed tests. 1. Randomized Packet Injection: This technique injects randomized data packets into the communication stream between devices. We inject random, malformed, or out-of-sequence packets into the PCIe link and mix valid and invalid IDE-encrypted packets to check the system's ability to detect and reject unauthorized or invalid packets, verifying that encryption/decryption occurs correctly across packets and that the system logs proper errors or alerts when encountering invalid packets. This ensures coverage of different data paths and robust protocol checking. The technique helps assess the resilience of the PCIe IDE feature in terms of: (i) Data corruption: Detecting whether the system can maintain data integrity. (ii) Encryption failures: Testing the robustness of the encryption under random data injection. (iii) Packet ordering errors: Ensuring reordering does not affect data delivery. A sketch of this style of injection at the transaction level follows below.
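The following is a minimal, generic SystemVerilog sketch of constrained-random error injection at the transaction level. The fields and distributions are invented for illustration; the Cadence VIP exposes its own transaction classes and error-injection API.

class ide_pkt_txn;
  rand byte unsigned payload [];
  rand bit           corrupt_pcrc;   // flip the PCRC on this packet
  rand bit           drop_pkt;       // drop instead of transmitting
  rand bit [1:0]     reorder_slot;   // shuffle position within a small window

  constraint c_len  { payload.size() inside {[1:64]}; }
  // Keep faults rare so mostly-valid encrypted traffic still flows.
  constraint c_rare { corrupt_pcrc dist {0 := 95, 1 := 5};
                      drop_pkt     dist {0 := 98, 1 := 2}; }
endclass

module tb;
  initial begin
    ide_pkt_txn t = new();
    repeat (100) begin
      void'(t.randomize());
      // Drive t onto the link here; a scoreboard checks that the receiver
      // detects corrupted/dropped packets while accepting the valid ones.
    end
  end
endmodule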
2. Random Errors and Fault Injection: This involves simulating random bit flips, PCRC errors, or protocol violations to help validate the robustness of PCIe's error detection and correction mechanisms. These techniques help assess how well the PCIe IDE implementation: (i) Detects and responds to unexpected errors. (ii) Maintains secure communication under stress. (iii) Follows the PCIe error recovery and reporting mechanisms (AER – Advanced Error Reporting). (iv) Ensures encryption and decryption states stay synchronized across endpoints. 3. Traffic Pattern Randomization: Randomizing the sequence, size, and timing of data packets helps test how the device maintains data integrity under heavy, unpredictable traffic loads. Randomization for Data Encryption Verification Encryption adds complexity to verification, as encrypted data streams are not readable by traditional checks. Randomization becomes essential to test how encryption behaves under different scenarios, and it ensures that vulnerabilities, such as key reuse or predictable patterns, are identified and mitigated. 1. Random Encryption Keys and Payloads: Randomly varying keys and payloads helps validate the correctness of encryption without hardcoded assumptions, ensuring that the encryption logic behaves correctly across all possible inputs. 2. Randomized Initialization Vectors (IVs): Many encryption protocols require a unique IV for each transaction, and randomized IVs ensure that encryption does not repeat patterns. To understand the IDE key management flow, follow the diagram below, which illustrates a detailed example key programming flow using the IDE_KM protocol. Figure 1: IDE_KM Example As Figure 1 shows, the IDE_KM protocol involves the start of the IDE_KM session, device capability discovery, a key request from the host, key programming to the PCIe device, and key acknowledgment. First, the host starts the IDE_KM session by detecting the presence of the PCIe devices; if a device supports the IDE protocol, the system continues with the key programming process. A query then discovers the device's encryption capabilities, determining, for example, whether the device supports dynamic key updates or static keys. The host then requests a key suitable for the device from the key management entity. Once the key is obtained, the host programs it into the IDE controller on the PCIe endpoint, so that the host and the device share the same key to encrypt and authenticate traffic. The device acknowledges that it has received and successfully installed the encryption key, and the acknowledgment message is sent back to the host. Once both the host and the PCIe endpoint are configured with the key, a secure communication channel is established; from this point, all data transmitted over the PCIe link is encrypted to maintain confidentiality and integrity. IDE_KM plays a crucial role in distributing keys securely and maintaining encryption and integrity for PCIe transactions. This key programming flow ensures that a secure communication channel is established between the host and the PCIe device, and the randomized key approach ensures that the encryption does not repeat patterns.
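In the same spirit, key material and IVs can be randomized per stream with a small constrained-random class. This is a generic sketch: IDE uses AES-GCM with 256-bit keys, but the field names, the 96-bit IV width, and the uniqueness bookkeeping below are illustrative assumptions, not the VIP's API.

class ide_key_cfg;
  rand bit [255:0] key;        // AES-GCM key for the stream
  rand bit [95:0]  iv;         // assumed 96-bit initialization vector
  rand bit [7:0]   stream_id;  // one of the 0-255 selective streams

  bit [95:0] used_ivs [$];     // IVs already issued under this key

  // Never repeat an IV within a key's lifetime.
  constraint c_unique_iv { !(iv inside {used_ivs}); }

  function void post_randomize();
    used_ivs.push_back(iv);
  endfunction
endclass

Randomizing a fresh ide_key_cfg for each key programming cycle exercises the IDE_KM flow above with unpredictable key/IV material while guaranteeing no IV reuse.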
3. Randomization of PHE: Partial Header Encryption (PHE) is an additional mechanism added to Integrity and Data Encryption (IDE) in PCIe 6.0. Validating PHE with a variety of traffic, and incorporating randomization into the APIs provided for validating the PHE feature, makes the encryption verification more robust. Partial Header Encryption in Integrity and Data Encryption for PCIe has more detailed information on this. Figure 2: High-Level Flow for Partial Header Encryption 4. Randomization of IDE Address Association Register values: IDE Address Association Registers 1/2/3 are supposed to be configured according to the memory address ranges of the IDE partner ports. The fields of the IDE address registers are split into multiple values such as Memory Base Lower, Memory Limit Lower, Memory Base Upper, and Memory Limit Upper. An IDE implementation can have multiple register blocks covering 32- or 64-bit addresses, different register sizes, 0-255 selective streams, 0-15 address blocks, etc. Randomizing these register values can help verify all the corner cases; please refer to Figure 3. Figure 3: IDE Address Association Register 5. Random Faults During Encryption: Injecting random faults (e.g., dropped packets or timing mismatches) ensures the system can handle disruptions and prevent data leakage. Challenges of IDE Randomization and Their Solutions Randomization introduces a vast number of scenarios, making it computationally intensive to simulate every possibility. Constrained randomization limits random inputs to valid ranges while still covering edge cases, and coverage-driven verification ensures critical scenarios are tested without excessive redundancy. Verifying encrypted data with random inputs increases complexity: encryption masks the data, making it hard to verify outputs without compromising security. Here we can implement various IDE checks in the IDE callback to analyze encrypted traffic without decrypting it. Randomization can also trigger unexpected failures that are often difficult to reproduce. With seed-based randomization, a specific seed generates a repeatable random sequence, which helps in reproducing and analyzing the behavior precisely. Conclusion Randomization is a powerful technique in PCIe verification, ensuring robust validation of both data integrity and data encryption. It helps uncover subtle bugs and edge cases that non-randomized testing might miss. Cadence PCIe VIP supports full-fledged IDE verification with rigorous randomized verification that ensures data integrity, and robust, reliable encryption mechanisms ensure secure and efficient data communication. Randomization also brings various challenges, and to overcome them we adopt a combination of constrained randomization, seed-based testing, and coverage-driven verification. As PCIe continues to evolve toward higher speeds and stronger security demands, Cadence PCIe VIP keeps pace with industry demand and verifies high-performance systems that safeguard data in real-world environments. For more information, you can refer to Verification of Integrity and Data Encryption (IDE) for PCIe Devices and Industry's First Adopted VIP for PCIe 7.0. More Information: For more info on how Cadence PCIe Verification IP and TripleCheck VIP enable users to confidently verify IDE, see our VIP for PCI Express, VIP for Compute Express Link, and TripleCheck for PCI Express. For more information on PCIe in general, and on the various PCI standards, see the PCI-SIG website. Full Article
co Replace Cache using TCL command By community.cadence.com Published On :: Wed, 21 Mar 2018 09:30:10 GMT Hello, I'm using OrCAD 17.2, and at the company I'm working for there was a change in the database folder (from drive F to G, for example) that affects the synchronize option in the Part Manager, and changing each part manually in the Design Cache can be a pain. Is there any way I can make a TCL script that will run and replace one part's cache with another? Even better if I can drive it from a table, reading from one column and writing from another. I would really appreciate an example. Thanks for the help. Full Article
co Functional coverage report. By community.cadence.com Published On :: Wed, 13 Feb 2019 23:37:00 GMT Is there a way to generate coverage reports, not in UCD or any other database format? I have written a basic covergroup and passed the arguments [-covoverwrite -cov_cgsample -cov_debuglog -coverage u] to the xrun command; however, I don't see anything in the sim directory, nor do I see anything in the logs indicating the covergroups have been hit. How can I confirm that covergroups are getting hit and essentially observe the bins? In Questa sim, you essentially get them as part of the log itself. Full Article
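One quick way to confirm covergroup hits directly in the simulation log, independent of the coverage database, is to query the covergroup's built-in get_coverage() method and print the result. A generic sketch (signal names invented):

module tb;
  bit clk;
  bit [3:0] addr;

  covergroup cg @(posedge clk);
    cp_addr: coverpoint addr;
  endgroup
  cg cg_inst = new();

  always #5 clk = ~clk;

  initial begin
    repeat (50) @(negedge clk) addr = $urandom_range(0, 15);
    // Print achieved coverage into the log, Questa-style.
    $display("cg: %0.2f%%  cp_addr: %0.2f%%",
             cg_inst.get_coverage(), cg_inst.cp_addr.get_coverage());
    $finish;
  end
endmodule

With -coverage u (and -covoverwrite, if needed), xrun should also write a cov_work coverage database that Cadence's IMC tool can browse; the $display above simply gives immediate confirmation in the log that bins are being hit.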
co How do I use TCL to get connections between modules in Innovus? By community.cadence.com Published On :: Sun, 20 Sep 2020 04:04:00 GMT Please give me some ideas. Thank you very much. Full Article
co How to remove incorrect nets error in Cadence? By community.cadence.com Published On :: Tue, 03 Nov 2020 10:58:16 GMT While doing LVS, it shows an error in the gnd connection. I am not able to understand exactly what the error is and what I need to do to remove it. Full Article
co Xtensa compiler issue By community.cadence.com Published On :: Thu, 01 Dec 2022 09:31:48 GMT Hi, I have an Xtensa compiler issue: the compilation of switch-case statements is optimized in certain patterns and leads to unexpected results. I cross-checked the assembly code and found that this compiler optimization seems similar to the tree-switch-conversion feature in the GCC compiler (https://gcc.gnu.org/onlinedocs/gcc-9.1.0/gcc/Optimize-Options.html): -ftree-switch-conversion: "Perform conversion of simple initializations in a switch to initializations from a scalar array. This flag is enabled by default at -O2 and higher." Unfortunately, I can't find any similar compiler option (like -fno-tree-switch-conversion) in the Xtensa compiler (XCC) to enable/disable this feature, and it seems to be enabled in XCC by default even when I'm using -O0 for the least optimization. I'm wondering if there's any way to permanently disable this feature in XCC? PS: The release version of the XCC compiler I'm using is RD-2012.5. Thanks!
co The code used to Replace Cache using TCL command By community.cadence.com Published On :: Fri, 19 Apr 2024 10:16:17 GMT Use the DBO function DboLib_ReplaceCache to do the job of "Replace Cache". To make the job easier, use the code below, which wraps the function mentioned above:

# Get the active OrCAD Capture session and the first open design.
set lStatus [DboState]
set lSession $::DboSession_s_pDboSession
DboSession -this $lSession
set lDesignsIter [$lSession NewDesignsIter $lStatus]
set lDesign [$lDesignsIter NextDesign $lStatus]
set lNullObj NULL

# Use forward slashes in paths: in a double-quoted Tcl string, backslash
# sequences like \P are substituted and the backslash is lost.
set oldLibName [DboTclHelper_sMakeCString "E:/PROJECT_WORKLIB.OLB"]
set newLibName [DboTclHelper_sMakeCString "E:/MCU_PARTS_LIB.OLB"]

# DboLib_ReplaceCache wrapper: swap a part's cache entry from the old
# library to the part of the same name in the new library.
proc ReplaceCacheByName {partName} {
    global oldLibName
    global newLibName
    global lDesign
    set lPartStr [DboTclHelper_sMakeCString $partName]
    $lDesign ReplaceCache $lPartStr $oldLibName $lPartStr $newLibName 0 1
}

Then call the procedure like this to do the real job:

ReplaceCacheByName "CL10B104KB8NNNC_C12"

Full Article
co Here Is Why the Indian Voter Is Saddled With Bad Economics By indiauncut.com Published On :: 2019-02-03T03:54:17+00:00 This is the 15th installment of The Rationalist, my column for the Times of India. It's election season, and promises are raining down on voters like rose petals on naïve newlyweds. Earlier this week, the Congress party announced a minimum income guarantee for the poor. This Friday, the Modi government released a budget full of sops. As the days go by, the promises will get bolder, and you might feel important, with so much attention being given to you. Well, the joke is on you. Every election, HL Mencken once said, is "an advance auction sale of stolen goods." A bunch of competing mafias fight to rule over you for the next five years. You decide who wins, on the basis of who can bribe you better with your own money. This is an absurd situation, which I tried to express in a limerick I wrote for this page a couple of years ago: POLITICS: A neta who loves currency notes/ Told me what his line of work denotes./ 'It is kind of funny./ We steal people's money/And use some of it to buy their votes.' We're the dupes here, and we pay far more to keep this circus going than this circus costs. It would be okay if the parties, once they came to power, provided good governance. But voters have given up on that, and now only want patronage and handouts. That leads to one of the biggest problems in Indian politics: We are stuck in an equilibrium where all good politics is bad economics, and vice versa. For example, the minimum guarantee for the poor is good politics, because the optics are great. It's basically Garibi Hatao: that slogan made Indira Gandhi a political juggernaut in the 1970s, at the same time that she unleashed a series of economic policies that kept millions of people in garibi for decades longer than they should have been. This time, the Congress has released no details, and keeping it vague makes sense because I find it hard to see how it can make economic sense. Depending on how they define 'poor', how much income they offer and what the cost is, the plan will either be ineffective or unworkable. The Modi government's interim budget announced a handout for poor farmers that seemed rather pointless. Given our agricultural distress, offering a poor farmer 500 bucks a month seems almost like mockery. Such condescending handouts solve nothing. The poor want jobs and opportunities. Those come with growth, which requires structural reforms. Structural reforms don't sound sexy as election promises. Handouts do. A classic example is farm loan waivers. We have reached a stage in our politics where every party has to promise them to assuage farmers, who are a strong vote bank everywhere. You can't blame farmers for wanting them – they are a necessary anaesthetic. But no government has yet made a serious attempt at tackling the root causes of our agricultural crisis. Why is it that Good Politics in India is always Bad Economics? Let me put forth some possible reasons. One, voters tend to think in zero-sum ways, as if the pie is fixed, and the only way to bring people out of poverty is to redistribute. The truth is that trade is a positive-sum game, and nations can only be lifted out of poverty when the whole pie grows. But this is unintuitive. Two, Indian politics revolves around identity and patronage.
The spoils of power are limited – that is indeed a zero-sum game – so you're likely to vote for whoever can look after the interests of your in-group rather than care about the economy as a whole. Three, voters tend to stay uninformed for good reasons, because of what Public Choice economists call Rational Ignorance. A single vote is unlikely to make a difference in an election, so why put in the effort to understand the nuances of economics and governance? Just ask, what is in it for me, and go with whatever seems to be the best answer. Four, politicians have a short-term horizon, geared towards winning the next election. A good policy that may take years to play out is unattractive. A policy that will win them votes in the short term is preferable. Sadly, no Indian party has shown a willingness to aim for the long term. The Congress has produced new Gandhis, but not new ideas. And while the BJP did make some solid promises in 2014, they did not walk that talk, and have proved to be, as Arun Shourie once called them, UPA + Cow. Even the Congress is adopting the cow, in fact, so maybe the BJP will add Temple to that mix? Benjamin Franklin once said, "Democracy is two wolves and a lamb voting on what to have for lunch." This election season, my friends, the people of India are on the menu. You have been deveined and deboned, marinated with rhetoric, seasoned with narrative – now enter the oven and vote. The India Uncut Blog © 2010 Amit Varma. All rights reserved. Follow me on Twitter. Full Article
co Start Your Engines: Create and Insert Connect Modules for Mixed-Signal Verification By community.cadence.com Published On :: Tue, 11 Jun 2024 16:17:00 GMT Read this blog to know how you can easily create and insert connect modules using Spectre AMS Designer with the Verilog-AMS standard language defined by Accellera. (read more) Full Article AMS AMS Designer Mixed-Signal AMS simulation mixed-signal design AMS Verification mixed-signal verification
co Start Your Engines: The Innovation Behind Universal Connect Modules (UCM) By community.cadence.com Published On :: Fri, 02 Aug 2024 08:10:00 GMT Read this blog to know more about the innovation behind Universal Connect Modules (UCM).(read more) Full Article SystemVerilog Start Your Engines Spectre AMS Designer Verilog-AMS Mixed-Signal mixed-signal verification
co PCB Chamfering Board edge connectors By community.cadence.com Published On :: Thu, 09 Dec 2021 15:12:48 GMT Hi, I am looking into chamfering the edge of a PCB for board edge connectors. I have used the fillet command before but am new to chamfering. Below is the description: As seen in the image above, the PCB edges are chamfered in thickness as well as at the corners. Using OrCAD PCB hotfix S023. Full Article
co 10 Layer PCB project won't generate Gerbers completely for middle layers By community.cadence.com Published On :: Thu, 09 Dec 2021 16:29:21 GMT Hello fellow PCB designers, We have a 10-layer PCB design that originated in PADS and was converted over to Allegro 17.4. This is an old design, but it is manufacturable and works perfectly fine. When I try to generate a Gerber for the top or bottom layers, the Gerber comes out fine. But most of the middle layers are etch and vias for power and ground, and their Gerbers come out mostly blank (there might be some details), even though everything is displayed correctly in the Gerber view. The design does have many close spacings. I have not changed anything in Constraint Manager yet and have turned off a lot of the DRCs, but I am thinking there might be something wrong with the constraints. I find that the CSet is set to 2_18, and I am not sure yet what this means; also, there are many of these definitions (PCS 3, 4, 5, etc.) that are the same as CSet 2_18. Any suggestions would be great; we are currently looking into this. We have seen that even a small change in Constraint Manager can cause long processing times and even Allegro crashes, and this is a large project. Thanks much, Mike Pollock. Full Article
co The default location of OrCAD Capture library Pin Number is incorrect By community.cadence.com Published On :: Tue, 14 Dec 2021 21:38:21 GMT The default position of the pin number is incorrect. Full Article
co Sense line and decoupling capacitors By community.cadence.com Published On :: Thu, 23 Dec 2021 08:16:10 GMT Hello, A maybe-silly question came to my mind: when routing sense lines, is it better to have them as close as possible to the DUT or after the decoupling capacitors? Force in red, sense in purple. Is the best way 1 or 2? Thanks in advance, and Merry Christmas to everybody! Full Article
co Unconnected nets By community.cadence.com Published On :: Fri, 24 Dec 2021 13:55:17 GMT I have a design which says there are 6 unconnected nets, but 'Display All Nets' shows only 4 unconnected. When I try to look at a non-connection, it appears connected and nothing shows. What is happening? Full Article