Winbond Introduces Revolutionary CUBE Architecture for Powerful Edge AI Devices
Winbond Electronics, a leading global supplier of semiconductor memory solutions, announced a technology that makes edge AI computing affordable for mainstream use cases. Winbond's new Customized Ultra-Bandwidth Elements (CUBE) memory is optimized to run generative AI seamlessly in hybrid edge/cloud applications.
CUBE supports not only front-end 3D structures such as Chip on Wafer (CoW) and Wafer on Wafer (WoW), but also back-end 2.5D integration with silicon interposers and fan-out solutions on substrates, improving the performance of 3D chips. Designed to meet the growing demand for edge AI computing equipment, it supports memory capacities from 256Mbit to 8Gbit in a single die and can be 3D-stacked with WoW when higher capacity is required. It also enhances bandwidth while reducing data-transfer power consumption.
Winbond takes a big step forward with CUBE, enabling seamless deployment to different platforms and interfaces. This technology is suitable for advanced applications such as wearables, edge server devices, surveillance equipment, ADAS, and co-robots.
Winbond said: “CUBE architecture enables a paradigm shift in AI deployment. The Company believes that the integration of cloud AI and powerful edge AI will define the next stage of AI development. With CUBE, we are unlocking new possibilities, opening the path to improved memory performance and cost optimization in powerful edge AI devices.”
The main features of CUBE are:
⚫ Power Efficiency: CUBE achieves excellent power efficiency, consuming less than 1pJ/bit, which enables extended operation and optimized energy usage.
⚫ Superior Performance: With a bandwidth of 32 GB/s to 256 GB/s per die, CUBE ensures outstanding performance that exceeds industry standards.
⚫ Compact size: CUBE is currently based on D1Y technology, will move to D1α technology in 2025, and is offered in a wide range of capacities from 256Mbit to 8Gbit per die, allowing for smaller form factors. Introducing Through-Silicon Vias (TSVs) further increases performance and improves signal and power integrity. In addition, reducing the pad pitch shrinks the IO area and improves heat dissipation when the top die carries the SoC and the bottom die carries the CUBE.
⚫ High-bandwidth and cost-effective solution: Delivering outstanding cost performance, CUBE's IO reaches speeds of up to 2Gbps with 1K IOs in total. Combined with legacy foundry processes such as 28nm/22nm SoCs, it delivers ultra-high bandwidth of 32GB/s to 256GB/s (HBM2-class bandwidth), equivalent to 4-32 times the bandwidth of an LPDDR4 x16 interface running at 4266Mbps.
⚫ Improving cost efficiency by reducing the SoC die size: By stacking the SoC (top die, without TSVs) on top of the CUBE (bottom die, with TSVs), the SoC die size can be minimized and the TSV area constraint on the SoC removed. This not only increases the cost advantage but also contributes to the overall efficiency of edge AI devices.
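The bandwidth and power figures quoted above can be cross-checked with simple arithmetic. The sketch below uses only the numbers stated in this release (2Gbps per IO, 1K IOs, LPDDR4 x16 at 4266Mbps, under 1pJ/bit); it is an illustrative calculation, not Winbond data.

```python
# Sanity-check of the bandwidth and power figures quoted in the feature list.
# All input numbers come from the release; the arithmetic is illustrative.

def gbps_to_gbyte_per_s(gbps: float) -> float:
    """Convert gigabits per second to gigabytes per second."""
    return gbps / 8.0

# CUBE: up to 2 Gbps per IO with 1K (1024) IOs in total.
cube_peak_gbyte = gbps_to_gbyte_per_s(2.0 * 1024)  # 256 GB/s

# LPDDR4 reference: 4266 Mbps per pin on a 16-IO interface.
lpddr4_gbyte = gbps_to_gbyte_per_s(4.266 * 16)  # ~8.5 GB/s

# The quoted 32-256 GB/s range works out to roughly 4x-30x LPDDR4 x16,
# consistent with the "4-32 times" claim.
ratio_low = 32 / lpddr4_gbyte
ratio_high = cube_peak_gbyte / lpddr4_gbyte

# At <1 pJ/bit, peak-bandwidth transfer power stays around 2 W:
# energy per bit (J) x bits moved per second.
power_watts = 1e-12 * cube_peak_gbyte * 1e9 * 8

print(f"CUBE peak:      {cube_peak_gbyte:.0f} GB/s")
print(f"LPDDR4 x16:     {lpddr4_gbyte:.2f} GB/s")
print(f"Ratio range:    {ratio_low:.1f}x - {ratio_high:.1f}x")
print(f"Transfer power: < {power_watts:.2f} W at peak")
```

At the top of the range, 2Gbps across 1024 IOs is 2048Gbps, i.e. 256GB/s, and moving that many bits at under 1pJ each keeps data-transfer power near 2W.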
Winbond said: “CUBE enables developers and enterprises to unlock the full potential of hybrid edge/cloud AI, improving system performance, response time, and energy efficiency, and driving advances across a variety of industries.”
Winbond is actively collaborating with partner companies to establish a 3DCaaS platform that leverages CUBE's capabilities. By incorporating CUBE into existing technologies, Winbond aims to provide cutting-edge solutions that help businesses succeed in the era of AI transformation.