SiC semiconductors take the heat out of the cloud computing myth
Cloud computing is a myth. That doesn't mean it doesn't exist, doesn't work, or isn't a vital part of the global digital infrastructure. But the image that the term "cloud computing" conjures up of an immaterial source of on-demand, infinite computing power doesn't match reality.
Cloud computing, at its core, is vast data centers packed with millions of sophisticated servers that rely on advanced cooling systems and complex power supplies. Those servers convert electrical energy into computation, and ultimately into heat. Whether you save a file to Dropbox or open a web page, every bit of data that travels effortlessly between client and server carries a direct energy cost.
Cloud computing is essential to concepts such as the Internet of Things (IoT) and the rollout of 5G networks, playing a key role in both implementing the network infrastructure and handling the vast amounts of data that will flow across the network once it is in place. The low latency of 5G networks will also enable algorithms running in cloud computing data centers to interface with autonomous vehicles and make real-time decisions about traffic management and routing options.
The amount of computing done in cloud data centers will grow rapidly over the next decade: Gartner predicts worldwide public cloud services revenue will grow 17.5% to $214.3 billion in 2019, and energy consumption will grow along with it. Even small improvements in system-wide energy efficiency can therefore yield significant energy savings across a data center.
One place to start is by making data center power supplies more efficient. A simple way to do this is to replace existing silicon MOSFETs with silicon carbide (SiC) parts. SiC devices can switch at higher frequencies, allowing more efficient power conversion, and can run hotter than their silicon equivalents, reducing the strain on data center cooling systems.
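To see why small efficiency gains matter at data center scale, it helps to put rough numbers on the argument. The sketch below uses assumed example figures (server count, per-server load, and the two conversion efficiencies); none of them are ON Semiconductor specifications.

```python
# Hedged illustration: how a small power-supply efficiency gain
# compounds across a data center. All figures are assumed example
# values, not vendor specifications.

def annual_loss_kwh(load_kw: float, efficiency: float, hours: float = 8760) -> float:
    """Energy dissipated as heat per year by a converter delivering load_kw."""
    input_kw = load_kw / efficiency
    return (input_kw - load_kw) * hours

# Example: 10,000 servers drawing 500 W each (5 MW of IT load).
it_load_kw = 10_000 * 0.5

si_loss = annual_loss_kwh(it_load_kw, 0.94)   # assumed silicon-based PSU efficiency
sic_loss = annual_loss_kwh(it_load_kw, 0.97)  # assumed SiC-based PSU efficiency

saved_mwh = (si_loss - sic_loss) / 1000
print(f"Annual heat saved: {saved_mwh:,.0f} MWh")
```

Note that every kilowatt-hour not dissipated as heat also avoids the cooling energy needed to remove it, so the real saving is larger than the conversion loss alone.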
Designers may be tempted to work from a blank sheet of paper when trying to solve complex system optimizations to build more energy-efficient data centers. But the reality for many will be to make many small improvements to what already exists. For power supply designs, ON Semiconductor offers a range of SiC FETs in the widely used DFN8x8 package. The device's stacked cascode topology means it can be driven like a silicon MOSFET, but it switches faster, handles more power, and simplifies circuit design.
For example, building a 3 kW LLC circuit using ON Semiconductor's UF3SC065040D8S SiC FET requires wiring two devices in parallel (to meet thermal constraints) for every switch position in the topology, so a half-bridge stage requires four devices. Building the same circuit with a competing product would require at least six devices.
The advantages of SiC semiconductor technology become even greater at higher power levels: a 5 kW LLC circuit using the same ON Semiconductor parts requires three parallel devices per switch position, and therefore only six devices for a complete half-bridge. Competing solutions would require 10 devices to achieve the same thing.
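The device-count arithmetic above reduces to a simple multiplication: switch positions in the topology times the number of paralleled devices per position. The parallel counts (two at 3 kW, three at 5 kW) and the competitor figures are taken from the text; the helper function is only an illustrative restatement.

```python
# Sketch of the device-count arithmetic from the article.
# Parallel counts and competitor figures come from the text.

def devices_needed(switch_positions: int, parallel_per_position: int) -> int:
    """Total FETs when each switch position uses paralleled devices."""
    return switch_positions * parallel_per_position

HALF_BRIDGE_POSITIONS = 2  # a half-bridge has two switch positions

# 3 kW LLC: two UF3SC065040D8S devices in parallel per position
print(devices_needed(HALF_BRIDGE_POSITIONS, 2))  # 4, vs. at least 6 for a competing part

# 5 kW LLC: three devices in parallel per position
print(devices_needed(HALF_BRIDGE_POSITIONS, 3))  # 6, vs. 10 for competing solutions
```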
These optimizations will allow you to expand your use of cloud computing without straining your energy budget. A pragmatic approach to improving data center efficiency also helps take some of the heat out of the cloud computing myth.