Breakthrough in FPGAs could make custom chips Faster, Larger

25 March 2015, 12:50
TipMyPip

Today we are worshipping the gods of the algorithm, according to one prominent magazine. It's not a bad comparison. Everything from search results to our machine learning efforts rests on a series of equations that purport to solve for something that feels almost ineffable, human. Teaching a computer to see. Turning our comings and goings into a schedule. Learning our thermostat settings and turning those into a schedule of their own.

But if our new gods are algorithms, then the chips performing those complicated equations are their shrines, and the more specific the shrine, the better your prayers work. The Greeks knew that. They built shrines to each of their individual gods, with statues, symbols and other trappings of faith specific to their deity of choice. When it comes to algorithms, computer scientists are less invested in faith, but they are aware that their equations do run faster or more efficiently on a specially designed piece of silicon.

But because algorithms change over time and hardware usually stays the same, the flexibility of being able to reprogram your hardware to match your changing algorithm becomes essential. That's why big companies like Intel and Microsoft are turning to chips called Field Programmable Gate Arrays, or FPGAs. Intel marries custom cores to its x86 architecture to help large data center customers (like eBay or Facebook) improve their performance. Because when worshipping algorithms, a custom shrine makes those prayers work better, and a shrine that changes with the algorithm is the best of both worlds. But like all religions, using FPGAs exacts a price.

The challenge with these programmable chips is that they are slower than general-purpose processors such as x86 or ARM-based cores. By making the hardware programmable (handy for algorithms you might want to change later) and more flexible, you sacrifice speed in getting information on and off the chip. There is generally a bottleneck when shuttling information to an FPGA, so while it can solve problems very quickly and can adapt to different problems with a minor change in programming, sending it the data it needs to solve those problems slows things down. But for certain applications, such as search engine algorithms or Microsoft's recent decision to use FPGAs for neural networks, the flexibility of being able to tweak your hardware is more important than the performance hit.
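To see why the data-movement bottleneck matters, it helps to compare the time spent shuttling a batch of data over an off-chip link with the time the FPGA actually needs to chew through it. The figures in this sketch are illustrative assumptions (a roughly PCIe-class link and a hypothetical fabric throughput), not measurements of any particular chip.

```python
# Back-of-envelope sketch of the off-chip FPGA bottleneck.
# All numbers are illustrative assumptions, not vendor specifications.

DATA_BYTES = 64 * 1024 * 1024      # a 64 MB batch of work
LINK_BYTES_PER_SEC = 8e9           # ~8 GB/s, roughly a PCIe-class link (assumed)
FABRIC_BYTES_PER_SEC = 50e9        # throughput the FPGA fabric could sustain (assumed)

transfer_time = DATA_BYTES / LINK_BYTES_PER_SEC
compute_time = DATA_BYTES / FABRIC_BYTES_PER_SEC

print(f"moving the data:    {transfer_time * 1e3:.1f} ms")
print(f"crunching the data: {compute_time * 1e3:.1f} ms")

# When transfer_time dominates compute_time, the accelerator spends most of its
# time waiting for data; putting the FPGA on the same die removes that trip.
```

Under these assumptions the transfer takes several times longer than the computation, which is exactly the kind of imbalance an on-chip FPGA block is meant to eliminate.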

But what if, in exchange for a larger piece of silicon, you didn't have to take the performance hit? That's the premise behind Flex Logic, a startup that launched this week with less than $10 million in funding and the IP for an FPGA that is both flexible and wired completely differently, so it doesn't create a bottleneck in getting data onto the core. Flex Logic CEO Geoff Tate explained that the company has changed the wiring inside the FPGA so that, instead of sitting outside the processor, the FPGA can be placed directly on the chip, making it part of an integrated package or system-on-chip (SoC). This makes the total area of the eventual chip larger, but it boosts performance and lowers the overall cost. The Flex Logic cores can also snap together, meaning the design of these FPGAs is fairly flexible and modular.

So far Flex Logic is launching with a product called the ESLX core in a variation that offers 2,500 LUTs, or look-up tables (a measure of logic capacity in FPGAs). This core can be combined with other ESLX cores to give a company more performance, and each one adds about 15 cents to the overall device. That cost is mitigated by putting it on the chip as part of an SoC, however. The initial sample chip is in the company's hands and customers are testing it, with the first chip expected to be in products later this year, said Tate. Because Flex Logic is selling IP, much as ARM does, rather than the silicon itself, Tate expects it will be able to adapt its designs fairly rapidly to the demands of the market. It plans to make a larger and a smaller version of its ESLX core, as well as a 40 nanometer version to complement its current 28 nanometer one, but Tate is waiting to see what the market demands. He expects the products to first appear in the networking and communications space.
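For readers unfamiliar with the term, a look-up table is the basic building block of an FPGA fabric: a tiny memory whose stored bits define which boolean function it computes, so reprogramming the chip amounts to rewriting those bits. The sketch below models a 4-input LUT in Python purely for illustration; the class, bit ordering, and truth tables are my own assumptions and have nothing to do with Flex Logic's actual IP.

```python
class LUT4:
    """A 4-input look-up table: 16 stored bits pick the output for each input combination."""

    def __init__(self, truth_table_bits):
        # truth_table_bits is a 16-bit integer; bit i is the output when the
        # four inputs, packed into an index, equal i.
        self.bits = truth_table_bits & 0xFFFF

    def evaluate(self, a, b, c, d):
        index = (d << 3) | (c << 2) | (b << 1) | a
        return (self.bits >> index) & 1

# "Reprogramming" the fabric is just loading different truth tables:
and4 = LUT4(0b1000_0000_0000_0000)      # 1 only when all four inputs are 1
xor2 = LUT4(0b0110_0110_0110_0110)      # a XOR b, ignoring c and d

print(and4.evaluate(1, 1, 1, 1))  # -> 1
print(xor2.evaluate(1, 0, 1, 0))  # -> 1
```

An FPGA strings thousands of such tables together through programmable routing, which is why a core's LUT count is the headline figure vendors quote.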

Other possible applications for the cores could include encryption in the security field or software-defined radios, which could be re-tuned to different radio protocols as needed. If we can make faster, flexible chips, this is truly a breakthrough worth investigating. I'll be keeping an eye on Flex Logic to see the customers it signs up and the tradeoffs its technology demands in the field.
