
Cornell University professor knows how to keep hardware fast and furious


A Computer Architect’s Balancing Act

Christopher Batten is a computer architect. He’s working toward identifying, designing, and building the types of computer architectures society will need 10 years down the line. He’s mainly interested in hardware specialization, with a focus on accelerators: computer processors designed to perform certain functions faster than a general-purpose processor can.

“Specialization is particularly relevant now,” he says. “It used to be if you wanted to do a specific computation—let’s say, machine learning—and you wanted to do it faster, you just waited two years, and you’d get faster processors and your machine learning would get better, too. Now, the big differentiator is how architects balance the tension between less-efficient general architectures and more-efficient specialized architectures.”

The new developments suit Batten, who uses a vertically integrated research methodology. “In my group, we don’t just do high-level modeling,” he explains. “We like to build chips, and we like to do prototyping. We can’t build things that are competitive with a company like Intel, but we can build small things to learn about physical design issues, to test out our ideas, and to do experiments that feed into our higher-level models that then feed into even higher-level ones to create a balanced research methodology.”

That balanced research methodology requires Batten and his colleagues to work with all three levels of modeling common to computer architecture: the functional level, which works at a high level of abstraction and deals with things like algorithms; the cycle level, which brings the notions of time and scheduling into the mix; and the register-transfer level (RTL), which delves into the concrete details of the actual hardware. “The traditional approach is that each of these levels of modeling is a completely different ecosystem, so pursuing research that spans all three can be particularly challenging,” says Batten.
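
What the levels mean is easiest to see in miniature. The sketch below is plain Python, not PyMTL; the names and the three-cycle latency are invented for illustration. It models the same multiplier at the functional level and at the cycle level; an RTL model would go further and pin down the actual registers, wires, and bit-level logic.

```python
# Illustrative only: one multiplier modeled at two levels of abstraction.

# Functional level: only the answer matters, not how long it takes.
def mul_fl(a: int, b: int) -> int:
    return a * b

# Cycle level: each call to tick() models one clock cycle, and results
# emerge after a fixed (assumed) three-cycle latency.
class MulCL:
    LATENCY = 3

    def __init__(self):
        self.pipeline = [None] * self.LATENCY

    def tick(self, req=None):
        """Advance one cycle; req is an (a, b) pair or None."""
        resp = self.pipeline.pop(0)
        self.pipeline.append(req[0] * req[1] if req else None)
        return resp

# The functional model answers immediately...
assert mul_fl(6, 7) == 42

# ...while the cycle-level model makes the caller wait three cycles.
m = MulCL()
assert [m.tick((6, 7)), m.tick(), m.tick(), m.tick()] == [None, None, None, 42]
```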

Folding Three Levels of Computer Hardware Modeling into One Ecosystem—PyMTL

About five years ago, Batten’s graduate student Derek Lockhart, PhD’15 Electrical and Computer Engineering, grew frustrated with the laborious process of working across all three modeling types, given the total separation of their ecosystems. He proposed a single unified framework, written in the general-purpose programming language Python, that could handle functional, cycle, and RTL modeling. Once Lockhart had created a proof-of-concept framework, he, Batten, and their colleagues developed the second version, which they call PyMTL (pronounced pie-metal).

The researchers used PyMTL v2 for about five years, testing and refining it. Over the past year, another student in Batten’s lab, Shunning Jiang, PhD’21 Electrical and Computer Engineering, has led the development of a completely new version of PyMTL that improves simulation performance and designer productivity. The researchers are releasing this new version at the 2019 International Symposium on Computer Architecture in June. “PyMTL is a unified framework for doing hardware modeling,” Batten says. “But you can also generate, simulate, and verify hardware—all in a Python-based environment.”
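
To give a flavor of what that looks like, here is a minimal sketch in the style of the small examples in the group’s public PyMTL3 repository (the exact API has shifted between releases, so treat the details as approximate): a one-bit full adder is described as an ordinary Python class, and the same Python session then simulates it.

```python
from pymtl3 import *

# A one-bit full adder described as a plain Python class.
class FullAdder( Component ):
  def construct( s ):
    s.a    = InPort ( Bits1 )
    s.b    = InPort ( Bits1 )
    s.cin  = InPort ( Bits1 )
    s.sum  = OutPort( Bits1 )
    s.cout = OutPort( Bits1 )

    @update
    def upblk():
      s.sum  @= s.cin ^ s.a ^ s.b
      s.cout @= ( ( s.a ^ s.b ) & s.cin ) | ( s.a & s.b )

# Simulate the design in the same Python environment.
fa = FullAdder()
fa.elaborate()
fa.apply( DefaultPassGroup() )
fa.sim_reset()

fa.a   @= 1
fa.b   @= 1
fa.cin @= 0
fa.sim_eval_combinational()
assert fa.sum == 0 and fa.cout == 1
```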

Batten and his collaborators built the interface between Python and industry-standard languages using open-source software. In particular, they made use of Verilog (an industry-standard hardware description language) and Verilator (an open-source tool that compiles Verilog into a library written in the general-purpose programming language C++).
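
Conceptually, the flow looks something like the sketch below; the file name and the wrapper at the end are hypothetical stand-ins, since the real PyMTL machinery hides these steps entirely.

```python
# Hypothetical sketch of the flow PyMTL automates; "Adder.v" and the
# wrapper class are illustrative stand-ins, not the real PyMTL API.
import subprocess

# Step 1: Verilator translates the Verilog module into C++ source
# (emitted under obj_dir/ by default).
subprocess.run([ "verilator", "--cc", "Adder.v" ], check=True)

# Step 2: the generated C++ is compiled into a library and wrapped so
# that, from the user's perspective, the Verilog design behaves like
# any other Python component:
#
#   adder = ImportedVerilogAdder()   # hypothetical generated wrapper
#   adder.in0 @= 2
#   adder.in1 @= 3
#   adder.sim_eval_combinational()
#   assert adder.out == 5
```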

“We hid all this in our framework,” Batten says. “So users won’t even notice. They can just automatically wrap and import designs written in other industry standard languages and use them in a Python-based environment. And because it’s Python, we can easily generate a hundred different designs from a single description.”
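
Because the hardware description is ordinary Python, sweeping a design space reduces to ordinary Python code. Here is a minimal sketch echoing the RegIncr example from the public PyMTL tutorials, with the bitwidth exposed as a constructor parameter (again, treat the API details as approximate):

```python
from pymtl3 import *

# A registered incrementer whose bitwidth is a plain Python parameter.
class RegIncr( Component ):
  def construct( s, nbits=8 ):
    s.in_     = InPort ( mk_bits(nbits) )
    s.out     = OutPort( mk_bits(nbits) )
    s.reg_out = Wire   ( mk_bits(nbits) )

    @update_ff
    def up_reg():
      s.reg_out <<= s.in_

    @update
    def up_incr():
      s.out @= s.reg_out + 1

# One description, a hundred designs: a Python loop over parameters.
designs = [ RegIncr( nbits=w ) for w in range( 1, 101 ) ]
```

Each instance can then be elaborated and simulated, and the same single description can be translated into Verilog when it is time to build real hardware.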

PyMTL, Jump-Starting the Open-Source Hardware Ecosystem

The Batten group has also employed PyMTL to tape out computer chips. Their latest goal is to use it to test chips after they are built. “Normally you have to set up an infrastructure to test the chip and get it running,” Batten says. “Our vision is to leverage our PyMTL modeling framework—not just to simulate the designs to verify them before you send them to the foundry to be built but also to reuse all that hard work to test the chip when it comes back. We aren’t there yet, but we’re working on it.”

The ultimate objective is for PyMTL to support a robust open-source hardware ecosystem similar to the current one for open-source software. “Right now, anyone can create a startup app, using open-source software,” Batten says. “They don’t have to build everything; they just leverage the power of open-source software that already exists.”

Open-source hardware, on the other hand, is scarce, and much of what exists is low quality. Most hardware today is designed by companies with billion-dollar budgets using proprietary tools. “Everybody wants to build accelerators,” Batten says. “But you can’t just buy an accelerator that runs your cool new machine-learning algorithm. You have to build it yourself. To do that, you want to reuse hardware building blocks developed by others and plug your accelerator into them. You just want to download all the open-source hardware and add your special sauce.

“PyMTL is a fantastic example of what we need to jump-start the open-source hardware ecosystem,” he continues. “Yes, we need more hardware building blocks. We need open-source hardware out there, but it also needs to be high quality. To make it high quality, we need to make it easy to test and verify these hardware building blocks. We need better verification environments that are easy to use and can exploit open-source frameworks. And that’s what PyMTL is. It’s a key missing link.”

Credit: “Keeping computer hardware fast and furious,” by Jackie Swift, Cornell University
