Q&A: Microsoft's new multicore-computing guru speaks out

November 26, 2007 (Network World) -- Supercomputing expert Dan Reed, who saw the birth of the Web browser at the University of Illinois at Urbana-Champaign, is joining Microsoft Research as director of scalable and multicore computing.

This serious college basketball fan is tearing himself away from the hoops heartland surrounding North Carolina's Research Triangle to lead Microsoft's efforts in multicore technology and next-generation data centers. "I watched Mosaic come up out of the ground, and I watched my students go off to start-ups, and I promised myself that if the surf was up again, I was going to grab my board and get in the water," Reed told Network World's John Fontana during a chat about Reed's move from academia and what he says could be a coming revolution that rivals the development of the Web.

What drew you to Microsoft? There are very few places in the world that have the combination of world-class research capability and the market influence and the resources to be game-changing players in a period where there is this much ferment with multicore and large data centers.

The other, broader question is, "Why industry, after having spent a life in academia?" And the answer is that for problems of this ilk [multicore, large data centers], the opportunity to profoundly influence their direction is actually much larger right now in the industry space than it is in the academic space.

What are the unique challenges multicore computing presents to a software development company such as Microsoft? We have hit the power and clock-frequency wall, so large-scale multicore is coming. To continue to ride the performance advantages of that on-chip parallelism, we have to rethink how we develop software. How we develop code that runs in parallel and uses those processors has implications not only for a whole new set of developments but also for the existing software base. So for a software company like Microsoft, it is about developing new applications that take advantage of that parallelism, because there is a whole new set of application opportunities.

What's your plan of attack? I will be doing two things. My position is in MSR, and some of the issues associated with that are research-related. But many of them are deeply product-related. So I will not only be engaging people on the research side, but I will also be working with the product groups and also working with Microsoft's external partners, like chip vendors, because this is not just a software issue; it is a hardware issue as well.

As for a set of ongoing projects that I expect to follow, Microsoft has several activities under way. As you may have seen recently, the plan is to start shifting the tool set toward something called F#, a functional language built on the Common Language Runtime, which is one of the early products that will allow us to exploit on-chip parallelism.
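[Reed's point about functional languages and on-chip parallelism can be sketched in a few lines of code. F# itself is not shown here; the example below uses Haskell's Control.Parallel.Strategies as a rough stand-in for the same idea, and the heavyWork function is invented purely for illustration. Because the work is expressed as a pure function mapped over data, the runtime is free to spread it across cores.]

```haskell
import Control.Parallel.Strategies (parMap, rdeepseq)

-- Invented, CPU-bound stand-in for real per-item work.
heavyWork :: Int -> Int
heavyWork n = sum [(n * k) `mod` 7919 | k <- [1 .. 200000]]

main :: IO ()
main = do
  -- parMap evaluates each element in parallel on whatever cores are
  -- available; the results match a sequential map, just computed faster.
  let results = parMap rdeepseq heavyWork [1 .. 64]
  print (sum results)

-- Build with: ghc -threaded -O2 ParDemo.hs
-- Run with:   ./ParDemo +RTS -N
```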
There are other things, like transactional memory. One of the things that is true about this problem is that there isn't a single silver bullet.
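[Transactional memory, as Reed mentions it here, lets threads update shared state without explicit locks. Below is a minimal sketch using GHC Haskell's stm library; the account-transfer scenario and the amounts are invented for illustration. The atomically block either commits as a whole or retries, so concurrent threads never observe a half-finished update.]

```haskell
import Control.Concurrent.STM

-- Move an amount between two shared balances inside one transaction.
transfer :: TVar Int -> TVar Int -> Int -> STM ()
transfer from to amount = do
  modifyTVar' from (subtract amount)
  modifyTVar' to (+ amount)

main :: IO ()
main = do
  a <- newTVarIO 100
  b <- newTVarIO 0
  atomically (transfer a b 40)
  readTVarIO a >>= print  -- 60
  readTVarIO b >>= print  -- 40
```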

Microsoft Research Director Rick Rashid told me last year that technology transfer is a "full-contact sport." Do you plan to play? Absolutely, that is what I have done my entire professional life. The process of taking ideas and pushing them out into the marketplace is something that I have done for a long time. Intel has shipped product based on software that I developed as a researcher in academia.

You said in your blog that scalable and multicore computing are among the most interesting technical problems in computing. Can you, in layman's terms, tell us why? There are two questions there. One is the scalability piece and the other is the multicore piece. This is probably a once-in-20-year opportunity to rethink some fundamental things about the way we design processors and the way that we support software.

The other piece is the rise of large-scale data centers and cloud computing's software-as-a-service model. The scope and scale of these data centers is beyond anything we have conceived of building before. This aggregation of information and the ability to deliver computing in response to remote requests is the biggest thing to happen since the Web.

So, you put these two things together and we are about to realize the infrastructure to support the computing equivalent of the electric power grid. When I was at Illinois, I saw the Web browser Mosaic born, and as I look at what is happening in this space -- multicore, Web 2.0, data centers -- that same sort of confluence of excitement and opportunity is there.

For users who know nothing about this technology, where will they experience its benefits? They are going to see performance go into warp drive. And not just on the desktop, but on mobile devices, because you can deliver high performance at low power, and obviously that is a big issue in the mobile space. Think of technologies that rely on vision, speech recognition and nontraditional interfaces beyond the keyboard and mouse. Those kinds of interfaces need the kind of power that multicore will deliver.

How will you divide your time between research and product? I will divide my time among three things. It is a logical division rather than a division between research and product. The first is many-core, large-scale multicore systems and a clean-sheet look at next-generation data centers: soup to nuts, from chip-level issues up through system integration [and the] software stack to support some of these next-generation apps.

The second is large-scale environmental, thermal and power management: what you do if you want to build a 200,000-square-foot, 50-megawatt data center. The third thing that I will be doing is finishing work on national science, technology and information policy. I have done a lot of that over the last few years on the President's Council of Advisors on Science and Technology, and I am also chair of the umbrella organization that represents all of the academic and industrial research labs in computing in North America.
