Got Cores?

The following Google announcement crossed my desk today: 1 billion computing core-hours for researchers to tackle huge scientific challenges. I am always intrigued by large numbers of cores, or in this case, "core-hours." Reading further, the announcement states:

… academic research grant program called Google Exacycle for Visiting Faculty, which provides 1 billion hours of computational core capacity to researchers. That’s orders of magnitude larger than the computational resources most scientists normally have access to.

A billion is a big number. Let's do some math. First, let's look at the lower limit. A single core running for a billion hours would be 114,077 years of compute time. That is a long research project. Let's assume the project is a year in duration, so if Google were donating 115,000 cores, they would use one billion core-hours in a year running 24×7. If we assume a half-time duty cycle (12×7), then it works out to about 230,000 cores. Thus, Google is probably making available somewhere between 115,000 and 230,000 cores, which is not a small number.
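For those who want to check the arithmetic, here is a quick sketch in Python. It assumes a 365.25-day year and the 24×7 and 12×7 duty cycles described above; the function name and budget variable are just illustrative.

```python
# Back-of-the-envelope check of the core-hour math above.
# Assumption: a 365.25-day year.

HOURS_PER_YEAR = 24 * 365.25  # 8,766 hours

def cores_needed(core_hours, duty_hours_per_day):
    """Cores required to consume `core_hours` in one year at the given duty cycle."""
    available_hours_per_core = duty_hours_per_day * 365.25
    return core_hours / available_hours_per_core

budget = 1_000_000_000  # one billion core-hours

print(round(budget / HOURS_PER_YEAR))   # years for a single core: 114077
print(round(cores_needed(budget, 24)))  # 24x7 duty cycle: 114077 cores
print(round(cores_needed(budget, 12)))  # 12x7 duty cycle: 228154 cores
```

The 12×7 figure lands at roughly 228,000 cores, which is where the "about 230,000" estimate above comes from.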

I certainly don’t fault Google for such a generous offer; however, I have one question: how are these cores connected? Are they parceled out in groups of four or eight so that they can run on the same system (SMP cores), or are they distributed cores that will probably use GigE? This is an important question for most HPC users, and one they would want to ask before they use the cores. Of course, there are plenty of distributed applications that can use extra cycles (e.g., Folding@home, SETI@home), but big applications usually need a good interconnect, and GigE (if that is the interconnect) may limit scalability. The devil, as they say, is in the details.
