Supercomputers and data-centres have two problems that aren't going away: energy and heat. The largest facilities cost billions of dollars and consume megawatts of electricity, enough to power small towns. Not only are more being built every year, but they're also getting bigger.
The limiting factor in their design (and location) is not size or expense, but energy: how much power can be sent to that site? In some inner cities like London, the answer is often quite simple: zero. Without building new power stations, the local grid is already at maximum capacity. There's no more electricity for anyone.
The other problem is that computers get hot. This isn't a problem for just a couple of servers sat on a shelf or under a desk – the tiny bit of warmth they emit will hardly heat the room to chip-melting levels. But 100,000 hard-working computers, all under the same roof, is another story. Without huge cooling systems, data-centres would overheat in seconds. And those cooling systems are not exactly eco-friendly, typically using half as much power as the computers themselves.
A data-centre is effectively 100,000 electric heaters... in a giant fridge. Small wonder that they use nearly 2% of our entire planet's electricity supply. That's more than most countries.
Industry knows this is not sustainable. Building data-centres next to clean, renewable power sources (like hydro or geothermal) can mitigate the damage, and companies like Google and Facebook deserve praise not just for improving the power usage effectiveness (PUE) of their facilities, but also for sharing how they do it. But ultimately, computers don't run on fresh air, and a warehouse full of them will always require huge amounts of power, including plenty wasted on cooling. Not to mention the energy of building all the machines – and the data-centres – in the first place.
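To put a number on that: PUE is simply the total power a facility draws divided by the power that actually reaches the computing equipment. Here's the rough arithmetic, using the "cooling takes half as much power as the computers" rule of thumb above (the megawatt figures are purely illustrative):

# Rough PUE arithmetic (illustrative figures only)
it_power_mw = 10.0                     # power drawn by the servers themselves
cooling_power_mw = 0.5 * it_power_mw   # cooling at roughly half the IT load, as above
total_power_mw = it_power_mw + cooling_power_mw

pue = total_power_mw / it_power_mw     # PUE = total facility power / IT equipment power
print(pue)                             # 1.5 -- a third of the site's electricity never computes anything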
It's bizarre that we're building ever more of these enormous data-centres while the vast majority of the planet's computing power (two billion home PCs) is almost entirely idle. It's also ridiculous that we spend good money on heating up homes and offices, yet pay to have megajoules of heat taken away from computers and dumped outdoors.
Of course, that heat can't exactly be piped around the world to where it's useful. There's no such thing as the international hot water grid. But there is another way: pipe the computing and data around the world instead. A web connection can get to anywhere – including all those places that would be quite grateful for a little extra heat. Do all the computing there.
This is the basis of Charity Engine's quest for ultra-green computing. Firstly, and most obviously, we don't actually need any more computers. Roughly half the energy cost of any computing is in building the equipment in the first place, so by simply making better use of existing machines that are mostly idle, our hardware energy bill is virtually zero.
We don't stress our members' PCs either; we just skim a little extra work from each. Unlike regular data-centres, we are under no obligation to run them at 100% for maximum ROI. A gentle +10% power to the CPU is all we take – about the same as charging two cellphones – and we ask people to run Charity Engine only as a background task, and to switch off their PCs when not in use.
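As a rough sanity check on that claim (the wattages below are typical assumptions, not measurements from our network):

# Back-of-the-envelope check on the "+10%" figure (assumed, typical wattages)
typical_pc_draw_w = 100.0                 # an ordinary home PC in normal use
extra_draw_w = 0.10 * typical_pc_draw_w   # the ~10% extra that Charity Engine skims
phone_charger_w = 5.0                     # a common USB phone charger
print(extra_draw_w / phone_charger_w)     # ~2.0 -- roughly two phones on charge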
But the final part of our green equation is perhaps the most powerful: the Winternet method. As previously mentioned, a global grid like Charity Engine can pick and choose the coldest computers on its network – something a regular data-centre cannot. Not only does this mean no energy is wasted on cooling, it can even lead to zero-carbon computing.
If a PC is in a hot location with air-conditioning, any extra heat will make the air-con work harder – a double whammy of energy use that we want to avoid. But if a PC is in a cold location with the central heating switched on, a slightly warmer PC just means the heating has to work that much less. In those cases, the overall energy consumption doesn't actually change.
Same electricity bill, same amount of heat in the home – just a bit more of it now coming from the PC instead of the radiators. Computing that effectively uses no extra energy, emits no extra carbon – and costs absolutely nothing.
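Here's a minimal sketch of that energy balance, assuming an electrically heated home where the thermostat compensates for any extra warmth (the kWh figures are illustrative):

# Household energy balance with a computing PC in a heated home (illustrative)
heat_demand_kwh = 30.0                  # heat the home needs today, from any source
pc_extra_heat_kwh = 1.0                 # extra heat the PC emits while computing
radiator_heat_kwh = heat_demand_kwh - pc_extra_heat_kwh  # thermostat backs the radiators off

total_energy_kwh = radiator_heat_kwh + pc_extra_heat_kwh
print(total_energy_kwh)                 # still 30.0 -- the computing came for "free"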
Beat that, data-centres...!
Absolutely Fascinating!!
Thanks for the write-up and info, Mark!!
I live in Romania, and for about 7 months of the year I can run BOINC with Charity Engine attached very efficiently. For at least 4 months it's below 21 °C outside, so my i3 laptop works at 80% capacity. Maybe 2 months a year I have to cut back to a single processor.
I've been doing work on BOINC continuously since July 2010, and consider myself dedicated.
Very interesting thoughts, Mark! This could certainly scale, as well. Much as a geothermal (or other heat-pump) system works in two stages, one to pump heat from the ground to partially heat a building and a second, supplementary fuel-burning or electric stage to finish the job, I could see a sort of "grid furnace" being tied into an HVAC system with a hundred ultra-cheap processors cranking out 30,000 BTU/hr (along with tons of results for medical and scientific research).
Actually, the supplementary stage might only be necessary as a backup heat source, if the unit is sized sufficiently (and economically). Those hundred processors could easily provide sufficient heat for many (if not most) homes. And as you say, energy is conserved in thermodynamics, so there is no additional cost there. Any direct conversion of electricity back to heat, without doing any work first, is really just wasted potential.
Who'd have thought charity could make you feel warm on the outside too? (the Charity Engine logo would even be right at home on the side of a furnace!)