Fast Hands

Computers already have an enormous impact on our quality of life, reducing the cost of developing new products and enhancing their safety. According to Moore's Law, the number of transistors on a chip, and with it roughly the performance of computers, doubles every 18 months to two years. Despite this, many scientific problems are still too demanding to be solved on standard computers, and new approaches are needed.

One approach is to link many processors together into a so-called parallel computer, with the processors attacking a problem simultaneously. Such machines can now be built cheaply from standard commodity components, even the ones found in a home PC.
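To make the idea concrete, here is a minimal sketch in Python of splitting one calculation across several processor cores and combining the partial results at the end. The series being summed, the worker count and the chunk sizes are illustrative choices, not details of any system described here.

from multiprocessing import Pool

def partial_sum(bounds):
    # Sum one slice of the series 1/k^2, which converges to pi^2/6.
    start, end = bounds
    return sum(1.0 / (k * k) for k in range(start, end))

if __name__ == "__main__":
    n, workers = 10_000_000, 4
    step = n // workers
    chunks = [(1 + i * step, 1 + (i + 1) * step) for i in range(workers)]
    with Pool(workers) as pool:
        # Each core attacks its own slice of the problem simultaneously...
        partials = pool.map(partial_sum, chunks)
    # ...and the partial results are combined at the end.
    print(sum(partials))

The same split-and-combine pattern, run across machines rather than cores, underlies the grid approach described below.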

A new acquisition by the University – a Dell computer named Darwin, comprising 1170 dual-core Intel processors connected by an InfiniBand interconnect – is the fastest academic computer in the UK and the 20th fastest computer in the world (www.top500.org/list/2006/11/100). By exploiting the commodity model, it offers significantly higher performance per unit cost than comparable systems.

Many problems can be split into parts, processed independently on separate computers around the world, and the results combined. For such problems, Grid Computing is a good alternative to parallel systems. In Cambridge, nine departments together with the University of Cambridge Computing Service (UCS) have created CamGrid, a linked system of over 700 processors shared by the participating researchers, ensuring that spare capacity in one department can be used by others.

The largest such computational grid is being deployed for the Large Hadron Collider, the world's most powerful particle accelerator. Over 100,000 computers will be harnessed to analyse its data, expected to amount to several petabytes each year and stored on disk farms around the world (www.cern.ch).

However, computers are only part of the story. We also need powerful and efficient numerical algorithms and software to exploit these systems. Many case studies have shown that improvements in computational techniques deliver performance gains that outpace Moore's Law. One bottleneck is the enormous volume of data generated by modern science, which demands new techniques for storage and analysis.
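As a toy illustration of that point about algorithms, the sketch below counts duplicate values in two ways: a quadratic all-pairs comparison, and an O(n log n) sort-and-scan. The task and the data size are invented for illustration and are not drawn from the case studies mentioned above.

import time

def count_duplicates_quadratic(values):
    # O(n^2): for each element, test membership in everything before it.
    return sum(1 for i, v in enumerate(values) if v in values[:i])

def count_duplicates_sorted(values):
    # O(n log n): sort once, then compare neighbouring elements.
    ordered = sorted(values)
    return sum(1 for a, b in zip(ordered, ordered[1:]) if a == b)

data = list(range(2000)) * 2  # 4,000 values, each appearing twice

for fn in (count_duplicates_quadratic, count_duplicates_sorted):
    t0 = time.perf_counter()
    result = fn(data)
    print(f"{fn.__name__}: {result} duplicates "
          f"in {time.perf_counter() - t0:.4f}s")

Even at this modest size the second version is typically hundreds of times faster, and the gap widens as the data grows: a gain no hardware upgrade cycle can match.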

The University has a long tradition of developing computational techniques. In the past, computational scientists tended to be self-taught, but this is increasingly rare, and fewer students now arrive with experience of writing software. The University is responding to this challenge by creating a Master's course in Scientific Computation. The aim is to train the next generation of scientists in modern software engineering techniques and to give them access to world-class computational facilities. They will then be equipped to take up challenges such as designing new materials, developing better drugs and diagnosing illness earlier.

For more information, please contact Professor Mike Payne (mcp1@cam.ac.uk) or Professor Andy Parker (parker@hep.phy.cam.ac.uk).

