Institute for Particle Physics Phenomenology

The new IPPP Grid cluster is now available for use and has been in full operation since the beginning of January 2009. The new cluster provides 1 million SpecInt2k, a factor of 12 increase over the old cluster, and significantly expands the computational power available for particle physics research through the pheno VO and for other grid users.

The new cluster consists of 3 new front-end machines and 84 new worker nodes. Using twin-server chassis, two machines are packed into a single 1U enclosure, providing substantial CPU power in a small footprint. A total of 672 job slots are available to deliver the 1 million SpecInt2k (a quick arithmetic check of these figures is sketched after the list below), with each worker node consisting of:

  • Two quad-core processors, providing 8 cores per machine.
  • Low-power Xeon L5430 CPUs for greater power efficiency and lower running costs.
  • 16 GB RAM per machine, providing 2 GB per core.
  • Dual bonded gigabit Ethernet.
  • 0.5 TB hard disk.
  • Installed with Scientific Linux 4.7.
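
For readers who want to see how the headline numbers fit together, the short Python sketch below simply reproduces the capacity arithmetic from the figures quoted above; the SpecInt2k value per job slot is inferred by dividing the quoted total by the number of slots, not measured.

    # Back-of-the-envelope check of the capacity figures quoted above.
    WORKER_NODES = 84        # twin-server worker nodes
    CORES_PER_NODE = 8       # two quad-core Xeon L5430 CPUs per machine
    RAM_PER_NODE_GB = 16     # 16 GB RAM per machine
    TOTAL_SPECINT2K = 1000000

    job_slots = WORKER_NODES * CORES_PER_NODE        # 84 * 8 = 672 job slots
    ram_per_core = RAM_PER_NODE_GB / CORES_PER_NODE  # 16 / 8 = 2 GB per core
    si2k_per_slot = TOTAL_SPECINT2K / job_slots      # roughly 1488 SpecInt2k per slot

    print(job_slots, ram_per_core, round(si2k_per_slot))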

A new high-spec UI (User Interface machine) is available for the preparation, submission and retrieval of jobs and data. The cluster also includes 3 disk servers providing a total of 30 TB of usable grid storage.
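
As a rough illustration of how the UI is typically used, the sketch below writes a minimal JDL job description and hands it to the gLite WMS command-line tools from Python. The command names and options shown (voms-proxy-init, glite-wms-job-submit) follow the standard gLite client suite, but the exact tools available depend on the middleware version installed on the UI, so treat this as an outline rather than a recipe.

    # Illustrative only: prepare and submit a trivial grid job from the UI.
    import subprocess

    # A minimal JDL description: run /bin/hostname and return stdout/stderr.
    jdl = (
        'Executable    = "/bin/hostname";\n'
        'StdOutput     = "std.out";\n'
        'StdError      = "std.err";\n'
        'OutputSandbox = {"std.out", "std.err"};\n'
    )
    with open("hello.jdl", "w") as f:
        f.write(jdl)

    # Obtain a VOMS proxy for the pheno VO, then submit via the WMS;
    # job IDs are written to the "jobids" file for later status/output queries.
    subprocess.check_call(["voms-proxy-init", "--voms", "pheno"])
    subprocess.check_call(["glite-wms-job-submit", "-a", "-o", "jobids", "hello.jdl"])

The job would then be tracked and its output retrieved with the matching status and output commands from the same client suite, using the IDs recorded in the jobids file.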

The management and functionality of the cluster have also improved dramatically. Switchable PDUs and a private IPMI network have been set up for improved service management. The old front-end machines have been replaced with shiny new models, and each front-end service now runs as a virtual machine. This allows for great flexibility and helps lower costs, with the following services all running as virtual machines (a simple reachability sketch follows the list):

  • Two CEs (Computing Elements) for Grid job submission and control.
  • Torque server for the PBS batch system, controlling job scheduling.
  • BDII for information handling, publishing cluster status and usage.
  • SE (Storage Element) for storage head node functionality.
  • Install host for kickstarting and cfengine fabric management.
  • Mon box for publishing accounting information.
  • Monitoring box running Ganglia and Nagios.
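
As a very rough illustration of how the availability of these virtualised services might be checked, the Python sketch below performs a plain TCP reachability test against the conventional gLite service ports. The host names are hypothetical placeholders rather than the actual machine names, and in practice this kind of probing is what the Nagios box does.

    # Hypothetical host names; the ports are conventional gLite defaults
    # (2119 = CE gatekeeper, 2170 = BDII LDAP, 2811 = GridFTP on the SE).
    import socket

    SERVICES = {
        "ce01.example.org": 2119,   # Computing Element (gatekeeper)
        "bdii.example.org": 2170,   # site BDII (LDAP)
        "se01.example.org": 2811,   # Storage Element (GridFTP)
    }

    for host, port in SERVICES.items():
        try:
            # A successful TCP connect is taken as "service reachable".
            with socket.create_connection((host, port), timeout=5):
                print(f"{host}:{port} reachable")
        except OSError:
            print(f"{host}:{port} NOT reachable")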