National Nanotechnology Infrastructure Network

Serving Nanoscale Science, Engineering & Technology

Computation at Harvard

The Harvard University node of the National Nanotechnology Infrastructure Network Computation Project has been operating since 2004 to provide hardware, software, expert support, and educational resources to the nanoscience community.

At Harvard, users have access to the 15,000-core “Odyssey” cluster of the Faculty of Arts and Sciences (FAS). NNIN/C at Harvard, through the Center for Nanoscale Systems and the FAS Research Computing (RC) office, is in the process of purchasing a new 600-core cluster of AMD Opterons to add a priority NNIN queue to Odyssey. In addition, users may obtain access to the Orgoglio cluster of Nvidia C1060 graphics processing units (GPUs), a joint resource of NNIN/C and the Cyber-enabled Discovery Initiative (CDI) at Harvard. The Orgoglio cluster, with a theoretical capacity of 9.6 Tflops, was installed in the summer of 2009. In August 2009, NNIN/C at Harvard hosted Programming the GPU: Introduction to CUDA, a hands-on, tutorial-style workshop held over three days to introduce users to the massively parallel structure of the GPU and its Nvidia-designed programming language.

Dr. Michael Stopa is the computational scientist at Harvard. His expertise lies in the following areas:

  • Electronic structure of semiconductor heterostructures, including split-gate GaAs-AlGaAs 2DEG devices and heteroepitaxially grown semiconductor nanowires of InAs/InP or Si/Ge. The SETE and SETEwire codes at Harvard are designed to calculate, on an inhomogeneous grid, electron states, wavefunctions, and potential profiles for 3D experimental devices.
  • Multiscale electronic structure of molecules and nanoparticles in a complex environment (also referred to as QM/CE, or quantum mechanics in a complex environment).
  • Highly parallel computing, in particular GPU computing on the Orgoglio cluster. Dramatic advances in computational speed have recently been reported for mature molecular dynamics and N-body codes; Harvard's NNIN/C seeks to facilitate further such advances.
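To make the appeal of GPU computing for molecular dynamics and the N-body problem concrete, here is a minimal Python sketch (illustrative only, not Harvard code) of the O(N²) pairwise force evaluation that dominates such simulations, written two ways. The vectorized form exposes exactly the structure GPUs exploit: every pair interaction is independent and can be computed on its own thread.

```python
import numpy as np

def forces_loop(pos):
    """Naive double loop: softened 1/r^2 pair forces between all particles."""
    n = len(pos)
    f = np.zeros_like(pos)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            d = pos[j] - pos[i]
            r2 = np.dot(d, d) + 1e-6      # softening avoids divide-by-zero
            f[i] += d / r2**1.5
    return f

def forces_vectorized(pos):
    """Same physics expressed as independent pairwise work (GPU-friendly)."""
    d = pos[None, :, :] - pos[:, None, :]  # (n, n, 3) displacement tensor
    r2 = (d**2).sum(-1) + 1e-6
    np.fill_diagonal(r2, np.inf)           # suppress self-interaction
    return (d / r2[:, :, None]**1.5).sum(axis=1)

rng = np.random.default_rng(0)
pos = rng.standard_normal((50, 3))
assert np.allclose(forces_loop(pos), forces_vectorized(pos))
```

On a GPU, each (i, j) entry of the displacement tensor would be assigned to a CUDA thread, which is why mature MD codes report such large speedups on this kernel.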

Hardware Facilities

  • 15,000-plus-core FAS Research Computing Odyssey cluster, including:
    • New AMD Opteron 2.2 GHz cluster, with 600 total cores and FDR InfiniBand interconnects at 56 Gbps
  • GPU cluster: 9.6 Tflops
    • Single quad-core Xeon "Harpertown" processors at 3 GHz
    • 16 GB of ECC DDR2-800 RAM
    • Two Tesla C1060 GPUs (each with 4 GB of RAM)
    • Total of 24 nodes/motherboards: 96 cores, 192 GB RAM, 48 GPU cards
    • QLogic 24-port 9024 DDR InfiniBand networking between the nodes


The Harvard node of NNIN/C maintains, in cooperation with the Faculty of Arts and Sciences Research Computing program, a wide array of open-source and proprietary codes for NNIN/C users and members of the Harvard community. NNIN/C members need not be part of Harvard or any of its affiliates. The proprietary codes, however, do carry usage restrictions: their licenses can be accessed by some but not all users, depending on the particular code and the arrangements Harvard has with the code's owner or creator.

The codes that are maintained on Odyssey are stored as “modules” and the total list of all such modules can be found here. The tools that have been built by FAS RC for NNIN/C (or which are otherwise most relevant for nanoscience applications) are listed below. Please note that the total list of Odyssey apps is much longer, including, for example, a wide array of mathematics, numerical processing, or compiler-related tools.


Lumerical Suite – FDTD photonics code

Through a special arrangement with Lumerical Inc., NNIN/C maintains a set of “engine” licenses for the Lumerical Finite Difference Time Domain photonics code. This code, which requires that the user have the primary, input/output design tool license, is a highly regarded and professional tool for solving Maxwell’s equations in a variety of device configurations.

NAMD – molecular dynamics

According to the NAMD homepage at the University of Illinois at Urbana-Champaign:

NAMD, recipient of a 2002 Gordon Bell Award, is a parallel molecular dynamics code designed for high-performance simulation of large biomolecular systems. Based on Charm++ parallel objects, NAMD scales to hundreds of cores for typical simulations and beyond 200,000 cores for the largest simulations. NAMD uses the popular molecular graphics program VMD for simulation setup and trajectory analysis, but is also file-compatible with AMBER, CHARMM, and X-PLOR. NAMD is distributed free of charge with source code. You can build NAMD yourself or download binaries for a wide variety of platforms.
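NAMD is driven by a plain-text configuration file. The following is a minimal illustrative sketch (the file names and parameter values are placeholders, not a Harvard-specific setup) of an NVT run on a solvated protein prepared with VMD/psfgen:

```
structure          protein.psf        ;# topology from VMD/psfgen
coordinates        protein.pdb
paraTypeCharmm     on
parameters         par_all27_prot_lipid.prm
temperature        310
timestep           2.0                ;# fs, with rigid bonds
rigidBonds         all
cutoff             12.0
switching          on
switchdist         10.0
pairlistdist       14.0
langevin           on                 ;# Langevin thermostat
langevinDamping    1.0
langevinTemp       310
outputName         protein_nvt
run                5000               ;# 5000 x 2 fs = 10 ps
```

A production setup would add periodic cell parameters, PME electrostatics, and restart options; consult the NAMD User's Guide for the full keyword list.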

CHARMM – molecular dynamics


CHARMM (Chemistry at HARvard Macromolecular Mechanics):

  • is a versatile and widely used molecular simulation program with broad application to many-particle systems
  • has been developed with a primary focus on the study of molecules of biological interest, including peptides, proteins, prosthetic groups, small molecule ligands, nucleic acids, lipids, and carbohydrates, as they occur in solution, crystals, and membrane environments
  • provides a large suite of computational tools that encompass numerous conformational and path sampling methods, free energy estimates, molecular minimization, dynamics, and analysis techniques, and model-building capabilities
  • is useful for a much broader class of many-particle systems
  • can be utilized with various energy functions and models, from mixed quantum mechanical-molecular mechanical force fields, to all-atom classical potentials with explicit solvent and various boundary conditions, to implicit solvent and membrane models
  • has been ported to numerous platforms in both serial and parallel architectures

LAMMPS – molecular dynamics

A popular, well-documented, open-source molecular dynamics code developed at Sandia National Laboratories with a wide range of applications. Homepage here.

General features of LAMMPS

  • runs on a single processor or in parallel
  • distributed-memory message-passing parallelism (MPI)
  • spatial-decomposition of simulation domain for parallelism
  • open-source distribution
  • highly portable C++
  • optional libraries used: MPI and single-processor FFT
  • GPU (CUDA and OpenCL) and OpenMP support for many code features
  • easy to extend with new features and functionality
  • runs from an input script
  • syntax for defining and using variables and formulas
  • syntax for looping over runs and breaking out of loops
  • run one or multiple simulations simultaneously (in parallel) from one script
  • build as library, invoke LAMMPS through library interface or provided Python wrapper
  • couple with other codes: LAMMPS calls other code, other code calls LAMMPS, umbrella code calls both
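Several of the features above (input scripts, variables, looping) are visible in even a small LAMMPS deck. The following is an illustrative sketch, not a maintained Harvard deck: a Lennard-Jones melt looped over three temperatures from one script, using the standard variable/next/jump loop idiom from the LAMMPS documentation:

```
# LJ melt, repeated at three reduced temperatures
variable T index 1.0 1.5 2.0
label tloop
clear
units lj
atom_style atomic
lattice fcc 0.8442
region box block 0 5 0 5 0 5
create_box 1 box
create_atoms 1 box
mass 1 1.0
pair_style lj/cut 2.5
pair_coeff 1 1 1.0 1.0 2.5
velocity all create ${T} 87287
fix 1 all nvt temp ${T} ${T} 0.5
run 1000
next T
jump SELF tloop
```

When the index variable is exhausted, `next T` deletes it and the following `jump` is skipped, ending the loop.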

GAMESS – electronic structure


The General Atomic and Molecular Electronic Structure System (GAMESS) is a general ab initio quantum chemistry package, created by Mark Gordon's Quantum Theory Group at Ames Laboratory/Iowa State University.

Gaussian – electronic structure

A highly developed code using a Gaussian basis set for electronic structure of molecules. Described on the Gaussian homepage.

Octopus – electronic structure

According to the Octopus homepage:

Octopus is a scientific program aimed at the ab initio virtual experimentation on a hopefully ever-increasing range of system types. Electrons are described quantum-mechanically within density-functional theory (DFT), in its time-dependent form (TDDFT) when doing simulations in time. Nuclei are described classically as point particles. Electron-nucleus interaction is described within the pseudopotential approximation.

For optimal execution performance Octopus is parallelized using MPI and OpenMP and can scale to tens of thousands of processors. It also has support for graphical processing units (GPUs) through OpenCL.

Octopus is free software, released under the GPL license, so you are free to download it, use it and modify it.
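Octopus reads a single `inp` file of keyword assignments. A minimal illustrative ground-state input might look like the following (the coordinates file name is an assumption, not a distributed example):

```
CalculationMode = gs          # first run: ground state
XYZCoordinates = "molecule.xyz"
Spacing = 0.20*angstrom       # real-space grid spacing
Radius = 4.0*angstrom         # simulation sphere around each atom
```

A subsequent run with `CalculationMode = td` (plus time-step and step-count variables) propagates the converged ground state in real time for TDDFT spectra.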

Turbomole – electronic structure

According to the TURBOMOLE homepage:

TURBOMOLE is a quantum chemical program package, initially developed in the group of Prof. Dr. Reinhart Ahlrichs at the University of Karlsruhe and at the Forschungszentrum Karlsruhe…Presently TURBOMOLE is one of the fastest and most stable codes available for standard quantum chemical applications. Unlike many other programs, the main focus in the development of TURBOMOLE has not been to implement all new methods and functionals, but to provide a fast and stable code which is able to treat molecules of industrial relevance at reasonable time and memory requirements.

Abinit – electronic structure


ABINIT is a package whose main program allows one to find the total energy, charge density and electronic structure of systems made of electrons and nuclei (molecules and periodic solids) within Density Functional Theory (DFT), using pseudopotentials and a planewave or wavelet basis. ABINIT also includes options to optimize the geometry according to the DFT forces and stresses, or to perform molecular dynamics simulations using these forces, or to generate dynamical matrices, Born effective charges, and dielectric tensors, based on Density-Functional Perturbation Theory, and many more properties. Excited states can be computed within the Many-Body Perturbation Theory (the GW approximation and the Bethe-Salpeter equation), and Time-Dependent Density Functional Theory (for molecules). In addition to the main ABINIT code, different utility programs are provided.
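ABINIT is likewise driven by a keyword input file. A schematic ground-state input for bulk silicon might read as follows (textbook-style values, not converged production settings):

```
# silicon, 2-atom FCC primitive cell
acell 3*10.26            # cell edges in Bohr
rprim 0.0 0.5 0.5        # FCC primitive vectors
      0.5 0.0 0.5
      0.5 0.5 0.0
ntypat 1
znucl 14                 # silicon
natom 2
typat 1 1
xred 0.00 0.00 0.00
     0.25 0.25 0.25
ecut 8.0                 # planewave cutoff, Hartree
kptopt 1
ngkpt 4 4 4              # Monkhorst-Pack k-point grid
nstep 30
toldfe 1.0d-6            # SCF energy tolerance
```

Convergence studies in `ecut` and the k-point grid are the user's responsibility; the ABINIT documentation describes each input variable in detail.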

Q-Chem – electronic structure

From the Q-Chem homepage:

Q-Chem is a comprehensive ab initio quantum chemistry package for accurate predictions of molecular structures, reactivities, and vibrational, electronic and NMR spectra. The new release of Q-Chem 4 represents the state-of-the-art of methodology from the highest performance DFT/HF calculations to high level post-HF correlation methods:

  • Fully integrated graphic interface including molecular builder, input generator, contextual help and visualization toolkit (IQmol; multiple copies available free of charge);
  • Dispersion-corrected and double hybrid DFT functionals;
  • Faster algorithms for DFT, HF, and coupled-cluster calculations;
  • Structures and vibrations of excited states with TD-DFT;
  • Methods for mapping complicated potential energy surfaces;
  • Efficient valence space models for strong correlation;
  • More choices for excited states, solvation, and charge-transfer;
  • Effective Fragment Potential and QM/MM for large systems;
  • For a complete list of new features, click here.
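Q-Chem jobs are specified in sectioned input files. A minimal illustrative example (a water geometry optimization; the coordinates are a standard textbook-style guess) looks like this:

```
$molecule
0 1
O   0.0000   0.0000   0.1173
H   0.0000   0.7572  -0.4692
H   0.0000  -0.7572  -0.4692
$end

$rem
JOBTYPE      opt        geometry optimization
METHOD       B3LYP
BASIS        6-31G*
$end
```

Additional `$rem` keywords select the excited-state, solvation, and QM/MM options listed above.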

SETE – semiconductor device simulation

SETE (pronounced seet) was developed by NNIN/C computation coordinator Michael Stopa. SETE employs density functional theory to solve for the electronic structure of GaAs-AlGaAs heterostructure-based surface gated nano-devices such as quantum wires and quantum dots. The inputs to SETE specify: (i) the device, including the gate pattern and voltages, as well as the wafer profile; (ii) physical parameters such as temperature and magnetic field; (iii) run mode parameters such as quantum versus classical, exchange-correlation on versus off and, for exchange-correlation on, spin polarized or spin degenerate; (iv) various performance control parameters. A typical application is to input the gate pattern and wafer profile for a quantum dot structure and obtain the effective 2D potential profile at the two dimensional electron gas (2DEG) level as well as the Kohn-Sham energies and eigenfunctions for the electrons confined in the dot. One might then sweep an external magnetic field to see how the dot spectrum varies.
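The heart of a solver like SETE is a self-consistent loop coupling a Schrödinger (or Kohn-Sham) solve to a Poisson solve. The following is a schematic 1D Python toy, not SETE's actual Fortran source: it alternates an eigensolve in the current total potential with a Hartree update, mixing between iterations until the potential stops changing. All parameters are dimensionless toy values.

```python
import numpy as np

n = 200
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
v_ext = 200.0 * (x - 0.5) ** 2          # fixed external (gate-like) confinement
v_h = np.zeros(n)                        # Hartree potential, updated each pass

# second-derivative operator with hard-wall boundaries
lap = (np.diag(np.full(n - 1, 1.0), -1) - 2.0 * np.eye(n)
       + np.diag(np.full(n - 1, 1.0), 1)) / h**2

for it in range(200):
    # (1) Schrodinger step: eigenstates in the current total potential
    ham = -0.5 * lap + np.diag(v_ext + v_h)
    energies, states = np.linalg.eigh(ham)
    psi = states[:, 0] / np.sqrt(h)      # normalized ground state
    density = psi ** 2                   # one electron in the ground state

    # (2) Poisson step: Hartree potential of that density (Dirichlet BCs)
    v_new = np.linalg.solve(lap, -4.0 * np.pi * 0.01 * density)

    # (3) linear mixing for stability, then convergence test
    if np.max(np.abs(v_new - v_h)) < 1e-8:
        break
    v_h = 0.5 * v_h + 0.5 * v_new

print(f"converged after {it} iterations, ground-state energy {energies[0]:.3f}")
```

SETE does this in 3D on an inhomogeneous grid with exchange-correlation terms and realistic device boundary conditions, but the control flow is the same: eigensolve, electrostatics, mix, repeat.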

Source code – the source code for SETE is available to anyone who wants it. A gzipped tarball archive (SETE.tar.gz) is posted online. In addition to the source code (.f90, .f, .c and .h files), the archive contains a Makefile and a file necessary for specifying the machine architecture, compilers, etc.

We will also post sets of sample input/output decks, including basic Thomas-Fermi calculations as well as at least one quantum dot eigenvalue/eigenfunction calculation.

SETE has recently been restructured to make it more “object oriented” and thus simpler to follow and, if desired, modify. SETE uses a few mathematical libraries which are generally not included in the tar distribution. These include very standard libraries such as LAPACK and ARPACK, as well as fft, slatec, and the BLAS (often distributed alongside LAPACK). All of the source code for these libraries is open source and is maintained by the SETE creators; it is available on request, should you find it difficult to locate.

OOMMF – Object Oriented MicroMagnetic Framework simulation code

From the OOMMF webpage:

The goal of the OOMMF project in ITL/NIST is to develop a portable, extensible public domain micromagnetic program and associated tools. This code will form a completely functional micromagnetics package, but will also have a well documented, flexible programmer's interface so that people developing new code can swap their own code in and out as desired.


For further information on Harvard codes, contact Michael Stopa at