Research Computing Resources
The URI Colab for Research Computing includes an IT Research Computing Services team consisting of several specialists: Dr. Kevin Bryan manages the HPC systems, Dr. Michael Puerrer serves as a Research Computing Facilitator, Christian Vye handles support for software applications and licensing, and three graduate students offer consultation and training. The team is led by Dr. Gaurav Khanna, who serves as Director and is also a Professor in the Physics Department. The Colab is advised by a committee of URI faculty representing a wide range of research areas, and is supported by an external advisory group of regional research computing experts drawn from the Massachusetts Green High-Performance Computing Center (MGHPCC).
Existing high-performance computing (HPC) resources
Bluewaves high-performance computing cluster
The Bluewaves HPC cluster contains 62 standard compute nodes [20 physical cores; 128 GB (60), 256 GB (1), or 512 GB (1) of memory; 2 TB of local storage], two large-memory nodes (24 physical cores, 512 GB of memory, 4 TB of local storage), and over 1.1 PB of hard disk space for fast I/O and secondary storage. The storage disks are configured with RAID 6 protection. The compute nodes are connected with QDR InfiniBand cables and switches and are assembled in three 42U racks. The cluster is in its seventh year of operation; while still operational, it is near the end of its life cycle and offers limited room for further expansion. The equipment is shared among a wide group of URI faculty and runs close to its maximum capacity.
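For a rough sense of the aggregate capacity these figures imply, the short Python tally below sums cores and memory across the node mix quoted above (a back-of-envelope sketch using only the numbers in this section):

```python
# Back-of-envelope tally of Bluewaves capacity from the node mix above.
standard_nodes = {128: 60, 256: 1, 512: 1}  # memory (GB) -> node count
large_nodes = {512: 2}                       # two large-memory nodes
cores_per_standard, cores_per_large = 20, 24

total_nodes = sum(standard_nodes.values()) + sum(large_nodes.values())
total_cores = (sum(standard_nodes.values()) * cores_per_standard
               + sum(large_nodes.values()) * cores_per_large)
total_mem_gb = (sum(gb * n for gb, n in standard_nodes.items())
                + sum(gb * n for gb, n in large_nodes.items()))

print(f"{total_nodes} nodes, {total_cores} cores, "
      f"{total_mem_gb / 1024:.1f} TB aggregate memory")
# -> 64 nodes, 1288 cores, 9.2 TB aggregate memory
```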
Andromeda high-performance computing cluster
The Andromeda HPC cluster is a similar-scale cluster that largely serves groups of contributing users, with some support for the broader URI user community. It was established with contributions from individual researchers and currently has 47 nodes providing 1,704 cores, with nodes having 64 GB (8), 128 GB (29), 256 GB (3), 512 GB (4), or 768 GB (1) of memory. The nodes are connected via a 100 Gbps Omni-Path network and share the 1.1 PB disk storage on Bluewaves.
URI campus central data center
Both clusters are housed in the Data Center in Tyler Hall on the Kingston campus. The 2,300 sq. ft. Data Center provides a climate-controlled environment with 90 tons of cooling for the operation of HPC systems and is equipped with 160 kVA of UPS battery backup as well as a 450 kW generator for emergency power. The Tyler Hall Data Center is operated by URI Information Technology Services (ITS) and is monitored and maintained by ITS staff.
Collaborative high-performance computing (HPC) resources
The Massachusetts Green High-Performance Computing Center (MGHPCC) is a collaboration of five major research universities in Massachusetts: Boston University, Harvard, MIT, Northeastern, and UMass. In 2012, the collaboration built a dedicated data center in Holyoke, MA, that hosts the research computing infrastructure of these universities. The URI Colab for Research Computing has begun multiple collaborative programs with the MGHPCC, giving URI researchers access to the following MGHPCC resources:
UMass – URI collaboration: UNITY cluster
URI and UMass are building and operating a new shared HPC environment at the MGHPCC. Researchers have access to this “UNITY” cluster via the InCommon Federation; details may be found at the UNITY cluster portal. The cluster currently offers ~250 nodes and is expected to grow significantly in the near future. The nodes include both Intel and AMD multi-core processors as well as Nvidia GPGPUs for HPC and AI/ML computations.
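As a minimal illustration of AI/ML use of the UNITY GPGPUs, the Python check below verifies that a GPU is visible from within a job. This assumes PyTorch is installed in the user's environment; software environments and module names vary by site.

```python
import torch

# Confirm a GPU is visible to the job (assumes PyTorch is available).
if torch.cuda.is_available():
    device = torch.device("cuda")
    print("GPU:", torch.cuda.get_device_name(0))
    # Tiny matrix multiply on the GPU as a functional sanity check.
    x = torch.rand(1024, 1024, device=device)
    print("norm(x @ x) =", (x @ x).norm().item())
else:
    print("No GPU visible; check the job's resource request.")
```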
MIT – URI collaboration: SuperCloud
URI researchers have access to the MIT Lincoln Laboratory “SuperCloud” resource via the InCommon Federation; details may be found at the SuperCloud cluster portal. The cluster currently offers 200+ nodes, each with 40 Intel Xeon cores, two Nvidia V100 GPGPUs, and 384 GB of memory, as well as 400+ nodes with 48 Xeon cores and 192 GB of memory.
Tape archival storage: NESE
The Northeast Storage Exchange (NESE) is a storage collaboration led by Harvard and Boston University, funded by NSF, and hosted at the MGHPCC. URI researchers have access to this facility for their data backup and tape archival needs via the InCommon Federation.
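Bulk transfers to archival systems like NESE are commonly driven through Globus; the sketch below, using the globus-sdk Python package, shows the general pattern. The client ID, endpoint UUIDs, and paths are hypothetical placeholders, and the real endpoint details would come from the NESE documentation.

```python
import globus_sdk

CLIENT_ID = "YOUR-NATIVE-APP-CLIENT-ID"  # hypothetical registered app ID

# Interactive login via the Globus native-app OAuth2 flow.
auth = globus_sdk.NativeAppAuthClient(CLIENT_ID)
auth.oauth2_start_flow()
print("Log in at:", auth.oauth2_get_authorize_url())
code = input("Paste authorization code: ").strip()
tokens = auth.oauth2_exchange_code_for_tokens(code)
token = tokens.by_resource_server["transfer.api.globus.org"]["access_token"]

tc = globus_sdk.TransferClient(
    authorizer=globus_sdk.AccessTokenAuthorizer(token))

SRC = "local-endpoint-uuid"    # hypothetical source endpoint UUID
DST = "archive-endpoint-uuid"  # hypothetical archive-side endpoint UUID

# Queue a recursive directory copy toward the archive endpoint.
tdata = globus_sdk.TransferData(tc, SRC, DST, label="URI archive backup")
tdata.add_item("/data/project/", "/archive/project/", recursive=True)
print("Task:", tc.submit_transfer(tdata)["task_id"])
```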
Active research data storage: OSN
The Open Storage Network (OSN) is a national distributed storage collaboration funded by NSF, with a node hosted at the MGHPCC. URI researchers have access to this facility for their active research data storage and collaboration needs; access is enabled through the InCommon Federation.
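OSN storage is exposed through an S3-compatible object interface, so a minimal access sketch in Python using boto3 might look like the following. The endpoint URL, bucket name, and credentials are hypothetical placeholders; real values are issued with an OSN allocation.

```python
import boto3

ENDPOINT = "https://uri.osn.mghpcc.org"  # hypothetical OSN pod endpoint
BUCKET = "my-project-bucket"             # hypothetical allocation bucket

s3 = boto3.client(
    "s3",
    endpoint_url=ENDPOINT,
    aws_access_key_id="OSN_ACCESS_KEY",      # issued with the allocation
    aws_secret_access_key="OSN_SECRET_KEY",
)

# Upload a data file, then list the bucket's contents.
s3.upload_file("results.h5", BUCKET, "experiments/results.h5")
for obj in s3.list_objects_v2(Bucket=BUCKET).get("Contents", []):
    print(obj["Key"], obj["Size"])
```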
High-speed access (10 Gbps) between the MGHPCC and URI is enabled via a collaboration between OSHEAN and UMassNet.
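For scale, the following quick calculation (idealized, ignoring protocol overhead) shows what the 10 Gbps link implies for moving research data between the campuses:

```python
# Idealized transfer time over the 10 Gbps URI-MGHPCC link (no overhead).
link_gbps = 10            # link speed, gigabits per second
dataset_tb = 1            # example dataset size, terabytes

bits = dataset_tb * 8e12  # 1 TB = 8e12 bits (decimal units)
seconds = bits / (link_gbps * 1e9)
print(f"{dataset_tb} TB over {link_gbps} Gbps ≈ {seconds / 60:.1f} minutes")
# -> 1 TB over 10 Gbps ≈ 13.3 minutes
```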