The Georgia Tech TechFee program provides an opportunity for GT faculty and staff to submit proposals, on behalf of instructors and students, for resources dedicated to classroom educational and research experiences. TechFee proposals are funded via student technology fees, so their primary purpose is to provide novel equipment that furthers student instruction and unfunded (i.e., undergraduate) research. CRNCH is proud to apply its expertise in supporting novel accelerators to provide access to TechFee-funded equipment for class and student use.
Note: If you are an instructor looking to use these resources for a course, please email the PI, Jeffrey Young, with information on your class use case. If you are a GT student, you can request an account on the Rogues Gallery using the online form.
Faculty Opportunities to Propose New Infrastructure
If you are interested in purchasing novel architecture hardware for your course, please consider working with CRNCH RG to deploy it! Proposals are typically due in February of each year, and results are announced in August or September of the same year. Read more at CoC’s page for TechFee.
The benefits of co-hosting hardware with CRNCH are detailed in our hoteling agreement document, but they can be summarized as follows:
- Collaboration with RTs and PIs who have extensive experience in deploying novel architectures at scale using shared filesystems, cluster-based technology, and Slurm scheduling.
- Involvement with ongoing PACE efforts around CoC-ICE and ICE to support new graphical and notebook-based interfaces like Open OnDemand.
- Datacenter space for small boards with the ability to remotely maintain and power cycle said devices.
- Engagement opportunities with other CRNCH faculty teaching similar classes on best practices and how to best incorporate TechFee equipment into classes.
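To give a flavor of the Slurm-based scheduling mentioned above, a class job on the shared cluster could be submitted with a standard batch script. This is only a sketch: the job, partition, and binary names below are hypothetical placeholders, not the testbed's actual configuration.

```shell
#!/bin/bash
# Minimal Slurm batch script sketch for a class exercise.
# Partition and binary names are hypothetical placeholders.
#SBATCH --job-name=novel-arch-lab
#SBATCH --partition=rg-fpga        # hypothetical partition name
#SBATCH --nodes=1
#SBATCH --time=00:30:00

# Run the student's host application on the allocated node.
srun ./host_application
```

Students would submit this with `sbatch script.sh` and check status with `squeue`, the same workflow used on other Slurm-managed clusters.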
Funded TechFee Proposals Hosted by Rogues Gallery
FY 2022 – CRNCH Neuromorphic Testbed Expansion
PIs: Jeffrey Young, Jennifer Hasler, Sam Shapero
Funded amount: $61,660
Project overview: To develop a more robust neuromorphic environment for classes and undergraduate research, we plan to acquire two different neuromorphic platforms and supporting hardware, creating the basis for a neuromorphic testbed built on cutting-edge platforms and tooling. We will acquire up to 12 Field-Programmable Analog Array (FPAA) mixed digital/analog devices, each capable of supporting close to one thousand neurons and supported via the open-source Reconfigurable Analog Signal Processor (RASP) framework developed by Dr. Hasler and her students. Additionally, we will procure 18 UltraScale+ FPGAs capable of running Caspian, the digital neuromorphic framework developed by researchers at Oak Ridge National Laboratory. Twelve Raspberry Pi devices will also be procured to serve as schedulable hosts for the FPAA and FPGA platforms so that they can be used remotely more easily.
Status update: Purchasing for this project is in progress, and we are working on plans to manage this deployment in light of supply chain issues with some of the requested FPGA platforms.
FY 2021 – HPC Accelerated-GPU Cluster Upgrade Initiative
PIs: Jeffrey Young, Edmond Chow, Richard Vuduc, Will Powell
Funded amount: $111,523
Project overview: To promote a diverse environment for students to investigate HPC acceleration of applications, data analytics, and machine learning, we propose to acquire several types of GPU accelerators and servers to provide remote access to the requested devices. These resources will be used by classes such as the Online Masters section of Intro to High-Performance Computing and the Team Phoenix student cluster competition Vertically Integrated Projects (VIP) class. This mix of systems allows students to study and use diverse GPU acceleration options for high-performance computing. All of these GPUs will be made available for students to implement and evaluate high-performance algorithms written in programming models like HIP, SYCL, and OpenMP.
Status update: Four Intel Ice Lake CPU-based systems have been deployed (Frozone) along with one AMD Milan system with NVIDIA A30 GPUs (Quorra). Access is through the HPC subsection of the Rogues Gallery testbed. This TechFee award has also supported student usage of the Arm compiler toolchain with locally hosted Arm HPC systems.
Thanks to: NVIDIA for the donation of A100 cards to support our student cluster competition course for undergraduates in 2020.
FY 2020 – Reconfigurable Cluster Initiative
Funded amount: $74,905
PIs: Jeffrey Young, Hyesoon Kim, Jason Riedy, Lee Lerner
Proposed Overview: This initiative proposes to acquire up to nine Field-Programmable Gate Array (FPGA) devices and two host servers to seed a new reconfigurable cluster initiative. This cluster will initially be used by Atlanta-based students in multiple courses but will eventually be extended to support a limited number of OMSCS students via remote access and cluster-based scheduling. We anticipate supporting undergraduate and graduate students for coursework as well as making the resource available to all interested students via our existing TSO-supported testbed, the “Rogues Gallery”. This effort will provide easier access to novel hardware for students and reduce the need for individual professors to acquire and maintain this type of hardware for their classes.
Status Update: Due to the COVID-19 pandemic, remote access to novel FPGA hardware has become more critical than ever. CRNCH has pushed ahead with supporting devices that can be remotely accessed and power cycled. All of our deployed equipment can be accessed from off campus by enrolled students, and we are working on Slurm scheduling to support larger class usage.
Thanks to: Tool support is provided by Intel donations and the Xilinx University Program (XUP). Many thanks also to XUP for the donation of U280 cards for research and teaching support.
What’s available for class and student use?

| FPGA Board / DevKit | FPGA | Memory | Programming Tools | Notes | Hosting Machine |
|---|---|---|---|---|---|
| Intel Arria 10 | GX1150 | 8 GB DDR4 | Intel SDK, OneAPI | | Flubber |
| Intel Stratix 10 PAC | 1SX280HN2F43E2VG | 32 GB DDR4-2400 | | | Flubber |
| Bittware 520N-MX | GX2800 | 16 GB DDR4 | Intel SDK, OpenCL | | Flubber |
| Mellanox Innova-2 Flex SmartNIC | Kintex UltraScale+ XCKU15P | 8 GB DDR4 | Xilinx Vivado | ConnectX-5 SmartNICs with FPGA chips | Flubber |
| PYNQ board | Zynq XC7Z020-1CLG400C | 512 MB DDR3 | Xilinx Vivado, PYNQ | Also available for Nengo-FPGA | Brainard |
| Zynq UltraScale+ MPSoC devkit | Zynq UltraScale+ MPSoC XCZU7EV-1FBVB900E | DDR4 | Xilinx Vitis, Vivado | | Brainard |
| Zynq UltraScale+ MPSoC Evaluation Kit | Zynq UltraScale+ MPSoC XCZU7EV-2FFVC1156 | 4 GB DDR4 | Xilinx Vitis, Vivado, PYNQ | | Brainard |
| Xilinx Alveo U280 | UltraScale+ XCU280 | 8 GB HBM2 | Xilinx Vitis, Vivado, Vitis AI | Supported in part by XUP | Flubber (multiple cards) |
| Raspberry Pi 4 | Quad Cortex-A72 | 4 GB LPDDR4 | OpenMP | Used to host FPGA devkits and support PYNQ usage | Brainard |
| Coral TPU | Quad Cortex-A53, Cortex-M4F, Google Edge TPU | 1 GB LPDDR4 | TensorFlow Lite | Devkit that allows for alternate AI comparisons with TPU units | Brainard |
| Intel Movidius Neural Compute Stick 2 | Myriad X VPU | | OpenVINO | Devkit that allows for alternate AI comparisons | Brainard |
| Jetson Xavier NX | 6-core Arm Carmel CPU, 384-core Volta GPU | 8 GB LPDDR4x | OpenMP, CUDA | Devkit that allows for alternate AI comparisons | Brainard |
| Xilinx Versal | | | | VMK-190 dev card | |
What classes have made use of this resource?
Please note that this list may change as resources must typically be deployed one semester before class usage.
- ECE 2601, 3601, 4601 (VIP)
- CS 3220 (Processor Design)
- CS 2698/4698 (Undergraduate Research)
As of Spring 2022, over 200 students have accessed these CRNCH reconfigurable resources for academic-related projects.
Additionally, we are seeking opportunities to use this hardware in machine learning-related classes like CS 4803/7643 and digital design classes like ECE 2031. Special topics courses like CS 8803 (Topics on Datacenter Design, Dr. Alex Daglis) also plan to make use of Xilinx-based SmartNICs for optional class projects.
Tools Available: Intel OneAPI 2021.4, Intel Devstack, Xilinx Vitis and Vivado 20.2, Xilinx PYNQ
Primary Contact: Jeffrey Young
Restrictions: TechFee equipment is prioritized for instructional usage. General student usage can occur on an ad hoc basis but is scheduled at a lower priority.