Concurrent Forward and Backward Propagation for Energy-Efficient Long-Sequence Machine Learning

PI: Qing (Ken) Yang (University of Rhode Island)
Co-PI: Tao Wei (Clemson University)
Funding Source: National Science Foundation (NSF)
Amount: $200,000
Funding Period: January 1, 2025 – December 31, 2026

This active NSF-funded research project addresses a critical challenge in modern machine learning: the high computational cost and energy consumption required to train long-sequence models such as large language models. Existing GPU-based training approaches scale poorly with sequence length (for attention-based models, compute grows quadratically in the length of the input), limiting accessibility and raising sustainability concerns.

The project introduces Concurrent Forward and Backward Propagation (Co-FabPro), a novel hardware–software co-designed approach that integrates forward and backward propagation into a single computational step. By fundamentally rethinking how learning updates are performed, Co-FabPro enables linear scaling for both training and inference, significantly reducing computation time and energy usage. The research explores new machine learning algorithms, pipelined dataflow architectures, and custom hardware designs that support continuous, efficient processing of sequential data.
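The actual Co-FabPro design is a hardware–software co-design and is not detailed in this summary, but the core idea of fusing forward and backward propagation into one step can be illustrated with a simple software analogy. The sketch below (hypothetical function names; a streaming scalar model chosen purely for illustration) performs each step's forward computation and immediately applies the corresponding gradient update in the same pass, so no separate backward sweep over a stored sequence is needed and memory stays constant in sequence length:

```python
# Illustrative software analogy of fused forward/backward updates.
# This is a sketch under simplifying assumptions, not the Co-FabPro
# implementation: a scalar linear model y = w * x trained on a stream.

def streaming_train(xs, targets, w=0.0, lr=0.01):
    """One sweep over the sequence: each step does the forward
    computation AND applies its local gradient update immediately,
    fusing forward and backward into a single computational step.
    Memory is O(1) in sequence length instead of O(T)."""
    for x, t in zip(xs, targets):
        y = w * x                  # forward step
        grad = 2.0 * (y - t) * x   # local gradient of (y - t)**2 w.r.t. w
        w -= lr * grad             # immediate update (fused "backward")
    return w

# Toy usage: recover w = 3 from a stream of (x, 3x) pairs.
xs = [float(i % 10) for i in range(1000)]
targets = [3.0 * x for x in xs]
w = streaming_train(xs, targets)
```

Because each update uses only the current step's activations, the sweep scales linearly with sequence length, which is the kind of behavior the project targets, though Co-FabPro realizes it through pipelined dataflow hardware rather than this per-step software loop.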

Beyond technical innovation, the project aims to broaden access to advanced machine learning capabilities and support workforce development through interdisciplinary collaboration. By reducing cost and energy barriers, this work contributes to more sustainable, accessible, and scalable AI systems aligned with CYPHER’s mission.