SSDs have become indispensable for high-speed data services, from desktops to large-scale cloud infrastructure. As we enter the AI era, optimizing the internal software stack of SSDs, including the Flash Translation Layer (FTL), Host Interface Layer (HIL), and Flash Interface Layer (FIL), is more critical than ever. Our research focuses on re-architecting these layers to meet the evolving demands of next-generation computing.
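To make the FTL's role concrete, here is a minimal illustrative sketch of page-level address translation with out-of-place updates, the core mechanism an FTL provides; the class and method names are hypothetical, and a real FTL additionally handles garbage collection, wear leveling, and power-loss recovery.

```python
class PageFTL:
    """Toy page-level FTL: maps logical page numbers (LPNs) to physical
    page numbers (PPNs). Illustrative only, not a production design."""

    def __init__(self, num_physical_pages):
        self.l2p = {}                                 # LPN -> PPN mapping table
        self.free = list(range(num_physical_pages))   # free physical pages
        self.invalid = set()                          # stale pages awaiting GC

    def write(self, lpn):
        # Flash cannot be overwritten in place, so every write is
        # redirected to a fresh physical page (out-of-place update).
        if lpn in self.l2p:
            self.invalid.add(self.l2p[lpn])           # old copy becomes stale
        ppn = self.free.pop(0)
        self.l2p[lpn] = ppn
        return ppn

    def read(self, lpn):
        # Translate LPN to the current PPN; None if never written.
        return self.l2p.get(lpn)
```

The indirection shown here is what lets an SSD hide flash's erase-before-write constraint from the host, and it is also where scheduling and garbage-collection policy decisions, one focus of FTL research, take effect.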
The operating system (OS) is a foundational component of modern computing, providing a critical abstraction layer that hides the complexity of the underlying hardware. Our research focuses on engineering sophisticated resource management schemes that maximize hardware utilization. Specifically, we investigate tightly optimized OS functionalities, such as multi-thread-aware scheduling for multicore CPUs and high-concurrency parallel computing on GPUs, to deliver the extreme performance required by today’s data-intensive applications.
Distributed storage is the architecture of choice for managing the explosive growth of data in modern cloud services. Our research addresses the challenges of storing vast datasets efficiently and securely across massive storage clusters. We focus on mitigating resource contention and ensuring robust fault tolerance across numerous distributed nodes, providing the foundational infrastructure for reliable, large-scale data management.
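One classic building block for spreading data across many nodes while limiting remapping when the cluster changes is consistent hashing. The sketch below is a generic illustration under assumed names (`Ring`, `locate`), not a description of any particular system we build.

```python
import hashlib
from bisect import bisect_right

def _h(key: str) -> int:
    # Stable hash onto a large integer ring (MD5 used only for illustration).
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

class Ring:
    """Toy consistent-hashing ring with virtual nodes for smoother balance."""

    def __init__(self, nodes, vnodes=64):
        # Each node owns many points on the ring (virtual nodes).
        self.ring = sorted((_h(f"{n}#{i}"), n)
                           for n in nodes for i in range(vnodes))
        self._keys = [h for h, _ in self.ring]

    def locate(self, key: str) -> str:
        # An object maps to the first ring point clockwise of its hash.
        i = bisect_right(self._keys, _h(key)) % len(self.ring)
        return self.ring[i][1]
```

Because only the ring points owned by a joining or leaving node move, adding or removing a node remaps roughly 1/N of the keys instead of reshuffling everything, which is why this placement scheme is common in large storage clusters.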
DNA storage promises far higher data density and greater long-term durability than conventional SSDs and HDDs. While synthesis and sequencing speeds remain a challenge, we bridge this gap through a software-driven approach. We focus on architecting efficient system protocols and prototyping next-generation storage frameworks to make DNA-based data management a reality.
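The density claim rests on the fact that each nucleotide can carry two bits. A toy encoder makes this concrete; the mapping below is the textbook 2-bits-per-base scheme and the function names are illustrative, while real pipelines additionally apply error-correcting codes and avoid problematic sequences such as long homopolymer runs.

```python
# Map every 2-bit pair to one of the four DNA bases (and back).
B2N = {"00": "A", "01": "C", "10": "G", "11": "T"}
N2B = {v: k for k, v in B2N.items()}

def encode(data: bytes) -> str:
    """Encode raw bytes as a DNA strand, 4 bases per byte."""
    bits = "".join(f"{b:08b}" for b in data)
    return "".join(B2N[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode(strand: str) -> bytes:
    """Recover the original bytes from a strand produced by encode()."""
    bits = "".join(N2B[n] for n in strand)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))
```

At 4 bases per byte, a strand is an extremely compact representation; the systems-level challenges our research targets sit on top of this encoding, in how strands are addressed, indexed, and retrieved at scale.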