NVIDIA has selected 10 PhD students for its Graduate Fellowship Program for the 2026-2027 academic year, marking the program's 25th year of supporting graduate students whose work aligns with NVIDIA technologies. Each recipient receives an award of up to $60,000, the awards span key areas of computing innovation, and the program accepts applicants from around the world.
The fellowship requires recipients to complete a summer internship before the fellowship year begins. Their projects focus on accelerated computing, with topics including autonomous systems, computer architecture, computer graphics, deep learning, programming systems, robotics, and security.
Fellowship Recipients and Their Research
Jiageng Mao from the University of Southern California works on solving complex physical AI problems by drawing on diverse priors from internet-scale data, an approach aimed at enabling robust, generalizable intelligence for embodied agents in the real world.
Liwen Wu at the University of California San Diego enriches the realism and efficiency of physically based rendering using neural materials and neural rendering techniques.
Manya Bansal from the Massachusetts Institute of Technology designs programming languages for modern accelerators that let developers write modular, reusable code while retaining the low-level control needed for peak performance.
Sizhe Chen of the University of California, Berkeley works on securing AI in real-world applications. He currently defends AI agents against prompt injection attacks, building defenses that are general and practical while preserving the agent's utility.
Yunfan Jiang at Stanford University develops scalable approaches to building generalist robots for everyday tasks, drawing from hybrid data sources that include real-world whole-body manipulation, large-scale simulation, and internet-scale multimodal supervision.
Yijia Shao, also from Stanford University, researches human-agent collaboration, developing AI agents that communicate and coordinate with humans during task execution and designing new human-agent interaction interfaces.
Shangbin Feng from the University of Washington advances model collaboration, in which multiple machine learning models trained on different data by different people collaborate, compose, and complement one another, supporting an open, decentralized, and collaborative AI future.
Shvetank Prakash at Harvard University advances hardware architecture and systems design by building AI agents grounded in new algorithms, curated datasets, and agent-first infrastructure.
Irene Wang of the Georgia Institute of Technology develops a holistic codesign framework that integrates accelerator architecture, network topology, and runtime scheduling to enable energy-efficient and sustainable AI training at scale.
Chen Geng from Stanford University models 4D physical worlds using scalable data-driven algorithms and physics-inspired principles, advancing physically grounded 3D and 4D world models for robotics and scientific applications.
Additional Recognition for Finalists
The program also names five finalists: Zizheng Guo of Peking University, Peter Holderrieth of the Massachusetts Institute of Technology, Xianghui Xie of the Max Planck Institute for Informatics, Alexander Root of Stanford University, and Daniel Palenicek of the Technical University of Darmstadt.
These selections highlight the breadth of ongoing computing research, and the fellowship continues to back early-career scholars in fields tied to NVIDIA's focus areas.

