The global vLLM Meetup is coming to Hong Kong! We’re bringing together vLLM core contributors and users locally from Hong Kong, Greater China, and around the world to share what’s next for LLM inference with vLLM—an open‑source LLM inference and serving engine with over 60,000 stars on GitHub!
Join us to dive into the fundamentals of vLLM, get hands-on experience, learn proven techniques for optimizing LLM performance, deployment cost, and reliability, and connect in person with a vibrant community of vLLM contributors, developers, and users.
Event objectives:
- Discover vLLM and the current landscape of LLM inference
- Hear directly from vLLM core contributors to learn the latest vLLM developments and updates
- Learn how vLLM integrates with AI hardware accelerators and state-of-the-art AI models
Featured Speakers:
- Christopher Nuland, Principal Technical Marketing Engineer, AI BU, Red Hat
- Cyrus Leung, Multi-Modality Co-Lead, vLLM Team
- Haichen Zhang, Senior PM, AI Engineering, AMD
- Han Gao, vLLM-Omni Core Maintainer
- Henry Wong, Data Scientist, Python User Group HK
- Jiaju Zhang, Chief Architect, Red Hat
- Peter Ho, Senior Solution Architect, Red Hat
- William Chan, Software Expert, AI Solutions, MetaX
- Zebin Li, Software Engineer, MiniMax