AI Research Scientist - Video Generation
About Us
We are a cutting-edge AI startup based at Hong Kong Science and Technology Park, specializing in next-generation video generation technology. Our mission is to push the boundaries of what's possible in AI-driven video generation through foundation model innovation. As a growing startup, we offer a dynamic research environment where your contributions can advance the state of the art in AI technology.
Position Overview
We are seeking a dedicated AI Research Scientist to join our foundation model research team. The ideal candidate will have hands-on experience in training large-scale models and a strong passion for advancing fundamental research in foundation models. This position is designed to support early-career researchers in developing their expertise while contributing to cutting-edge AI research.
Key Research Responsibilities
- Conduct fundamental research in large-scale foundation model architectures and training methodologies
- Design and implement novel approaches to video generation using transformer-based architectures
- Develop and optimize model training algorithms for distributed computing environments
- Investigate scaling laws and efficiency improvements for foundation model training
- Research innovative techniques to enhance model capabilities and performance
- Implement and evaluate state-of-the-art methods in multimodal AI and video synthesis
- Collaborate with research team members on joint research projects and publications
- Contribute to the development of proprietary research datasets and evaluation metrics
- Conduct experimental validation of theoretical approaches and document findings
- Participate in research discussions and present findings at internal research meetings
- Publish research papers at top-tier AI/ML conferences or in leading journals
Required Qualifications
- Ph.D. in Computer Science, Machine Learning, Artificial Intelligence, or related field
- 2+ years of research experience in large-scale model training, preferably at:
- International research institutions or tech companies (e.g., Google Research, Meta AI, Microsoft Research, OpenAI, Anthropic) OR
- Leading Chinese research labs or tech companies (e.g., ByteDance AI Lab, Alibaba DAMO Academy, Baidu Research, Tencent AI Lab, SenseTime Research, Huawei Noah's Ark Lab)
- Strong research background in distributed training systems and large-scale model optimization
- Deep understanding of transformer architectures, attention mechanisms, and their variants
- Proven research experience in developing and training foundation models
- Proficiency in PyTorch and/or JAX for research implementation
- Publication record in top-tier AI/ML conferences (NeurIPS, ICML, ICLR, CVPR, ICCV) or strong research portfolio
Preferred Research Qualifications
- Research experience bridging Chinese and international AI research communities
- Knowledge of Chinese AI research infrastructure and platforms (e.g., ModelArts, PAI, ByteMLab)
- Research background in scaling laws and efficient training methodologies
- Specialized research experience in video generation models or multimodal architectures
- Active contributions to open-source research projects and ML frameworks
- Research experience in ML systems and infrastructure optimization
- Knowledge of mixed-precision training techniques and model parallelism strategies
- Experience with custom CUDA kernel development for research applications
Technical Research Skills
- Foundation Model Research: Transformer architectures, attention mechanisms, scaling methodologies
- Distributed Training Research: Multi-GPU/multi-node training, model parallelism, gradient synchronization
- Cloud Research Infrastructure:
- International platforms (AWS/GCP for research computing)
- Chinese platforms (Alibaba Cloud, Tencent Cloud, Huawei Cloud)
- Programming: Python, CUDA, C++ (beneficial for optimization research)
- Research Frameworks:
- Standard: PyTorch, JAX, Hugging Face Transformers, DeepSpeed, Megatron-LM
- Chinese ecosystem: PaddlePaddle, MindSpore (advantageous)
- Research Tools: Git, Docker, Jupyter, Weights & Biases, MLflow, TensorBoard
Research Environment & Support
- Access to substantial GPU computing resources for large-scale experiments
- Opportunity to conduct cutting-edge research in video generation and foundation models
- Collaborative research environment with opportunities for co-authored publications
- Support for attending international AI/ML conferences and workshops
- Access to research datasets and computational infrastructure
- Mentorship opportunities with senior researchers in the field
- Regular research seminars and technical discussions
- International collaboration opportunities with leading research institutions
Research Impact Goals
- Advance the state of the art in foundation models for video generation
- Contribute to high-impact research publications in top-tier venues
- Develop novel methodologies that can be translated into practical applications
- Build expertise in large-scale AI model research within the Hong Kong innovation ecosystem
Application Requirements
Please submit:
- Comprehensive CV highlighting research experience and publications
- Research statement (2-3 pages) outlining your research interests and vision for foundation model research
- List of significant research projects with detailed descriptions of your contributions
- Links to Google Scholar profile, GitHub repositories, and any published code
- Contact information for 2-3 research references
To apply or inquire about this research position, please contact [via CTgoodjobs Apply Now].
All applications submitted through our system will be delivered directly to the advertiser, and the privacy of applicants' personal data will be protected.