<div align="center">

# Apollo: An Exploration of Video Understanding in Large Multimodal Models

<p align="center">
    <img src="assets/icon.jpg" width="150" style="margin-bottom: 0.2;"/>
</p>


<a href="https://arxiv.org/abs/2412.10360" target="_blank">
    <img alt="arXiv" src="https://img.shields.io/badge/arXiv-Apollo-red?logo=arxiv&style=for-the-badge" height="25" />
</a>
<a href="https://apollo-lmms.github.io" target="_blank">
    <img alt="Website" src="https://img.shields.io/badge/🌎_Website-apollo--lmms.github.io-blue.svg?style=for-the-badge" height="25" />
</a>
<br>
<a href="https://huggingface.co/Apollo-LMMs" target="_blank">
    <img alt="HF Model: Apollo-LMMs" src="https://img.shields.io/badge/%F0%9F%A4%97%20Model-Apollo--LMMs-ffc107?color=ffc107&logoColor=white&style=for-the-badge" height="25" />
</a>
<a href="https://huggingface.co/spaces/Apollo-LMMs/Apollo-3B" target="_blank">
    <img alt="HF Demo: Apollo-3B" src="https://img.shields.io/badge/%F0%9F%A4%97%20Demo-Apollo--3B-ffc107?color=ffc107&logoColor=white&style=for-the-badge" height="25" />
</a>
<a href="https://huggingface.co/spaces/Apollo-LMMs/ApolloBench" target="_blank">
    <img alt="HF Leaderboard: ApolloBench" src="https://img.shields.io/badge/%F0%9F%A4%97%20Leaderboard-ApolloBench-ffc107?color=ffc107&logoColor=white&style=for-the-badge" height="25" />
</a>

</div>


Apollo is a family of Large Multimodal Models (LMMs) designed to address a broad spectrum of video-language tasks, including long-form video comprehension, temporal reasoning, and multi-turn video conversations. Apollo achieves state-of-the-art performance across several benchmarks and scales efficiently from billions to tens of billions of parameters.

## Release
- **[Dec 13, 2024]** Apollo released!
- **[Coming soon..]** Training code will be released upon internal approval.

## Quick Start

### Installation

Run the following from the root of the cloned repository:

```bash
pip install -e .
pip install flash-attn --no-build-isolation
```
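
Note that `flash-attn` requires an NVIDIA GPU and a CUDA-enabled PyTorch build; without it, the inference example below falls back to `sdpa` or `eager` attention. A minimal environment check (a sketch, assuming only that PyTorch is installed):

```python
import importlib.util

import torch

# flash-attn is CUDA-only; confirm PyTorch can see a GPU.
print(f"torch {torch.__version__}, CUDA available: {torch.cuda.is_available()}")

# flash-attn is optional: the model also runs with "sdpa" or "eager" attention.
if importlib.util.find_spec("flash_attn") is None:
    print("flash-attn not installed; falling back to sdpa/eager attention.")
```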

### Inference Example

```python
import torch
from huggingface_hub import snapshot_download
from transformers import AutoModelForCausalLM

from apollo.conversation import SeparatorStyle, conv_templates
from apollo.mm_utils import (
    ApolloMMLoader,
    KeywordsStoppingCriteria,
    tokenizer_mm_token,
)

# Parameters
version = "qwen_2"
model_url = "Apollo-LMMs/Apollo-3B-t32"
model_path = snapshot_download(model_url, repo_type="model")

video_path = "/your/local/path/video.mp4"
question = "Describe this video in detail"
temperature = 0.4
top_p = 0.7
max_output_tokens = 256

device = "cuda" if torch.cuda.is_available() else "cpu"
# SDPA attention needs a recent PyTorch; otherwise fall back to eager attention.
attn_implementation = "sdpa" if torch.__version__ > "2.1.2" else "eager"

model = AutoModelForCausalLM.from_pretrained(
    model_path,
    trust_remote_code=True,
    low_cpu_mem_usage=True,
    attn_implementation=attn_implementation,
).to(device=device, dtype=torch.bfloat16)

tokenizer = model.tokenizer
vision_processors = model.vision_tower.vision_processor
config = model.config
max_length = config.llm_cfg['model_max_length']
num_repeat_token = config.mm_connector_cfg['num_output_tokens']

# The video is sampled as clips of `frames_per_clip` frames, each spanning
# `clip_duration` seconds.
frames_per_clip = 4
clip_duration = getattr(config, 'clip_duration')

mm_processor = ApolloMMLoader(
    vision_processors,
    clip_duration,
    frames_per_clip,
    clip_sampling_ratio=0.65,
    model_max_length=max_length,
    device=device,
    num_repeat_token=num_repeat_token
)

model.eval()

# Encode the video and get the placeholder string that marks where the
# video tokens are inserted into the prompt.
mm_data, replace_string = mm_processor.load_video(video_path)
message = replace_string + "\n\n" + question

# Build the prompt using the conversation template for this model version.
conv = conv_templates[version].copy()
conv.append_message(conv.roles[0], message)
conv.append_message(conv.roles[1], None)
prompt = conv.get_prompt()

input_ids = tokenizer_mm_token(prompt, tokenizer, return_tensors="pt").unsqueeze(0).to(device)

# Pass an explicit pad token id to generate() to avoid warnings.
pad_token_ids = tokenizer.pad_token_id if tokenizer.pad_token_id is not None else tokenizer.eos_token_id
# Stop generation once the template's separator string is emitted.
stop_str = conv.sep if conv.sep_style != SeparatorStyle.TWO else conv.sep2
keywords = [stop_str]
stopping_criteria = KeywordsStoppingCriteria(keywords, tokenizer, input_ids)

with torch.inference_mode():
    output_ids = model.generate(
        input_ids,
        vision_input=[mm_data],
        data_types=['video'],
        do_sample=(temperature > 0),
        temperature=temperature,
        max_new_tokens=max_output_tokens,
        top_p=top_p,
        use_cache=True,
        num_beams=1,
        pad_token_id=pad_token_ids,
        stopping_criteria=[stopping_criteria]
    )

pred = tokenizer.batch_decode(output_ids, skip_special_tokens=True)[0].strip()
print(pred)
```
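
For a multi-turn exchange, the same conversation object can be extended with the model's answer and a new question. The sketch below reuses the objects from the example above and assumes Apollo's `conv_templates` follow the LLaVA-style conversation API (`messages` as `[role, text]` pairs); the follow-up question is purely illustrative:

```python
# Fill the assistant slot opened earlier with the first answer, then ask a follow-up.
conv.messages[-1][-1] = pred
conv.append_message(conv.roles[0], "What happens at the end of the video?")
conv.append_message(conv.roles[1], None)

follow_up_ids = tokenizer_mm_token(conv.get_prompt(), tokenizer, return_tensors="pt").unsqueeze(0).to(device)

with torch.inference_mode():
    follow_up_out = model.generate(
        follow_up_ids,
        vision_input=[mm_data],  # the same processed video is passed again
        data_types=['video'],
        do_sample=(temperature > 0),
        temperature=temperature,
        max_new_tokens=max_output_tokens,
        top_p=top_p,
        use_cache=True,
        pad_token_id=pad_token_ids,
        stopping_criteria=[KeywordsStoppingCriteria(keywords, tokenizer, follow_up_ids)],
    )

print(tokenizer.batch_decode(follow_up_out, skip_special_tokens=True)[0].strip())
```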

### PEFT (Parameter-Efficient Fine-Tuning)
- **(Coming soon..)** We will provide examples and documentation on how to apply low-rank adaptation (LoRA) and other parameter-efficient fine-tuning techniques to Apollo; a generic sketch follows below.
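
Until the official recipes are released, the snippet below is an untested sketch of the generic Hugging Face `peft` pattern applied to the `model` loaded in the inference example; the `target_modules` names assume a Qwen2-style LLM backbone and are not Apollo's confirmed configuration:

```python
from peft import LoraConfig, get_peft_model

# Hypothetical LoRA configuration; rank/alpha and module names are illustrative.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed attention projections
    task_type="CAUSAL_LM",
)

peft_model = get_peft_model(model, lora_config)
peft_model.print_trainable_parameters()  # sanity-check how many weights will train
```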

## Related Work
If you find Apollo interesting, check out our group's related work in video understanding:
* [Video-STaR](https://github.com/orrzohar/Video-STaR): the first approach for Video-LMM self-training, which generates CoT QA pairs from video metadata via back-rationalization and direct generation.
* [VideoAgent](https://github.com/wxh1996/VideoAgent): the first agentic long-form video understanding system, which lets an LLM interact with videos and decide where to look iteratively.

## Citation

If you find Apollo useful in your research, please cite:
```bibtex
@article{zohar2024apollo,
    title={Apollo: An Exploration of Video Understanding in Large Multimodal Models},
    author={Zohar, Orr and Wang, Xiaohan and Dubois, Yann and Mehta, Nikhil and Xiao, Tong and Hansen-Estruch, Philippe and Yu, Licheng and Wang, Xiaofang and Juefei-Xu, Felix and Zhang, Ning and Yeung-Levy, Serena and Xia, Xide},
    journal={arXiv preprint arXiv:2412.10360},
    year={2024}
}
```