Abstract
Movie trailers are an essential tool for promoting films and attracting audiences. However, creating trailers is time-consuming and expensive. To streamline this process, we propose an automatic trailer generation framework that produces plausible trailers from a full movie by automating shot selection and composition. Our approach draws inspiration from machine translation techniques and models movies and trailers as sequences of shots, thus formulating trailer generation as a sequence-to-sequence task. We introduce the Trailer Generation Transformer (TGT), a deep-learning framework utilizing an encoder-decoder architecture. TGT's movie encoder contextualizes each movie shot representation via self-attention, while the autoregressive trailer decoder predicts the feature representation of the next trailer shot, accounting for the relevance of shots' temporal order in trailers. TGT significantly outperforms previous methods on a comprehensive suite of metrics.
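For readers who want a concrete picture of this sequence-to-sequence formulation, the following is a minimal PyTorch sketch of an encoder-decoder transformer operating on precomputed shot features. It is an illustration under assumptions, not the authors' implementation: the class name, feature dimension, layer counts, and the retrieval step described afterwards are all hypothetical.

```python
import torch
import torch.nn as nn

class TrailerGenerationTransformer(nn.Module):
    """Illustrative TGT-style encoder-decoder (hypothetical, not the paper's code).

    The encoder contextualizes precomputed movie-shot features via
    self-attention; the decoder autoregressively predicts the feature
    vector of the next trailer shot under a causal mask.
    """

    def __init__(self, feat_dim: int = 512, num_layers: int = 4, num_heads: int = 8):
        super().__init__()
        self.transformer = nn.Transformer(
            d_model=feat_dim,
            nhead=num_heads,
            num_encoder_layers=num_layers,
            num_decoder_layers=num_layers,
            batch_first=True,
        )
        # Project decoder states to predicted next-shot features.
        self.head = nn.Linear(feat_dim, feat_dim)

    def forward(self, movie_shots: torch.Tensor, trailer_shots: torch.Tensor) -> torch.Tensor:
        # movie_shots:   (B, N_movie, feat_dim)   precomputed shot features
        # trailer_shots: (B, N_trailer, feat_dim) teacher-forced trailer shots
        causal_mask = nn.Transformer.generate_square_subsequent_mask(
            trailer_shots.size(1)
        )
        states = self.transformer(src=movie_shots, tgt=trailer_shots, tgt_mask=causal_mask)
        return self.head(states)  # (B, N_trailer, feat_dim)

# Example: a 200-shot movie and a teacher-forced 20-shot trailer.
model = TrailerGenerationTransformer()
pred = model(torch.randn(1, 200, 512), torch.randn(1, 20, 512))  # (1, 20, 512)
```

At inference time, one plausible decoding loop (consistent with the abstract's "automating shot selection," though the exact scheme is an assumption here) would generate trailer-shot features autoregressively and match each predicted feature to its nearest movie shot, composing the trailer from the retrieved shots in order.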
| Original language | English (US) |
|---|---|
| Pages | 7445-7454 |
| Number of pages | 10 |
| DOIs | |
| State | Published - 2024 |
| Event | 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2024 - Seattle, United States. Duration: Jun 16, 2024 → Jun 22, 2024 |
Conference
| Conference | 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2024 |
|---|---|
| Country/Territory | United States |
| City | Seattle |
| Period | 06/16/24 → 06/22/24 |
Bibliographical note
Publisher Copyright: © 2024 IEEE.
ASJC Scopus subject areas
- Software
- Computer Vision and Pattern Recognition