Finding video shots for immersive journalism through text-to-video search
Description
Video assets from archives or online platforms can provide relevant content for embedding into immersive scenes or for generating 3D objects and scenes. However, XR content creators lack tools to find video segments relevant to their chosen topic. In this paper, we explore the use case of journalists creating immersive experiences for news stories and their need to find related video material with which to create and populate a 3D scene. We present an approach that creates text and video embeddings and matches textual input queries to relevant video shots. It is provided via a Web dashboard for search and retrieval across video collections; selected shots then form the input to content creation tools that generate and populate an immersive scene, so journalists do not need specialist XR knowledge to communicate their stories.
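To illustrate the general idea of matching text queries to video shots via embeddings, the sketch below indexes shots as frame-level image embeddings and ranks them against an embedded query. It assumes a CLIP-style joint text-image model from Hugging Face Transformers and placeholder frame paths; the paper's actual model, shot segmentation, and retrieval pipeline may differ.

```python
# Minimal sketch of text-to-video-shot retrieval (assumption: a CLIP-style
# joint embedding model; not necessarily the model used in the paper).
# Each shot is represented by the normalised mean of a few sampled frame
# embeddings; a text query is embedded into the same space and shots are
# ranked by cosine similarity.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def embed_shot(frame_paths):
    """Embed a video shot as the normalised mean of its frame embeddings."""
    images = [Image.open(p).convert("RGB") for p in frame_paths]
    inputs = processor(images=images, return_tensors="pt")
    with torch.no_grad():
        feats = model.get_image_features(**inputs)
    feats = feats / feats.norm(dim=-1, keepdim=True)
    shot = feats.mean(dim=0)
    return shot / shot.norm()

def embed_query(text):
    """Embed a textual query into the same space as the shots."""
    inputs = processor(text=[text], return_tensors="pt", padding=True)
    with torch.no_grad():
        feats = model.get_text_features(**inputs)
    return (feats / feats.norm(dim=-1, keepdim=True)).squeeze(0)

def rank_shots(query, shot_embeddings):
    """Return shot indices sorted by cosine similarity to the query."""
    q = embed_query(query)
    sims = torch.stack(shot_embeddings) @ q
    return sims.argsort(descending=True).tolist()

# Hypothetical usage with placeholder frame paths:
# shots = [embed_shot(["shot1_f1.jpg", "shot1_f2.jpg"]),
#          embed_shot(["shot2_f1.jpg"])]
# print(rank_shots("flooding in a coastal town", shots))
```

In such a setup, shot embeddings would be precomputed offline for the whole collection, so an interactive dashboard only needs to embed the query and score it against the stored vectors at search time.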
Files
cbmi-text2video_lnfinal.pdf (596.4 kB)
md5:85c29fc1a9abac8eb9000cc1f6e8cd3d