MediaDiver

Viewing and annotating multi-view video

Gregor Miller, Sidney Fels, Abir Al Hajri, Michael Ilich, Zoltan Foley-Fisher, Manuel Fernandez, Daesik Jang

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

5 Citations (Scopus)

Abstract

We propose to bring our novel rich media interface called MediaDiver demonstrating our new interaction techniques for viewing and annotating multiple view video. The demonstration allows attendees to experience novel moving target selection methods (called Hold and Chase), new multi-view selection techniques, automated quality of view analysis to switch viewpoints to follow targets, integrated annotation methods for viewing or authoring meta-content and advanced context sensitive transport and timeline functions. As users have become increasingly sophisticated when managing navigation and viewing of hyper-documents, they transfer their expectations to new media. Our proposal is a demonstration of the technology required to meet these expectations for video. Thus users will be able to directly click on objects in the video to link to more information or other video, easily change camera views and mark-up the video with their own content. The applications of this technology stretch from home video management to broadcast quality media production, which may be consumed on both desktop and mobile platforms.

Original language: English
Title of host publication: CHI EA 2011 - 29th Annual CHI Conference on Human Factors in Computing Systems, Conference Proceedings and Extended Abstracts
Pages: 1141-1146
Number of pages: 6
DOIs: https://doi.org/10.1145/1979742.1979711
Publication status: Published - 2011
Event: 29th Annual CHI Conference on Human Factors in Computing Systems, CHI 2011 - Vancouver, BC, Canada
Duration: May 7, 2011 – May 12, 2011

Other

Other: 29th Annual CHI Conference on Human Factors in Computing Systems, CHI 2011
Country: Canada
City: Vancouver, BC
Period: 5/7/11 – 5/12/11

Keywords

  • Multi-view interaction
  • Rich media viewing
  • Video annotation

ASJC Scopus subject areas

  • Software
  • Human-Computer Interaction
  • Computer Graphics and Computer-Aided Design

Cite this

Miller, G., Fels, S., Al Hajri, A., Ilich, M., Foley-Fisher, Z., Fernandez, M., & Jang, D. (2011). MediaDiver: Viewing and annotating multi-view video. In CHI EA 2011 - 29th Annual CHI Conference on Human Factors in Computing Systems, Conference Proceedings and Extended Abstracts (pp. 1141-1146) https://doi.org/10.1145/1979742.1979711
@inproceedings{2a04d27d20244db5a0faf6a72eeab144,
title = "MediaDiver: Viewing and annotating multi-view video",
abstract = "We propose to bring our novel rich media interface called MediaDiver demonstrating our new interaction techniques for viewing and annotating multiple view video. The demonstration allows attendees to experience novel moving target selection methods (called Hold and Chase), new multi-view selection techniques, automated quality of view analysis to switch viewpoints to follow targets, integrated annotation methods for viewing or authoring meta-content and advanced context sensitive transport and timeline functions. As users have become increasingly sophisticated when managing navigation and viewing of hyper-documents, they transfer their expectations to new media. Our proposal is a demonstration of the technology required to meet these expectations for video. Thus users will be able to directly click on objects in the video to link to more information or other video, easily change camera views and mark-up the video with their own content. The applications of this technology stretch from home video management to broadcast quality media production, which may be consumed on both desktop and mobile platforms.",
keywords = "Multi-view interaction, Rich media viewing, Video annotation",
author = "Gregor Miller and Sidney Fels and {Al Hajri}, Abir and Michael Ilich and Zoltan Foley-Fisher and Manuel Fernandez and Daesik Jang",
year = "2011",
doi = "10.1145/1979742.1979711",
language = "English",
isbn = "9781450302289",
pages = "1141--1146",
booktitle = "CHI EA 2011 - 29th Annual CHI Conference on Human Factors in Computing Systems, Conference Proceedings and Extended Abstracts",
}

TY  - GEN
T1  - MediaDiver
T2  - Viewing and annotating multi-view video
AU  - Miller, Gregor
AU  - Fels, Sidney
AU  - Al Hajri, Abir
AU  - Ilich, Michael
AU  - Foley-Fisher, Zoltan
AU  - Fernandez, Manuel
AU  - Jang, Daesik
PY  - 2011
Y1  - 2011
N2  - We propose to bring our novel rich media interface called MediaDiver demonstrating our new interaction techniques for viewing and annotating multiple view video. The demonstration allows attendees to experience novel moving target selection methods (called Hold and Chase), new multi-view selection techniques, automated quality of view analysis to switch viewpoints to follow targets, integrated annotation methods for viewing or authoring meta-content and advanced context sensitive transport and timeline functions. As users have become increasingly sophisticated when managing navigation and viewing of hyper-documents, they transfer their expectations to new media. Our proposal is a demonstration of the technology required to meet these expectations for video. Thus users will be able to directly click on objects in the video to link to more information or other video, easily change camera views and mark-up the video with their own content. The applications of this technology stretch from home video management to broadcast quality media production, which may be consumed on both desktop and mobile platforms.
AB  - We propose to bring our novel rich media interface called MediaDiver demonstrating our new interaction techniques for viewing and annotating multiple view video. The demonstration allows attendees to experience novel moving target selection methods (called Hold and Chase), new multi-view selection techniques, automated quality of view analysis to switch viewpoints to follow targets, integrated annotation methods for viewing or authoring meta-content and advanced context sensitive transport and timeline functions. As users have become increasingly sophisticated when managing navigation and viewing of hyper-documents, they transfer their expectations to new media. Our proposal is a demonstration of the technology required to meet these expectations for video. Thus users will be able to directly click on objects in the video to link to more information or other video, easily change camera views and mark-up the video with their own content. The applications of this technology stretch from home video management to broadcast quality media production, which may be consumed on both desktop and mobile platforms.
KW  - Multi-view interaction
KW  - Rich media viewing
KW  - Video annotation
UR  - http://www.scopus.com/inward/record.url?scp=79957962264&partnerID=8YFLogxK
UR  - http://www.scopus.com/inward/citedby.url?scp=79957962264&partnerID=8YFLogxK
U2  - 10.1145/1979742.1979711
DO  - 10.1145/1979742.1979711
M3  - Conference contribution
SN  - 9781450302289
SP  - 1141
EP  - 1146
BT  - CHI EA 2011 - 29th Annual CHI Conference on Human Factors in Computing Systems, Conference Proceedings and Extended Abstracts
ER  - 