A framework of adaptive multimodal input for location-based augmented reality application

Rimaniza Zainal Abidin, Haslina Arshad, Saidatul A.isyah Ahmad Shukri

Research output: Contribution to journal › Article

2 Citations (Scopus)

Abstract

Location-based AR is one of the most familiar types of mobile application in use today. The user's position relative to the real world is located, and digital information is overlaid to provide information about the user's current location and surroundings. Four main types of mobile augmented reality interfaces have been studied, and one of them is the multimodal interface. A multimodal interface processes two or more combined user input modes (such as speech, pen, touch, manual gesture and gaze) in a coordinated manner with multimedia system output. Many frameworks have been proposed to guide designers in developing multimodal applications, including in augmented reality environments, but there has been little work reviewing frameworks for adaptive multimodal input in mobile augmented reality applications. This paper presents a conceptual framework that illustrates an adaptive multimodal interface for location-based augmented reality applications. We reviewed several frameworks that have been proposed in the fields of multimodal interfaces, adaptive interfaces and location-based augmented reality. We analyzed the components of the previous frameworks and determined which input modalities can be applied on mobile devices. Our framework can be used as a guide for designers and developers to develop a location-based AR application with adaptive multimodal interaction.
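As a concrete illustration of what adaptive multimodal input can mean in practice, the short Kotlin sketch below picks an input modality (speech, touch or gesture) from simple device-context signals and pairs the interpreted command with the user's location to drive an AR overlay. This is a minimal sketch of the general idea only, not the framework proposed in the paper; the class names, context signals and thresholds are illustrative assumptions.

// Illustrative sketch only: a simplified adaptive selector that chooses an input
// modality from device context, then fuses the interpreted command with the
// user's location into an AR action. Names and thresholds are assumptions,
// not taken from the published framework.

enum class Modality { SPEECH, TOUCH, GESTURE }

data class DeviceContext(
    val ambientNoiseDb: Double,   // e.g. estimated from microphone sampling (assumed)
    val userIsWalking: Boolean,   // e.g. from an accelerometer/step detector (assumed)
    val screenAvailable: Boolean  // false e.g. when the device is mounted or hands are busy
)

data class Location(val latitude: Double, val longitude: Double)

data class ArAction(val command: String, val anchor: Location, val modality: Modality)

// Chooses the input modality the interface should prioritise in the current context.
fun selectModality(ctx: DeviceContext): Modality = when {
    !ctx.screenAvailable && ctx.ambientNoiseDb > 70.0 -> Modality.GESTURE // no screen, too noisy for speech
    !ctx.screenAvailable                              -> Modality.SPEECH  // hands/eyes busy: fall back to voice
    ctx.ambientNoiseDb > 70.0                         -> Modality.TOUCH   // noisy environment: avoid speech
    ctx.userIsWalking                                 -> Modality.SPEECH  // walking: touch precision drops
    else                                              -> Modality.TOUCH
}

// Fuses an interpreted user command with the current location into an AR action.
fun fuse(command: String, location: Location, modality: Modality): ArAction =
    ArAction(command, location, modality)

fun main() {
    val ctx = DeviceContext(ambientNoiseDb = 45.0, userIsWalking = true, screenAvailable = true)
    val here = Location(2.9300, 101.7770) // example coordinates, not from the paper
    val modality = selectModality(ctx)
    val action = fuse(command = "show nearby cafes", location = here, modality = modality)
    println("Selected modality: $modality -> overlay '${action.command}' at ${action.anchor}")
}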

Original language: English
Pages (from-to): 97-103
Number of pages: 7
Journal: Journal of Telecommunication, Electronic and Computer Engineering
Volume: 9
Issue number: 2-11
Publication status: Published - 2017

Fingerprint

Augmented reality
Multimedia systems
Mobile devices
Interfaces (computer)

Keywords

  • Adaptive interfaces
  • Mobile augmented reality
  • Mobile Sensors
  • Multimodal interfaces

ASJC Scopus subject areas

  • Hardware and Architecture
  • Computer Networks and Communications
  • Electrical and Electronic Engineering

Cite this

A framework of adaptive multimodal input for location-based augmented reality application. / Abidin, Rimaniza Zainal; Arshad, Haslina; Shukri, Saidatul A.isyah Ahmad.

In: Journal of Telecommunication, Electronic and Computer Engineering, Vol. 9, No. 2-11, 2017, p. 97-103.

Research output: Contribution to journal › Article

@article{d1a33e10a10e402a98d86e92dff5c7f1,
title = "A framework of adaptive multimodal input for location-based augmented reality application",
abstract = "Location-based AR is one of the most familiar mobile application currently being used. The position of the user relative to the real world will be located and digital information can be overlaid to provide information on the user’s current location and surroundings. Four main types of mobile augmented reality interfaces have been studied and one of them is a multimodal interface. Multimodal interface processes two or more combined user input modes (such as speech, pen, touch, manual gesture and gaze) in a coordinated manner with multimedia system output. In the multimodal interface, many frameworks have been proposed to guide the designer to develop multimodal applications including in augmented reality environment but there has been little work reviewing the framework of adaptive multimodal input in mobile augmented reality application. This paper presents the conceptual framework to illustrate the adaptive multimodal interface for location-based augmented reality application. We reviewed several frameworks that have been proposed in the field of multimodal interfaces, adaptive interface and location-based augmented reality. We analyzed the components in the previous frameworks and measure which input modalities can be applied in mobile devices. Our framework can be used as a guide for designers and developers to develop a location-based AR application with an adaptive multimodal interaction.",
keywords = "Adaptive interfaces, Mobile augmented reality, Mobile Sensors, Multimodal interfaces",
author = "Abidin, {Rimaniza Zainal} and Haslina Arshad and Shukri, {Saidatul A.isyah Ahmad}",
year = "2017",
language = "English",
volume = "9",
pages = "97--103",
journal = "Journal of Telecommunication, Electronic and Computer Engineering",
issn = "2180-1843",
publisher = "Universiti Teknikal Malaysia Melaka",
number = "2-11",

}

TY - JOUR

T1 - A framework of adaptive multimodal input for location-based augmented reality application

AU - Abidin, Rimaniza Zainal

AU - Arshad, Haslina

AU - Shukri, Saidatul A.isyah Ahmad

PY - 2017

Y1 - 2017

N2 - Location-based AR is one of the most familiar mobile application currently being used. The position of the user relative to the real world will be located and digital information can be overlaid to provide information on the user’s current location and surroundings. Four main types of mobile augmented reality interfaces have been studied and one of them is a multimodal interface. Multimodal interface processes two or more combined user input modes (such as speech, pen, touch, manual gesture and gaze) in a coordinated manner with multimedia system output. In the multimodal interface, many frameworks have been proposed to guide the designer to develop multimodal applications including in augmented reality environment but there has been little work reviewing the framework of adaptive multimodal input in mobile augmented reality application. This paper presents the conceptual framework to illustrate the adaptive multimodal interface for location-based augmented reality application. We reviewed several frameworks that have been proposed in the field of multimodal interfaces, adaptive interface and location-based augmented reality. We analyzed the components in the previous frameworks and measure which input modalities can be applied in mobile devices. Our framework can be used as a guide for designers and developers to develop a location-based AR application with an adaptive multimodal interaction.

AB - Location-based AR is one of the most familiar mobile application currently being used. The position of the user relative to the real world will be located and digital information can be overlaid to provide information on the user’s current location and surroundings. Four main types of mobile augmented reality interfaces have been studied and one of them is a multimodal interface. Multimodal interface processes two or more combined user input modes (such as speech, pen, touch, manual gesture and gaze) in a coordinated manner with multimedia system output. In the multimodal interface, many frameworks have been proposed to guide the designer to develop multimodal applications including in augmented reality environment but there has been little work reviewing the framework of adaptive multimodal input in mobile augmented reality application. This paper presents the conceptual framework to illustrate the adaptive multimodal interface for location-based augmented reality application. We reviewed several frameworks that have been proposed in the field of multimodal interfaces, adaptive interface and location-based augmented reality. We analyzed the components in the previous frameworks and measure which input modalities can be applied in mobile devices. Our framework can be used as a guide for designers and developers to develop a location-based AR application with an adaptive multimodal interaction.

KW - Adaptive interfaces

KW - Mobile augmented reality

KW - Mobile Sensors

KW - Multimodal interfaces

UR - http://www.scopus.com/inward/record.url?scp=85032790673&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=85032790673&partnerID=8YFLogxK

M3 - Article

VL - 9

SP - 97

EP - 103

JO - Journal of Telecommunication, Electronic and Computer Engineering

JF - Journal of Telecommunication, Electronic and Computer Engineering

SN - 2180-1843

IS - 2-11

ER -