Crowd sourced evaluation process for mobile learning application quality

Asharul Islam Khan, Zuhoor Al-Khanjari, Mohamed Sarrab

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

2 Citations (Scopus)

Abstract

Crowd sourcing is a novel method for requirements elicitation, development, testing, and evaluation of software in a dynamic environment. Crowd sourced evaluation is a new technique that overcomes traditional methods, which usually require the co-presence of stakeholders during software quality evaluation. Learning with mobile devices takes place in a dynamic and heterogeneous environment where learners have their own learning styles and needs. The distinguishing characteristic of Mobile Learning (M-Learning) is accessibility at any place, at any time, and by anyone. Therefore, evaluating M-Learning application quality with traditional techniques such as interviews is tedious, costly, and time consuming. Hence, this article proposes a Crowd sourced evaluation process for M-Learning application quality. The process comprises four steps: analysis and classification of the M-Learning application; definition of a customized quality standard based on the M-Learning application category; categorization of the Crowd, such as users and the application development team; and, lastly, use of Crowd sourcing platforms. The main idea behind the process is to involve end users in the assessment of M-Learning application quality rather than merely inviting experts to evaluate the application. The proposed approach is theoretical in nature and is based on findings from the existing literature on Crowd sourcing, M-Learning, evaluation of M-Learning applications, and software quality. The proposed approach will be implemented in future work to test its feasibility in real situations.
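The four-step process outlined in the abstract could be sketched, purely as an illustration, as a minimal evaluation pipeline. All names here (the application categories, quality attributes, evaluator roles, and scores) are invented assumptions for the sketch, not details taken from the paper:

```python
from dataclasses import dataclass, field
from statistics import mean

@dataclass
class MLearningApp:
    name: str
    category: str  # Step 1: the category assigned after analysis/classification

# Step 2: a customized quality standard defined per application category
# (attribute names are hypothetical examples).
QUALITY_STANDARDS = {
    "language-learning": ["usability", "content accuracy", "offline access"],
    "quiz": ["usability", "responsiveness"],
}

@dataclass
class Evaluator:
    role: str  # Step 3: crowd category, e.g. "end user" or "developer"
    ratings: dict = field(default_factory=dict)  # attribute -> score (1-5)

def evaluate(app: MLearningApp, crowd: list) -> dict:
    """Step 4: aggregate ratings collected via a crowdsourcing platform."""
    attributes = QUALITY_STANDARDS[app.category]
    return {
        attr: mean(e.ratings[attr] for e in crowd if attr in e.ratings)
        for attr in attributes
        if any(attr in e.ratings for e in crowd)
    }

app = MLearningApp("VocabTrainer", "quiz")
crowd = [
    Evaluator("end user", {"usability": 4, "responsiveness": 5}),
    Evaluator("developer", {"usability": 3, "responsiveness": 4}),
]
print(evaluate(app, crowd))  # -> {'usability': 3.5, 'responsiveness': 4.5}
```

The sketch reflects the paper's main idea only in that end users (not just experts) supply the ratings that are aggregated; how the real process weights crowd categories or collects scores is not specified here.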

Original language: English
Title of host publication: Proceedings - 2017 2nd International Conference on Information Systems Engineering, ICISE 2017
Editors: Houssain Kettani, Chen-Huei Chou
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 1-5
Number of pages: 5
Volume: 2017-January
ISBN (Electronic): 9781509048793
DOI: 10.1109/ICISE.2017.17
Publication status: Published - Dec 27, 2017
Event: 2nd International Conference on Information Systems Engineering, ICISE 2017 - Charleston, United States
Duration: Apr 1, 2017 – Apr 3, 2017



Keywords

  • crowd sourced evaluation
  • Crowd sourcing
  • mobile learning
  • mobile learning application quality
  • mobile learning quality

ASJC Scopus subject areas

  • Hardware and Architecture
  • Information Systems
  • Computer Networks and Communications

Cite this

Khan, A. I., Al-Khanjari, Z., & Sarrab, M. (2017). Crowd sourced evaluation process for mobile learning application quality. In H. Kettani, & C-H. Chou (Eds.), Proceedings - 2017 2nd International Conference on Information Systems Engineering, ICISE 2017 (Vol. 2017-January, pp. 1-5). Institute of Electrical and Electronics Engineers Inc.. https://doi.org/10.1109/ICISE.2017.17
