MultiEdTech 2017: 1st International Workshop on Multimedia-based Educational and Knowledge Technologies for Personalized and Social Online Training

Educational and knowledge technologies (EdTech), especially in connection with multimedia content and the vision of mobile and personalized learning, are a hot topic in both academia and the business start-up ecosystem. The driver and enabler of this is, on the one side, the development and widespread availability of multimedia materials and MOOCs, which represent multimedia content produced specifically to support e-learning; and, on the other side, the ever increasing availability of all sorts of information on the Internet and in social media channels (e.g., lectures, research papers, user-generated videos, news items), which, despite not directly targeting e-learning, can prove to be a valuable complement to the more targeted learning materials. Although the availability of such content is not a problem these days, finding the right content and associating different relevant pieces of multimedia so as to enable a comprehensive learning experience on a chosen subject is by no means a trivial task. This workshop presents research in areas related to multimedia-based educational and knowledge technologies, and particularly on the use of multimedia search and retrieval, analysis and understanding, browsing, summarization, recommendation, and visualization technologies on multimedia content available in specialized learning platforms, the Web, mobile devices, and/or social networks for supporting personalized and adaptive e-learning and training.


INTRODUCTION
Multimedia-driven educational and knowledge technologies are a new and exciting research direction [5], encompassing topics such as mobile and personalized learning, which are currently hyped in Silicon Valley. One example is Socos 1 , a startup that provides students with personalized mobile information based on what they already know. To this end, Socos builds, like classical adaptive hypermedia systems [1][2][3] and intelligent tutoring systems [4], a model of students' conceptual knowledge that allows tutors to adapt instruction in real time, whether to address common misconceptions or to expand on unusual insights. Another example is the German startup Sofatutor 2 , which offers thousands of training videos and hundreds of courses ranging from primary school to university and encompassing classical subjects including math, social sciences, economics, and religion.
The driver and enabler of this movement is, on the one side, the development and widespread availability of multimedia materials and MOOCs, which represent multimedia content produced specifically to support e-learning; and, on the other side, the ever increasing availability of all sorts of information on the Internet and in social media channels (e.g., lectures, research papers, user-generated videos, news items), which, despite not directly targeting e-learning, can prove to be a valuable complement to the more targeted learning materials. Although the availability of such content is not a problem these days, finding the right content and associating different relevant pieces of multimedia so as to enable a comprehensive learning experience on a chosen subject is by no means a trivial task.
The 1st Int. Workshop on Educational and Knowledge Technologies for Personalized and Social Online Training (MultiEdTech 2017) 3 provides a forum for presenting research in areas related to multimedia-based educational and knowledge technologies, and particularly on the use of multimedia search and retrieval, analysis and understanding, browsing, summarization, recommendation, and visualization technologies on multimedia content available in specialized learning platforms, the Web, mobile devices, and/or social networks for supporting personalized and adaptive e-learning and training.

OBJECTIVES
Even though Technology Enhanced Learning (TEL) is an established field, multimedia-driven educational and knowledge technologies (EKT) are a new research direction. The target audience of this interdisciplinary workshop at the intersection of media informatics and media-based education includes all researchers and practitioners interested in developing multimedia-based educational and knowledge technologies and applications, and in using multimedia mobile search and retrieval, analysis and understanding, browsing, summarization, recommendation, and visualization technologies on multimedia content available in specialized learning platforms, the Web, mobile devices, and social networks to support online learning.
The workshop cuts across the majority of ACM Multimedia core topics, including personalized and social multimedia search and retrieval, analysis and understanding, browsing, summarization, recommendation, and visualization, but focuses on the (typically combined) use of these technologies for multimedia-based educational and knowledge technologies. As such, the workshop focuses explicitly on the e-learning applications of the various generic multimedia retrieval and analysis technologies.
Multimedia-based educational and knowledge technologies have their own research community, which is rather disconnected from the general ACM Multimedia community (and the greater multimedia and human-computer interaction research community). Bringing these communities and research areas together, and examining the use of multimedia-based retrieval and analysis technologies specifically for supporting adaptive and personalized mobile learning and training, has received relatively little attention so far. We argue that such a combination is not only original but also highly timely, given the significant growth of the field of educational technologies over the last few years and the rapidly expanding use of multimedia content in mobile online learning, such as in MOOCs. The challenges and opportunities arising from these new forms of online learning indeed mark a new era of multimedia-driven education, one that challenges both researchers and practitioners.

KEYNOTE AND PAPER PRESENTATIONS
The workshop will be kicked off with a keynote by Dr. Pablo Cesar from CWI, Amsterdam: "Sensing Engagement: Helping Performers to Evaluate their Impact". Dr. Cesar leads the Interactive and Distributed Systems group 4 at CWI, which focuses on facilitating and improving the way people access media and communicate with others and with the environment. The keynote gives an overview of gathering data on, and understanding, the experience of people attending cultural events, public lectures, and courses by using wearable sensor technology. Through practical case studies in different areas of the creative industries and education, Dr. Cesar will showcase results and discuss failures. Working on realistic testing grounds and collaborating with several commercial and academic partners, his group has deployed its technology and infrastructure in places such as the National Theatre of China in Shanghai. The approach is to seamlessly connect fashion and textiles with sensing technology and with the environment. The final objective is to create intelligent and empathic systems that can react to the audience and their experience.
The paper presentations start with "Train in Virtual Court: Basketball Tactic Training via Virtual Reality" by Wan-Lun Tsai, Ming-Fen Chung, Tse-Yu Pan, and Min-Chun Hu, which presents a basketball tactic training system based on multimedia and virtual reality (VR) technologies. The goal is to improve the effectiveness and experience of tactic learning using a tablet-based digital tactic board.
In "Surgical Action Retrieval for Assisting Video Review of Laparoscopic Skills", Sabrina Kletz, Klaus Schoeffmann, Bernd Münzer, and Manfred J. Primus present an information retrieval system for finding surgical actions in video collections of gynecologic surgeries, based on two novel content descriptors. The goal is to improve psychomotor skills, which are required for laparoscopic surgeries but difficult to learn and to teach.
With "Automatic MOOC video classification using transcript features and convolutional neural network", Houssem Chatbri, Kevin McGuinness, Suzanne Little, Jiang Zhou, Keisuke Kameyama, Paul Kwan, and Noel E. O'Connor demonstrate the use of modern deep learning techniques and speech transcripts for automatic topic classification of MOOC videos.
Finally, "Chat2Doc: From Chats to How-to Instructions, FAQ, and Reports" by Britta Meixner, Matt Lee, and Scott Carter demonstrates a system that aims to collect, store, and automatically extract procedural knowledge from messaging interactions. The system exploits chats as a means of communication and adds the capability for users to tag text and media in order to organize content into high-quality multimedia documents.