LDR    02022nas a2200241 4500
008    2019 d
022    $a 1989-1660
100 1  $a Mohamed Bahaj
245 00 $a Multimodal Generic Framework for Multimedia Documents Adaptation
260    $c 03/2019
300    $a 122-127
490 0  $v 5
520 3  $a Today, people are increasingly capable of creating and sharing documents (which are generally multimedia oriented) via the internet. These multimedia documents can be accessed at any time and anywhere (city, home, etc.) on a wide variety of devices, such as laptops, tablets and smartphones. The heterogeneity of devices and user preferences has raised a serious issue for multimedia content adaptation. Our research focuses on multimedia document adaptation, with a strong focus on interaction with users and the exploration of multimodality. We propose a multimodal framework for adapting multimedia documents based on a distributed implementation of the W3C's Multimodal Architecture and Interfaces applied to ubiquitous computing. The core of our proposed architecture is a smart interaction manager that accepts context-related information from sensors in the environment as well as from other sources, including information available on the web and multimodal user inputs. The interaction manager integrates and reasons over this information to predict the user's situation and service use. Key to realizing this framework are an ontology that undergirds communication and representation, and the use of the cloud to ensure service continuity on heterogeneous mobile devices. A smart city is assumed as the reference scenario.
653    $a Internet of things
653    $a Cloud Computing
653    $a Sensor
653    $a Heterogeneity Of Devices
653    $a Ubiquitous Computing
653    $a Smart City
700 1  $a Hajar Khallouki
856    $u http://www.ijimai.org/journal/sites/default/files/files/2018/02/ijimai_5_4_14_pdf_17167.pdf