Title: Large Language Models for in Situ Knowledge Documentation and Access With Augmented Reality
Authors: Juan Izquierdo-Domenech; Jordi Linares-Pellicer; Isabel Ferri-Molla
Journal: International Journal of Interactive Multimedia and Artificial Intelligence (IJIMAI)
Date: 09/2023
Volume: In Press
Pages: 1-11
ISSN: 1989-1660
Keywords: Augmented Reality; Deep Learning; Multimodal; Large Language Models; Transformer
URL: https://www.ijimai.org/journal/sites/default/files/2023-09/ip2023_09_002.pdf

Abstract: Augmented reality (AR) has become a powerful tool for assisting operators in complex environments, such as shop floors, laboratories, and industrial settings. By displaying synthetic visual elements anchored in the real environment and providing task-specific information, AR helps to improve efficiency and accuracy. However, a common bottleneck in these environments is introducing all the necessary information, which often requires predefined structured formats and lacks support for multimodal and Natural Language (NL) interaction. This work proposes a new method for dynamically documenting complex environments using AR in a multimodal, non-structured, and interactive manner. Our method employs Large Language Models (LLMs) so that experts can describe elements of the real environment in NL and select the corresponding AR elements in a dynamic and iterative process. This enables a more natural and flexible way of introducing information, allowing experts to describe the environment in their own words rather than being constrained by a predetermined structure. Any operator can then ask about any aspect of the environment in NL and receive a response and visual guidance from the AR system, making information retrieval equally natural and flexible. These capabilities ultimately improve the effectiveness and efficiency of tasks in complex environments.
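Note: The abstract describes a two-phase workflow: experts attach free-form NL descriptions to AR anchors (documentation), and operators later ask NL questions to retrieve an answer plus a visual cue on the matching anchor (access). The sketch below illustrates that flow under stated assumptions; the `ARAnchor` and `KnowledgeBase` names are hypothetical, not the authors' implementation, and the LLM matching step is stubbed with naive keyword overlap so the example runs standalone.

    from dataclasses import dataclass, field

    @dataclass
    class ARAnchor:
        """A synthetic visual element anchored at a position in the real scene."""
        anchor_id: str
        position: tuple  # (x, y, z) in the AR session's world coordinates
        descriptions: list = field(default_factory=list)  # free-form NL notes from experts

    class KnowledgeBase:
        """Stores expert NL descriptions keyed by AR anchor (documentation phase)."""
        def __init__(self):
            self.anchors = {}

        def document(self, anchor: ARAnchor, nl_description: str):
            # Expert describes the element in their own words; the note is
            # attached to the AR anchor they selected, with no fixed schema.
            anchor.descriptions.append(nl_description)
            self.anchors[anchor.anchor_id] = anchor

        def answer(self, question: str):
            # Stand-in for the LLM step: in the paper's method an LLM would
            # interpret the operator's NL question against stored descriptions.
            # Here, naive keyword overlap avoids any external service.
            words = set(question.lower().split())
            best, best_score = None, 0
            for anchor in self.anchors.values():
                for desc in anchor.descriptions:
                    score = len(words & set(desc.lower().split()))
                    if score > best_score:
                        best, best_score = (anchor, desc), score
            if best is None:
                return None
            anchor, desc = best
            # The AR client would highlight the anchor at anchor.position.
            return {"answer": desc, "highlight_anchor": anchor.anchor_id}

    # Documentation phase: an expert annotates a machine part in NL.
    kb = KnowledgeBase()
    valve = ARAnchor("valve-3", (1.2, 0.8, 0.4))
    kb.document(valve, "Emergency shutoff valve; turn clockwise to close before maintenance.")

    # Access phase: an operator asks a question and gets text plus a visual cue.
    print(kb.answer("How do I close the shutoff valve?"))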