Welcome to ARGUS’s documentation!

ARGUS is a visual analytics tool that facilitates multimodal data collection, enables quick user modeling (i.e., modeling of the physical state of the environment and of performer behavior), and supports retrospective analysis and debugging of historical data generated by the AR sensors and ML models that power task guidance.

ARGUS operates in two main modes: “Online” (during task performance) and “Offline” (after performance).

The online mode supports real-time monitoring of model behavior and data acquisition during task execution. It displays tailored visuals of real-time model outputs, letting users monitor the system during live sessions and debug it as it runs. Data is saved incrementally; once a session is finalized, all data and associated metadata collected during the task are stored in a permanent data store whose analytical capabilities handle both the structured data generated by ML models and the multimedia data (e.g., video, depth, and audio streams) collected by the headset.

The offline mode can then be used to explore and analyze historical session data through visualizations that summarize spatiotemporal information and highlight detailed information about model performance, performer behavior, and the physical environment.
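To make the two modes concrete, here is a minimal client-side sketch, assuming a hypothetical HTTP API served at BASE_URL. The endpoint paths, parameters, and field names below are illustrative assumptions, not ARGUS’s documented interface:

    """Sketch of the online/offline workflow against a hypothetical ARGUS API."""
    import time

    import requests

    BASE_URL = "http://localhost:8000"  # assumed local ARGUS server address


    def monitor_live(session_id: str, poll_s: float = 1.0) -> None:
        """Online mode: poll the latest model outputs during a live session."""
        while True:
            # Hypothetical endpoint returning the most recent ML outputs.
            resp = requests.get(f"{BASE_URL}/sessions/{session_id}/latest")
            resp.raise_for_status()
            print(resp.json())  # e.g., current object detections or step predictions
            time.sleep(poll_s)


    def analyze_history(session_id: str) -> None:
        """Offline mode: fetch a finished session's stored outputs for analysis."""
        # Hypothetical endpoint over the permanent data store.
        resp = requests.get(f"{BASE_URL}/sessions/{session_id}/outputs")
        resp.raise_for_status()
        for record in resp.json():
            print(record["timestamp"], record.get("model"), record.get("prediction"))

The same store would back both functions: the online poller reads the incrementally saved records as they arrive, while the offline query reads the finalized session after the fact.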

You can find the source code in our GitHub repository: https://github.com/VIDA-NYU/ARGUS
