During sports events, athletes use wearables and sensors to track their performance. Currently, this data is used mainly for coaching, yet it could also be valuable for covering the events themselves. New artificial intelligence (AI) applications in professional sports, based on sensors, wearables and video data, could therefore enrich live sports reporting. However, no platform yet exists that turns insightful sports data into stories for live commentators, content editors or viewers. The gap lies in adequately translating sensor data into useful narrative elements tailored for integration into real-time dynamic visualizations and storytelling for sports events. The DAIQUIRI project will develop AI algorithms that address current challenges associated with data overload, sensor-video matching, dynamic captioning and multi-modal stories. The outcome will be a sensor data platform and dashboard that supports media professionals in their live sports coverage and enhances the audience's viewing experience.