Supervisor
Matt Lemon
Programme
MSc in Data Analytics
Subject
Computer Science
Abstract
Time-series forecasting is widely used in data analytics, yet the interpretability of deep learning models remains a key challenge. This study compares Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) models using a dual interpretability framework that combines attention mechanisms with SHAP analysis. Two feature sets were evaluated across assets spanning different volatility regimes: Solana and Shiba Inu (high volatility), Bitcoin (moderate volatility), and Apple (low volatility). Results show that autoregressive features achieved the lowest forecasting errors, while volatility- and momentum-based indicators provided stronger interpretability. GRU performed best under moderate-volatility conditions, whereas LSTM delivered more consistent performance across regimes. The findings highlight how feature engineering and interpretability techniques can improve both transparency and predictive performance in deep learning time-series forecasting.
Date of Award
2025
Full Publication Date
2025
Access Rights
open access
Document Type
Capstone Project
Resource Type
thesis
Recommended Citation
Ariton, F. (2025) Understanding Model Behaviour and Interpretability in Time Series Forecasting: A Deep Dive into LSTM and GRU with XAI Techniques. CCT College Dublin.
DOI: https://doi.org/10.63227/652.299.120