Supervisor

Matt Lemon

Programme

MSc in Data Analytics

Subject

Computer Science

Abstract

Time-series forecasting is widely used in data analytics, yet the interpretability of deep learning models remains a key challenge. This study compares Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) models using a dual interpretability framework that combines attention mechanisms and SHAP analysis. Two feature sets were evaluated across assets with different volatility regimes: Solana and Shiba Inu (high volatility), Bitcoin (moderate volatility), and Apple (low volatility). Results show that autoregressive features achieved the lowest forecasting errors, while volatility- and momentum-based indicators provided stronger interpretability. GRU performed best in moderate-volatility conditions, whereas LSTM demonstrated more consistent performance across varying regimes. The findings highlight how feature engineering and interpretability techniques can improve both transparency and predictive performance in deep learning time-series forecasting.
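The abstract contrasts two feature families: autoregressive lags (lowest forecasting error) and volatility/momentum indicators (stronger interpretability). As a minimal sketch of what such feature sets might look like — the function names, lag counts, and window sizes here are illustrative assumptions, not the study's exact configuration:

```python
from statistics import pstdev

def lag_features(prices, n_lags=3):
    """Autoregressive features: the previous n_lags prices at each step.

    Illustrative assumption: the study's lag count may differ.
    """
    return [prices[t - n_lags:t] for t in range(n_lags, len(prices))]

def volatility_momentum_features(prices, window=3):
    """Rolling volatility (std of simple returns) and momentum (price change
    over the window) -- a toy stand-in for the indicator-based feature set.
    """
    returns = [prices[t] / prices[t - 1] - 1 for t in range(1, len(prices))]
    feats = []
    for t in range(window, len(returns) + 1):
        vol = pstdev(returns[t - window:t])   # volatility over the window
        mom = prices[t] - prices[t - window]  # momentum over the window
        feats.append((vol, mom))
    return feats

prices = [100.0, 102.0, 101.0, 105.0, 107.0, 104.0]
print(lag_features(prices)[0])                   # → [100.0, 102.0, 101.0]
print(volatility_momentum_features(prices)[0])   # (volatility, momentum=5.0)
```

Either feature matrix would then feed the LSTM/GRU forecaster; the indicator-based set is what attention weights and SHAP values can attribute to economically meaningful quantities.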

Date of Award

2025

Full Publication Date

2025

Access Rights

open access

Document Type

Capstone Project

Resource Type

thesis

Included in

Data Science Commons
