August 11, 2022

Explainable AI in medicine: Detecting AF in ECG data

Authors:

  • Svante Sörberg

Editors:

  • Puya Sharif
  • Viktor Öberg

Introduction

Deep learning models are everywhere. They have helped us push the boundaries of what we thought was possible in many different fields, from computer vision to natural language processing. Owing to their complexity, however, deep learning models are in one key sense flawed. They are effectively black boxes: data goes in, prediction comes out – but we can’t really explain how the model arrived at its conclusion. 

Thankfully, an increasingly large group of machine learning researchers are devoting themselves to the field of explainability, or explainable artificial intelligence. They are concerned with peeking inside the black boxes of deep learning models. In this post, we take a look at how explainability techniques can be used to highlight what features of an ECG are most relevant for a model predicting Atrial Fibrillation from sinus ECGs.

When explainability matters

Let’s get one thing out of the way – how big of a problem is it really that deep learning models are black boxes? Isn’t it enough that they work so wonderfully well? Well, as in most cases, it depends. If you are training a classifier that tells images of dogs from images of cats, you might not care. You are probably content with sitting back and observing with amazement the almost magical capabilities of deep learning. But what if you’re a doctor? Or an insurance company? Or a judge? You want to leverage deep learning to assist your decision-making, but there is no way you can trust a black box, whether for safety, ethical, or legal reasons. In some domains, being able to explain the model’s “thinking” isn’t just a nice-to-have feature, it’s a must-have.

Case study

Together with Zenicor Medical Systems AB, Modulai has previously developed a CNN architecture to detect paroxysmal atrial fibrillation (AF), based on single-lead sinus ECGs. Unless you’re already into medicine, you may be scratching your head at some of what you just read. Let’s break it down quickly.

AF is a type of heart arrhythmia that affects a significant portion of the population. It increases the risk of heart failure and stroke. Given the risks and prevalence, effective AF screening has the potential to save many lives and reduce the burden on healthcare systems worldwide.

Paroxysmal AF in particular means that AF episodes occur sporadically in a patient and self-terminate within a week. Since the episodes can occur infrequently and irregularly, screening is especially difficult. 

If you’ve ever seen a hospital show on TV, you’ve probably seen an ECG recording: it’s the line that must not go flat. It is a recording of the electrical activity of the heart. A sinus ECG in particular means that the recording shows a heart in so-called sinus rhythm, which is medical lingo for normal rhythm. 

Detecting AF from an ECG is relatively easy if you manage to record the heart during an episode. But, as we just noted, paroxysmal AF means the episodes occur sporadically. If you could detect signs of AF in a sinus ECG, screening for it would be a very different ball game. 

In the particular case of the AF-prediction model, there are three primary motivations for applying explainability techniques:

  • To justify: in the medical domain, trust is paramount. A model that cannot be justified is a hard sell. Doctors don’t want to make decisions based on black box predictions, and patients might not be too keen on it either.
  • To improve: explainability techniques could reveal if the model is overfitting to random noise in the data, effectively turning explainability into a useful debugging tool.
  • To discover: predicting AF from sinus ECGs in humans is not a well-studied problem. It is not yet settled to what extent it is even possible, or what features a prediction model would pick up. Hence, there is value in using explainability techniques to discover which features of a sinus ECG might be indicative of AF.

Shapley values

Feature attribution is a method of explaining model predictions by assigning a measure of importance to features of the input. Since bursting onto the scene in 2017 [1], Shapley values have quickly become a cornerstone of the explainability domain of machine learning. The mathematical rigor coupled with a user-friendly software package appeals to engineers and researchers alike. Subsequent papers and updates to the software package have extended the original concept with several optimizations and variations for different types of models. So how does it work?

Shapley values are named after Lloyd Shapley, a Nobel Prize-winning economist and mathematician. He was interested in the fair distribution of rewards from so-called “cooperative games”. In simple terms, the question he was asking was this:

If a group of players with different skills and abilities cooperate for some reward, what share of the reward does each player deserve?

The solution? Each player deserves the weighted average of the change in reward that he or she brings to all possible subgroups of players. For the sake of brevity, we’ll skip the formal definitions. For the interested reader, here is an excellent overview of Shapley values from a machine learning perspective.

The SHAP package uses the concept of Shapley values to determine the contribution of input features to the model output.

Now, you may be wondering what Shapley values and cooperative games have to do with explaining the predictions of machine learning models. To understand this, imagine the prediction task as a cooperative game with the input features as the players. The features all contribute differently to the output of the model (or, in game-theory terms, the reward from playing the game). Using the axiomatic framework that Shapley values provide, we can deduce how big a contribution each individual feature makes to the model’s output. Effectively, we are substituting the input features for the players, and the model function for the game’s reward.
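To make the analogy concrete, here is a minimal sketch of how a Shapley-value explainer is typically wired up with the SHAP package. The model and data below are toy placeholders, not the AF model we discuss later:

```python
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

# Toy tabular setup: the model plays the role of the "game",
# the four input features play the role of the "players".
X = np.random.randn(200, 4)
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
model = RandomForestClassifier(n_estimators=50).fit(X, y)

# KernelExplainer is model-agnostic: it only needs a prediction function and a
# background dataset that defines what "absent" features should look like.
explainer = shap.KernelExplainer(lambda x: model.predict_proba(x)[:, 1], X[:50])

# One Shapley value per feature, per explained sample.
shap_values = explainer.shap_values(X[:5])
print(np.shape(shap_values))  # (5, 4): 5 samples, 4 feature contributions
```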

SHAP on ECG data

For the model in question, the input consists of 15-second ECG recordings sampled at 1 kHz. In other words, each input is a real-valued vector with 15,000 values, representing the electrical activity at each sampled time step. A straightforward attempt at feature attribution would be to treat each of these time steps as a feature. This approach has some problems, however. For example:

  1. There are no sensible “off” values for the features, which the Shapley value method needs in order to work.
  2. Aggregation is tricky: if we are comparing multiple ECGs, the attribution to time t in one ECG cannot be meaningfully compared to the attribution to time t in another.

With Fourier analysis, a signal can be decomposed into its constituent frequencies

Since we are dealing with signals, why not approach the problem in the frequency domain? Using Fourier analysis, we can decompose an ECG into its constituent frequencies. We can then select a number of frequency bands to serve as our features to which we can apply our feature attribution method.
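As a rough sketch of what that decomposition looks like in code (the signal below is random noise standing in for a real ECG, and the band edges are arbitrary placeholders, not the ones we end up using):

```python
import numpy as np

fs = 1000                            # sampling rate in Hz
ecg = np.random.randn(15 * fs)       # stand-in for a 15 s single-lead ECG

# Decompose the signal into its frequency content (real FFT up to Nyquist).
magnitude = np.abs(np.fft.rfft(ecg))
freqs = np.fft.rfftfreq(ecg.size, d=1 / fs)

# Summarize the spectrum into a handful of bands that will act as features.
band_edges = [0.0, 5.0, 15.0, 40.0, 100.0, 500.0]  # Hz, arbitrary placeholders
band_magnitude = [magnitude[(freqs >= lo) & (freqs < hi)].sum()
                  for lo, hi in zip(band_edges[:-1], band_edges[1:])]
print(band_magnitude)                # one aggregated magnitude per band
```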

Such a method was suggested in [2]. With this approach, we address both problems stated above: a natural way of “turning off” features is to apply a band-stop filter, and frequency bands are meaningfully comparable between ECGs.
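Here is a sketch of what “turning off” a band could look like. We use a standard Butterworth band-stop filter from SciPy for illustration; the exact filter design in [2] and in our pipeline may differ:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandstop(signal, lo_hz, hi_hz, fs=1000, order=4):
    """Suppress the [lo_hz, hi_hz] band with a zero-phase Butterworth band-stop filter."""
    b, a = butter(order, [lo_hz, hi_hz], btype="bandstop", fs=fs)
    return filtfilt(b, a, signal)

ecg = np.random.randn(15 * 1000)      # stand-in for a real 15 s ECG
ecg_off = bandstop(ecg, 5.3, 11.4)    # the 5.3-11.4 Hz band is now "off"
```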

The magnitude of different frequencies varies between the groups

The motivation for applying this method to our data is strengthened further when comparing the frequency magnitudes of ECG samples from AF patients to those of non-AF patients. There is a noticeable difference between the groups in the frequency domain.

Selecting bands

We need to define which frequency bands we are targeting as distinct features in our data. To do this in a way that is not entirely arbitrary, we select the frequency bands such that the integral under the average magnitude curve is roughly equal for each band.
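One way to implement that rule is to walk along the cumulative area of the average magnitude spectrum and cut where it crosses equal fractions of the total. A sketch, assuming we already have the average spectrum as an array (the spectrum below is made up):

```python
import numpy as np

def equal_area_band_edges(freqs, avg_magnitude, n_bands):
    """Split the frequency axis so that each band covers roughly 1/n_bands
    of the total area under the average magnitude curve."""
    cumulative = np.cumsum(avg_magnitude)
    cumulative /= cumulative[-1]
    # Frequencies at which the cumulative area crosses 1/n, 2/n, ...
    targets = np.arange(1, n_bands) / n_bands
    inner_edges = np.interp(targets, cumulative, freqs)
    return np.concatenate(([freqs[0]], inner_edges, [freqs[-1]]))

# Toy 1/f-like spectrum; in the real pipeline avg_magnitude would be the
# magnitude spectrum averaged over the ECGs in the dataset.
freqs = np.linspace(0, 500, 5001)
avg_magnitude = 1.0 / (1.0 + freqs)
print(equal_area_band_edges(freqs, avg_magnitude, n_bands=7))
```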


Showtime

Now, we are ready to apply our Shapley-value-based method to assign a measure of feature importance to the frequency bands in our ECG data.
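In code, the wiring could look something like the sketch below: each “feature” is a binary flag saying whether a band is kept or suppressed, and the “game” evaluates the AF model on the correspondingly filtered ECG. The `af_model` function is a stand-in for the trained CNN, band suppression is done with a simple FFT mask, the band edges are illustrative (loosely matching the bands discussed below), and the actual timeXplain implementation [2] is considerably more refined:

```python
import numpy as np
import shap

fs = 1000
ecg = np.random.randn(15 * fs)                           # stand-in for a real ECG
band_edges = [0.0, 5.3, 11.4, 19.8, 37.5, 84.9, 500.0]   # Hz, illustrative
n_bands = len(band_edges) - 1

spectrum = np.fft.rfft(ecg)
freqs = np.fft.rfftfreq(ecg.size, d=1 / fs)
band_masks = [(freqs >= lo) & (freqs < hi)
              for lo, hi in zip(band_edges[:-1], band_edges[1:])]

def af_model(signal):
    """Placeholder for the trained CNN; returns a fake AF probability."""
    return 1.0 / (1.0 + np.exp(-signal.std()))

def f(Z):
    """The 'game': each row of Z is a binary vector saying which bands to keep."""
    preds = []
    for z in Z:
        masked = spectrum.copy()
        for keep, mask in zip(z, band_masks):
            if not keep:
                masked[mask] = 0.0                       # suppress ("turn off") the band
        preds.append(af_model(np.fft.irfft(masked, n=ecg.size)))
    return np.array(preds)

# Background = all bands suppressed ("absent"); explained instance = all bands kept.
explainer = shap.KernelExplainer(f, np.zeros((1, n_bands)))
band_attributions = explainer.shap_values(np.ones((1, n_bands)))
print(band_attributions)  # one Shapley value per frequency band, for this one ECG
```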


We find that, over the dataset as a whole, attributions to the 0.0-5.3 Hz band are quite large in absolute terms, but are fairly evenly distributed between positive and negative. This suggests that the presence of this band, from the model’s perspective, carries information, but it does not favor one decision over the other. For the 5.3-11.4 Hz and 19.8-37.5 Hz bands, attributions are overall slightly positive, suggesting that the model is generally more confident of AF when these frequencies are present in the ECG. Interestingly, the attribution of the 84.9-500 Hz band is small but consistently negative. This is quite remarkable considering that such high frequencies are generally regarded as not carrying meaningful information in ECGs.
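For completeness, here is a sketch of the kind of dataset-level aggregation behind statements like the ones above, assuming we have collected one row of band attributions per explained ECG (the numbers below are random placeholders):

```python
import numpy as np

# attributions: one row of Shapley values per explained ECG, one column per band.
attributions = np.random.randn(1000, 6) * 0.01    # placeholder values

mean_signed = attributions.mean(axis=0)            # does the band push towards AF on average?
mean_absolute = np.abs(attributions).mean(axis=0)  # how much does the band matter at all?

for band, (s, a) in enumerate(zip(mean_signed, mean_absolute)):
    print(f"band {band}: mean SHAP {s:+.4f}, mean |SHAP| {a:.4f}")
```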

References

[1] Lundberg, S., & Lee, S.I. (2017). A Unified Approach to Interpreting Model Predictions. In Advances in Neural Information Processing Systems. Curran Associates, Inc.
[2] Mujkanovic, F., Doskoč, V., Schirneck, M., Schäfer, P., & Friedrich, T. (2020). timeXplain – A Framework for Explaining the Predictions of Time Series Classifiers.

Wanna discuss explainable AI with us?