Alican Nalci1,2, Bhaskar D. Rao2, and Thomas T. Liu1
1Center for Functional MRI, University of California, San Diego, La Jolla, CA, United States, 2Electrical and Computer Engineering, University of California, San Diego, La Jolla, CA, United States
Synopsis
Recent studies suggest the presence of complex recurrent spatiotemporal patterns in resting-state fMRI. These patterns may affect the performance of existing preprocessing and analysis approaches, such as global signal regression and ICA. In this work we present an approach for the sparse estimation of quasi-periodic spatiotemporal components in resting-state fMRI. Our algorithm successfully estimates spatiotemporal components in a sample resting-state fMRI dataset, and our results suggest that the removal of these components may represent an alternative to global signal regression.
Purpose
The complex spatiotemporal nature of resting-state functional connectivity has received increasing attention in recent years1,2. Majeed et al.3 found strong evidence for quasi-periodic spatiotemporal patterns in resting-state fMRI data acquired in both animals and humans. They presented an iterative pattern-matching algorithm for finding a spatiotemporal motif, but did not consider how to construct a signal from repeated versions of the motif. Here we extend their work and present a sparse Bayesian learning approach for estimating the contributions of quasi-periodic spatiotemporal patterns in resting-state fMRI data.
Methods
Resting-state fMRI data (5 minutes, eyes closed) were acquired from a healthy subject using a 3 Tesla GE MR750 system. We used echo planar imaging with $$$166$$$ volumes, $$$30$$$ slices, $$$3.4 \times 3.4 \times 5$$$ $$$mm^{3}$$$ voxel size, $$$64\times 64$$$ matrix size, $$$TR=1.8s$$$, and $$$TE=30ms$$$. We discarded the first 6 volumes and applied standard preprocessing steps to remove nuisance terms.
We used the pattern-matching algorithm described in Majeed et al.3 with a window length of 20 time points (36 seconds) to identify the basic spatiotemporal motif, denoted as a 4D matrix $$$B$$$. This motif was then vectorized to form a space-time column vector $$$b$$$. The preprocessed data (a 4D matrix $$$Y$$$) were vectorized to form a space-time column vector $$$y$$$. Shifted and zero-padded versions of $$$b$$$ were then used to form a design matrix $$$D$$$, where the number of rows in $$$D$$$ was equal to the length of $$$y$$$ and the shift between adjacent columns was equal to the number of voxels, as sketched below.
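For concreteness, the following NumPy sketch shows one way the vectorization and the shifted, zero-padded design matrix could be constructed; the function name, array layout, and dense storage are illustrative assumptions rather than the exact implementation used here.

```python
import numpy as np

def build_design_matrix(B, Y):
    """Illustrative construction of the design matrix D from a motif B and data Y.

    B : motif, shape (nx, ny, nz, W)   -- W = motif window length in time points
    Y : data,  shape (nx, ny, nz, T)   -- T = number of retained volumes
    Returns b, y, D such that y is modeled as D @ x for sparse, non-negative x.
    """
    V = int(np.prod(B.shape[:3]))       # number of voxels
    W = B.shape[3]                      # window length (20 here)
    T = Y.shape[3]                      # run length (160 retained volumes here)

    # Vectorize: stack each volume's voxels, then concatenate volumes over time
    b = B.reshape(V, W, order='F').flatten(order='F')   # length V*W
    y = Y.reshape(V, T, order='F').flatten(order='F')   # length V*T

    # Each column of D is b shifted by one whole volume (V samples) and zero-padded
    n_shifts = T - W + 1
    D = np.zeros((V * T, n_shifts))     # a sparse matrix is preferable at full scale
    for k in range(n_shifts):
        D[k * V : k * V + V * W, k] = b
    return b, y, D
```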
We hypothesized that there should be a limited number of repeats of the spatiotemporal motif within a run and therefore sought a representation of the data of the form $$$y \approx Dx$$$, where $$$x$$$ is an unknown vector of coefficients that is assumed to be sparse and non-negative. The optimal $$$x$$$ can be obtained by solving the traditional sparse recovery problem: $$\text{argmin}_{x \geq 0}~||y-Dx||_2+\lambda~||x||_0.$$ However, instead of this regularization-based form, we used a non-negative version of the sparse Bayesian learning algorithm4, which in contrast to alternative approaches5,6 has the advantage of not requiring the specification of regularization parameters such as $$$\lambda$$$.
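As a point of reference, a minimal sketch of a standard (unconstrained) sparse Bayesian learning loop with EM hyperparameter updates is given below. It does not enforce the non-negativity constraint used in our approach, and the function name, fixed noise variance, and pruning threshold are illustrative assumptions.

```python
import numpy as np

def sbl_estimate(D, y, sigma2=1.0, n_iter=200, prune_tol=1e-6):
    """Minimal sparse Bayesian learning sketch (unconstrained variant).

    Model: y = D x + e, with e ~ N(0, sigma2*I) and x_i ~ N(0, gamma_i).
    The hyperparameters gamma_i are updated by EM; most collapse toward zero,
    pruning the corresponding columns and yielding a sparse estimate of x.
    The noise variance sigma2 is treated as known here for simplicity.
    """
    N, M = D.shape
    gamma = np.ones(M)
    keep = np.arange(M)                  # indices of currently active columns
    mu_k = np.zeros(M)

    for _ in range(n_iter):
        Dk = D[:, keep]
        # Posterior covariance and mean of the active coefficients
        Sigma = np.linalg.inv(Dk.T @ Dk / sigma2 + np.diag(1.0 / gamma[keep]))
        mu_k = Sigma @ (Dk.T @ y) / sigma2
        # EM update of the hyperparameters
        gamma[keep] = mu_k ** 2 + np.diag(Sigma)
        # Prune components whose prior variance has collapsed
        active = gamma[keep] > prune_tol
        keep, mu_k = keep[active], mu_k[active]

    x_hat = np.zeros(M)
    x_hat[keep] = mu_k                   # sparse coefficient estimate
    return x_hat
```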
As an initial assessment of the approach, we computed measures of the spatiotemporal correlation of the preprocessed data, the estimated spatiotemporal component $$$Dx$$$, and the residual $$$y-Dx$$$. This was done by computing the space-time correlation between all possible pairs of space-time blocks, where the duration of each block was 20 time points. The results are displayed as a spatiotemporal correlation matrix (STCM), where the $$$(i,j)^{th}$$$ entry corresponds to the correlation between the space-time blocks at times $$$i$$$ and $$$j$$$.
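The STCM can be computed as in the sketch below, assuming the data have already been arranged as a voxels-by-time matrix; the function and variable names are illustrative.

```python
import numpy as np

def stcm(data, win=20):
    """Spatiotemporal correlation matrix (STCM) sketch.

    data : voxels-by-time matrix, shape (V, T)
    win  : block duration in time points (20 here)
    Entry (i, j) is the Pearson correlation between the vectorized
    space-time blocks starting at time points i and j.
    """
    V, T = data.shape
    n_blocks = T - win + 1
    blocks = np.stack([data[:, i:i + win].ravel() for i in range(n_blocks)]).astype(float)
    blocks -= blocks.mean(axis=1, keepdims=True)
    blocks /= np.linalg.norm(blocks, axis=1, keepdims=True)
    return blocks @ blocks.T          # (n_blocks, n_blocks) correlation matrix
```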
We also calculated functional connectivity maps using a seed signal from the posterior cingulate cortex (PCC). These maps were computed for (1) the original preprocessed data $$$Y$$$, (2) the preprocessed data after global signal regression (GSR), and (3) the residual term $$$y-Dx$$$ after conversion back to 4D matrix form.
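For completeness, a seed-correlation map of this kind can be computed as sketched below; the seed-mask handling is an assumption for illustration and any additional nuisance processing is omitted.

```python
import numpy as np

def seed_connectivity(data, seed_mask):
    """Seed-based functional connectivity map sketch.

    data      : voxels-by-time matrix, shape (V, T)
    seed_mask : boolean vector of length V marking the PCC seed voxels
    Returns the correlation of every voxel's time series with the mean seed signal.
    """
    seed = data[seed_mask].mean(axis=0)             # mean PCC time series
    dz = data - data.mean(axis=1, keepdims=True)    # demean each voxel
    sz = seed - seed.mean()                         # demean the seed
    num = dz @ sz
    den = np.linalg.norm(dz, axis=1) * np.linalg.norm(sz) + np.finfo(float).eps
    return num / den                                # length-V correlation map
```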
Results
Fig. 1 shows the estimated weights in the vector $$$x$$$ and the sliding-window correlation between the motif $$$B$$$ and the preprocessed data $$$Y$$$. The vector is relatively sparse, with the largest coefficients coinciding with the correlation peaks. Fig. 2 shows the STCM for the original data, the estimated spatiotemporal component, and the residual term. The STCM for the estimated spatiotemporal component largely captures the quasi-periodic structure seen in the STCM of the original data. This structure is greatly attenuated in the STCM of the residual data. Fig. 3 shows the functional connectivity maps before GSR, with GSR, and for the residual data. Note the similarity of the maps obtained with GSR and after removal of the recurring spatiotemporal component.
Discussion
We have presented a sparse Bayesian learning approach for estimating recurring spatiotemporal patterns in resting-state fMRI data and demonstrated the approach using a sample resting-state fMRI dataset. Our preliminary results suggest that removal of the spatiotemporal component may provide an alternative to GSR, but further work is needed to better understand the relationship between the two approaches. In this work, we used a previously described method3 to estimate the spatiotemporal motif $$$B$$$. Alternative methods may provide better estimates and can be readily integrated into the proposed approach.
Acknowledgements
No acknowledgement found.
References
1. Chang C, Glover GH. Time–frequency dynamics of resting-state brain connectivity measured with fMRI. NeuroImage. 2010;50(1):81-98.
2. Handwerker DA, Roopchansingh V, et al. Periodic changes in fMRI connectivity. NeuroImage. 2012;63(3):1712-1719.
3. Majeed W, et al. Spatiotemporal dynamics of low frequency BOLD fluctuations in rats and humans. NeuroImage. 2011;54(2):1140-1150.
4. Wipf DP, Rao BD. Sparse Bayesian learning for basis selection. IEEE Transactions on Signal Processing. 2004;52(8):2153-2164.
5. Liu J, Ji S, Ye J. SLEP: Sparse Learning with Efficient Projections. Arizona State University. 2009;6:491.
6. Peharz R, Pernkopf F. Sparse nonnegative matrix factorization with l0-constraints. Neurocomputing. 2012;80:38-46.