
Reason for fewer betas to be estimated in temporal spline model vs. stick model? #122

Closed
sjburwell opened this issue Aug 9, 2021 · 4 comments

@sjburwell

Hello @behinger et al.,

I am enjoying working through the tutorials in MATLAB and exploring the outputs - very nice toolbox; I am considering using it for my next project. The "temporal splines" demonstration for smoothing the estimates, as illustrated in the PeerJ paper, is very interesting and motivating.

I am curious why downsampling is applied to the temporal splines, such that by default (and in the paper) there are fewer betas to be estimated for the temporal spline model than for the stick model. My guess is that temporal splines cause collinearity among the columns of the design matrix (neighboring latencies in the regression ERP) and thus must be downsampled to reduce this collinearity? But perhaps there's another, better reason than this...

Best,
Scott

@behinger
Member

behinger commented Aug 9, 2021

Hi!
Good question, actually - we didn't dive much into the temporal splines.

The maximal number of (temporal) splines is the number of samples in the epoch. If you choose fewer, you are effectively downsampling.
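To illustrate the point (a sketch in Python rather than the toolbox's MATLAB, using simple triangular hat functions rather than unfold's actual spline implementation): a temporal basis is just an `n_samples × n_splines` matrix, and choosing `n_splines` equal to `n_samples` recovers the identity "stick" basis.

```python
import numpy as np

def hat_basis(n_samples, n_splines):
    """Triangular (linear B-spline-like) temporal basis:
    n_splines columns spanning n_samples time points."""
    centers = np.linspace(0, n_samples - 1, n_splines)
    width = centers[1] - centers[0]
    t = np.arange(n_samples)[:, None]
    return np.clip(1 - np.abs(t - centers[None, :]) / width, 0, 1)

B = hat_basis(100, 20)   # 100 time points described by only 20 betas
print(B.shape)           # (100, 20)

# With the maximal number of splines, the basis is the identity,
# i.e. the ordinary stick ("FIR") basis:
assert np.allclose(hat_basis(50, 50), np.eye(50))
```

Each beta then weights one basis column, so fewer columns means fewer betas and a smoother, effectively downsampled time course.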

Are you maybe thinking of using the maximal number, but somehow increasing the overlap between neighbouring splines? That would result in a low-pass filter, but with more degrees of freedom?

Haven't tried that. You'd need to modify the toolbox to implement your own temporal splines function. I was recently working on the spline implementation in unfold.jl; there it is much easier to implement new splines, but currently I don't have a good way to automatically evaluate the splines so that you are back in "normal" time space (akin to what uf_condense does).

Or did you think of something else?

PS: our motivation was to trade off temporal resolution for a smaller design matrix - actually, you increase collinearity by doing so. We thought this could result in a faster fit, but it turns out it doesn't ¯\_(ツ)_/¯ maybe we didn't test it enough ;-)

@sjburwell
Author

Thanks for the quick response, @behinger ,

I was hoping to use the maximal number of betas (i.e., no downsampling) and to use temporal splines (or "time basis" sets, if I'm getting the nomenclature right) to obtain a "smoother" / "low-pass" estimate of each beta. Specifically, I would like to produce something similar to Figure 7 in Ehinger & Dimigen (2019) (screenshot below), whereby the temporal spline set produces a somewhat "cleaner"-looking estimate of the original signal compared to the stick function set.
[Screenshot: Figure 7 from Ehinger & Dimigen (2019)]

Perhaps, however, similar results in "normal" space (i.e., at the original sampling rate) could be achieved by applying a lowpass filter after the regression step. I'd have to look into that. Of course, I could be mistaken about how the temporal splines are implemented, and for that I apologize :)

@behinger
Member

behinger commented Aug 10, 2021

In the figure we used the "downsampled" spline version - I can send you the script that generates the figure if you are interested.

The easiest solution might be to filter the data before fitting it. Given that the model fit is a linear operation and filtering is as well, the order of operations doesn't matter*

If you use regularization, things might look different, because then model fitting could be non-linear due to the hyperparameter optimization (I'm actually not sure about this one).

* except when it matters ;) there are boundary artifacts when filtering epochs, which is why I'd filter first
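A small numpy sketch of this point (not unfold code; it assumes a noiseless, full-rank Toeplitz/circulant design and uses circular convolution, which sidesteps the boundary-artifact caveat): since deconvolution and filtering are both convolutions with the event train and the filter kernel respectively, lowpass-filtering the continuous data before deconvolving yields the same betas as filtering the estimated kernel afterwards.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 200, 40                                   # continuous samples, estimation window
events = (rng.random(n) < 0.05).astype(float)    # sparse "stick" event train
beta = np.zeros(k)
beta[:30] = np.hanning(30)                       # true ERP kernel, support < k

# Time-expanded (circulant) design matrix: column j = event train delayed by j
X = np.stack([np.roll(events, j) for j in range(k)], axis=1)
y = X @ beta                                     # noiseless continuous "EEG"

h = np.ones(5) / 5                               # toy moving-average lowpass

def circfilt(x, h):
    """Circular convolution via FFT (no boundary artifacts)."""
    return np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(h, len(x))))

# (a) filter the continuous data, then deconvolve
b_filter_first, *_ = np.linalg.lstsq(X, circfilt(y, h), rcond=None)

# (b) deconvolve, then filter the estimated kernel (zero-padded to n, cut back to k)
b_fit_first, *_ = np.linalg.lstsq(X, y, rcond=None)
b_filter_last = circfilt(np.concatenate([b_fit_first, np.zeros(n - k)]), h)[:k]

print(np.allclose(b_filter_first, b_filter_last, atol=1e-8))  # True
```

In this idealized setting the two orders agree exactly; with real epoched data, filter edge effects (and possibly regularization, as noted above) are what break the equivalence.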

@sjburwell
Author

Ah, I see - that makes sense that the figure is downsampled. I think I will opt for lowpass filtering before or after the regression step. It is, however, a very powerful demonstration of the regression-ERP / unfold approach for processing ERPs in "one step," versus the "multi-step" / binned approaches used in more traditional ERP research.

Thanks for your explanations!
