
Bug 2717 - create a FAQ about cfg.lambda

Status ASSIGNED
Reported 2014-10-02 13:37:00 +0200
Modified 2014-12-04 15:32:42 +0100
Product: FieldTrip
Component: documentation
Version: unspecified
Hardware: PC
Operating System: Mac OS
Importance: P5 normal
Assigned to: Jim Herring
URL:
Tags:
Depends on:
Blocks:
See also: http://bugzilla.fcdonders.nl/show_bug.cgi?id=2016

Jim Herring - 2014-10-02 13:37:53 +0200

cfg.lambda can be an important parameter in getting clean results from beamformer analyses. However, to non-engineers (like me) it is often unclear what it does (conceptually and computationally). Therefore, we should create a FAQ that explains this.


Jim Herring - 2014-10-02 13:59:50 +0200

From wikipedia: Regularization, in mathematics and statistics and particularly in the fields of machine learning and inverse problems, refers to a process of introducing additional information in order to solve an ill-posed problem or to prevent overfitting. This information is usually of the form of a penalty for complexity, such as restrictions for smoothness or bounds on the vector space norm.
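
As a generic illustration of that definition (a minimal sketch with made-up toy data, nothing FieldTrip-specific): Tikhonov/ridge regularisation adds a penalty on the norm of the solution, which turns an unstable inversion into a stable but slightly biased one.

    % Generic Tikhonov/ridge regularisation sketch (illustration only, not FieldTrip code).
    % We solve y = A*x where A is nearly singular: the plain least-squares solution blows
    % up, while the penalty lambda*||x||^2 keeps the estimate bounded.
    rng(1);
    n       = 20;
    A       = randn(n);
    A(:, 2) = A(:, 1) + 1e-8*randn(n, 1);        % make A almost rank-deficient (ill-posed)
    x_true  = randn(n, 1);
    y       = A*x_true + 0.01*randn(n, 1);       % noisy observations

    lambda  = 0.1;                               % regularisation parameter
    x_plain = A \ y;                             % unregularised: unstable, huge norm
    x_ridge = (A'*A + lambda*eye(n)) \ (A'*y);   % regularised: stable, but biased
    fprintf('norm(x_plain) = %g, norm(x_ridge) = %g\n', norm(x_plain), norm(x_ridge));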


Jim Herring - 2014-10-02 14:32:43 +0200

Some notes: when beamforming we are trying to solve an ill-posed problem; there are multiple possible solutions for the spatial filter. Furthermore, our data is usually noisy, yet we still try to solve the inverse problem. To account for the amount of noise in the data we can use a so-called regularization parameter, lambda in our case. The amount of regularization, i.e. the size of lambda, depends on the noise level, but there is always a trade-off: the larger your regularization parameter, the less you trust your data and the more noise you assume to be present, with the consequence that your estimate becomes more blurry. The smaller your amount of regularization, the better you assume your data to be (i.e. a lower noise level), but the more unstable the result can be under noisy conditions. The trick is to find the amount of regularization that fits the noise level in your data; in a way the regularization parameter reflects how much you trust your data.

Practically, in DICS beamforming you specify lambda as a percentage. Behind the scenes this percentage is translated into the specified percentage of the average cross-correlation within channels, and the result is then added to the cross-spectral density matrix before inversion.
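
My reading of what happens behind the scenes, sketched in a few lines of toy MATLAB (treat this as an assumption/illustration rather than a quote of the actual FieldTrip source):

    % Sketch of how a percentage lambda becomes an absolute value on the diagonal
    % (assumption based on the description above, not copied from the FieldTrip source).
    nchan = 4;
    X     = randn(nchan, 200) + 1i*randn(nchan, 200);  % hypothetical toy Fourier data
    C     = (X*X') / 200;                              % stand-in cross-spectral density matrix

    ratio  = 0.05;                        % cfg.lambda = '5%'  ->  ratio = 0.05
    lambda = ratio * trace(C) / nchan;    % 5% of the average power on the diagonal
    Creg   = C + lambda * eye(nchan);     % add the scaled identity before inversion
    Cinv   = pinv(Creg);                  % this regularised inverse goes into the spatial filter

In an actual analysis this corresponds to something like cfg.method = 'dics' with cfg.dics.lambda = '5%' (or plainly cfg.lambda = '5%' in older scripts) passed to ft_sourceanalysis.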


Robert Oostenveld - 2014-10-02 19:22:51 +0200

*** Bug 2016 has been marked as a duplicate of this bug. ***


Jim Herring - 2014-10-02 22:00:37 +0200

I found a very helpful comment from Michael Wibral on the discussion list that explains the effect of lambda in a nutshell.

Quoting Michael Wibral:

> Hi Marco,
>
> in a nutshell the effect of the lambda parameter is to smooth your
> solution in space; it also makes it more stable in the presence of
> noise. You might know that the estimation of the covariance matrix
> for beamforming requires quite a lot of data. CTF/VSM (a MEG
> manufacturer) used to suggest to have your data satisfy the
> following relationship:
>
> 3000 < BW[Hz] * #trials * EffectLength[s]
>
> where BW[Hz] is the bandwidth of your effect of interest in Hz,
> #trials is the number of trials that contain that effect, and
> EffectLength[s] is the length of your effect in seconds (NOT ms!).
> Here's an example: you have an effect between 30 and 60 Hz, so the
> bandwidth of that effect is 30 Hz. The effect is visible (say at the
> electrode level) for 400 ms = 0.4 s in each trial. Now you calculate
> the number of trials to be:
>
> #trials > 3000 / (BW[Hz] * EffectLength[s]) = 3000/(0.4*30) = 250
>
> This means that you would need 250 artifact-free, valid trials.
> Choosing a larger lambda can help to reduce the amount of data
> necessary, but you get a more smeared-out solution.
>
> A good introduction and simulation results for various values of
> lambda can be found in:
>
> Neuroimage. 2008 Feb 15;39(4):1788-802. Epub 2007 Oct 10.
> Optimising experimental design for MEG beamformer imaging.
> Brookes MJ, Vrba J, Robinson SE, Stevenson CM, Peters AM, Barnes GR,
> Hillebrand A, Morris PG.
>
> Hope this helps,
> Michael

For the FAQ I think a nutshell explanation such as this would be a good start. It can be expanded into a more detailed explanation, but I suspect there is a rather large overlap between the people who would understand such a detailed explanation and the people for whom it is sufficient to know that lambda is a regularisation parameter.
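
For readers who want to plug in their own numbers, the rule of thumb from the quote is easy to compute (toy MATLAB snippet, nothing FieldTrip-specific):

    % CTF/VSM rule of thumb from the quote above: 3000 < BW[Hz] * #trials * EffectLength[s].
    bw           = 30;                               % bandwidth of the effect in Hz (30-60 Hz band)
    effectlength = 0.4;                              % duration of the effect in seconds
    mintrials    = ceil(3000 / (bw * effectlength)); % = 250 artifact-free trials
    fprintf('minimum number of trials: %d\n', mintrials);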

Johanna - 2014-11-21 11:41:37 +0100

I look forward to having this FAQ to point others to! :-) One idea I'd add, which is useful to me: in the ideal world of sufficient data, lambda=0 and one gets the 'true' beamformer result. In the opposite extreme of not trusting your data at all (and/or having an insufficient amount of it), you could set lambda='100%', which means that you are effectively ignoring the data (by turning the data covariance matrix into the identity matrix times a constant) and you end up with a min-norm result.


Robert Oostenveld - 2014-12-04 14:32:57 +0100

(In reply to Johanna from comment #5) Lambda*eye(nchan) is added to the data covariance matrix, i.e. it is not (1-lambda)*cov + lambda*eye. To achieve min-norm, lambda would have to approach infinity.
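
A tiny sketch of that limit (toy numbers, not FieldTrip code): as lambda grows, inv(C + lambda*eye(nchan)) approaches (1/lambda)*eye(nchan), i.e. the data covariance stops playing a role.

    % Toy demonstration (illustration only): for large lambda the regularised inverse
    % approaches a scaled identity, so the data no longer influence the filter.
    nchan = 3;
    X     = randn(nchan, 500);
    C     = (X*X') / 500;                          % stand-in data covariance

    for lambda = [1 10 100 1e6]
      Creg = C + lambda*eye(nchan);
      dev  = norm(lambda*inv(Creg) - eye(nchan));  % -> 0 as lambda grows
      fprintf('lambda = %8g   deviation from scaled identity = %g\n', lambda, dev);
    end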


Johanna - 2014-12-04 15:32:42 +0100

(In reply to Robert Oostenveld from comment #6) True, thank you for pointing out my inaccuracy. I was thinking of it approximating (not equaling) a min-norm solution if, for example, the eigenvalues of the data covariance (from its SVD) decay exponentially, such that perhaps only the top 50 (out of 275) are much above zero and the rest are near zero. Assume the eigenvalues are normalised so that the maximum value is 1. Then add 1 to all eigenvalues and take the inverse. The eigenvalues of the inverse now range from 0.5 to 1, with most having a value of around 1. But I realise this is more confusing than helpful.
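
To make that concrete, a small sketch of the eigenvalue argument (toy spectrum and my own numbers, illustration only):

    % Sketch of the eigenvalue argument above (toy numbers, illustration only).
    nchan = 275;
    s     = exp(-(0:nchan-1)/10)';     % exponentially decaying eigenvalues, maximum = 1
    sreg  = s + 1;                     % add lambda = 1 (the maximum eigenvalue) to each one
    sinv  = 1 ./ sreg;                 % eigenvalues of the regularised inverse
    fprintf('inverse eigenvalues range from %.2f to %.2f, median %.2f\n', ...
            min(sinv), max(sinv), median(sinv));
    % prints roughly 0.50 to 1.00 with most values close to 1, i.e. the inverse
    % spectrum is nearly flat, which is why the result resembles (but does not
    % equal) a minimum-norm solution.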