
Bug 2142 - memory issues with runica

Status CLOSED FIXED
Reported 2013-04-26 15:34:00 +0200
Modified 2014-06-18 12:33:50 +0200
Product: FieldTrip
Component: core
Version: unspecified
Hardware: PC
Operating System: Windows
Importance: P3 normal
Assigned to: Jan-Mathijs Schoffelen
URL:
Tags:
Depends on:
Blocks:
See also:

Bruno L. Giordano - 2013-04-26 15:34:43 +0200

Created attachment 469: see message

I am running some ICA cleanup of rather large datasets and am running into memory issues with runica. The attached, modified runicaBLG.m basically inlines pcsquash.m inside runica and should make it possible to avoid at least one copy of the dataset in memory. There is another modification that also addresses the memory limitations: demeaning within a for loop instead of using a matrix multiplication. You can find these modifications in runicaBLG by searching for BLG. I have a feeling that there is more room for optimization in runica.m. This might be useful for other people (check the available memory and fall back to the modified function if there is not enough?).
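
A minimal sketch of the demeaning change described above (this is not the actual code in runicaBLG.m; the variable names are made up): subtracting the channel means in a loop avoids allocating the full chans x frames matrix of means that the outer-product form creates.

  % sketch only, not the code from runicaBLG.m
  rowmeans = mean(data, 2);                  % chans x 1 vector of channel means
  for c = 1:size(data, 1)
    data(c, :) = data(c, :) - rowmeans(c);   % subtract per channel, in place
  end
  % ...instead of the outer-product form, which first builds a full
  % chans x frames matrix of means before subtracting:
  % data = data - rowmeans * ones(1, size(data, 2));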


Jan-Mathijs Schoffelen - 2013-05-07 10:05:32 +0200

Hi Bruno, I had a look at your changes and they look OK to me. I would propose to:
- do the mean subtraction in a for loop;
- not inline pcsquash, but either add an optional flag that specifies whether the demeaning should be done, and/or do the for-loop demeaning in pcsquash as well.
Let me know what you think.
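
A rough sketch of what such an interface could look like (hypothetical function name and argument order, not the actual EEGLAB pcsquash.m): an extra argument controls whether the for-loop demeaning is done inside the PCA reduction.

  function [eigvec, proj] = pcsquash_sketch(data, ncomps, dodemean)
    % reduce data (chans x frames) to its first ncomps principal components,
    % optionally demeaning each channel in place first
    if nargin < 3 || dodemean
      m = mean(data, 2);
      for c = 1:size(data, 1)
        data(c, :) = data(c, :) - m(c);        % no chans x frames temporary
      end
    end
    C = (data * data') / (size(data, 2) - 1);  % covariance of the (demeaned) data
    [V, D] = eig(C);
    [~, order] = sort(diag(D), 'descend');
    eigvec = V(:, order(1:ncomps));            % retained principal axes
    proj   = eigvec' * data;                   % data projected onto them
  end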


Jan-Mathijs Schoffelen - 2013-05-07 13:34:37 +0200

@Arno: are these changes that can be considered for runica and pcsquash in eeglab, too?


Arnaud Delorme - 2013-05-08 06:14:22 +0200

Yes, the changes seem OK, although I am not sure they will save RAM. The reason is that the data array is also stored in the caller's workspace, so keeping a local copy in runica is not going to hurt. I have tested this function against the original runica.m and I get the same results (after disabling the random seed). I have checked these changes into EEGLAB. Thanks, Arno
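
A rough illustration of MATLAB's copy-on-write behaviour, which is presumably what the remark about the caller's copy refers to (this is not code from runica; the function name and body are made up). The call itself does not duplicate the array, but the first write inside the callee forces one full private copy as long as the caller still references the original.

  % demo_ica.m -- illustration only
  function W = demo_ica(data)
    m = mean(data, 2);
    for c = 1:size(data, 1)
      data(c, :) = data(c, :) - m(c);  % first write forces a private copy of
    end                                % data, since the caller still holds it
    W = eye(size(data, 1));            % stand-in for the actual ICA computation
  end

  % from the caller's workspace:
  %   X = randn(128, 1e6);             % large dataset, still referenced here
  %   W = demo_ica(X);                 % the call itself does not duplicate X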


Jan-Mathijs Schoffelen - 2013-05-09 14:26:24 +0200

Thanks Arno! I don't know myself (and I didn't check), but it could be that the memory consumption in this case is machine/operating-system specific. Forgive my ignorance, but would it in any way be possible for me to check out the updated files? I could of course make the changes in our local copy, but it would be nice to keep things synchronized. I assume Robert has svn rights on your repository: would it be possible to add me (checkout rights only) as well? Thanks, JM


Jim Herring - 2013-09-25 14:37:48 +0200

If I may add myself to this discussion: I am having memory issues with running ICA on large datasets as well. Although I am mostly using fastICA, the same problems seem to hold. I am currently trying to loop through chunks of my dataset (trials, at the moment), running the ICA on each chunk and using the estimated mixing matrix as an initial guess for the ICA on the following chunk. This reduces the amount of required RAM quite drastically, depending on the size of the chunks. However, I have a feeling this solution is too easy and might be problematic. Can you guys think of any objections to this? I could, for example, imagine that the chunk size has to be sufficiently large to get a good estimate, but perhaps this does not matter, as the estimate of the mixing matrix improves with each iteration over each chunk?
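
A sketch of the chunked approach described above, under the assumption that runica accepts an initial weight matrix via its 'weights' key (check the version in your copy of EEGLAB); here dat is assumed to be a chans x samples matrix and the chunking over trials is simplified to contiguous blocks. Note that this glosses over how runica's sphering step interacts with the warm-started weights.

  nchunks = 10;                                   % made-up number of chunks
  edges   = round(linspace(0, size(dat, 2), nchunks + 1));
  weights = [];                                   % empty -> runica's default start
  for k = 1:nchunks
    chunk = dat(:, edges(k)+1:edges(k+1));
    if isempty(weights)
      [weights, sphere] = runica(chunk);
    else
      [weights, sphere] = runica(chunk, 'weights', weights);  % warm start
    end
  end
  unmixing = weights * sphere;                    % estimate after the last chunk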


Jan-Mathijs Schoffelen - 2014-04-16 13:53:53 +0200

Discussed at the FT meeting: update all EEGLAB functions in fieldtrip/external to the latest version.


Jan-Mathijs Schoffelen - 2014-04-17 11:12:32 +0200

@Robert: could you give me svn commit rights for ~/fieldtrip/external/eeglab? I don't seem to be able to update the code to a newer version.


Robert Oostenveld - 2014-05-13 15:35:31 +0200

(In reply to Jan-Mathijs Schoffelen from comment #7)

You now have write access.


Jan-Mathijs Schoffelen - 2014-05-17 20:06:25 +0200

Sending        eeglab/README
Sending        eeglab/binica.m
Sending        eeglab/floatread.m
Sending        eeglab/icadefs.m
Sending        eeglab/runica.m
Transmitting file data .....
Committed revision 9549.