Hello PF. I want to cut down the time spent on a calculation I'm doing, and a natural way to do that is to compute parts of it in parallel. By "parts" I don't mean computations that are wholly independent of one another. My situation is this: I have a bunch of solutions from NDSolve, from which I sample some data (a list of complex values). That list of complex values is all I need from the NDSolve solutions, and it is the input to another time-consuming computation. These lists of complex values become available in chunks, so I should be able to start the second computation with the values at hand and then wait for the next batch to continue.

I'm wondering how I can direct evaluations to specific kernels, so that when a chunk of values is obtained on one kernel, it gets handed to a second kernel, which starts working as soon as that input arrives. The second kernel then waits for the next chunk of data, and so on. Does anyone have ideas on this, or perhaps a better way to do it? (I considered just using Parallelize, but I don't know whether the second calculation would be done "properly", etc.) Thanks in advance :)
The lists of complex values and the other time-consuming computation are not parallel with respect to each other; they are serial: first you sample the values, then you feed them to the second computation. But you say the lists come in chunks and that you can start the second computation on a single chunk, so the chunks are your parallel element. The easiest way to parallelize would then be to build a Table over the chunks and run Parallelize across it.
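As a minimal sketch of what I mean — here `chunkParams`, `sampleFromNDSolve`, and `expensiveStep` are hypothetical stand-ins for your own parameter list, your NDSolve-plus-sampling step, and your second computation:

```
LaunchKernels[];
DistributeDefinitions[sampleFromNDSolve, expensiveStep];

(* each chunk is sampled and then processed, one chunk per kernel *)
results = ParallelTable[
   expensiveStep[sampleFromNDSolve[p]],
   {p, chunkParams}
];
```

If you really do want the producer/consumer behaviour you describe (start processing a chunk the moment it is ready, rather than waiting for the whole Table), you could instead submit the sampling jobs asynchronously and collect them with WaitNext:

```
ids = ParallelSubmit[sampleFromNDSolve[#]] & /@ chunkParams;
While[Length[ids] > 0,
  {chunk, finished, ids} = WaitNext[ids];
  expensiveStep[chunk]   (* runs as soon as any chunk comes back *)
];
```

Note that in the second version `expensiveStep` runs on the main kernel while the remaining chunks are still being computed on the subkernels, which is exactly the overlap you were after.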