# Simple Parallelization with Mathematica

1. Jun 14, 2012

### eleteroboltz

I have a program in Mathematica and I would like to run it in parallel.
The essence of the code is to run many cases, varying some parameters.
In other words, each case is completely independent of the others.
But I'm not quite sure how to do it.

To illustrate the situation, see the sample code below.

Code (Text):
fcalc[i_, j_] := NIntegrate[i Cos[j x], {x, 0, 1}]

ParallelDo[
 f[i, j] = fcalc[i, j],
 {i, 1, 10},
 {j, 1, 10}]
The problem is that the values are not being passed back to f.

Any suggestions?

Thank you

2. Jun 14, 2012

### Bill Simpson

In
http://reference.wolfram.com/mathematica/ref/ParallelDo.html
under "Possible Issues" it says
"A function used that is not known on the parallel kernels has no effect:"
and then it describes using DistributeDefinitions[].

Perhaps reading every tiny detail of the documentation of ParallelDo may turn up other things you need to know.
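If I'm reading that page correctly, the definition the subkernels are missing is fcalc, so the call would look something like this (just a sketch, I haven't run your code):

Code (Text):

fcalc[i_, j_] := NIntegrate[i Cos[j x], {x, 0, 1}]

(* make the definition of fcalc known on all parallel kernels *)
DistributeDefinitions[fcalc]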

3. Jun 14, 2012

### eleteroboltz

Thank you for the answer, Bill.
I tried to use DistributeDefinitions[] but it's still not working for me.

Code (Text):

fcalc[i_, j_] := NIntegrate[i Cos[j x], {x, 0, 1}]

DistributeDefinitions[f]

ParallelDo[
 f[i, j] = fcalc[i, j],
 {i, 1, 10},
 {j, 1, 10}]

4. Jun 14, 2012

### eleteroboltz

I found the solution to this problem.
The correct code is shown below:

Code (Text):

fcalc[i_, j_] := NIntegrate[i Cos[j x], {x, 0, 1}]

SetSharedFunction[f]

ParallelDo[
 f[i, j] = fcalc[i, j],
 {i, 1, 10},
 {j, 1, 10}]

5. Jun 14, 2012

### Bill Simpson

I'm glad that reading the documentation allowed you to get the code working.

I understand that the way you have written the code may require shared functions, but perhaps you can find a way to rewrite the code so that they are not needed, just as a test.
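For example (only a sketch, I have not run this), ParallelTable returns its results to the main kernel on its own, so no shared function should be needed; the assignments to f then happen locally:

Code (Text):

fcalc[i_, j_] := NIntegrate[i Cos[j x], {x, 0, 1}]
DistributeDefinitions[fcalc]

(* results come back to the main kernel automatically *)
results = ParallelTable[fcalc[i, j], {i, 1, 10}, {j, 1, 10}];

(* store them in f with ordinary assignments, done on the main kernel *)
Do[f[i, j] = results[[i, j]], {i, 1, 10}, {j, 1, 10}]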

If performance is important then you might see if you can get a stopwatch and find a way to measure the performance with and without the SetSharedFunction[].

It may or may not apply to shared functions as well, but someone reported that using shared variables with parallel code resulted in an order of magnitude slower performance.
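A rough way to do that measurement without a physical stopwatch (again just a sketch, assuming fcalc is defined and distributed as above, and using g for the shared-function version) might be:

Code (Text):

(* time the shared-function version *)
SetSharedFunction[g]
sharedTime = First@AbsoluteTiming[
   ParallelDo[g[i, j] = fcalc[i, j], {i, 1, 10}, {j, 1, 10}]];

(* time a version that avoids sharing and just collects results *)
localTime = First@AbsoluteTiming[
   ParallelTable[fcalc[i, j], {i, 1, 10}, {j, 1, 10}]];

{sharedTime, localTime}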