# What does a three-dimensional matrix look like?

1. Apr 5, 2011

### Hypatio

I am trying to figure out how to construct a matrix for solving systems of linear equations with two dimensions of space and a dimension of time, but I do not know how to do this or begin visualizing such a matrix. The solution depends on all the data at all times less than the solved time so I can't cheat by simply updating a 2D matrix.

For instance, it is very clear that a 2D matrix will look like this:

http://www.eecs.berkeley.edu/~demmel/cs267/lecture17/DiscretePoisson.gif

Do I add the third dimension directly below this, to the right, or to the bottom right? I would assume the bottom right, since a system of equations that reduces to a tridiagonal matrix in 2D should stay banded in 3D. On the other hand, I do not know how it would be possible to keep a tridiagonal matrix in a 3D problem, because solving a new time level refers back to prior time levels.
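For reference, the block-tridiagonal 2D Poisson matrix in the linked figure can be assembled as a Kronecker sum of 1D second-difference operators. A minimal sketch (grid size and the NumPy approach are my choices for illustration, not from the thread):

```python
import numpy as np

def laplacian_2d(n):
    """Assemble the standard 5-point 2D discrete Laplacian on an n x n grid.

    The Kronecker-sum construction reproduces the block-tridiagonal
    structure of the discrete Poisson matrix: 4 on the diagonal, -1 for
    the four nearest neighbours.
    """
    # 1D second-difference operator: 2 on the diagonal, -1 off-diagonal
    L1 = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    I = np.eye(n)
    # Kronecker sum gives the (n^2 x n^2) 2D operator
    return np.kron(I, L1) + np.kron(L1, I)

A = laplacian_2d(3)
print(A.shape)   # (9, 9)
print(A[0, 0])   # 4.0
```

Each extra spatial dimension adds another Kronecker term, enlarging the matrix rather than appending a block "below" or "to the right".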

2. Apr 5, 2011

### Dickfore

Define what you mean by "dimension".

3. Apr 5, 2011

### Hypatio

I'm not sure I can do that correctly without simply looking up the mathematical definition. The solution to my model has two dimensions of space, consisting of regularly spaced points. The dimension of time is also discretized so that I find the solution at desired times. The time dependence in the solution comes from a convolution integral, so it is not a simple forward model where I can just update a 2D mesh: values at all times need to be retained, since the solution at each time depends on values at all previous times through the convolution.

I just want to know what the matrix is supposed to look like so that I can set it up for Gaussian elimination.

4. Apr 5, 2011

### Dickfore

Oh, so dimension is the number of arguments the solution depends on. If the equation is linear and contains a convolution in time, then performing a Fourier transform (with respect to time) turns the convolution into a simple multiplication of the Fourier components. Then it is a matter of finding the inverse Fourier transform, and there are methods for doing that numerically (FFT).

5. Apr 5, 2011

### Hypatio

That sounds promising, but does it matter if all the unit response functions depend on time? My problem would be easily solved if the relaxation modulus were not time- and space-dependent. It is a linear viscoelastic problem where the viscosity varies with time as well as space.

In any case, I still don't understand how a 3+dimensional problem can be numerically solved since I do not know how the matrix is constructed.

6. Apr 5, 2011

### Dickfore

OK, I would suggest you start by showing us what the matrix you linked in the OP means and how you obtain it for the Poisson equation.

7. Apr 5, 2011

### Hypatio

The equation solved is:
$$\nabla^4\phi\left (1+\lambda\frac{1-v}{E} \right )+\sum_{t_0}^t\int_{t_i}^t\left [k\frac{1-v}{E}\nabla^4\dot{\phi}-\left ( \frac{\partial^4}{\partial x^4}+\frac{\partial^4}{\partial y^4} \right )\dot{\phi} \right ]\exp \left [ -(t-t_i)\frac{\mu}{\eta} \right ]dt=\sum_{t_0}^t\int_{t_i}^t\dot{\epsilon_T}\exp \left [ -(t-t_i)\frac{\mu}{\eta} \right ]dt -\nabla^2 k\alpha_VT$$
where:
$\nabla^4$ is the biharmonic operator,
$\lambda$, $v$, $E$, $k$, $\alpha_V$, and $\mu$ are constant coefficients,
$t$ is time ($t_0$ is the start time and $t_i$ is a time between $t_0$ and $t$),
$T$ is a variable (temperature),
$\phi$ is the scalar potential function, which is what the solution finds,
the dot indicates a derivative with respect to time,
$\eta$ is the viscosity, which varies in space and time,
and
$$\epsilon_T=\frac{\partial^2}{\partial y^2}k\alpha_V T$$

Finding the right side of the equation is quite trivial since epsilon is known, but since the convolution on the LHS involves the unknown stress function phi it will require more work in the dimension of time.

8. Apr 5, 2011

### Dickfore

What does:

$$\sum_{t_{0}}^{t}$$

stand for?

9. Apr 5, 2011

### Hypatio

I see the notation is unclear. The number of sums depends on the time discretization: you sum from time $t_0$ to $t$, $t/\Delta t$ times, where $\Delta t$ is the time interval.

Maybe it is better written

$$\sum_{t=0}^{t/\Delta t}\int_{t_i}^tf(t)dt$$

10. Apr 5, 2011

### Dickfore

Ok, so is:

$$\int_{t_{i}}^{t}{f(t') \, dt'}$$

a function of the upper bound $t$ (I used a different symbol for the dummy variable $t'$) and then you take the sum of the values of this function for a discrete set of the upper bound $t$?

EDIT:

Also, what is $t_{i}$?

11. Apr 5, 2011

### Hypatio

I think that is right. So:

$$\sum_{t=0}^{t/\Delta t}\int_{t_i}^tf(t)dt=\int_{t_0}^tf(t)dt+\int_{\Delta t}^tf(t)dt+\int_{2\Delta t}^tf(t)dt...$$

12. Apr 5, 2011

### Dickfore

What you wrote is not what I said.

13. Apr 5, 2011

### Hypatio

I'm sorry, I guess I didn't understand what you were asking. Is your question about the operation indicated by the summation and integration, or the term inside the integral or something else?

14. Apr 5, 2011

### Dickfore

It's about the summation. What variable are you summing with respect to?

15. Apr 5, 2011

### Hypatio

Oh, I see. The value of $t_i$ increases by $\Delta t$ with each term, so it is the lower bound that varies over the summation. The upper bound only changes when you solve for a different time level.
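The structure just described (a fixed upper bound $t$ and a lower bound $t_i$ that advances by $\Delta t$ with each term) can be sketched numerically; the integrand here is a placeholder chosen only for illustration:

```python
import numpy as np

def sum_of_integrals(f, t0, t, dt, npts=201):
    """Compute sum over i of the integral of f from t_i = t0 + i*dt up to t."""
    total = 0.0
    ti = t0
    while ti < t:
        grid = np.linspace(ti, t, npts)
        vals = f(grid)
        # trapezoidal rule for the integral from ti to t
        total += np.sum((vals[1:] + vals[:-1]) / 2 * np.diff(grid))
        ti += dt
    return total

# With f = cos, each term is sin(t) - sin(t_i), so for t = 1, dt = 0.25:
# 4*sin(1) - [sin(0) + sin(0.25) + sin(0.5) + sin(0.75)] ~ 1.9574
print(sum_of_integrals(np.cos, 0.0, 1.0, 0.25))
```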

16. Apr 5, 2011

### Dickfore

So, does this summation arise from some approximation of an integral, or is it something fundamental from the theory?

17. Apr 5, 2011

### Hypatio

The summation arises because of the dependence of the viscosity ($\eta$ in the equation) on time, and it is an approximation of an integral. If the viscosity were constant, $\epsilon_T$ and the partial-derivative terms in $\phi$ could be inserted into their respective integrals.

I'm actually wondering if there is a way to mostly reduce the summation such that I can use only some accumulated result from a previous timestep to evaluate the new timestep instead of performing the summation for all times for each timestep. I am reading a paper that says this is possible, but I am afraid that the time dependence will prevent this.
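For what it's worth, the reduction described does work exactly for an exponential kernel with a constant relaxation time $\tau = \eta/\mu$; a sketch of the idea (all parameter values and the load history are illustrative assumptions, and the time-dependent $\eta$ in this thread breaks the exactness):

```python
import numpy as np

# For S(t) = int_0^t f(s) exp(-(t - s)/tau) ds with CONSTANT tau, the
# kernel factorizes, so S(t + dt) = exp(-dt/tau) * S(t) + increment:
# one running accumulator replaces the sum over the full history.
tau, dt, nsteps = 0.5, 0.01, 200
ts = np.arange(nsteps + 1) * dt
f = np.sin(ts)                      # placeholder load history

S_rec = 0.0
decay = np.exp(-dt / tau)
for k in range(1, nsteps + 1):
    # left-rectangle increment over [t_{k-1}, t_k], decayed to t_k
    S_rec = decay * S_rec + f[k - 1] * np.exp(-dt / tau) * dt

# Brute-force evaluation of the same discretized integral at t = nsteps*dt
t = ts[-1]
S_full = sum(f[k - 1] * np.exp(-(t - ts[k - 1]) / tau) * dt
             for k in range(1, nsteps + 1))
print(np.isclose(S_rec, S_full))  # True
```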

18. Apr 5, 2011

### Dickfore

Actually, I was thinking you can switch the order of integration in the double integrals and perform one of the integrals exactly.

19. Apr 5, 2011

### Hypatio

I don't see how this can be done.

20. Apr 6, 2011

### Dickfore

Well, as far as I can see, you have a double integral of the form:

$$\int_{t_{0}}^{t}{dt_{i} \, \int_{t_{i}}^{t}{f(t') \, \exp{\left[-(t' - t_{i}) \frac{\mu}{\eta(t')} \right]} \, dt'}}$$

If you change the order of integration, then you have to be careful about the limits of the integrals. The result is:

\begin{align*} \int_{t_{0}}^{t}{dt' \, \int_{t_{0}}^{t'}{f(t') \, \exp{\left[-(t' - t_{i}) \frac{\mu}{\eta(t')} \right]} \, dt_{i}}} &= \int_{t_{0}}^{t}{dt' \, f(t') \, \exp{\left[-\frac{\mu \, t'}{\eta(t')}\right]} \, \int_{t_{0}}^{t'}{\exp{\left[\frac{\mu \, t_{i}}{\eta(t')} \right]} \, dt_{i}}} \end{align*}

Now, the integral over $t_{i}$ is trivial (although $\eta(t')$ is a function of time, it is treated as a constant here).
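Carrying that inner integral out explicitly (treating $\eta(t')$ as constant with respect to $t_{i}$, as stated) gives

$$\int_{t_{0}}^{t'}{\exp{\left[\frac{\mu \, t_{i}}{\eta(t')}\right]} \, dt_{i}} = \frac{\eta(t')}{\mu}\left(e^{\mu t'/\eta(t')} - e^{\mu t_{0}/\eta(t')}\right),$$

so the double integral collapses to a single integral over $t'$:

$$\int_{t_{0}}^{t}{f(t') \, \frac{\eta(t')}{\mu}\left(1 - e^{-(t' - t_{0})\frac{\mu}{\eta(t')}}\right) \, dt'}.$$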