What does a three-dimensional matrix look like? 
#1
Apr5-11, 07:30 PM

P: 87

I am trying to figure out how to construct a matrix for solving systems of linear equations with two dimensions of space and a dimension of time, but I do not know how to do this or begin visualizing such a matrix. The solution depends on all the data at all times less than the solved time so I can't cheat by simply updating a 2D matrix.
For instance, it is very clear that a 2D matrix will look like this: http://www.eecs.berkeley.edu/~demmel...etePoisson.gif Do I add the third dimension directly below this, to the right, or to the bottom right? I would assume to the bottom right, since a system of equations that reduces to a banded matrix in 2D should stay banded in 3D. On the other hand, I do not know how it would be possible to create a tridiagonal matrix for a 3D problem, because you have to refer to prior time levels when solving new time levels. 
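(As a concrete sketch, not from the thread: the matrix in the linked Berkeley figure is the 2D discrete Poisson matrix, and it can be assembled from Kronecker products of the 1D second-difference matrix. The same recipe shows where a third dimension "goes": it nests another level of blocks, so the matrix stays banded but is no longer tridiagonal. The grid size n and the helper names below are made up for illustration.)

```python
# Sketch: assembling the discrete Poisson matrix in 1D, 2D, and 3D via
# Kronecker products. Dense NumPy is used only to make the structure visible;
# a real solver would use sparse storage.
import numpy as np

def poisson_1d(n):
    """1D second-difference matrix: tridiagonal, 2 on the diagonal, -1 off it."""
    return 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

def poisson_2d(n):
    """2D discrete Laplacian: block tridiagonal, (n^2 x n^2), as in the figure."""
    I = np.eye(n)
    return np.kron(poisson_1d(n), I) + np.kron(I, poisson_1d(n))

def poisson_3d(n):
    """Adding a dimension nests another block level: (n^3 x n^3), still banded."""
    I = np.eye(n)
    return (np.kron(poisson_1d(n), np.kron(I, I))
            + np.kron(I, np.kron(poisson_1d(n), I))
            + np.kron(I, np.kron(I, poisson_1d(n))))

A2 = poisson_2d(3)   # 9 x 9: diagonal entries 4, matching the linked figure
A3 = poisson_3d(3)   # 27 x 27: blocks of blocks, diagonal entries 6
```

So the extra dimension is added neither "below" nor "to the right": each existing entry of the 2D matrix becomes a block, which is why the 3D matrix cannot be tridiagonal even though it remains sparse and banded.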


#2
Apr5-11, 07:48 PM

P: 3,014

Define what you mean by "dimension".



#3
Apr5-11, 07:56 PM

P: 87

I just want to know what the matrix is supposed to look like so that I can prepare the matrix for Gaussian elimination. 


#4
Apr5-11, 08:12 PM

P: 3,014

What does a three-dimensional matrix look like?
Oh, so "dimension" is the number of arguments the solution depends on. If the equation is linear and contains a convolution in time, then performing a Fourier transform with respect to time turns the convolution into a simple multiplication of the Fourier components. Then it is a matter of finding the inverse Fourier transform, and there are methods for doing that numerically (FFT).
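(To illustrate the suggestion, not specific to this equation: a discrete linear convolution in time equals a pointwise product of Fourier components. The arrays below are made-up sample data.)

```python
# Convolution theorem in discrete form: a linear convolution in time becomes a
# pointwise multiplication of Fourier components, which is what makes the FFT
# route cheap.
import numpy as np

rng = np.random.default_rng(0)
f = rng.standard_normal(64)   # made-up time series
g = rng.standard_normal(64)   # made-up kernel

# Direct linear convolution.
direct = np.convolve(f, g)

# Same result via FFT: zero-pad to the full output length, multiply, invert.
n = len(f) + len(g) - 1
via_fft = np.fft.irfft(np.fft.rfft(f, n) * np.fft.rfft(g, n), n)
```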



#5
Apr5-11, 08:32 PM

P: 87

In any case, I still don't understand how a 3+ dimensional problem can be numerically solved, since I do not know how the matrix is constructed. 


#6
Apr5-11, 08:35 PM

P: 3,014

OK, I would suggest you start by showing us what the matrix you linked in the OP means and how you obtain it for the Poisson equation.



#7
Apr5-11, 08:46 PM

P: 87

The equation solved is:
[tex]\nabla^4\phi\left (1+\lambda\frac{1-v}{E} \right )+\sum_{t_0}^t\int_{t_i}^t\left [k\frac{1-v}{E}\nabla^4\dot{\phi}-\left ( \frac{\partial^4}{\partial x^4}+\frac{\partial^4}{\partial y^4} \right )\dot{\phi} \right ]\exp \left [ -(t-t_i)\frac{\mu}{\eta} \right ]dt=\sum_{t_0}^t\int_{t_i}^t\dot{\epsilon}_T\exp \left [ -(t-t_i)\frac{\mu}{\eta} \right ]dt-\nabla^2 k\alpha_VT[/tex]
where:
- [itex]\nabla^4[/itex] is the biharmonic operator
- [itex]\lambda[/itex], [itex]v[/itex], [itex]E[/itex], [itex]k[/itex], [itex]\alpha_V[/itex], and [itex]\mu[/itex] are constant coefficients
- [itex]t[/itex] is time ([itex]t_0[/itex] is the start time and [itex]t_i[/itex] is a time between [itex]t_0[/itex] and [itex]t[/itex])
- [itex]T[/itex] is a variable (temperature)
- [itex]\phi[/itex] is the scalar potential function, which is what the solution finds
- the dot indicates a derivative with respect to time
- [itex]\eta[/itex] is the viscosity, which varies in space and time
- [tex]\epsilon_T=\frac{\partial^2}{\partial y^2}k\alpha_V T[/tex]
Finding the right side of the equation is quite trivial since [itex]\epsilon_T[/itex] is known, but since the convolution on the LHS involves the unknown stress function [itex]\phi[/itex], it will require more work in the time dimension. 


#8
Apr5-11, 08:49 PM

P: 3,014

What does:
[tex] \sum_{t_{0}}^{t} [/tex] stand for? 


#9
Apr5-11, 08:54 PM

P: 87

Maybe it is better written [tex]\sum_{t=0}^{t/\Delta t}\int_{t_i}^tf(t)dt[/tex] 


#10
Apr5-11, 08:59 PM

P: 3,014

Ok, so is:
[tex] \int_{t_{i}}^{t}{f(t') \, dt'} [/tex] a function of the upper bound [itex]t[/itex] (I used a different symbol for the dummy variable [itex]t'[/itex]) and then you take the sum of the values of this function for a discrete set of the upper bound [itex]t[/itex]? EDIT: Also, what is [itex]t_{i}[/itex]? 


#11
Apr5-11, 09:09 PM

P: 87

[tex]\sum_{t=0}^{t/\Delta t}\int_{t_i}^tf(t)dt=\int_{t_0}^tf(t)dt+\int_{\Delta t}^tf(t)dt+\int_{2\Delta t}^tf(t)dt...[/tex] 


#12
Apr5-11, 09:10 PM

P: 3,014

What you wrote is not what I said.



#13
Apr5-11, 09:21 PM

P: 87




#14
Apr5-11, 09:30 PM

P: 3,014

It's about the summation. What variable are you summing with respect to?



#15
Apr5-11, 09:37 PM

P: 87




#16
Apr5-11, 10:45 PM

P: 3,014

So, does this summation arise from some approximation of an integral, or is it something fundamental from the theory?



#17
Apr5-11, 11:09 PM

P: 87

I'm actually wondering if there is a way to mostly reduce the summation such that I can use only some accumulated result from a previous timestep to evaluate the new timestep instead of performing the summation for all times for each timestep. I am reading a paper that says this is possible, but I am afraid that the time dependence will prevent this. 
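(A toy check, not from the thread: because the kernel in the governing equation is an exponential in t - t_i, the "accumulated result" update the paper describes does exist, at least when eta is constant, or treated as constant over each step. The history integral at t + dt equals a per-step decay factor times the history integral at t, plus one new increment. The values of f, mu, eta, dt, and nsteps below are made up for the demo.)

```python
# Toy check: for an exponential kernel, the running convolution
#   I_N = sum_{k=1..N} f(k*dt) * exp(-(N-k)*dt*mu/eta) * dt
# can be carried forward one step at a time, with no re-summing of history.
# eta is held constant here, which is exactly the caveat when eta really
# varies in time.
import numpy as np

mu, eta, dt, nsteps = 1.0, 2.0, 0.01, 500
f = lambda s: np.sin(s)              # arbitrary integrand for the demo
decay = np.exp(-dt * mu / eta)       # per-step decay factor

# One-accumulator update: I_new = decay * I_old + newest contribution.
I_rec = 0.0
for k in range(1, nsteps + 1):
    I_rec = decay * I_rec + f(k * dt) * dt

# Brute force: re-sum the entire history at the final time.
T = nsteps * dt
I_full = sum(f(k * dt) * np.exp(-(T - k * dt) * mu / eta) * dt
             for k in range(1, nsteps + 1))
```

The two agree to rounding, which is why one stored accumulator per grid point can replace the full time sum for exponential-kernel (viscoelastic-type) problems; the worry about time-varying eta is real, but a piecewise-constant eta per step keeps the recurrence valid with a step-dependent decay factor.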


#18
Apr5-11, 11:18 PM

P: 3,014

Actually, I was thinking you can switch the order of integration in the double integrals and perform one of the integrals exactly.
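(A sketch of that interchange, not from the thread, assuming [itex]\eta[/itex], and hence the decay rate, is constant in time so the kernel factorizes, and writing [itex]t_i = i\Delta t[/itex]: each term [itex]i[/itex] covers [itex]t' \in [t_i, t][/itex], so at a fixed [itex]t'[/itex] only the terms with [itex]t_i \le t'[/itex] contribute, and the inner sum is geometric.)

[tex]\sum_{i}\int_{t_i}^{t} f(t')\, e^{-(t-t_i)\mu/\eta}\, dt' = e^{-t\mu/\eta}\int_{t_0}^{t} f(t')\left(\sum_{i:\, t_i\le t'} e^{t_i\mu/\eta}\right) dt' = e^{-t\mu/\eta}\int_{t_0}^{t} f(t')\,\frac{e^{(\lfloor t'/\Delta t\rfloor+1)\Delta t\,\mu/\eta}-1}{e^{\Delta t\,\mu/\eta}-1}\, dt'[/tex]

so the summation can be evaluated in closed form and only a single integral over the history remains.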


