Dot Product of Streaming Vectors

  • #1
zheng004
Hi all,

Suppose we have vectors arriving in the order A, B, and then C, but A must be deleted before C arrives. How can we get the dot product between A and C? We are allowed to store some quantities computed from A before deleting its elements; for example, we could store the norm of A, dot(A, B), etc.

I tried storing (A + B).^2 and (A - B).^2, and then replacing B with C, but failed. Please help!
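For reference, the elementwise version of that attempt can be checked numerically: by the polarization identity, (A + B).^2 and (A - B).^2 recover the elementwise products A∘B, and hence dot(A, B), but both stored quantities are full-length vectors and neither involves the yet-unseen C. A minimal sketch with hypothetical NumPy vectors:

```python
import numpy as np

rng = np.random.default_rng(0)
A, B, C = rng.standard_normal((3, 5))  # hypothetical small vectors for illustration

# Polarization identity, elementwise: ((A+B)^2 - (A-B)^2) / 4 == A*B,
# so the two stored arrays recover dot(A, B)...
sq_plus = (A + B) ** 2
sq_minus = (A - B) ** 2
dot_ab = ((sq_plus - sq_minus) / 4).sum()
assert np.isclose(dot_ab, A @ B)
# ...but both stored quantities are full-length vectors (no space saved),
# and C is never involved, so dot(A, C) cannot be recovered from them.
```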
 
  • #2
zheng004 said:
Hi all,

Suppose we have vectors arriving in the order A, B, and then C, but A must be deleted before C arrives. How can we get the dot product between A and C? We are allowed to store some quantities computed from A before deleting its elements; for example, we could store the norm of A, dot(A, B), etc.

I tried storing (A + B).^2 and (A - B).^2, and then replacing B with C, but failed. Please help!

Store A in B and then dot B and C.
 
  • #3
Buffu said:
Store A in B and then dot B and C.

Thanks for your reply, but storing A in B is not allowed, since B will be used later. Deleting A is precisely what saves the space.
 
  • #4
zheng004 said:
Thanks for your reply, but storing A in B is not allowed, since B will be used later. Deleting A is precisely what saves the space.

In your post you said you are trying to store (A - B)^2 and (A + B)^2. Where are you storing these if space is limited?
 
  • #5
Buffu said:
In your post you said you are trying to store (A - B)^2 and (A + B)^2. Where are you storing these if space is limited?
Sorry, I should have said the square of the sums, i.e. store (a_1 + a_2 + ... + a_n - b_1 - b_2 - ... - b_n)^2 and (a_1 + a_2 + ... + a_n + b_1 + b_2 + ... + b_n)^2. These are scalar values, so they are easy to store.

I would very much appreciate any help. Thanks.
 
  • #6
zheng004 said:
Sorry, I should have said the square of the sums, i.e. store (a_1 + a_2 + ... + a_n - b_1 - b_2 - ... - b_n)^2 and (a_1 + a_2 + ... + a_n + b_1 + b_2 + ... + b_n)^2. These are scalar values, so they are easy to store.

I would very much appreciate any help. Thanks.

So you can store at most two floats?
 
  • #7
Can you provide some context here? This seems like an arbitrary constraint.
 
  • #8
Buffu said:
So you can store at most two floats?

We can store any values, as long as they take only limited space.
 
  • #9
jedishrfu said:
Can you provide some context here? This seems like an arbitrary constraint.
Ok, here is the context. Large vectors arrive in the order A, B, C. Because space is limited, we must delete A before receiving C. (B cannot be deleted, as it will be used later.) The problem is how to get dot(A, C). We are allowed to use extra but limited space, for example, to store one or a few intermediate values. Space is the only constraint.

Thanks a lot.
 
  • #10
zheng004 said:
We can store any value as long as using limited space.
My general idea is to store some calculation(s) involving A and B so that, after replacing B with C, dot(A, C) can be recovered from those stored values. But I cannot figure out the details.
 
  • #11
Okay, so is this a homework problem given in computer science?
 
  • #12
jedishrfu said:
Okay, so is this a homework problem given in computer science?
No, this is not a homework question. I would appreciate any help.
 
  • #13
Okay, so who placed this constraint on the problem? What are you trying to do here?
 
  • #14
I am inferring that you are operating over the reals (albeit with floating-point arithmetic). I am also assuming that these are LARGE vectors, i.e. not just 2 or 3 entries, not overly sparse, and with no other special structure you've omitted.

Now, from here, let's assume each vector has a squared L2 norm of 1. (If this is not the case, you can normalize them and keep the actual norms in a memo.)

Now run Gram-Schmidt on ##\mathbf a## vs ##\mathbf b## to get ##\mathbf v##. You now have ##\mathbf v = \mathbf b^{ \parallel a}##.

You can then compute ##\mathbf v^T \mathbf c## and adjust the scaling to get ##\mathbf a^T \mathbf c##.

Side note: there is a non-parallelizable, but numerically stable, version of Gram-Schmidt that you could use.

The open question is what you wanted to use ##\mathbf b## for. Depending on the use, the above may or may not work. If for some reason you need all of it, i.e. ##\mathbf b = \mathbf b^{ \parallel a} + \mathbf b^{ \perp a}##, then too bad; I think you're out of luck. In that case, it sounds like you're trying to solve for ##\mathbf x## in ##\mathbf {Ax} = \mathbf b##, where ##\mathbf x## and ##\mathbf b## are "big" vectors and ##\mathbf A## is a rank-one matrix. That problem is underspecified, just like your original post.

If you merely need to bound the dot product ##\mathbf a^T \mathbf c##, you can get bounds from Cauchy-Schwarz. But if you are looking to solve exactly... good luck solving a large n-dimensional system with a rank-one matrix.
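For concreteness, the projection step above can be sketched in NumPy. This is a minimal sketch with hypothetical random vectors; it assumes ##\mathbf a^T \mathbf b \neq 0##, and in the memory-constrained setting ##\mathbf v## would have to overwrite ##\mathbf b##:

```python
import numpy as np

rng = np.random.default_rng(1)
a, b, c = rng.standard_normal((3, 8))  # hypothetical vectors for illustration

# Gram-Schmidt-style projection: v is the component of b parallel to a.
v = (a @ b) / (a @ a) * a

# v is a scalar multiple of a, so v.c determines a.c up to a known scale;
# the scalars a.a and a.b fit in a small memo before a is deleted.
a_dot_c = (v @ c) * (a @ a) / (a @ b)
assert np.isclose(a_dot_c, a @ c)
```

As the thread goes on to discuss, this only helps if ##\mathbf b## may be overwritten by ##\mathbf v## (or ##\mathbf v## stored somewhere), since ##\mathbf v## is itself a full-length vector.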
 
  • #15
StoneTemplePython said:
I am inferring that you are operating over the reals (albeit with floating-point arithmetic). I am also assuming that these are LARGE vectors, i.e. not just 2 or 3 entries, not overly sparse, and with no other special structure you've omitted.

Now, from here, let's assume each vector has a squared L2 norm of 1. (If this is not the case, you can normalize them and keep the actual norms in a memo.)

Now run Gram-Schmidt on ##\mathbf a## vs ##\mathbf b## to get ##\mathbf v##. You now have ##\mathbf v = \mathbf b^{ \parallel a}##.

You can then compute ##\mathbf v^T \mathbf c## and adjust the scaling to get ##\mathbf a^T \mathbf c##.

Side note: there is a non-parallelizable, but numerically stable, version of Gram-Schmidt that you could use.

The open question is what you wanted to use ##\mathbf b## for. Depending on the use, the above may or may not work. If for some reason you need all of it, i.e. ##\mathbf b = \mathbf b^{ \parallel a} + \mathbf b^{ \perp a}##, then too bad; I think you're out of luck. In that case, it sounds like you're trying to solve for ##\mathbf x## in ##\mathbf {Ax} = \mathbf b##, where ##\mathbf x## and ##\mathbf b## are "big" vectors and ##\mathbf A## is a rank-one matrix. That problem is underspecified, just like your original post.

If you merely need to bound the dot product ##\mathbf a^T \mathbf c##, you can get bounds from Cauchy-Schwarz. But if you are looking to solve exactly... good luck solving a large n-dimensional system with a rank-one matrix.

Thanks for your input. But in order to calculate ##\mathbf v^T \mathbf c##, we would have to compute and store ##\mathbf v##; the problem is that ##\mathbf v## is a vector, and we do not have enough space to store it.
 
  • #16
zheng004 said:
Thanks for your input. But in order to calculate ##\mathbf v^T \mathbf c##, we would have to compute and store ##\mathbf v##; the problem is that ##\mathbf v## is a vector, and we do not have enough space to store it.

What I suggested is that you overwrite ##\mathbf b## with ##\mathbf v##, and store any length difference in a memo.

I should take a step back here. Have you studied linear algebra? Do you understand why you cannot solve a (large) system of n equations with a rank-one matrix?

Barring some kind of omitted special structure, that's what this problem reduces to.
 
  • #17
StoneTemplePython said:
What I suggested is that you overwrite ##\mathbf b## with ##\mathbf v##, and store any length difference in a memo.

I should take a step back here. Have you studied linear algebra? Do you understand why you cannot solve a (large) system of n equations with a rank-one matrix?

Barring some kind of omitted special structure, that's what this problem reduces to.

Thanks for your reply. Unfortunately, b cannot be overwritten, as we need to use it later.
I know we cannot get a unique solution to Ax = b, as you mentioned, which is why I posted the question here. This is a problem abstracted from my research; any input is appreciated.
 
  • #18
zheng004 said:
Thanks for your reply. Unfortunately, b cannot be overwritten, as we need to use it later.
I know we cannot get a unique solution to Ax = b, as you mentioned, which is why I posted the question here. This is a problem abstracted from my research; any input is appreciated.
If the vectors have only 2 elements, i.e. A = [a1, a2], B = [b1, b2], C = [c1, c2], then we can store two values: v1 = (a1 + a2 - b1 - b2) and v2 = (a1 + a2 + b1 + b2). Then we can get dot(A, C) = 1/4 * ((v2 - (b1 + b2 - c1 - c2))^2 - (v1 + b1 + b2 - c1 - c2)^2). But if the vectors are larger than 2 elements, I do not know how to figure it out.
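For what it's worth, expanding the identity above with hypothetical values shows what it actually computes: v2 - (b1 + b2 - c1 - c2) = (a1 + a2) + (c1 + c2) and v1 + (b1 + b2 - c1 - c2) = (a1 + a2) - (c1 + c2), so the right-hand side simplifies to (a1 + a2)(c1 + c2), the product of the component sums, which coincides with a1*c1 + a2*c2 only in special cases:

```python
# Hypothetical values to evaluate the stored-scalar identity from the post.
a1, a2 = 1.0, 2.0
b1, b2 = 5.0, -3.0
c1, c2 = 3.0, 4.0

v1 = a1 + a2 - b1 - b2          # stored before A is deleted
v2 = a1 + a2 + b1 + b2

rhs = 0.25 * ((v2 - (b1 + b2 - c1 - c2)) ** 2 - (v1 + b1 + b2 - c1 - c2) ** 2)
assert rhs == (a1 + a2) * (c1 + c2)   # product of component sums: 3 * 7 = 21
# The true dot product a1*c1 + a2*c2 = 11 differs here.
```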
 
  • #19
Closing thread for moderation
 

Related to Dot Product of Streaming Vectors

What is the dot product of streaming vectors?

The dot product of streaming vectors is a mathematical operation that produces a scalar from two vectors by multiplying their corresponding components and then summing the results.

How is the dot product of streaming vectors different from the dot product of regular vectors?

The dot product of streaming vectors is performed on vectors that are continuously changing or updating, such as in real-time data streams. This is different from the dot product of regular vectors, where the vectors are typically fixed or static.

What is the purpose of calculating the dot product of streaming vectors?

The dot product of streaming vectors is commonly used in various fields, such as signal processing, data analysis, and machine learning, to determine the similarity or correlation between two vectors. It can also be used to project one vector onto another and perform various calculations in vector spaces.

What are some applications of the dot product of streaming vectors?

The dot product of streaming vectors has many practical applications, such as in real-time data analysis, video and audio processing, and predictive modeling. It is also used in computer graphics and computer vision to determine the angle between two vectors.

How is the dot product of streaming vectors calculated?

The dot product of streaming vectors is calculated by multiplying the corresponding components of the two vectors and then summing the results: A · B = a_1b_1 + a_2b_2 + ... + a_nb_n. Equivalently, in geometric form, A · B = |A||B|cos θ, where |A| and |B| are the magnitudes of the vectors and θ is the angle between them.
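As a minimal illustration (hypothetical 3-component vectors), the componentwise and geometric forms agree:

```python
import math

# Componentwise dot product vs. the geometric form |A||B|cos(theta).
A = [1.0, 2.0, 2.0]
B = [2.0, 0.0, 0.0]

dot = sum(x * y for x, y in zip(A, B))   # componentwise sum of products
mag_a = math.sqrt(sum(x * x for x in A)) # |A| = 3
mag_b = math.sqrt(sum(y * y for y in B)) # |B| = 2
cos_theta = dot / (mag_a * mag_b)

assert math.isclose(dot, mag_a * mag_b * cos_theta)
```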
