Inverse (LS) problem for binary data

  • Context: Graduate
  • Thread starter: Zhiv
  • Tags: Binary Data, Inverse

Discussion Overview

The discussion revolves around methods for solving linear systems involving binary data, specifically the problem of recovering a binary vector f from measured binary data g using a matrix M. Participants explore various mathematical approaches and algorithms, including Tikhonov regularization, iterative methods, and integer programming, while addressing the challenges posed by the ill-conditioned nature of the inverse of M.

Discussion Character

  • Exploratory
  • Technical explanation
  • Debate/contested
  • Mathematical reasoning

Main Points Raised

  • Some participants inquire about established methods for solving linear systems with binary data, noting the ill-conditioned inverse of M complicates direct solutions.
  • One participant suggests neural network classifiers as a potential approach, while acknowledging computational challenges due to the scale of the problem.
  • Another participant proposes an iterative method that minimizes ||M*f-g||, applying constraints to reshape f during iterations, but expresses concerns about the mathematical robustness of this approach.
  • There is a discussion about the use of different norms, with one participant mentioning the Euclidean norm and the number of non-zero elements as possible metrics for optimization.
  • Some participants explore the idea of reformulating the problem as an integer programming challenge, suggesting that lattice reduction techniques might aid in finding efficient solutions.
  • Clarifications are sought regarding the definition of "binary," with some participants asking whether it refers to binary matrices or simply to elements restricted to 0s and 1s.
  • One participant shares specific examples of the data structure, emphasizing the Gaussian model of the point spread function used in their context.

Areas of Agreement / Disagreement

Participants express various viewpoints on potential methods for addressing the problem, with no consensus reached on a single effective solution. The discussion remains open, with multiple competing approaches and ongoing uncertainties regarding their effectiveness.

Contextual Notes

Participants note limitations related to the ill-conditioned nature of the matrix M, the computational challenges of proposed methods, and the specific requirements of the binary data involved. There are also unresolved mathematical steps in the iterative approaches discussed.

Zhiv
As per the topic: is there any well-established method for solving linear systems for binary data?
Pardon if this is in the wrong category; English is not my first language and I'm not that familiar with the English terms.

I.e. the classical g = M*f problem, where g is measured data and we want to know f.
In this case we can determine M with calibration data, but M has an ill-conditioned inverse, so the classical solution f = M^-1*g doesn't work.

Enter Tikhonov regularization, but it fails to be accurate enough.

The conjugate gradient method, i.e. solving min ||M*f - g||, might work if M were positive definite, but it is not (it is symmetric, though). Also, we require that every element of f and g is either 0 or 1, as the measured data g is in binary form. Google Scholar was of little help, so...

So, in short:

a) Are there any well-known tools for this problem when the data is binary?

b) Am I screwed?
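For reference, the setup described above can be sketched in a few lines of numpy, including a Tikhonov-regularized solve with the binary constraint imposed afterwards by rounding. This is only an illustration: the size n, the Gaussian width sigma, and the regularization weight lam are made-up toy values, not from the thread.

```python
import numpy as np

# Toy forward model: M is a Gaussian blur matrix (illustrative size/width).
n, sigma = 8, 1.0
i = np.arange(n)
M = np.exp(-(i[:, None] - i[None, :]) ** 2 / (2 * sigma ** 2))

f_true = np.array([0, 0, 0, 1, 1, 0, 0, 0], dtype=float)
g = (M @ f_true >= 0.5).astype(float)   # measured data is the binarized blur

# Tikhonov: minimize ||M f - g||^2 + lam ||f||^2, then round to {0, 1}.
lam = 1e-2
f_tik = np.linalg.solve(M.T @ M + lam * np.eye(n), M.T @ g)
f_hat = (f_tik >= 0.5).astype(float)
```

As the poster observes, rounding a Tikhonov solution need not shrink the blurred support back toward f_true: the quadratic regularizer knows nothing about the binary constraint.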
 
Try looking up neural network classifiers. I think I learned an algorithm for something like this once upon a time.
 
Zhiv said:
Is there any well established method for solving linear systems for binary data? [...]

What happens when you do row reduction?
 
bpet said:
What happens when you do row reduction?

Doesn't change the nature of the problem, as far as I can see. If the matrix has an ill-conditioned inverse (i.e. it exists but causes numerical problems), I can't see simple algebraic manipulation taking care of it.

As for the suggestion of neural networks: that could probably work, but since I'm required to solve the problem for hundreds of thousands of f for a given M, it might be computationally challenging.

I have an iterative idea where we minimize ||M*f - g|| by placing a constraint force on f so that it reshapes after each iteration until a minimum is reached. (After all, I know the blurring process M, so I know in which direction to push my guesses.) And this works. The problem is that it's pretty much mathematically non-robust.

But thanks for your help anyway; at least I know now that I'm not (hopefully) missing anything obvious.
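One generic way to make an iteration of this kind concrete is projected gradient descent: take a gradient step on ||M f - g||^2, then project back onto {0,1}^n by rounding. The sketch below is a heuristic under assumed toy data (the function name, matrix, and sizes are all illustrative), and like the thread's iteration it carries no convergence guarantee.

```python
import numpy as np

def binary_projected_gradient(M, g, f0, steps=50, tau=None):
    """Heuristic: minimize ||M f - g||^2 over {0,1}^n by
    gradient steps followed by rounding. Can stall in local minima."""
    if tau is None:
        tau = 1.0 / np.linalg.norm(M, 2) ** 2   # step size from spectral norm
    f = f0.astype(float)
    for _ in range(steps):
        f = f - tau * M.T @ (M @ f - g)   # gradient step on the residual
        f = (f >= 0.5).astype(float)      # project back onto {0, 1}
    return f

# Toy data (illustrative values only).
n = 8
i = np.arange(n)
M = np.exp(-(i[:, None] - i[None, :]) ** 2 / 2.0)
g = np.array([0, 0, 1, 1, 1, 1, 0, 0], dtype=float)
f = binary_projected_gradient(M, g, f0=g.copy())
```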
 
Zhiv said:
I have an iterative idea where we minimize ||M*f-g|| by placing constraint force on f so that it reshapes after each iteration until minimum is reached. [...]

Just out of curiosity, what norm is used? I'm assuming it's the number of nonzero elements.

Iterative solvers can be tricky to implement for binary linear systems, e.g.

M = [v1,v2], v1=[1,0,1,1,...,1]', v2=[0,1,1,1,...,1]'
g = [1,1,0,0,...,0]'

where the solution is obviously f = [1,1]' but any movement from [0,0]' to [1,0]' or [0,1]' increases ||M*f-g||.


Another approach could be to write it as an integer programming problem: minimize ||M*f - g - 2*h|| for integer f and h, subject to 0 <= f <= 1 and M*f >= 2*h. If M is constant and the dimension is not too big, it's possible that lattice reduction techniques will help simplify the equations and make repeated solutions more efficient.
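The stuck-iteration example above can be checked directly if the products are read modulo 2 (so that 1+1=0) and the "norm" is taken as the Hamming distance (number of mismatched bits). Both of those readings are assumptions; the length-6 vectors below are a finite stand-in for the "..." pattern.

```python
import numpy as np

# bpet's example over GF(2): columns v1, v2 and target g (length-6 version).
v1 = np.array([1, 0, 1, 1, 1, 1])
v2 = np.array([0, 1, 1, 1, 1, 1])
g  = np.array([1, 1, 0, 0, 0, 0])
M = np.column_stack([v1, v2])

def hamming_residual(f):
    """Number of positions where (M f mod 2) disagrees with g."""
    return int(np.sum((M @ f) % 2 != g))

# f = [1, 1] solves the system exactly...
exact = hamming_residual(np.array([1, 1]))
# ...but either single move away from [0, 0] makes the residual worse.
r00 = hamming_residual(np.array([0, 0]))
r10 = hamming_residual(np.array([1, 0]))
r01 = hamming_residual(np.array([0, 1]))
```

This is exactly the trap for a greedy coordinate-by-coordinate iteration: it must pass through a strictly worse point to reach the solution.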
 
bpet said:
Just out of curiosity what norm is used? [...]


The Euclidean norm is used at the moment. I have also toyed with the number of non-zero elements.
I guess they are pretty much the same when the problem is binary; I just lack the mathematical rigour to check this.

Well, I'm in the lucky situation that f and g are of the same size. Also, the properties of M are well known (it's basically a (Gaussian) model of a point spread function). It models blur in the depth direction in a 3D imaging modality.

Using the fact that M is a PSF and that we can use the blurred data as an initial guess, I 'squeeze' f smaller under the norm ||M*f - g||,

i.e. f_k = f_{k-1} - T(f''_{k-1}), where f'' is the discrete second derivative, and

T(x) = 1 if x is smaller than 0, else 0.

This is pretty much equal to deducing zero crossings from the vector, i.e. operating on

00111111000011100

would end up as

00011110000001000

This usually means that the blurred binary image is 'squeezed' to approximately the right size.
(Images are 3D and we are only interested in the blur in the blur direction.)
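That squeeze step can be written out for a single binary strip. With the discrete second derivative f''[i] = f[i-1] - 2 f[i] + f[i+1] (zero padding assumed at the ends), subtracting 1 wherever f'' < 0 erodes each run of ones by one element per side, reproducing the example strip above:

```python
import numpy as np

def squeeze(f):
    """One squeeze iteration: remove 1s where the discrete second
    derivative is negative, i.e. at the boundaries of each run of 1s."""
    fp = np.pad(f, 1)                       # zero-pad both ends
    d2 = fp[:-2] - 2 * fp[1:-1] + fp[2:]    # discrete second derivative
    return f - (d2 < 0).astype(int)         # T(x) = 1 if x < 0, else 0

f = np.array([int(c) for c in "00111111000011100"])
out = squeeze(f)   # erodes each run of 1s by one element per side
```

Note that for binary f a zero element always has f'' >= 0, so the subtraction never produces negative entries and the result stays binary.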

My worry (and the worry of my biophysicist advisor) is that this method is way too ad hoc, so I was probing these forums. I guess I should have mentioned all this in the first post, but I wanted to check that I hadn't missed anything blindingly obvious.

I will check that integer trick out anyway, just in case.
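For small dimensions, any integer formulation can at least be sanity-checked by brute force: enumerate all binary f and keep the one minimizing ||M*f - g||. This is exponential in n, so it is a check rather than a method for the hundreds-of-elements case; the matrix and data below are made-up toy values.

```python
import itertools
import numpy as np

def binary_least_squares(M, g):
    """Exact 0-1 least squares by enumeration (only viable for small n)."""
    n = M.shape[1]
    best_f, best_r = None, np.inf
    for bits in itertools.product([0, 1], repeat=n):
        f = np.array(bits, dtype=float)
        r = np.linalg.norm(M @ f - g)
        if r < best_r:
            best_f, best_r = f, r
    return best_f, best_r

# Toy example: g is generated exactly, so the global minimum is a perfect fit.
M = np.array([[2.0, 0.0], [0.0, 3.0], [1.0, 1.0]])
f_true = np.array([1.0, 0.0])
g = M @ f_true
f_best, r_best = binary_least_squares(M, g)
```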
 
Zhiv said:
Also, the properties of M are well known (it's basically a (Gaussian) model of a point spread function). It models blur in the depth direction in a 3D imaging modality.

Sorry, I should have asked earlier, but by "binary" do you mean that M, f, g are all binary matrices and the arithmetic uses 1+1=0, or that the elements of f are restricted to 0s and 1s?

The latter case might be easier to solve. Could you post some small (maximum 4x4) examples of M, f, g?
 
bpet said:
...by "binary" do you mean M, f, g are all binary matrices and the arithmetic uses 1+1=0, or do you mean that the elements of f are restricted to 0's and 1's? [...]


My bad. g (and f) are restricted to 0s and 1s. M is a matrix with real elements.

Well, the data usually has on the order of hundreds of elements, so I don't know how much use cutting it down would be.


I.e. basically one 'strip' of the image looks like

g = [0 0 1 1 1 1 0 0]^T

when it should look like

f = [0 0 0 1 1 0 0 0]^T

(Real data comes in classes of a few hundred elements, with M being n x n, where n is the number of elements in g.)
The blurring is modeled by a Gaussian function in M, but the inverse is ill-conditioned, so something must be done to recover the real information. For now I use the method described earlier, but I was worried that there was some easier, general solution to the problem.
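To see why the straight inverse fails in this setting, one can build a depth-blur matrix from a Gaussian PSF and inspect its condition number. The size and width below are illustrative assumptions, and the row normalization is one plausible convention for a PSF matrix, not something specified in the thread.

```python
import numpy as np

def gaussian_psf_matrix(n, sigma):
    """n x n blur matrix: row i holds a normalized Gaussian centered at i."""
    i = np.arange(n)
    M = np.exp(-(i[:, None] - i[None, :]) ** 2 / (2 * sigma ** 2))
    return M / M.sum(axis=1, keepdims=True)

M = gaussian_psf_matrix(100, sigma=2.0)
cond = np.linalg.cond(M)   # huge: M^-1 amplifies measurement noise wildly
```

A condition number this large means that the binarization error in g (which is on the order of 0.5 per element) is amplified far past the 0-to-1 signal range when f = M^-1*g is computed directly, which is the failure the thread starts from.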
 
