
Minimizing Chi^2, Singular Matrix problem

  1. Jan 25, 2016 #1

    Hepth

    Gold Member

    I want to construct a completely correlated chi^2.
    I have a two-dimensional dataset, and it's structured basically like this:

    {m1,m2,m3,m4}
    {a1,a2,a3,a4}
    {x0,x0,x0,x0}

    So m1-m4 and a1-a4 are all different, but every x0 is the same. This happens when I am fitting 2D data where the function is required to go to zero (or to some fixed point) along one axis.

    I have the entire correlation matrix and the covariance matrix; I just need to calculate the chi^2. Normally one would invert the covariance matrix and sandwich that inverse between two copies of the difference vector (my model minus the desired values).

    The problem is that the covariance matrix is not invertible, because it is singular: n of its rows are repeated. (Basically, the x0-by-x0 block of the correlation matrix is all 1's, so those rows are linearly dependent.)

    Is there a simple way around this? Precision doesn't matter too much here, as the error on the x0 measurements is large (60%), though the errors are identical for every x0.

    The chi^2 is used as the training objective for a neural network.

    I have included the correlation matrix here:

    Code (Text):

    correlation = {{1.0,0.98,0.94,0.89,0.83,0.77,0.71,-0.18,-0.16,-0.13,-0.10,-0.072,-0.042,-0.013,0.91,0.91,0.90,0.88,0.86,0.81,0.74,-0.22,-0.22,-0.22,-0.22,-0.22,-0.22,-0.22},{0.98,1.0,0.99,0.96,0.91,0.87,0.83,-0.0039,0.020,0.045,0.071,0.098,0.12,0.14,0.95,0.94,0.93,0.92,0.89,0.84,0.76,-0.24,-0.24,-0.24,-0.24,-0.24,-0.24,-0.24},{0.94,0.99,1.0,0.99,0.97,0.94,0.91,0.16,0.18,0.20,0.23,0.25,0.27,0.28,0.96,0.95,0.94,0.92,0.89,0.84,0.76,-0.24,-0.24,-0.24,-0.24,-0.24,-0.24,-0.24},{0.89,0.96,0.99,1.0,0.99,0.98,0.96,0.29,0.31,0.33,0.35,0.37,0.39,0.39,0.95,0.94,0.93,0.91,0.87,0.82,0.74,-0.24,-0.24,-0.24,-0.24,-0.24,-0.24,-0.24},{0.83,0.91,0.97,0.99,1.0,1.0,0.98,0.40,0.42,0.44,0.46,0.47,0.48,0.48,0.92,0.91,0.90,0.88,0.85,0.80,0.71,-0.24,-0.24,-0.24,-0.24,-0.24,-0.24,-0.24},{0.77,0.87,0.94,0.98,1.0,1.0,1.0,0.48,0.50,0.52,0.54,0.55,0.56,0.54,0.89,0.88,0.87,0.85,0.82,0.76,0.68,-0.23,-0.23,-0.23,-0.23,-0.23,-0.23,-0.23},{0.71,0.83,0.91,0.96,0.98,1.0,1.0,0.55,0.57,0.59,0.60,0.61,0.61,0.60,0.86,0.85,0.84,0.82,0.78,0.73,0.65,-0.23,-0.23,-0.23,-0.23,-0.23,-0.23,-0.23},{-0.18,-0.0039,0.16,0.29,0.40,0.48,0.55,1.0,1.0,1.0,0.99,0.98,0.96,0.91,0.14,0.14,0.14,0.14,0.14,0.13,0.13,0.090,0.090,0.090,0.090,0.090,0.090,0.090},{-0.16,0.020,0.18,0.31,0.42,0.50,0.57,1.0,1.0,1.0,1.0,0.99,0.97,0.93,0.16,0.16,0.16,0.17,0.17,0.17,0.16,0.11,0.11,0.11,0.11,0.11,0.11,0.11},{-0.13,0.045,0.20,0.33,0.44,0.52,0.59,1.0,1.0,1.0,1.0,0.99,0.98,0.94,0.19,0.19,0.19,0.19,0.20,0.20,0.20,0.13,0.13,0.13,0.13,0.13,0.13,0.13},{-0.10,0.071,0.23,0.35,0.46,0.54,0.60,0.99,1.0,1.0,1.0,1.0,0.99,0.96,0.21,0.22,0.22,0.23,0.23,0.24,0.24,0.17,0.17,0.17,0.17,0.17,0.17,0.17},{-0.072,0.098,0.25,0.37,0.47,0.55,0.61,0.98,0.99,0.99,1.0,1.0,1.0,0.97,0.23,0.24,0.25,0.26,0.27,0.28,0.29,0.22,0.22,0.22,0.22,0.22,0.22,0.22},{-0.042,0.12,0.27,0.39,0.48,0.56,0.61,0.96,0.97,0.98,0.99,1.0,1.0,0.99,0.25,0.26,0.28,0.29,0.31,0.33,0.35,0.30,0.30,0.30,0.30,0.30,0.30,0.30},{-0.013,0.14,0.28,0.39,0.48,0.54,0.60,0.91,0.93,0.94,0.96,0.97,0.99,1.0,0.26,0.28,0.30,0.32,0.35,0.38,0.41,0.40,0.40,0.40,0.40,0.40,0.40,0.40},{0.91,0.95,0.96,0.95,0.92,0.89,0.86,0.14,0.16,0.19,0.21,0.23,0.25,0.26,1.0,1.0,0.99,0.98,0.96,0.92,0.85,-0.15,-0.15,-0.15,-0.15,-0.15,-0.15,-0.15},{0.91,0.94,0.95,0.94,0.91,0.88,0.85,0.14,0.16,0.19,0.22,0.24,0.26,0.28,1.0,1.0,1.0,0.99,0.97,0.94,0.87,-0.10,-0.10,-0.10,-0.10,-0.10,-0.10,-0.10},{0.90,0.93,0.94,0.93,0.90,0.87,0.84,0.14,0.16,0.19,0.22,0.25,0.28,0.30,0.99,1.0,1.0,1.0,0.99,0.96,0.90,-0.043,-0.043,-0.043,-0.043,-0.043,-0.043,-0.043},{0.88,0.92,0.92,0.91,0.88,0.85,0.82,0.14,0.17,0.19,0.23,0.26,0.29,0.32,0.98,0.99,1.0,1.0,1.0,0.98,0.93,0.031,0.031,0.031,0.031,0.031,0.031,0.031},{0.86,0.89,0.89,0.87,0.85,0.82,0.78,0.14,0.17,0.20,0.23,0.27,0.31,0.35,0.96,0.97,0.99,1.0,1.0,0.99,0.96,0.12,0.12,0.12,0.12,0.12,0.12,0.12},{0.81,0.84,0.84,0.82,0.80,0.76,0.73,0.13,0.17,0.20,0.24,0.28,0.33,0.38,0.92,0.94,0.96,0.98,0.99,1.0,0.99,0.24,0.24,0.24,0.24,0.24,0.24,0.24},{0.74,0.76,0.76,0.74,0.71,0.68,0.65,0.13,0.16,0.20,0.24,0.29,0.35,0.41,0.85,0.87,0.90,0.93,0.96,0.99,1.0,0.39,0.39,0.39,0.39,0.39,0.39,0.39},{-0.22,-0.24,-0.24,-0.24,-0.24,-0.23,-0.23,0.090,0.11,0.13,0.17,0.22,0.30,0.40,-0.15,-0.10,-0.043,0.031,0.12,0.24,0.39,1.0,1.0,1.0,1.0,1.0,1.0,1.0},{-0.22,-0.24,-0.24,-0.24,-0.24,-0.23,-0.23,0.090,0.11,0.13,0.17,0.22,0.30,0.40,-0.15,-0.10,-0.043,0.031,0.12,0.24,0.39,1.0,1.0,1.0,1.0,1.0,1.0,1.0},{-0.22,-0.24,-0.24,-0.24,-0.24,-0.23,-0.23,0.090,0.11,0.13,0.17,0.22,0.30,0.40,-0.15,-0.10,-0.043,0.031,0.12,0.24,0.39,1.0,1.0,1.0,1.0,1.0,1.0,1.0},{-0.22,-0.24,-0.24,-0.24,-0.24,-0.23,-0.23,0.090,0.11,0.13,0.17,0.22,0.30,0.40,-0.15,-0.10,-0.043,0.031,0.12,0.24,0.39,1.0,1.0,1.0,1.0,1.0,1.0,1.0},{-0.22,-0.24,-0.24,-0.24,-0.24,-0.23,-0.23,0.090,0.11,0.13,0.17,0.22,0.30,0.40,-0.15,-0.10,-0.043,0.031,0.12,0.24,0.39,1.0,1.0,1.0,1.0,1.0,1.0,1.0},{-0.22,-0.24,-0.24,-0.24,-0.24,-0.23,-0.23,0.090,0.11,0.13,0.17,0.22,0.30,0.40,-0.15,-0.10,-0.043,0.031,0.12,0.24,0.39,1.0,1.0,1.0,1.0,1.0,1.0,1.0},{-0.22,-0.24,-0.24,-0.24,-0.24,-0.23,-0.23,0.090,0.11,0.13,0.17,0.22,0.30,0.40,-0.15,-0.10,-0.043,0.031,0.12,0.24,0.39,1.0,1.0,1.0,1.0,1.0,1.0,1.0}}
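    For reference, a quick sanity check in Mathematica confirms the rank deficiency (the last 7 rows above are identical):

    Code (Text):

    (* the matrix is 28x28, but the repeated rows reduce its rank *)
    Dimensions[correlation]   (* {28, 28} *)
    MatrixRank[correlation]   (* less than 28, so Inverse[correlation] fails *)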
     
    Thanks for any possible help.
     
  3. Jan 25, 2016 #2

    mfb

    2016 Award

    Staff: Mentor

    What do you fit to what? Are all the mi and ai different fit parameters? Then you have 9 fit parameters, and I don't understand why you write them as a 3x4 matrix; the covariance matrix would be 9x9.
     
  4. Jan 25, 2016 #3

    Hepth

    Gold Member

    No, that was just an example of the structure. There are no real "fit parameters" other than the weights of the neural network.

    I will be extremely clear in this case.

    Basically my goal is to construct a goodness-of-fit measure for my neural network. I have 28 "desired values":

    Code (Text):

    desired = {1.020,1.021,1.022,1.023,1.024,1.025,1.028,-0.08190,-0.08207,-0.08278,-0.08438,-0.08752,-0.09339,-0.1044,0.1504,0.1418,0.1319,0.1203,0.1067,0.09051,0.07098,-0.05653,-0.05653,-0.05653,-0.05653,-0.05653,-0.05653,-0.05653}
     
    These came from a formula I have that depends on a lot of other parameters. From those parameters and their uncertainties, I can analytically calculate the full covariance matrix of these measurements:

    Code (Text):

    covariance = {{0.00001568, 0.00001760, 0.00001993, 0.00002282, 0.00002649, 0.00003134, 0.00003800, -0.00001675, -0.00001488, -0.00001278, -0.00001042, -7.782*10^-6, -4.850*10^-6, -1.679*10^-6, 0.0001327, 0.0001275, 0.0001214, 0.0001144, 0.0001061, 0.00009631, 0.00008447, -0.00001882, -0.00001882, -0.00001882, -0.00001882, -0.00001882, -0.00001882, -0.00001882}, {0.00001760, 0.00002040, 0.00002380, 0.00002802, 0.00003339, 0.00004046, 0.00005021, -4.180*10^-7, 2.142*10^-6, 5.039*10^-6, 8.332*10^-6, 0.00001209, 0.00001637, 0.00002120, 0.0001578, 0.0001514, 0.0001440, 0.0001354, 0.0001253, 0.0001134, 0.00009906, -0.00002274, -0.00002274, -0.00002274, -0.00002274, -0.00002274, -0.00002274, -0.00002274}, {0.00001993, 0.00002380, 0.00002850, 0.00003433, 0.00004177, 0.00005156, 0.00006505, 0.00001943, 0.00002283, 0.00002670, 0.00003113, 0.00003624, 0.00004216, 0.00004902, 0.0001882, 0.0001804, 0.0001714, 0.0001609, 0.0001487, 0.0001342, 0.0001168, -0.00002749, -0.00002749, -0.00002749, -0.00002749, -0.00002749, -0.00002749, -0.00002749}, {0.00002282, 0.00002802, 0.00003433, 0.00004218, 0.00005217, 0.00006533, 0.00008346, 0.00004408, 0.00004852, 0.00005360, 0.00005944, 0.00006623, 0.00007418, 0.00008356, 0.0002259, 0.0002164, 0.0002053, 0.0001926, 0.0001776, 0.0001599, 0.0001388, -0.00003338, -0.00003338, -0.00003338, -0.00003338, -0.00003338, -0.00003338, -0.00003338}, {0.00002649, 0.00003339, 0.00004177, 0.00005217, 0.00006542, 0.00008288, 0.0001069, 0.00007552, 0.00008128, 0.00008789, 0.00009554, 0.0001045, 0.0001150, 0.0001276, 0.0002740, 0.0002622, 0.0002486, 0.0002329, 0.0002145, 0.0001927, 0.0001668, -0.00004087, -0.00004087, -0.00004087, -0.00004087, -0.00004087, -0.00004087, -0.00004087}, {0.00003134, 0.00004046, 0.00005156, 0.00006533, 0.00008288, 0.0001060, 0.0001379, 0.0001170, 0.0001245, 0.0001331, 0.0001431, 0.0001549, 0.0001689, 0.0001857, 0.0003374, 0.0003226, 0.0003056, 0.0002860, 0.0002630, 0.0002359, 0.0002036, -0.00005073, -0.00005073, -0.00005073, -0.00005073, -0.00005073, -0.00005073, -0.00005073}, {0.00003800, 0.00005021, 0.00006505, 0.00008346, 0.0001069, 0.0001379, 0.0001805, 0.0001742, 0.0001841, 0.0001955, 0.0002088, 0.0002245, 0.0002431, 0.0002658, 0.0004246, 0.0004058, 0.0003841, 0.0003591, 0.0003299, 0.0002954, 0.0002544, -0.00006428, -0.00006428, -0.00006428, -0.00006428, -0.00006428, -0.00006428, -0.00006428}, {-0.00001675, -4.180*10^-7, 0.00001943, 0.00004408, 0.00007552, 0.0001170, 0.0001742, 0.0005510, 0.0005644, 0.0005807, 0.0006012, 0.0006276, 0.0006628, 0.0007124, 0.0001193, 0.0001154, 0.0001110, 0.0001061, 0.0001006, 0.00009478, 0.00008877, 0.00004468, 0.00004468, 0.00004468, 0.00004468, 0.00004468, 0.00004468, 0.00004468}, {-0.00001488, 2.142*10^-6, 0.00002283, 0.00004852, 0.00008128, 0.0001245, 0.0001841, 0.0005644, 0.0005787, 0.0005963, 0.0006184, 0.0006470, 0.0006854, 0.0007396, 0.0001432, 0.0001393, 0.0001350, 0.0001301, 0.0001247, 0.0001189, 0.0001129, 0.00005493, 0.00005493, 0.00005493, 0.00005493, 0.00005493, 0.00005493, 0.00005493}, {-0.00001278, 5.039*10^-6, 0.00002670, 0.00005360, 0.00008789, 0.0001331, 0.0001955, 0.0005807, 0.0005963, 0.0006155, 0.0006398, 0.0006713, 0.0007138, 0.0007743, 0.0001702, 0.0001666, 0.0001627, 0.0001582, 0.0001534, 0.0001482, 0.0001429, 0.00007025, 0.00007025, 0.00007025, 0.00007025, 0.00007025, 0.00007025, 0.00007025}, {-0.00001042, 8.332*10^-6, 0.00003113, 0.00005944, 0.00009554, 0.0001431, 0.0002088, 0.0006012, 0.0006184, 0.0006398, 0.0006669, 0.0007023, 0.0007506, 0.0008199, 0.0002007, 0.0001980, 0.0001950,
0.0001917, 0.0001881, 0.0001844, 0.0001808, 0.00009336, 0.00009336, 0.00009336, 0.00009336, 0.00009336, 0.00009336, 0.00009336}, {-7.782*10^-6, 0.00001209, 0.00003624, 0.00006623, 0.0001045, 0.0001549, 0.0002245, 0.0006276, 0.0006470, 0.0006713, 0.0007023, 0.0007434, 0.0007999, 0.0008818, 0.0002353, 0.0002343, 0.0002333, 0.0002322, 0.0002313, 0.0002306, 0.0002306, 0.0001290, 0.0001290, 0.0001290, 0.0001290, 0.0001290, 0.0001290, 0.0001290}, {-4.850*10^-6, 0.00001637, 0.00004216, 0.00007418, 0.0001150, 0.0001689, 0.0002431, 0.0006628, 0.0006854, 0.0007138, 0.0007506, 0.0007999, 0.0008685, 0.0009696, 0.0002743, 0.0002766, 0.0002793, 0.0002825, 0.0002865, 0.0002916, 0.0002984, 0.0001856, 0.0001856, 0.0001856, 0.0001856, 0.0001856, 0.0001856, 0.0001856}, {-1.679*10^-6, 0.00002120, 0.00004902, 0.00008356, 0.0001276, 0.0001857, 0.0002658, 0.0007124, 0.0007396, 0.0007743, 0.0008199, 0.0008818, 0.0009696, 0.001101, 0.0003177, 0.0003261, 0.0003357, 0.0003470, 0.0003603, 0.0003763, 0.0003960, 0.0002796, 0.0002796, 0.0002796, 0.0002796, 0.0002796, 0.0002796, 0.0002796}, {0.0001327, 0.0001578, 0.0001882, 0.0002259, 0.0002740, 0.0003374, 0.0004246, 0.0001193, 0.0001432, 0.0001702, 0.0002007, 0.0002353, 0.0002743, 0.0003177, 0.001355, 0.001306, 0.001249, 0.001184, 0.001107, 0.001016, 0.0009057, -0.0001167, -0.0001167, -0.0001167, -0.0001167, -0.0001167, -0.0001167, -0.0001167}, {0.0001275, 0.0001514, 0.0001804, 0.0002164, 0.0002622, 0.0003226, 0.0004058, 0.0001154, 0.0001393, 0.0001666, 0.0001980, 0.0002343, 0.0002766, 0.0003261, 0.001306, 0.001262, 0.001211, 0.001151, 0.001082, 0.0009990, 0.0008992, -0.00007686, -0.00007686, -0.00007686, -0.00007686, -0.00007686, -0.00007686, -0.00007686}, {0.0001214, 0.0001440, 0.0001714, 0.0002053, 0.0002486, 0.0003056, 0.0003841, 0.0001110, 0.0001350, 0.0001627, 0.0001950, 0.0002333, 0.0002793, 0.0003357, 0.001249, 0.001211, 0.001166, 0.001114, 0.001053, 0.0009797, 0.0008916, -0.00003127, -0.00003127, -0.00003127, -0.00003127, -0.00003127, -0.00003127, -0.00003127}, {0.0001144, 0.0001354, 0.0001609, 0.0001926, 0.0002329, 0.0002860, 0.0003591, 0.0001061, 0.0001301, 0.0001582, 0.0001917, 0.0002322, 0.0002825, 0.0003470, 0.001184, 0.001151, 0.001114, 0.001070, 0.001019, 0.0009570, 0.0008824, 0.00002131, 0.00002131, 0.00002131, 0.00002131, 0.00002131, 0.00002131, 0.00002131}, {0.0001061, 0.0001253, 0.0001487, 0.0001776, 0.0002145, 0.0002630, 0.0003299, 0.0001006, 0.0001247, 0.0001534, 0.0001881, 0.0002313, 0.0002865, 0.0003603, 0.001107, 0.001082, 0.001053, 0.001019, 0.0009784, 0.0009300, 0.0008711, 0.00008245, 0.00008245, 0.00008245, 0.00008245, 0.00008245, 0.00008245, 0.00008245}, {0.00009631, 0.0001134, 0.0001342, 0.0001599, 0.0001927, 0.0002359, 0.0002954, 0.00009478, 0.0001189, 0.0001482, 0.0001844, 0.0002306, 0.0002916, 0.0003763, 0.001016, 0.0009990, 0.0009797, 0.0009570, 0.0009300, 0.0008972, 0.0008568, 0.0001541, 0.0001541, 0.0001541, 0.0001541, 0.0001541, 0.0001541, 0.0001541}, {0.00008447, 0.00009906, 0.0001168, 0.0001388, 0.0001668, 0.0002036, 0.0002544, 0.00008877, 0.0001129, 0.0001429, 0.0001808, 0.0002306, 0.0002984, 0.0003960, 0.0009057, 0.0008992, 0.0008916, 0.0008824, 0.0008711, 0.0008568, 0.0008383, 0.0002384, 0.0002384, 0.0002384, 0.0002384, 0.0002384, 0.0002384, 0.0002384}, {-0.00001882, -0.00002274, -0.00002749, -0.00003338, -0.00004087, -0.00005073, -0.00006428, 0.00004468, 0.00005493, 0.00007025, 0.00009336, 0.0001290, 0.0001856, 0.0002796, -0.0001167, -0.00007686, -0.00003127, 0.00002131, 0.00008245, 0.0001541, 0.0002384, 0.0004508, 0.0004508, 
0.0004508, 0.0004508, 0.0004508, 0.0004508, 0.0004508}, {-0.00001882, -0.00002274, -0.00002749, -0.00003338, -0.00004087, -0.00005073, -0.00006428, 0.00004468, 0.00005493, 0.00007025, 0.00009336, 0.0001290, 0.0001856, 0.0002796, -0.0001167, -0.00007686, -0.00003127, 0.00002131, 0.00008245, 0.0001541, 0.0002384, 0.0004508, 0.0004508, 0.0004508, 0.0004508, 0.0004508, 0.0004508, 0.0004508}, {-0.00001882, -0.00002274, -0.00002749, -0.00003338, -0.00004087, -0.00005073, -0.00006428, 0.00004468, 0.00005493, 0.00007025, 0.00009336, 0.0001290, 0.0001856, 0.0002796, -0.0001167, -0.00007686, -0.00003127, 0.00002131, 0.00008245, 0.0001541, 0.0002384, 0.0004508, 0.0004508, 0.0004508, 0.0004508, 0.0004508, 0.0004508, 0.0004508}, {-0.00001882, -0.00002274, -0.00002749, -0.00003338, -0.00004087, -0.00005073, -0.00006428, 0.00004468, 0.00005493, 0.00007025, 0.00009336, 0.0001290, 0.0001856, 0.0002796, -0.0001167, -0.00007686, -0.00003127, 0.00002131, 0.00008245, 0.0001541, 0.0002384, 0.0004508, 0.0004508, 0.0004508, 0.0004508, 0.0004508, 0.0004508, 0.0004508}, {-0.00001882, -0.00002274, -0.00002749, -0.00003338, -0.00004087, -0.00005073, -0.00006428, 0.00004468, 0.00005493, 0.00007025, 0.00009336, 0.0001290, 0.0001856, 0.0002796, -0.0001167, -0.00007686, -0.00003127, 0.00002131, 0.00008245, 0.0001541, 0.0002384, 0.0004508, 0.0004508, 0.0004508, 0.0004508, 0.0004508, 0.0004508, 0.0004508}, {-0.00001882, -0.00002274, -0.00002749, -0.00003338, -0.00004087, -0.00005073, -0.00006428, 0.00004468, 0.00005493, 0.00007025, 0.00009336, 0.0001290, 0.0001856, 0.0002796, -0.0001167, -0.00007686, -0.00003127, 0.00002131, 0.00008245, 0.0001541, 0.0002384, 0.0004508, 0.0004508, 0.0004508, 0.0004508, 0.0004508, 0.0004508, 0.0004508}, {-0.00001882, -0.00002274, -0.00002749, -0.00003338, -0.00004087, -0.00005073, -0.00006428, 0.00004468, 0.00005493, 0.00007025, 0.00009336, 0.0001290, 0.0001856, 0.0002796, -0.0001167, -0.00007686, -0.00003127, 0.00002131, 0.00008245, 0.0001541, 0.0002384, 0.0004508, 0.0004508, 0.0004508, 0.0004508, 0.0004508, 0.0004508, 0.0004508}}
     
    The variances are the diagonal of that; the errors are their square roots.

    Now, I would normally compute:

    $$\chi^2 = (NN - \text{desired})^{T} \, C^{-1} \, (NN - \text{desired})$$

    And I adjust the network parameters that produce the vector "NN" until the chi^2/dof is 1.
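    In Mathematica this would be something like the sketch below ("nn" standing for the network's output vector):

    Code (Text):

    (* standard correlated chi^2; this only works while the
       covariance matrix is invertible *)
    diff = nn - desired;
    chi2 = diff.Inverse[covariance].diff;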

    The problem I run into now is that there are measurements that are 100% correlated with identical errors (the last 7 entries come from the same algebraic expression, with the same error, even though they are calculated at different points).

    This can be pictured as sampling a 2D function that must go to zero along one axis, say at x=0 for all y (like f[x,y]*Sin[x]).
    So when you take several (here 7) samples along that line, the desired value is "0" for each of them, with the same error. This makes the correlation matrix singular and non-invertible.

    Does that help?

    The goal in the end is twofold: construct a good chi^2, and also generate random vectors of "desired" with the proper correlations. I would have done the latter with the Cholesky decomposition method, but now that the correlation matrix isn't positive definite, I don't know what to do.
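    For reference, the generation step I had in mind is sketched below; CholeskyDecomposition complains here because the matrix is only positive semi-definite:

    Code (Text):

    (* draw one random "desired" vector with the given covariance;
       CholeskyDecomposition requires a positive definite matrix *)
    u = CholeskyDecomposition[covariance];  (* upper triangular, Transpose[u].u == covariance *)
    sample = desired + Transpose[u].RandomVariate[NormalDistribution[], Length[desired]];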

    I know there is a PseudoInverse, but I don't know how statistically sound that is, or whether it would reproduce the proper correlations.
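    That would be something like this, which at least evaluates:

    Code (Text):

    (* Moore-Penrose pseudoinverse in place of the ordinary inverse;
       I don't know if this is statistically justified *)
    chi2 = (nn - desired).PseudoInverse[covariance].(nn - desired);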
     
  5. Jan 25, 2016 #4

    Hepth

    Gold Member

    I think it might work to take the error of each of the offending entries and jitter it by a small random amount (say, for a 30% error, shift it by +-0.1%*Random[]), because at this level the "error on the error" doesn't matter. This makes the entries linearly independent, and I checked that it doesn't seem to change the correlations much.

    I wonder if that is ok.

    So the error on each of them would have been D[f[x],x]*deltax, but now I use D[f[x],x]*deltax*(1.0 + 0.01*Random[]).
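    As a sketch, using the covariance matrix from my earlier post, the whole regularization would be:

    Code (Text):

    (* inflate only the variances (the diagonal) by a random factor of up
       to 1%; the off-diagonal covariances stay fixed, so the implied
       correlations drop just below 100% and the matrix becomes positive
       definite *)
    jitter = 1.0 + 0.01*RandomReal[{0, 1}, Length[covariance]];
    covJittered = covariance + DiagonalMatrix[Diagonal[covariance]*(jitter^2 - 1)];
    PositiveDefiniteMatrixQ[covJittered]   (* True *)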
     
  6. Jan 25, 2016 #5

    mfb

    2016 Award

    Staff: Mentor

    I'm not sure the 100% correlation is well justified (see below), but reducing it to 99% could work. It certainly makes the Cholesky decomposition possible. You can cross-check with values like 98% and 99.9% to verify that the calculation is stable.
    Don't use more than 100% correlation; in general that leads to ill-defined results.
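    As a sketch (this shrinks every correlation by 1%, not only the degenerate block, using the correlation matrix from the first post):

    Code (Text):

    (* shrink the correlation matrix slightly towards the identity;
       the 100% correlations become 99% and Cholesky succeeds *)
    corr99 = 0.99*correlation + 0.01*IdentityMatrix[Length[correlation]];
    CholeskyDecomposition[corr99];   (* no longer complains *)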

    There is a mathematical issue with the 100% correlation if the neural net output has no uncertainties: it means you are absolutely sure that the last 7 entries have to be exactly the same. If they differ in the NN output by even the tiniest amount, your chi^2 is "infinity" - there is no possible way your prediction could be wrong in that way.
     