I Observational evidence against expanding universe in MNRAS

  • #51
elerner said:
Your personal cosmological model, I guess,
Nope, see the references I did post. All of them clearly state that the universe is only assumed to be homogeneous and isotropic at large scales. Nothing personal about that. Perhaps you need to take an introductory cosmology course.

The “at large scales” assumption is ubiquitous. It is literally one of the foundational assumptions. At large scales -> homogeneous and isotropic assumed to apply -> with GR gives FLRW -> with observations gives LCDM. Take away the initial assumption and the rest goes away too.

This is not my personal idea. This is just me pointing out the assumptions of the model, which you appear to be overlooking. You would rather attack a straw man LCDM that claims to work at all scales rather than the actual model, which clearly claims to work only at large enough scales.

elerner said:
show me one citation that says the expansion hypothesis, which is what I am testing, does not apply on scales of 800 Mpc.
It is getting a little tiresome asking me to post references for points that I am not making. Next time you ask me to provide a reference, please quote me exactly to indicate which of my actual comments you want a reference for.
 
Last edited:
  • #52
elerner - The discrepancy you point to in the first figure of your press release can be resolved by modeling an accelerating universe, not a constantly expanding one. Apart from that, kimbyd is correct in pointing out that if there is an inconsistency between the properties of galaxies and the expansion of the universe, it is most likely our knowledge about galaxies that needs to be revised. If galaxies in the early universe, for example, didn't merge from smaller parts but were formed whole soon after recombination (which, via the cosmic microwave background, is further and more compelling evidence for an expanding universe), then that would eliminate your objections.
 
  • #53
Dale,
I can make little arrow diagrams too.

Expanding universe hypothesis at all scales->Tolman's analysis at all scales->prediction of increasing apparent radius at distance beyond z=1.25 and specific quantitative relation of galaxy sizes at all z to size at z=0.

That last prediction is what is contradicted by observation. No matter what you assert, you can't find one published reference, and you have not cited one, that limits the predictions that my paper tested to any scale on which the Hubble relation operates. The Hubble relation has been observed down to scales of 10 Mpc, far below any scales measured in my paper. So your assertion that the expanding universe hypothesis only operates at large scales is without any support.

And, Alantheastronomer, you can read Tolman's original papers. The calculation applies to ALL expansion, irrespective of rate. The only assumption is that the Hubble relation is due entirely to expansion. For all such models, the surface brightness of identical objects decreases exactly as (1+z)^-3.

It is true that to test this hypothesis, you have to assume a luminosity-distance formula. In testing the expanding hypothesis, I use the current LCDM formula, which includes the effects of dark energy and dark matter.
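For concreteness, here is a minimal sketch (my own illustration, not the paper's code) of the two quantities at stake, using astropy's built-in concordance cosmology; the 5 kpc radius is an arbitrary assumed value:

```python
# Sketch of the Tolman-test quantities under concordance LCDM.
import numpy as np
import astropy.units as u
from astropy.cosmology import Planck15 as cosmo

z = np.linspace(0.05, 5.0, 500)

# Apparent angular radius of an object of fixed physical radius:
# theta = r / d_A(z).  In an expanding model d_A(z) has a maximum,
# so apparent sizes reach a minimum and then grow again at higher z.
r_phys = 5.0 * u.kpc
theta_arcsec = (r_phys / cosmo.angular_diameter_distance(z)).decompose().value * 206265.0
print("apparent size is smallest near z =", round(z[np.argmin(theta_arcsec)], 2))

# Tolman dimming of identical objects, in AB magnitudes (per unit
# frequency): surface brightness falls as (1+z)^-3, i.e. by
# 2.5 * 3 * log10(1+z) magnitudes, independent of the expansion rate.
dimming_mag = 7.5 * np.log10(1.0 + z)
```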
 
  • Like
Likes nnunn
  • #54
Peter,

Science is about observations of nature. We can only base science on what is observed or observable. If we find, based on observation, that, at all scales we can observe, GM/r<<c^2, where M is the mass contained in a radius r, then we can conclude that GR is, on large scales, a small correction. If you want to argue that what we believe about parts of the universe that we can't observe determines truth, then you are in the realm of religious faith, not science.

I am not asserting that we have found that inequality to be true on all scales yet. But it is certainly not ruled out either.
 
  • #55
elerner said:
Expanding universe hypothesis at all scales->
That is not a prediction of the standard LCDM cosmology.
 
  • #56
Why don't you provide a quotation from and citation of a peer-reviewed published work that backs up your assertion? Since astrophysics is a quantitative science, I also suggest your quotation define what, quantitatively, is a "small" scale excluded from the expanding universe hypothesis. The low-z measurements in my paper are measured on scales of 200-800 Mpc.

In fact you will find that the only scales excluded from the expanding universe hypothesis are those in which matter is gravitationally bound, like clusters of galaxies on scales of one to a few Mpc.
 
  • #57
Hi Eric;

It appears that you may not have considered the impact on angular resolution which results from the dissimilar filters used in the HUDF and Galex datasets?
Rather than Hubble resolving objects 1/38 the size of those Galex can resolve, our estimate is much more modest, at about 5/8.

If you had used the near IR data, instead of NUV (near-ultraviolet), for the HUDF dataset, the filter wavelength is nearly doubled and HST can then resolve objects only about 1/20 the size of those Galex can (instead of the 1/38 you mention).

Did you test the impact this will have and does it alter any of your conclusions?
Cheers
 
  • #58
elerner said:
If we find, based on observation, that, at all scales we can observe, GM/r<<c^2, where M is the mass contained in a radius r, then we can conclude that GR is, on large scales, a small correction.

No, you can't. You could if you knew a priori that spacetime was static, but you don't know that a priori. FRW spacetimes are examples of non-static spacetimes where, even if the condition you describe holds, the geometry of the spacetime still is not flat spacetime plus "a small correction".
 
  • #59
Hi SelfSim,
If you read our 2014 paper, we describe that we used the datasets themselves to determine the actual resolutions of the two scopes. In other words, we used the cutoff radius below which the images could not be distinguished from point images--i.e., they had high stellarity. There was a sharp cutoff for both scopes.
 
  • #60
elerner said:
Hi SelfSim,
If you read our 2014 paper, we describe that we used the datasets themselves to determine the actual resolutions of the two scopes. In other words, we used the cutoff radius below which the images could not be distinguished from point images--i.e., they had high stellarity. There was a sharp cutoff for both scopes.
Eric;

We understand that part of your methodology, but our query is about the selection of datasets from Galex and HUDF, respectively (as a check).

You say: "To satisfy this condition and properly compare galaxies up to z~5, we have chosen two reference ultraviolet bands, namely the FUV (1550 Å) and NUV (2300 Å) bands as defined by the GALEX satellite, enabling the creation of 8 pairs of samples matched to the HUDF data".

To clarify: Did you use data from the F435W filter? (We've assumed this, as it would be the closest match to the Galex far and near ultraviolet images).
 
  • #61
Others: please bear with me on this query about the 2014 paper .. we believe it has significant bearing on the conclusions of Eric's recent MNRAS paper.

Eric;
These are the cutoff radius results from your 2014 paper,
Lerner et al said:
For GALEX this cutoff is at a radius of 2.4 +/- 0.1 arcsec for galaxies observed in the FUV and 2.6 +/- 0.2 arcsec for galaxies observed in the NUV, while for Hubble this cutoff is at a radius of 0.066 +/- 0.002 arcsec, where the errors are the 1σ statistical uncertainty.
While the Hubble cutoff of 0.066 arcsec compares with a theoretical resolution of 0.05 arcsec using the F435W filter, the Galex result of 2.4 arcsec is 30X higher than the theoretical value of 0.08 arcsec in FUV!

Something appears to be in error here.
I suppose it may be possible that the Galex optics were of catastrophically low quality, which would explain this major discrepancy; however, if that unlikely possibility were so, then no useful science would be possible either.

This discrepancy is more likely to be due to an error elsewhere.
Cheers
 
  • #62
elerner said:
Why don't you provide a quotation from and citation of a peer-reviewed published work that backs up your assertion?
Which assertion? Please use the quote feature. That the LCDM model only works at large scales? I already provided three references. More exist, but three are sufficient.
 
Last edited:
  • #63
Self sim,

Not only the 435 filter. For each redshift range, we match the HST filter with either FUV or NUV filter to get the same wavelength band in the rest frames. So we actually have eight separate matched sets. All described in the 2014 paper.

Also on GALEX I guess you used the Dawes formula but it is way off. Look at the GALEX descriptions on their website--the resolution is arcseconds, not a tiny fraction of an arcsecond. Their pixels are much bigger than your value. Why is this?--you have to ask the designers of GALEX. This is just a guess on my part, but GALEX is a survey instrument. If they made the pixels too small, they would end up with too small a field of view, given limits on how big the detector chip can be.
 
Last edited:
  • #64
Dale, no reference that you have cited supports the assertions you have made. You have said that the expanding universe hypothesis makes no predictions on the scales I have tested because they are too small. The smallest-scale measurements in my paper cover the range 200-800 Mpc. You need to provide quotes from cited sources that say that these scales are too small to be covered by the expanding universe hypothesis. That is what I am testing. The words "small" and "large" have no meaning unless there is some quantitative comparison.
 
  • #65
elerner said:
You have said that the expanding universe hypothesis makes no predictions on the scales I have tested
Where did I say that? Use the quote feature and stop claiming I said things that I didn’t.
 
  • #66
OK, great, just a misunderstanding! Then you agree that my paper is a test of the expanding universe prediction and that the predictions are contradicted by the data?
 
  • #67
I get the impression that the weakest link in this argument is Tolman's idea that expanding space affects the angular size of more distant objects, which seems to assume a closed curved universe which grows locally with time. I thought this model was now considered misleading, as although comoving coordinates expand, there is no local physical effect of expansion; galaxies are simply moving apart for historical reasons. Tolman has always been one of the great masters of GR, but it seems possible that he missed something. Is there any more recent support for Tolman's conclusions, taking into account alternative universe structure models?

I personally like the analogy of modelling an expanding universe with only 1D of space as a cone made from flat paper, where a circle around the cone represents space and the height from the apex represents time. Although the total amount of space clearly increases with time, it is still flat even on a large scale; there is no local change in scale, and objects moving along parallel paths (including light beams) remain on parallel paths. (This model assumes that the radius of the universe increases uniformly with time, which is obviously another simplification).

[One could similarly assume an even simpler model of a flat disc with radius being time and circumference being space, but for some reason I find the cone picture more interesting].
 
  • Like
Likes Dale
  • #68
elerner said:
Then you agree that ...
Are you having trouble using the quote feature? Just select the text that I actually wrote and choose “Reply”.
 
  • #69
elerner said:
... Not only the 435 filter. For each redshift range, we match the HST filter with either FUV or NUV filter to get the same wavelength band in the rest frames. So we actually have eight separate matched sets. All described in the 2014 paper.

Also on GALEX I guess you used the Dawes formula but it is way off. Look at the GALEX descriptions on their website--the resolution is arcseconds, not a tiny fraction of an arcsecond. Their pixels are much bigger than your value. Why is this?--you have to ask the designers of GALEX. This is just a guess on my part, but GALEX is a survey instrument. If they made the pixels too small, they would end up with too small a field of view, given limits on how big the detector chip can be.
Eric;

i) The formula we used was the Rayleigh criterion for resolution ... not Dawes
Ie:
##\theta = 1.220\,\lambda/D##


ii) The data from the Galex site indicates the pixel size is 1.5 arcseconds, which is the angle as “viewed” by each individual pixel and is a measurement of CCD plate scale ... not resolution. The pixel size in arcseconds depends on the focal length of the telescope used.

The physical size of the pixels used by the detector is:

Physical size of pixel (microns) = [(pixel size in arcseconds) × (focal length in mm)] / 206.3

Galex uses a 500 mm telescope at f/6, i.e. a 3000 mm focal length.
(1.5 × 3000)/206.3 ≈ 22 microns.

These are not large pixels. By comparison, the ACS/WFC camera used by Hubble for the HUDF has 15 micron pixels.
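The same arithmetic in a trivial sketch (values as above):

```python
# Physical pixel size implied by an angular pixel scale and focal length:
# pixel scale (arcsec) = 206.265 * pixel size (microns) / focal length (mm).
def pixel_size_microns(pixel_scale_arcsec, focal_length_mm):
    return pixel_scale_arcsec * focal_length_mm / 206.265

print(pixel_size_microns(1.5, 500 * 6))  # GALEX, 500 mm at f/6: ~21.8 microns
```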

iii) Since you obtained the same result for each HST filter, your calculation for the HUDF data appears to be incorrect, given the wavelength dependence of resolution as per (i) above.
 
  • #70
Oh .. and there is nothing wrong with the Galex optics either (as I speculated previously - see below for the explanation), which, unless Eric can provide alternative explanations, leads us to the conclusion of a calculation error for the HUDF data (Eric - please advise us here).

There is a drop in the off-axis Galex performance, which is a characteristic of the Ritchey-Chretien optical design at lower f/ratios.
(As mentioned in my immediately prior post, Galex uses an f/6 scope).

http://iopscience.iop.org/article/10.1086/520512/pdf:
Galex said:
To verify the fundamental instrument performance from on orbit data, some bright stars were analyzed individually, outside the pipeline. These results show performance that is consistent with or better than what was measured during ground tests. We have also verified the end-to-end performance including the pipeline by stacking images of stars from the MIS survey that were observed at different locations on the detector. The results of these composites are shown in Figures 9 and 10. Performance is reasonably uniform except at the edge of the field, where it is significantly degraded.
Cheers
 
  • #71
On p. 684 of this reference it gives the resolution (FWHM) of the GALEX telescope as 4.2 arcsec in FUV and 5.3 arcsec in NUV.

As explained in our 2014 paper, we based the actual resolution on what was the minimum radius galaxy that the telescope, plus the algorithms used to account for the PSF, classified as an extended source. This could be done unambiguously because the stellarity distribution produced by these algorithms (the ones in the GALEX and HUDF catalogs respectively) had two clear peaks for point sources and extended ones. So setting the threshold anywhere near where we did at 0.4 produces the same results. Unsurprisingly this measure of radius resolution turns out to be half the FWHM resolution. In the 2014 paper we also consider the effect on our conclusions of the uncertainty in the determination of the resolutions.
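In outline, the cutoff determination could be sketched like this (hypothetical column names; a sketch of the idea, not our actual pipeline):

```python
# Find the radius below which catalogue objects are overwhelmingly
# point-like (stellarity > 0.4), i.e. the effective resolution cutoff.
import numpy as np
import pandas as pd

cat = pd.read_csv("catalog.csv")  # assumed columns: radius_arcsec, stellarity

bins = np.linspace(0.0, 10.0, 51)
cat["rbin"] = pd.cut(cat["radius_arcsec"], bins)
frac_pointlike = cat.groupby("rbin", observed=True)["stellarity"].apply(
    lambda s: (s > 0.4).mean())

# The cutoff is the first radius bin in which extended sources dominate.
extended = frac_pointlike[frac_pointlike < 0.5]
print("resolution cutoff near", extended.index[0])
```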

As you point out, the GALEX designers picked the focal length to produce a wide field of view, which also limited the resolution to a much larger value (poorer resolution) than was optically possible. You can't optimize a single telescope for everything.

For HUDF, we checked the actual resolution in each filter by the same method and did not see a significant variation. This is what the data showed, so this is what we went with.

We do note that at z=5 a large fraction of the galaxies are not resolved. The median is not a good estimate of population mean once most of the galaxies in the redshift bin are not resolved, so HST measurements much beyond z=5 are not going to be highly reliable.
 
  • #72
elerner said:
OK, great, just a misunderstanding! Then you agree that my paper is a test of the the expanding universe prediction and that the predictions are contradicted by the data?

@elerner, this rhetorical style is not going to help the discussion. As @Dale has requested several times now, please use the PF quote feature to specify exactly what statements you are responding to. Otherwise the discussion will go nowhere and this thread will end up being closed.
 
  • #73
To rephrase: Dale, do you now agree that my paper is a test of the expanding universe prediction and that these predictions are contradicted by the data?

If not, please provide quotations from the published literature that indicate why it is not. If you use an argument about "small scale", please include quotes stating below what quantitative scale expansion is no longer theorized to occur, so that this can be compared with the 200-800 Mpc range that is the smallest scale measured in my paper.

To be totally clear, and to repeat what it is in the paper, this is a test of the hypothesis that the universe is expanding using one set of data. An overall assessment requires considering all available data sets. Also, this is a test of the hypothesis that the Hubble relation is caused by expansion. The LCDM model includes this hypothesis, so if there is no expansion, LCDM is also wrong. But LCDM also includes several other hypotheses: hot Big Bang, inflation, dark matter, dark energy, etc.

In response to Jonathan Scott, Tolman's calculations are based only on the hypothesis that the Hubble relation is due to expansion. The (1+z)^-3 reduction in surface brightness is independent of any details of how fast the expansion occurs at any point in time. That's why the Tolman test is a test of all expansion models, not any specific ones.
 
  • #74
elerner said:
The (1+z)^-3 reduction in surface brightness

Shouldn't this be ##(1 + z)^{-4}##?
 
  • Like
Likes ruarimac
  • #75
I think this paper is getting a tad over-sold. As the title of the paper carefully states, this paper does not point to a problem with standard cosmology or LCDM; it merely conflicts with a model. What the paper has shown is that a model of galaxy size evolution combined with concordance cosmology is not compatible with a UV surface brightness test. I cannot stress enough that it is the combination of both galaxy evolution model and cosmology that is being tested, not just cosmology. This is the main reason the Tolman test does not play a big part in modern cosmology: the degeneracy between the effects of galaxy evolution and cosmology. In this case, however, it was done in the rest-frame UV, which means it will be incredibly sensitive to galaxy evolution, because the UV properties of a galaxy change on shorter timescales than the rest-frame optical, for example.

To frame the discussion, here is a quote from the 2014 paper, which compared observations to an LCDM-like cosmology but did not attempt to model evolution, so it wasn't actually LCDM:

In this paper, we do not compare data to the LCDM model. We only remark that any effort to fit such data to LCDM requires hypothesizing a size evolution of galaxies with z.

What seems to be done in this new paper is to include a single model of galaxy size evolution. This is hardly surprising, however, as the model being tested is not a model of the ultraviolet sizes of galaxies. It's from a paper written 20 years ago which has to assume all disks have the same mass-to-light ratio to calculate a luminosity at all. This model doesn't include the formation of stars. The model outputs disk scale lengths, not ultraviolet radii. On this basis I think the comparison is apples to oranges, so it's hardly surprising there is disagreement. There are a range of sophisticated galaxy formation simulations available today; they would be a much better comparison, given that they represent the leading edge of the field and that the selection function could be applied to them.

I reiterate, this paper is not evidence there is something wrong with standard cosmology. It is a test of a model of the size evolution of galaxies and cosmology.
 
  • Like
Likes Dale
  • #76
elerner said:
Dale, do you now agree that my paper is a test of the expanding universe prediction and that these predictions are contradicted by the data?
I still stand by my previous very clear assessment of your paper which I posted back in post 23:
Dale said:
I do accept your paper as evidence against the standard cosmology, just not as very strong evidence due to the issues mentioned above. So (as a good Bayesian/scientist) it appropriately lowers my prior slightly from “pretty likely” to “fairly likely”, and I await further evidence.
The only thing that I would change is to include “issues mentioned below” as well.

elerner said:
Also, this is a test of the hypothesis that the Hubble relation is caused by expansion. The LCDM model includes this hypothesis, so if there is no expansion, LCDM is also wrong.
This is a speculative claim that is not made in the paper.
 
Last edited:
  • #77
ruarimac said:
This is the main reason the Tolman test does not play a big part in modern cosmology
Are there any references describing this view of the Tolman test?
 
  • #78
elerner said:
On p. 684 of this reference it gives the resolution (FWHM) of the GALEX telescope as 4.2 arcsec in FUV and 5.3 arcsec in NUV.

As explained in our 2014 paper, we based the actual resolution on what was the minimum radius galaxy that the telescope, plus the algorithms used to account for the PSF, classified as an extended source. This could be done unambiguously because the stellarity distribution produced by these algorithms (the ones in the GALEX and HUDF catalogs respectively) had two clear peaks for point sources and extended ones. So setting the threshold anywhere near where we did at 0.4 produces the same results. Unsurprisingly this measure of radius resolution turns out to be half the FWHM resolution. In the 2014 paper we also consider the effect on our conclusions of the uncertainty in the determination of the resolutions.

As you point out, the GALEX designers picked the focal length to produce a wide field of view, which also limited the resolution to a much larger value (poorer resolution) than was optically possible. You can't optimize a single telescope for everything.

For HUDF, we checked the actual resolution in each filter by the same method and did not see a significant variation. This is what the data showed, so this is what we went with.

We do note that at z=5 a large fraction of the galaxies are not resolved. The median is not a good estimate of population mean once most of the galaxies in the redshift bin are not resolved, so HST measurements much beyond z=5 are not going to be highly reliable.
Thanks for your response there Eric.
To help resolve this matter, we have lodged a request directly with Galex to see if they can provide their actual performance data on angular resolution ... (fingers crossed). We'll get back on this when we have their response.
In the meantime, if this thread gets locked then, as an alternative, we could continue the conversation at the IS forum ('Evidence against concordance cosmology' thread).
Cheers
 
  • #79
elerner said:
In response to Jonathan Scott, Tolman's calculations are based only on the hypothesis that the Hubble relation is due to expansion. The (1+z)^-3 reduction in surface brightness is independent of any details of how fast the expansion occurs at any point in time. That's why the Tolman test is a test of all expansion models, not any specific ones.
Tolman's 1930 paper clearly refers specifically to a spatially curved (presumably closed) universe, which I think was assumed to be the case at the time, so even if Tolman's calculations are correct, his assumptions are not necessarily correct. I'd say the new paper provides evidence against a spatially curved universe, but I don't know what the relevance of that is to current cosmology.
 
  • Like
Likes Dale
  • #80
elerner said:
To be totally clear, and to repeat what it is in the paper, this is a test of the hypothesis that the universe is expanding using one set of data. An overall assessment requires considering all available data sets. Also, this is a test of the hypothesis that the Hubble relation is caused by expansion. The LCDM model includes this hypothesis, so if there is no expansion, LCDM is also wrong. But LCDM also includes several other hypotheses: hot Big Bang, inflation, dark matter, dark energy, etc.

Your paper has not disproved the expanding universe; as you clearly state in the title, this is a test of a model of size evolution plus an expanding universe. I'm sure you had to negotiate that one with the referee, but it's not some irrelevant point. You have tested some model of size evolution plus concordance cosmology; you clearly take the interpretation that it is the cosmology that is wrong, but you have not demonstrated that.

elerner said:
As explained in our 2014 paper, we based the actual resolution on what was the minimum radius galaxy that the telescope, plus the algorithms used to account for the PSF, classified as an extended source. This could be done unambiguously because the stellarity distribution produced by these algorithms (the ones in the GALEX and HUDF catalogs respectively) had two clear peaks for point sources and extended ones. So setting the threshold anywhere near where we did at 0.4 produces the same results. Unsurprisingly this measure of radius resolution turns out to be half the FWHM resolution. In the 2014 paper we also consider the effect on our conclusions of the uncertainty in the determination of the resolutions.

How exactly do you model this selection effect in the Mo et al. model? You don't describe your model in detail at all; you state that it predicts the disk scale length varies as H(z) to some power at fixed luminosity, but that doesn't take into account the fact that you have a biased sample.
 
Last edited:
  • Like
Likes Dale
  • #81
elerner said:
On p. 684 of this reference it gives the resolution (FWHM) of the GALEX telescope as 4.2 arcsec in FUV and 5.3 arcsec in NUV. As explained in our 2014 paper, we based the actual resolution on what was the minimum radius galaxy that the telescope, plus the algorithms used to account for the PSF, classified as an extended source. This could be done unambiguously because the stellarity distribution produced by these algorithms (the ones in the GALEX and HUDF catalogs respectively) had two clear peaks for point sources and extended ones. So setting the threshold anywhere near where we did at 0.4 produces the same results. Unsurprisingly this measure of radius resolution turns out to be half the FWHM resolution. In the 2014 paper we also consider the effect on our conclusions of the uncertainty in the determination of the resolutions. As you point out, the GALEX designers picked the focal length to produce a wide field of view, which also limited the resolution to a much larger value (poorer resolution) than was optically possible. You can't optimize a single telescope for everything. For HUDF, we checked the actual resolution in each filter by the same method and did not see a significant variation. This is what the data showed, so this is what we went with. We do note that at z=5 a large fraction of the galaxies are not resolved. The median is not a good estimate of population mean once most of the galaxies in the redshift bin are not resolved, so HST measurements much beyond z=5 are not going to be highly reliable.
The FWHM values quoted by Galex seem to be based on the procedures described in: 'Section 5. RESOLUTION', (pg 691).

As described in the procedure, bright stars are used that lead to saturated cores in the PSF.
Saturation or near saturation leads to high FWHM values; the procedure is performed to show details in the wings.
In reality, if the FWHM values are a true indication of angular resolution performance, then the Galex scope either (i) has poor optics, (ii) is slightly out of focus, or (iii) is seeing-limited due to atmospheric interference effects. (iii) is obviously not applicable to Galex and can be eliminated.

Assuming the FWHM values given are a true indication of scope performance issues, then according to Eric (et al)'s method any value less than the FWHM is a point source and beyond measurement, yet the method cutoff is around 50% of the FWHM values for the FUV and NUV bands, which then appears to simply be in error.

Also, we maintain that the dependence of resolution on wavelength is still an untested issue with the analysis method, and cannot be ignored.
As per the Hubble site:
Hubble said:
Here we will try to answer the related question of how close together two features can be and still be discerned as separate – this is called the angular resolution.

The Rayleigh criterion gives the maximum (diffraction-limited) resolution, R, and is approximated for a telescope as
R = λ/D, where R is the angular resolution in radians and λ is the wavelength in metres. The telescope diameter, D, is also in metres.

In more convenient units we can write this as:
R (in arcseconds) = 0.21 λ/D, where λ is now the wavelength in micrometres and D is the size of the telescope in metres.

So for Hubble this is:
R = 0.21 × 0.500/2.4 = 0.043 arcseconds (for optical wavelengths, 500 nm) or
R = 0.21 × 0.300/2.4 = 0.026 arcseconds (for ultraviolet light, 300 nm).

Note that the resolution gets better at shorter wavelengths, so we will use the second of these numbers from now on.
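That arithmetic is easy to reproduce (a sketch; the k = 0.26 variant folds in the full 1.22 Rayleigh factor behind the ~0.08 arcsec GALEX FUV figure quoted earlier in the thread):

```python
# Diffraction-limit arithmetic from the quote above
# (lambda in micrometres, D in metres, result in arcsec).
def diffraction_arcsec(wavelength_um, aperture_m, k=0.21):
    # k = 0.21 approximates R = lambda/D; k = 0.26 approximates the
    # full Rayleigh criterion R = 1.22*lambda/D.
    return k * wavelength_um / aperture_m

print(diffraction_arcsec(0.500, 2.4))          # HST, optical: ~0.044
print(diffraction_arcsec(0.300, 2.4))          # HST, UV:      ~0.026
print(diffraction_arcsec(0.155, 0.5, k=0.26))  # GALEX FUV:    ~0.081
```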
 
  • #82
And here we go with a Galex response:

Galex Performance, http://www.galex.caltech.edu/DATA/gr1_docs/GR1_Mission_Instrument_Overview_v1.htm:

"The design yields a field-averaged spot size of 1.6 arcsec (80%EE) for the FUV imagery and 2.5 arcsec (80%EE) for the FUV spectroscopy at 1600�. NUV performance is similar. There is no in-flight refocus capability".

So, the above 1.6 arcsec figure for the FUV imagery is much higher than the theoretical diffraction-limited performance calculated by us earlier, but it is nowhere near the 4.2 arcsec FWHM in FUV figure used by Eric et al (as being indicative of actual performance)!

 
Last edited:
  • #84
newman22 said:
Galex resolution found here: 4.3 and 5.3 arcsec respectively http://www.galex.caltech.edu/researcher/techdoc-ch2.html
I think that's cited using the FWHM metric, whereas the 1.6 arcsec figure given in the GR1 Optical Design section is based on the Encircled Energy metric (80%) ... all of which then raises another question for Eric:

Was the system modelling used in his "UV surface brightness of galaxies from the local Universe to z ~ 5" paper, to come up with the 1/38 ratio of (##\theta_{m,\mathrm{GALEX}}/\theta_{m,\mathrm{HUDF}}##), sufficiently detailed as to compensate for the two different methods typically quoted for characterising the respective HUDF and Galex optical performance figures?

If so, then how was this done?
 
  • #85
I think it's fair to say that the robustness of the methods used in Lerner+ (2014) (L14), especially those using GALEX data, is critical to this paper (Lerner 2018). For example, in Figure 2:

"The log of the median radii of UV-bright disk galaxies M ~-18 from Shibuya et al, 2016 and the GALEX point at z=0.027 from Lerner, Scarpa and Falomo, 2014 is plotted against log of H(z) ,the Hubble radius at the given redshift"

Remove that "GALEX point at z=0.027" and I doubt that many (any?) of the results/conclusions would be valid.

There seems, to me, to be what could be a serious omission in L14; maybe you could say a few words about it, elerner?

"These UV data have the important advantage of being sensitive only to emissions from very young stars."

Well, AGNs are known to be (at least sometimes) strong emitters of UV. And in GALEX they'd appear to be indistinguishable from PSFs (by themselves). They can also make a galaxy appear to have a lower Sersic index ("Sersic number" in L14) if the galaxy is fitted with a single-component radial profile. Finally, in comparison to the luminosity of the rest of the galaxy, an AGN can range from totally dominant (as in most QSOs) to barely detectable.

My main question about L14 (for now) is about this:

"For the GALEX sample, we measured radial brightness profiles and fitted them with a Sersic law, [...]"

Would you please describe how you did this, elerner? I'm particularly interested in the details of how you did this for GALEX galaxies which are smaller than ~5" (i.e. less than ~twice the resolution or PSF width).

To close, here's something from L14 that I do not understand at all; could someone help me please (doesn't have to be elerner)?

"Finally we restricted the samples to disk galaxies with Sersic number <2.5 so that radii could be measured accurately by measuring the slope of the exponential decline of SB within each galaxy."
 
  • Like
Likes Dale
  • #86
Correction: “... which are smaller than ~10” (i.e. less than ~twice the resolution...)”. Per what’s in some earlier posts, and GALEX, the resolution, in both UV bands, is ~5”.
 
  • #87
Unless Eric finds time to respond, we're going to have to sum up our concerns as follows:

Eric et al's method allows more Galex data to be included in the analysis because his Galex data cutoffs are ~50% lower (2.4 and 2.6 arcsec) than what he claims to be the Galex scope resolution limits (i.e. 4.2 and 5.3 arcsec FWHM for FUV and NUV respectively). His method doesn't appear to explicitly address and correct for this.

Then, for the Hubble data: the proposed cutoffs don't appear to vary with the wavelength of the observations, as they approach the theoretical (Rayleigh) optical limits of the scope.

The 1/38 ratio figure used seems to have no relevance in light of the issues outlined above.

The Hubble data itself thus refutes the methodology, given its failure to find resolution differences among the individual HUDF filter data.

If Eric agrees with the above, then it would be very nice for him to consider some form of formalised corrective measures.

Cheers
 
  • #88
Jean Tate said:
Remove that "GALEX point at z=0.027" and I doubt that many (any?) of the results/conclusions would be valid.
Without that point the concordance model is a good fit, as already shown in the previous literature. If that one point is faulty then there isn’t anything else in the paper.

I wondered if something were systematically different in the methodology for that point:
Dale said:
which could therefore have measurements which were non-randomly different from the remainder of the dataset,
 
Last edited:
  • #89
Papers "challenging the mainstream" in MNRAS and other leading astronomy/astrophysics/cosmology peer-reviewed journals are unusual but Lerner (2018) (L18) is certainly not unique.

I think L18 offers PhysicsForums (PF) a good opportunity to illustrate how science works (astronomy in this case). In a hands-on way. And in some detail. Let me explain.

One core part of science may be summed up as "objective, and independently verifiable". I propose that we - PFers (PFarians? PFists?) - can, collectively, objectively and independently verify at least some of the key parts of the results and conclusions reported in L18. Here's how:

As I noted in my earlier post, the robustness of the methods used in Lerner+ (2014) (L14), especially those using GALEX data, is critical to L18. I propose that we - collectively - go through L14 and attempt to independently verify the "GALEX" results reported therein (we could also do the same for the "HUDF" results, but perhaps that might be a tad ambitious).

I am quite unfamiliar with PF's mores and rules, so I do not really have any feel for whether this would meet with PF's PTB approval. Or if it did, whether this thread/section is an appropriate place for such an effort. But I'm sure I'll hear one way or the other soon!

Anyway, as the saying goes, "better to ask for forgiveness than permission". So I'll soon be posting several, fairly short, posts. Posts which actually kick off what I propose, in some very concrete ways.
 
  • #90
Question for elerner: how easy would it be, in your opinion, to independently reproduce the GALEX and HUDF results published in Lerner+ (2014) (L14)?

To help anyone who would want to do this, would you please create and upload (e.g. to GitHub) a 'bare bones' file (CSV, FITS, or other common format) containing the GALEX and HUDF data you started with ("galaxies (with stellarity index < 0.4) vs. angular radius for all GALEX MIS3-SDSSDR5 galaxies and for all HUDF galaxies")? Alternatively, would you please describe where one can obtain such data oneself?

Thank you in advance.
 
  • #91
If one does not have a workable answer to my second question above (actually a two-parter), how could you go about obtaining that data yourself?

Start with L14, the paper. The references in fact.

There are 16, and ADS can help you get at least the titles, abstracts, etc (the actual papers may be behind paywalls). Of these 16, I think two may be relevant for the HUDF data (10. Beckwith S. V. W., Stiavelli M., Koekemoer A. M., et al., AJ 132, (2006) 1729, and 11. Coe D., Benítez N., Sánchez S. F., Jee M., Bouwens R., Ford H., AJ 132, (2006) 926), but none appear relevant for the GALEX data. Do you agree?

Hmm, so maybe the paper itself gives a pointer to the GALEX data?

"... all GALEX MIS3-SDSSDR5 galaxies ..."

What do you think? Can you use that to find where the L14 GALEX data comes from? To actually obtain that data?
 
  • #92
Hi all, I have been busy with other things so have not visited here for the past few days.
In more or less chronological order:

Peter, (1+z)^-3 is correct if we measure in AB magnitudes (per unit frequency). This is the unit the papers use.
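For completeness, the standard accounting behind that statement (using ##d_L = (1+z)^2 d_A##): the bolometric surface brightness of identical objects dims as
$$\frac{SB_{\rm bol}(z)}{SB_{\rm bol}(0)} = \frac{d_A^2}{d_L^2} = (1+z)^{-4},$$
while per unit frequency the observed bandwidth is compressed by one factor of ##(1+z)##, so
$$SB_\nu \propto (1+z)^{-3},$$
which is what a test in AB magnitudes measures.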

Ruarimac:

This is a test of predictions—things written before the data are taken. Predictions are crucial in science. If you can’t predict data before you observe it, then you are not doing useful science. Just fitting the data once you have it is useless unless you can use it to predict lots more data that you have not observed. As the saying goes, with four parameters I can fit an elephant. That is why I used predictions from before the data were available. In addition, any merger process is contradicted by the observations of the number of mergers, and any growth needed to match the data is contradicted by the measurements of gravitational mass vs stellar mass—unless you want to hypothesize negative-mass dark matter (as I’m sure someone will do).

Jonathan Scott:

Tolman’s derivation does not depend on curvature. You can find it in many places in the literature since 1930. It only depends on expansion.

On GALEX, measurement, etc.

Selfsim—you did not read my comment that my measured resolution refers to radius while FWHM refers to diameter. The key point is that with both Hubble and GALEX the resolution is mainly linked to the pixel size. That is why it is not linked to the wavelength—the pixel size does not change with wavelength.

Jean Tate: Not just the point at 0.027 but all the low z points up to z=0.11 are used for comparisons with our 2014 data. The whole point of the Tolman test is to compare sizes as we measure them at low z, where there is no cosmic distortion, with those at high z (or comparing SB of the same luminosity galaxies, which is the same as measuring size). So you can’t drop the near points if you want to do the test.

The reason we can measure tiny galaxies is that when we talk about radius, that is half-light radius, the radius that contains half the light. Since disk galaxy light falls off exponentially, you can observe these bright galaxies way out beyond their half-light radius and thus you can get very nice fits to an exponential line. The Sersic number is used as a cutoff between disk galaxies and ellipticals. AGNs don’t interfere as we dropped the central area of the galaxy, which is most affected by the PSF blurring. The exponential fit starts further out—all explained in the 2014 paper.

By the way, I don’t think checking our measurements is all that useful as we already checked them against the GALEX catalog, and they are quite close. But we wanted to make sure we were measuring HUDF and GALEX the exact same way.

Sure I can put our old 2014 data up somewhere. It would be great to have others work on it. Where would you suggest? However, it is by no means the most recent data release. I can also post how to get the more recent data. But not tonight.
 
  • #93
elerner said:
Peter, (1+z)^-3 is correct if we measure in AB magnitudes (per unit frequency).

Ok, got it.
 
  • #94
elerner said:
... Selfsim—you did not read my comment that my measured resolution refers to radius while FWHM refers to diameter. The key point is that with both Hubble and GALEX the resolution is mainly linked to the pixel size. That is why it is not linked to the wavelength—the pixel size does not change with wavelength.
Eric - Thanks for your reply.

However, as described by the Rayleigh criterion (##\theta = 1.220\,\lambda/D##), where the resolution is decreased by the filter choice, more light ends up falling onto adjacent pixels, which then affects the radius (or diameter) of the FWHM (or ensquared energy value).

In an attempt to put the resolution vs filter issue to rest in the interim, we've performed a small test of our own by downloading some F475W and F814W filtered data for the object Messier 30 from the Hubble Legacy Archive site.

The ACS was used but, unlike for the HUDF images, an 814W filter was used instead of the 850LP.

Unlike our previous discussion, where only the optics were considered, the system angular resolution, which links both wavelength and pixel size, is defined by the equation:

System angular resolution ≈ [(0.21λ/2.4)² + 0.05²]^0.5

where λ is the wavelength in micrometres and the 0.05 term is the size of the ACS pixels in arcsec.
For the 475W filter the theoretical resolution is 0.066 arcsec and for the 814W filter it is 0.087 arcsec.
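A quick numerical check of that formula (same assumed constants):

```python
# Quadrature estimate of HST/ACS system resolution
# (lambda in micrometres, 2.4 m aperture, 0.05 arcsec pixels).
def system_resolution_arcsec(wavelength_um, aperture_m=2.4, pixel_arcsec=0.05):
    diffraction = 0.21 * wavelength_um / aperture_m
    return (diffraction**2 + pixel_arcsec**2) ** 0.5

print(system_resolution_arcsec(0.475))  # F475W: ~0.065 arcsec
print(system_resolution_arcsec(0.814))  # F814W: ~0.087 arcsec
```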

For the Messier 30 data, the same six stars of medium brightness were handpicked for each filter so as not to distort the FWHM measurements. (AIP4WIN software was used for the measurements).

In all cases, the FWHM of the stars was higher in the 814W data, the results being:

475W data FWHM = 0.279 +/- 0.035 arcsec
814W data FWHM = 0.326 +/- 0.058 arcsec

A larger sample size would have been preferable but the dependence of resolution on wavelength is clearly evident.

Irrespective of the method you employ, the lack of any resolution differentiation between the Hubble filtered datasets is telling.
 
Last edited:
  • #95
elerner said:
This is a test of predictions—things written before the data are taken. Predictions are crucial in science. If you can’t predict data before you observe it, then you are not doing useful science. Just fitting the data once you have it is useless unless you can use it to predict lots more data that you have not observed. As the saying goes, with four parameters I can fit an elephant. That is why I used predictions from before the data were available. In addition, any merger process is contradicted by the observations of the number of mergers, and any growth needed to match the data is contradicted by the measurements of gravitational mass vs stellar mass—unless you want to hypothesize negative-mass dark matter (as I’m sure someone will do).

You're just making broad statements without actually addressing the points. I never mentioned fitting; please do not misrepresent my words.

You have tested a very specific model of disk size evolution combined with cosmology, but you are selling this as evidence against cosmology specifically when you haven't demonstrated that. You believe it's not mergers but that's hardly the only thing absent from this model. Take for example the fact that this model is not modelling the UV sizes which you are comparing it to. Or the fact that you haven't tested the effect of your cuts to the data, you will have selection effects which will change with redshift due to using different bands. Or the fact you haven't made K corrections due to the fact that the different filters have different transmission curves. Or the fact that in applying this model to UV surface brightness you don't take into account the varying star formation rate density with time in an expanding universe, as observed. Or the fact you have to assume the Tully-Fisher relation is fixed up to z~5 and that it applies to all of your galaxies. And then there's the effect of mergers and blending.

As I said before, you have tested a single model which has all of these shortcomings; you do not justify why this is the model above all others that should be correct. This was not the only model available. You haven't demonstrated that this mismatch is a problem with cosmology and not with your attempt to model the observations. You haven't convinced me this is a problem with cosmology, given that it requires relying on a single model and a shopping-list of assumptions.
 
  • Like
Likes weirdoguy
  • #96
Thanks for your reply and continued interest in your paper, elerner!
elerner said:
Hi all, I have been busy with other things so have not visited here for the past few days.
In more or less chronological order:

<snip>

On GALEX, measurement, etc.

<snip>

Jean Tate: Not just the point at 0.027 but all the low z points up to z=0.11 are used for comparisons with our 2014 data. The whole point of the Tolman test is to compare sizes as we measure them at low z, where there is no cosmic distortion, with those at high z (or comparing SB of the same luminosity galaxies, which is the same as measuring size). So you can’t drop the near points if you want to do the test. The reason we can measure tiny galaxies is that when we talk about radius, that is half-light radius, the radius that contains half the light. Since disk galaxy light falls off exponentially, you can observe these bright galaxies way out beyond their half light radius and thus you can get very nice fits to an exponential line. The Sersic number is used as a cutoff between disk galaxies and ellipticals. AGNs don’t interfere as we dropped the central area of the galaxy which is most affected by the PSF blurring. The exponential fit starts further out—all explained in the 2014 paper. By the way, I don’t think checking our measurements is all that useful as we already checked them against the GALEX catalog, and they are quite close. But we wanted to make sure we were measuring HUDF and GALEX the exact same way.

Sure I can put our old 2014 data up somewhere. It would be great to have others work on it. Where would you suggest? However, it is by no means the most recent data release. I can also post how to get the more recent data. But not tonight.
I have many, many questions. Some come from my initial reading of Lerner (2018) (L18); some from your latest post. I will, however, focus on just a few.
elerner said:
So you can’t drop the near points if you want to do the test.
My primary interest was, and continues to be, Lerner+ (2014) (L14). However, I see that you may have misunderstood what I wrote; so let me try to be clearer.

I "get" that Lerner (2018) (L18) must include some low z data. And I think I'm correct in saying that L18 relies critically on the robustness and accuracy of the results reported in L14. In particular, the "the GALEX point at z=0.027 from Lerner, Scarpa and Falomo,2014". Does anyone disagree?

It makes little difference if that GALEX point is at z=0, or z=0.11, or anywhere in between. Does anyone disagree?

However, it makes a huge difference if that GALEX point is not near Log(r/kpc) ≈ 0.8.

I am very interested in understanding just how robust that ~0.8 value is. Based on L14.
elerner said:
AGNs don’t interfere as we dropped the central area of the galaxy which is most affected by the PSF blurring. The exponential fit starts further out—all explained in the 2014 paper.
Actually, no. It is not all so explained.

I've just re-read L14; a) AGNs are not mentioned, and b) there's no mention of dropping the central area for galaxies which are smaller than ~10" ("PSF blurring" is almost certainly important out to ~twice the PSF width).

There are two questions in my first post in this thread which you did not answer, elerner; perhaps you missed them?

Here they are again:

JT1) In L14, you wrote: "For the GALEX sample, we measured radial brightness profiles and fitted them with a Sersic law, [...]".
Would you please describe how you did this? I'm particularly interested in the details of how you did this for GALEX galaxies which are smaller than ~10" (i.e. less than ~twice the resolution or PSF width).

JT2) In L14, you wrote: "Finally we restricted the samples to disk galaxies with Sersic number <2.5 so that radii could be measured accurately by measuring the slope of the exponential decline of SB within each galaxy."
I do not understand this. Would you please explain what it means?
elerner said:
Sure I can put our old 2014 data up somewhere. It would be great to have others work on it. Where would you suggest?
I've already made one suggestion (GitHub); perhaps others have other suggestions?

By the way, when I tried to access L18 (the full paper, not the abstract) from the link in the OP, I got this message:

"You do not currently have access to this article."

And I was invited to "Register" to get "short-term access" ("24 Hours access"), which would cost me USD $33.00. So instead I'm relying on the v2 arXiv document (link). Curiously, v2 was "last revised 2 Apr 2018", but "Journal reference: Monthly Notices of the Royal Astronomical Society, sty728 (March 22, 2018)". Could you explain please elerner?
 
  • #97
ruarimac said:
You're just making broad statements without actually addressing the points. I never mentioned fitting; please do not misrepresent my words.

You have tested a very specific model of disk size evolution combined with cosmology, but you are selling this as evidence against cosmology specifically when you haven't demonstrated that. You believe it's not mergers but that's hardly the only thing absent from this model. Take for example the fact that this model is not modelling the UV sizes which you are comparing it to. Or the fact that you haven't tested the effect of your cuts to the data, you will have selection effects which will change with redshift due to using different bands. Or the fact you haven't made K corrections due to the fact that the different filters have different transmission curves. Or the fact that in applying this model to UV surface brightness you don't take into account the varying star formation rate density with time in an expanding universe, as observed. Or the fact you have to assume the Tully-Fisher relation is fixed up to z~5 and that it applies to all of your galaxies. And then there's the effect of mergers and blending.

As I said before, you have tested a single model which has all of these shortcomings; you do not justify why this is the model above all others that should be correct. This was not the only model available. You haven't demonstrated that this mismatch is a problem with cosmology and not with your attempt to model the observations. You haven't convinced me this is a problem with cosmology, given that it requires relying on a single model and a shopping-list of assumptions.
(my bold)

L14 seems replete with such assumptions.

Both explicitly stated and not, such as some concerning AGNs, one aspect of which I addressed in my last post:
Jean Tate said:
elerner said:
AGNs don’t interfere as we dropped the central area of the galaxy which is most affected by the PSF blurring. The exponential fit starts further out—all explained in the 2014 paper.
Actually, no. It is not all so explained.

I've just re-read L14; a) AGNs are not mentioned, [...]
Reminder; here's what's in L14:
L14 said:
These UV data have the important advantage of being sensitive only to emissions from very young stars.
In his last post here, elerner seems to have hinted at another, unstated assumption:
elerner said:
The Sersic number is used as a cutoff between disk galaxies and ellipticals.
The implication (not even hinted at in L14) is that the only galaxy morphological classes are "disk galaxies" and "ellipticals". Or at least, only those two in the UV. The L14 authors seem to have been unaware of the extensive literature on this topic ...
 
  • #98
Hard for me to keep up with all of you in the time available. Simple things first. The new version corrects a reference and will soon be posted on MNRAS as well. If you missed the free download, go to our website https://lppfusion.com/lppfusion-chi...-against-cosmic-expansion-in-leading-journal/ and click on “paper” to get a free copy. I can’t post the link directly without violating their rules.

Here is how we did the measurements in 2014:

To measure total flux and half light radius, we extracted the average surface brightness profile for each galaxy from the HUDF or GALEX images. The apparent magnitude of each galaxy is determined by measuring the total flux within a fixed circular aperture large enough to accommodate the largest galaxies, but small enough to avoid contamination from other sources. To choose the best aperture over which to extract the radial profile, for each sample we compared average magnitudes and average radii as derived for a set of increasingly large apertures. We then defined the best aperture as the smallest for which average values converged. We found that these measurements are practically insensitive to the chosen aperture above this minimum value.

Finally, to determine the scale-length radius, we fitted the radial brightness profile with a disk law, excluding the central 0.1 arcsec for HST and 5 arcsec for GALEX, which could be affected by the PSF smearing. Given the magnitude and radius, the SB is obtained via the formulae in Section 2. A direct comparison between our measurements and those in the i band HUDF catalogue (Coe et al 2006) shows no significant overall differences.
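In outline, that fit could look like this (a sketch, not the actual code): an exponential disk is a straight line in surface brightness (mag arcsec⁻²) versus radius, so the scale length falls out of a linear fit that excludes the PSF-affected core.

```python
# Sketch of a disk-law (exponential) profile fit.  In magnitudes an
# exponential disk I(r) = I0*exp(-r/h) becomes mu(r) = mu0 + 1.0857*r/h.
import numpy as np

def fit_disk_scale_length(r_arcsec, mu_mag, r_min):
    """Fit mu = mu0 + 1.0857*(r/h) for r > r_min; returns (mu0, h)."""
    mask = r_arcsec > r_min          # e.g. 0.1 arcsec HST, 5 arcsec GALEX
    slope, mu0 = np.polyfit(r_arcsec[mask], mu_mag[mask], 1)
    return mu0, 1.0857 / slope       # scale length h in arcsec
```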

Here is how we checked for non-disks:

Finally we have checked, by visual inspection of galaxies in the sample, that removing objects exhibiting signatures of interaction or merging does not change our conclusions. The selection of galaxies with disturbed morphology was performed by an external team of nine amateur astronomers evaluating the NUV images and isophote contours of all NUV-sample galaxies. Each volunteer examined the galaxies and only those considered unperturbed by more than 5 people were included in a “gold” sample. Although this procedure reduces the size of the sample, there is no significant difference in the SB-z trend.
 
  • #99
Haven't heard from any PF Mods yet, and it seems that there's rather a lack of interest in my proposal (to independently try to verify the GALEX results reported in L14). So this will likely be my last post on that (my proposal).
Jean Tate said:
If one does not have a workable answer to my second question above (actually a two-parter), how could you go about obtaining that data yourself?

Start with L14, the paper. The references in fact.

There are 16, and ADS can help you get at least the titles, abstracts, etc (the actual papers may be behind paywalls). Of these 16, I think two may be relevant for the HUDF data (10. Beckwith S. V. W., Stiavelli M., Koekemoer A. M., et al., AJ 132, (2006) 1729, and 11. Coe D., Benítez N., Sánchez S. F., Jee M., Bouwens R., Ford H., AJ 132, (2006) 926), but none appear relevant for the GALEX data. Do you agree?

Hmm, so maybe the paper itself gives a pointer to the GALEX data?

"... all GALEX MIS3-SDSSDR5 galaxies ..."

What do you think? Can you use that to find where the L14 GALEX data comes from? To actually obtain that data?
"SDSS" is likely well-known to most readers; it refers to the Sloan Digital Sky Survey, and images from it were used in the hugely successful online citizen science project, Galaxy Zoo (there are quite a few iterations of Galaxy Zoo, using images/data from several surveys other than SDSS, but not GALEX as far as I know).

"DR5" means Data Release 5.

I did not know what "MIS3" meant (maybe I did, once, but forgot); however, it's fairly easy to work out using your fave search (mine is DuckDuckGo) ... "MIS" is Medium Depth Imaging Survey, and "3" likely refers to GALEX DR3.

Both SDSS and GALEX have official websites, and from those it's pretty straight-forward to find out how to access the many data products from those surveys.

Rather than doing that, I'd like to introduce a resource which you may not know about, VizieR. If you enter "GALEX" in the "Find catalogues" box, the first (of four) hits you'll see is "II/312", "GALEX-DR5 (GR5) sources from AIS and MIS (Bianchi+ 2011)", and you have several "Access" choices. True, it's not the GALEX MIS3, but is surely a superset.
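For anyone who wants to try that programmatically, here is a sketch using astroquery (the row cap is just to keep a first look small):

```python
# Fetch the VizieR catalogue "II/312" (GALEX-DR5 AIS+MIS, Bianchi+ 2011).
from astroquery.vizier import Vizier

Vizier.ROW_LIMIT = 50                   # small cap for a quick look
tables = Vizier.get_catalogs("II/312")
print(tables)
```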
 
  • Like
Likes Dale
  • #100
Thank you, elerner.
elerner said:
Hard for me to keep up with all of you in the time available. Simple things first. The new version corrects a reference and will soon be posted on MNRAS as well. If you missed the free download, go to our website https://lppfusion.com/lppfusion-chi...-against-cosmic-expansion-in-leading-journal/ and click on “paper” to get a free copy. I can’t post the link directly without violating their rules.

Here is how we did the measurements in 2014:

To measure total flux and half light radius, we extracted the average surface brightness profile for each galaxy from the HUDF or GALEX images. The apparent magnitude of each galaxy is determined by measuring the total flux within a fixed circular aperture large enough to accommodate the largest galaxies, but small enough to avoid contamination from other sources. To choose the best aperture over which to extract the radial profile, for each sample we compared average magnitudes and average radii as derived for a set of increasingly large apertures. We then defined the best aperture as the smallest for which average values converged. We found that these measurements are practically insensitive to the chosen aperture above this minimum value.

Finally, to determine the scale-length radius, we fitted the radial brightness profile with a disk law, excluding the central 0.1 arcsec for HST and 5 arcsec for GALEX, which could be affected by the PSF smearing. Given the magnitude and radius, the SB is obtained via the formulae in Section 2. A direct comparison between our measurements and those in the i band HUDF catalogue (Coe et al 2006) shows no significant overall differences.

Here is how we checked for non-disks:

Finally we have checked, by visual inspection of galaxies in the sample, that removing objects exhibiting signatures of interaction or merging does not change our conclusions. The selection of galaxies with disturbed morphology was performed by an external team of nine amateur astronomers evaluating the NUV images and isophote contours of all NUV-sample galaxies. Each volunteer examined the galaxies and only those considered unperturbed by more than 5 people were included in a “gold” sample. Although this procedure reduces the size of the sample, there is no significant difference in the SB-z trend.
It'll take me a while to fully digest this, particularly as I want to understand it in terms of the content of L14.

However, I'm even more curious about how you "fitted the radial brightness profile with a disk law excluding the central 0.1 arcsec for HST and 5 arcsec for GALEX".

For example, did you write your own code? Or use a publicly available tool or package? Something else??
 