Observational evidence against expanding universe in MNRAS

Summary
A recently published paper in the Monthly Notices of the Royal Astronomical Society argues against the expanding-universe hypothesis, suggesting that observations of galaxy size and surface brightness do not align with predictions based on expansion. The author posits that the universe can evolve without expanding, much as the Earth evolves without spatial expansion. The discussion explores alternative models, including the possibility of fractal matter distribution and the weakening of gravitational forces over large distances, which could explain the observed phenomena without invoking expansion. Critics emphasize that any new model must be consistent with established physics, particularly the Einstein field equations, in order to make valid predictions. The paper calls for further investigation into these claims, as current cosmological models may not adequately account for the data presented.
  • #61
Others: please bear with me on this query about the 2014 paper... we believe it has significant bearing on the conclusions of Eric's recent MNRAS paper.

Eric;
These are the cutoff radius results from your 2014 paper,
Lerner et al said:
For GALEX this cutoff is at a radius of 2.4 +/- 0.1 arcsec for galaxies observed in the FUV and 2.6 +/- 0.2 arcsec for galaxies observed in the NUV, while for Hubble this cutoff is at a radius of 0.066 +/- 0.002 arcsec, where the errors are the 1σ statistical uncertainty.
While the Hubble cutoff of 0.066 arcsec compares well with a theoretical resolution of 0.05 arcsec using the F435W filter, the GALEX result of 2.4 arcsec is 30 times the theoretical value of 0.08 arcsec in the FUV!

Something appears to be in error here(?)
I suppose it may be possible that the GALEX optics were of catastrophically low quality, which would explain this major discrepancy; however, if this unlikely possibility were so, then no useful science would have been possible either.

This discrepancy is more likely to be due to an error elsewhere... (?)
Cheers
 
  • #62
elerner said:
Why don't you provide a quotation from and citation of a peer-reviewed published work that backs up your assertion?
Which assertion? Please use the quote feature. That the LCDM model only works at large scales? I already provided three references. More exist, but three are sufficient.
 
Last edited:
  • #63
Self sim,

Not only the 435 filter. For each redshift range, we match the HST filter with either FUV or NUV filter to get the same wavelength band in the rest frames. So we actually have eight separate matched sets. All described in the 2014 paper.

Also on GALEX I guess you used the Dawes formula but it is way off. Look at the GALEX descriptions on their website--the resolution is arcseconds, not a tiny fraction of an arcsecond. Their pixels are much bigger than your value. Why is this?--you have to ask the designers of GALEX. This is just a guess on my part, but GALEX is a survey instrument. If they made the pixels too small, they would end up with too small a field of view, given limits on how big the detector chip can be.
 
Last edited:
  • #64
Dale, no reference that you have cited supports the assertions you have made. You have said that the expanding universe hypothesis makes no predictions on the scales I have tested because they are too small. The smallest-scale measurements in my paper cover the range 200-800 Mpc. You need to provide quotes from cited sources that say that these scales are too small to be covered by the expanding universe hypothesis, because that is what I am testing. The words "small" and "large" have no meaning unless there is some quantitative comparison.
 
  • #65
elerner said:
You have said that the expanding universe hypothesis makes no predictions on the scales I have tested
Where did I say that? Use the quote feature and stop claiming I said things that I didn’t.
 
  • #66
OK, great, just a misunderstanding! Then you agree that my paper is a test of the expanding universe prediction and that the predictions are contradicted by the data?
 
  • #67
I get the impression that the weakest link in this argument is Tolman's idea that expanding space affects the angular size of more distant objects, which seems to assume a closed, curved universe that grows locally with time. I thought this model was now considered misleading: although comoving coordinates expand, there is no local physical effect of expansion; galaxies are simply moving apart for historical reasons. Tolman has always been one of the great masters of GR, but it seems possible that he missed something. Is there any more recent support for Tolman's conclusions, taking into account alternative universe structure models?

I personally like the analogy of modelling an expanding universe with only 1D of space as a cone made from flat paper, where a circle around the cone represents space and the height from the apex represents time. Although the total amount of space clearly increases with time, it is still flat even on a large scale; there is no local change in scale, and objects moving along parallel paths (including light beams) remain on parallel paths. (This model assumes that the radius of the universe increases uniformly with time, which is obviously another simplification).

[One could similarly assume an even simpler model of a flat disc with radius being time and circumference being space, but for some reason I find the cone picture more interesting].
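For what it's worth, the cone picture can be written down explicitly. As a sketch, under the stated simplifying assumption that the radius grows uniformly with time (this is just the flat 1+1-dimensional Milne form, not a model from any paper discussed here):

$$ds^2 = -\,dt^2 + (\kappa t)^2\, d\phi^2\,, \qquad 0 \le \phi < 2\pi\,,$$

which is locally just flat Minkowski spacetime in disguised coordinates (set ##T = t\cosh(\kappa\phi)##, ##X = t\sinh(\kappa\phi)## on any patch), so there is indeed no local physical effect of expansion, exactly as described above.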
 
  • Like
Likes Dale
  • #68
elerner said:
Then you agree that ...
Are you having trouble using the quote feature? Just select the text that I actually wrote and choose “Reply”.
 
  • #69
elerner said:
... Not only the 435 filter. For each redshift range, we match the HST filter with either FUV or NUV filter to get the same wavelength band in the rest frames. So we actually have eight separate matched sets. All described in the 2014 paper.

Also on GALEX I guess you used the Dawes formula but it is way off. Look at the GALEX descriptions on their website--the resolution is arcseconds, not a tiny fraction of an arcsecond. Their pixels are much bigger than your value. Why is this?--you have to ask the designers of GALEX. This is just a guess on my part, but GALEX is a survey instrument. If they made the pixels too small, they would end up with too small a field of view, given limits on how big the detector chip can be.
Eric;

i) The formula we used was the Rayleigh criterion for resolution ... not Dawes
i.e.:

$$\theta = 1.22\,\frac{\lambda}{D}\,,$$

where ##\theta## is the angular resolution in radians, ##\lambda## the wavelength, and ##D## the aperture diameter (in the same units).
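As a quick cross-check of the figures quoted earlier in the thread (a minimal sketch in Python; the GALEX aperture of 0.5 m and an FUV effective wavelength of ~1516 Angstrom are my assumed inputs, not values from the paper):

```python
import math

RAD_TO_ARCSEC = 180.0 / math.pi * 3600.0  # ~206265 arcsec per radian

def rayleigh_arcsec(wavelength_m: float, aperture_m: float) -> float:
    """Rayleigh criterion theta = 1.22 * lambda / D, converted to arcseconds."""
    return 1.22 * wavelength_m / aperture_m * RAD_TO_ARCSEC

print(rayleigh_arcsec(1516e-10, 0.5))  # GALEX FUV: ~0.076 arcsec (the ~0.08 figure above)
print(rayleigh_arcsec(4350e-10, 2.4))  # HST F435W: ~0.046 arcsec (the ~0.05 figure above)
```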


ii) The data from the GALEX site indicate the pixel size is 1.5 arcseconds, which is the angle "viewed" by each individual pixel, i.e. a measure of CCD plate scale... not resolution. The pixel size in arcseconds depends on the focal length of the telescope used.

The physical size of the pixels used by the detector is:

Physical size of pixel (microns) = [pixel size (arcseconds) × focal length (mm)] / 206.265

GALEX uses a 500 mm telescope at f/6, i.e. a 3000 mm focal length:
(1.5 × 3000)/206.265 ≈ 22 microns.

These are not large pixels. By comparison, the ACS/WFC camera used by Hubble for the HUDF has 15 micron pixels.
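A one-line check of the plate-scale arithmetic above (a sketch; 206.265 is just 206265 arcsec/rad divided by 1000 to reconcile the mm and micron units):

```python
def pixel_microns(plate_scale_arcsec: float, focal_length_mm: float) -> float:
    """Physical pixel size implied by a plate scale and a focal length."""
    return plate_scale_arcsec * focal_length_mm / 206.265

print(pixel_microns(1.5, 3000.0))  # ~21.8 microns, i.e. the ~22 micron figure above
```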

iii) Since you obtained the same result for each HST filter, your calculation for the HUDF data appears to be incorrect, given the wavelength dependence of resolution as per (i) above.
 
  • #70
Oh... and there is nothing wrong with the GALEX optics either (as I speculated previously; see below for the explanation), which, unless Eric can provide alternative explanations, leads us to the conclusion of a calculation error for the HUDF data (Eric, please advise us here).

There is a drop in the off-axis Galex performance, which is a characteristic of the Ritchey-Chretien optical design at lower f/ratios.
(As mentioned in my immediately prior post, Galex uses an f/6 scope).

http://iopscience.iop.org/article/10.1086/520512/pdf:
Galex said:
To verify the fundamental instrument performance from on orbit data, some bright stars were analyzed individually, outside the pipeline. These results show performance that is consistent with or better than what was measured during ground tests. We have also verified the end-to-end performance including the pipeline by stacking images of stars from the MIS survey that were observed at different locations on the detector. The results of these composites are shown in Figures 9 and 10. Performance is reasonably uniform except at the edge of the field, where it is significantly degraded.
Cheers
 
  • #71
On p. 684 this reference gives the resolution (FWHM) of the GALEX telescope as 4.2 arcsec in the FUV and 5.3 arcsec in the NUV.

As explained in our 2014 paper, we based the actual resolution on what was the minimum radius galaxy that the telescope, plus the algorithms used to account for the PSF, classified as an extended source. This could be done unambiguously because the stellarity distribution produced by these algorithms (the ones in the GALEX and HUDF catalogs respectively) had two clear peaks for point sources and extended ones. So setting the threshold anywhere near where we did at 0.4 produces the same results. Unsurprisingly this measure of radius resolution turns out to be half the FWHM resolution. In the 2014 paper we also consider the effect on our conclusions of the uncertainty in the determination of the resolutions.

As you point out, the GALEX designers picked the focal length to produce a wide field of view, which also limited the resolution to a much larger value (poorer resolution) than was optically possible. You can't optimize a single telescope for everything.

For HUDF, we checked the actual resolution in each filter by the same method and did not see a significant variation. This is what the data showed, so this is what we went with.

We do note that at z=5 a large fraction of the galaxies are not resolved. The median is not a good estimate of population mean once most of the galaxies in the redshift bin are not resolved, so HST measurements much beyond z=5 are not going to be highly reliable.
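For concreteness, here is a minimal sketch of the cutoff estimate described above, in Python; the column names (`stellarity`, `radius_arcsec`) are hypothetical stand-ins, not the actual catalog fields:

```python
import numpy as np

def resolution_cutoff(stellarity: np.ndarray, radius_arcsec: np.ndarray,
                      threshold: float = 0.4) -> float:
    """Smallest radius among sources the catalog classifies as extended.

    Assumes the stellarity distribution is bimodal, so any threshold near
    0.4 splits point sources from extended sources cleanly.
    """
    extended = stellarity < threshold
    return float(radius_arcsec[extended].min())
```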
 
  • #72
elerner said:
OK, great, just a misunderstanding! Then you agree that my paper is a test of the the expanding universe prediction and that the predictions are contradicted by the data?

@elerner, this rhetorical style is not going to help the discussion. As @Dale has requested several times now, please use the PF quote feature to specify exactly what statements you are responding to. Otherwise the discussion will go nowhere and this thread will end up being closed.
 
  • #73
To rephrase: Dale, do you now agree that my paper is a test of the expanding universe prediction and that these predictions are contradicted by the data?

If not, please provide quotations from the published literature that indicate why it is not. If you use an argument about "small scale", please include quotes on the quantitative scale below which expansion is no longer theorized to occur, so that this can be compared with the 200-800 Mpc range that is the smallest scale measured in my paper.

To be totally clear, and to repeat what is in the paper: this is a test of the hypothesis that the universe is expanding, using one set of data. An overall assessment requires considering all available data sets. Also, this is a test of the hypothesis that the Hubble relation is caused by expansion. The LCDM model includes this hypothesis, so if there is no expansion, LCDM is also wrong. But LCDM also includes several other hypotheses: the hot Big Bang, inflation, dark matter, dark energy, etc.

In response to Jonathan Scott: Tolman's calculations are based only on the hypothesis that the Hubble relation is due to expansion. The ##(1+z)^{-3}## reduction in surface brightness is independent of any details of how fast the expansion occurs at any point in time. That's why the Tolman test is a test of all expansion models, not of any specific one.
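To spell out the exponents (the standard Tolman bookkeeping for any expansion history, sketched here for reference, not taken from the paper):

$$d_L = (1+z)^2\, d_A \;\;\Longrightarrow\;\; SB_{\rm bol} \;\propto\; \frac{F}{\Omega} \;\propto\; \frac{d_A^{\,2}}{d_L^{\,2}} \;=\; (1+z)^{-4}\,,$$

while surface brightness measured per unit frequency (as with AB magnitudes) recovers one factor of ##(1+z)## from bandwidth compression, giving the ##(1+z)^{-3}## used here; the bolometric case is the ##(1+z)^{-4}## raised in the next post.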
 
  • #74
elerner said:
The ##(1+z)^{-3}## reduction in surface brightness

Shouldn't this be ##(1 + z)^{-4}##?
 
  • Like
Likes ruarimac
  • #75
I think this paper is getting a tad oversold. As the title of the paper carefully states, it does not point to a problem with standard cosmology or LCDM; it merely conflicts with a model. What the paper has shown is that a model of galaxy size evolution combined with concordance cosmology is not compatible with a UV surface brightness test. I cannot stress enough that it is the combination of the galaxy evolution model and the cosmology that is being tested, not just the cosmology. This degeneracy between the effects of galaxy evolution and cosmology is the main reason the Tolman test does not play a big part in modern cosmology. In this case, moreover, the test was done in the rest-frame UV, which makes it incredibly sensitive to galaxy evolution, because the UV properties of a galaxy change on shorter timescales than, for example, the rest-frame optical.

To frame the discussion about the paper, here is a quote from the 2014 paper, which compared observations to an LCDM-like cosmology but did not attempt to model evolution (so it wasn't actually LCDM):

In this paper, we do not compare data to the LCDM model. We only remark that any effort to fit such data to LCDM requires hypothesizing a size evolution of galaxies with z.

What seems to be done in this new paper is to include a single model of galaxy size evolution. Disagreement is hardly surprising, however, as the model being tested is not a model of the ultraviolet sizes of galaxies. It comes from a paper written 20 years ago, which has to assume that all disks have the same mass-to-light ratio in order to calculate a luminosity at all. The model does not include the formation of stars, and it outputs disk scale lengths, not ultraviolet radii. On this basis I think the comparison is apples to oranges, so it is hardly surprising that there is disagreement. There is a range of sophisticated galaxy formation simulations available today; they would be a much better comparison, given that they represent the leading edge of the field and that the selection function could be applied to them.

I reiterate, this paper is not evidence there is something wrong with standard cosmology. It is a test of a model of the size evolution of galaxies and cosmology.
 
  • Like
Likes Dale
  • #76
elerner said:
Dale, do you now agree that my paper is a test of the expanding universe prediction and that these predictions are contradicted by the data?
I still stand by my previous, very clear assessment of your paper, which I posted back in post 23:
Dale said:
I do accept your paper as evidence against the standard cosmology, just not as very strong evidence due to the issues mentioned above. So (as a good Bayesian/scientist) it appropriately lowers my prior slightly from “pretty likely” to “fairly likely”, and I await further evidence.
The only thing that I would change is to include “issues mentioned below” as well.

elerner said:
Also, this is a test of the hypothesis that the Hubble relation is caused by expansion. The LCDM model includes this hypothesis, so if there is no expansion, LCDM is also wrong.
This is a speculative claim that is not made in the paper.
 
Last edited:
  • #77
ruarimac said:
This is the main reason the Tolman test does not play a big part in modern cosmology
Are there any references describing this view of the Tolman test?
 
  • #78
elerner said:
On p. 684 of this reference it gives the resolution (FWHM) of the GALEX telescope as 4.2 arcsec in FUV and 5.3 arcsec in NUV.

As explained in our 2014 paper, we based the actual resolution on what was the minimum radius galaxy that the telescope, plus the algorithms used to account for the PSF, classified as an extended source. This could be done unambiguously because the stellarity distribution produced by these algorithms (the ones in the GALEX and HUDF catalogs respectively) had two clear peaks for point sources and extended ones. So setting the threshold anywhere near where we did at 0.4 produces the same results. Unsurprisingly this measure of radius resolution turns out to be half the FWHM resolution. In the 2014 paper we also consider the effect on our conclusions of the uncertainty in the determination of the resolutions.

As you point out, the GALEX designers picked the focal length to produce a wide field of view, which also limited the resolution to a much larger value (poorer resolution) than was optically possible. You can't optimize a single telescope for everything.

For HUDF, we checked the actual resolution in each filter by the same method and did not see a significant variation. This is what the data showed, so this is what we went with.

We do note that at z=5 a large fraction of the galaxies are not resolved. The median is not a good estimate of population mean once most of the galaxies in the redshift bin are not resolved, so HST measurements much beyond z=5 are not going to be highly reliable.
Thanks for your response there Eric.
To help resolve this matter, we have lodged a request directly with GALEX to see if they can provide their actual performance data on angular resolution... (fingers crossed). We'll get back on this when we have their response.
In the meantime, if this thread gets locked, then as an alternative we could continue the conversation at the IS forum ('Evidence against concordance cosmology' thread).
Cheers
 
  • #79
elerner said:
In response to Jonathan Scott, Tolman's calculations are based only on the hypothesis that the Hubble relation is due to expansion. The (1+z)^-3 reduction in surface brightness is independent of any details of how fast the expansion occurs at any point in time. That's why the Tolman test is a test of all expansion models, not any specific ones.
Tolman's 1930 paper clearly refers specifically to a spatially curved (presumably closed) universe, which I think was assumed to be the case at the time, so even if Tolman's calculations are correct, his assumptions are not necessarily correct. I'd say the new paper provides evidence against a spatially curved universe, but I don't know what the relevance of that is to current cosmology.
 
  • Like
Likes Dale
  • #80
elerner said:
To be totally clear, and to repeat what it is in the paper, this is a test of the hypothesis that the universe is expanding using one set of data. An overall assessment requires considering all available data sets. Also, this is a test of the hypothesis that the Hubble relation is caused by expansion. The LCDM model includes this hypothesis, so if there is no expansion, LCDM is also wrong. But LCDM also includes several other hypotheses: hot Big Bang, inflation, dark matter, dark energy, etc.

Your paper has not disproved the expanding universe; as you clearly state in the title, this is a test of a model of size evolution plus an expanding universe. I'm sure you had to negotiate that one with the referee, but it's not some irrelevant point. You have tested some model of size evolution plus concordance cosmology; you clearly take the interpretation that it is the cosmology that is wrong, but you have not demonstrated that.

elerner said:
As explained in our 2014 paper, we based the actual resolution on what was the minimum radius galaxy that the telescope, plus the algorithms used to account for the PSF, classified as an extended source. This could be done unambiguously because the stellarity distribution produced by these algorithms (the ones in the GALEX and HUDF catalogs respectively) had two clear peaks for point sources and extended ones. So setting the threshold anywhere near where we did at 0.4 produces the same results. Unsurprisingly this measure of radius resolution turns out to be half the FWHM resolution. In the 2014 paper we also consider the effect on our conclusions of the uncertainty in the determination of the resolutions.

How exactly do you model this selection effect in the Mo et al. model? You don't describe your model in detail at all, but you state that it predicts that the disk scale length varies as H(z) to some power at fixed luminosity; that doesn't take into account the fact that you have a biased sample.
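For reference, the H(z) scaling being asked about presumably traces back to the Mo, Mao & White (1998) disk model. As a sketch (my reconstruction, not the paper's actual calculation), a halo defined at 200 times the critical density obeys

$$V_c = 10\, H(z)\, r_{200}\,, \qquad R_d \simeq \frac{\lambda}{\sqrt{2}}\, r_{200} \;\propto\; \frac{V_c}{H(z)}\,,$$

so at fixed circular velocity (or fixed luminosity, under an assumed constant mass-to-light ratio) the predicted disk scale length shrinks as ##H(z)^{-1}##; ##\lambda## here is the halo spin parameter.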
 
Last edited:
  • Like
Likes Dale
  • #81
elerner said:
On p. 684 of this reference it gives the resolution (FWHM) of the GALEX telescope as 4.2 arcsec in FUV and 5.3 arcsec in NUV. As explained in our 2014 paper, we based the actual resolution on what was the minimum radius galaxy that the telescope, plus the algorithms used to account for the PSF, classified as an extended source. This could be done unambiguously because the stellarity distribution produced by these algorithms (the ones in the GALEX and HUDF catalogs respectively) had two clear peaks for point sources and extended ones. So setting the threshold anywhere near where we did at 0.4 produces the same results. Unsurprisingly this measure of radius resolution turns out to be half the FWHM resolution. In the 2014 paper we also consider the effect on our conclusions of the uncertainty in the determination of the resolutions. As you point out, the GALEX designers picked the focal length to produce a wide field of view, which also limited the resolution to a much larger value (poorer resolution) than was optically possible. You can't optimize a single telescope for everything. For HUDF, we checked the actual resolution in each filter by the same method and did not see a significant variation. This is what the data showed, so this is what we went with. We do note that at z=5 a large fraction of the galaxies are not resolved. The median is not a good estimate of population mean once most of the galaxies in the redshift bin are not resolved, so HST measurements much beyond z=5 are not going to be highly reliable.
The FWHM values quoted by GALEX seem to be based on the procedures described in 'Section 5. RESOLUTION' (p. 691).

As described in the procedure, bright stars are used, which leads to saturated cores in the PSF.
Saturation or near-saturation leads to high FWHM values; the procedure is performed to show details in the wings.
In reality, if the FWHM values are a true indication of angular resolution performance, then the GALEX scope either (i) has poor optics, (ii) is slightly out of focus, or (iii) is seeing-limited due to atmospheric interference effects. (iii) is obviously not applicable to GALEX and can be eliminated.

Assuming the FWHM values given are a true indication of scope performance, then according to Eric et al's method any value less than the FWHM is a point source and beyond measurement; yet the method's cutoffs are around 50% of the FWHM values for the FUV and NUV bands, which then appears simply to be in error(?)

Also, we maintain that the dependence of resolution on wavelength is still an untested issue with the analysis method, and cannot be ignored.
As per the Hubble site:
Hubble said:
Here we will try to answer the related question of how close together two features can be and still be discerned as separate – this is called the angular resolution.

The Rayleigh criterion gives the maximum (diffraction-limited) resolution, R, and is approximated for a telescope as
R = λ/D, where R is the angular resolution in radians and λ is the wavelength in metres. The telescope diameter, D, is also in metres.

In more convenient units we can write this as:
R (in arcseconds) = 0.21 λ/D, where λ is now the wavelength in micrometres and D is the size of the telescope in metres.

So for Hubble this is:
R = 0.21 × 0.500/2.4 = 0.043 arcseconds (for optical wavelengths, 500 nm) or
R = 0.21 × 0.300/2.4 = 0.026 arcseconds (for ultraviolet light, 300 nm).

Note that the resolution gets better at shorter wavelengths, so we will use the second of these numbers from now on.
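Evaluating the quoted rule of thumb across the four HUDF/ACS bands makes the wavelength dependence explicit (a minimal sketch; the filter wavelengths are approximate values I have assumed):

```python
def r_arcsec(wavelength_um: float, aperture_m: float) -> float:
    """Quoted rule of thumb: R(arcsec) = 0.21 * lambda(um) / D(m)."""
    return 0.21 * wavelength_um / aperture_m

# Approximate wavelengths of the ACS filters used for the HUDF
for name, lam in [("F435W", 0.435), ("F606W", 0.606),
                  ("F775W", 0.775), ("F850LP", 0.850)]:
    print(f"{name}: {r_arcsec(lam, 2.4):.3f} arcsec")
# The limit nearly doubles from F435W (~0.038") to F850LP (~0.074"),
# so a single 0.066" cutoff across every filter would be surprising.
```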
 
  • #82
And here we go with a Galex response:

Galex Performance, http://www.galex.caltech.edu/DATA/gr1_docs/GR1_Mission_Instrument_Overview_v1.htm:

"The design yields a field-averaged spot size of 1.6 arcsec (80%EE) for the FUV imagery and 2.5 arcsec (80%EE) for the FUV spectroscopy at 1600�. NUV performance is similar. There is no in-flight refocus capability".

So, the above 1.6 arcsec figure for the FUV imagery is much higher than the theoretical diffraction-limited performance calculated by us earlier, but it is nowhere near the 4.2 arcsec FWHM in FUV figure used by Eric et al (as being indicative of actual performance)!

 
Last edited:
  • #84
newman22 said:
Galex resolution found here: 4.3 and 5.3 arcsec respectively http://www.galex.caltech.edu/researcher/techdoc-ch2.html
I think that's cited using the FWHM metric, whereas the 1.6 arcsec figure given in the GR1 Optical Design section is based on the Encircled Energy metric (80%) ... all of which then raises another question for Eric:

Was the system modelling used in his "UV surface brightness of galaxies from the local Universe to z ~ 5" paper, to come up with the 1/38 ratio ##\theta_{m,\rm GALEX}/\theta_{m,\rm HUDF}##, sufficiently detailed as to compensate for the two different methods typically quoted for characterising the respective HUDF and GALEX optical performance figures?

If so, then how was this done?
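One way to see that the two metrics are not directly comparable: for an idealized Gaussian PSF (an assumption on my part; the real GALEX PSF has broad non-Gaussian wings, and the quoted "spot size" may be a radius or a diameter) the conversion is fixed:

```python
import math

sigma = 1.0                                           # arbitrary PSF width
fwhm = 2.0 * math.sqrt(2.0 * math.log(2.0)) * sigma   # ~2.355 sigma
r80 = sigma * math.sqrt(2.0 * math.log(5.0))          # EE(r80) = 0.8 -> ~1.794 sigma
print(2.0 * r80 / fwhm)  # 80%EE *diameter* is ~1.52x the FWHM for a Gaussian
```

So a 1.6 arcsec 80%EE spot would correspond to only about 1 arcsec FWHM if the PSF were Gaussian, well below the 4.2 arcsec quoted; this suggests the in-flight PSF is far from the design figure, or the two metrics describe different things, or both.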
 
  • #85
I think it's fair to say that the robustness of the methods used in Lerner+ (2014) (L14), especially those using GALEX data, is critical to this paper (Lerner 2018). For example, in Figure 2:

"The log of the median radii of UV-bright disk galaxies M ~-18 from Shibuya et al, 2016 and the GALEX point at z=0.027 from Lerner, Scarpa and Falomo, 2014 is plotted against log of H(z) ,the Hubble radius at the given redshift"

Remove that "GALEX point at z=0.027" and I doubt that many (any?) of the results/conclusions would be valid.

There seems, to me, to be what could be a serious omission in L14; maybe you could say a few words about it, elerner?

"These UV data have the important advantage of being sensitive only to emissions from very young stars."

Well, AGNs are known to be (at least sometimes) strong emitters of UV. And in GALEX they'd appear to be indistinguishable from PSFs (by themselves). They can also make a galaxy appear to have a lower Sersic index ("Sersic number" in L14) if the galaxy is fitted with a single-component radial profile. Finally, in comparison to the luminosity of the rest of the galaxy, an AGN can range from totally dominant (as in most QSOs) to barely detectable.

My main question about L14 (for now) is about this:

"For the GALEX sample, we measured radial brightness profiles and fitted them with a Sersic law, [...]"

Would you please describe how you did this, elerner? I'm particularly interested in the details of how you did this for GALEX galaxies which are smaller than ~5" (i.e. less than ~twice the resolution or PSF width).

To close, here's something from L14 that I do not understand at all; could someone help me please (doesn't have to be elerner)?

"Finally we restricted the samples to disk galaxies with Sersic number <2.5 so that radii could be measured accurately by measuring the slope of the exponential decline of SB within each galaxy."
 
  • Like
Likes Dale
  • #86
Correction: “... which are smaller than ~10” (i.e. less than ~twice the resolution...)”. Per what’s in some earlier posts, and GALEX, the resolution in both UV bands is ~5”.
 
  • #87
Unless Eric finds time to respond, we're going to have to conclude our concerns as follows:

Eric et al's method allows more GALEX data to be included in the analysis because his GALEX data cutoffs (2.4 and 2.6 arcsec) are ~50% lower than what he cites as the GALEX scope resolution limits (i.e. 4.2 and 5.3 arcsec FWHM for the FUV and NUV respectively). His method doesn't appear to explicitly address and correct for this.

Then, for the Hubble data: the proposed cutoffs don't appear to vary with the wavelength of the observations, even though they approach the theoretical (Rayleigh) optical limits of the scope, which do.

The 1/38 ratio figure used seems to have no relevance in the light of the issues outlined above.

The Hubble data itself thus undermines the methodology, given its failure to find resolution differences in the individual HUDF filter data.

If Eric agrees with the above, then it would be very nice for him to consider some form of formal corrective measures.

Cheers
 
  • #88
Jean Tate said:
Remove that "GALEX point at z=0.027" and I doubt that many (any?) of the results/conclusions would be valid.
Without that point the concordance model is a good fit, as already shown in the previous literature. If that one point is faulty then there isn’t anything else in the paper.

I wondered if something were systematically different in the methodology for that point:
Dale said:
which could therefore have measurements which were non-randomly different from the remainder of the dataset,
 
Last edited:
  • #89
Papers "challenging the mainstream" in MNRAS and other leading astronomy/astrophysics/cosmology peer-reviewed journals are unusual but Lerner (2018) (L18) is certainly not unique.

I think L18 offers PhysicsForums (PF) a good opportunity to illustrate how science works (astronomy in this case). In a hands-on way. And in some detail. Let me explain.

One core part of science may be summed up as "objective, and independently verifiable". I propose that we - PFers (PFarians? PFists?) - can, collectively, objectively and independently verify at least some of the key parts of the results and conclusions reported in L18. Here's how:

As I noted in my earlier post, the robustness of the methods used in Lerner+ (2014) (L14), especially those using GALEX data, is critical to L18. I propose that we - collectively - go through L14 and attempt to independently verify the "GALEX" results reported therein (we could also do the same for the "HUDF" results, but perhaps that might be a tad ambitious).

I am quite unfamiliar with PF's mores and rules, so I do not really have any feel for whether this would meet with PF's PTB approval. Or if it did, whether this thread/section is an appropriate place for such an effort. But I'm sure I'll hear one way or the other soon!

Anyway, as the saying goes, "better to ask for forgiveness than permission". So I'll soon be posting several, fairly short, posts. Posts which actually kick off what I propose, in some very concrete ways.
 
  • #90
Question for elerner: how easy would it be, in your opinion, to independently reproduce the GALEX and HUDF results published in Lerner+ (2014) (L14)?

To help anyone who would want to do this, would you please create and upload (e.g. to GitHub) a 'bare bones' file (CSV, FITS, or other common format) containing the GALEX and HUDF data you started with ("galaxies (with stellarity index < 0.4) vs. angular radius for all GALEX MIS3-SDSSDR5 galaxies and for all HUDF galaxies")? Alternatively, would you please describe where one can obtain such data oneself?

Thank you in advance.
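To make the request concrete, here is a minimal sketch (Python) of what such a 'bare bones' file would enable; the file name and column names are hypothetical placeholders, not anything elerner has published:

```python
from astropy.table import Table
import numpy as np

# Hypothetical file and columns; the real ones would come from elerner.
cat = Table.read("lerner2014_galex_sample.fits")
counts, edges = np.histogram(cat["stellarity"], bins=50)  # expect two clear peaks
galaxies = cat[cat["stellarity"] < 0.4]                   # extended sources, per L14
print(len(galaxies), float(np.min(galaxies["radius_arcsec"])))
```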
 
