It is also worth recognizing that the relative amounts of experimental vs. theoretical work differ profoundly by subdiscipline within physics.
Low energy quantum physics experiments are small-budget, single-university, laboratory-scale efforts involving a couple to half a dozen scientists. Usually they don't discover things that aren't already strongly suspected from existing Standard Model theory, but they can probe important corners of untested assumptions, or demonstrate in very compelling ways things that are known theoretically but still deeply counterintuitive. There are thousands of experiments like this going on, most of which don't make headlines.
We have hundreds of small- to medium-budget astronomy instruments and research projects ("telescopes" really doesn't do justice to devices that cover the entire EM spectrum, plus gravitational waves, neutrinos, and other non-photon particles from space deceptively called "cosmic rays"; some are space-based, some coordinate global data sets and new methods, and some are designed to detect dark matter or axions), run by a lot of independent groups, producing a torrent of new experimental results on a daily basis. So while there are some serious theoretical issues being explored, there is a lot of incoming new and improved data with which to distinguish between and evaluate the theories. Individual astronomy collaborations have scientists numbering in the dozens or fewer (although there are fairly ad hoc coalitions of collaborations, aggregating multiple instruments and observations, that are collectively quite large).
As a practical matter, this means that some of the big unsolved problems, particularly those related to dark matter and dark energy, have a rapidly growing body of experimental data that can be used to test, constrain, and devise theories.
Neutrino physics is intermediate. There are about a dozen major neutrino-oriented experiments out there, ranging from reactor-source neutrinos, to natural-source neutrinos from the Earth and space, to neutrinoless double beta decay searches, and these are also "medium budget" enterprises, typically involving dozens to hundreds of scientists in their collaborations. This is generating enough new data that theoretical work doesn't have to stray too far from the data.
We've gone, in a relatively short time as science history goes, from discovering that neutrinos have mass and oscillate, to measuring the parameters related to the oscillation and bounding the potential absolute neutrino masses, with workable precisions that are usually as good as or better than what QCD has achieved after six decades of active experimentation.
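(For the curious, the oscillation parameters in question are mixing angles θ and mass-squared splittings Δm². In the standard two-flavor approximation, the probability that a neutrino of energy E changes flavor over a baseline L is

P(ν_a → ν_b) ≈ sin²(2θ) · sin²(1.27 · Δm²[eV²] · L[km] / E[GeV]),

so experiments can measure the θ's and Δm²'s directly by varying L and E. Note that this formula is sensitive only to mass differences, which is why the absolute neutrino masses have to be bounded by other means.)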
In contrast, in high energy physics, several of the active experiments are different collaborations all using the Large Hadron Collider, at unprecedentedly high cost, and there are maybe two or three other, much less ambitious "medium budget" experiments currently collecting data. The LHC involves several thousand scientists. It is in HEP that the flow of new data is a trickle compared to the theoretical output.
The LHC has discovered the Higgs boson and is devoting immense effort to confirming that it behaves like the SM Higgs boson, which it does across the board within margins of error that are rapidly shrinking. The LHC has also ruled out myriad other kinds of new physics, the search for which was the main justification for building it. It has found some anomalies (e.g., the possible lepton universality violations in B meson decays), but for a variety of reasons most of the anomalies it has found so far have evaporated as more data was collected, and there is every reason to expect the same fate for the current ones.
One dirty little secret of HEP is that QCD experiments routinely have theoretical predictions, derived by different methods, that are all over the map, and not infrequently QCD experiments produce results seriously in tension with those predictions. But for the most part this doesn't make headlines, because both the predictions and the measurements are routinely so imprecise compared to EM and weak force processes (accuracy at the 1% level or worse is pretty much par for the course), because many past discrepancies have not persisted, and because nobody really thinks that the very elegant, but very difficult to operationalize, version of QCD that the Standard Model enshrines is wrong. Almost everybody blames poor approximation methods, rather than a fundamental flaw in the underlying theory, when experiment and theory don't match.
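A back-of-the-envelope illustration of why, using standard textbook values rather than figures from any particular experiment: the strong coupling at the Z mass is α_s(M_Z) ≈ 0.118, versus α_EM ≈ 1/137 ≈ 0.0073 for electromagnetism. Truncating a perturbative QCD series at next-to-leading order leaves uncomputed terms of relative size roughly α_s² ≈ 1.4%, while the corresponding truncation in QED leaves terms of order α² ≈ 0.005%. So percent-level theory uncertainty is built into all but the most heroic QCD calculations before any experimental error enters at all.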