In a recent campus talk, one of the LIGO members who co-authored last February's GW detection announcement paper (she was very proud that her small team's analysis had actually made it into the final published paper) explained something to the audience that I didn't fully get. She said that they had been monitoring a number of binary pulsar systems in our galaxy close enough that they should have produced a detectable signal at the sensitivity achieved in last autumn's Observing Run 1, but that the fact no signal was detected meant that only 1% to 10% of the systems' orbital decay energy was being radiated in the form of GWs, and it therefore went undetected.

However, all the models of gravitational radiation I know start from the premise that all (100%) of the energy lost by the binary system is radiated as GWs. That is the case in the Hulse-Taylor indirect proof of GWs, and I thought it was also the premise LIGO used to model the BH merger of the direct detection, but now I'm not so sure: instead of 3 solar masses, could more or less energy have been radiated, if one is not demanding that it be 100% of the energy lost by the system? So I don't know exactly what to make of the non-detection in the case of nearby pulsars.

If the energy radiated is not necessarily the one obtained from the quadrupole formula, or from the total energy computed at lightlike infinity, how does one compute it? Or is it an adjustable parameter that must be fitted according to what is or is not observed?
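To make the "100% premise" concrete, here is a quick numerical sketch of the standard quadrupole-formula prediction (the Peters-Mathews result) for the orbital period decay of the Hulse-Taylor binary PSR B1913+16, under the assumption that all of the orbital energy loss goes into GWs. The parameter values are the approximate published ones; this is an illustration of the textbook calculation, not LIGO's actual pipeline:

```python
# Quadrupole-formula (Peters-Mathews) prediction for the orbital period
# decay of the Hulse-Taylor binary pulsar PSR B1913+16, assuming 100% of
# the orbital energy loss is radiated as gravitational waves.
import math

G = 6.674e-11        # m^3 kg^-1 s^-2
c = 2.998e8          # m/s
M_sun = 1.989e30     # kg

m1 = 1.4398 * M_sun  # pulsar mass (published value, approximate)
m2 = 1.3886 * M_sun  # companion mass
Pb = 27906.98        # orbital period in seconds (~7.75 h)
e = 0.6171           # orbital eccentricity

# Chirp mass of the binary
Mc = (m1 * m2) ** 0.6 / (m1 + m2) ** 0.2

# Peters' eccentricity enhancement factor
f_e = (1 + (73 / 24) * e**2 + (37 / 96) * e**4) / (1 - e**2) ** 3.5

# Predicted orbital period derivative dPb/dt (dimensionless, s/s)
Pb_dot = -(192 * math.pi / 5) * (2 * math.pi * G * Mc / (c**3 * Pb)) ** (5 / 3) * f_e

print(f"predicted dPb/dt = {Pb_dot:.3e} s/s")
# The measured intrinsic dPb/dt is about -2.40e-12 s/s, matching this
# 100%-radiated prediction to well under a percent.
```

The agreement between this prediction and the observed decay is exactly why I took the "all of the lost energy comes out as GWs" premise for granted, which is what makes the 1%-to-10% statement from the talk confusing to me.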