Sensitivity Analysis for Missing Data: Picking Values to Try Not to Reject the Null

AI Thread Summary
The discussion centers on handling missing data in a large dataset with less than 10% missing values, specifically in the context of fitting a hazard model to analyze survival time of land parcels. The author is exploring a method to assign values to missing data that would minimize the chance of rejecting a null hypothesis regarding the slope coefficient of topographic slope. They question the feasibility of this approach, especially in a multivariate context, and whether to address each hypothesis separately or simultaneously. Concerns about potential logical pitfalls in this strategy are raised, along with a request for alternative modeling suggestions. The author seeks feedback on their proposed method and expresses frustration over the lack of responses.
wvguy8258
Hi,

I have a large data set with less than 10% missing values (the response is missing, but all predictor variables are present). It is a near certainty that these values are not missing at random: the missingness depends on the unobserved value itself. The response is the survival time of a land parcel, with 'death' being development of the parcel. Predictors are things like the average topographic slope in the parcel. I plan to fit a hazard model to the data to test hypotheses about the sign and magnitude of the slope coefficients. I've read a bit about methods for dealing with missing data, but because I am primarily interested in testing hypotheses, I suspect a simpler method may be available that I haven't yet seen in print. I am asking here for advice on the feasibility of the simple idea below, how it can be improved, and for any pertinent references.
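For concreteness, here is a minimal sketch of the kind of fit I have in mind, assuming a Cox proportional-hazards model (lifelines' CoxPHFitter) and made-up column names and values; the actual hazard specification is still open.

```python
# Minimal sketch of the hazard-model fit, assuming a Cox PH model.
# All column names and values below are made up for illustration.
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical parcel records: duration = years from first colonization to
# development (or to 2009 if still undeveloped), event = 1 if developed.
parcels = pd.DataFrame({
    "duration":   [12.0, 30.0, 7.0, 41.0, 55.0, 18.0, 26.0, 60.0],
    "event":      [1, 1, 1, 0, 0, 0, 1, 0],
    "mean_slope": [2.1, 8.4, 1.3, 15.0, 22.7, 5.5, 12.0, 3.2],  # avg. topographic slope
    "dist_road":  [0.4, 3.2, 0.1, 6.5, 9.8, 1.1, 4.0, 2.5],     # another example predictor
})

cph = CoxPHFitter()
cph.fit(parcels, duration_col="duration", event_col="event")
cph.print_summary()  # coefficient signs, standard errors, p-values
```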

The survival time is bounded. I am taking the beginning of colonization of the area as the start of the study period and the present as its end, so the response variable is bounded between zero and 2009 minus the time of first colonization.

Suppose I have a very simple hypothesis that the coefficient on topographic slope is less than zero, so my null hypothesis is that it is greater than or equal to zero. It seems that I could pick values for the missing data so as to minimize the chance of rejecting this null hypothesis. If I still find evidence to reject the null under this extreme assignment, then it is reasonable to conclude that the full data set, had the missing values been observed, would likewise lead to rejection. So, in the example of topographic slope, I would assign the missing responses values that would give the largest possible slope coefficient given the observed data (and the smallest variance around a high parameter estimate? I'm less sure how to think about that part).

First, are there any logical pitfalls I am falling into here? This seems rather straightforward with only one predictor in the model, but I suspect a multivariate model will complicate things. Should each hypothesis (corresponding to each slope coefficient of interest) be considered separately? That is, should I concoct one set of missing values to try not to reject the null for hypothesis 1, then start over and do the same for hypothesis 2, and so on? Or should this all be done at once? If you think this is a bad idea, in a few words, how would you go about modeling the data I've described? Thanks. -seth
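Here is a rough sketch of the worst-case fill-in idea, again assuming a Cox PH model fit with lifelines and one hypothesis (the mean_slope coefficient) per run; the column names and the candidate fill-ins are made up, and the exhaustive enumeration only scales to a handful of missing parcels (beyond that one would need a monotonicity argument or a search instead).

```python
# Worst-case ("try not to reject") fill-in sensitivity check, assuming a
# Cox PH model and one hypothesis per run.  Illustrative sketch only.
import itertools
import pandas as pd
from lifelines import CoxPHFitter

def slope_coef_and_p(df):
    """Fit the hazard model; return (coefficient, p-value) for mean_slope."""
    cph = CoxPHFitter()
    cph.fit(df, duration_col="duration", event_col="event")
    return cph.params_["mean_slope"], cph.summary.loc["mean_slope", "p"]

def least_favourable_fit(observed, missing, candidates):
    """Refit under every combination of candidate (duration, event) fill-ins
    for the parcels with missing responses and keep the fit least favourable
    to rejecting the null (largest p-value).  `missing` has the same columns
    as `observed`, with NaN duration/event.  Exhaustive enumeration is
    2**k for k missing parcels, so this only works for small k."""
    worst = None
    for combo in itertools.product(candidates, repeat=len(missing)):
        filled = missing.copy()
        filled[["duration", "event"]] = list(combo)  # row-wise fill-in
        coef, p = slope_coef_and_p(pd.concat([observed, filled]))
        if worst is None or p > worst[1]:
            worst = (coef, p)
    return worst

# Candidate extreme responses for a missing parcel: developed almost
# immediately vs. still undeveloped (censored) at the end of the study.
# The censoring bound should really be 2009 minus that parcel's own
# colonization year, so per-parcel bounds would replace the 60.0 here.
candidates = [(0.1, 1), (60.0, 0)]
# worst_coef, worst_p = least_favourable_fit(observed_parcels, missing_parcels, candidates)
```

Each slope coefficient of interest would get its own run of this, which is the "one hypothesis at a time" version; doing everything at once would mean finding a single fill-in that is simultaneously least favourable to every null, which is a different (and harder) problem.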
 
Can someone at least tell me why they read and didn't respond?
 