# Participant non-compliance: How to include data in analysis

1. May 17, 2017

### Carmen_41

Hello,

I hope this is the appropriate section to post this question:

I conducted a study in which participants were to complete one of three treatment options and then write a test. The intention was to group the participants by the option they selected and then use a one-way ANOVA to test whether mean test grades differed significantly across the groups.

The numbers of participants who did exactly one treatment option were as follows:
Option 1, n = 27
Option 2, n = 98
Option 3, n = 69

The issue I ran into is that several participants misread the instructions and did all three options, or some combination of two of the three options.

The numbers of participants who did more than one treatment option:
Options 1, 2 and 3, n = 13
Options 1 and 2, n = 17
Options 1 and 3, n = 10
Options 2 and 3, n = 20

Is there a way to use the test data from this group of people (those that did more than one treatment option) in the statistical analysis? For example, can I somehow compare these groups’ test grades to those of the single option groups? Or should they be excluded from the analysis (i.e. per-protocol analysis)?
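For reference, the per-protocol version of the analysis (single-option groups only, non-compliers dropped) is a plain one-way ANOVA. A minimal sketch using `scipy.stats.f_oneway`, with simulated placeholder grades rather than the study's real data:

```python
# Per-protocol sketch: one-way ANOVA on test grades across the three
# single-option groups, excluding participants who did more than one option.
# Grades are simulated placeholders; only the group sizes match the study.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
option1 = rng.normal(72, 10, size=27)  # n = 27
option2 = rng.normal(75, 10, size=98)  # n = 98
option3 = rng.normal(74, 10, size=69)  # n = 69

f_stat, p_value = stats.f_oneway(option1, option2, option3)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")
```

A significant result here only tells you that at least one group mean differs; pairwise follow-ups (e.g. Tukey's HSD) would be needed to say which.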

I looked into complier-average causal effect (CACE) analysis, however in this case I don’t have a control group and there’s no way to determine which of the treatment options (for those that did more than one) had the effect, if any, on the test grade.

Any guidance you can provide is greatly appreciated. Thank you!
-Carmen

2. May 17, 2017

### Staff: Mentor

I think they should be excluded. They did show up, but they didn't actually participate in the experiment. They participated in a different experiment that is not the one you planned.

3. May 17, 2017

### FactChecker

You have a typical factorial experiment. Experiments where different combinations of factors are present in the data are the norm, not the exception. (And now you know why that is true even in planned, "controlled" experiments.) Instead of being a problem, those experiments can be more efficient than "one factor at a time" experiments. Even though you didn't plan it this way, I think that you should not throw out the unplanned treatments. I think you should see what ANOVA says about the results.

See https://en.wikipedia.org/wiki/Factorial_experiment
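One way to see the factorial framing concretely is to recode each participant as a triple of binary indicators (did option 1? option 2? option 3?) instead of a single group label; the multi-option participants then occupy additional cells of a 2×2×2 design rather than being discarded. A minimal sketch with illustrative placeholder grades (only the cell counts match the study), using a crude difference-of-means estimate for each factor's main effect:

```python
# Factorial-style sketch: each option becomes a binary factor (done / not
# done). A crude main-effect estimate for option k is the mean grade of
# everyone who did it minus the mean of everyone who did not.
# Grades are simulated placeholders; cell sizes match the study's counts.
import numpy as np

rng = np.random.default_rng(1)
# (opt1, opt2, opt3) indicator for each observed cell -> number of participants
cells = {
    (1, 0, 0): 27, (0, 1, 0): 98, (0, 0, 1): 69,   # single-option groups
    (1, 1, 1): 13, (1, 1, 0): 17, (1, 0, 1): 10, (0, 1, 1): 20,  # mixed
}
X, grades = [], []
for combo, n in cells.items():
    X.extend([combo] * n)
    grades.extend(rng.normal(70 + 2 * sum(combo), 10, size=n))
X = np.array(X)          # shape (254, 3): one indicator row per participant
grades = np.array(grades)

for k in range(3):
    did = grades[X[:, k] == 1].mean()
    didnt = grades[X[:, k] == 0].mean()
    print(f"Option {k + 1} main effect (crude): {did - didnt:+.2f}")
```

In practice one would fit a full factorial ANOVA (e.g. an OLS model with all interaction terms) rather than raw mean differences, since the design here is unbalanced and self-selected; the sketch only shows the data layout the factorial view implies.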

4. May 17, 2017

### Staff: Mentor

If you do that then you no longer have a prospective controlled experiment. You now have a retrospective observational study.

Such studies are certainly possible to analyze, but they are usually considered less credible, and a number of statistical pitfalls open up. In particular, they are vulnerable to p-hacking and to multiple-comparison problems in general.