I'm taking my first formal logic class, and some things seem contradictory. I know it's because I'm not fully understanding something, but I don't know what I'm missing; I hope someone can help me. The problem is:

The statement I'm having trouble with is...

{ p(a), p(b), p(f(a)), p(f(b)) } ⊢_{Fitch} ∀x.p(x)

... which I marked true. I'm even able to prove ∀x.p(x) using only p(a) as a premise. But the answer is false, and I'm told "p may not hold for terms like f(f(a)), f(f(b)), and so forth." How could it not? Why would p(f(f(a))) not hold if ∀x.p(x)?
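One way to see the grader's point is to build a concrete countermodel in which all four premises are true but ∀x.p(x) is false. Here is a sketch in Lean (my own construction, not from the course): take ℕ as the domain, with a := 0, b := 1, f := successor, and p(x) := x ≤ 2.

```lean
-- Countermodel sketch (hypothetical interpretation, not from the course):
-- domain ℕ, a := 0, b := 1, f := successor, p(x) := x ≤ 2.
abbrev a : Nat := 0
abbrev b : Nat := 1
abbrev f (x : Nat) : Nat := x + 1
abbrev p (x : Nat) : Prop := x ≤ 2

-- All four premises hold in this model:
example : p a ∧ p b ∧ p (f a) ∧ p (f b) := by decide

-- ...but p fails at f(f(b)) = 3, so ∀x.p(x) is false here:
example : ¬ p (f (f b)) := by decide
```

Since the premises can all be true while the conclusion is false, no sound proof system (Fitch included) can derive the conclusion from those premises.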

What occurs to me as I finish typing this is that I'm misunderstanding what ⊢_{Fitch} really means, which is "prove using the Fitch system and no aspects of Herbrand logic." The only way to prove ∀x.p(x) is by using Universal Introduction and Elimination—which... is not covered by the provability operator ⊢_{Fitch}?

... ?
Unless you mean I can't prove it within L_{2}, in which case I wouldn't have the slightest clue as to why; it'd be the same thing. Perhaps there's some implication intrinsic to the conclusion that I've yet to learn about.

Could you be... um... helpful when you next respond?

Universal Introduction doesn't work that way. Why on Earth would it follow from the fact that, say, p holds for the number 37 that p holds for all x? a and b are constants, not variables. Universal Introduction lets you conclude ∀x.p(x) only after deriving p(x) for an arbitrary variable x—one about which nothing has been assumed—not for particular named terms like a or f(b).
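The asymmetry between the two quantifier rules can be sketched in Lean (a minimal illustration over an abstract domain U; the names are mine, not the course's):

```lean
-- Abstract signature: a domain U, a constant a, a function f, a predicate p.
variable {U : Type} (a : U) (f : U → U) (p : U → Prop)

-- Universal Elimination: *if* you had ∀x.p(x), then p(f(f(a))) would follow,
-- since a universal claim instantiates to any term.
example (h : ∀ x, p x) : p (f (f a)) := h (f (f a))

-- Universal Introduction needs p for an *arbitrary* x. From p(a) alone the
-- generalization is rejected (uncommenting this gives a type error: h proves
-- p a, not p x):
-- example (h : p a) : ∀ x, p x := fun x => h
```

The direction the original question worried about (∀x.p(x) entailing p(f(f(a)))) is the valid one; it's the reverse step, generalizing from constants, that fails.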