disregardthat said:
Logic is part of the structure of language. For example: that we can infer A from (not (not A)) is because that is how the word "not" is used, it is how the word functions in conjunction with propositions.
You seem to be arguing a Wittgenstein "meaning is use" approach here. The atomic elements of logic (the operations and quantifiers like: "not", "or", "and", "there exists", "for all") are true by definition. They are true because that is the system as we invented it.
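As a quick illustration of that "true by definition" status (my own sketch, not anything from the discussion above): the double-negation inference holds simply because enumerating the two possible truth values exhausts all cases.

```python
# Double negation: not(not A) agrees with A for every truth value A can take.
# Exhaustive case-checking is all "true by definition" amounts to here.
for A in (True, False):
    assert (not (not A)) == A
print("not(not A) == A for all truth values")
```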
Yet there still seems to be a need to justify this invention. We use these syntactic elements not simply as an arbitrary custom, but because they seem fundamentally true to us. So what is the basis of that truth?
As Tarski's undefinability theorem suggests, this basis is not going to be found within the terms themselves. We can't use the logical elements to prove themselves.
But still, there seems to be rationalistic support for them (arguments from reasonableness) as well as empirical support (arguments from observation).
The empirical support comes from successful modelling of the world. If we think about things this way, we can see that modelling is a form of analysis that gives us control over the events described.
And we can trace a history of improvement in such modelling. Animal minds evolved to be pretty successful at controlling their worlds. There is a proto-reasoning in that animals are good at forming effective habits. Then human language lifted reasoning to another level. Every sentence is a statement of cause and effect (subject, verb, object, or "who did what to whom"). Then philosophers, principally Aristotle, refined the business of reasoning into logic - a mathematical-strength syntax for doing world modelling. A formal model of causality.
And it was only this last bit that has been consciously rational in its justification. As with the law of the excluded middle, there was a careful, step-by-step development that pared away the arbitrary so as to leave only what could not be denied as correct - correct through self-consistency, symmetry, irreducibility. Or, if we look really closely, through the argument from dichotomy. The arrival at metaphysical categories that are complementary pairs, formed by demanding that they be mutually exclusive and jointly exhaustive.
(I know I keep saying the same thing, but it is the very thing that history seems to have forgotten - for reasons closely tied to what logic has become.)
So, for example, the law of the excluded middle itself was the demand that statements be framed in ways that make them either true or false. Ordinary language always has to be about something, even when referring to unicorns. So semantically, truth and falsehood are rather fuzzy judgements (unicorns might exist undiscovered, the idea at least exists, the component parts of horse and horn exist as facts, etc). But Aristotle cut the syntax free of the semantics. He said that if statements could be constrained so that all vagueness, indeterminacy, etc, were eliminated, then they would have a purely syntactic status based on the binary dichotomy of true~false.
So there was a rationalistic process of clarification. In real life - based on empirical experience and the inescapable vagueness of all semantic claims - nothing is actually completely false or true in a way that we can definitely know via our experience. But syntax is where we get to make things true by definition. We say there just exists this general dichotomy of true~false as a global constraint on properly formed (ie: logical) statements. Of course, this leap from experience to true-by-definition is immediately justified in pragmatic terms - we find that this further step works to improve our modelling of reality. But a leap still gets taken.
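Just as an illustrative sketch (mine, and only for two-valued logic): once true~false is imposed as a global constraint, the excluded middle becomes a syntactic tautology that brute enumeration verifies.

```python
# Law of the excluded middle: "A or not A" holds under every assignment.
# Two-valued semantics is assumed - which is exactly the constraint being discussed;
# a semantically fuzzy third value would be outside this enumeration.
assert all((A or not A) for A in (True, False))
print("A or not A is a tautology in two-valued logic")
```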
(Again, all this is a re-hash of Pattee's epistemic cut, Rosen's modelling relations, and other modern schools of epistemology - even though these guys are scientists rather than philosophers.)
What then of the atomic elements of logical syntax - the fundamental vocabulary of "not", "or", "and", "there exists", "for all"? Where are the dichotomies that form the rational basis of these?
The quantifiers "there exists" and "for all" are simply the standard metaphysical dichotomy of particular~general (or one~many, specific~universal, local~global - the other allied ways of saying the same thing). "There exists" is a statement about particular existence, and "for all" is a statement about general existence.
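To make the particular~general pairing concrete (a toy finite-domain sketch of my own, nothing deeper claimed): "there exists" asks whether at least one particular case holds, "for all" asks whether the claim holds across every case, and negation carries one into the other.

```python
# Quantifiers over a small finite domain (a toy model only).
domain = [1, 2, 3, 4]

exists_even = any(n % 2 == 0 for n in domain)  # particular: some case holds
forall_pos = all(n > 0 for n in domain)        # general: every case holds
assert exists_even and forall_pos

# The duality: "not for all" is "there exists a counterexample".
assert (not all(n % 2 == 0 for n in domain)) == any(n % 2 != 0 for n in domain)
print("particular~general duality checked on the toy domain")
```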
You then have the three qualifiers of negation, addition and exclusion. In general, they rely on the atomistic hypothesis - the belief that reality reduces to atoms in a void. You have located objects with properties whose existence is irreducible. And they then freely do their thing in a void - an a-causal backdrop that just is, and does not itself influence the goings-on happening within it.
So the metaphysical dichotomy is atom~void. And it must be noted that it is not at all realistic in fact. The demand is that all causality is reduced to a collection of parts. The whole exists only in a way that does not actually count. But still, this was a useful point of view. It certainly did not capture the whole truth of reality, but it was a stunningly successful partial story because it was so simplified, and allowed so much to be ignored.
A syntax of logic was then developed from this metaphysics. A leap was made that treated atomism as complete truth.
Atomism is based on the idea of local additive construction - material cause plus effective cause in the Aristotelian scheme.
So negation is legitimated by the binary of atom or void - there are only two possible choices, that something exists, or that it does not exist.
Addition is legitimated by the atomistic irreducibility of existence and properties. If something exists, then that doesn't change. So two things coming together are the sum of what existed.
Exclusion is legitimated again by atomism in that if something definitely exists at a location, then nothing else can exist at the same place. It becomes a definite case of either/or.
So atomism captures a particular mental image of causality (causal atoms behaving completely freely within an a-causal void - ie: material/effective cause). And then rationalistic argument extracts a syntactical basis for logic from this. If atomism were true, these must be its consequences. The most basic qualifications of a state of affairs will be in terms of negation (does something exist - yes or no?), addition (what exists can only be summed - a conservation principle for atomistic essences), and exclusion (if something exists at a spot, then nothing else can exist at the same spot).
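The three qualifications can be played out in a toy atoms-in-a-void model (entirely my own construction - the mapping of the qualifiers onto these rules is an interpretive assumption, not something argued above): existence at a location is a strict yes/no, worlds combine by summing their contents, and an occupied location excludes any second occupant.

```python
# Toy atoms-in-a-void model (an illustrative assumption, not the author's formalism).
# A "world" maps locations to atoms; the void is the empty mapping.
void = {}

def place(world, loc, atom):
    """Add an atom at a location, respecting the exclusion principle."""
    if loc in world:
        # exclusion: nothing else can exist at an occupied spot
        raise ValueError(f"location {loc} already occupied")
    new = dict(world)  # addition: the new world is the sum of what existed
    new[loc] = atom
    return new

w = place(void, (0, 0), "a")
w = place(w, (1, 0), "b")

# negation: existence at a location is a definite yes/no
assert (0, 0) in w and (2, 2) not in w
print("toy model respects negation, addition and exclusion")
```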
Again, this is a syntax that works. But it is also rationally invented at the final step. An epistemic cut has to be made between the semantics and the syntax so as to have a syntax at all. At some point, we cut the cord. We say atomism seems close enough to the empirical truth of reality to just make the jump and treat it as actual known truth. And we get on with using this invented tool of reasoning.
Of course, as I also keep saying, atomism is itself one half of a metaphysical dichotomy. There is also the holistic or systems view of causality. In ancient Greece, the two views were still being entertained. If you read Aristotle's Metaphysics, it is largely an attempt to reconcile the two apparent extremes of causality. This is why he talked about four causes - including formal and final cause in his scheme. The "causes of the void", we might say. The global, top-down causality of constraints on local freedoms.
But the simple logic represented by atomism took off. The other possible half of logic has languished. It has flared up in the work of Hegel and Peirce. It is there again in systems science and semiotics and other attempts to frame a more holistic model of causality.
So there is a "mental image" of holism, just as there was of atomism. But it has not been developed into a completely stripped-down syntax. Although see this paper for some playing around with the possibilities.
http://homepages.math.uic.edu/~kauffman/CHK.pdf
This essay explores the Mathematics of Charles Sanders Peirce. We concentrate on his notational approaches to basic logic and his general ideas about Sign, Symbol and diagrammatic thought.

In the course of this paper we discuss two notations of Peirce, one of Nicod and one of Spencer-Brown...
...The reason, I believe, that portmanteau and pivot are so important to find in looking at formal systems, and in particular symbolic logic, is that the very attempt to make formal languages is fraught with the desire that each term shall have a single well assigned meaning. It cannot be! The single well-assigned meaning is against the nature of language itself. All the formal system can actually do is choose a line of development that calls some entities elementary (they are not) and builds other entities from them. Eventually meanings and full relationships to ordinary language emerge. The pattern of pivot and portmanteau is the clue to this robust nature of the formal language in relation to human thought and to the human as a Sign for itself...
...It is important to note that with the primary arithmetic, Spencer-Brown was able to turn the epistemology around so that one could start with the concept of a distinction and work outwards to the patterns of first order logic. The importance of this is that the simplicity of the making (or imagining) of a distinction is always with us, in ordinary language and in formal systems. Once it is recognized that the elementary act of discrimination is at the basis of logic and mathematics, many of the puzzling enigmas of passing back and forth from formal to informal language are seen to be nothing more than the inevitable steps that occur in linking the simple and the complex...