My retelling of the classic story of Hilbert’s Hotel had one more point beyond “Think about infinity!” It was about how dangerous it is to think one understands something merely because the description is vivid and lifelike, and how dangerous it is to treat “how something appears” as the be-all and end-all of something important. We claim to have learned this lesson with people, yet there are schools of philosophy and areas of the discipline (such as the debate over qualia) where it seems that appealing to appearances is all many people do.
Was anyone thinking they’d like to spend their vacation in such a wonderful place? I admit I am a better philosopher than a story-teller, so perhaps not!
Moving on, let’s spell out why in classical logic everything follows from a contradiction. Take a contradiction, A&~A. Then use conjunction elimination and get A. Apply disjunction introduction and get AvB. Apply conjunction elimination and get ~A. Apply disjunctive syllogism and get B.
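To make those four steps concrete, here is a minimal machine-checked sketch in Lean 4; the theorem name and hypothesis names are mine, purely for illustration:

```lean
-- Explosion (ex contradictione quodlibet), following the four steps above.
theorem explosion (A B : Prop) (h : A ∧ ¬A) : B := by
  have hA  : A     := h.left     -- conjunction elimination: A
  have hAB : A ∨ B := Or.inl hA  -- disjunction introduction: A ∨ B
  have hnA : ¬A    := h.right    -- conjunction elimination: ¬A
  -- disjunctive syllogism: the A branch is impossible, so B follows
  cases hAB with
  | inl hA' => exact absurd hA' hnA
  | inr hB  => exact hB
```

Note that Lean discharges the impossible branch with absurd, i.e. by falsum elimination, so this is a reconstruction of the argument inside classical (indeed intuitionistic) logic, not a neutral adjudication of the dispute below.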
This strikes people as a bit weird sometimes. In fact, there are at least two ways to avoid this weirdness.
One: Deny that conjunction elimination can be applied to the contradiction at either of the two steps above. This view is sometimes attributed to Aristotle, though I am not sure that attribution is correct. Note, though, how hard it is to state such a rule in the terms we are used to in classical logic: one needs a rule that is sensitive to several of the sub-formulas at once. This line of thought can lead to so-called connexive logic. (Not “connective”.)
Two: Deny that disjunctive syllogism is a valid rule. This is the route taken by the so-called “relevant” or “relevance” logics of our day, often associated with the logicians Anderson and Belnap (and, in a different way, with the very challenging G. Priest).
Do you think either of these approaches solves the problem? Is there even a problem? What would be necessary to convince you they are right? Remember that you are talking about how one argues for a logic, which, among other things, is itself used to argue. Logic changed historically — how did that happen? Can one rationally reconstruct that process? Finally, if one decides logic has to change, how does one rewrite all of current explicit reasoning to use the new logic (e.g., in mathematics)?


Avoiding the tedious and sometimes very unnatural “Natural Deduction rules”, with their never-ending “Introductions” and “Eliminations”, which philosophy profs often insist on when inflicting on students what I sarcastically call training to become ‘Chartered Logic Accountants’, the following is perfectly rigorous, and seems far more natural and acceptable. To see that the formal language’s A&~A —> B is derivable, it suffices instead to see that object (speaking in the metalanguage, AKA English, where it is an object, not a statement) as being, in the syntactic sense, logically equivalent (see 1/ below) to A&~B —> A. But the latter is obviously tautological (see 2/ below), again in the syntactic sense. By “syntactic” I mean, in effect, no truth tables, indeed no mention of the word ‘truth’. The ‘—>’ is of course material implication.
Both the technicalities above are special cases of facts that are even more natural IMHO:
1/ A&~C —> D and A&~D —> C are logically equivalent, for any A, C, D;
2/ A&C —> A is trivially tautological, for any A and C (this last, I admit, is barely more general than the instance it covers).
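For readers who want to check the bookkeeping, here is a minimal Lean 4 sketch of 1/ and 2/, plus the instantiation C := A, D := B that recovers the equivalence claimed above; the theorem names are mine, and the sketch leans on Lean’s classical axioms rather than the purely syntactic route described here:

```lean
-- Schema 1/: A ∧ ¬C → D and A ∧ ¬D → C are (classically) interderivable.
theorem swap_conclusion (A C D : Prop) :
    (A ∧ ¬C → D) ↔ (A ∧ ¬D → C) := by
  constructor
  · intro h p
    exact Classical.byContradiction fun hnC => p.right (h ⟨p.left, hnC⟩)
  · intro h p
    exact Classical.byContradiction fun hnD => p.right (h ⟨p.left, hnD⟩)

-- Schema 2/: A ∧ C → A is trivially a theorem (conjunction elimination).
theorem conj_left (A C : Prop) : A ∧ C → A := fun p => p.left

-- The special case used above: take C := A and D := B in schema 1/.
example (A B : Prop) : (A ∧ ¬A → B) ↔ (A ∧ ¬B → A) := swap_conclusion A A B
```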
I doubt that a person as great as Aristotle doubted 2/, as you say. But if he did, he is so much the less “great” than I happily grant him to be for having founded formal logic.
As far as 1/ is concerned, the philosophical quibblers (here merely quibbling about the use of the word “implication”) may be quickly shut down by DEFINING the ‘—>’ in terms of conjunction (&) and negation (~), connectives undisputed by those non-Priests, Graham or otherwise, who are sane: ‘E—>F’ means just ~(E&~F). Then the two objects in 1/ become the slightly messier ~(A&~C&~D) and ~(A&~D&~C), which are clearly logically equivalent by the perfectly natural indifference of ‘&’ to permuting its ingredients and to how you insert brackets when ‘&’ing more than two of them. (These are known to us mathematicians as commutativity and associativity, but there is no need here to snow jargon onto the general public, at least that shrinking segment willing to read four paragraphs.)
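For completeness, the same definitional move can be spelled out mechanically; a minimal Lean 4 sketch follows, in which impl is simply my own name for the defined arrow ~(E&~F):

```lean
-- Read 'E —> F' as ~(E & ~F), per the definition above.
def impl (E F : Prop) : Prop := ¬(E ∧ ¬F)

-- Under that reading, the two sides of schema 1/ become the two negated
-- conjunctions ~(A & ~C & ~D) and ~(A & ~D & ~C); their equivalence is just
-- a matter of reshuffling the conjuncts (commutativity and associativity of ∧).
example (A C D : Prop) : impl (A ∧ ¬C) D ↔ impl (A ∧ ¬D) C := by
  unfold impl
  constructor
  · intro h p
    exact h ⟨⟨p.left.left, p.right⟩, p.left.right⟩
  · intro h p
    exact h ⟨⟨p.left.left, p.right⟩, p.left.right⟩
```

Both directions are literally the same reshuffling of conjuncts, which is the point: nothing beyond rearranging the ingredients of ‘&’ is going on.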