Three kinds of thought experiments in philosophy

3 Responses

  1. A worthy effort, trying to work out a taxonomy of “thought
    experiments” to get a sense of when they can be useful.

    I’m a bit at a loss as to why Searle’s “Chinese Room” falls into
    category 2 and not category 3… to me it seems like a mess of
    dubious assumptions that just exhibit confusion on Searle’s part.

    I’ve been thinking a bit lately about how useless the popular
    “fat man and trolley” scenario is… it requires you to make all
    sorts of assumptions, some of which seem implausible and even
    unphysical (a trolley heavy enough to do damage isn’t going to be
    stopped by a single human body, no matter how fat), combined with
    weird overtones (it raises unintended issues like “is there some
    anti-fat-person prejudice at play here?”). It makes one wonder
    why they bothered sketching out this scenario rather than just
    keeping it abstract (“is it okay to kill one innocent if it’ll save
    more than that?”).

    • ippolit says:

      The Chinese Room is messy, I grant you that, but the conclusion drawn is not out of touch with the premises. The core of category 3 is that there are two features, but the connection between the two is unsubstantiated. That is not the case with Searle’s Chinese Room; though if you think it is, I’d like to hear how.

      As to your second point, the popularity of the trolley problem bothers me. It’s one of the most overdone thought experiments, and the simplification is so extreme that the conclusions drawn can be somewhat ridiculous (depending on the version). I’m not sure whether the fat man scenario was academic or just some internet trolling against fat people (honestly, I don’t much care either way). But the reason I don’t dismiss the trolley problem out of hand is that it doesn’t really fall into category 3. Why? Because it is a thought experiment that does not, of itself, demand a particular conclusion. It asks the reader to question their moral compass, and it presents the difficulties of consequentialist and deontological ethics, but it does not claim a right conclusion.

  2. The thing with the Chinese Room is that it’s actually not clear it could be set up at all. Translation software exists, but it’s not at Turing Test levels of quality, so that’s one premise: that we can write the software. Then there’s some sleight of hand about running the algorithm without a computer, just using some bureaucratic paper shuffling, and that’s another assumption, because it’s entirely likely that the algorithm would be too unwieldy for that (e.g. if it takes a week to get the answer, you flunk the Turing test). Far worse, though, is the level of confusion the idea shows about whole systems and components… the question is not whether the guy-in-the-room knows what’s going on, the question is whether the external behavior of the whole system registers as “intelligent” (see the sketch at the end of this comment). Brains may be the seat of consciousness, but every neuron need not be conscious for that to be so.

    I think you’re raising a different issue, treating the fat man/trolley scenario as a matter of ethical reasoning… the point I was making is that there’s a huge disconnect between it and reality. If something like it really came up, the right answer would be “scream your head off”, to try to get the silly people up ahead to step off the tracks. But that’s one of many things you’re supposed to assume is ineffective by the statement of the problem. You’re supposed to assume you *know* the people won’t get off the tracks in time and that the trolley won’t stop on its own, and you’re supposed to have detailed knowledge of the funky physics by which the death of the fat man will stop the trolley, and so on… The odd suppositions pile up to the point where I wonder why anyone finds it interesting to think about.
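
    A minimal sketch of that system/component point (Python; the rule
    table and replies are hypothetical stand-ins for the vastly larger
    rule book Searle imagines):

      # Toy "Chinese Room": the code mechanically matches the incoming
      # symbols against a rule book and copies out whatever reply the
      # book lists. Nothing here understands the exchange; any
      # appearance of intelligence belongs to the book-plus-dispatcher
      # system as a whole, not to the dispatcher.

      RULE_BOOK = {
          # hypothetical entries standing in for a complete instruction set
          "ni hao": "ni hao!",
          "ni hui shuo zhongwen ma": "hui, yidian.",
      }

      def room(symbols: str) -> str:
          # "Paper shuffling": look the symbols up, hand back what the book says
          return RULE_BOOK.get(symbols.strip().lower(), "qing zai shuo yibian?")

      print(room("ni hao"))  # the dispatcher never knows this was a greeting

    The guy-in-the-room is the lookup line; asking whether *it*
    understands Chinese is the wrong level of description.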
