The value of truth in an absolute sense is obvious, as truth is what allows humans to understand and manipulate the world. There are lesser forms truth takes, however, and the social form is my focus.

The value of truth and honesty in human interactions, beyond the obvious purpose of communicating true information for practical ends, is to build trust. Through trust, humans can interact reliably and work towards shared goals. Once this idea is accepted, interesting consequences arise.

Firstly, answers to questions whose truth can't be verified become irrelevant. If I'm asked about my thoughts, I consider it impossible for me to lie; there's no reasonable way for my questioner to validate or dispute my answer. Perhaps the internal state of a human is verifiable in some technically true sense, but this seems to matter relatively little.

Secondly, lying to machines may technically be lying, but it is lying in so different a sense that it becomes an entirely different matter. Consider someone building an army of machines which refuse to leave a human being alone until their questions are answered. Lying to such machines is perfectly moral, because the situation so starkly contrasts with real human interaction. Even if the answers will eventually be used by the human controller, the arrangement abuses what I deem the value of honesty so severely that I don't find it immoral to lie in it. A human lying to a nonhuman isn't immoral.

Thirdly, lying to a human whose trust one doesn't seek is another manner of lying entirely. If someone is trying to murder me, and I can prevent this by lying, lying is moral, because I don't seek the trust of that someone anyway. Yet this thought does seem to rather encourage lying.

I find this far more interesting than the basic morality which holds lying to be universally bad; how boring.