2016-08-11 at 2:11 AM UTC
A person's life is their own. I think they should be able to end it whenever they want to.
2016-08-11 at 3:11 AM UTC
this is another example in a long list of the absolute and appalling hubris of humans: people thinking they have the right, the responsibility, or even the authority to tell another human what to do or how to do it...much less deluding themselves into thinking they're qualified or capable of doing so.
the value of a human life is no more than that of a grain of sand. people delude themselves thinking humanity is so important and life is so grandiose...when in fact it has no purpose or worth whatsoever. if the planet earth along with everything on it were sucked into the sun and incinerated, the universe wouldn't even notice.
2016-08-11 at 4:25 AM UTC
I think this quote is taken out of context. I guess I see why; Malice didn't seem to get my point either.
I never said anything about a person's right to a given medical intervention or suicide. I was talking about a very specific argument which you see some people make with some regularity. It follows the basic form:
1. Intervention X is risky, untested, or simply a desperation move with a real chance of killing me or severely reducing my quality of life*
2. I'm so depressed I'm probably going to kill myself and/or continue an existence not worth having if I do not pursue any medical intervention
3. It follows from 2 that the actual risk of the intervention is low, since the thing I stand to lose has little value
4. I ought to pursue intervention X, since it follows from 3 that it has a (life-value-adjusted) low risk
Ergo intervention X is a justified course of action.
The objection I made in the quoted post is that it's fundamentally difficult for a depressed person to make the call on 2. It's well established that there is a certain emotional ebb and flow to major depression. When a person is most depressed, they are biased to deem their condition "terminal" even when it's not, just as the hypochondriac is convinced they suffer from a disease when they don't. So perhaps you take the quality-of-life angle and say "maybe I'm not great at judging whether my condition is terminal, but surely I have the right to determine whether my life is worth living." That's a nuanced issue, but I think it's obvious people can have temporary lapses of judgement, periods when their opinion on this subject is contrary to what their past and future selves would have to say. And I think we can construct an argument for why that justifies denying people, under certain limited conditions, full autonomy over their lives, in the same way we deny children certain rights of personhood based on their inability to make good decisions.
In any case, I think there's a further issue here: 4 is an unsubstantiated premise. A reasonable maxim for choosing a medical intervention would seem to be maximizing its reward-to-risk ratio, not simply choosing any intervention that meets some arbitrarily picked threshold of acceptably low risk (picked, at that, by a person with a mental illness). Consider: to a person with treatable cancer who doesn't value their continued existence as a cancer patient, chemotherapy and walking across hot coals in hopes of spontaneous remission represent the same level of risk, which is to say none, because nothing of value is lost in either case. Yet it should be obvious that one is clearly a better means of treatment than the other, and this same argumentative form can be used to justify either one over the other.
* Malice is going to jump in here and tell us how destroying a part of your brain with radiation is none of those things. I'd like to remind everyone that I'm not talking about the particular intervention Malice was discussing, but rather the general form of this argument.
2016-08-11 at 4:29 AM UTC
I agree with what Kreepy said.
To elaborate: pain is what it is, and those who are struggling have often been struggling for an extended period of time; their suffering is no less agonizing than a physical condition. One argument many people use is how someone's death would affect others, but this is bullshit. Everyone dies eventually, and nobody should be put in a position to suffer just so another person is contented by the fact that another being is alive. It's a selfish argument. I've got more to add but my thoughts are too disorganized atm.
2016-08-11 at 8:58 AM UTC
Uhhh, Kreepy didn't say a word.... creepy!
2016-08-11 at 5:14 PM UTC
I will blindly follow and defend Lanny to the bitter end, I'm a loyal spaceman.
2016-08-11 at 8:54 PM UTC
I Thanked Infinityshock and Lanny in this thread and it's not showing up. Fuck this shit. I'm going home.
2016-08-12 at 11:15 AM UTC
I'll lick Lanny's asshole and play with his balls before I shove my fat cock in his asshole and fuck him till he comes like a fountain and begs me for more.
2016-08-19 at 1:20 PM UTC
Obbe
Looks like most of you agree with Malice. This is understandable; liberty is important.
However, I tend to think a lot of problems in our world stem from what is sometimes called human error. I mean, sometimes humans make mistakes. Sometimes we fuck up. Even those humans who hold positions of authority over others. This is where I think a superhuman intelligence could be very useful. If we created an intelligence that far surpassed our human ability to reason, analyze, and predict, it could potentially guide us better than any individual human leader ever could.
There is an old movie called The Day the Earth Stood Still. In this movie an alien comes to Earth, and he brings with him a large metal man. The metal man is a robot, created by the people who live on the alien's home planet. The robots there are like the police: they simply uphold the law and are completely unbiased. They stop violence. They don't make human errors. I always thought that was a neat idea.
2016-08-19 at 4:46 PM UTC
Obbe
I don't know. At least in that old movie, I think the idea is that the robots stopped violence from happening. Like, if you picked up a gun and aimed it at someone, the robot would vaporize the gun. Or if you were attempting to protect someone from a violent person, the robot would have subdued the violent person before you could do so yourself.
In this hypothetical scenario where human error is no longer a problem because we are guided by a superhuman intelligence, would there ever be a situation in which we would need to turn to violence? If the world were changed in such a way that it was impossible for anyone to harm anyone else, would we need laws?
Maybe you can imagine a scenario where our thoughts and ideas are not being monitored by some judgmental authority, but rather we are all connected and can talk to each other, kind of like we do now but instantly: human beings enhanced with technology granting us direct access to this superhuman intelligence. What would we need thought police for?
2016-08-21 at 2:59 AM UTC
Human rights were the first meme ever conceived. I can't believe people are still falling for it decades later.