
Do you think there will ever be self driving cars?

  1. #1
    I'm more on the side that they won't happen, or that they will but with drawbacks, like taking a lot longer. What if they cause more accidents?
  2. #2
    aldra JIDF Controlled Opposition
    there already are, they're just not approved for general use due to safety concerns
  3. #3
    Industrial Houston
    i would actually try to work against those shits, i don't want to go across the street and get smashed by a broken self-driving truck, fuck are the retards thinking, their shit won't break as much as phones and laptops?
    The following users say it would be alright if the author of this post didn't die in a fire!
  4. #4
    arthur treacher African Astronaut
    I wonder how it will change liability when an accident happens. Can't sue the other driver, because there ain't one.
  5. #5
    Has nobody else seen this? Google's car has already hit a bus.

    http://www.wired.com/2016/02/googles-self-driving-car-may-caused-first-crash/


    I want to make a robot that drives drunk. It would make self driving cars more "realistic" if there were a few reckless and irresponsible robots out there. Maybe this is why Bender was created :o
  6. #6
    Industrial Houston
    i wonder what happens when a car moves to avoid deer and sees a school courtyard as a road because the map hasn't been updated
    and then that happens 5 times in a day
  7. #7
    Program the robot cars to do this
  8. #8
    AngryOnion Big Wig [the nightly self-effacing broadsheet]
    I think it will become a real thing.
    I think the drivers will be able to turn the self-drive on and off, but it will be mandatory for freeway driving.
    The reasoning will be the freeway traffic can be controlled much like the way internet traffic is controlled.
    Now the real question is: where is my fucking flying car?
  9. #9
    Lanny Bird of Courage
    I think it will happen and it'll be a good thing. Not because I have any particular faith in tech companies or regulatory authorities, but because it's a convenient thing that will make an incremental improvement in the lives of first world citizens, i.e. it's the type of technology that people who are involved in making technology actually give a shit about.

    When I was in school I took an ethics in sci/tech course and one of the hypotheticals that came up was this: you're in a self driving car going over a bridge. A school bus in front of you, full of children, hits the brakes; your car has this information and also knows it can't stop in time to avoid the bus. The car has a "choice": hit the bus, likely killing all the kids but deploying airbags and probably saving you, or swerve off the bridge, almost certainly killing you but keeping the expected death toll low (one person vs. the majority of a full bus). What is joe programmer implementing the car's logic supposed to do? I thought the answer was pretty obvious: the responsibility to minimize loss of human life far exceeds a company's duty to its customers. It seems pretty uncontroversial that a pharmaceutical company manufacturing, say, thalidomide, knowing the risks and marketing it to pregnant women, is doing something wrong even if its customers understand the consequences. Likewise we shouldn't sell people cars that are going to increase the total death toll just to soothe their anxieties (indeed it would even, in aggregate, put such customers at greater risk, since they're more likely to be on the other side of that kind of dichotomy). It was surprising how mixed reactions to that case were; a lot of people really believe that there's something wrong with a car that might sacrifice its driver for the greater good.
    The following users say it would be alright if the author of this post didn't die in a fire!
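The "minimize expected loss of life" rule in the bridge hypothetical above boils down to one comparison. A toy sketch, with invented numbers standing in for what a real system would have to estimate from sensors (the option names and figures here are purely illustrative, not anyone's actual car logic):

```python
# Toy sketch of the bridge hypothetical: pick whichever action has the
# lowest expected death toll. All numbers are made up for illustration.

def choose_action(options):
    """Return the option with the lowest expected death toll."""
    return min(options, key=lambda o: o["expected_deaths"])

options = [
    # Hit the bus: airbags probably save the driver, the kids likely die.
    {"name": "hit_bus", "expected_deaths": 20.0},
    # Swerve off the bridge: driver almost certainly dies, bus is spared.
    {"name": "swerve", "expected_deaths": 1.0},
]

print(choose_action(options)["name"])  # swerve
```

The controversy in the thread isn't about this computation, which is trivial, but about whether the driver's life should be weighted the same as everyone else's.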
  10. #10
    Malice Naturally Camouflaged
    a lot of people really believe that there's something wrong with a car that might sacrifice its driver for the greater good.

    As far as I am aware there is nothing after death, having served the collective good would provide no benefit to me in a state of non-existence. Rationally, wouldn't any punishment/negative outcome be superior to non-existence, from an individual's point of view, as there's a chance for improvement of their condition or prior beliefs being proven false?
  11. #11
    Lanny Bird of Courage
    As far as I am aware there is nothing after death, having served the collective good would provide no benefit to me in a state of non-existence.

    Did you miss the point about the policy being harmful even on the individual level? You're more likely to be victimized by a car with an exclusive preference for its driver's well-being than by one programmed to act in the way that maximizes utility.

    Rationally, wouldn't any punishment/negative outcome be superior to non-existence, from an individual's point of view, as there's a chance for improvement of their condition or prior beliefs being proven false?

    If you were a purely rational and self-interested agent (protip: you are neither) then you'd elect the course of action that maximizes your gains given the knowledge you have. Since you have no knowledge of whether you'll be on the winning or losing end of an automated sacrifice, but you do know that more people are on the winning end when there is altruistic programming, you ought to prefer that programming. It's just like Rawls's veil of ignorance.
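The veil-of-ignorance argument above can be sketched as a quick expected-value check: not knowing which role you'll occupy, you compare policies by your expected survival averaged over all roles. The role counts and survival probabilities below are invented for illustration (one driver vs. twenty bus passengers, deterministic outcomes):

```python
# Toy sketch of the veil-of-ignorance comparison. Probabilities and
# role counts are made up for illustration only.

def expected_survival(policy, roles):
    # Your expected survival if you don't know which role you'll occupy:
    # average each role's survival probability, weighted by how likely
    # you are to be in that role.
    return sum(r["p_role"] * policy[r["name"]] for r in roles)

roles = [
    {"name": "driver", "p_role": 1 / 21},      # 1 driver
    {"name": "bystander", "p_role": 20 / 21},  # 20 bus passengers
]

selfish = {"driver": 1.0, "bystander": 0.0}     # car always saves its driver
altruistic = {"driver": 0.0, "bystander": 1.0}  # car minimizes total deaths

# Behind the veil, the altruistic policy wins: 20/21 vs. 1/21.
assert expected_survival(altruistic, roles) > expected_survival(selfish, roles)
```

The point being that before you know whether you're the driver or the bystander, the "sacrifice the driver when it saves more lives" policy gives you the better odds.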
  12. #12
    Malice Naturally Camouflaged
    I don't disagree, I was thinking about a specific instance.
  13. #13
    Sophie Pedophile Tech Support
    Did you miss the point about the policy being harmful even on the individual level? You're more likely to be victimized by a car with an exclusive preference for its driver's well-being than by one programmed to act in the way that maximizes utility.

    If you were a purely rational and self-interested agent (protip: you are neither) then you'd elect the course of action that maximizes your gains given the knowledge you have. Since you have no knowledge of whether you'll be on the winning or losing end of an automated sacrifice, but you do know that more people are on the winning end when there is altruistic programming, you ought to prefer that programming. It's just like Rawls's veil of ignorance.

    You're the reason we'll live in The Matrix in 50 years.
  14. #14
    Lanny Bird of Courage
    You're the reason we'll live in The Matrix in 50 years.

    The Matrix wouldn't be such a bad place to be if it weren't for the ridiculous essentialist notions that informed things like the whole "we made it perfect but you didn't like it" plot.
  15. #15
    arthur treacher African Astronaut
    Without risk, competition, and selfishness, life probably wouldn't be much worth living. I don't really know what you meant by "we made it perfect but you didn't like it", but that is exactly what would happen. People would be miserable and disaffected in a true, perfect utopia, and we would definitely not thrive and might even eventually die out as a species from ennui or self-destruction.
  16. #16
    Lanny Bird of Courage
    Without risk, competition, and selfishness, life probably wouldn't be much worth living. I don't really know what you meant by "we made it perfect but you didn't like it", but that is exactly what would happen.

    It was part of The Matrix, maybe one of the sequels, but a representative of the robots at one point says that they tried making a matrix that was a utopia, but no one liked it; they rejected it and woke up. Implying that suffering is so endemic to human psychology that we couldn't exist without it. It's an ugly but popular head of the essentialist hydra. The Matrix represents the triumph of the subjective, the honest admission of our fundamentally subjective existence and an acknowledgement of its significance, the admission that we are what we experience and nothing more. It's only through small minded, conservative, and fearful mindsets that we fear a utopia. So beaten are we by this world that so many of us think anything better must be a lie, secretly terrible. Go take some crazy ass drugs, learn what ecstasy is (the experience, not MDMA), come to learn how fundamentally subjective we are, and realize that there is no natural bound on subjectivity, that we can experience anything at any time. Physical correspondence means nothing; souls who have truthfully confronted the limitations of their experience, their subjectivity, realize there is no reason utopia is beyond reach, no reason suffering is a necessary part of the human condition.

    People would be miserable and disaffected in a true, perfect utopia, and we would definitely not thrive and might even eventually die out as a species from ennui or self-destruction.

    That's exactly the kind of small minded fear that prevents not only our society from being happy, but you personally. You think there is no good without the bad, but that's a slave mentality, an inexplicable love of what is bad for you. So broken are we that we can't even truly imagine a life that isn't as dysfunctional as this one; that's the real tragedy here.

    P.S. I'm drunk as shit
  17. #17
    ridiculous essentialist notions

    [FONT=courier new][SIZE=8px]I have no feelings.[/SIZE][/FONT]
  18. #18
    Sophie Pedophile Tech Support
    die out as a species from ennui

    You sound like you need a kiki to bid adieu to your ennui.

  19. #19
    Sophie Pedophile Tech Support
    My rare matrix pepe got fucked because mchan hotlinking doesn't work.

    FUCK YOU UNIVERSE!

  20. #20
    Malice Naturally Camouflaged
    403 forbidden. If it shows up for you: cache.