Posts by Obbe

  1. Obbe Alan What? [annoy my right-angled speediness]
    Readers familiar with the Doomsday argument may worry that the bland principle of indifference invoked here is the same assumption that is responsible for getting the Doomsday argument off the ground, and that the counterintuitiveness of some of the implications of the latter incriminates or casts doubt on the validity of the former. This is not so. The Doomsday argument rests on a much stronger and more controversial premiss, namely that one should reason as if one were a random sample from the set of all people who will ever have lived (past, present, and future) even though we know that we are living in the early twenty-first century rather than at some point in the distant past or the future. The bland indifference principle, by contrast, applies only to cases where we have no information about which group of people we belong to.

    If betting odds provide some guidance to rational belief, it may also be worth pondering that if everybody were to place a bet on whether they are in a simulation or not, then if people use the bland principle of indifference, and consequently place their money on being in a simulation if they know that that’s where almost all people are, then almost everyone will win their bets. If they bet on not being in a simulation, then almost everyone will lose. It seems better that the bland indifference principle be heeded.

    Further, one can consider a sequence of possible situations in which an increasing fraction of all people live in simulations: 98%, 99%, 99.9%, 99.9999%, and so on. As one approaches the limiting case in which everybody is in a simulation (from which one can deductively infer that one is in a simulation oneself), it is plausible to require that the credence one assigns to being in a simulation gradually approach the limiting case of complete certainty in a matching manner.
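    To make the betting intuition concrete, here is a minimal Monte Carlo sketch (my illustration, not part of the original argument; the population size, seed, and function name are arbitrary): for each fraction in the sequence above, almost every bettor who follows the bland indifference principle wins.

    ```python
    import random

    # Minimal sketch: a population in which a fraction f_sim of observers live in
    # simulations, where everyone bets per the bland principle of indifference.
    def winning_fraction(f_sim, n_observers=1_000_000, seed=0):
        rng = random.Random(seed)
        simulated = [rng.random() < f_sim for _ in range(n_observers)]
        bet_simulated = f_sim > 0.5  # indifference bettors back the majority case
        wins = sum(1 for s in simulated if s == bet_simulated)
        return wins / n_observers

    for f in (0.98, 0.99, 0.999, 0.999999):
        print(f"f_sim = {f}: fraction of winning bets ~ {winning_fraction(f):.6f}")
    ```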
  2. Obbe Alan What? [annoy my right-angled speediness]
    Here's another one:



    So, to those of you who agree that we do not have freewill, I ask you what are the implications of this? Are people responsible for their actions? Is a sense of individuality essentially meaningless? What are your thoughts on the subject?
  3. Obbe Alan What? [annoy my right-angled speediness]
    This step is sanctioned by a very weak indifference principle. Let us distinguish two cases. The first case, which is the easiest, is where all the minds in question are like your own in the sense that they are exactly qualitatively identical to yours: they have exactly the same information and the same experiences that you have. The second case is where the minds are “like” each other only in the loose sense of being the sort of minds that are typical of human creatures, but they are qualitatively distinct from one another and each has a distinct set of experiences. I maintain that even in the latter case, where the minds are qualitatively different, the simulation argument still works, provided that you have no information that bears on the question of which of the various minds are simulated and which are implemented biologically.

    A detailed defense of a stronger principle, which implies the above stance for both cases as trivial special instances, has been given in the literature. Space does not permit a recapitulation of that defense here, but we can bring out one of the underlying intuitions by attending to an analogous situation of a more familiar kind. Suppose that x% of the population has a certain genetic sequence S within the part of their DNA commonly designated as “junk DNA”. Suppose, further, that there are no manifestations of S (short of what would turn up in a gene assay) and that there are no known correlations between having S and any observable characteristic. Then, quite clearly, unless you have had your DNA sequenced, it is rational to assign a credence of x% to the hypothesis that you have S. And this is so quite irrespective of the fact that the people who have S have qualitatively different minds and experiences from the people who don’t have S. (They are different simply because all humans have different experiences from one another, not because of any known link between S and what kind of experiences one has.)
    The same reasoning holds if S is not the property of having a certain genetic sequence but instead the property of being in a simulation, assuming only that we have no information that enables us to predict any differences between the experiences of simulated minds and those of the original biological minds.
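    A quick numerical check of the junk-DNA analogy (a sketch; the value of x, the population size, and the helper name are hypothetical stand-ins): the credence x that an unsequenced person assigns to carrying S matches the realized frequency of carriers among people in the same evidential situation.

    ```python
    import random

    # Sketch of the junk-DNA analogy: x% of the population carries sequence S.
    # An unsequenced person's credence x is calibrated: it matches the long-run
    # frequency with which people in their evidential situation carry S.
    def carrier_frequency(x, n_people=1_000_000, seed=1):
        rng = random.Random(seed)
        carriers = sum(rng.random() < x for _ in range(n_people))
        return carriers / n_people

    x = 0.37  # hypothetical value of x% for illustration
    print(f"assigned credence: {x}, realized frequency: {carrier_frequency(x):.4f}")
    ```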

    It should be stressed that the bland indifference principle expressed by (#) prescribes indifference only between hypotheses about which observer you are, when you have no information about which of these observers you are. It does not in general prescribe indifference between hypotheses when you lack specific information about which of the hypotheses is true. In contrast to Laplacean and other more ambitious principles of indifference, it is therefore immune to Bertrand’s paradox and similar predicaments that tend to plague indifference principles of unrestricted scope.
  4. Obbe Alan What? [annoy my right-angled speediness]
    We can take a further step and conclude that conditional on the truth of (3), one’s credence in the hypothesis that one is in a simulation should be close to unity. More generally, if we knew that a fraction x of all observers with human-type experiences live in simulations, and we don’t have any information that indicates that our own particular experiences are any more or less likely than other human-type experiences to have been implemented in vivo rather than in machina, then our credence that we are in a simulation should equal x:

    (#)   $\mathrm{Cr}(\mathrm{SIM} \mid f_{\mathrm{sim}} = x) = x$
  5. Obbe Alan What? [annoy my right-angled speediness]
    Because of the immense computing power of posthuman civilizations, $\bar{N}_I$ is extremely large, as we saw in the previous section. By inspecting (*) we can then see that at least one of the following three propositions must be true:

    (1)   $f_P \approx 0$
    (2)   $f_I \approx 0$
    (3)   $f_{\mathrm{sim}} \approx 1$
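    To see numerically how inspecting (*) forces this trichotomy, here is a small sketch (the parameter values are arbitrary; all that matters is that $\bar{N}_I$ is extremely large): unless $f_P$ or $f_I$ is very close to zero, $f_{\mathrm{sim}}$ is driven to one.

    ```python
    # Sketch: f_sim from equation (*), with N_I (ancestor-simulations per
    # interested civilization) fixed at an arbitrary "extremely large" value.
    def f_sim(f_P, f_I, N_I=1e12):
        x = f_P * f_I * N_I
        return x / (x + 1)

    for f_P, f_I in [(1e-15, 0.5), (0.5, 1e-15), (1e-15, 1e-15), (0.5, 0.5)]:
        print(f"f_P={f_P:g}, f_I={f_I:g} -> f_sim={f_sim(f_P, f_I):.12f}")
    # Unless f_P ~ 0 (proposition 1) or f_I ~ 0 (proposition 2),
    # the output shows f_sim ~ 1 (proposition 3).
    ```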
  6. Obbe Alan What? [annoy my right-angled speediness]
    Writing $f_I$ for the fraction of posthuman civilizations that are interested in running ancestor-simulations (or that contain at least some individuals who are interested in that and have sufficient resources to run a significant number of such simulations), and $\bar{N}_I$ for the average number of ancestor-simulations run by such interested civilizations, we have

    $\bar{N} = f_I \bar{N}_I$

    and thus:

    (*)   $f_{\mathrm{sim}} = \dfrac{f_P f_I \bar{N}_I}{(f_P f_I \bar{N}_I) + 1}$

  7. Obbe Alan What? [annoy my right-angled speediness]
    The basic idea of this thread can be expressed roughly as follows: If there were a substantial chance that our civilization will ever get to the posthuman stage and run many ancestor-simulations, then how come you are not living in such a simulation?

    We shall develop this idea into a rigorous argument. Let us introduce the following notation:

    $f_P$: Fraction of all human-level technological civilizations that survive to reach a posthuman stage

    $\bar{N}$: Average number of ancestor-simulations run by a posthuman civilization

    $\bar{H}$: Average number of individuals that have lived in a civilization before it reaches a posthuman stage

    The actual fraction of all observers with human-type experiences that live in simulations is then

    $f_{\mathrm{sim}} = \dfrac{f_P \bar{N} \bar{H}}{(f_P \bar{N} \bar{H}) + \bar{H}} = \dfrac{f_P \bar{N}}{(f_P \bar{N}) + 1}$
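    A minimal sketch of this fraction in code (illustrative numbers only; the function and variable names are mine): note that $\bar{H}$ cancels out of the ratio, so $f_{\mathrm{sim}}$ depends only on $f_P \bar{N}$.

    ```python
    # Sketch of the fraction defined above: simulated observers over all
    # observers with human-type experiences. H_bar cancels out of the ratio.
    def f_sim(f_P, N_bar, H_bar):
        simulated = f_P * N_bar * H_bar  # observers living in ancestor-simulations
        biological = H_bar               # observers in the unsimulated history
        return simulated / (simulated + biological)

    # Doubling H_bar leaves f_sim unchanged; values are arbitrary illustrations.
    print(f_sim(0.1, 100.0, 1e10))  # 0.9090...
    print(f_sim(0.1, 100.0, 2e10))  # same
    ```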
  8. Obbe Alan What? [annoy my right-angled speediness]
    “How” and “why” basically ask the same question, except “why” implies purpose or meaning. Purpose and meaning are human inventions. There is no why.
  9. Obbe Alan What? [annoy my right-angled speediness]
    I don't know who he is. I just liked what he said in that video. Is he such a cunt that he pretended to be a different person for years?
  10. Obbe Alan What? [annoy my right-angled speediness]
    At our current stage of technological development, we have neither sufficiently powerful hardware nor the requisite software to create conscious minds in computers. But persuasive arguments have been given to the effect that if technological progress continues unabated then these shortcomings will eventually be overcome. Some authors argue that this stage may be only a few decades away. Yet present purposes require no assumptions about the time-scale. The simulation argument works equally well for those who think that it will take hundreds of thousands of years to reach a “posthuman” stage of civilization, where humankind has acquired most of the technological capabilities that one can currently show to be consistent with physical laws and with material and energy constraints.

    Such a mature stage of technological development will make it possible to convert planets and other astronomical resources into enormously powerful computers. It is currently hard to be confident in any upper bound on the computing power that may be available to posthuman civilizations. As we are still lacking a “theory of everything”, we cannot rule out the possibility that novel physical phenomena, not allowed for in current physical theories, may be utilized to transcend those constraints that in our current understanding impose theoretical limits on the information processing attainable in a given lump of matter. We can with much greater confidence establish lower bounds on posthuman computation, by assuming only mechanisms that are already understood. For example, Eric Drexler has outlined a design for a system the size of a sugar cube (excluding cooling and power supply) that would perform 10^21 instructions per second. Another author gives a rough estimate of 10^42 operations per second for a computer with a mass on the order of a large planet. (If we could create quantum computers, or learn to build computers out of nuclear matter or plasma, we could push closer to the theoretical limits. Seth Lloyd calculates an upper bound for a 1 kg computer of 5×10^50 logical operations per second carried out on ~10^31 bits. However, it suffices for our purposes to use the more conservative estimate that presupposes only currently known design-principles.)

    The amount of computing power needed to emulate a human mind can likewise be roughly estimated. One estimate, based on how computationally expensive it is to replicate the functionality of a piece of nervous tissue that we have already understood and whose functionality has been replicated in silico (contrast enhancement in the retina), yields a figure of ~10^14 operations per second for the entire human brain. An alternative estimate, based on the number of synapses in the brain and their firing frequency, gives a figure of ~10^16–10^17 operations per second. Conceivably, even more could be required if we want to simulate in detail the internal workings of synapses and dendritic trees. However, it is likely that the human central nervous system has a high degree of redundancy on the microscale to compensate for the unreliability and noisiness of its neuronal components. One would therefore expect a substantial efficiency gain when using more reliable and versatile non-biological processors.

    Memory seems to be no more stringent a constraint than processing power. Moreover, since the maximum human sensory bandwidth is ~10^8 bits per second, simulating all sensory events incurs a negligible cost compared to simulating the cortical activity. We can therefore use the processing power required to simulate the central nervous system as an estimate of the total computational cost of simulating a human mind.

    If the environment is included in the simulation, this will require additional computing power – how much depends on the scope and granularity of the simulation. Simulating the entire universe down to the quantum level is obviously infeasible, unless radically new physics is discovered. But in order to get a realistic simulation of human experience, much less is needed – only whatever is required to ensure that the simulated humans, interacting in normal human ways with their simulated environment, don’t notice any irregularities. The microscopic structure of the inside of the Earth can be safely omitted. Distant astronomical objects can have highly compressed representations: verisimilitude need only extend to the narrow band of properties that we can observe from our planet or solar system spacecraft. On the surface of Earth, macroscopic objects in inhabited areas may need to be continuously simulated, but microscopic phenomena could likely be filled in ad hoc. What you see through an electron microscope needs to look unsuspicious, but you usually have no way of confirming its coherence with unobserved parts of the microscopic world. Exceptions arise when we deliberately design systems to harness unobserved microscopic phenomena that operate in accordance with known principles to get results that we are able to independently verify. The paradigmatic case of this is a computer. The simulation may therefore need to include a continuous representation of computers down to the level of individual logic elements. This presents no problem, since our current computing power is negligible by posthuman standards.

    Moreover, a posthuman simulator would have enough computing power to keep track of the detailed belief-states in all human brains at all times. Therefore, when it saw that a human was about to make an observation of the microscopic world, it could fill in sufficient detail in the simulation in the appropriate domain on an as-needed basis. Should any error occur, the director could easily edit the states of any brains that have become aware of an anomaly before it spoils the simulation. Alternatively, the director could skip back a few seconds and rerun the simulation in a way that avoids the problem.

    It thus seems plausible that the main computational cost in creating simulations that are indistinguishable from physical reality for human minds in the simulation resides in simulating organic brains down to the neuronal or sub-neuronal level. While it is not possible to get a very exact estimate of the cost of a realistic simulation of human history, we can use ~10^33–10^36 operations as a rough estimate. As we gain more experience with virtual reality, we will get a better grasp of the computational requirements for making such worlds appear realistic to their visitors. But in any case, even if our estimate is off by several orders of magnitude, this does not matter much for our argument. We noted that a rough approximation of the computational power of a planetary-mass computer is 10^42 operations per second, and that assumes only already known nanotechnological designs, which are probably far from optimal. A single such computer could simulate the entire mental history of humankind (call this an ancestor-simulation) by using less than one millionth of its processing power for one second. A posthuman civilization may eventually build an astronomical number of such computers. We can conclude that the computing power available to a posthuman civilization is sufficient to run a huge number of ancestor-simulations even if it allocates only a minute fraction of its resources to that purpose. We can draw this conclusion even while leaving a substantial margin of error in all our estimates.

    Posthuman civilizations would have enough computing power to run hugely many ancestor-simulations even while using only a tiny fraction of their resources for that purpose.
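    As a back-of-the-envelope check of the closing arithmetic (a sketch using only the figures quoted above):

    ```python
    # Check of the closing arithmetic, using the post's own estimates.
    planetary_ops_per_sec = 1e42          # planetary-mass computer, quoted above
    for history_cost in (1e33, 1e36):     # quoted range for an ancestor-simulation
        print(f"{history_cost:.0e} ops / {planetary_ops_per_sec:.0e} ops/s "
              f"= {history_cost / planetary_ops_per_sec:.0e} seconds")
    # -> 1e-09 and 1e-06 seconds of the computer's full capacity: "less than one
    #    millionth of its processing power for one second", as stated.
    ```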

  11. Obbe Alan What? [annoy my right-angled speediness]
    A common assumption in the philosophy of mind is that of substrate-independence. The idea is that mental states can supervene on any of a broad class of physical substrates. Provided a system implements the right sort of computational structures and processes, it can be associated with conscious experiences. It is not an essential property of consciousness that it is implemented on carbon-based biological neural networks inside a cranium: silicon-based processors inside a computer could in principle do the trick as well.

    Arguments for this thesis have been given in the literature, and although it is not entirely uncontroversial, we shall here take it as a given.

    The argument we shall present does not, however, depend on any very strong version of functionalism or computationalism. For example, we need not assume that the thesis of substrate-independence is necessarily true (either analytically or metaphysically) – just that, in fact, a computer running a suitable program would be conscious. Moreover, we need not assume that in order to create a mind on a computer it would be sufficient to program it in such a way that it behaves like a human in all situations, including passing the Turing test etc. We need only the weaker assumption that it would suffice for the generation of subjective experiences that the computational processes of a human brain are structurally replicated in suitably fine-grained detail, such as on the level of individual synapses. This attenuated version of substrate-independence is quite widely accepted.

    Neurotransmitters, nerve growth factors, and other chemicals that are smaller than a synapse clearly play a role in human cognition and learning. The substrate-independence thesis is not that the effects of these chemicals are small or irrelevant, but rather that they affect subjective experience only via their direct or indirect influence on computational activities. For example, if there can be no difference in subjective experience without there also being a difference in synaptic discharges, then the requisite detail of simulation is at the synaptic level (or higher).
  12. Obbe Alan What? [annoy my right-angled speediness]
    This thread argues that at least one of the following propositions is true: (1) the human species is very likely to go extinct before reaching a “posthuman” stage; (2) any posthuman civilization is extremely unlikely to run a significant number of simulations of their evolutionary history (or variations thereof); (3) we are almost certainly living in a computer simulation. It follows that the belief that there is a significant chance that we will one day become posthumans who run ancestor-simulations is false, unless we are currently living in a simulation. A number of other consequences of this result are also discussed.

    Many works of science fiction as well as some forecasts by serious technologists and futurologists predict that enormous amounts of computing power will be available in the future. Let us suppose for a moment that these predictions are correct. One thing that later generations might do with their super-powerful computers is run detailed simulations of their forebears or of people like their forebears. Because their computers would be so powerful, they could run a great many such simulations. Suppose that these simulated people are conscious (as they would be if the simulations were sufficiently fine-grained and if a certain quite widely accepted position in the philosophy of mind is correct). Then it could be the case that the vast majority of minds like ours do not belong to the original race but rather to people simulated by the advanced descendants of an original race. It is then possible to argue that, if this were the case, we would be rational to think that we are likely among the simulated minds rather than among the original biological ones. Therefore, if we don’t think that we are currently living in a computer simulation, we are not entitled to believe that we will have descendants who will run lots of such simulations of their forebears. That is the basic idea. The rest of this thread will spell it out more carefully.

    Apart from the interest this thesis may hold for those who are engaged in futuristic speculation, there are also more purely theoretical rewards. The argument provides a stimulus for formulating some methodological and metaphysical questions, and it suggests naturalistic analogies to certain traditional religious conceptions, which some may find amusing or thought-provoking.

    The structure of this thread is as follows. First, we formulate an assumption that we need to import from the philosophy of mind in order to get the argument started. Second, we consider some empirical reasons for thinking that running vastly many simulations of human minds would be within the capability of a future civilization that has developed many of those technologies that can already be shown to be compatible with known physical laws and engineering constraints. This part is not philosophically necessary but it provides an incentive for paying attention to the rest. Then follows the core of the argument, which makes use of some simple probability theory, and a section providing support for a weak indifference principle that the argument employs. Lastly, we discuss some interpretations of the disjunction, mentioned in the abstract, that forms the conclusion of the simulation argument.
  13. Obbe Alan What? [annoy my right-angled speediness]
    The relevant detail here is that if we take color to mean what you insist it means, then the word would mean something different from its common or technical usage.

    Not really. When I type the word RED, you know exactly what I mean. You imagine the colour RED every time you read the word RED, in an entirely subjective way. This is entirely common, and there is nothing incoherent about it.

    And we're back to lavalamps. When I write "lavalamp" you might think of lavalamps. This doesn't make lavalamps devoid of an objective existence.

    The difference is that lava lamps have an objective existence. Colour is subjective.

    Light has an objective existence, but seeing light as colour is entirely subjective.

    We're back to lavalamps. I could make the exact same argument that the effect which produces lavalamps is objective but lavalamps themselves are not.

    You're misrepresenting my position by using this metaphor incorrectly. You see, lava lamps do have an objective existence. However, the colour of the lamp is subjective. Similarly, wavelengths of light have an objective existence. But the colour of these wavelengths is subjective.
  14. Obbe Alan What? [annoy my right-angled speediness]
    Determinism in theory and free will in practice. At the individual level we choose our fates, but those choices are dependent on the choices others made in lives before ours.

    Either way, I'm ready for a 9-page thread where OYM makes ridiculous assertions with no evidence or logical backing.

    You don't really seem to disagree with me.
  15. Obbe Alan What? [annoy my right-angled speediness]
  16. Obbe Alan What? [annoy my right-angled speediness]
    Common and technical usage do not permit "color" to describe a mere perception; it must describe something out in the world. … "color", for usage to be coherent, must be something objective in the world.


    No, not really. You have never been confused by what I mean when I use the word colour. The entire time we have been having this conversation you have understood exactly what I mean. I'm not saying anything incoherent at all. Colour can be used to describe a perception, to describe the way an object appears. When I type the word "RED" and your eyeballs see that, your brain automatically associates it with the colour red. You are probably imagining the colour red right now, in a very subjective way. And when light of a specific wavelength, which does objectively exist, enters your eyes, your visual system subjectively interprets that information as the colour red.


    Even if color is a mere perception (it's not), you still have to grant objectivity to its cause (light of a certain frequency), in which case rainbows still have an objective existence: the light which causes their perception is still real, and they are composed of objective parts (fields of light), thus they themselves must have some sort of objective existence.

    No, the phenomenon which becomes the rainbow objectively exists. All the conditions are objectively met. But a rainbow is not the conditions. A rainbow is the appearance of a colorful arch in the sky. It is perceived subjectively; it does not have objective existence. The conditions which create it do exist objectively, but the rainbow itself does not.
  17. Obbe Alan What? [annoy my right-angled speediness]
    Whatever you do has been predetermined. Your decisions are meaningless and choice is an illusion. You agree that you can't change the laws of physics or alter the past. The present is nothing more than the result of those laws acting on the past.

    Look at your liver. It's a regenerating, automatic organic filtration system that you could not possibly build yourself. It evolved over eons from basically nothing to become this amazing thing. And if you can accept that, it shouldn't be too hard to accept that this miraculous feeling that we call freewill also evolved out of basically nothing. Sure, it's a complex system, but so is your liver. The rise of the liver as this amazing organ was entirely reactive, as was the formation of the planet we live on and the evolution of every living thing on it, including you and all the decisions you have made or will ever make. Research even shows us that brain activity behind a decision occurs before a person consciously apprehends the decision.
  18. Obbe Alan What? [annoy my right-angled speediness]
    Lanny is thoroughly brainwashed by privileged leftist 'educators'.

    He reminds me so much of this character:



    that it is hard to believe Lanny is a real person and not just a fake internet persona.
  19. Obbe Alan What? [annoy my right-angled speediness]
    I don't mean in the legal sense. I mean the vague, mystical, hard-to-define sense. Freewill, as in the magical ability to determine your own fate, cannot exist. Can you change the laws of physics? Can you alter the past? If you answered no to these questions, then it follows that you cannot have freewill.
  20. Obbe Alan What? [annoy my right-angled speediness]
    Sure, I agree completely, but for the sentence "colors are perceived" to even make sense we would have to admit colors are not mere perceptions.

    Not really. The phenomenon which becomes colour objectively exists (the existence of visible light). However, colour itself is only perceived subjectively (your unique visual system's subjective interpretation of the objective wavelengths of light).

    Consciousness/sentience doesn't exist in a physical sense.

    Can you demonstrate that? Isn't consciousness just a physical reaction to our environment, like a highly evolved version of a single-celled organism reacting to its environment?