
Thanked Posts by Lanny

  1. Lanny Bird of Courage
    "mmQsic has the right to children"
    The following users say it would be alright if the author of this post didn't die in a fire!
  2. Lanny Bird of Courage
    Originally posted by Captain Falcon I didn't say you did, I'm saying that the reason it's solvable in a Newtonian model is because there is a fundamental omission in it that makes it much simpler, to where each body does not increase the complexity of the problem because it does not affect the other bodies in anywhere near the same way; there is *one* force attributed to each body in Newtonian models.

    I would counter by asking how you can find it reasonable to believe that future, more accurate models will be simpler for special cases than older, inaccurate models, especially when thus far there are no phenomena that contradict relativity. A newer model can probably give a better explanation of what exactly is going on for what specific reasons (that's the whole point of shit like string theory) but until there is a reason to doubt the nature of the phenomena described by relativity, how is it reasonable to believe that the problem will go down in complexity?

    It's reasonable to at least float the idea that complex models don't give way to simple ones; indeed, much of science progresses by taking simplistic models and creating more complex but more accurate incremental replacements. But simpler models outmoding more complex ones is not without precedent. Phlogiston models of combustion are frequently considered more complex than their modern replacements, and a great example in the motion of heavenly bodies is the heliocentric model's simplicity relative to the geocentric model it replaced. Anticipating the motion of planets was a lot more complex under the geocentric model. At the very least we can rule out any general principle that more accurate models need be as or more complex than the ones they replace.

    I've already given you the practical problems with any theoretical solution; to suggest that it's possible for us to solve them would require evidence on your part. I'm not saying the simulation hypothesis is impossible, I'm saying that there's no good reason to believe it.

    You seem to be ignoring the approximation algorithms I've mentioned a couple of times now though. Would you mind speaking to that point?
  3. Lanny Bird of Courage
    Originally posted by Panthrax Lanny has banned the user "L" after posts were made by that account criticizing the laziness of closing registration to keep alt accounts off. Moved the posts to the trash bin and banned the account but guess what, no bans show up on the user profile.

    If you want to be a hands-off admin then maybe you should just fuck off; your half-assed meddling with things only worsens conditions

    NiS due process:
    Step 1: See account is alt
    Step 2: See alt is unfunny
    Step 3: Airlock alt
  4. Lanny Bird of Courage
    Originally posted by Captain Falcon Yes, because the Newtonian models were close, but wrong. In practice, an n body simulation using a Newtonian model would actually be wildly incorrect experimentally. It's not solvable in Newtonian models vs unsolvable in relativistic models due to any fundamental problem with relativity as a theory. Newtonian models simply treat bodies as if they exist in a static space, no tricksy spacetime etc. Each element in a relativistic model, however, is an additional element that fundamentally changes the nature of the playing field every step in a simulation, and exponentially adds complexity to it.

    I wasn't saying that Newtonian models can be used to produce convincing simulation, just as an example of a case where simulation is infeasible in one model but feasible in another.

    A GUT that somehow removes the exponentiating nature of the problem is improbable enough to the point where I don't think it's really worth discussing.

    How have you determined how probable that is? How can I estimate the probability of future theories having certain properties?

    Let me put it another way; if you pick out 189 stars close to one another, and simulate their gravitational interactions with 3 forces attributed to each body (hideously inaccurate) in a relativistic model (or any other model that doesn't simplify down to something like Newtonian mechanics in special cases), and each force interaction is somehow represented on one bit, and each bit is one atom large, you would run out of atoms in the universe to represent their interaction…

    Again, I'm not denying that simulating n bodies is infeasible under a relativistic model. I'm saying we can use approximation strategies, or we may find it is feasible under some heretofore undiscovered model. Maybe you can make an argument that any model that predicts the motion of bodies will have this issue, similar to how we can demonstrate incompleteness theorems in all systems with certain basic primitives, but I've yet to see this argument advanced.
  5. Lanny Bird of Courage
    I can ban by reg date and IP as well so mass registration doesn't do a lot of good either

    Originally posted by Enter are you the real panthrax

    defo no
  6. Lanny Bird of Courage
    Originally posted by Falco Let me start with the simplest one:

    I'm sure you are familiar with the n-body problem in mathematics (and physics). Even if you assume that interactions between n>2 bodies can be regularized and apply Sundman's globalised solution to generate every "frame" of reality as a simulation, and are not doing it in real time, as even a small system (fewer than 5 physical elements) evolves, the number of simulated elements will quickly grow larger than the number of atoms in the known universe, and there is no way to represent them. It's basically mathematically impossible to simulate.

    Take a quadruple pendulum, for example

    Simulations such as this one are purposely designed to cull many, many elements of the actual system as it evolves. In the case of the quadruple pendulum, without saying "hey, we just aren't going to include any forces except these", the system would evolve at the rate of n^x^z, where n is the initial number of forces being simulated, x is the number of frames into the simulation, and z=(x+1)^((x+1)^x), until it quickly approaches an infinite amount of time to generate the next frame of the simulation.

    It seems like the computational issues with solving the n-body problem only apply to relativistic models (we posited that future societies would live in a universe physically similar to our own, not that they would have the exact same physical models as we do). Just as the n-body problem was analytically solvable under a Newtonian model, I can't see any reason to deny an analytic solution under any possible unified field theory, which we know relativity is not.

    Also, we have relatively efficient approximation techniques for n-body problems under relativistic assumptions; again, there's no requirement that if our reality is simulated it be a perfect, total physical simulation.
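    To make the "approximation techniques" point concrete, here is a minimal sketch (toy values and units, not any of the models discussed above): a direct-summation Newtonian leapfrog integrator, the O(n²) baseline that schemes like Barnes-Hut approximate in O(n log n) by lumping distant bodies together.

```python
import numpy as np

def gravity_step(pos, vel, mass, dt, G=1.0, eps=1e-3):
    """One leapfrog (kick-drift-kick) step of direct-summation
    Newtonian gravity; O(n^2) pairwise forces with softening eps."""
    def accel(p):
        # pairwise displacements d[i, j] = p[j] - p[i]
        d = p[np.newaxis, :, :] - p[:, np.newaxis, :]
        r2 = (d ** 2).sum(-1) + eps ** 2
        inv_r3 = r2 ** -1.5
        np.fill_diagonal(inv_r3, 0.0)  # no self-force
        return G * (d * inv_r3[..., None] * mass[None, :, None]).sum(1)

    vel = vel + 0.5 * dt * accel(pos)  # kick
    pos = pos + dt * vel               # drift
    vel = vel + 0.5 * dt * accel(pos)  # kick
    return pos, vel

# Two unit masses on a circular orbit of separation 1 (v = sqrt(0.5))
pos = np.array([[-0.5, 0.0], [0.5, 0.0]])
vel = np.array([[0.0, -0.7071], [0.0, 0.7071]])
mass = np.array([1.0, 1.0])
for _ in range(1000):
    pos, vel = gravity_step(pos, vel, mass, dt=0.01)
```

    The point of the sketch is that nobody solves n-body analytically in practice; you step forward numerically and trade accuracy for tractability, which is exactly the kind of shortcut a less-than-total simulation could lean on.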

    And just to throw in the cartesian doubt angle, how realistic do you need to make a physical simulation to simulate your consciousness? When was the last time you empirically verified physical models hold? Have you done this enough and with enough scrutiny that simulating all the local cases necessary to give your consciousness the impression of a consistent reality is computationally infeasible? I don't know about you but I can't even see the pixels in my screen anymore, the level a physical simulation needs to be realistic at to convince me is maybe a hundredth of an inch at most. It's not that uncommon to put one's shirt on backwards, it's not like the contents of your experience need to be that detailed.
  7. Lanny Bird of Courage
    Originally posted by CountBlah yeah I'm basically gonna pump out more work and only take cash for a month or so.

    Yeah doc hooked me up due to my neuropathy. it legit does help but I also spend most of my morning noddin out. I've cut back a good bit. weird thing is if I take em around 6-7pm it acts like speed so I have to pop some k pins to calm the fuck down enough to work

    Cutting back is good, watch yourself on that shit. Pain sucks but it beats being a zombie. Torporians and opiates (nauseants and CNS depressants) aren't a great combo.

    I envy cash businesses, it would be fucking sweet to clear another 40%+.
  8. Lanny Bird of Courage
    There is no surer sign of vanity than concern for one's longevity. Only in those so unsure of why they continue their lives, so absolutely frightened of reaching their conclusions without having figured out how to start, does the fear of eventual death take root. Like any person the prospect of imminent death frightens me; when I see a car hurtling towards me I try to avoid it. But when confronted with the prospect of my eventual demise, the knowledge that no survival strategy will carry me more than another 80 some years, at very best, I'm scarcely concerned. Cirrhosis at 50 or heart attack at 80, why would any sane person care? In the vanishing instant that is a human lifespan, if what you want to do with your life requires N more years than death from alcoholism affords, then your plan is dogshit, you won't accomplish whatever you'd deluded yourself into wanting, and you might as well kill yourself right now.
  9. Lanny Bird of Courage
    Originally posted by -SpectraL But you left out the part that your life is not only your own, as it is shared by others around you. It's not really all about you.

    Weak. How hopeless do you need to be that you have to justify your continued existence in terms of the fostered dependence of others upon you? You don't live the way you want because you're so great others would be harmed by your absence? Pfft. Lots of things aren't all about me, but the question as to whether I continue breathing is. Anyone who surrenders the justification for their own existence to something outside of themselves is tragically lacking in self-confidence of the most foundational kind.
  10. Lanny Bird of Courage
    that aside, what's good dawg? Been a while, last major thing I remember you were on the brink of murdering your girl's mom lol. Where you at now?
  11. Lanny Bird of Courage
    ohshititsmuffins
  12. Lanny Bird of Courage
    Yeah, if you liked Vice City then San Andreas is better in every technical respect. I could see some people preferring the mob and 80s/90s aesthetic of Vice City to playing as an inner city gang banger, but I think even on the story/atmosphere level SA built on VC.
  13. Lanny Bird of Courage
    SA was one of the best AAA titles of the era. For the time and the budget I'd say it was a much better game than GTA5, although I'm not sure how well it will have aged. With the low poly counts required to run on the PS2, honestly the graphics were dogshit. The turf war mechanic really appealed to me though; it's become kinda commonplace in games but it was really well executed there despite being relatively simple compared to what you see today. The enormity of the play area was impressive, even today, so if you're big into open world stuff it'd be right up your alley.

    So IDK, it was a great game in its time, and probably still enjoyable today.
  14. Lanny Bird of Courage
    Our friend scronoldo is going to spend a little more time separated from us than that.
  15. Lanny Bird of Courage
    "ou" vowel cluster
  16. Lanny Bird of Courage
    Lol, I actually think about this from time to time, I always kinda nervously scroll down past the banner if I'm looking at the site in class or on the bus or just anywhere someone might look over my shoulder. Especially when it's aldra's banner that comes up. This place would be difficult to explain to peers.

    Was thinking about covering it through a per-user skin system but god only knows when I'll get around to implementing that.
  17. Lanny Bird of Courage
    "hahah, ahmed, you fucking mudskin. Come here buddy, darkie piece of shit, gimme a hug"
  18. Lanny Bird of Courage
    While not disclosing that they're doing it is obviously wrong, I actually kinda like the idea of using mining to defray hosting costs. You could build more efficient miners into browsers and allow sites to use them provided they don't overuse the CPU and don't run ads. I don't know if the mining revenue would be greater than the cost of servicing the page load, but I could see it changing the hosting economy significantly: it breaks the dependence of hosts on a small ring of advertisers, and it ties the value of user acquisition to a currency instead of a nebulous, questionably estimated likelihood of conversion.

    At least when I'm connected to power I'd much rather give a host some spare cycles on my CPU (or more likely GPU) than degrade my experience with shit tier intrusive ads (which have gotten progressively worse in recent years).
  19. Lanny Bird of Courage
    Originally posted by Captain Falcon So my understanding is that ML won't help you understand the semantic content of language but it can help you determine the rule structure. So "I like dogs" is pretty much no different rule wise than "I dislike cats". In that case, what is your opinion on how to solve the riddle of language for computers to understand?

    It's an interesting question, like could we use ML approaches to generate a formal grammar? I don't know, in some sense techniques like Markov models in language tasks do produce a sort of grammar, they work for recognition and then you can turn them around and use them for generative tasks. So it is a kind of grammar, just not a great one in terms of mirroring natural languages (but to be fair the state of the art uses more sophisticated models).

    But I think it turns into a kind of curve fitting issue: many grammars can produce a training set, so how do we know which one was used? Like let's say I invent a language that is defined by its first order Markov property: sentences are constructed by taking an existing sentence in the language and adding one more word depending only on the last word. And then I train a Markov chain by feeding it a large number of sentences from this language. The "grammar" the trained model represents might be able to produce every sentence in the language, but it's not necessarily the same grammar as was used to train it. And we can think of all kinds of ways to make the source grammar trick the model; maybe the grammar doesn't allow for sentences longer than N words, but the model will never know that and will produce illegally long sentences. That's an example specific to the Markov model but I can't think of one that doesn't have the same issue. There's also Gold's theorem, which formally rules out the possibility of language learning without negative input (someone saying "this isn't a valid sentence").
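    The over-generation worry is easy to demonstrate. A minimal sketch, with invented training sentences (everything here is a hypothetical illustration, not a real corpus): train a first-order Markov chain on a handful of sentences, then sample from it.

```python
import random
from collections import defaultdict

# Toy first-order Markov "grammar": learn word-to-word transitions
# from training sentences, then generate by walking the chain.
training = [
    "the dog runs",
    "the cat runs fast",
    "the dog chases the cat",
]

transitions = defaultdict(list)
for sent in training:
    words = ["<s>"] + sent.split() + ["</s>"]
    for a, b in zip(words, words[1:]):
        transitions[a].append(b)

def generate(rng):
    """Walk the chain from <s> until </s> is drawn."""
    word, out = "<s>", []
    while True:
        word = rng.choice(transitions[word])
        if word == "</s>":
            return " ".join(out)
        out.append(word)

rng = random.Random(0)
samples = {generate(rng) for _ in range(50)}
# Every transition the model takes is attested in training, yet it
# produces sentences never seen there ("the dog runs fast"), and the
# "chases the" loop lets it emit arbitrarily long sentences even
# though no training sentence exceeds five words.
```

    If the source grammar had a hard length cap, nothing in the trained model would recover it: the chain fits the bigram statistics, not the rule.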

    The philosophical response standard among the orthodoxy is that if you make a computer that can produce well formed sentences but it's not using the same grammar as humans, or which fails in edge cases, then it doesn't really matter. Humans produce ungrammatical sentences all the time, and it's at least worth asking if all the speakers of the same language really do have the same mental grammar or if everyone's model of language is internally different but externally similar. And that's fair, if you take the goal of AI research to be the production of systems like chatbots and spell checkers and such then it really is a non-issue. But in my opinion the really important output of the AI project is not these systems but rather a greater understanding of human cognition. That's not a super popular opinion (although it is represented as a minority in the literature) but we kind of have a lot of human intelligence, 8 billion people, society isn't going to collapse if we have to employ some of them in language tasks, language skills are abundant and cheap. But insight into the nature of language itself, something that is a substrate of thought, that we all use easily every day but can't explain (we can identify qualities like "wordiness" or "bold writing" or "awkward wording" but can't really explain them intellectually), is fascinating.
  20. Lanny Bird of Courage
    Originally posted by Captain Falcon Interesting, thanks for the detailed post. What do you think of the idea of using ML to map out the rules of a language precisely?

    I think to some degree it's inevitable that we use ML approaches, probably for learning and almost certainly for verification/benchmarking. But it's hard to think about what ML approach could yield meaningful rules for language. There's this pretty famous paper in computer vision called Eigenfaces; it deals with facial recognition. It's an interesting approach: unlike most CV algorithms, you can represent its knowledge base as a small set of images like this:

    [eigenface images not preserved in this archive]

    It turns out these blurry face images do a very good job of representing human variation in facial structure. They're created by taking a bunch of images of faces and deriving representations that capture the most representative deltas between them, and the technique actually performs quite well when images are normalized (a lot of commercial facial recognition is just a bunch of engineering techniques for extracting faces from images and processing them into a homogeneous format before feeding them to an eigenfaces engine). BUT we can't really figure shit out from looking at these images. Like you can kinda see glasses across a couple frames, and we can tell eyes tend to be darker than eye sockets, but the interesting thing is there is no explicit mention of facial features in the algorithm. It deals with bitmaps; exactly nowhere in the algorithm or its knowledge is there any mention of eyes or noses, cheekbones or skin color. Everything we use to understand and describe faces is absent. And I think the same is likely to be true of language. We might find that statistically sometimes "as" describes time ("as I was leaving") and sometimes facilitates comparison ("I'm not as good at singing") but that just represents the difference. It says nothing about the conceptual quality of the word that lets it serve two different roles in speech.
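    The mechanics behind those blurry images are just principal component analysis on pixel vectors. A minimal sketch of that idea; the "images" here are synthetic random vectors rather than real aligned face photos, so this shows only the machinery, not the recognition performance:

```python
import numpy as np

# PCA "eigenfaces" sketch on synthetic data: each row is a
# flattened 8x8 "image" (64 pixels). Real eigenfaces would use
# normalized, aligned face photographs.
rng = np.random.default_rng(0)
faces = rng.normal(size=(50, 64))      # 50 images, 64 pixels each

mean_face = faces.mean(axis=0)
centered = faces - mean_face

# Principal components ("eigenfaces") via SVD of the centered data;
# rows of Vt are orthonormal directions of greatest variation.
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
eigenfaces = Vt[:10]                   # keep the 10 strongest

# A face is represented as a small weight vector: its projection
# onto the eigenfaces. Reconstruction is a weighted sum of them.
weights = eigenfaces @ (faces[0] - mean_face)
reconstruction = mean_face + weights @ eigenfaces
```

    Note that every quantity above is a pixel statistic; nothing in `eigenfaces` or `weights` names an eye or a nose, which is exactly the interpretability gap described above.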

    So the tl;dr I guess is ML techniques might be useful for classification tasks once theoretical categories of language are established, but I don't think they're likely to give any real insight into the rules governing language.