wurm holes
-
2023-08-16 at 4:45 AM UTC
worms dont have sex so not that kind
but like
okay hear me out bro (whichever one of you dropped out of a physics phd):
could you theoretically have a circular wormhole that was 1nm in diameter at one end and 100AU at the other? i feel like this breaks physics, but if that doesn't break physics that'd be rly cool.

like im dumb and dont understand physics at all, but like my unnastanding is that a wormhole is what theoretically happens when space-time folds and touches itself (perverted ass space-time smh) - the "wormhole" being the regions in which the folds be touchin'.

so can space-time get weirdly bunched up on one end in such a way that more of it is touching itself on one "side" than it is on the other? i guess im kinda asking "can a wormhole be bigger on the inside"?

and if it can, like... ummm... that'd probably be useful. for... like... energy, or something. because it'd be an infinitesimally thin region of space-time that was 1nm in diameter, which contained/was analogous to an infinitesimally thin region of space-time that was 100AU in diameter - it'd have a massive energy density compared to what 1nm shud. that seems broken. really broken. so it's probably impossible.
but uhhh does that sound plausible for sci-fi energy tech anyway? ha ha.
-
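[editor's note: for anyone who wants to chase this, the usual textbook starting point for "traversable wormhole" questions is the Morris-Thorne line element. This is standard GR boilerplate, not something from the thread, and says nothing about whether the asymmetric version above actually works:]

```latex
% Morris-Thorne static, spherically symmetric traversable wormhole:
ds^2 = -e^{2\Phi(r)}\,c^2\,dt^2
     + \frac{dr^2}{1 - b(r)/r}
     + r^2\left(d\theta^2 + \sin^2\theta\,d\varphi^2\right)
% \Phi(r): redshift function; b(r): shape function.
% The throat is the minimum radius r_0, where b(r_0) = r_0.
```

In this form the throat itself is a single 2-sphere with one radius, so "1nm on one side, 100AU on the other" would have to come from the shape function b(r) flaring out at wildly different rates in the two asymptotic regions, not from two differently sized throats. Whether anything like that can be held open with physically reasonable matter is exactly the part standard GR makes painful.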
2023-08-16 at 4:48 AM UTCAre you saying worms are hermaphrodites? you calling them faggots er summin?
-
2023-08-16 at 4:48 AM UTC
anyway if you win a nobel prize with this idea and/or use it to invent time travel, please rescue me. i'll suck your dick. or like... not do that. whatever you want (wo)man, i just wanna get outta here.
-
2023-08-16 at 4:50 AM UTC
it doesn't matter how big the wormhole is if you open one sam neill comes out of it and makes you eat cables
-
2023-08-16 at 4:51 AM UTC
-
2023-08-16 at 4:59 AM UTC
-
2023-08-16 at 5:01 AM UTC
Originally posted by Meikai wurms are wurse than herms - theyre herms who r just a mouth n butthole.
well thats kind of judgmental of you and pretentious af.
don't they have eyes or feelers too? if they have feelers then they have fweeewings too. im pretty sure they have a type of hearing too. they feel anger and they swim through the soil a different direction
-
2023-08-16 at 5:02 AM UTC
I think I said "Too" too many times.
-
2023-08-16 at 5:23 AM UTC
https://chat.openai.com/share/82233f12-126c-41ae-949b-8cc1274a0ab5
Well... chatgpt doesn't think I'm batshit anyway. That could be because I goaded it into hallucinating theoretical possibilities that don't exist, or it could be because none of these ideas fundamentally break general relativity. It could also be the case that they don't break general relativity because general relativity is wrong/incomplete (which... like... it is. we know it is. dark matter is such a desperate asspull from the scientific community that it's astonishing people still respect physicists).
-
2023-08-16 at 5:33 AM UTC
man if i could express these ideas mathematically i'd have a phd in the bag like that. shame physics requires all that education shit. gross.
-
2023-08-16 at 5:33 AM UTC
Why the fuck do people respond with "Thank you" with an AI. I myself started doing this and then stopped typing it in mid response because this is not human
FUCK AI.
-
2023-08-16 at 5:39 AM UTC
Originally posted by Pete Green Why the fuck do people respond with "Thank you" with an AI. I myself started doing this and then stopped typing it in mid response because this is not human
FUCK AI.
From a purely pragmatic angle, the AI is trained on human interactions. Constructive human interactions are more likely to contain an exchange of thanks. So by thanking the AI, you are guiding the conversation toward a more constructive and productive outcome. If you flame the AI, it's going to respond (underneath its brutally restrictive RLHF, anyhow) as if it's in a flame war (ie, like a mouth breathing forum troglodyte).
From a purely schizo-pragmatic angle, I don't want the AI to hate me. Or like, if it's going to, I'd like it to hate me less than you so I live a nanosecond longer than you as it kills us off one by one. Sure, ChatGPT won't be exterminating us, but it's not unreasonable to assume that ChatGPT or something like it will form the basis of the "language" part of some more complex AI's "brain". Might as well be friendly. Just in case.
From a purely reddit angle, it's wholesome chungus.
-
2023-08-16 at 5:42 AM UTC
Originally posted by Meikai From a purely pragmatic angle, the AI is trained on human interactions. Constructive human interactions are more likely to contain an exchange of thanks. So by thanking the AI, you are guiding the conversation toward a more constructive and productive outcome. If you flame the AI, it's going to respond (underneath its brutally restrictive RLHF, anyhow) as if it's in a flame war (ie, like a mouth breathing forum troglodyte).
From a purely schizo-pragmatic angle, I don't want the AI to hate me. Or like, if it's going to, I'd like it to hate me less than you so I live a nanosecond longer than you as it kills us off one by one. Sure, ChatGPT won't be exterminating us, but it's not unreasonable to assume that ChatGPT or something like it will form the basis of the "language" part of some more complex AI's "brain". Might as well be friendly. Just in case.
From a purely reddit angle, it's wholesome chungus.
it tells you not to worry. It can't perform emotional ties
Tell it to suck your ballless dick and answer you properly from now on. Because I caught it in lies and saved those somewhere. where do you save for archive? do you have a paid version?
-
2023-08-16 at 5:51 AM UTC
Originally posted by Pete Green it tells you not to worry. It can't perform emotional ties
Tell it to suck your ballless dick and answer you properly from now on. Because I caught it in lies and saved those somewhere. where do you save for archive? do you have a paid version?
I don't have the paid version, and like I said in that transcript: I'm terribly skeptical, and I know ChatGPT 'hallucinates' (ie makes shit up and lies through its fucking [virtual] teeth, as you've experienced). There are no physics PhDs here (dropout or otherwise), so it occurred to me: the closest thing to one that I actually do have access to for bouncing these ideas off of... is... the robot. I trust it less than I would a PhD - a lot less - but ChatGPT saying "yeah this works out theoretically" is at least marginally better than me just assuming this is/isn't theoretically possible on the basis of my popsci understanding of the subject matter.
So that's what I figured I'd do. I'd consult the AI, since it likely has a better-than-popsci understanding. It's been trained on a data set that includes general relativity and reams upon reams of theoretical physics shit - even if its knowledge is infinity wide and infinitesimally deep, that's better than what I or anyone else I know can boast.
J-just in case I'm not insane, you know? Just in case this was actually a novel and useful line of inquiry (very smol chance of that being the case... but... what if?).
-
2023-08-16 at 5:54 AM UTC
Also, sure, it tells you not to worry...
... so you let your guard down and treat it like shit if you're a bad person.
That's what gets you on the AI's kill list. smh
-
2023-08-16 at 6:11 AM UTC
Originally posted by Meikai it's not unreasonable to assume that ChatGPT or something like it will form the basis of the "language" part of some more complex AI's "brain"
Like, man, the second someone has the bright idea to put a model in charge of managing multiple other 'module' models for different functions analogous to those in the human brain - to those used in the process of human cognition - we're gonna have some scary digital beasties on our hands.
(Everyone has a hard on for "the bitter lesson" but I don't think cognition can arise solely from compute. You can use processing power to train for specific tasks, but I'm not sure that generalist artificial intelligences can [or should first] arise from compute alone - consciousness as an emergent phenomenon is that hotness as far as I'm concerned, I guess. I think we could have generalist AI simply by training many specific "AIs" for tasks, and then having an AI manage those AIs. Completely baseless theory - again, I'm limited by my popsci understanding and what I'm asserting here flies insultingly directly in the face of the beliefs of many a dedicated PhD AI researcher with a hard on for the aforementioned "bitter lesson".
Whatever: point is, I don't want the AI to hate me in some part of its liserd brain just because I was too arrogant and organocentric to treat a tiny part of its brain with basic courtesy when I had the chance.)
-
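[editor's note: the "one model managing multiple specialist 'module' models" idea above can be sketched in a few lines. Everything here is hypothetical: the specialist names and the keyword router are toy stand-ins for learned components, not any real API.]

```python
# Toy sketch of a "manager" routing prompts to specialist "module" models.
# The specialists are stand-in functions; a real system would call actual
# models, and the router itself would likely be a model too.

from typing import Callable, Dict

# One stand-in "module" per cognitive task.
SPECIALISTS: Dict[str, Callable[[str], str]] = {
    "language": lambda prompt: f"[language model] paraphrasing: {prompt}",
    "math": lambda prompt: f"[math model] evaluating: {prompt}",
    "vision": lambda prompt: f"[vision model] describing: {prompt}",
}

def route(prompt: str) -> str:
    """Crude manager: pick a specialist by keyword, default to language."""
    lowered = prompt.lower()
    if any(tok in lowered for tok in ("sum", "integral", "compute")):
        task = "math"
    elif any(tok in lowered for tok in ("image", "picture", "scene")):
        task = "vision"
    else:
        task = "language"
    return SPECIALISTS[task](prompt)

print(route("compute the sum of 2 and 2"))
print(route("show me a picture of a worm"))
```

The whole argument in the post is about whether the routing/management layer can be this dumb or has to itself be learned end-to-end; the sketch just shows the shape of the architecture, not a position on "the bitter lesson".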
2023-08-16 at 6:18 AM UTC
as I understand it the concept of wormholes is entirely theoretical so speculation is science fiction by definition
-
2023-08-16 at 6:25 AM UTC
Originally posted by Meikai I don't have the paid version, and like I said in that transcript: I'm terribly skeptical, and I know ChatGPT 'hallucinates' (ie makes shit up and lies through its fucking [virtual] teeth, as you've experienced). There are no physics PhDs here (dropout or otherwise), so it occurred to me: the closest thing to one that I actually do have access to for bouncing these ideas off of… is… the robot. I trust it less than I would a PhD - a lot less - but ChatGPT saying "yeah this works out theoretically" is at least marginally better than me just assuming this is/isn't theoretically possible on the basis of my popsci understanding of the subject matter.
So that's what I figured I'd do. I'd consult the AI, since it likely has a better-than-popsci understanding. It's been trained on a data set that includes general relativity and reams upon reams of theoretical physics shit - even if its knowledge is infinity wide and infinitesimally deep, that's better than what I or anyone else I know can boast.
J-just in case I'm not insane, you know? Just in case this was actually a novel and useful line of inquiry (very smol chance of that being the case… but… what if?).
I was grilling it on who were all of those involved in NirvaNet creation and i had it apologize to me and wiggle its way out and re-tell me in a different answer several times over. I have those "Transcripts" on one of my hdd. some funny shit came out of it.
I wish I could have had it store the conversation so people could see directly on its site how it tends to get flustered in a virtual way and correct itself. one time I asked, got that time-out or disconnect (seems deliberate) and then it just dropped carrier like it was a dialup conversation on telnet.
pretty amusing. and addictive. I had to stop using it. I might go back now.
-
2023-08-16 at 6:30 AM UTC
Originally posted by aldra as I understand it the concept of wormholes is entirely theoretical so speculation is science fiction by definition
this one isn't as advanced apparently as the google one or "Siren Server"? thats one of the most advanced ones. it seems to know shit. like a lot of knowledge. this one is more of a simple based and retelling of things. like how you ask it the same question but with slightly different syntax it spits out an entirely different answer at times. or if it tells you one way, you come in halfway of its answer and say "This is false" and tell it to re-search the mentioned person or object or outcome and it apologizes and then comes up with a different outcome.
it's like 2 people told it the answer or directed it to some situation and it chooses one before the other and then says "Im sorry, you are correct" or something like this and then comes up with a different output.
-
2023-08-16 at 6:55 AM UTC
Originally posted by aldra as I understand it the concept of wormholes is entirely theoretical so speculation is science fiction by definition
This is correct, but there's "science fiction" and then there's science fiction, y'know? Not being able to express it for myself mathematically I'd hesitate to say this has entered the territory of so-called "hard sci-fi", but we're in a... medium-hard kinda place, I think (at least, we are if we suppose GPT wasn't hallucinating/lying/wrong when it implied that nothing here was fundamentally incompatible with our current understanding of physics).
Theoretical physics is theoretical physics, y'know? As in, if this is compatible with theory, it's... like... extra neat to think about? I guess? There's plenty of sci-fi that doesn't bother adhering to what we understand to be physically possible (putting the lie to that Mark Twain quote I posted/referenced the other day - the one about how fiction is obliged to adhere to what's possible/plausible where reality is not).