User Controls
Posts by DUSM Raylan Givens
-
2024-04-07 at 9:32 PM UTC in How are you feeling at the moment..
John Feel
-
2024-04-07 at 9:31 PM UTC in What are you doing at the moment
John Butt
-
2024-04-07 at 9:30 PM UTC in Are you looking forward to the big war with Iran?
John God
-
2024-04-06 at 3:32 PM UTC in What are you listening to right now, space nigga?
-
2024-04-06 at 3:31 PM UTC in Are you looking forward to the big war with Iran?
Johnny Tornado
-
2024-04-05 at 4:35 PM UTC in Are you looking forward to the big war with Iran?
It's not an apoceclipse
-
2024-04-05 at 12:03 PM UTC in What are you doing at the moment
KrozDogs 2: The Casperwich
-
2024-04-04 at 7:42 PM UTC in The Angst Of Being Gay
He was gay tho
-
2024-04-04 at 7:12 PM UTC in The Angst Of Being Gay
Sudo, why don't you tell us how it feels?
-
2024-04-04 at 4:03 PM UTC in Glen DanzigMore like Gayn GayFag m I rite?
-
2024-04-04 at 2:14 PM UTC in What are you doing at the moment
Bout to make some nongshim shin ramyun noodlez with fried chickim thrown in
-
2024-04-04 at 1:16 PM UTC in We have a moral obligation to stop eating meatNigger thread for the ages
-
2024-04-03 at 2:42 PM UTC in What are you doing at the moment
Watching Matilda: The Musical
-
2024-04-03 at 11:06 AM UTC in What are you doing at the moment
John Dessert
-
2024-04-02 at 12:48 PM UTC in New SOTA open source foundation model: DBRXTopical forum. Lanny, ban this chink.
-
2024-03-30 at 5:10 PM UTC in New SOTA open source foundation model: DBRX
Originally posted by ner vegas "Humans are the only supercomputer that can be mass-produced with unskilled labour"
Yea, even if you add up all the cost of raising a human to adulthood in a good environment... DBRX cost $10 million to train, all in one go.

"as it stands AI at the moment is just an algorithm for analysing large volumes of data"

???
What are you trying to say with this?
What's the "just"? That's like the broadest definition in the universe. It could apply to anything from an entire biosphere to a torrent tracker.

"the question at hand seems to be how sources of data are weighted for accuracy (trust), given that in your example there's probably a lot of retard assumption in the data it's trained on"
It's not as straightforward as giving preference to some data over other data. It's not really even related to faulty assumptions in the semantic content of the input text.
E.g. the encoder doesn't really care whether the string "a fish is a dog" is included in the training data, or whether it's true or false, because it has no idea what a fish or a dog is; it might even tokenize and embed that string in a way where the words "fish" and "dog" never appear as whole tokens. In a sense it only cares about how a particular sequence of tokens relates to the distribution of other tokens, so that same string would "align" as true or false based on the information in the rest of the model.
So in that simplified example, if the distribution of information in the rest of the model indicates that the token "fish" is strongly related to all the data points that we IRL associate with a "banana" (yellow with some brown spots, curved and elongated, segmented peel that is easily separated, soft mushy core... etc.) and the token "dog" is strongly related to all the points we associate with "fruit" (you know what I mean), then it will think it's "true" (so to speak) that "a fish is a dog". Cuz (again, so to speak) it thinks "fish" refers to "banana" and "dog" refers to "fruit".
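A toy sketch of that point: the vectors below are completely made up, but they show how a model's "notion" of a word is just where its vector landed, not what the training text asserted about it.

```python
import math

# Made-up toy embeddings: the model stores geometry, not "truth".
emb = {
    "fish":   [0.90, 0.80, 0.10],  # pretend this landed near "banana"-like features
    "banana": [0.88, 0.82, 0.12],
    "dog":    [0.10, 0.20, 0.90],  # and this near "fruit"-like features
    "fruit":  [0.12, 0.22, 0.88],
}

def cos(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# "fish" sits closer to "banana" than to "dog". Downstream layers only ever
# see this geometry, so whatever the input string claims, the model's sense
# of "fish" is fixed by where its vector ended up.
print(cos(emb["fish"], emb["banana"]) > cos(emb["fish"], emb["dog"]))  # True
```

Real models work in hundreds or thousands of dimensions over subword tokens, but the mechanism is the same: relatedness is distance in embedding space.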
So it really is about the gestalt and not about weighting between sources.

"in order to analyse 'like a human' it'd need external sensors that are able to verify what base-level data can be immediately discarded"
That's the Transformer architecture's weighted attention mechanism, if I understand what you mean correctly?
-
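The weighted attention the thread refers to is scaled dot-product attention; here's a minimal numpy sketch (toy shapes and random values, nothing DBRX-specific):

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # rows sum to 1
    return weights @ V, weights

# Toy setup: 3 tokens, head dimension 4 (all values made up)
rng = np.random.default_rng(0)
Q = rng.normal(size=(3, 4))
K = rng.normal(size=(3, 4))
V = rng.normal(size=(3, 4))

out, w = attention(Q, K, V)
# Each output row is a learned weighted mix of ALL value vectors: every
# token's representation is conditioned on the whole context at once.
```

This is the "weighting" in the architecture: it reweights tokens against each other within the context, not data sources against each other.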
2024-03-30 at 3:23 PM UTC in New SOTA open source foundation model: DBRX
Originally posted by RETARTEDFAGET what does it use for math? it actually gives accurate answers for the arithmetic and calculus questions i tried
It doesn't reference an external tool, if that's what you're asking. They haven't released the dataset yet, but I'm guessing it is just very well curated and trained.
-
2024-03-29 at 8:13 PM UTC in New SOTA open source foundation model: DBRX
Now my view has evolved. Dijkstra said, "whether a computer can think is about as relevant as whether a submarine can swim". A sub definitely can't "swim" like a human swims, but at all the things you want a sub to do it is vastly superhuman, and that's accomplished by completely different means that are better and more scalable. In the same way, I think bio-inspired analogies are very important but limited. If you want to create truly superhuman machine intelligence, you will have to find ways of accomplishing human-like (and beyond) versatility, informational understanding, synthesis skills, etc. without just recreating a human brain. Cuz what's the point? Humans are already cheap.
-
2024-03-29 at 7:25 PM UTC in What do you eat in times of extreme poverty?
John bread
-
2024-03-29 at 7:24 PM UTC in New SOTA open source foundation model: DBRX
https://huggingface.co/spaces/databricks/dbrx-instruct
Ok so this thing is actually mad impressive to me.
Both GPT-4 and Gemini Advanced have failed one of my "standard" test questions and it's interesting how they fail it.
Basically I ask a conservation-of-momentum question about counter-rotating masses (flywheels) internal to a box: one flywheel is suddenly braked, what happens?
Both Gemini Advanced and GPT-4 on some level believe that the box can achieve net translation.
This is blatantly false and violates conservation of momentum.
DBRX, while it failed to give the correct answer (the box will start rotating against the braked flywheel), still reliably and consistently denied that any net translation could be achieved.
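The bookkeeping behind the expected answer can be sketched with toy numbers, under idealized assumptions (coaxial flywheels, frictionless bearing on the un-braked one, all values made up):

```python
# Two counter-rotating flywheels inside a box, box initially at rest.
I_fw = 2.0        # each flywheel's moment of inertia (kg*m^2), made up
I_box = 10.0      # box's moment of inertia, made up
w1, w2 = 5.0, -5.0  # counter-rotating angular velocities (rad/s)

L_total = I_fw * w1 + I_fw * w2  # = 0.0 before braking
p_total = 0.0  # no external force ever acts, so linear momentum stays 0:
               # the box can never achieve net translation

# Brake flywheel 1 (an internal torque). The free flywheel's bearing is
# frictionless, so it keeps its angular momentum; the box plus the braked
# flywheel must absorb the remainder so that total L stays conserved.
L_free = I_fw * w2
w_box = (L_total - L_free) / (I_box + I_fw)

print(p_total)  # 0.0 -> no net translation, ever
print(w_box)    # nonzero -> the box starts rotating
```

Only the rotation question needs any real thought; the "no net translation" part follows immediately from linear-momentum conservation, which is exactly the invariant GPT-4 and Gemini broke and DBRX held onto.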
Try it out, report your results