
I have single-handedly condensed all of language into 4 subsets.

  1. #1
    Sophie Pedophile Tech Support
    Or something.

    What I mean by that tad pretentious title is that everything we try to say when we use language to communicate falls into 4 categories of "description". I'll post them so you see what I mean.

    When we use language we want to convey 4 things; it could be 2, but no more than 4. All language can be reduced to needing to convey these things in order to work. Those things are:

    Concepts, locations, states and time (time being when something is taking place). For instance, the word "dog" conveys a concept, the word "half" conveys a state, "there" conveys a location, and so on.
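    A toy sketch of the scheme in Python (the lexicon and category names here are entirely made up for illustration, not a real tagger):

        # Hypothetical four-category lexicon; a real one would be enormous.
        LEXICON = {
            "dog": "concept",
            "half": "state",
            "there": "location",
            "yesterday": "time",
        }

        def classify(word):
            """Return the claimed category of a word, or None if unknown."""
            return LEXICON.get(word.lower())

        print(classify("dog"))    # concept
        print(classify("There"))  # location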


    Discuss.
  2. #2
    Nil African Astronaut [the overexcited four-footed chanar]
    ye but what about when you greet somebody? it doesn't really convey any info.
  3. #3
    Sophie Pedophile Tech Support
    Originally posted by Nil ye but what about when you greet somebody? it doesn't really convey any info.

    We'll file that under conveying a state. A state of acknowledgement of another person.
  4. #4
    how do you ask a question?
  5. #5
    Sophie Pedophile Tech Support
    Originally posted by greenplastic how do you ask a question?

    Good question. I would say when you ask a question you are in a state of inquiry but then, that just describes the act of asking a question. Hmm.
  6. #6
    kroz weak whyte, frothy cuck, and former twink
    mai tige he's be doringgiering it again ii

    /thread
  7. #7
    Originally posted by greenplastic how do you ask a question?

    Basically what I was gonna ask.
  8. #8
    Lanny Bird of Courage
    I'd also throw in exclamatory remarks as not fitting particularly well in this quadchotomy. Like when you stub your toe and say "fuck" even if there's no one there. Language would seem to serve more purposes than simple communication.
  9. #9
    Lanny Bird of Courage
    Taxonomy of language is really interesting though. You can look at the literature on POS taggers for algorithmic approaches. It's one of the few areas in AI where linguistic perspectives have been seriously considered (vs. general statistical ML techniques, which prefer to ignore domain knowledge).
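    For a concrete taste of what a POS tagger does, a minimal sketch with NLTK (the resource names in the download comments are the standard NLTK model names, but check your install):

        import nltk
        # One-time downloads if the models aren't already installed:
        # nltk.download("punkt")
        # nltk.download("averaged_perceptron_tagger")

        tokens = nltk.word_tokenize("The dog slept there yesterday.")
        print(nltk.pos_tag(tokens))
        # e.g. [('The', 'DT'), ('dog', 'NN'), ('slept', 'VBD'), ...]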
  10. #10
    Originally posted by Lanny Taxonomy of language is really interesting though. You can look at the literature on POS taggers for algorithmic approaches. It's one of the few areas in AI where linguistic perspectives have been seriously considered (vs. general statistical ML techniques, which prefer to ignore domain knowledge).

    Please elaborate. I find this fascinating.
  11. #11
    Lanny Bird of Courage
    Originally posted by Captain Falcon Please elaborate. I find this fascinating.

    On POS taggers specifically, or AI's general ignorance of articulated language models? The former seems fairly straightforward. A POS tagger needs to have a priori linguistic categories to sort words into, so knowledge of those categories is a prerequisite (unless you want to go crazy and create learned POS categories, but that's venturing into the fringe). In AI in general it's almost universally acknowledged that algorithms, especially those related to language, are designed to try to infer the rules governing language from a training set as opposed to stating them explicitly. A common reason given is that language is learned empirically (not an uncontested claim) and so its rules should be discoverable to an artificial agent empirically. I think a more likely, if more cynical, explanation is that computer scientists generally aren't trained in linguistics, so it's easier to think about language as a statistical problem as opposed to one governed by a (potentially very complex) formal system.
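    To make the "infer the rules from a training set" point concrete, a minimal sketch using NLTK's unigram tagger trained on its Penn Treebank sample; note that the tag categories themselves come a priori from the corpus annotation:

        import nltk
        from nltk.corpus import treebank
        # nltk.download("treebank")  # one-time corpus download

        # The tagger learns word -> tag statistics from annotated sentences;
        # it invents nothing: the categories are given by the annotation.
        tagger = nltk.UnigramTagger(treebank.tagged_sents())
        print(tagger.tag("the stock rose yesterday".split()))
        # Unseen words come back tagged as None.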
  12. #12
    Sophie Pedophile Tech Support
    I was half-bullshitting with the OP anyway, but I do find language interesting. How it relates to AI somewhat less so, but mostly because I don't know a lot about AI and machine learning and such in the first place.
  13. #13
    NARCassist gollums fat coach
    language can be very interesting
  14. #14
    Originally posted by Lanny On POS taggers specifically, or AI's general ignorance of articulated language models? The former seems fairly straightforward. A POS tagger needs to have a priori linguistic categories to sort words into, so knowledge of those categories is a prerequisite (unless you want to go crazy and create learned POS categories, but that's venturing into the fringe). In AI in general it's almost universally acknowledged that algorithms, especially those related to language, are designed to try to infer the rules governing language from a training set as opposed to stating them explicitly. A common reason given is that language is learned empirically (not an uncontested claim) and so its rules should be discoverable to an artificial agent empirically. I think a more likely, if more cynical, explanation is that computer scientists generally aren't trained in linguistics, so it's easier to think about language as a statistical problem as opposed to one governed by a (potentially very complex) formal system.

    Yeah, I was wondering about linguistic perspectives being used vs. statistical ML techniques, what you meant by it, and what the distinction between the two is in this specific field.

    From my understanding, normal machine learning techniques are used to solve abstract or highly complex problems through trial and error rather than systematizing them and breaking them down into exact general rules. So, for example, you can teach the AI the abstract idea of what is a hotdog and what is not a hotdog by feeding it 50000000000000 images of hotdogs and as many not-hotdog images as possible, and eventually it gets really, really good at sorting hotdog vs. not hotdog.

    Whereas otherwise you would have to directly write in "X means it's a hotdog, Y means it's not a hotdog" conditions.
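    In code, that first approach is roughly this kind of thing (a toy sketch with scikit-learn's logistic regression; the arrays are random stand-ins for actual hotdog images, so everything here is illustrative):

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        # Fake data standing in for flattened hotdog / not-hotdog images.
        rng = np.random.default_rng(0)
        X = rng.random((200, 32 * 32))   # 200 fake 32x32 grayscale images
        y = rng.integers(0, 2, 200)      # 1 = hotdog, 0 = not hotdog

        # No hand-written "X means hotdog" conditions, just learned weights.
        clf = LogisticRegression(max_iter=1000).fit(X, y)
        print(clf.predict(X[:5]))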

    Am I understanding correctly that that is kind of what it is with language? In this case the linguistic approach would be to create a map of all the rules and exceptions of the language perfectly, and then just program that.

    Would you say that an engineer could probably more easily create an AI for dealing with (as an example) Lojban vs. English (assuming equal, thorough knowledge of both)?
  15. #15
    Lanny Bird of Courage
    Originally posted by Captain Falcon Yeah, I was wondering about linguistic perspectives being used vs. statistical ML techniques, what you meant by it, and what the distinction between the two is in this specific field.

    From my understanding, normal machine learning techniques are used to solve abstract or highly complex problems through trial and error rather than systematizing them and breaking them down into exact general rules. So, for example, you can teach the AI the abstract idea of what is a hotdog and what is not a hotdog by feeding it 50000000000000 images of hotdogs and as many not-hotdog images as possible, and eventually it gets really, really good at sorting hotdog vs. not hotdog.

    Whereas otherwise you would have to directly write in "X means it's a hotdog, Y means it's not a hotdog" conditions.

    Am I understanding correctly that that is kind of what it is with language? In this case the linguistic approach would be to create a map of all the rules and exceptions of the language perfectly, and then just program that.

    Would you say that an engineer could probably more easily create an AI for dealing with (as an example) Lojban vs. English (assuming equal, thorough knowledge of both)?

    Right, that's pretty much it. Orthodox ML approaches treat language, or vision, or cognition, or any other task we might want to automate as an exercise in mapping stochastic inputs to outputs, a system governed more or less by a set of probabilistic weights and some equation that combines them. N images of hotdogs are used to predict if the next image encountered is a hotdog or not.

    The heterodoxy maintains this is a fundamentally futile strategy: that language is governed by a finite number of laws (allowing for a finite set of exceptions to each), and that any system that seeks to understand language needs to understand those laws. But then the term "understand" is interesting; commercial AI research doesn't really care about "understanding", it's not a word in their vocabulary, the focus is on problem solving. I tend to think the course we should be interested in is understanding, but from experience, problem-solving approaches have solved more human problems than understanding approaches.

    It's an interesting question what Lojban means to the "understanding" model. On the one hand, Winograd cases resolve to a simple Turing test under Lojban, that is, computers can parse Lojban sentences unambiguously; but most advocates of understanding models will say understanding language is more than being able to form a parse tree, so it's still irrelevant, since language tasks are about meaning rather than parsing.

    As to the point about directly writing some rules like "X means it's a hotdog, Y means it's not a hotdog": this is very close to the distinction between these two approaches, but is very slightly too reductive. The heterodoxy might relegate the particular task of discriminating hotdogs from non-hotdogs to some automated process, but fundamentally the parameters to that process ought to yield some information about "hotdogness" rather than statistical artifacts, which have no meaning independent of the recognition context. A more concrete example might be that the understanding model would suppose the tasks of recognizing hotdogs and drawing hotdogs rely on the same internal structure, while the orthodox statistical approach posits no necessary relationship between these tasks: drawing hotdogs draws on one training set of drawn hotdogs, while recognizing hotdogs relies on another set of actual hotdogs representing the general class of things recognizable as hotdogs.
  16. #16
    Originally posted by Lanny Right, that's pretty much it. Orthodox ML approaches treat language, or vision, or cognition, or any other task we might want to automate as an exercise in mapping stochastic inputs to outputs, a system governed more or less by a set of probabilistic weights and some equation that combines them. N images of hotdogs are used to predict if the next image encountered is a hotdog or not.

    The heterodoxy maintains this is a fundamentally futile strategy: that language is governed by a finite number of laws (allowing for a finite set of exceptions to each), and that any system that seeks to understand language needs to understand those laws. But then the term "understand" is interesting; commercial AI research doesn't really care about "understanding", it's not a word in their vocabulary, the focus is on problem solving. I tend to think the course we should be interested in is understanding, but from experience, problem-solving approaches have solved more human problems than understanding approaches.

    It's an interesting question what Lojban means to the "understanding" model. On the one hand, Winograd cases resolve to a simple Turing test under Lojban, that is, computers can parse Lojban sentences unambiguously; but most advocates of understanding models will say understanding language is more than being able to form a parse tree, so it's still irrelevant, since language tasks are about meaning rather than parsing.

    As to the point about directly writing some rules like "X means it's a hotdog, Y means it's not a hotdog": this is very close to the distinction between these two approaches, but is very slightly too reductive. The heterodoxy might relegate the particular task of discriminating hotdogs from non-hotdogs to some automated process, but fundamentally the parameters to that process ought to yield some information about "hotdogness" rather than statistical artifacts, which have no meaning independent of the recognition context. A more concrete example might be that the understanding model would suppose the tasks of recognizing hotdogs and drawing hotdogs rely on the same internal structure, while the orthodox statistical approach posits no necessary relationship between these tasks: drawing hotdogs draws on one training set of drawn hotdogs, while recognizing hotdogs relies on another set of actual hotdogs representing the general class of things recognizable as hotdogs.

    Interesting, thanks for the detailed post. What do you think of the idea of using ML to map out the rules of a language precisely?
  17. #17
    Lanny Bird of Courage
    Originally posted by Captain Falcon Interesting, thanks for the detailed post. What do you think of the idea of using ML to map out the rules of a language precisely?

    I think to some degree it's inevitable that we use ML approaches, probably for learning and almost certainly for verification/benchmarking. But it's hard to think of what ML approach could yield meaningful rules for language. There's a pretty famous paper in computer vision called Eigenfaces; it deals with facial recognition. It's an interesting approach: unlike most CV algorithms, you can represent its knowledge base as a small set of images like this:

    [image: grid of blurry eigenface images]
    It turns out these blurry face images do a very good job of representing human variation in facial structure. They're created by taking a bunch of images of faces and deriving representations that capture the most representative deltas between them, and it actually performs quite well when images are normalized (a lot of commercial facial recognition is just a bunch of engineering techniques for extracting faces from images and processing them into a homogeneous format before feeding them to an eigenfaces engine). BUT we can't really figure shit out from looking at these images. You can kinda see glasses across a couple frames, and we can tell eyes tend to be darker than eye sockets, but the interesting thing is there is no explicit mention of facial features in the algorithm. It deals with bitmaps; nowhere in the algorithm or its knowledge base is there any mention of eyes or noses, cheekbones or skin color. Everything we use to understand and describe faces is absent. And I think the same is likely to be true of language. We might find that statistically sometimes "as" describes time ("as I was leaving") and sometimes facilitates comparison ("I'm not as good at singing"), but that just represents the difference. It says nothing about the conceptual quality of the word that lets it serve two different roles in speech.
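    The core of the method is just principal component analysis over face bitmaps, so a rough modern equivalent is a few lines of scikit-learn (using the LFW dataset, which downloads on first use; the pipeline details here are a stand-in for the original paper's):

        from sklearn.datasets import fetch_lfw_people
        from sklearn.decomposition import PCA

        # Faces already extracted and normalized to a homogeneous format.
        faces = fetch_lfw_people(min_faces_per_person=50)
        pca = PCA(n_components=16).fit(faces.data)

        # Each component is an "eigenface": a bitmap-shaped direction of
        # variation. No eyes, noses, or cheekbones anywhere in the model.
        eigenfaces = pca.components_.reshape((16, *faces.images.shape[1:]))
        print(eigenfaces.shape)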

    So the tl;dr I guess is that ML techniques might be useful for classification tasks once theoretical categories of language are established, but I don't think they're likely to give any real insight into the rules governing language.
  18. #18
    Originally posted by Lanny I think to some degree it's inevitable that we use ML approaches, probably for learning and almost certainly for verification/benchmarking. But it's hard to think of what ML approach could yield meaningful rules for language. There's a pretty famous paper in computer vision called Eigenfaces; it deals with facial recognition. It's an interesting approach: unlike most CV algorithms, you can represent its knowledge base as a small set of images like this:

    [image: grid of blurry eigenface images]

    It turns out these blurry face images do a very good job of representing human variation in facial structure. They're created by taking a bunch of images of faces and deriving representations that capture the most representative deltas between them, and it actually performs quite well when images are normalized (a lot of commercial facial recognition is just a bunch of engineering techniques for extracting faces from images and processing them into a homogeneous format before feeding them to an eigenfaces engine). BUT we can't really figure shit out from looking at these images. You can kinda see glasses across a couple frames, and we can tell eyes tend to be darker than eye sockets, but the interesting thing is there is no explicit mention of facial features in the algorithm. It deals with bitmaps; nowhere in the algorithm or its knowledge base is there any mention of eyes or noses, cheekbones or skin color. Everything we use to understand and describe faces is absent. And I think the same is likely to be true of language. We might find that statistically sometimes "as" describes time ("as I was leaving") and sometimes facilitates comparison ("I'm not as good at singing"), but that just represents the difference. It says nothing about the conceptual quality of the word that lets it serve two different roles in speech.

    So the tl;dr I guess is that ML techniques might be useful for classification tasks once theoretical categories of language are established, but I don't think they're likely to give any real insight into the rules governing language.

    So my understanding is that ML won't help you understand the semantic content of language, but it can help you determine the rule structure. So "I like dogs" is pretty much no different rule-wise than "I dislike cats". In that case, what is your opinion on how to solve the riddle of language for computers to understand?
  19. #19
    Lanny Bird of Courage
    Originally posted by Captain Falcon So my understanding is that ML won't help you understand the semantic content of language, but it can help you determine the rule structure. So "I like dogs" is pretty much no different rule-wise than "I dislike cats". In that case, what is your opinion on how to solve the riddle of language for computers to understand?

    It's an interesting question: could we use ML approaches to generate a formal grammar? I don't know; in some sense, techniques like Markov models in language tasks do produce a sort of grammar: they work for recognition, and then you can turn them around and use them for generative tasks. So it is a kind of grammar, just not a great one in terms of mirroring natural languages (though to be fair, the state of the art uses more sophisticated models).

    But I think it turns into a kind of curve-fitting issue: many grammars can produce a given training set, so how do we know which one was used? Let's say I invent a language defined by its first-order Markov property: sentences are constructed by taking an existing sentence in the language and adding one more word, depending only on the last word. Then I train a Markov chain by feeding it a large number of sentences from this language. The "grammar" the trained model represents might be able to produce every sentence in the language, but it's not necessarily the same grammar that was used to generate the training data. And we can think of all kinds of ways to make the source grammar trick the model: maybe the grammar doesn't allow sentences longer than N words, but the model will never know that and will produce illegally long sentences. That's an example specific to the Markov model, but I can't think of an approach that doesn't have the same issue. There's also Gold's theorem, which formally rules out the possibility of language learning without negative input (someone saying "this isn't a valid sentence").
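    The overgeneration problem is easy to see in a toy first-order Markov model (plain Python; the corpus is made up):

        import random
        from collections import defaultdict

        # Train: the next word depends only on the previous word.
        corpus = ["the dog sleeps", "the cat sleeps", "the dog runs"]
        transitions = defaultdict(list)
        for sentence in corpus:
            words = ["<s>"] + sentence.split() + ["</s>"]
            for prev, nxt in zip(words, words[1:]):
                transitions[prev].append(nxt)

        # Generate: this can emit "the cat runs", which never occurred in
        # training; the model can't know if the source grammar allows it.
        word, out = "<s>", []
        while True:
            word = random.choice(transitions[word])
            if word == "</s>":
                break
            out.append(word)
        print(" ".join(out))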

    The philosophical response standard among the orthodoxy is that if you make a computer that can produce well-formed sentences but isn't using the same grammar as humans, or which fails in edge cases, then it doesn't really matter. Humans produce ungrammatical sentences all the time, and it's at least worth asking if all the speakers of the same language really do have the same mental grammar, or if everyone's model of language is internally different but externally similar. And that's fair: if you take the goal of AI research to be the production of systems like chatbots and spell checkers and such, then it really is a non-issue. But in my opinion the really important output of the AI project is not these systems but rather a greater understanding of human cognition. That's not a super popular opinion (although it is represented as a minority in the literature), but we kind of have a lot of human intelligence already: 8 billion people. Society isn't going to collapse if we have to employ some of them in language tasks; language skills are abundant and cheap. But insight into the nature of language itself, something that is a substrate of thought, that we all use easily every day but can't explain (we can identify qualities like "wordiness" or "bold writing" or "awkward wording" but can't really explain them intellectually), is fascinating.
  20. #20
    good job stealing my idea sophie

    who? what? when? where? why?

    lets build models!