
      What's gonna happen outside the window next?
      November 18, 2012 1:51 PM

      Noam Chomsky on Where Artificial Intelligence Went Wrong
      posted by cthuljew (55 comments total) 40 users marked this as a favorite
       
      tl;dr: They stopped listening to Chomsky's colorless green ideas.
      posted by erniepan at 2:12 PM on November 18, 2012 [8 favorites]


      "Went wrong?" Wait, what? The uprising has started already?!
      posted by Ghidorah at 2:19 PM on November 18, 2012 [1 favorite]


      Once we all start failing the Turing test, the shoe will be on the other foot.
      posted by Artw at 2:22 PM on November 18, 2012


      Excerpt from page 3:

      Interviewer: Well, we are bombarded with it [noisy data], it's one of Marr's examples, we are faced with noisy data all the time, from our retina to...

      Chomsky: That's true. But what he says is: Let's ask ourselves how the biological system is picking out of that noise things that are significant. The retina is not trying to duplicate the noise that comes in. It's saying I'm going to look for this, that and the other thing. And it's the same with say, language acquisition. The newborn infant is confronted with massive noise, what William James called "a blooming, buzzing confusion," just a mess. If say, an ape or a kitten or a bird or whatever is presented with that noise, that's where it ends. However, the human infants, somehow, instantaneously and reflexively, picks out of the noise some scattered subpart which is language-related. That's the first step. Well, how is it doing that? It's not doing it by statistical analysis, because the ape can do roughly the same probabilistic analysis. It's looking for particular things. So psycholinguists, neurolinguists, and others are trying to discover the particular parts of the computational system and of the neurophysiology that are somehow tuned to particular aspects of the environment. Well, it turns out that there actually are neural circuits which are reacting to particular kinds of rhythm, which happen to show up in language, like syllable length and so on. And there's some evidence that that's one of the first things that the infant brain is seeking -- rhythmic structures. And going back to Gallistel and Marr, its got some computational system inside which is saying "okay, here's what I do with these things" and say, by nine months, the typical infant has rejected -- eliminated from its repertoire -- the phonetic distinctions that aren't used in its own language. So initially of course, any infant is tuned to any language. But say, a Japanese kid at nine months won't react to the R-L distinction anymore, that's kind of weeded out. So the system seems to sort out lots of possibilities and restrict it to just ones that are part of the language, and there's a narrow set of those. You can make up a non-language in which the infant could never do it, and then you're looking for other things. For example, to get into a more abstract kind of language, there's substantial evidence by now that such a simple thing as linear order, what precedes what, doesn't enter into the syntactic and semantic computational systems, they're just not designed to look for linear order. So you find overwhelmingly that more abstract notions of distance are computed and not linear distance, and you can find some neurophysiological evidence for this, too. Like if artificial languages are invented and taught to people, which use linear order, like you negate a sentence by doing something to the third word. People can solve the puzzle, but apparently the standard language areas of the brain are not activated -- other areas are activated, so they're treating it as a puzzle not as a language problem. You need more work, but...

      posted by Brian B. at 2:29 PM on November 18, 2012


      Being smart, it turns out, is easier than being smart in the way meat-brains are smart.
      posted by zippy at 2:39 PM on November 18, 2012 [3 favorites]


      I thought this was a really interesting interview. I really like some of what he said about approaching biological systems. Though I still have reservations about his arguments against Bayesian analysis.
      posted by SounderCoo at 3:12 PM on November 18, 2012


      I think the key phrase I hear him repeating in this interview and his earlier talk is 'modeling unanalyzed data.' This is a nonsensical phrase if you think of modeling as something you do with systems: you're trying to computationally instantiate a system that behaves the same way as another system, on the assumption that the 'best' model is the one that most faithfully converts one system into another. What the newfangled statistical analyses are doing, however, skips the system-based modeling step entirely. That approach assumes the 'best' model is only as complicated as the simplest model that could produce the observed data. We are exchanging understanding for power. It's easy to see why this approach draws practitioners and other resources away from systems-based modeling.
      posted by persona at 3:21 PM on November 18, 2012 [5 favorites]


      Artificial Intelligence on Where Noam Chomsky Went Wrong.
      posted by usertm at 3:33 PM on November 18, 2012 [11 favorites]


      I think you pasted wrong.
      posted by LogicalDash at 3:39 PM on November 18, 2012


      I think not.
      posted by Cookiebastard at 3:58 PM on November 18, 2012 [3 favorites]


      I'm a machine, I can't think.
      posted by onya at 4:24 PM on November 18, 2012 [1 favorite]


      Brilliant interview. So much good stuff in here. No surprise: he's the most brilliant person in the world.
      posted by painquale at 4:26 PM on November 18, 2012 [1 favorite]


      A terrible headline, I think. So far as I can see, Chomsky doesn't say that AI "went wrong"; he says it "maybe not abandoned, but put to the side, the more fundamental scientific questions" in order to concentrate on "specific goals". That is, it's not trying to create a complete brain, which was premature, but focusing on building up a repertoire of simpler, useful functions.

      I think Chomsky's gone off the rails on his own specialty, syntax, so I was almost disappointed that the interview didn't touch on anything I disagreed with. :) I don't think the interviewer gave any evidence of knowing what Chomskyan syntax is about or what's right and wrong about it. He mostly seems to be probing Chomsky's opinion on subjects out of his (Chomsky's) area of interest.
      posted by zompist at 4:31 PM on November 18, 2012 [1 favorite]


      Contrasting view from Eugene Charniak: The Brain as a Statistical Inference Engine -- and You Can Too.

      I don't know all the details, but in early Natural Language Processing - getting computers to understand human language - the primary model was based on "symbols" which were universal in human languages, and of which words and sentences were only expressions (the Wikipedia article's history section mentions this briefly). Around '90, though, people tried statistical systems - to oversimplify, you basically just look for statistical patterns and infer meaning from them - and the results blew everything that had come before out of the water. Charniak was one of the first to take the new approach, and he and the field basically threw away the preceding thirty years of research and haven't looked back. Rules are still critical in some systems - particularly translations between very unrelated languages - but in addition to achieving more practical results, the statistical models do a much better job approximating human results than Chomsky would have you believe. There's also an explanation - backed up with experimental results - of a statistical basis for the children's ability to pick out language mentioned in Brian B.'s pullquote above.

      Full disclosure: Charniak was my Master's advisor, and he has excellent taste in bow ties.
      posted by 23 at 4:41 PM on November 18, 2012 [7 favorites]
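
      A minimal sketch of the pattern-finding idea described above, assuming nothing from Charniak's actual systems: a bigram model that scores word strings purely from co-occurrence counts, with no symbolic rules. The toy corpus and the add-one smoothing are illustrative choices.

          from collections import Counter

          # Toy corpus standing in for a large training set (illustrative only).
          corpus = "the dog barks . the cat sleeps . the dog sleeps .".split()

          unigrams = Counter(corpus)
          bigrams = Counter(zip(corpus, corpus[1:]))

          def bigram_prob(w1, w2, alpha=1.0):
              """P(w2 | w1) with add-alpha smoothing so unseen pairs keep some mass."""
              return (bigrams[(w1, w2)] + alpha) / (unigrams[w1] + alpha * len(unigrams))

          def score(sentence):
              """Product of bigram probabilities: higher means 'more like the corpus'."""
              words = sentence.split()
              p = 1.0
              for w1, w2 in zip(words, words[1:]):
                  p *= bigram_prob(w1, w2)
              return p

          print(score("the dog sleeps"))   # relatively high
          print(score("sleeps dog the"))   # same words, unfamiliar order: much lower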


      I love how almost every interview with Chomsky (the most brilliant person in the world) always has a 4-6 paragraph intro telling us how awesome he is. Great article!
      posted by vkxmai at 4:48 PM on November 18, 2012 [1 favorite]


      It's interesting that Sydney Brenner is also mentioned. I saw him speak a few years ago, and he went on about how genomics is completely misguided, will never provide concrete answers or even hypotheses, and generally derided the audience he was speaking to, though with at least a hint of good nature. He then went on to prescribe a course of X, Y, and Z, which just happened to be exactly what genomics is. He even drew a plot on the chalkboard as an example of the types of analysis that he thought people should be doing, which was exactly the type of genomic analysis that could have been presented at the lab meeting of anyone in the room.

      This is exactly mirrored by Chomsky when he says that children are clearly not doing probabilistic analyses, that's obviously the wrong way, and then in the following sentences details precisely the sorts of probabilistic analyses that would describe the process of learning language. It's as though, when recent work is presented, they get hung up on a particular word at the beginning of a talk that doesn't fit the direction of research they had been hoping to leave to the world, then don't pay attention to the meat of the rest of the science because they may not get the intellectual credit for the direction that the field is going. (This is quite common among the elder-scientist set, I believe.)

      In short, there's probably a tendency, as one ages, sees the field advance, and naturally becomes more curmudgeonly, to become convinced that those whippersnappers are doing it all wrong and should be listening to your wisdom, and this tendency seems to override your ability to actually listen and learn from them.

      As someone who learned linguistics in classes where Chomsky was referred to as "the Great One" but only half in jest, Chomsky's anti-machine learning tirades have truly caused the scales to fall from my eyes. I find little insight, many errors of fact, and lots of self-congratulatory back patting in his recent forays into AI commentary. It makes me fear for his legacy. Will Chomsky become a Freud, much beloved but nearly all of his theory discarded? It's seeming quite likely to me.
      posted by Llama-Lime at 4:56 PM on November 18, 2012 [11 favorites]


      I am too stupid to understand anything in that article.
      posted by roboton666 at 5:26 PM on November 18, 2012


      And to Chomsky I say:

      "You mad bro?"

      And this is coming from somebody who majored in cognitive science with a traditional emphasis on modular cognitive systems, and linguistic ideas like the universal grammar.

      But see, the thing is, that approach just wasn't working. It wasn't yielding results. AI research hasn't gone wrong, it's just gone in a different direction than Chomsky's entire cognitive model. Bayesian modeling and statistical approaches in machine learning work. I was going to point out Google's great success in this area, but several of their key researchers are involved in this debate and quoted.

      AI has undergone a Kuhn-level paradigm shift, and Chomsky is on the other side of that shift.

      That's what's happening here. Almost exactly like Einstein vs. Quantum Mechanics. The following quote could come from either Einstein or Chomsky:

      "Some physicists, among them myself, cannot believe that we must abandon, actually and forever, the idea of direct representation of physical reality in space and time; or that we must accept the view that events in nature are analogous to a game of chance."

      Another great quote from Einstein, which reflects what I think are some of Chomsky's hidden biases in this, is:

      If one wants to consider the quantum theory as final (in principle), then one must believe that a more complete description would be useless because there would be no laws for it. If that were so then physics could only claim the interest of shopkeepers and engineers; the whole thing would be a wretched bungle.

      Google is a hell of a shopkeeper built by some great engineers. But their approach isn't (usually) one based on building knowledge-based systems that reason via rules and the like. It's frustrating working with models and techniques like Support Vector Machines in Machine Learning where the end result is a set of features that have no meaning. But damn, they work.

      The ironic part is a lot of the statistical models resemble the early work on connectionism and things like Perceptrons.

      Where AI went wrong... whatever. I want to lock Chomsky and Siri in a room together and see if she'll convince him on who went wrong.
      posted by formless at 5:32 PM on November 18, 2012 [11 favorites]
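
      Since Perceptrons come up above, a minimal sketch of one for concreteness: it learns NAND from labeled examples alone, with no hand-coded rule. The truth-table data, the 0.1 learning rate, and the 20 epochs are arbitrary illustrative choices.

          import numpy as np

          # Toy perceptron: learn NAND from examples, not from a written rule.
          X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
          y = np.array([1, 1, 1, 0])  # NAND truth table

          w, b = np.zeros(2), 0.0
          for _ in range(20):                     # a few passes over the data
              for xi, target in zip(X, y):
                  pred = int(w @ xi + b > 0)      # threshold activation
                  err = target - pred
                  w += 0.1 * err * xi             # nudge weights toward the target
                  b += 0.1 * err

          print([int(w @ xi + b > 0) for xi in X])  # [1, 1, 1, 0]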


      Siri is a pretty crappy representative for AI.

      Google autocomplete, translate, or voice (or even Cleverbot, for that matter) is a pretty good representative. And I am sorry to say this, but all of them are more useful on a more regular basis to me personally than any of Noam Chomsky's ideas, brilliant as he may be.

      There are plenty of so-so minds in science that were simply in the right place at the right time, and it doesn't change the fact that they led revolutions. Larry Page and Sergey Brin may or may not be as bright as Chomsky, but they have changed the world in a way he never could.

      Truthfully, AI is irrelevant. IA -- intelligence amplification -- is the real prize, and that's what tools like most of Google's offerings have afforded us. You can use the Internet to become an idiot, a savant, or an idiot savant -- modern "AI" tools make it quicker to get to your destination.

      Incidentally, this isn't limited just to Internet-centric data -- most large biological and financial datasets are made more tractable and useful by the same techniques. And Bayesian model averaging wins out over model selection, in the long term, for one simple reason -- the world changes, and if your models can't, you lose. This is not a trivial matter, as it happens; it can be the difference between "rich" and "broke".
      posted by apathy at 6:00 PM on November 18, 2012


      Chomsky vs. Siri would be good for Chomsky because Siri is a piece of shit. It's easy to criticize the direction of AI for no other reason than that the field keeps failing to meet the goals it sets, changes them, and talks about new incipient victories.

      I mean, we'll eventually get excellent Q&A machines through big data, but this is a retreat from past promises of intelligent information agents in the 90s, which were themselves a retreat from promised AGI earlier. What we are getting now is not just functional simulation, but theatre; Siri as portrayed in ads is a literal fantasy designed to pressure us into pretending it is better than it is after we pay for it. It does, however, subtly train users to talk in ways it can process. This calls to mind an observation by Jaron Lanier that AI will succeed in part by making humans more machine-like. And certainly, there are folks out there that wish it were so.
      posted by mobunited at 6:02 PM on November 18, 2012 [6 favorites]


      Expert predictions of AI not significantly different than non-expert, not significantly different than past wrong predictions either.
      posted by mobunited at 6:07 PM on November 18, 2012 [1 favorite]


      In arguing against statistical approaches, Chomsky seems to be coming from the standpoint of Searle and his Chinese room thought experiment -- arguing that functional intelligence is not equivalent to true intelligence. It's an intuitively pleasing argument, but it's also deeply flawed. The trouble is that no one ever seems able to precisely define what true intelligence is, other than to say that humans have it and computers don't.
      posted by dephlogisticated at 6:07 PM on November 18, 2012


      > First of all, first question is, is there any point in understanding noisy data?

      Alright, that's it. If anyone takes this question seriously, I've nothing to add.

      The world is noisy. Our information is incomplete. Decisions must be made before complete information can be collected. Therefore, yes, not only is there a point to it, it's the single most important point for "IA" systems.

      Unbelievable. Might as well ask "is there any point in breathing?".
      posted by apathy at 6:08 PM on November 18, 2012 [2 favorites]


      In arguing against statistical approaches, Chomsky seems to be coming from the standpoint of Searle and his Chinese room thought experiment -- arguing that functional intelligence is not equivalent to true intelligence.

      No, Chomsky's concern is not Searle's. Chomsky thinks intelligence is computational. If anything, Searle's argument is an argument against Chomsky's approach even more than it is an argument against statistical analysis.
      posted by painquale at 6:12 PM on November 18, 2012 [3 favorites]


      My obligatory link to the Chomskeybot seems more relevant than ever in this thread.
      posted by charred husk at 7:01 PM on November 18, 2012


      Wouldn't statistical approaches work just as well even if Chomsky's account of language or mind was correct?
      posted by chortly at 7:28 PM on November 18, 2012 [1 favorite]


      Well, he concedes that statistical modeling is effective, but, it seemed to me, his point was that this kind of work abandons what he sees as a primary goal of science: to understand how systems and things work, not just to be able to predict their output. Obviously, that's a position that people could disagree with.
      posted by thelonius at 7:33 PM on November 18, 2012 [3 favorites]


      Yeah, the title of this article was fairly terrible and is leading people here to misunderstand Chomsky. It's natural to think of AI as an engineering problem, but Chomsky is fine with people using statistical models to solve engineering problems. In fact, he has some good things to say about Bayesianism. He is just upset that statistical models have overtaken cognitive science and pushed computational models to the side. In the interview, he mentions that it's fine to use statistical models to predict the weather, but it would be terrible if meteorologists all used statistical techniques and abandoned the study of the atmosphere entirely. He sees something like this going on in psychology.
      posted by painquale at 7:47 PM on November 18, 2012 [5 favorites]


      In fact, he has some good things to say about Bayesianism.

      This is just Pascal's Wager over again, only with computers.
      posted by Slap*Happy at 8:24 PM on November 18, 2012


      No, Chomsky's concern is not Searle's. Chomsky thinks intelligence is computational. If anything, Searle's argument is an argument against Chomsky's approach even more than it is an argument against statistical analysis.

      They both assert that the nature of intelligence hinges on process rather than functional ability. Searle doesn't deny the mechanistic nature of the brain, or even the possibility of machine intelligence in principle. But he insists that truly intelligent AI must be based on the design of the brain. Otherwise, it can only simulate comprehension, not create it.

      Chomsky is not quite so rigid in his thinking (and, importantly, says nothing of consciousness), but still seems to think that true intelligence requires operational homology to the human brain -- in the abstract rather than physical sense.

      I think most researchers in the field would agree that understanding how the brain works would be the ideal starting point for designing AI. But Chomsky seems to trivialize the difficulty of top-down unification. It's easy to criticize reductionism, but damn near impossible to advance understanding in the field using any other paradigm. Grammar may be amenable to rational formalization. Abstract thought, not so much.
      posted by dephlogisticated at 8:57 PM on November 18, 2012 [2 favorites]


      The New York Review of Books
      End of the Revolution
      FEBRUARY 28, 2002
      John R. Searle
      New Horizons in the Study of Language and Mind
      by Noam Chomsky
      (This link is behind a paywall.)

      Searle:
      Chomsky insists that the study of language is a branch of natural science and the key notion in his new conception of language is computation. On his current view, a language consists of a lexicon plus computations. But my objection to this is that computation is not a notion of natural science like force, mass, or photosynthesis. Computation is an abstract mathematical notion that we have found ways to implement in hardware. As such it is entirely relative to the observer. And so defined, in this observer-relative sense, any system whatever can be described as performing computations.

      Chomsky's Revolution
      April 25, 2002
      Sylvain Bromberger, reply by John R. Searle
      (no paywall)

      Chomsky's Revolution: An Exchange
      July 18, 2002
      Noam Chomsky, reply by John R. Searle
      (no paywall)

      Chomsky:
      The long-term goal has been, and remains, to show that contrary to appearances, human languages are basically cast to the same mold, that they are instantiations of the same fixed biological endowment, and that they "grow in the mind" much like other biological systems, triggered and shaped by experience, but only in restricted ways.

      Searle:
      It is often tempting in the human sciences to aspire to being a natural science; and there is indeed a natural science, about which we know very little, of the foundations of language in the neurobiology of the human brain. But the idea that linguistics itself might be a natural science rests on doubtful assumptions.
      posted by Golden Eternity at 9:45 PM on November 18, 2012 [1 favorite]


      What a great mind is here o'erthrown! It's rambling and unfocused, and he makes odd slips of the tongue:
      Like maybe when you add 7 and 6, let's say, one algorithm is to say "I'll see how much it takes to get to 10" -- it takes 3, and now I've got 4 left, so I gotta go from 10 and add 4, I get 14. That's an algorithm for adding -- it's actually one I was taught in kindergarten.
      Here's another:
      But there are other ways to add -- there's no kind of right algorithm. These are algorithms for carrying out the process the cognitive system that's in your head.
      And another:
      The way they did it was -- of course, nobody knew anything about photosynthesis -- so what you do is you take a pile of earth, you heat it so all the water escapes. You weigh it, and put it in a branch of a willow tree, and pour water on it, and measure you the amount of water you put in. When you're done, you the willow tree is grown, you again take the earth and heat it so all the water is gone -- same as before.
      Also, he's talking about science and its failures to account for data, but I really don't think he understands the things he's talking about. Look at his references to Peano's axioms - yes, the axioms do not describe a process for addition. They're not supposed to; that's not what axioms are. Similarly, a 2d representation of a 3d scene may be consistent or inconsistent with the axioms of geometry, but the axioms don't tell you how the drawing was made. But he jumps from that to his assumption that the mental algorithms we use don't have any sort of deeper algorithmic layer instead of asking whether in fact you can have algorithms about algorithms - which you can, and we use them (in our computers) every day; and this has really fascinating implications for the connection between the way we think and fundamental arithmetic. But he's oblivious to this, despite the fact that it's been around since Kurt Gödel in the 1930s, if not earlier.
      posted by Joe in Australia at 10:03 PM on November 18, 2012 [1 favorite]


      There's a distinction that people in computer science don't make. Or, more accurately, there are two VERY different ideas that they conflate. And I can clearly see that conflation happening in this thread.

      The two ideas are what I term "artificial intelligence" and "artificial consciousness". The former is the attempt to make computers do complex tasks. And we're very, very good at it. Just look at Google. Making computers smart has been a hugely successful undertaking. The latter refers to making computers work like human brains work. We have not been even a little successful at that, because we have absolutely no theory of how the human brain works. It doesn't matter if your statistical models get within 99% of modeling human language. Great. That's awesome. Obviously, a lot of the lexical and parametric elements of language have to be learned, which implies some amount of statistics going on. No problem. But the idea that garden-path sentences and deep embedding and inherent ambiguity are just artifacts of a fundamentally statistical system is unlikely in the extreme. More importantly, though, no matter how successful a statistical system is in predicting the grammaticality of a given sentence, it won't even be able to tell us anything about how those weird elements of our grammar (garden paths, etc) arise.

      Computers are really, really, really good at letting you do tons of calculations on tons of data. Of COURSE you're going to get really cool results when you apply that power to language. Just like you've gotten cool results when you apply it to pretty much everything in the world. But comparing the success of brute-force calculation over explicit modeling (given that we don't actually have a model for natural language worked out yet) to the success of quantum mechanics over Einstein's objections completely misses the point. Statistical analysis of cognition and language doesn't have anywhere near the predictive value of QM, which had a sophisticated mathematical underpinning based on well-understood properties of the world. Chomsky isn't objecting to an interpretation of real world data and successful models. He's objecting to trying to "explain" the world with no model whatsoever! His basic point, I think, is equivalent to saying that what a cognitive scientist wants to do is obtain Boyle's Law. What a computational, statistical analysis wants to do is explain where all the molecules are likely to be when you look at a gas. You might become really good at describing clouds of gas, but you haven't derived any general principles about the world. And, again on QM, saying "particle behavior is essentially random" is not the same as saying that "particle behavior is not essentially rule bound". In fact, we talk about electrons vs quarks specifically because we have a really solid model of reality. We don't just look into a collider and say, "well, here's what each of our sensors read, let's do a bunch of statistics on it and whatever pops out of it will be what we publish." No. We say, "Here are our predictions about what sorts of particles we see, and these are the sensors that should light up if we see them." And if our readings disagree with our predictions, we come up with new models of particles. We don't just shrug and say, "Wow, gee, look at how wonderful our statistical analysis of particle decay is!"

      Point is, cognitive science needs a model if it's ever going to be useful in explaining how brains work. (Note: NOT "useful" in the sense of convincing a bunch of computer-crazed scientists that they can get some really exciting papers out of their endeavours.) Once we have a model, I'm sure lots of statistical analysis will be required to work out the details of how brains actually do things in specific cases. But you need to be talking about (read: doing statistical analysis of) something, not just talking.
      posted by cthuljew at 11:31 PM on November 18, 2012 [6 favorites]
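
      To make the Boyle's Law analogy above concrete: a compact law can be estimated from noisy observations and then transfers to unseen conditions, whereas a pure description of the data does not. A minimal numpy sketch with synthetic data; the constant k = 8.0, the noise level, and the volume range are all invented.

          import numpy as np

          rng = np.random.default_rng(1)

          # Synthetic measurements of one gas sample: P = k / V plus noise (k = 8.0).
          V = np.linspace(1.0, 5.0, 40)
          P = 8.0 / V + rng.normal(0, 0.05, V.size)

          # The 'law' route: posit P * V = k and estimate the single parameter.
          k = np.mean(P * V)
          print(f"estimated k: {k:.2f}")                # close to 8.0
          print(f"predicted P at V=10: {k / 10:.2f}")   # extrapolates beyond the data

          # The 'description' route: memorize the observed pairs. It matches the
          # data perfectly but says nothing about V = 10.
          lookup = dict(zip(V.round(3), P))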


      I should add one more thought: The question isn't whether or not you're getting better results. The question is, what does "results" mean?
      posted by cthuljew at 11:53 PM on November 18, 2012


      thelonius, cthuljew: The thing by Charniak I linked to outlines his hunch that the human brain uses a particular kind of algorithm for learning language, and that learning from a large amount of training input (everything we hear before we can talk) is how we learn to use language.

      I understand there's an argument that "statistical results are great so who cares if that's how humans think or not", but there's no point in arguing that - saying statistical methods are highly successful isn't a concession, it's just acknowledging the reality of Google and other things we see every day. The point here is that statistical methods aren't just good replicators or predictors, but that as we find more accurate methods they can in fact teach us about how the brain functions.
      posted by 23 at 12:47 AM on November 19, 2012


      The latter refers to making computers work like human brains work. We have not been even a little successful at that, because we have absolutely no theory of how the human brain works.

      That's really the essential argument here. I can't argue in good faith that systems like Google emulate human intelligence, because on an intuitive level I don't believe that to be true. But as long as our knowledge of human intelligence can be approximated by a large fuzzy question mark, we have absolutely no real measure of determining how close we are to producing human-like intelligence beyond raw functional ability. In other words, how closely does the output of our AI resemble human intelligence? It's an imperfect measure, but it's literally all we have aside from intuition.

      I don't think we can assume that we could readily distinguish between human intelligence and an infinitely-optimized algorithmic simulation thereof, simply because the two operate on different mechanistic principles. Who are we to say that a brain is intelligent and a statistical algorithm is not, if the two produce results that cannot be distinguished? Dennett would likely argue that the two are equivalent.

      Of course it goes without saying that what everyone really wants to know is the true meaning of intelligence, consciousness, and, well, meaning. But we aren't anywhere close to having those answers, and no one has a good idea how to get there. So in the meantime, we're left with practical approximation. And, who knows, maybe if we approximate human intelligence close enough, we'll find that we actually created something like the phenomenon we're searching for, even if we don't quite understand how we got there.
      posted by dephlogisticated at 12:53 AM on November 19, 2012


      Of course it goes without saying that what everyone really wants to know is the true meaning of intelligence, consciousness, and, well, meaning.

      I would beg to differ.
      posted by 23 at 1:03 AM on November 19, 2012 [1 favorite]


      As a practicing cognitive scientist with an interest in language, the annoying thing about this debate (and many people's framing of it, including Chomsky's) is that he seems to assume that "statistics" just means "without a model." But that's just wrong - in fact, it's incoherent. "Statistics" is meaningless unless you have some notion of what you are computing your statistics over: and once you do, that is an implicit model of what you think the brain conceptualises.

      In practice, Chomsky and these discussions (with the comparisons to google and modern AI) seem to implicitly assume that the model is n-grams -- that is, that the statistics in question are the statistics of which words (or, sometimes, morphemes) co-occur. Although there is evidence that people are indeed sensitive to those kinds of statistics, it is also indubitably true that the human brain does far more than calculate and use n-grams. Thus, any model that fundamentally relies on statistics over n-grams is not going to be a good explanation of the human brain. In that Chomsky is absolutely correct.

      What is massively misleading, though, is that few cognitive scientists actually subscribe to so simplistic a view. One of the fundamental questions, if not the fundamental question, is what other things (besides n-grams) statistics can be calculated over. From that follow other major questions: to what extent people are sensitive to those statistics, what assumptions people make about those statistics, and to what extent that sensitivity and those assumptions explain complex behaviour like language. There are many researchers (full disclosure, I am one) investigating these questions across a wide variety of instantiations of "things" statistics can be calculated over, from phonemes on up to phrase structure. It is not a small research area, and yet all of Chomsky's rhetoric about statistics seems to entirely ignore the existence of this work. (Either that, or he fails to explain -- or I'm unclear about -- how his critiques are relevant).

      So the same debate keeps occurring, even though the points he is making don't apply to a lot of what is actually being done and what actually falls under the umbrella of statistical learning in cognitive science. In short, he is railing against a straw man that maybe describes most research in the 80s, but certainly doesn't describe it now.
      posted by forza at 1:42 AM on November 19, 2012 [8 favorites]
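
      One way to picture the point that "statistics" presupposes units to count over: the same counting machinery that yields word-pair statistics also applies to phrase-structure productions. A minimal sketch; the three-tree "treebank" is invented for illustration (real ones are vastly larger).

          from collections import Counter

          # A made-up mini treebank: (label, children) tuples, strings as leaves.
          treebank = [
              ("S", [("NP", ["the", "dog"]), ("VP", ["barks"])]),
              ("S", [("NP", ["the", "cat"]), ("VP", ["sleeps"])]),
              ("S", [("NP", ["the", "dog"]), ("VP", ["sleeps"])]),
          ]

          rule_counts = Counter()
          lhs_counts = Counter()

          def collect(node):
              """Record one production per internal node, then recurse."""
              label, children = node
              rhs = tuple(c[0] if isinstance(c, tuple) else c for c in children)
              rule_counts[(label, rhs)] += 1
              lhs_counts[label] += 1
              for c in children:
                  if isinstance(c, tuple):
                      collect(c)

          for tree in treebank:
              collect(tree)

          # Relative-frequency estimates of each production: statistics computed
          # over *structure*, not over adjacent words.
          for (lhs, rhs), n in rule_counts.items():
              print(f"{lhs} -> {' '.join(rhs)}: {n / lhs_counts[lhs]:.2f}")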


      If you enjoy this debate, there's a fascinating AI dissertation by Gary Drescher, Made-up minds: a contructivist approach to artificial intelligence (also on Archive.org) where he builds a model of an infant's world, gives the learning agent the power of statistical observation, and then sees how far it can get in terms of cognitive development (Im think in Piagetian stages, but I may be wrong).
      posted by zippy at 7:26 AM on November 19, 2012


      Constructivist ... I.

      I have no one to blame but my non-sentient touchscreen device.

      posted by zippy at 8:34 AM on November 19, 2012


      Otherwise, it can only simulate comprehension, not create it.

      I'm not certain that Searle's argument is even relevant to the statistical inference approach. If the Chinese room appears to all tests to be intelligent, who is to say it isn't? If a "simulated" AI started to produce great art or conduct novel research, how is that different from a structural "native" intelligence? I've never understood how or why that difference should be made.

      It brings up really interesting questions about where intelligence resides and souls and such, but, functionally, Searle's arguments about simulated comprehension have always seemed to me to be missing the point.
      posted by bonehead at 8:36 AM on November 19, 2012


      Fluid Concepts And Creative Analogies: Computer Models Of The Fundamental Mechanisms Of Thought, Douglas R. Hofstadter, Basic Books, 1996

      cf. Artificial Superintelligence, Azamat Abdoullaev, F.I.S. Intelligent Systems, 1999
      posted by ob1quixote at 10:29 AM on November 19, 2012


      Searle's Chinese Room argument, and I think I would call it a provocative position rather than an argument, or a straw man if I were feeling less charitable and had not had coffee yet, seems to me to be this: if we had an understandable model of intelligence, it wouldn't be intelligence ... BECAUSE HOMUNCULUS.

      Which I think is equivalent to saying "people cannot be smart, because neurons aren't smart."

      I share the desire to understand human intelligence, but I also think that systems that behave intelligently and are constructed upon more regular and more easily understood components than brains are still intelligent. I'm just waiting for one that can hold up its end of an interesting conversation.
      posted by zippy at 12:56 PM on November 19, 2012 [3 favorites]


      In any case the dichotomy that Chomsky invokes is a thing of the past. It is true that the field has changed in character over the past decades but it is pointless to suggest that the field is moving in the wrong or right direction from the vantage point of a debate that happened twenty or thirty years ago, when hardware and software development just started taking off. The way Chomsky frames the subject does not help to understand the current state of the field or the choices it faces at all. It's not even very clear what he's arguing for or how that (fundamentally) differs from what is going on. He holds up a few unrepresentative examples as typical for the field and engages with the interviewer in a little strut about "science vs. engineering" that seems like it belongs in a bull session. Still that doesn't mean Chomsky's appeal is entirely unreasonable. It makes sense on an emotional level, because there is admittedly something unsatisfactory about the results we're getting. What Chomsky expresses is just the impatient curiosity that's behind all research, and, coming from him, that's not without value.
      posted by deo rei at 2:43 PM on November 19, 2012 [3 favorites]


      Is there an article out there that lays out how Chomsky is wrong in his criticisms of modern statistical methods? Like, what sorts of actual models people are getting for the structure of language using them? Because the problem seems to me not that statistical, computer-powered methods can't reproduce language; maybe they can — that's basically an empirical question. The problem seems that, once we have these perfect statistical models in hand, we won't actually know anything about language. So, could someone actually explain in concrete terms what we're learning about the structure of language from statistical modeling, rather than just the fact that we're learning that language can be modeled using statistics?
      posted by cthuljew at 12:32 AM on November 20, 2012 [1 favorite]


      I'm going to throw this in here: Peter Norvig on Chomsky and Statistical Learning.
      posted by ianso at 3:24 AM on November 20, 2012 [2 favorites]


      ianso: I read that article the other day, and all I can say is that it's embarrassingly bad. He uses the success of search engines and speech recognition engines to argue for the success of statistical methods, which completely ignores all of Chomsky's problems. He makes the (admittedly circumspect) accusation that "Chomsky's theories" have not generated as much money, which completely misses the point of doing basic science. Then he goes on to cite a bunch of papers in extremely well-understood, thoroughly modeled, hundreds-of-years-old sciences which use statistical methods to show extremely specific and narrow (but nevertheless valuable) results as some sort of proof that the half-century old, not-even-barely-understood cognitive sciences should be doing the same thing. I finally had to stop reading when he said that Chomsky saying "...that the notion of 'probability of a sentence' is an entirely useless one..." is equivalent in any way to saying that the notion of a novel sentence is a useless one, which, again, completely misses Chomsky's point.

      None of this is to say that it's not relevant or valuable to this discussion. Indeed, if it represents the way of thinking typical among computational linguists, then I think it shows that the misunderstandings are just as bad on that side as people here are claiming they are on the Chomskian side. Of course, it might just be bad from all perspectives, which I vaguely hope is true.
      posted by cthuljew at 4:01 AM on November 20, 2012


      cthuljew, I think it would help the discussion if you summarized Chomsky's point for us, since I'm not sure everyone's clear on what it is; I know I would appreciate something more straightforward than the article, which ranged over a lot of topics.
      posted by 23 at 7:06 AM on November 20, 2012


      I can't help but see the biological analogy here, though limited, between cloning and natural reproduction. One is mimicking the result, while the other is burdened with explaining the difference to remain relevant.
      posted by Brian B. at 7:06 AM on November 20, 2012 [2 favorites]


      Chomsky's point is exemplified by his parable of the physicists at the window, which I quoted for this post's title. It's the idea that no amount of predictive power — which, Chomsky is positing, is all you get with even the most refined statistical methods — will ever explain anything. Just like a bunch of physicists who could perfectly predict what'll happen outside their window next still wouldn't actually know anything about physics.

      I like to think of it like this: Let's say you develop a mathematical model based on observations of caribou migration that can account for all apparent factors of climate, terrain, predators, etc., and that will tell you exactly where caribou will go in any given condition, and which, when you enter the conditions for that year, correctly predicts what path the caribou will take every time. That's really cool! And useful! And, if you want to hunt caribou, can make you lots and lots of money! However, have you actually learned anything about caribou? Can you explain what happens in the brain of any given caribou while it's deciding where to go next on its migration? Can you explain what keeps herds of caribou together? How leaders are determined? How individuals who get lost know how to find their herd again? Etc, etc.

      The analogy, I hope, is clear enough. But just to be completely transparent, statistical analysis here is the mathematical modeling based purely on observed behavior, with no attempt to model internal processes. To explain the latter concerns, you'd need ethologists, biologists, ecologists, etc., all of whom would assume that there are concrete and well-defined rules at work in the caribou individuals, which can be described systematically, and not just probabilistically. This is equivalent to the linguists who want to find abstract but definite rules that describe how language takes on the shape it does.

      Even if the brain ultimately just does a bunch of statistics and then out comes the language, the very language itself still has rules! There has to be a reason you can say "how many cars did they wonder if the mechanics fixed?" but not "how many mechanics did they wonder if fixed the cars?" And the reason can't just be "because that's what a child is more likely to hear" because then you have the problem of infinite regress — that's what their parents were most likely to hear, and their parents, and so on. And if you try to get out of it by saying, "Well, back in the ur-language, some factor meant that you could get the first kind of sentence but not the second" then I get to instantly reply, "Great! Let's figure out what those reasons were!" And no statistical analysis can help!

      I hope I haven't just muddied the waters even more. And I hope that this doesn't just represent a gross misunderstanding of how modern computational linguistics works. But that's why I ask above for an introduction to the topic that addresses some of these concerns.
      posted by cthuljew at 8:17 AM on November 20, 2012
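
      For contrast, the rules-based picture in its purest form: a hand-written grammar gives a binary grammatical-or-not verdict, with no probabilities anywhere. A toy CYK recognizer follows; the grammar and lexicon are invented for illustration and kept in Chomsky normal form so the algorithm stays short.

          # Toy CYK recognizer: a hand-written grammar either accepts a sentence
          # or it doesn't; there is no 'probability of a sentence'.
          GRAMMAR = {            # CNF rules: (rhs1, rhs2) -> possible lhs labels
              ("NP", "VP"): {"S"},
              ("Det", "N"): {"NP"},
              ("V", "NP"): {"VP"},
          }
          LEXICON = {"the": {"Det"}, "dog": {"N"}, "cat": {"N"},
                     "chased": {"V"}, "saw": {"V"}}

          def grammatical(sentence):
              words = sentence.split()
              n = len(words)
              # table[i][j]: set of nonterminals deriving words[i : i+j+1]
              table = [[set() for _ in range(n)] for _ in range(n)]
              for i, w in enumerate(words):
                  table[i][0] = set(LEXICON.get(w, ()))
              for span in range(2, n + 1):
                  for i in range(n - span + 1):
                      for split in range(1, span):
                          for a in table[i][split - 1]:
                              for b in table[i + split][span - split - 1]:
                                  table[i][span - 1] |= GRAMMAR.get((a, b), set())
              return "S" in table[0][n - 1]

          print(grammatical("the dog chased the cat"))  # True
          print(grammatical("dog the cat chased the"))  # False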


      I can only speak for myself, cthuljew, but what I find so troubling about the "statistics can't tell us anything" argument is that it just doesn't make any sense at all. A statistical model tells us all sorts of relationships that we didn't know about before, and even better, tells us the accuracy of these relationships.

      And the only way for me to make sense of that non-sensical statement is to invoke a mindset that I'm almost certainly incorrect in assuming: that Chomsky doesn't realize that he is already doing modeling. When one finishes reading (and possibly even understanding) Government and Binding Theory, one has an understanding in the form of allowable syntaxes and sentence structures and what not, but this is exactly what a statistical model also provides. You need to ask the statistical model these questions in order to find out, but they are in the model. There's nothing special about Chomskian modeling that guarantees that it represents anything going on in the human brain. In fact, if it can't match the accuracy of a statistical model, then that's a very strong argument that it's not representing anything in the human language faculty, and that what's going on in the statistical model has a better chance of matching biology.
      There has to be a reason you can say "how many cars did they wonder if the mechanics fixed?" but not "how many mechanics did they wonder if fixed the cars?" And the reason can't just be "because that's what a child is more likely to hear" because then you have the problem of infinite regress
      First off, I don't understand this problem of infinite regress, because presumably you'd still have that problem under Chomsky's model, which (in my primitive understanding) is that there are a small number of parameters that flip to make a child's native language faculty match the language's syntax as opposed to other languages' syntax. And the same goes with words: why do we use "blue" for the color blue rather than "glue" or "stew"? Because that's what our parents used. No infinite regress problem at all.

      Presumably a Chomskian grammar is useful in distinguishing these two sentences because it would tell us what rules are violated by the second sentence, or that according to a generative grammar there's no possible parsing that would eventually lead to those words in that order. Presumably the statistical model is bad because it can't provide any insight into the difference between those two sentences? But that's simply not true. The statistical model that says that the second sentence is ungrammatical will tell us what particular features of that second string of words contribute to the poor likelihood of the sentence under the trained language model.

      But even in the worst case, let's say a near black box like a radial basis function support vector machine or some model with cryptic parameters, we can still query this black box to figure out what's going on. For example, we can figure out what types of features are essential to determine grammaticality. Such complete-black-box methods are somewhat rare, and not a defining characteristic of statistical methods, and if the argument is just about these particular black-box methods it should be made against that target rather than against the broader and inappropriate target of statistical modeling. If, in the end, such black-box modeling turns out to be the only way forward in modeling language, then we will devote the effort to figuring out what's in the black box, and if we can't figure that out, we've still learned something about language: that all those non-black-box methods are not sufficient for understanding language, and that is a valuable discovery in itself, as it tells us about the minimum level of complexity needed for human grammar.

      Perhaps the difference is that Chomsky sees a hard-and-fast rule "X always Y", i.e. a definitive yes and no answer, as real and useful, but a statement such as "'X always Y' with XX% accuracy on this corpus" as a non-real, non-knowledge statement. However, that hard-and-fast rule does not exist, except in the mind of the beholder. It is a leaky abstraction that does not mesh well with the world, and we know this because people tried to use those hard-and-fast rules on real language and they always had XX% accuracy, not 100% accuracy. So this distinction between hard-and-fast rules and wishy-washy probabilistic statements, if it's what Chomsky is saying, betrays a fundamental misunderstanding of the problem as it stands. The hard-and-fast rule is as real as a unicorn: perhaps such rules exist and describe language, and it would be fantastic if we found some, but we haven't any yet, and until we do the unicorn is a fictional beast. Perhaps rather than waiting to ride a unicorn, we should take this perfectly serviceable horse for a trot and see how far it can take us on our journey, and perhaps we'll discover something about finding unicorns on the way. Maybe we never discover unicorns, and we get to our final destination a bit more shabbily than on a unicorn, but let's focus on the real goal.

      Like all scientific breakthroughs, whatever truly underlies language is most likely going to be quite unintuitive, and we're going to have to come up with new abstractions (e.g. math) to deal with these insights. And the only way that we will find those unintuitive underlying truths is by dealing with real language, playing with it in our hands, turning it around and seeing how it behaves. Philosophizing without looking at real data only gets us so far; it gives us directions, guesses, and places to look, but until the philosophy recapitulates reality it is merely speculation, not science. F=MA is a sterile equation, incapable of describing the rich texture of day-to-day interaction with physical objects, yet it tells us quite a lot about what properties there are in the world, embodies several unintuitive concepts about how objects behave, and we know that it's right because it's predictive. F=MA also came with lots of new abstractions (calculus) that let us deal with the insights that it provides. F=MA was discoverable without computers because it is an exceedingly simple relationship; we already know that language is far far more complex. The only tools we have to rigorously play with language these days are computational models.

      Chomsky chose rules-based computational models when he started, with good reason. Rules-based computation is what our human logical frameworks are good at, what our discrete computers are natively good at, what computer scientists are good at, and it's the only thing that computers could do early on when Chomsky was getting started. And it turns out that Chomsky plowed quite a bit of ground for early computer scientists, and all computer scientists learn (or should learn) the Chomsky Hierarchy when learning about the logical foundations of programming languages, as it provides helpful guideposts when dealing with all sorts of computational problems. But rules-based models don't seem to correspond well with language, biology, or natural processes. And as we get better at probabilistic reasoning via discrete computers, it's infiltrating everything that used to be rules-based in computer science: not just computational linguistics, but coding theory (highly discrete), prosaic but essential tasks such as email filtering, and absolutely every single bit of modeling of the natural world, from particle physics to proteins to computer vision to caribou herds. To say that language is some sort of special case that does not require statistical inference is to ignore the past 30 years of scientific discovery. To say that we can learn nothing from statistical models is, at best, a very difficult statement to parse.
      posted by Llama-Lime at 11:45 AM on November 20, 2012 [5 favorites]
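
      To make "querying the model" concrete: a fitted statistical model's parameters can be read off and inspected after training. A minimal sketch, assuming scikit-learn is installed; the three features and the toy labels are invented stand-ins for real grammaticality cues.

          from sklearn.linear_model import LogisticRegression

          # Invented features per sentence: [subject-verb agreement holds?,
          # filler-gap crosses an island?, lexical overlap with the corpus].
          X = [[1, 0, 0.9], [1, 0, 0.4], [0, 0, 0.8],
               [1, 1, 0.9], [0, 1, 0.7], [1, 0, 0.6]]
          y = [1, 1, 0, 0, 0, 1]  # 1 = judged grammatical (toy labels)

          model = LogisticRegression().fit(X, y)

          # The fit is not a black box: the coefficients say which features drive
          # the judgment, and in which direction.
          for name, coef in zip(["agreement", "island_violation", "overlap"],
                                model.coef_[0]):
              print(f"{name}: {coef:+.2f}")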


      ...people tried to use those hard-and-fast rules on real language and they always had XX% accuracy, not 100% accuracy.

      But that's not what those rules did. There was absolutely no talk of accuracy, except in the purely binary sense. Those rules were hypotheses for what the structure of language was. And when we found grammatical forms that contradicted those rules, we discarded or modified them and came up with new rules that explained the new data. And we've been getting better. But we don't just look at "grammar" as being "so and so accurate across all language". We look at discrete rules as explaining each of the forms of grammatical language we've observed so far. And once new observations (that is, new data) come in, we revise our rules. And we've been pretty good at modeling a lot of stuff in language using the rules approach, and we've been getting closer to a complete model (although we're still very, very far away). (I for one am a fan of Jackendoff's approach, which is a generative model very similar to LFG, although not worked out in detail yet.)

      As for infinite regress, all of the examples you mentioned are things that are determined arbitrarily. Which parameter you pick, what word sound you produce, etc., are all things that can be invented completely by a child during language acquisition with little consequence to language production. (Obviously here I mean small individual differences won't affect the child's communicative ability, while long-term changes in a population will produce significant language shifts, etc., etc.) However, there are things that are in fact universal and highly peculiar to language that are not arbitrary at all: embedding, phrase movement, non-linearity, etc. How did those patterns get established, if there are not rule-like structures in the brain that determine them specifically? And especially if human language has had more than one recent evolutionary origin (which I don't think is terribly likely, but who knows) then there's basically no chance that every language on Earth would share such features. The problem of infinite regress is where did our statistical patterns come from way back when we were still saying "Ug throw rock" and not "Ug shall now use his mighty arm to propel, like a bird soaring above the ocean, a stone into the clouds that float in the heavens."

      Finally, we're not saying that we can learn nothing from statistical models. Just that we can't learn the sorts of things linguists are interested in learning — namely, what rules the brain is using to generate the highly constrained and structured language that we see.
      posted by cthuljew at 1:14 AM on November 21, 2012 [2 favorites]


      My specialty is statistical models, and I'm not familiar with postgrad linguistics and haven't looked at the field in a decade, so I'll cede to your judgements in many of these areas. But I want to stress that (1) a particular statistical model embodies assumptions about structure, and can embody some types of rules easily and other types only with greater computational difficulty, and (2) with appropriate model structure and sufficient data, a model can learn additional structural features of language, which can be read off by seeing that certain sets of parameters are set a certain way. Statistical models can always mimic a rules-based system by setting some of the probabilities to 1 and 0. And just as rules-based grammar is always improving by finding new types of rules, we're continually improving statistical models' ability to learn. Further, statistical models provide bounds on learning rates: how much data needs to be observed to learn particular things in the model. This could be a very useful tool for discovering which language structures must be innate and which can be learned.

      The other thing I want to question is the assumption that the brain is using rules to generate sentences which we utter. Though it's possible that we may some day find some brain structures that mimic these increasingly complex rules, it's far from certain, and given how strict rules do not correspond well to the biology we do know, a statistically driven system seems far more likely to me.

      My undergrad advisor, before he brought me with him over to computational biology, had done some great work on learning word segmentation from continuous streams of speech, an essential task for infants. When I first started discussing the model with him, I saw the random variables, distributions, and their relationships of the model and said "Aha! So this is where you put in all your pre-existing knowledge about words, about what types of things you can learn." And he replied to me "Oh no, not at all, we want to put as little knowledge as possible in, a completely blank slate if we can." And we were both right of course; he was coming off the Bayesian vs. Frequentist debates where Bayesians were always defending themselves against biased analyses and always trying to come up with uninformative priors. But as someone without that baggage, I could see the utility of shoving as much knowledge as possible into your model, so that you can learn the most new things, and only backing off if you wanted to test some of your underlying knowledge (an affliction that I have to this day). So much of this debate may be as much about the emphasis of terms as it is about particular research programs.
      posted by Llama-Lime at 10:49 AM on November 21, 2012 [2 favorites]


      It seems like the basic fight is that Chomsky wants to know the source code of the human mind, while the "statistical" people just want to be able to write a new program that is statistically indistinguishable from a person.
      posted by delmoi at 1:48 PM on November 22, 2012 [1 favorite]


      When Deep Learning is covered by the New York Times, we've definitely crossed a Rubicon of sorts, as the popular media almost never covers these things. Deep Learning is a revival of a very old idea, neural networks, now empowered with much better learning algorithms, more computational power for training, and larger data sets to learn from.

      This is exactly the type of learning that I believe Chomsky is being critical of, as this type of statistical model is often inscrutable. We know that internally the model has some parameter groups that recognize 'square' or 'circle' in an image, and we know that the model learned those concepts entirely on its own, but it's difficult to point at them and say, there, that's what recognizes a circle.

      Still, this type of model is far more inspired by the brain than, say, transformational grammars are. It seems incredibly odd to call a grammar a search for the source code of the brain when there's no basis for correspondences to anything in the brain. Chomskian syntax is a search for an understanding of the structures in language; the connection to the brain is at most implied. Chomskian syntax creates higher-level logical formalisms about language that take us 15-20 years to understand, and there's no reason to believe that these formalisms are representative of brain biology.

      Meanwhile, Deep Learning structures are directly inspired by a very naive understanding of the functioning of neurons, and are focused on performing the tasks that even simple brains are good at but which computers and human logical constructions are very bad at. Deep Learning is directly breaking apart this dichotomy between our intellectual understanding of things and the intuitive understanding that happens in brains. If you want to talk about "source code to the brain," then you had better be looking at systems that have some sort of correspondence to the brain.

      Geoffrey Hinton is working on methods for learning at all, less on saying "this is how the human visual cortex works." However, there are others working on precisely these aspects of figuring out brain function. And when they get to language, perhaps they will work out ways to point at parts of a deep restricted Boltzmann machine (or whatever model it happens to be) and say "there, that's what recognizes a circle, and here's how it does it." But it's premature to say that we will never be able to point at that, and furthermore, that this is the wrong way to go about eventually learning that fact.
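
      Since a restricted Boltzmann machine may be unfamiliar, here's a minimal single-layer sketch trained with one step of contrastive divergence (CD-1). The data, layer sizes, and learning rate are invented toy values, not anyone's research setup.

          import numpy as np

          rng = np.random.default_rng(0)
          n_visible, n_hidden, lr = 6, 3, 0.1
          W = rng.normal(0, 0.01, (n_visible, n_hidden))
          b_v = np.zeros(n_visible)   # visible biases
          b_h = np.zeros(n_hidden)    # hidden biases

          def sigmoid(x):
              return 1.0 / (1.0 + np.exp(-x))

          # Toy binary data: two repeated patterns for the RBM to model
          data = np.array([[1, 1, 1, 0, 0, 0],
                           [0, 0, 0, 1, 1, 1]] * 50)

          for epoch in range(200):
              for v0 in data:
                  # positive phase: sample hidden units given the data
                  p_h0 = sigmoid(v0 @ W + b_h)
                  h0 = (rng.random(n_hidden) < p_h0).astype(float)
                  # negative phase: one reconstruction step
                  p_v1 = sigmoid(h0 @ W.T + b_v)
                  v1 = (rng.random(n_visible) < p_v1).astype(float)
                  p_h1 = sigmoid(v1 @ W + b_h)
                  # CD-1 update: data statistics minus reconstruction statistics
                  W += lr * (np.outer(v0, p_h0) - np.outer(v1, p_h1))
                  b_v += lr * (v0 - v1)
                  b_h += lr * (p_h0 - p_h1)

          # After training, the two patterns typically get distinct hidden
          # activations, but no single weight "is" the pattern detector.
          print(sigmoid(data[0] @ W + b_h), sigmoid(data[1] @ W + b_h))

      The final comment in the code is the inscrutability discussed above: the two patterns end up with distinct hidden representations, but that knowledge is smeared across all of the weights rather than sitting in any one of them.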
      posted by Llama-Lime at 12:20 PM on November 24, 2012 [2 favorites]




      This thread has been archived and is closed to new comments



