Comments on: Social Neuroscience
http://www.metafilter.com/79560/Social-Neuroscience/
Comments on MetaFilter post Social Neuroscience
Fri, 27 Feb 2009 21:15:38 -0800

Social Neuroscience
http://www.metafilter.com/79560/Social-Neuroscience
<a href="http://www.seedmagazine.com/news/2009/02/that_voodoo_that_scientists_do.php">That Voodoo That Scientists Do.</a> "When findings are debated online, as with a yet-to-be-released <a href="http://www.pashler.com/Articles/Vul_etal_2008inpress.pdf">paper</a> (PDF) that <a href="http://www.edvul.com/voodoocorr.php">calls out</a> the field of <a href="http://en.wikipedia.org/wiki/Social_neuroscience">social neuroscience</a>, who wins?"
posted by homunculus - Fri, 27 Feb 2009 20:31:55 -0800
Tags: Blogs, Internet, Neuroscience, Science, SocialNeuroscience

By: hal_c_on (#2469648) - Fri, 27 Feb 2009 21:15:38 -0800
The person who gets there first. Actually... with academics, it's the one who publishes first. Coincidentally... FIRST!

By: Jimbob (#2469649) - Fri, 27 Feb 2009 21:18:48 -0800
<i>He envisages a wider formal online-review process, in which scientists could respond to papers, with their comments weighted based on their own publication record.</i>

Part of me is dismayed that someone wants to turn the peer review process into one big blog debate. Part of me feels this may well be an interesting thing to try, although I fear that, as in the infamous "blogosphere", the loud voices may drown out the sensible ones. "You've got your statistics all wrong" papers often have a big impact. My favorite one lately was <a href="http://www.ncbi.nlm.nih.gov/pubmed/12142253?ordinalpos=1&itool=EntrezSystem2.PEntrez.Pubmed.Pubmed_ResultsPanel.Pubmed_DiscoveryPanel.Pubmed_Discovery_RA&linkpos=3&log$=relatedarticles&logdbfrom=pubmed">Dominici 2002</a>. In general, eventually the dust settles and everyone starts doing things better.

By: logicpunk (#2469663) - Fri, 27 Feb 2009 21:41:11 -0800
You know what? Fuck Vul and the rest of the authors on that paper. Their main argument is that there is a pervasive "non-independence" of analysis in fMRI studies. The problem they identify is that there are two inferential steps in fMRI analyses: the first step identifies voxels (volume elements) in the brain based on the correlation of activity in those voxels with behavioral measures, and the second step tests whether the activity of the voxels correlates with those behavioral measures. If that seems redundant to you, it's because it is. Vul et al. argue that the non-independence stems from the second step relying on the results of the first step. Wager et al. argue (<a href="http://www.scribd.com/doc/11471802/Correlations-in-social-neuroscience-arent-voodoo-A-reply-to-Vul-et-al-by-Lieberman-Berkman-Wager">rebuttal</a>) that this is not a "non-independent" analysis but a single inferential step - to wit, the same analysis that identifies the voxels that are correlated with behavioral measures is used to report the strength of that correlation.

More to the point, however, is that Vul is committing an unforgivable sin: "we wanted to make the paper entertaining and to increase its readership" (from the first link). The fact that it's getting play on "the blogs" isn't a guarantee that it's right. The language was deliberately chosen to get a rise out of people, and it worked. Vul has turned an argument about methodology in science into a public relations clusterfuck. Peer review has a way of eventually getting it right - bad methods go away, eventually. This spat about statistics should have been hashed out in the scientific literature, not turned loose with words like "voodoo" that get the emotions revving.
By: agent (#2469665) - Fri, 27 Feb 2009 21:45:26 -0800
What's most interesting about this is the point that this broader audience has the power to shape the agenda for scientific funding based on "science-lite" versions of the work in question. In particular, the author of the work in question may inadvertently overshoot his mark: his goal was to call into question the methodology employed in social neurobiology, and I would expect that the intent was to convince his peers that the methods should be revised. However, the broader audience consuming the stripped-down accounts of the paper has the potential to actually stunt research into social neurobiology as a field: right now, my impression of social neurobiology is (almost certainly uncharitably) that it is based on shoddy statistics and that none of its claims can be trusted. If that becomes the popular view of the field it could be catastrophic, and this episode could push potential results from this line of inquiry decades into the future.

By: StickyCarpet (#2469675) - Fri, 27 Feb 2009 22:03:03 -0800
<em>Noise may make the correlation higher sometimes, and lower at other times.</em>

Yeah, I've long imagined a service that would "clean up" surveillance videos, to enable victims to better identify perpetrators. The cleanup would involve filtering with a sharpening filter, like in Photoshop, where it lets you type in the coefficients for a rectangular kernel. But the coefficients, only a 9 x 9 grid of content-neutral numbers, would be specially concocted to reinforce the likeness of a predetermined suspect. But then I decided to use my powers for good instead.
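The filter StickyCarpet describes is ordinary kernel convolution. A minimal sketch of the benign version (assuming NumPy/SciPy, and substituting a standard 3x3 sharpening kernel for his hypothetical rigged 9 x 9 grid):

<pre>
import numpy as np
from scipy.ndimage import convolve

# A standard sharpening kernel: the coefficients sum to 1, so overall
# brightness is preserved while local contrast (edges) is exaggerated.
sharpen = np.array([[ 0., -1.,  0.],
                    [-1.,  5., -1.],
                    [ 0., -1.,  0.]])

frame = np.random.rand(64, 64)                        # stand-in for a video frame
sharpened = convolve(frame, sharpen, mode="reflect")  # apply the kernel
</pre>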
By: afu (#2469685) - Fri, 27 Feb 2009 23:01:57 -0800
<em>More to the point, however, is that Vul is committing an unforgivable sin: "we wanted to make the paper entertaining and to increase its readership" (from the first link).</em>

Yes indeed, God forbid a scientific paper is entertaining and widely read. The last thing we need is a better-informed public. It's telling that the critics of the paper are focusing on the word "voodoo"; it's almost as if they are trying to distract attention from the legitimate criticism Vul brought up.

By: Jimbob (#2469691) - Fri, 27 Feb 2009 23:26:22 -0800
Yeah, there is a recent trend towards giving papers snappy, witty, eye-catching titles. I don't like it much myself, but this is hardly the first paper ever to do it.

By: Krrrlson (#2469693) - Fri, 27 Feb 2009 23:39:09 -0800
That paper sounds arrogant as all fuck, quite frankly. If you're going to call large amounts of research into question and "urge" authors to correct the scientific record, it seems like bad form to be an arrogant douchebag about it.

By: spacediver (#2469705) - Sat, 28 Feb 2009 00:29:37 -0800
I didn't feel any arrogance when reading it, but perhaps that's because I wasn't paying attention to the style as much. I did find the noise simulation quite remarkable, though. The fact that you can get ridiculously high correlations out of pure noise was instructive.

By: you (#2469737) - Sat, 28 Feb 2009 03:24:18 -0800
I'm divided about this. They seem to have a point (the graph on page 14 looks pretty damning), but the presentation is a little bit biased (I'd prefer it if they didn't include each paper more than once, and using a stacked graph this way is misleading, especially when screen reading). I'm also deducting some points because it <i>appears</i> <small>to</small> <big>be</big> <u>typeset</u> with <small>Microsoft</small> <i>Word</i>.

By: Postroad (#2469741) - Sat, 28 Feb 2009 06:01:12 -0800
As usual, I know nothing about any of this stuff. But I think it is a bit funny that they debate publishing a refutation online before the paper gets published, and Metafelons debate the uses of pre-publication online whilst a-swim in the Big Blue.

By: kldickson (#2469742) - Sat, 28 Feb 2009 06:05:45 -0800
Krrrlson, it might be useful to note that Ed is backed up by a professor here. Kanwisher is fairly prominent in neuroscience. The only thing that should be of concern here is whether the research being called into question is in fact wrong, not the perceived arrogance of the graduate-student main author.

By: localroger (#2469750) - Sat, 28 Feb 2009 06:28:24 -0800
Anyone who has ever hung out at the casino knows you can get ridiculously high correlations out of pure noise. Just ask the folks at the roulette and baccarat tables if they see any patterns in the little scorecards they keep.
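spacediver's and localroger's point is easy to reproduce. A minimal sketch (assuming NumPy; the sample sizes are illustrative, not Vul et al.'s actual simulation): correlate one pure-noise "behavioral" score with thousands of pure-noise "voxels" and report the best one.

<pre>
import numpy as np

rng = np.random.default_rng(0)
n_subjects, n_voxels = 10, 10_000
behavior = rng.standard_normal(n_subjects)
voxels = rng.standard_normal((n_voxels, n_subjects))

# Pearson correlation of every noise voxel with the noise behavioral score.
bz = (behavior - behavior.mean()) / behavior.std()
vz = (voxels - voxels.mean(axis=1, keepdims=True)) / voxels.std(axis=1, keepdims=True)
r = vz @ bz / n_subjects

print(f"max |r| across {n_voxels} noise voxels: {abs(r).max():.2f}")
# Typically ~0.9: pure noise, yet a "striking" correlation survives selection.
</pre>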
By: logicpunk (#2469763) - Sat, 28 Feb 2009 06:55:30 -0800
<em>Yes indeed, God forbid a scientific paper is entertaining and widely read. The last thing we need is a better informed public.</em>

Oh, hell. Now the public wants to know what we're up to in those shiny labs. It. Never. Ends. Seriously, no one would say that the public shouldn't take an interest. The problem is that Vul's paper is being delivered to the public as if it were a scientific law when it's anything but. There are whole rounds of debate to go before the neuroscientific community figures out what, if any, changes need to be made to fMRI methods. By starting off with the goal of making his paper "entertaining", Vul has left off being a scientist and started being a performer, and it's not good for science. Now the public has grabbed onto the idea that a perfectly okay technique, with maybe a couple of kinks that need to be worked out, has somehow undermined the entire field.

<em>It's telling that the critics of the paper are focusing on the word "voodoo," it's almost as if they are trying to distract attention from the legitimate criticism Vul brought up.</em>

It's almost like Vul is an incredible asshat with no legitimate criticisms to bring up, so he needs to use the word "voodoo" to get any attention at all.

By: kliuless (#2469770) - Sat, 28 Feb 2009 07:14:07 -0800
<a href="http://arstechnica.com/web/news/2009/02/neuroscientist-internet-video-games-rewiring-kids-brains.ars">Neuroscientist: Internet, video games rewiring kids' brains</a> - "Fast-paced bytes of information gathered from today's social networks and video games are responsible for rewiring kids' brains, says UK neuroscientist Susan Greenfield. <i>Though she doesn't cite any research</i>, she connects these new technological habits to the behaviors of infants and autistic children." [em added if you didn't catch an oxymoron in there somewhere]

By: cogneuro (#2469774) - Sat, 28 Feb 2009 07:17:34 -0800
Whatever the validity of their arguments, the authors' use of "voodoo" and other snark was wrong. Such terms inappropriately cast aspersions on the intentions and motivations of the authors being criticized. The critique is supposed to be based on the science, not the critics' feelings about the work. They slimed the individuals, and neuroimaging research in general, by calling it voodoo. Withdrawing the comment (removing the term from the title at the editor's request) is like a witness slipping something inadmissible into testimony and then having it stricken from the record once the damage is done. Journal editors routinely ask authors to take loaded language out of their papers, and it's a good thing they do. This is a new journal with a new editor, and he probably regrets not having been tougher with them. The evidence and arguments should stand on their own. If you want a readable, entertaining version of the controversy, try blogging about it or write popular science. While entertaining us, try presenting the arguments correctly, too.

By: voltairemodern (#2469805) - Sat, 28 Feb 2009 08:31:31 -0800
It seems like a positive development to me because it reveals some of the processual nature of science -- what events like this help nonscientists understand is that just because a paper gets published doesn't mean the results it reports can't be questioned. But the fact that there is serious debate, rather than the standard frivolous online banter, also suggests a further fact: science is hard work with important consequences. And this seems like an important point to bring to light, especially given the US's recent (i.e., within the past few decades) failure to bankroll science. Keeping science interesting to the public might not be the best way to obtain scientific results, but it <i>is</i> good advertising for the whole scientific endeavor.
By: bonehead (#2469824) - Sat, 28 Feb 2009 09:12:35 -0800
The problem with Vul's paper is that, for all its supposed "seriousness", it is, at its heart, a troll. If you prefer, you could call it iconoclastic, but the end result is the same. The difficulty is deciding if it's worth spending time on or not. If there is a point to their argument---which I'm not capable of evaluating; this isn't my field at all---the authors certainly don't help their case with sarcasm or snark. All that serves is to bring out the science writers, who love a good fight. One can only conclude that Vul and his co-authors did this because they are, yes, trolls.

<em>Yes indeed, God forbid a scientific paper is entertaining and widely read. The last thing we need is a better informed public.</em>

This does not appear to be Vul's goal at all. He and his coauthors seemingly wanted to stir the pot just to get attention. The substance of the paper could have been presented much differently. This is a net negative if his goal was the advancement of the field.

By: Xezlec (#2469851) - Sat, 28 Feb 2009 09:52:12 -0800
<i>Yes indeed, God forbid a scientific paper is entertaining and widely read. The last thing we need is a better informed public.</i>

Actually, I have to agree that a scientific paper should <i>not</i> be entertaining or widely read. The result would not be a better-informed public, but it might be a more confused, misled, or even cynical public. Scientific papers are written for an audience of experts and are intended to show a boring, fact-based case for a boring, fact-based claim. Trying to spin your findings to be more interesting rather than more accurate (as may be happening here) defeats the purpose of science, which is not, in fact, entertainment. If unqualified people read papers, they usually won't pick up on all the nuances, though they may not be aware of this. They may think the author's claim is stronger than it really is, because they are not well-versed enough to spot the kind of subtle flaws that typically derail an argument. They may also misunderstand the author's point. It's not a scientific paper, but I know of a guy who read Lee Smolin's <i>The Trouble With Physics</i> and actually thought his message was that you should use pure intuition and "natural philosophy" to understand the world instead of rigor and experiment! All this isn't to say that people shouldn't be informed about science or educated. But that isn't the purpose of scientific papers. For public education, we have an education system, we have TV shows, we have PBS, we have some scientists' web sites, and we have things like Science News magazine. These are all specifically <i>designed</i> to tell non-experts about the latest developments in a field.

By: Humanzee (#2469858) - Sat, 28 Feb 2009 10:03:47 -0800
They probably could have done without the word "voodoo", but otherwise I don't see a problem with the paper itself. They've identified a problem that has been understood for a LONG time: <a href="http://en.wikipedia.org/wiki/Multiple_comparisons">multiple comparisons</a>. Basically, the more things you observe, the more likely it is that two of them will seem to correlate with each other (and therefore, the more demanding your standards need to be to say that a correlation is significant). Sometimes it's easy to forget this, or not to recognize that it applies to a certain situation. It's a problem that many scientists know fMRI is especially vulnerable to, but it's a cross-discipline problem. Any time you see a study that indicates a weird connection (eating cheeseburgers increases your likelihood of death in a terrorist attack), you should suspect scientists of not correcting for multiple comparisons. What the authors have done that is interesting is come up with a rather simple way of saying, "if you're honest and do good statistics, you probably shouldn't get correlations above X", and they can show that quite a few reported correlations are above that. I haven't had time to read the whole paper, but it should be noted that such large correlations can validly occur under certain circumstances (perhaps one quantity being measured is insensitive to the typical fluctuations that underlie neural processes), and of course they can occur occasionally by coincidence. For me, this sort of statistical problem is extremely frustrating. Most of the math necessary to do science well is actually pretty basic, but teaching is spotty at best. I've spent YEARS taking one form of calculus after another, yet I've had to pick up the really useful stuff on the job, often by making a mistake and getting called out on it, or by reinventing the wheel. Since scientists are human, they often become overly defensive and refuse to admit error when they get caught doing bad statistics. So if you're a crusader for good statistics, it can feel like pissing into the wind.
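Humanzee's point in two lines (a sketch; the test count is illustrative): at a .05 threshold, a thousand independent null tests all but guarantee some "significant" correlations, and a Bonferroni-style correction restores control.

<pre>
m, alpha = 1000, 0.05
print(1 - (1 - alpha) ** m)      # ~1.000: at least one false positive is near-certain
print(1 - (1 - alpha / m) ** m)  # ~0.049: Bonferroni holds the family-wise rate near alpha
</pre>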
By: spacediver (#2469921) - Sat, 28 Feb 2009 11:46:59 -0800
For all those commenting on how snarky the paper seemed: have you actually read it? Or are you just basing the snark allegation on the title?

By: logicpunk (#2469923) - Sat, 28 Feb 2009 11:49:45 -0800
<em>They probably could have done without the word "voodoo", but otherwise I don't see a problem with the paper itself. They've identified a problem that has been understood for a LONG time: multiple comparisons.</em>

That is certainly not the problem they're attacking:

<blockquote>This multiple comparisons correction problem is well known and has received much attention. The problem we describe arises when authors then report secondary statistics. - Vul et al., 2008</blockquote>

There are methods to correct for multiple comparisons, and there are neuroimagers who ignore those methods. Those are Bad People and we hate them. But those aren't the ones Vul is attacking; rather, he has constructed a straw man to knock down. Imagine you have a class of 100 students, and you want to study whether there's a correlation between the number of hours spent studying for a psychology class and final grades in that class. Now, you'll have some students who study their asses off and get good grades, and some students who don't study at all and fail. These are the ones that show a large correlation between studying and grades. BUT, you'll also have students who get good grades without studying at all, and others who will fail regardless of the amount of time spent studying. Vul is claiming that neuroscientists are essentially just looking at the students who show a strong correlation and leaving the others out of the analysis. If that were actually what neuroscientists did, it would be pretty damning.

Here's the non-straw-man version of the above: you have 1000 students in 10 groups. Each group is enrolled in a different class, but you, the experimenter, don't know which class the groups are assigned to, and you don't know exactly how many students are in each group. You want to study the effect of hours spent studying on final grades, but only in Class A (and not classes B-J). Now, each student is going to spend a certain number of hours studying, but only some of them will be studying for Class A. At the end of the semester, you look at final grades in Class A, and lo and behold, a group of 100 students shows a strong correlation, while all the others show a weak or nonexistent one. It is acceptable to ignore the correlations from the other 900 students, because that is not what you were interested in to begin with. The problem in this scenario is, as Vul gets right, that you might only catch 90 of the 100 students who properly belong in Class A, because the last 10 are less strongly correlated. This will have the effect of making the correlation look stronger than it actually is, but it doesn't mean the correlation doesn't exist, or that the researcher is taking additional inferential steps, as Vul claims. Vul has massively overstated his case, and he's done it in as dickish a way as possible. There are reasonable steps one could take to prevent correlation inflation, and the academic community would have been better served if Vul had spent time working on how to do that rather than being a big jackass about it.
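The inflation logicpunk concedes in that last paragraph is easy to simulate. A sketch (assuming NumPy; sizes invented): every "voxel" below genuinely correlates with behavior at r = 0.5, voxels are selected by thresholding the sample correlation, and the same voxels are then re-measured on an independent replication.

<pre>
import numpy as np

rng = np.random.default_rng(1)
n, n_voxels, true_r = 20, 500, 0.5

def sample_correlations(rng):
    # Each voxel = true signal (rho = 0.5 with behavior) + independent noise.
    behavior = rng.standard_normal(n)
    noise = rng.standard_normal((n_voxels, n))
    voxels = true_r * behavior + np.sqrt(1 - true_r**2) * noise
    bz = (behavior - behavior.mean()) / behavior.std()
    vz = (voxels - voxels.mean(1, keepdims=True)) / voxels.std(1, keepdims=True)
    return vz @ bz / n

r_first, r_replication = sample_correlations(rng), sample_correlations(rng)
picked = r_first > 0.65                 # selection step, on the first sample
print(r_first[picked].mean())           # ~0.75: overstated by the selection
print(r_replication[picked].mean())     # ~0.50: the real effect, intact
</pre>

The selected voxels' correlation is overstated on the very data that picked them, but the underlying effect is real and replicates at its true size - roughly both camps' positions at once.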
By: cogneuro (#2469926) - Sat, 28 Feb 2009 11:54:54 -0800
Spacediver: Read it. Understood it. The snark extends beyond the egregious title.

By: Humanzee (#2469945) - Sat, 28 Feb 2009 12:34:17 -0800
<i>logicpunk: That is certainly not the problem they're attacking:</i>

Oh wow, you're right. That's what I get for skimming. After reading more carefully: what you said.

By: spacediver (#2469948) - Sat, 28 Feb 2009 12:38:59 -0800
Logicpunk, from what I understood, the targets of Vul's criticism were <i>not</i> looking at the group in Class A only. That would be an acceptable way to choose which voxels are of interest and then analyze them (for example, in face perception research, the region of interest is often defined in advance by face-vs-house localizer scans, and the regions showing preferential activation for faces are then tested further and analyzed). But Vul's targets aren't deciding in advance that they'd like to analyze a certain brain area, or a certain class (A, in your example). Rather, they are choosing which areas to analyze based <i>solely</i> on correlations. So it's as if they just looked at all the classes as one big unit, searched for correlations, and chose the highest-correlating students for analysis. This, from what I understand, is what the non-independence error is all about. The survey he sent out to the authors explicitly asked exactly <i>how</i> they chose their voxels for analysis. He only targeted those who responded that they chose the voxels in the straw-manned way described in your post. Now, it may be the case that the survey wasn't fine-grained enough, or that there were misunderstandings, but I've not looked into the issue in enough depth to say whether that's the case.
By: logicpunk (#2469951) - Sat, 28 Feb 2009 12:40:34 -0800
Humanzee: Completely agree that multiple comparisons are still a problem, by the way... I get the feeling that some fMRI studies do everything they can to skirt around the issue; otherwise they wouldn't have, you know, results.

By: en forme de poire (#2469984) - Sat, 28 Feb 2009 13:30:10 -0800
Actually, I think the issue that Vul et al. identify has less to do with multiple hypothesis correction (which, I get the sense, is fortunately standard practice) and more with cross-validation. (On preview, other people have said this already.) This isn't my field, so I apologize in advance if I'm butchering it, but this is the understanding I came away with after reading the article. Suppose you perform an fMRI study and compute correlation coefficients between brain voxels and a behavior of interest. You then correct these correlation coefficients for multiple hypothesis testing, and report the mean or maximum correlation of the significant voxels: let's say, for the sake of example, you find a maximum correlation of r = 0.9. In effect, what you have done is build a statistical model that relates brain activity to behavior, where based on some function of voxel activity you can predict whether some behavior is being performed. The implication of reporting this number is that, if you were to analyze the brains of ten more subjects, you should find a correlation of 0.9 (or close to it) between the previously identified voxel and the behavior in the new subjects as well.

However, if you actually did analyze ten new subjects, performing the same analysis, a likely outcome would be that a somewhat different set of voxels would then appear to be most significantly correlated. You would probably find some new voxels to be significant, and lose some of the ones you identified previously. More importantly, the levels of correlation you observe between the voxels and the behavior would differ, due to experimental noise. What this all means is that the relationship you would learn concerning brain voxels and behavior from the two independent studies could be different. As a consequence, to give an extreme example, if your model from the first study was "look at voxel #131" (essentially the result of picking the maximum correlation), you could potentially get a drastically lower correlation value applying this to the second data set, even if #131 is still significant. This is problematic. If we want to claim that we have identified an association between a brain region and an activity, the nature and strength of this association shouldn't change that much between two studies with the same design.

Say that you instead reported the average correlation of several voxels (e.g., r = 0.8) with significant p-values. However, a p-value only tells you the likelihood of observing as good a result by random chance. It does not say anything about the probability that, if you repeated the experiment, you would find a correlation that strong. In other words, it doesn't explicitly tell you anything about the predictive power, even after the appropriate multiple-hypothesis correction. In fact, in some ways, the more stringent you are with your selection, the worse the problem is: you're betting that a more and more specific group of voxels will behave consistently, while making the mean correlation to the behavior that you report higher and higher. This seems like a case of <a href="http://en.wikipedia.org/wiki/Overfitting_(machine_learning)">overfitting</a>. In other words, the predictive power of your association is the quantity that is the most interesting. However, the correlation values for the voxels that you have already selected are not necessarily linked to prediction accuracy, and can even have an inverse relationship. The best test of predictive power would be, obviously, to repeat the experiment, but that's not always feasible. A reasonable alternative would be to use a technique called <a href="http://www.cs.cmu.edu/~schneide/tut5/node42.html">cross-validation</a>. Essentially, you divide the data randomly into (for example) two halves. You learn a relationship from the first half of the data, and when it comes time to assess the reliability of this relationship, you only evaluate on data you haven't already used as part of the learning procedure. This affords some protection against overfitting and gives you a much better idea of how reliable your association is.

Anyway, sorry to be so long-winded - this is partly my kicking the tires on my own thought processes by writing them out - but it certainly seems to me that the authors have an excellent point. Cross-validation and related techniques are widely used in machine learning and statistics for precisely the reasons that they identify (i.e., in general, the correlation between a fitted model and a behavior of interest tends to drop when you look at new data).
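A sketch of the split-half scheme en forme de poire describes (the function name and threshold are invented; voxels is an (n_voxels, n_subjects) array):

<pre>
import numpy as np

def split_half_r(voxels, behavior, threshold=0.5, rng=None):
    # Select voxels on one half of the subjects, then report their mean
    # correlation on the held-out half only.
    rng = rng or np.random.default_rng()
    idx = rng.permutation(behavior.size)
    train, test = idx[: idx.size // 2], idx[idx.size // 2 :]

    def corr(v, b):
        bz = (b - b.mean()) / b.std()
        vz = (v - v.mean(1, keepdims=True)) / v.std(1, keepdims=True)
        return vz @ bz / b.size

    picked = np.abs(corr(voxels[:, train], behavior[train])) > threshold
    if not picked.any():
        return np.nan                   # nothing survived the selection step
    return corr(voxels[picked][:, test], behavior[test]).mean()
</pre>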
By: empath (#2470016) - Sat, 28 Feb 2009 14:08:26 -0800
So, uh, what's the actual point of social neurobiology? What are we trying to get out of it? Can someone give some practical examples of how it's being used?

By: logicpunk (#2470054) - Sat, 28 Feb 2009 15:08:01 -0800
en forme de poire: you might be interested in the reply by <a href="http://www.scn.ucla.edu/pdf/LiebermanBerkmanWager(invitedreply).pdf">Lieberman et al</a> (pdf), which addresses some of the issues you mention. As far as cross-validation goes, yeah, it would be a good thing to do in general. However, MRI time is expensive, and most researchers don't have the funding to collect enough data that they can chop it in half.

By: Mental Wimp (#2470156) - Sat, 28 Feb 2009 16:49:25 -0800
<em>Wager et al. argue (rebuttal) that this is not a "non-independent" analysis, but a single inferential step - to wit, the same analysis that identifies the voxels that are correlated with behavioral measures is used to report the strength of that correlation</em>

I only had to read the abstract to realize that Vul et al. are correct: these analyses are bogus. It's not "non-independence" that is the problem, but culling the highest correlations out of a bunch of them and reporting them as "important". The problem is multiple inference. It doesn't just affect p-values; it affects any extreme values. If I randomly generate 200 variables and look at their correlations with 200 other independent, randomly generated variables, I guarantee you will get some that are very high. And that's the problem with these optimistic analyses: you can't tell whether the high correlations are real or just chance.
By: Mental Wimp (#2470169) - Sat, 28 Feb 2009 16:58:21 -0800
<em>Wager et al. argue (rebuttal) that this is not a "non-independent" analysis, but a single inferential step - to wit, the same analysis that identifies the voxels that are correlated with behavioral measures is used to report the strength of that correlation</em>

It's amazing how slowly statistical knowledge filters down to practice among non-statisticians. Halving datasets for cross-validation is horse-and-buggy practice. <a href="http://en.wikipedia.org/wiki/Cross-validation">Leave-one-out methods</a> (or, more generally, K-fold cross-validation) make use of the entire dataset for both training and validation, and can be wrapped with bootstrapping to get robust measures of variability from the same process. Essentially, you leave one observation out of your dataset, fit your model to the remainder, and see how well the model predicts the omitted observation. Replace the observation and repeat until all observations have been left out once. Now combine all the deviations of your predictions from your observations, and that's your measure of model validity. (Somewhat oversimplified, but that's the basic idea.) To get variance, apply this method to bootstrap samples drawn from the original data, do that a thousand times, and <em>voila</em>, a robust estimate of the variance of the validity measure.

By: Mental Wimp (#2470170) - Sat, 28 Feb 2009 16:59:09 -0800
Oops, the quote that was to head the previous comment was this:

<blockquote>However, MRI time is expensive, and most researchers don't have the funding to collect enough data that they can chop it in half.</blockquote>
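Mental Wimp's recipe in miniature (a sketch for a one-predictor linear fit; the squared-error choice and the names are assumptions):

<pre>
import numpy as np

def loo_error(x, y):
    # Leave each observation out, fit to the rest, predict the held-out point;
    # the mean squared deviation is the validity measure.
    errs = []
    for i in range(x.size):
        mask = np.arange(x.size) != i
        slope, intercept = np.polyfit(x[mask], y[mask], 1)
        errs.append(y[i] - (slope * x[i] + intercept))
    return np.mean(np.square(errs))

def bootstrap_variance(x, y, n_boot=1000, rng=None):
    # Wrap the whole leave-one-out procedure in a bootstrap to estimate the
    # variance of the validity measure itself.
    rng = rng or np.random.default_rng()
    stats = []
    for _ in range(n_boot):
        idx = rng.integers(0, x.size, x.size)   # resample with replacement
        stats.append(loo_error(x[idx], y[idx]))
    return np.var(stats)
</pre>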
"Seriously defective" and "numbers that should not be believed" have both been addressed by Liebermen. The inflation that has Vul and colleagues all in a tizzy is likely real but also likely rather small. Especially in studies using normal sample sizes, something Vul and colleagues disingenuously underestimated in their simulations (10 subjects? come on that hasn't been the norm since the mid 90s). Also, just to reply to one of the previous comments here. Appeals to authority notwithstanding, yes Kanwisher is a big wig, but so are the authors on the rebuttal and even more so are the people listed as having provided comments on the rebuttal. Two camps are developing around this, and they're filled with top names and pro statisticians, so let's just not pick a side just because of who's on it. The real damage here is that people have gone away with the take-home message that social neuroscience methods are flawed and not to be trusted. The analysis method under fire represents only a minor portion of the analysis strategies (and papers) in social neuroscience. Moreover, it's a method of analyzing brain data, not "social" brain data, but any and is used in all fields of neuroimaging, be it clinical or cognitive. It's just that going after clinical or cognitive neuroscience isn't as likely to get you noticed. comment:www.metafilter.com,2009:site.79560-2470269 Sat, 28 Feb 2009 18:17:34 -0800 Smegoid By: logicpunk http://www.metafilter.com/79560/Social-Neuroscience#2470675 <em>culling the highest correlations out of a bunch of them and reporting them as "important". The problem is multiple inference.</em> Again, this is not the problem. It is standard in any fMRI analysis to correct for multiple comparisons to avoid false positives. Even for the high-probability correlations that remain, it would be nonsensical to cull all the highest correlations from everywhere in the brain. No one does this. Generally a researcher is interested in one or two areas that have reasonably well-defined anatomical boundaries - the correlation strengths that are reported are usually going to describe the strength of the effect in that one region, not the strength for all voxels that passed everywhere. comment:www.metafilter.com,2009:site.79560-2470675 Sun, 01 Mar 2009 06:56:33 -0800 logicpunk By: afu http://www.metafilter.com/79560/Social-Neuroscience#2470713 <em>The real damage here is that people have gone away with the take-home message that social neuroscience methods are flawed and not to be trusted.</em> The thing is that a lot of people already had a pretty low opinion of social neuroscience already because of the tendancy of practitioners too pimp out their FMRIs for ridicoulus studies like "Republican brians/ Democratic Brains". This is the real reason this study is getting so much attention is that a lot of people already thought something smelt funny in social neuroscience, and they jumped on this study because it confirmed their feelings. <em>So, uh, what's the actual point of social neurobiology? What are we trying to get out of it? Can someone give some practical examples of how it's being used?</em> To be honest, I don't think there is much of a point to it besides pretty FMRI pictures and press releases. We simply do not know enough about social psychology or neuroscience to have a fruitful interdisciplinary study. FMRIs in particular are much too course of an instrument to be used to study extremely complex social interactions. 
By: forrestal (#2470746) - Sun, 01 Mar 2009 08:11:19 -0800
The error that they discuss here is actually pervasive in scientific analysis. The problem being discussed is that any discipline that uses "statistical significance" as an important factor in deciding publication will generate erroneous results due specifically to that factor. As one example: a researcher runs a study with 100 variables. After combining some of those variables, carrying out transformations, etc., it's not difficult to have several thousand statistical correlations amongst the variables and their combined/transformed indices. For the sake of example, let's say there are 2000 correlations. So, if the researcher uses, say, the conservative .01 significance level (as opposed to the more liberal .05 level), then just by chance alone 20 will come out as significant (.01 x 2000 = 20). Now the problem comes when the researcher treats those as actually significant (i.e., replicable) results. It's the same thing as a nonscientist who sees a correlation between winning at the track and wearing a lucky hat. Although in principle a scientist should have the training and discipline to see the error, as a recovering academician I know of dozens of examples where that's not the case. A prof needs to publish or perish, so they overlook the speciousness of their statistical analysis. As mentioned above, cross-validation is one solution. Another is to use an analysis that is corrected for the number of comparisons made, either analytically or via Monte Carlo methods. But the number of times I have suggested this to ex-colleagues who asked me for statistical advice is in the hundreds, whereas the number of times that advice has been followed is in the single digits.

By: logicpunk (#2470971) - Sun, 01 Mar 2009 10:57:14 -0800
forrestal: the problem you describe is not what this paper is about. I'm not saying that problem doesn't exist, but it's not the one under discussion amongst neuroscientists right now. Decent researchers in neuroscience already correct for the number of comparisons made. But, sure, even with the corrections there will be false positives. So let's put it in context: an fMRI study might scan the brain at a resolution of 64x64x33 voxels. That means for each correlation, you're testing 135,168 variables. At a significance level of .01, you'll get around 1,350 false positives without correction for multiple comparisons. If the false positives are really random, they will be distributed more or less randomly around the brain - a single voxel here, a single voxel there. The probability of those false positives forming a large contiguous block is vanishingly small, and that's what researchers are looking for: brain regions spanning hundreds or thousands of voxels that correlate with behavior. Ironically, the mathematics of the paper actually suggest that the correlation values are inflated by selecting more rigorous p-values. If you set your p-value to .00001, only one or two voxels are going to pass just by chance, but the correlation coefficient is going to be really, really high.
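logicpunk's cluster argument, simulated (grid dimensions from his example; treating face-adjacent voxels as connected is an assumption):

<pre>
import numpy as np
from scipy.ndimage import label

rng = np.random.default_rng(2)
hits = rng.random((64, 64, 33)) < 0.01      # uncorrected 1%-level false positives
clusters, n_clusters = label(hits)          # group contiguous voxels
sizes = np.bincount(clusters.ravel())[1:]   # cluster sizes (index 0 is background)

print(hits.sum())    # ~1,350 false positives across the 135,168 voxels
print(sizes.max())   # the largest chance "cluster" is only a few voxels
</pre>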
By: Mental Wimp (#2471498) - Sun, 01 Mar 2009 21:23:26 -0800
<em>Generally a researcher is interested in one or two areas that have reasonably well-defined anatomical boundaries - the correlation strengths that are reported are usually going to describe the strength of the effect in that one region, not the strength for all voxels that passed everywhere.</em>

But even so, the researcher will experience the same problem as with cancer clusters. By chance, even rare occurrences like cancer (or large correlations) will cluster by any defined grouping metric you want to pick (like geography, or "brain area"), and with the number of voxels available it is inevitable. Correcting p-values using Bonferroni or any other method can't overcome it, because the voxels themselves are spatially intercorrelated. The only way around the problem is to severely limit the number of <em>a priori</em> hypotheses, state them quantitatively and extremely specifically, and cross-validate them carefully. Otherwise, they are just exploratory analyses and not worthy of publication as "findings."

By: logicpunk (#2471842) - Mon, 02 Mar 2009 08:45:57 -0800
<em>The only way around the problem is to severely limit the number of a priori hypotheses, state them quantitatively and extremely specifically, and cross-validate them carefully. Otherwise, they are just exploratory analyses and not worthy of publication as "findings."</em>

And? This is like warning dentists not to have sex with their anaesthetized patients: we all agree, in principle, but christ, who actually does this? Sure, there are going to be a few bad researchers who will throw anything and everything at the wall to see what sticks. We don't like these people because they give the field a bad image. Competent researchers in the area start off with <em>a priori</em> hypotheses about where correlations with behavioral measures will occur in the brain - they look for those correlations to appear in those bounded areas. If a cluster happens to show up elsewhere, they don't go willy-nilly shouting that they've found the new X center of the brain. I'd also like to point out that we are not even discussing the Vul paper anymore. His point was utter crap, but it put the idea in people's minds that social neuroscience is utter crap and that the scientists doing the research are charlatans and frauds, to the point where you're implying that they're not even following basic scientific methods. I'll happily agree with your earlier point that some of the statistical methods in use are quaint and could use an update - that's far different from accusing them of out-and-out deception or incompetence.

By: spacediver (#2471932) - Mon, 02 Mar 2009 10:13:34 -0800
logicpunk: <i>If a cluster happens to show up elsewhere, they don't go willy-nilly shouting that they've found the new X center of the brain.</i>

But this is precisely what Vul is accusing them of doing, if I'm not mistaken. Can you point to examples of where his accusation is false here? Again, remember the survey he sent out to the authors.
By: logicpunk (#2472522) - Mon, 02 Mar 2009 17:46:20 -0800
<em>But this is precisely what Vul is accusing them of doing, if I'm not mistaken.</em>

Keep in mind, Vul is attacking only one kind of analysis - correlations of brain activity with behavioral measures - and it's not even the most common kind of analysis. It's far more typical to do a straightforward t-test without reference to any behavioral traits, and those can give you lovely blobs of brain activity. So, no, that's not the accusation he's making.

By: spacediver (#2472572) - Mon, 02 Mar 2009 18:52:01 -0800
I'm not following - are you saying that he's making more than one kind of attack, one about cherry-picking voxels which correlate with behavioural measures, and another about the far more typical t-tests? Or are you saying that there is a whole field of social neuroscience that isn't subject to his criticism? I'm just concerned with the papers he analyzes in his piece, not papers that weren't attacked.

By: homunculus (#2472649) - Mon, 02 Mar 2009 20:27:16 -0800
<a href="http://www.mindhacks.com/blog/2009/03/junk_food_marketers_.html">Junk food marketers rediscover the Crockus</a>

By: Mental Wimp (#2472731) - Mon, 02 Mar 2009 22:24:10 -0800
<em>This is like warning dentists not to have sex with their anaesthetized patients: we all agree, in principle, but christ, who actually does this?</em>

Um, most dentists don't have sex with their anaesthetized patients. I think. And, yes, good scientific practice is rare in a lot of fields. That's why the literature is clogged with mostly useless findings. It takes a lot of work to find the gems among the paste.

By: logicpunk (#2473023) - Tue, 03 Mar 2009 08:24:18 -0800
<em>I'm not following - are you saying that he's making more than one kind of attack, one about cherry picking voxels which correlate with behavioural measures, and another about the far more typical t-tests?</em>

In the paper, Vul is only attacking what he regards as cherry-picking voxels from correlations. He seems perfectly happy with significance tests (like t-tests). The method he accuses the papers he discusses of using doesn't, as far as I can tell, apply to any of them (see the response by Lieberman et al I <a href="http://www.metafilter.com/79560/Social-Neuroscience#2470054">linked above</a>; they discuss this in greater detail). I'm going to try another example, because thinking about it in terms of brain areas seems to obscure the issue. Take a state like Colorado which (let's say) has half its cities in the mountains and half at sea level. One day you want to find out what the average temperature of cities at sea level is, except you don't know which cities are at sea level and which are in the mountains. You can assume, however, that cities at sea level will have higher temperatures than the ones in the mountains. So you look for cities with a temperature higher than the average for all cities, and you say those are the ones at sea level. This isn't perfect, however - some cities that are actually at sea level might be having a cold snap that day, so you classify them as being in the mountains. So now you have a list of numbers, and what you set out to get was the average temperature for cities at sea level. Seems easy enough, right? This is the step that Vul has a problem with. He claims that by taking the average, you are actually performing an additional inferential step which is non-independent of the first one (the first inferential step was figuring out which cities were probably at sea level; in science terms, "non-independent" is a Bad Thing). The inference (he says) you're making is whether the cities you've selected are significantly above the average temperature for all cities. Since you selected those cities based on their having higher-than-average temperatures, you're guaranteed to get a significant result. Again, however, <strong>NO ONE DOES THAT</strong>. Taking the average of a group of numbers is a descriptive step, not an inferential one; you're just reporting the average temperature for the cities you classified as being at sea level. This number will be slightly inflated because you missed a couple of the cities that were having cold snaps, but it doesn't invalidate the whole analysis.
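logicpunk's Colorado example in numbers (a sketch; the temperature distributions are invented): classify cities as "sea level" when they are above the overall average, then take the group mean. The estimate comes out a little high, as he says, but nowhere near invalid.

<pre>
import numpy as np

rng = np.random.default_rng(3)
sea = rng.normal(20, 5, 500)        # true sea-level temperatures
mountain = rng.normal(10, 5, 500)   # true mountain temperatures
temps = np.concatenate([sea, mountain])

called_sea = temps[temps > temps.mean()]   # the classification step

print(called_sea.mean())   # ~20.8: slightly inflated by the selection
print(sea.mean())          # ~20.0: the true sea-level average
</pre>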
By: logicpunk (#2473024) - Tue, 03 Mar 2009 08:24:50 -0800
<em>Um, most dentists don't have sex with their anaesthetized patients. I think.</em>

Which is my point. By bringing the issue up, however, you're implying that the problem is more pervasive than it actually is, much like your bringing up that scientists need a priori hypotheses that constrain their analyses implies that there's a huge problem with scientists not doing that.

<em>And, yes, good scientific practice is rare in a lot of fields. That's why the literature is clogged with mostly useless findings. It takes a lot of work to find the gems among the paste.</em>

If you're arguing that Important Papers that make Big Steps Forward are less common than minor papers that only add a bit of uncertain knowledge to a field, I agree. This is, I'd say, built into the system. If you're saying that most published papers are deeply methodologically flawed, cite please?

By: Smegoid (#2473467) - Tue, 03 Mar 2009 12:59:17 -0800
Love that this conversation is still going on. Spacediver, I refer you to my comment above with regard to whether he's accusing all of social neuroscience of making stats blunders or just a subset of the studies. (The short answer is yes, he's accusing all of social neuroscience of being in error, and he's flat-out wrong, because this type of analysis represents a tiny minority of the analysis strategies in social neuro... but hey, making irresponsible, sensational soundbite comments gets you presstime!)

By: kliuless (#2473960) - Tue, 03 Mar 2009 19:37:45 -0800
<a href="http://bactra.org/notebooks/neuropsychology.html">Neuropsychology</a>

By: spacediver (#2474181) - Wed, 04 Mar 2009 00:44:33 -0800
Thanks for the replies, logicpunk & Smegoid.
By: Mental Wimp (#2474514) - Wed, 04 Mar 2009 08:35:18 -0800
Look, there's a whole minefield of potentially bad statistical practice in areas where variability is large: regression to the mean, selection bias, multiple inference (in both subtle and less subtle varieties), mis-modeling error (such as treating clustered data as independent), length bias, lead-time bias, etc., etc. Much of it goes unrecognized until someone points out a major blunder directly related to ignoring it. (Even then, the bad practices sometimes continue; see research on antiarrhythmic drug therapies, for example.) Most papers are not capable of creating "major blunders", since the publish-or-perish culture means most of the literature comprises insignificant findings, and many of them have questionable validity because the studies were misdesigned or the analyses misdirected. And this ignores the studies where the bad practices are buried by a methods section that omits or mischaracterizes critical details. You only recognize these latter problems when you review for journals and ask hard questions based on your knowledge of the discrepancy between published methods and actual practice. I really don't have time to review all of the social neuroscience literature, but I have reviewed enough of the general life-science literature to recognize that what Vul et al. are saying is consistent with my own observations as an applied statistician.

By: forrestal (#2475512) - Thu, 05 Mar 2009 06:49:49 -0800
Regarding <em>If you're saying that most published papers are deeply methodologically flawed, cite please? - posted by logicpunk</em>, see <a href="http://ourworld.compuserve.com/homepages/rajm/openesef.htm">Use and abuse of subjectivity in scientific research</a> and <a href="http://www.dharma-haven.org/science/myth-of-scientific-method.htm">Scientific method myth</a>.

By: logicpunk (#2475672) - Thu, 05 Mar 2009 08:40:05 -0800
Thanks for the first link, forrestal. It's an interesting choice, in that it encourages acknowledgement of the subjectivity of scientists (via Bayesian stats) rather than the false objectivity that comes with traditional significance testing. Which is all to the good; there are a number of people in neuroscience and psychology who are pushing Bayesian statistics, and eventually, a lot of people hope, it will supplant traditional significance tests. However, saying that the use of traditional significance testing is a fundamental methodological flaw invalidates close to a century of published results. At worst, it's an imperfect method that can still yield useful conclusions, and better methods are currently being adopted. That second link was just a whiny rant, though.

<em>Look, there's a whole minefield of potentially bad statistical practice</em>

But what Vul et al. are describing isn't that. Just because you're sympathetic with his point that there are bad statisticians doesn't make his specific critique correct. He called out an entire field and he didn't have the goods. He's far more guilty of bad statistical practice than the papers he included in his study.
By: forrestal (#2476282) - Thu, 05 Mar 2009 13:56:04 -0800
Hmmm, can you give some examples of how the second link is a "whiny rant"? <em>(Not that it matters, but I adore statisticians; they keep the barbarians at the gate - see the New Yorker cartoon at the end of this PDF: <a href="http://www.radstats.org.uk/no018/commentary.pdf">"Well, I'll be damned if I'll defend to the death your right to say something that's statistically incorrect."</a>)</em>