Like maybe when you add 7 and 7, let's say, one algorithm is to say "I'll see how much it takes to get to 10" -- it takes 3, and now I've got 4 left, so I gotta go from 10 and add 4, I get 14. That's an algorithm for adding -- it's actually one I was taught in kindergarten.
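For concreteness, the strategy in that quote is easy to write down as code. A minimal sketch in Python (the function name is my own; it assumes single-digit addends whose sum exceeds 10):

```python
def add_via_ten(a, b):
    """Add by first completing a ten: 7 + 7 -> (7 + 3) + 4 -> 14."""
    to_ten = 10 - a        # "I'll see how much it takes to get to 10" -- here, 3
    leftover = b - to_ten  # "now I've got 4 left"
    return 10 + leftover   # "go from 10 and add 4, I get 14"

print(add_via_ten(7, 7))  # 14
```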
Here's another:

But there are other ways to add -- there's no kind of right algorithm. These are algorithms for carrying out the process of the cognitive system that's in your head.

And another:
The way they did it was -- of course, nobody knew anything about photosynthesis -- so what you do is you take a pile of earth, you heat it so all the water escapes. You weigh it, and put in it a branch of a willow tree, and pour water on it, and measure the amount of water you put in. When you're done, when the willow tree has grown, you again take the earth and heat it so all the water is gone -- same as before.

Also, he's talking about science and its failures to account for data, but I really don't think he understands the things he's talking about. Look at his references to Peano's axioms: yes, the axioms do not describe a process for addition. They're not supposed to; that's not what axioms are. Similarly, a 2D representation of a 3D scene may be consistent or inconsistent with the axioms of geometry, but the axioms don't tell you how the drawing was made. But he jumps from that to the assumption that the mental algorithms we use don't have any sort of deeper algorithmic layer, instead of asking whether in fact you can have algorithms about algorithms -- which you can, and we use them (in our computers) every day (sketched below); and this has really fascinating implications for the connection between the way we think and fundamental arithmetic. But he's oblivious to this, despite the fact that it's been around since Kurt Gödel in the 1930s, if not earlier.
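The "algorithms about algorithms" point is easy to make concrete: programs take other programs as input all the time. A minimal sketch (all names invented for illustration) in which one procedure takes two different addition algorithms as data and checks whether they agree -- the same layering, writ small, that Gödel exploited when he turned arithmetic on statements about arithmetic:

```python
def add_via_ten(a, b):
    """The make-ten algorithm from the first quote."""
    to_ten = 10 - a
    return 10 + (b - to_ten)

def add_by_counting(a, b):
    """A different algorithm for the same function: count up from a, b times."""
    total = a
    for _ in range(b):
        total += 1
    return total

def algorithms_agree(algorithms, test_pairs):
    """An algorithm *about* algorithms: it takes procedures as data and
    checks whether they compute the same results on the given inputs."""
    return all(len({alg(a, b) for alg in algorithms}) == 1
               for a, b in test_pairs)

print(algorithms_agree([add_via_ten, add_by_counting],
                       [(7, 7), (8, 6), (9, 9)]))  # True
```

The checker neither knows nor cares which of the two is "the" addition; it just treats algorithms as objects to reason about.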
There has to be a reason you can say "how many cars did they wonder if the mechanics fixed?" but not "how many mechanics did they wonder if fixed the cars?" And the reason can't just be "because that's what a child is more likely to hear," because then you have the problem of infinite regress.

First off, I don't understand this problem of infinite regress, because presumably you'd still have that problem under Chomsky's model, which (in my primitive understanding) is that there are a small number of parameters that flip to make a child's native language faculty match the language's syntax as opposed to other languages' syntax. And the same goes for words: why do we use "blue" for the color blue rather than "glue" or "stew"? Because that's what our parents used. No infinite regress problem at all.

Presumably a Chomskian grammar is useful in distinguishing these two sentences because it would tell us what rules are violated by the second sentence, or that according to a generative grammar there's no possible parse that would eventually lead to those words in that order. Presumably the statistical model is bad because it can't provide any insight into the difference between those two sentences?

But that's simply not true. A statistical model that says the second sentence is ungrammatical will tell us what particular features of that string of words contribute to its poor likelihood under the trained language model (see the sketch below). Even in the worst case -- say, a near black box like a radial basis function support vector machine, or some other model with cryptic parameters -- we can still query the black box to figure out what's going on. For example, we can figure out what types of features are essential to determining grammaticality. Such complete-black-box methods are somewhat rare, and they are not a defining characteristic of statistical methods; if the argument is really about these particular black-box methods, it should be made against that target rather than against the broader and inappropriate target of statistical modeling.

If, in the end, such black-box modeling turns out to be the only way forward in modeling language, then we will devote the effort to figuring out what's in the black box, and if we can't figure that out, we've still learned something about language: that all those non-black-box methods are not sufficient for understanding it. That is a valuable discovery in itself, as it tells us about the minimum level of complexity needed for human grammar.
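To make that concrete, here is a toy sketch (the corpus, the smoothing, and all names are invented for illustration) of a bigram language model that not only scores the two strings but reports which word-to-word transitions drag the score down -- exactly the kind of "why" a statistical model can give:

```python
from collections import Counter
import math

def train_bigram(corpus):
    """Count unigram and bigram frequencies over a toy corpus."""
    unigrams, bigrams = Counter(), Counter()
    for sentence in corpus:
        words = ["<s>"] + sentence.split() + ["</s>"]
        unigrams.update(words)
        bigrams.update(zip(words, words[1:]))
    return unigrams, bigrams

def score(unigrams, bigrams, sentence, vocab_size):
    """Log-probability of a sentence, plus per-bigram contributions.
    Add-one smoothing keeps unseen transitions small but nonzero."""
    words = ["<s>"] + sentence.split() + ["</s>"]
    contributions = []
    for prev, cur in zip(words, words[1:]):
        p = (bigrams[(prev, cur)] + 1) / (unigrams[prev] + vocab_size)
        contributions.append(((prev, cur), math.log(p)))
    total = sum(lp for _, lp in contributions)
    return total, sorted(contributions, key=lambda c: c[1])  # worst first

corpus = [  # invented training data, standing in for a real corpus
    "they wondered if the mechanics fixed the cars",
    "the mechanics fixed the cars",
    "how many cars did they fix",
]
unigrams, bigrams = train_bigram(corpus)

for s in ["how many cars did they wonder if the mechanics fixed",
          "how many mechanics did they wonder if fixed the cars"]:
    total, ranked = score(unigrams, bigrams, s, len(unigrams))
    print(f"{total:7.2f}  {s}")
    print("        weakest transitions:", [bg for bg, _ in ranked[:3]])
```

Even on this tiny corpus the second string scores lower, and the model points at unattested transitions such as ("mechanics", "did") and ("many", "mechanics") as the culprits. A real model would be trained on vastly more data, but the inspectability is the same.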
posted by erniepan at 2:12 PM on November 18, 2012 [8 favorites]