Natural languages are adapted/designed to deal with the entire breadth of human experience very imprecisely.
Formal languages are adapted/designed to deal with narrow pieces of human experience very precisely. The pieces they deal with are typically fragments of (a set of) natural languages.
If you were to develop a formal super-language uniting all formal languages devised so far, in an effort to deal more precisely with the entire breadth of human experience, you'd be confronted with two problems. First, having created this super-language you'd have no way of definitively deciding which natural-language terms corresponded to which super-language terms, or which sentences of the two languages expressed exactly the same proposition. Put another way, natural language is intractably imprecise. Second, it's formally provable that formal languages have severe expressive limitations: roughly, make a formal language strong enough to say anything really interesting and it provably cannot settle, or even express, everything we'd want it to. In this sense the formal-language project of precisification is doomed to fail; try to make formal languages do too much and they explode.
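(The limitative results alluded to here are presumably Gödel's incompleteness theorems and Tarski's undefinability theorem; naming them is my gloss, since the answer doesn't say. A rough statement of Tarski's result: for any consistent formal language L rich enough to encode arithmetic, there is no formula $\mathrm{True}_L(x)$ of L such that

$\mathrm{True}_L(\ulcorner \varphi \urcorner) \leftrightarrow \varphi$

holds for every sentence $\varphi$ of L; a liar-style sentence turns any candidate truth predicate into a contradiction. So a formal language can't even express its own notion of truth, let alone everything a natural language can say.)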
More importantly, we shouldn't want or expect natural languages to be too precise. There are times when we should be interested in precision, and there are other times when good enough is good enough. Striving for exacting precision when we should settle for good enough can be bad, dangerous, and stupid. Transmitting lots of detailed information may be good in general, but if enough of the information isn't sufficiently relevant to the situation at hand, or if its transmission takes too long, then it's bad. When you're being chased by a lion you don't stop to open your pack and unfold your topographical map. When you want to launch a rocket you don't dither around with theories of quantum gravity.
A lot has been written on this subject over the last century or so. One great essay that covers some of what I've talked about is Paul Benacerraf's "What Numbers Could Not Be" (1965). In it he demolishes the notion that there can be a uniquely correct formal-language translation of even a certain well-behaved fragment of natural-language talk (i.e., of pretheoretical, intuitive, or "naive" mathematics), much less of a whole natural language. He elaborates on this in his later work.
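To give the flavor of Benacerraf's argument (this example is the centerpiece of the essay): arithmetic has at least two equally good set-theoretic translations,

$\text{Zermelo: } 0 = \emptyset,\quad 1 = \{\emptyset\},\quad 2 = \{\{\emptyset\}\},\quad 3 = \{\{\{\emptyset\}\}\},\ \dots$
$\text{von Neumann: } 0 = \emptyset,\quad 1 = \{0\},\quad 2 = \{0, 1\},\quad 3 = \{0, 1, 2\},\ \dots$

and they disagree: on von Neumann's translation $1 \in 3$ comes out true, on Zermelo's it comes out false. Since naive arithmetic itself is silent on whether $1 \in 3$, there is no fact of the matter that could make either translation the correct one.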
It is bull taken to the nth degree. Consider the study of minds: every mind is different (otherwise the world would be a monotonous place), every mind has its own neural network, and every mind is at its own stage of nature and nurture. And this convoluted formulation is supposed to take all 200 billion variations into account? Drop this baloney; it will confuse you even more than you already are. Peace.
I don't think enough people are ever going to understand it for it to catch on.
No, it definitely can't - not ever.
It would have to include more complicated terms such as modal expressions and vague and fuzzy terms.
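(To illustrate what that would demand, here is a textbook sketch, not anything proposed in this thread: modal expressions are standardly handled by evaluating sentences at possible worlds via an accessibility relation R,

$\llbracket \Box\varphi \rrbracket_w = 1 \iff \llbracket \varphi \rrbracket_{w'} = 1 \text{ for every } w' \text{ such that } wRw'$

while vague terms are handled, e.g. in fuzzy logic, by letting truth come in degrees, $\llbracket \mathrm{tall}(a) \rrbracket \in [0, 1]$. Each is a substantial formal apparatus in its own right, and combining them is harder still.)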