Re: [tlug] Open Access Journals
- Date: Mon, 31 Mar 2014 01:16:10 +0900
- From: "Stephen J. Turnbull" <stephen@example.com>
- Subject: Re: [tlug] Open Access Journals
- References: <53292BF2.6030309@dcook.org> <CAAhy3dsA3yJ+dhP8y5AnkDm0Rhepfe6TyxXwENkiWtrqtqAgYQ@mail.gmail.com> <20140322100123.920638c262ed2e35be0ecc2d@kinali.ch> <87zjkggv3n.fsf@uwakimon.sk.tsukuba.ac.jp> <20140326092128.ce15a21d03bfafbbcfd660d5@kinali.ch> <87wqfgown8.fsf@uwakimon.sk.tsukuba.ac.jp> <87ppl7ou5g.fsf@uwakimon.sk.tsukuba.ac.jp> <20140330123127.db17cd41959005fa6002d3c6@kinali.ch>
Attila Kinali writes:

 > I don't know how it was in other countries, but the (AFAIK) 80s and
 > 90s were all about interdisciplinary work in Switzerland. By the
 > time I entered university, if you weren't doing something
 > interdisciplinary, you weren't doing important research in the eyes
 > of the general population (and also politics).

The response of the lay people sounds very faddish to me. There's plenty of good research to be done within disciplines, and the long-term goal of interdisciplinary research should be to establish a discipline, i.e., a common set of concepts and methods used to treat the same kind of information, allowing people working on those problems to communicate without tripping over terminology. If *everybody* is doing "interdisciplinary research," I doubt that will be the outcome.

 > > Rather, it points to the problem of "creating appropriate
 > > standards of rigor" in the field.
 >
 > Eh.. Name me a field and I can pull up a dozen papers containing
 > nothing but bullshit from that field. Even hard sciences and
 > engineering contain a lot of stuff that should never have been
 > published.

But that's nowhere near the point I was trying to make. Of course crap happens in every field. The problem is that in interdisciplinary work identifying crap is very hard, because everything looks like crap from the point of view of the established disciplines. The problem with bottom-of-the-barrel research is related: people doing crap research *within* an established field generally have not assimilated the *discipline* (concepts, methods, standards of rigor) associated with the field. The trick to keeping your job in that case is to hang out with people who will praise your research anyway, and then show that to your dean. :-)

 > Soft sciences like computer science, social sciences (or anything
 > that contains the word "science" in it), and anything else that is
 > hard to measure and quantify are much more prone to bullshit.

That's a fallacy, actually. The current standard in hard science (from what my colleagues tell me) seems to be a reproducible recipe, with just enough insight to get a patent. You don't need to have a clue what's actually going on (STAP cells), or it can be a combination of a 100-year-old fact about the density of hydrogen dissolved in a particular metal and a 50-year-old crank conjecture that doesn't come close to generating the desired result (cold fusion). Sure, these experimenters may have "hard" measurements, but mostly they're just throwing shit against the wall to see if any of it sticks. (You know that's literally how an awful lot of drugs are discovered, right? They dig up a few tons of Amazon muck, sieve out as many unknown chemicals as they can, and start injecting bacteria and Drosophila and mice with them to see if anything "interesting" happens to the experimental subject.)

Sure, these people *do* get lucky and *do* produce reproducible anomalies from time to time (Michelson-Morley) -- but it takes an Einstein to make sense of it. People like Murray Gell-Mann and Richard Feynman saw the world in a different way, and they actually *understood* how it works -- and then they taught us about it. They did a good enough job of it that governments are willing to pay a couple billion dollars a pop for particle accelerators. (What is CERN worth, anyway -- about 2/3 of Swiss GDP? ;-)
And what they do in their heads is not so different from what folks like Kenneth Arrow and Roger Myerson (in my field) do.[1]

At the other end of the quality scale, how my colleagues do the shit-stick test doesn't involve laboratory measurement. Instead, they take economic data (which is quite accurate in most cases; things like prices are basically infinite-precision, since they're integers) and run a bunch of statistical regressions, switching variables in and out or tweaking nonlinearities, until they get a "statistically significant" deviation from theory (these are the better ones) or zero for some coefficient. And that's a publication ... Yay! :-( (A toy simulation of that kind of specification search is sketched after the footnotes.) I'm sorry if it's easier and cheaper to do shit-stick research in economics than in physical chemistry or psychology; them's the breaks. But it's still shit-stick research in either field.

 > I'm not exactly sure where this comes from, but I hold the
 > requirement to publish at least partially responsible for this.

Of course it is. But the real problem is that it's really hard to deprecate shit-stick research. Consider Masatoshi Koshiba, the Todai physicist who actually trapped a "cosmic neutrino" and got a Nobel Prize for it. The device he got the government to spend a couple hundred oku-yen on was primarily looking for proton decay[2], and failed to find it. He was smart enough to use the device for other things, like looking for neutrinos, and was rewarded for that. Now, I think it's fair to assume that the Nobel committee knows what it's doing, and this was a Nobel-class result. Proton decay would have been even bigger, though -- but he didn't get that. Question: how can you tell which will work? Of course you can't, but what makes Koshiba a Nobel-class physicist is that even though he can't predict what will stick, he has a "nose" :-) for it, and he gets more than his share.

So, in some sense, all you need to be successful at that kind of research is enough money and some luck. A Nobel Prize on luck alone? Once in a million years, I suppose. But you can produce a steady stream of (mostly meaningless) publications, and it's hard for deans to tell the difference between the intuitive scientists and the wannabes. So it turns a creative activity into a job. And I think that's the attraction for third-rate researchers.

Footnotes:

[1] And my all-time favorite is Saunders MacLane, a pure mathematician,
    who is able to express *why* math is what it is.

[2] This is really important for validating and refining "Grand
    Unified Theories" in particle physics.
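P.S. Here's a toy sketch of the regression fishing described above, in Python. The data are fabricated pure noise, so every "finding" it reports is spurious by construction; the point is only to show how trying enough specifications manufactures "significance" on its own:

import itertools
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, k = 100, 10                      # 100 observations, 10 candidate regressors
X_all = rng.normal(size=(n, k))     # regressors: pure noise
y = rng.normal(size=n)              # dependent variable: unrelated noise

specs = list(itertools.combinations(range(k), 3))  # every 3-regressor "model"
hits = 0
for spec in specs:
    # OLS with an intercept for this particular choice of regressors
    X = np.column_stack([np.ones(n), X_all[:, list(spec)]])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    dof = n - X.shape[1]
    # classical standard errors, t-statistics, and two-sided p-values
    se = np.sqrt(resid @ resid / dof * np.diag(np.linalg.inv(X.T @ X)))
    t = beta[1:] / se[1:]           # intercept excluded
    p = 2 * stats.t.sf(np.abs(t), dof)
    if (p < 0.05).any():            # any "significant" coefficient at all?
        hits += 1

print(f"{hits} of {len(specs)} specifications yield a 'significant' "
      f"coefficient at the 5% level -- from pure noise")

Run it and a sizable fraction of the 120 specifications (roughly 1 - 0.95^3, about 14%, since each model tests three coefficients) will show at least one p < 0.05, even though the true effect of every regressor is exactly zero. Which is why "we searched until something was significant" is not evidence of anything.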
- Follow-Ups:
  - Re: [tlug] Open Access Journals (From: Raymond Wan)
- References:
  - [tlug] Open Access Journals (From: Darren Cook)
  - Re: [tlug] Open Access Journals (From: Raymond Wan)
  - Re: [tlug] Open Access Journals (From: Attila Kinali)
  - Re: [tlug] Open Access Journals (From: Stephen J. Turnbull)
  - Re: [tlug] Open Access Journals (From: Attila Kinali)
  - Re: [tlug] Open Access Journals (From: Stephen J. Turnbull)
  - Re: [tlug] Open Access Journals (From: Stephen J. Turnbull)
  - Re: [tlug] Open Access Journals (From: Attila Kinali)