>>> Posting number 9, dated 1 Jun 1996 16:36:33

Date: Sat, 1 Jun 1996 16:36:33 GMT
Reply-To: Discussion of Fraud in Science
Sender: Discussion of Fraud in Science
From: "Ted Gerrard (MMF-NH)"
Subject: Re: The truck driver

[. . .]

Anyone wanting to read a couple of good "hoax" scientific articles should
try my WWW home page, especially if you happen to be a statistician. I'm
willing to bet a social scientist would have less trouble spotting the
"spoof" than the editor of NATURE or SCIENCE.

Enjoy your holiday Al, I'll hold your end up meanwhile.

Ted Gerrard.

*******************************************************************
E. C. Gerrard
Ornithology Section
Museu Municipal do Funchal (Historia Natural)
Rua da Mouraria, 31
9000 FUNCHAL, MADEIRA, PORTUGAL
Tel.: +351-91-792591  Fax: +351-91-225180
e-mail: egerrard@tethys.uma.pt
WWW page: http://www.mmf.uma.pt/~egerrard/
*******************************************************************

[. . .]

>>> Posting number 1434, dated 22 Nov 1996 00:34:24

Date: Fri, 22 Nov 1996 00:34:24 -0500
Reply-To: Discussion of Fraud in Science
Sender: Discussion of Fraud in Science
From: Leon Mintz
Subject: Re: (Fwd) Re: Compensation?
Mime-Version: 1.0
Content-Type: text/plain; charset="us-ascii"

I want to paraphrase one well-known politician. Scifraud is filled with
off-topic discussions and is the worst possible place for an open and
interesting discussion of scientific fraud, unless you consider the
alternatives... and there are none. Take Science, for example. It is a
party publication which sweeps any information about fraud under the
carpet unless it is first published in The New York Times.

Leon Mintz
November 22, 1996

>He also has sailed imperturbably on without any public response to those
>who have asked questions online of him concerning his goals/values for
>this listserv group. I also have had concerns about the main purpose of
>this listserv group.
>Bashing science seems to outweigh critical
>discussion of how to deal with problems of integrity in scholarly
>research. Do you think that this is a reasonable conclusion, Dr. Higgins?
>
>On Thu, 21 Nov 1996, Al Higgins wrote:
>
>> My thanks to R. Cammer for his beautifully reasoned and very much deserved
>> condemnation of the latest in a benighted series of complaints against
>> A. C. Higgins, who sails imperturbably on as his detractors spatter
>> themselves with mud.
>> ________________________________________________________________
>> A. C. Higgins                 ACH13@louise.csbs.albany.edu
>> College of Arts and Sciences  VOX: 518-442-4678
>> Sociology Department          FAX: 518-442-4936
>> University at Albany
>> Albany, NY, USA 12222
>>

[. . .]

>>> Posting number 1815, dated 2 Mar 1997 13:56:56

Date: Sun, 2 Mar 1997 13:56:56 +0000
Reply-To: Discussion of Fraud in Science
Sender: Discussion of Fraud in Science
From: Ted Gerrard
Subject: Re: Family Traditions (was Should we all just give up?)
Comments: cc: peter.berthold@uni-konstanz.de, nature@nature.com,
          nature@natureny.com, nature@naturedc.com
Mime-Version: 1.0
Content-Type: text/plain; charset="us-ascii"

At 14:24 02-03-1997 +0100, Per Dalen posted a piece in response to the
"Should we all just give up?" debate, offering a passage from Max Planck
which explained why we should NOT give up.

How sad for science that some of the biggest fraudsters in the animal
behavioural field are still working at a Max Planck Institut (Prof. Dr.
Peter Berthold, to name but one... see cc above) despite widespread
publicity (on Scifraud and elsewhere). How sad also that this debate was
started following a piece published by Nature ("Whistleblowers face blast
of hostility", Nature, 20th Feb 97), yet that same magazine published
Berthold's rubbish and still steadfastly supports him.

Ted Gerrard.
*********************************************************
E. C. Gerrard
Ornithology Section
Museu Municipal do Funchal (Historia Natural)
Rua da Mouraria, 31
9000 FUNCHAL, MADEIRA, PORTUGAL
Tel.: +351-91-792591  Fax: +351-91-225180
e-mail: egerrard@tethys.uma.pt
WWW page: http://www.mmf.uma.pt/~egerrard/

[. . .]

>>> Posting number 2078, dated 1 Apr 1997 22:23:21

Date: Tue, 1 Apr 1997 22:23:21 EST
Reply-To: Discussion of Fraud in Science
Sender: Discussion of Fraud in Science
From: Ted Hermary
Subject: Re[2]: Crisis in Science
In-Reply-To: In reply to your message of Wed, 26 Mar 1997 11:39:00 EST

Scifraud members,

I found Ms. Gunsalus' editorial interesting, partly because it poses a
question asked repeatedly over the past 20 years: namely, what would be
involved in a "scientific approach" to fraud/misconduct. The answers
provided seem to go in two different (if related) directions, both of
which are at least implicit in the editorial.

The first answer is simply to say that a scientific approach is one where
scientists are in control of misconduct/fraud (including the definition
of those terms). This is usually phrased in terms of conflict with
legalistic or governmental-bureaucratic approaches and control. One
graphic illustration of this theme -- at least of the "legal versus
scientific" variety -- came following the overturning of the ORI's
Imanishi-Kari decision last year, when there were some discussions about
the composition of the appeal panel (two lawyers, one scientist) and the
character of its (written) decision. But there are many other less
dramatic instances, including the Commission on Research Integrity's
report, which was offered as an attempt to balance or integrate
scientific and legal approaches (as well as the scientific, the
governmental, and the public interests).

The second meaning given to a "scientific approach" is the character of
the claims being made about it, especially its "epidemiology" (as some
put it).
This sense of a "scientific approach" has perhaps been less often
discussed (something I find quite interesting). Sporadic claims have been
made to having some "data" related to the question, most often initiated
by "non-scientists" -- a few sociologists (Woolf, Swazey), a couple of
psychologists, but also by some scientists (Stewart and Feder's early
attempts) and a couple of science associations (Sigma Xi and the AAAS).
Notwithstanding how these arguments have been re-presented in the press,
I don't think any of these people have ever claimed they really have data
that directly or adequately addresses such questions.

I find this question of what would constitute a "scientific approach" to
this question fascinating. In fact, this was one thing behind my posting
to this group a few months back concerning various "tools" that have been
used or proposed for establishing misconduct/fraud (e.g., cases,
investigations and official statistics, surveys, data audits or the
"invigilation" of research). My post provoked some very intriguing
replies, I thought, though I'd be interested to hear any other ideas
along these lines, especially from scientists themselves.

Gunsalus observes that the lack of a scientific basis for making claims
about such matters has not precluded some strong pronouncements on the
matter. Indeed, I've seen the same scientists say, in one instant, "We
have no data base", and in the next instant assert that its prevalence or
distribution, etc. is X, Y or Z. The Koshland (1987) editorial she
alludes to - an argument recently resurrected by a couple of sociologists
(see Appendix to this post for excruciating details) - is only one
instance. I should note that she (or CHE, or Al) got the estimate wrong;
_Science_ editor Koshland's estimate of scientific paper purity was
actually 99.9999%, not the 99.9956% given in the Scifraud version of
Gunsalus's editorial.

All this might be chalked up to the general difficulty people seem to
have talking in a vacuum.
However, I do find it especially interesting in the context of
scientists' utterances about fraud/misconduct.

Ted Hermary
czth@REDACTED.mcgill.ca

===========================================================
Appendix
A brief, "pre-Gunsalus" history of the 99.9999 claim

Daniel Koshland, "Fraud in Science," Science, 235 (9 January 1987),
p. 141. The source for the original argument that:

     ... 99.9999 percent of [scientific] reports are accurate and
     truthful, often in rapidly advancing frontiers where data are hard
     to collect. [I assume he doesn't mean data on the accuracy of the
     literature.] There is no evidence that the small number of cases
     that have surfaced require a fundamental change in the procedures
     that have produced so much good science.

Raymond R. White, "Accuracy and Truth" (Letter, with Reply), Science, 235
(20 March 1987). This biologist says "The idea that we scientists are
99.9999% ethically pure is not only ridiculous but also obviously
self-serving", says that "actual falsifications of data ... probably
pollute two orders of magnitude more reports than Koshland imagines", and
cites a number of "less direct deceits" that White says are "abundant".

In reply, Koshland says he was not discussing 'ethical purity' but only
the correctness of published data; he also disputes the prevalence of the
'lesser deceits'. He explains his 99.9999 reasoning this way:

     I looked at one journal, the Journal of Biological Chemistry, which
     published 17,000 pages in 1986. Using a rough estimate of 50 pieces
     of data per page, one gets close to 1 million bits of information,
     for one journal in one year. There are hundreds of journals in
     biochemistry alone and hundreds more in such diverse fields as
     physics, geology, psychiatry and so forth. Yet only one or two
     cases of fraud are exposed per year.

Stephan Fuchs and Saundra Davis Westervelt, "Fraud and Trust in Science",
Perspectives in Biology and Medicine, 39 (December 1996), pp. 248-269.
This article came highly recommended to me by a scientist who at least
used to frequent Scifraud, as a paradigm example of how sociologists (of
science) should approach misconduct/fraud, especially for its treatment
of the prevalence issue. I cannot second the recommendation on either
count. At any rate, they do not cite Koshland, but probably should have,
since they approach it a similar way, with similar numbers to offer.

Fuchs and Westervelt say:

     There is no empirical way to decide this [the "few bad apples"
     versus "tip of the iceberg" views of misconduct prevalence],
     although we shall offer a theoretical argument as to why the
     iceberg theory is very likely false. (p. 250)

I've looked long and hard for their theoretical arguments, but can only
find arguments that competition may breed suspicions of fraud (among
other things) (p. 254), that "Normally fraud is unexpected because
scientific communication relies so much on trust" (p. 256), and that
"there are powerful disincentives for starting fraud trials" (p. 257).

Be that as it may, they go on to argue, in a section entitled "Rarity of
Misconduct" (pp. 260-263), that "there is some empirical evidence for the
extreme rarity of fraud". They cite various claims to numbers of known or
suspected cases, from official and unofficial sources, and suggest an
average of 10 misconduct cases per year which are upheld, but the authors
"up" this to 100 in order to offset non-detection rates. Acknowledging
the difficulties in deciding on a baseline, they focus first on the
population of biomedical researchers. Using 27,000 NIH grants (1993
figures) and an average of 4 scientists per grant, they arrive at about
112,000 U.S. biomedical researchers, adding:

     This is probably a most conservative estimate, since it includes
     only scientists working for or being supported by the NIH. The
     actual figure is probably around 300,000. Given our estimate of 100
     cases of misconduct, this yields a percentage of .089, less than
     one-tenth of 1 percent.
     However, there are good reasons for thinking that the actual
     population is not *scientists*, but (peer-reviewed) scientific
     *communications*; just as a more realistic, if immeasurable, base
     line for the crime rate would be *actions*, not population size.
     The reason for this is that most actions even of criminals are not
     criminal. Since the actual number of all scientific communications
     is unknowable as well, let us choose publications, again focussing
     on the biomedical sciences. In 1992, in the United States alone,
     there were an estimated ... total of 3,800 biomedical journals.
     [Citing Ulrich's International Periodical Dictionary.] Again,
     estimating conservatively, we assume that each journal publishes
     five issues per year, each with five articles. This yields a
     misconduct percentage of .01. We must conclude that, given whatever
     little we do know about the frequency of misconduct, and also given
     the immense quantity of scientific communications, fraud and
     plagiarism are numerically insignificant in science. (pp. 262-263,
     all emphasis in original)

Incidentally, the bulk of this article is divided between two concerns:
(1) the role of trust between scientists (largely speculative,
unfortunately, IMO), and (2) an argument that this trust (and perhaps the
trust non-scientists put in science) is probably well-founded.

I'm sure I must have missed some versions of this form of argument, but
this should give the picture.

Ted.

[. . .]

>>> Posting number 2104, dated 14 Apr 1997 15:49:07

Date: Mon, 14 Apr 1997 15:49:07 +0400
Reply-To: Discussion of Fraud in Science
Sender: Discussion of Fraud in Science
From: Dmitry Yuryev
Subject: Mass 'routine' data fabrication in Scatchard plot analysis

Scatchard plot analysis is apparently the most popular procedure of data
analysis in modern science, with applications in such fields as defining
receptor specificity in pharmacology, measuring the intensity of immune
response, and many other fields of biomedicine.
It is mentioned in approximately 5,000 publications every year. It was
designed for analysis of the adsorption isotherm, i.e. the experiment
where concentrations of one reagent (ligand) adsorbed (or BOUND) to
another reagent (binder) are measured at varying TOTAL concentrations of
added ligand. This experiment obviously yields a rectangular hyperbola on
the coordinate plane BOUND vs. TOTAL. The Scatchard plot is just a
redrawing of these data on the plane BOUND/FREE vs. BOUND, where
FREE = TOTAL - BOUND. A strange and awkward procedure indeed. And the
reality is that understanding all the details of how binding data appear
on a Scatchard plot turns out to be too difficult a job for most
researchers. One funny consequence is that fabricating trustworthy
Scatchard plots is not easy either. I have found two peculiarities in the
appearance of Scatchard plots that clearly point to a fraudulent origin
of the data. One other peculiarity, in the way the data points scatter,
allows a 'statistical' conclusion that at least partial fabrication (or
'beautifying') of data is a widespread practice in this field.

1. Some published Scatchard plots (e.g. refs. 1, 2) contain impossible
data points lying exactly on the Y-axis (i.e. with B=0 and B/F>0). There
is no hope of claiming that such a point lies 'almost' on the Y-axis when
the next closest point to the Y-axis lies at some detectable distance.
Obviously, in this case there should be about a 100-fold difference in
BOUND, and hence in FREE, concentrations for these two points, which is
unthinkable without special mention in the description of the
experimental procedure. Of course, indisputable cases of this type of
error are very rare.

2. It is a peculiar property of the Scatchard plot that both the X- and
Y-coordinates on this plot are linear functions of the measured signal
(BOUND); so pairs of data points obtained at the same concentration of
added ligand in two experiments should lie approximately on a line drawn
through the origin.
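The through-origin property described above is easy to check numerically.
A minimal sketch, assuming a one-site binding model with capacity R and
dissociation constant Kd (illustrative values of my own choosing, not
taken from any cited paper):

```python
import math

def bound(total, R, Kd):
    # BOUND for a single class of sites: the smaller root of
    # B^2 - (total + R + Kd)*B + R*total = 0.
    s = total + R + Kd
    return (s - math.sqrt(s * s - 4.0 * R * total)) / 2.0

def scatchard(total, b):
    # Scatchard coordinates (x, y) = (BOUND, BOUND/FREE), FREE = TOTAL - BOUND.
    return b, b / (total - b)

# Replicate measurements at the same TOTAL differ slightly in BOUND, but
# their Scatchard points share (approximately) the slope 1/(TOTAL - BOUND)
# of a ray through the origin.  A vertically stacked pair (same BOUND,
# different BOUND/FREE at the same TOTAL) is impossible here, because y is
# fixed by x and TOTAL as y = x / (TOTAL - x).
R, Kd = 10.0, 20.0
for T in (2.0, 5.0, 10.0, 20.0):
    B = bound(T, R, Kd)
    slopes = [scatchard(T, b)[1] / scatchard(T, b)[0] for b in (0.97 * B, 1.03 * B)]
    print(T, [round(s, 4) for s in slopes])  # each pair of slopes nearly equal
```

Note also that on this model B/F = (R - B)/Kd, which is exactly the
linearity in the measured signal that the posting relies on.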
Quite naturally, careless fabrication results in pairs lying exactly one
under another on vertical lines (e.g. refs. 1, 3). This sort of folly is
a much surer sign of data falsification.

3. The Scatchard plot also has a very unusual pattern of data-point
scattering. Small concentrations of BOUND are measured with a bigger
relative error, and since the value of BOUND/FREE remains large at small
BOUND, the visible scattering in BOUND/FREE is also large. So the
scattering of points near the Y-axis is defined by the relative error in
BOUND concentrations, and it should necessarily grow to infinity as the
BOUND concentration approaches zero. Yet in reality no more than 1-2% (my
estimate) of published Scatchard plots show this widening of the error
envelope at small BOUND concentrations. The explanation is that, with
very rare exceptions (e.g. ref. 4), guides on binding experiments provide
smooth lines without any error envelopes as examples of a 'nice-looking'
Scatchard plot.

References
1. Scheibe R.J., Wagner J.A. (1992) - J. Biol. Chem., 267:17611-16.
2. Portolano S. et al. (1993) - J. Immunol., 150:880-7.
3. Nice J.W., Metzger W.J. (1997) - Nature, 385:721.
4. Munson P.J., Rodbard D. (1983) - Science, 220:979-81.

_______________________
This message was derived from my paper 'Absurd Trivial Errors in
Scatchard Plot Analysis', which may be downloaded from:
http://www.glasnet.ru/~yur77/absurd.htm

Dmitriy K. Yuryev
yur77@REDACTED.apc.org

[. . .]

>>> Posting number 2106, dated 15 Apr 1997 09:35:36

Date: Tue, 15 Apr 1997 09:35:36 +0400
Reply-To: Discussion of Fraud in Science
Sender: Discussion of Fraud in Science
From: Dmitry Yuryev
Subject: Nature's editors conceal fraud?
Comments: cc: nature@nature.com

Actually, I don't know whether scientific journals have any
responsibility for publishing fraudulent papers. I submitted the appended
letter to Nature's office on 7 March and still have no response, even
after an additional query. Is this a common practice?
Dmitriy K. Yuryev
yur77@REDACTED.apc.org

----------------------------

Sir - A recent publication in your journal (J.W. Nice & W.J. Metzger,
Nature, 385:721, 20 Feb 97) presents, in Figures 2 and 3, data from four
experiments, each drawn both on a linear coordinate plane and on a
Scatchard plane. It is easy to see in Fig. 3 that the Bound
concentrations on the linear coordinate plane (Y-axis) do not coincide
with the Bound concentrations on the Scatchard plot (X-axis). Moreover,
none of the Scatchard plots in Figs. 2 and 3 can be based on any real
data. It is a peculiar property of the Scatchard plot that both the X-
and Y-coordinates on this plot are linear functions of the measured
signal (Bound); so pairs of data points obtained at the same
concentration of added ligand in two experiments should lie approximately
on a line drawn through the origin. Obviously, these pairs in Figs. 2 and
3 lie precisely one under another - a rather typical error in fabricating
Scatchard plots. Perhaps my paper 'Absurd Trivial Errors in Scatchard
Plot Analysis' (http://www.glasnet.ru/~yur77/absurd.htm) may be of use to
your 'peer reviewers'. I think it is shameful for such a pretentious
journal as yours to publish works committing any of these errors.

[. . .]

>>> Posting number 2125, dated 19 Apr 1997 12:35:56

Date: Sat, 19 Apr 1997 12:35:56 +0100
Reply-To: Discussion of Fraud in Science
Sender: Discussion of Fraud in Science
From: Ted Gerrard
Subject: Re: Nature's editors conceal fraud?
Comments: cc: nature@nature.com, nature@natureny.com, nature@naturedc.com
Mime-Version: 1.0
Content-Type: text/plain; charset="us-ascii"

At 09:35 15-04-1997 +0400, Dmitry Yuryev wrote:
> Actually I don't know whether scientific journals have any
>responsibility for publishing fraudulent papers.
> I have submitted the appended letter to Nature's office on 7 March
>and still have no response, even after an additional query.
>Is this a common practice?
>
>Dmitriy K.Yuryev,
>yur77@REDACTED.apc.org
>

In the case of the magazine "Nature", seemingly so. Many Scifrauders will
recall my multitude of postings last year in which I publicly accused the
then editor, Sir John Maddox, of knowingly publishing false scientific
claims in order to boost circulation. His eventual reply on Scifraud
prior to resigning --- oops --- retiring, was memorable. His successor is
no better. Don't waste your time, Dmitry - it's not good for the blood
pressure. As usual, copies of this post go to Nature, doubtless to be
instantly trashed.

Ted.

*********************************************************
E. C. Gerrard
Ornithology Section
Museu Municipal do Funchal (Historia Natural)
Rua da Mouraria, 31
9000 FUNCHAL, MADEIRA, PORTUGAL
Tel.: +351-91-792591  Fax: +351-91-225180
e-mail: egerrard@tethys.uma.pt
WWW page: http://www.mmf.uma.pt/~egerrard/

[. . .]

>>> Posting number 2135, dated 21 Apr 1997 15:03:00

Date: Mon, 21 Apr 1997 15:03:00 EST
Reply-To: Discussion of Fraud in Science
Sender: Discussion of Fraud in Science
From: "Gardenier, John S."
Subject: Re: Editorial Nonsense

I have to weigh in full force behind Al's subject title. All the editor
of Nature has done is reveal a vast ignorance of the operations of the
National Research Council and of Federal Advisory Committees, as well as
the effects of FACA - the Federal Advisory Committee Act. A "faceless
bureaucrat," I have been involved both with advisory
committees/commissions and with several National Research Council
studies - the latter as an official representative of my agency (then the
U.S. Coast Guard). Let me corroborate Al's point by debunking just one of
the editorial's fatuously and unrelentingly inaccurate paragraphs:

"FACA has certainly succeeded in shining some valuable light on the
workings of US government agencies. The sometimes cumbersome operation of
its committees is doubtless an improvement on what went on before, behind
closed doors.
But its insistence on long and unproductive open sessions, and more
importantly the degree of civil service supervision that it imposes,
would undermine the academy complex."

FACA does NOT "shine light on the workings of U.S. government agencies";
it ensures that officially established advisory committee processes and
(most) deliberations are open to the public. Government agency
deliberations are NOT similarly open.

There is NO insistence on "long and unproductive open sessions." Each
committee can establish its own meeting formats and times. The meetings
must be open to the public, but normally the public cannot comment
informally. Unless the committee chooses expressly to invite comments,
either from targeted segments of society or from the general public, only
the committee members speak. Thus, the length of the meetings is held to
what the committee chair and members feel is needed and affordable,
considering their charter and their funding.

Most important of all, there is NO "civil service supervision" of the
direction or deliberations of committees or commissions. Such bodies are
formed precisely because the government wants expert advice from OUTSIDE
government. The civil service involvement is logistical and facilitating;
civil servants may provide a secretariat, process travel and per diem
claims for committee members, arrange for the recording/minute-taking of
meetings, reproduce and distribute agendas and background materials as
requested by the chair and members, arrange the official public
announcements of the meetings, and take care of such items as attendee
lists, name badges, etc. Most importantly, there is no civil servant
participation in the deliberations of the committee/commission itself.
(As with any other source of information, a government official may be
asked to provide information or testimony to a committee, but as a guest
for that input only, not as a part of the committee.)
Contrast this with the operation of committees of the National Research
Council (NRC). When government agencies sponsor NRC research, the NRC is
essentially acting as a contractor to the government, just as commercial
technology companies or think tanks do. The main difference is the NRC
committee format. The government appoints a "liaison" to the committee,
whose job it is to ensure that the staff and the committee understand
what the government wants from this particular effort. The NRC selects
the committee members, sometimes asking the government liaison about
specific potential sources of expertise. The NRC also handles the
logistics and secretariat services, and provides multi-level INTERNAL
(not "peer review") editorial reviews of the reports produced by the
committees. Both an NRC staffer and the government liaison attend the
meetings (which are neither open nor advertised nor, in most cases,
recorded). They participate fully in discussions. Inevitably, the liaison
also imparts to the committee the interests, biases, and policy/political
leanings of the sponsor. Often the interests and funding of several
agencies are pooled in a standing committee of the NRC, which then
formulates several projects over a period of years. This does not change
the fundamental relationships.

The tone of the editorial would suggest that NRC committees are mainly
composed of Academy members, who (very arguably) are consistently the
leading scientific lights of their disciplines. That is seldom true; the
scientific makeup of a committee is usually working-level scientists who
are listed in American Men and Women of Science, who have worked and
published in the topic area to be addressed, and who are hungry enough
for the lesser prestige of having been a member of an NRC committee (NOT
the Academy) that they are willing to put in a fair amount of work for no
pay.
Because government-sponsored projects usually have policy aspects,
non-scientist experts are also invited to participate. Finally, an NRC
staff member (a salaried employee who is neither an Academy member nor a
guest topical expert) is also an influential member of the group -
typically the one who actually writes up the results of the
deliberations - and also a participant in the discussions.

Let me give an example. Congress directed the Coast Guard to find some
way to deal with the contribution of alcohol use to recreational boating
accidents. I was tasked to formulate a research plan to accomplish that
mission. After some background study of cultural influences on alcohol
use, generally and in recreational boating, of accident reports, and of
the activities of state and local police organizations which try (with
moderate success) to limit the worst abuses on recreational waters, I
determined there had never been a comprehensive, organized scientific
assessment of the problem. My superiors enthusiastically accepted my
recommendation that we ask the NRC to assemble a diverse group of experts
to define the problem in researchable terms and to formulate the research
program for us. We provided the funding, the NRC did the job well, and
the project was successful, except that the Coast Guard soon after
"downsized" its research program for budgetary reasons, eliminating the
office which would have managed the program.

No member of the Academy was involved. The scientific experts knew the
influence of alcohol on individual judgment, coordination, and complex
task performance; knew the chemical, biological, and engineering aspects
of breathalyzer testing; knew the engineering, operations, and legal
aspects of recreational boating; and knew a great deal about the
application of similar technologies/considerations in successful efforts
to decrease alcohol involvement in highway accidents.
The committee also had representation from boating associations,
boatbuilders, the Coast Guard Auxiliary (a loosely affiliated civilian
organization devoted to boating safety), and state and local maritime
police. These latter were not "scientists," but their expertise was just
as valuable. None of the meetings was open to the public, but through the
cooperation of boating magazines, the committee requested comments and
suggestions on the program from the boating public.

Both I and the NRC staff member participated fully in discussions. I had
to explain the Coast Guard's Recreational Boating Safety Program, its
research support, and its legal authority, as well as our relations with
state and local governments, the Auxiliary, and the public. I also
provided information on what policy options we had already considered and
our reasons for favoring or disfavoring certain ones. Inasmuch as I
represented "the customer" and the funding agency, my presence carried
substantial influence, which I believe I exercised ethically and
responsibly.

In my opinion, there was much right and nothing wrong with that project
and with most of the other NRC work in which I was involved. It is
important to realize, however, (1) the total absence of Academy member
input in most studies, (2) the deep involvement of both NRC staff and
government liaison in committee deliberations, (3) the influence of the
source of funding (often not revealed), and (4) the fact that the public
has NO routine or necessary access to the deliberations. Potential for
abuse is inherent in the present system, and it is occasionally abused.
(In the one case I know of personally, the abuse came from NRC staff; it
could as easily come from an unethical government liaison.) Also, if the
study involves industry, key corporate executives or their legal staff
may appropriately be included in committees. Similarly, influential
representatives of unions or of environmental or other interest groups
may be included.
Depending on the relative charisma and ethical sternness of the
scientific committee members, as compared to the persuasiveness of the
interest group representatives, the interest groups POTENTIALLY could
slant a committee's conclusions more toward their self-interest than
objective science would support. That would be much more difficult if
"the public," especially journalists, routinely had access to the
deliberations.

The main downside of applying FACA to the NRC would be the increased
expense of providing venues suitable for public attendance, as opposed to
small conference rooms precisely suited to a known committee membership.
Somewhat related, two- to four-day NRC committee meetings are sometimes
held as retreats at resort facilities. This significantly enhances
brainstorming and committee "team" bonding, but some parts of the public
would undoubtedly protest strongly that such forms of meeting are
wasteful boondoggles (especially if they are government-funded). It also
would not be feasible to make such resort facilities available to anyone
from the public who might choose to attend. In fact, it would probably
attract people for the facilities who would have no interest in the
intense deliberations which actually occur. On the plus side, the known
potential for such retreats is a minor but quite welcome incentive to
those experts who are donating their time, effort, and expertise for the
public benefit.

In short, application of FACA to the NRC would require solving some real
and vexing problems, but not any of the ones cited in the Nature
editorial. More positively, it would also have a very chilling effect on
the extensive POTENTIAL for abuse which lies hidden in the present system
of NRC operation. In my experience, the potential for NRC abuse seldom
gets manifested in actual malfeasance. The problem is that we have no
systematic way of knowing what abuse potential is present, when it rears
its ugly head, or how much damage may result.
I personally hope that the courts DO apply FACA to NRC studies which are
government-funded - and that reasonable and intelligent solutions are
applied to the real problems.

John Gardenier
"May your results have practical as well as statistical significance."

[. . .]

>>> Posting number 2178, dated 3 Jul 1997 12:08:50

Date: Thu, 3 Jul 1997 12:08:50 +0100
Reply-To: Discussion of Fraud in Science
Sender: Discussion of Fraud in Science
From: Ted Gerrard
Subject: Re: scientific norms
Comments: cc: peter.berthold@uni-konstanz.de, bcarling@chall.co.uk,
          nature@nature.com, nature@natureny.com, nature@naturedc.com,
          news@newscientist.com, pdzdtp@pdn1.gene.nottingham.ac.uk,
          w.sutherland@uea.ac.uk
Mime-Version: 1.0
Content-Type: text/plain; charset="us-ascii"

Reiner Grundmann of the Max Planck Institute for the Study of Societies
raised an interesting topic on 24th April which prompted considerable
debate:

>Is anyone aware of a study which examines the historical development of
>dishonesty in science in quantitative terms? In other words, is there a real
>basis for the recent emergence of ethics committees and codes of conduct? Or
>is this just another cultural bandwagon?

May I belatedly add my two bits' worth? Reiner may well find food for
further research in his own backyard. Professor Dr. Peter Berthold of the
Max Planck Institute (e-mail peter.berthold@uni-konstanz.de), exposed by
me as a fraudster on Scifraud in 1995, is still employed by the MPI. One
of the reasons he and several other German animal behaviourists are still
in situ is the ABSENCE of an effective German ethics committee. German
banks are obviously still very easy to rob!

Ted Gerrard.

*********************************************************
E. C. Gerrard
Ornithology Section
Museu Municipal do Funchal (Historia Natural)
Rua da Mouraria, 31
9000 FUNCHAL, MADEIRA, PORTUGAL
Tel.: +351-91-792591  Fax: +351-91-225180
e-mail: egerrard@tethys.uma.pt
WWW page: http://www.mmf.uma.pt/~egerrard/

[. . .]
>>> Posting number 2419, dated 22 Jul 1997 19:15:27 Date: Tue, 22 Jul 1997 19:15:27 +0000 Reply-To: Discussion of Fraud in Science Sender: Discussion of Fraud in Science From: Neville Goodman Subject: bad stats - fraud? MIME-Version: 1.0 Content-Type: TEXT/PLAIN; CHARSET=US-ASCII John Gardenier commented on my saying: "Mostly this [ie, some misuses of statistics] is not fraud; mostly it's just a blind belief in p<0.05." He commented: >>>OK, let us stipulate that the word "fraud" presupposes that a scientist knows what he or she is doing and deliberately sets out to deceive peers about their research. Let us further stipulate that some of those submitting papers which Neville referred to were merely ignorant about statistics and used it as glossy paint for appearances' sake rather than as serious research methodology. Is that really innocent? Is ignorance of research methodology an acceptable excuse for shoddy work? Is it not fraud to claim scientific results when one merely plays at research rather than carries it out assiduously? How would Neville feel, I wonder, if the same research had wonderful work in statistics, but was very sloppy about the chemical formulations or quality control of the anesthetics used in the research?<<< I think John is absolutely correct. At many meetings I simply despair of the level of ignorance, not just of statistics but also of the methods the researchers have often themselves used. Some people, when asked what the limits of measurement are, or what the coefficient of variability is, simply don't know. It is a scandal, though I don't consider it a fraud - in the sense that the perpetrators have not set out to deceive. This is really a discussion about semantics, and I might add that if these people were accused of fraud, they'd have only themselves to blame. Richard Feynman pointed out that the easiest person to fool is yourself, and these guys are doing it all the time. 
I'm clear about why it happens: these people need to do "research" for clinical advancement. I regard it as one of the biggest wastes of time and energy (and destroyers of scientific probity) in UK medicine. To pick up a sentence from Robert Barasch's posting: >>>The danger in all of this is, of course, that the present epistemological zeitgeist gives young people the notion that one need only find plausible explanations to gain credibility for an argument.<<< I'd go further, and suggest that one can use one's own motives as a plausible way of gaining credibility. Let me give another example of this sort of thing. Most of us in clinical medicine now realise that you can't simply remove data that don't fit in with hypotheses. So, in a clinical study of (say) 80 patients, we must admit it if 15 patients don't make it to the end of the study. Similarly, because of biological variability, we know that small studies are unreliable because they can throw up false positives and false negatives far too easily. But I've noticed that n is rarely more than 5 or 6 in most of the test-tube work. Now, of course, some things are so blindingly obvious that 6 flips of the coin are enough (ie, the coin is heavily biased). But often this work does have wide variability, so the coin isn't all that heavily biased. Funnily enough, n=5 is the minimum you need to reach p<0.05. At the last meeting I went to, where these guys were presenting yet more n=6 stuff, I asked, "Did all your experiments work?" And the presenter answered, "No"! This opens up the possibility that some of the experiments that "didn't work" simply didn't fit with the hypothesis. I suggested that in future presentations, they really ought to give a history of the experiment as well as the results of the honed experiment. The next meeting is in the autumn, and I bet they will once again present n=6 results, without mentioning the cells that died, the responses that failed, and so on. 
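[Editor's aside: the coin-flip arithmetic behind the "n=5 minimum" remark above can be checked directly. A minimal sketch, using the posting's own framing of n identical observations as n identical flips of a fair coin; the one-sided sign-test p-value is then (1/2)^n:]

```python
# If all n observations go the same way, the one-sided sign-test
# p-value is (1/2)**n: the chance that a fair coin gives n flips
# all in the chosen direction. Scan for the smallest n below 0.05.
for n in range(3, 8):
    p = 0.5 ** n
    print(f"n={n}: p={p:.5f}  p<0.05: {p < 0.05}")

# n=4 gives p=0.0625 (not significant); n=5 gives p=0.03125,
# so n=5 is indeed the smallest sample that can reach p<0.05.
```

Note that this holds for the one-sided case; a two-sided test doubles the p-value, pushing the minimum to n=6.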
And for those of you who scorn clinical research and think that pure science in Nature is the stuff: there was a paper in last week's issue (I think) with graphs showing standard error of the mean on n=4. Well, perhaps it's not fraudulent to do that; but what is the value of a summary measure of just 4 observations? Why didn't they show all their data? Especially as the SEM quadrupled between the start and finish of the graph, suggesting to me that only 2 of their preparations showed a change in value. (I'm sorry I can't quote the actual graph, but I finished with my copy - perhaps someone could flick through for it: issue of 10th or 17th, one of the Letters, fig on bottom left of a right-hand page.) cheers, Neville Dr Neville W Goodman Consultant Anaesthetist Southmead Hospital BS10 5NB UK Nev.W.Goodman@REDACTED.ac.uk [. . .] >>> Posting number 3092, dated 25 Feb 1998 23:56:57 Date: Wed, 25 Feb 1998 23:56:57 -0500 Reply-To: siano@REDACTED.med.upenn.edu Sender: Discussion of Fraud in Science From: Brian Siano Organization: University of Pennsylvania CCEB Subject: Re: Political Correctness MIME-Version: 1.0 Content-Type: text/plain; charset=us-ascii Content-Transfer-Encoding: 7bit Al Higgins wrote: > > Political Correctness > > Here's a case involving a major figure in psychology who made > the mistake of living too long. His early writings (the first > edition of his Beyondism is dated 1933) reveal their social > values. And he is being taken to task for those early words. Should > he be? Are we all to be politically correct by today's standards? > What will the standards be fifty or sixty years from > now? > > \Holden, Constance. "Cattell Relinquishes Psychology Award," > Science 279 (6 February 1998), p. 811.\ > Al, methinks you may have been misled by Constance Holden's account. If this post were the only thing I'd heard on the matter, I'd figure that Cattell was only guilty of sharing the beliefs of his time. But that sort of whitewashes Cattell in a BIG way. 
Yes, back in the 1930s, he did advance theories of the relative superiority of races: claims about Nordics being natural leaders, that Jews were marked with a "crafty spirit of calculation" and gave Europeans a "feeling of strangeness," that sort of thing. Back in the 1930s, Cattell's main concern was race; he even denounced advocates of meritocracy by saying that "to treat alien individuals as if they belonged to the same race, simply because their intelligence is on the same high or low level, is a mistake, for constitutional differences of greater importance are being overlooked." (Psychology and Social Progress, 1933). By 1937, Cattell was praising the eugenics programs of the Third Reich. In 1938, Cattell wrote that the rise of Germany, Italy and Japan "should be welcomed by the religious man as reassuring evidence that in spite of modern wealth and ease, we shall not be allowed to sink into stagnation or adapt foolish social practices in fatal detachment from the stream of evolution." This was when Germany had confined Jews to ghettos. Cattell issued his book _A New Morality from Science: Beyondism_ in 1972, during the Jensen controversy. In this book, Cattell outlined the same racial theories he'd advocated in the 1930s, only now he talked about selection for individualism in European races, made only one reference to Jews (comparing their "Chosen Race" claim to the Third Reich's ideology), and not surprisingly, talked about how only a few men and women could comprehend this astounding message of his. Cattell also advocated the segregation of races until they "diverge into distinct non-interbreeding species." This was in _1972_, not the mid-1930s. Needless to say, Cattell has always been eager to dismiss his critics without exception as being "politically motivated," as opposed to his own scientific disinterest. It's not surprising that the Pioneer Fund was willing to provide Cattell with money. 
Cattell also turned up as an editorial board member of _Mankind Quarterly_, a race-science oriented journal, which published most of his race-and-policy ravings. (My source on most of the above is William Tucker's _The Science and Politics of Racial Research_, 1994.) Now, maybe Cattell has done work that's not affected by his notions of racial management, and perhaps those ideas are of scientific merit. Maybe, like Shockley and his transistor, we can separate it from his nuttiness and give him an award he may very well deserve. But characterizing him as a man who had this foolishness in him only before the war, and should be forgiven, is utterly without basis in fact. Throughout his career, Cattell has advocated political policies of sterilization and racial segregation, and he has continued to do so to the present day-- regardless of having seen the horrifying consequences of such policies, and regardless of a lot of scientific evidence. (Frankly, Constance Holden at _Science_ has a track record of promoting biological-determinist researchers, and it doesn't surprise me that she'd go easy on Cattell.) [. . .] >>> Posting number 3508, dated 19 Aug 1998 16:55:11 Date: Wed, 19 Aug 1998 16:55:11 -0400 Reply-To: Discussion of Fraud in Science Sender: Discussion of Fraud in Science From: Al Higgins Organization: Sociology Dept., SUNYA Subject: (Fwd) blind studies MIME-version: 1.0 Content-type: text/plain; charset=US-ASCII Content-transfer-encoding: 7BIT Here after some delay is a posting to Scifraud. Al ------- Forwarded Message Follows ------- Date: Wed, 19 Aug 1998 19:11:25 +0000 From: Neville Goodman Subject: blind studies To: SciFraud postings Priority: NORMAL Jeff Lee cited Sheldrake, Rupert, 1998, Experimenter Effects in Scientific Research: How Widely Are They Neglected?: Journal of Scientific Exploration, 1998;12(1): 73-78. A colleague and I are currently trying to get a paper accepted by a journal that publishes both clinical and experimental studies. 
We surveyed 3 anaesthesia journals. In almost all the clinical studies, there was randomization, blinding and reporting of withdrawals; in almost none of the experimental studies was there randomization, blinding or reporting of experiments that didn't work. Rarely did an experimental study give any reason why that number of experiments was done; sometimes it was very difficult to divine this. On first submission of our findings to the journal, a scientist reviewer was scathing in his criticism (a paper of no scientific worth). The nub of his argument was that because randomisation could not be applied to the examples he chose to give, randomisation was therefore unnecessary. Nor did he (I presume it was a he) think it necessary to report failures (would make the papers unwieldy). Nowhere did he acknowledge that bias was a factor in experimental work. I shall let SciFrauders know the eventual fate of the paper. I wrote a commentary for Nature on the same subject, which was rejected overnight after E-mail submission. The editor wished me luck with submission elsewhere: seeing as the sole subject of the paper was papers published in Nature, I think the luck is unlikely to hold. Cheers, Neville Dr Neville W Goodman Consultant Anaesthetist Southmead Hospital BS10 5NB UK Nev.W.Goodman@REDACTED.ac.uk "There once was a brave academic who was wont to deliver polemic on the farce and the fraud which most people ignored that, alas, had become epidemic." (AMSB of NWG, Xmas 95) __________________________________________________ A. C. Higgins ach13@cnsvax.albany.edu College of Arts and Sciences VOX: 518-442-4678 Sociology Department FAX: 518-442-4936 University At Albany Albany, NY, USA 12222 [. . .] >>> Posting number 3658, dated 20 Oct 1998 22:54:04 Date: Tue, 20 Oct 1998 22:54:04 -0400 Reply-To: Discussion of Fraud in Science Sender: Discussion of Fraud in Science From: George Steele-Perkins Subject: Re: Intent or sloppiness? 
In-Reply-To: <199810091654.RAA11197@florence.pavilion.net> Mime-Version: 1.0 Content-Type: text/plain; charset="us-ascii" Simon Birnstingl wrote: >Richard Feynman once advocated that an author should include all the >counter-arguments to his claims in a paper. Of course, he was a physicist >and published short papers when compared with the output of many in the >biological sciences, for example, but I think his point about rigor applies >here. It occurs to me, reading the details of the Baltimore case in the >recent postings, that a lack of rigor is the stem of the case: I disagree. Both outright fraud and lesser deviations from the scientific tradition are the stem of the case, not merely a "lack of rigor." >Imanishi-Kari made mistakes in a cumbersome paper, they were not spotted by >others involved and were published. I disagree. Imanishi-Kari fabricated and falsified data. The other authors didn't "spot" the "mistakes" because none bothered to check the original laboratory notebook data. Also, Imanishi-Kari published the Cell paper despite Margot O'Toole and other lab members having been unable to repeat some of its crucial results. Margot O'Toole worked in Imanishi-Kari's lab. >When it was discovered and the errors >pointed out they tried to cover up *and this is the first occurrence of >misconduct in the case*. I disagree. The cover-up came after *fraud*. Also David Weaver misrepresented his Northern blot (RNA analyses) experiments in order to make them jibe with Imanishi-Kari's fabricated serology data. >Some questions arise: How were the errors missed by pre-publication >reviewers? The errors were in Imanishi-Kari's representation of laboratory notebook data in the Cell paper manuscript. During peer review, manuscript data are presumed authentic. The paper, on its face, is merely a bit sloppy. It's not surprising, therefore, that it passed through Cell's peer review as well as the authors' review (since none looked at the original data). 
Although O'Toole reviewed the manuscript, she believed her own failure to replicate some of the results was *her own* human error. O'Toole did not corroborate the manuscript's data with laboratory notebook data. Not until after the Cell paper was published did O'Toole stumble upon the famous "17 pages." The "17 pages" contain the original data that Imanishi-Kari misrepresented in the Cell paper. Margot challenged the authors to publish a correction of the Cell paper's errors. However, since the errors were so blatant and significant - undermining the central claim - only fraud or one very special chimpanzee could have caused them. Months after Margot O'Toole lost her job, she gave Walter Stewart and Ned Feder a copy of these famous pages. Stewart and Feder learned molecular immunology on their own. They tried for several years to publish their analysis of the "17 pages," "Original data contradict published claims: Analysis of a recent paper." Cell, Science, and Nature refused. On 28 March 1989, after the OSI draft report was leaked, Nature published an abbreviated version of it. The editor (Maddox) changed the title to "Analysis of a whistle blowing" without informing its readership. The complete version and the "17 pages" themselves were published as part of Dingell's hearings transcripts by the Government Printing Office. >How well supervised was Imanishi-Kari? Imanishi-Kari was an assistant professor at MIT at the time. Professors aren't supervised. >In how many other labs >could these same events easily have occurred? Happens all the time. Cheers, George gsperkins@REDACTED.net