Six items about ESP appear below:

1. Extra-Sensory Perception in the Vision-Impaired and in Sighted Subjects – L. Storm
2. A Flawed ESP Experiment – H. Edwards
3. Harry Edwards: Fallacious Remarks – Dr B. Potter
4. Response to Criticisms – L. Storm
5. ESP Test Was Biased (reply to Dr Potter) – H. Edwards
6. The Concept of Partial Accuracy in Paranormal Experiments – L. Storm


Extra-Sensory Perception in the Vision-Impaired and in Sighted Subjects

Lance Storm

(Investigator 75, 2000 November)
   

Introduction

An experiment was conducted in the Department of Psychology at Adelaide University using a group of vision-impaired subjects and a group of sighted subjects. The University's Ethics Subcommittee gave approval for the experiment to proceed.

The first aim of the experiment was to determine whether there was any statistical evidence of ESP in vision-impaired and sighted subjects. The second aim was to determine whether vision-impaired subjects would perform better on the ESP task than sighted subjects. It was thought that ESP ability might be better developed in vision-impaired people than in sighted people, in the same way that other senses (e.g. smell, touch) often become more acute as compensation for an impairment.

A total of 84 participants volunteered for the experiment. There were 42 participants in the vision-impaired group and 42 in the sighted group, which acted as a control group so that the performances of the two groups could be compared. The groups were matched on age and sex: each had an average age of 48 years (range: 16 to 83 years) and the same sex ratio (52% male, 48% female).
 

Method

Prior to each session, a target picture (a hand-drawn image based on a word randomly selected from a dictionary) was chosen from a set of four such pictures and concealed in a target envelope. The four pictures were also concealed together in a second envelope. The contents of both envelopes were never made known to the experimenter prior to the trial. (Note that the four-picture set and the target picture were each wrapped in aluminium foil before being concealed in their respective manila envelopes.)
 

Procedure

1.  On each trial, a target envelope was presented to the participant. The participant attempted to describe the picture concealed in this envelope before he/she was allowed to know what it was.

2.  The four pictures (one target plus three decoys) were removed from their envelope, and were then described (or shown) to the participant. The participant ranked (as '#1') the picture that best corresponded to his/her previous descriptions. The second most preferred picture was ranked as '#2', and so on, until all four pictures had been ranked '#1' to '#4'.

3.  The target picture was then removed from its envelope so that its rank score could be determined.
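Under this procedure, a participant with no paranormal information should be equally likely to rank the true target #1, #2, #3 or #4, so the chance expectation is a mean rank of 2.5 and a 25% 'direct hit' rate on rank #1. A minimal simulation of that chance baseline (a sketch in Python; the function name and trial count are illustrative only):

    import random

    # With no ESP, the target's rank among the four pictures is equally
    # likely to be 1, 2, 3 or 4 on every trial.
    def simulate_chance_baseline(n_trials=100_000, seed=0):
        rng = random.Random(seed)
        ranks = [rng.randint(1, 4) for _ in range(n_trials)]
        mean_rank = sum(ranks) / n_trials
        hit_rate = ranks.count(1) / n_trials  # proportion of rank-#1 'hits'
        return mean_rank, hit_rate

    mean_rank, hit_rate = simulate_chance_baseline()
    print(f"mean rank   ~ {mean_rank:.3f}  (chance expectation 2.5)")
    print(f"rank-1 rate ~ {hit_rate:.3f}  (chance expectation 0.25)")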
 

Hypotheses (with Results)

1. The whole sample would perform above chance on the ESP task.

Using subjects' rank scores, scoring across all 84 subjects was statistically significant, meaning that the scoring rate was higher than chance alone would explain. There was therefore an anomalous effect in need of an explanation. Conventionally, this effect is called 'extra-sensory perception' (ESP).

2. Vision-impaired subjects would perform better on the ESP task than sighted subjects.

Using subjects' rank scores, vision-impaired subjects did not perform as well as sighted subjects. The hypothesis was not confirmed. However, all 84 subjects, and the two groups on their own, showed an above-chance tendency to choose, or very nearly choose, the correct target. There was no evidence that totally blind subjects scored worse than partially blind subjects.

Conclusion

The overall performance of all 84 subjects was statistically significant based on the subjects' own scores. This means that although the sighted subjects performed significantly better than the vision-impaired subjects, all 84 subjects contributed to the overall significant result. This result supports the hypothesis that some kind of anomalous performance (e.g. ESP) took place during the experiment. The significant shift towards ranks #1 and #2 may also imply the effects of ESP in all subjects (vision-impaired and sighted).

The reason vision-impaired subjects did not perform as well as sighted subjects may have to do with personality differences between the two groups. It was noted, for example, that overall levels of enthusiasm and confidence were not as high amongst the vision-impaired subjects as they were amongst the sighted subjects. Past research has found that attitude and personality generally have a significant influence on the outcome of ESP tasks.

Therefore, it cannot be concluded from the results of one experiment that sighted people generally have better developed ESP than vision-impaired people, because even well developed skills can suffer if confidence and enthusiasm are low. In future experimentation of this kind, priority must be given to matching subjects in comparison groups on personality and attitudinal variables so that the comparison is fair for all subjects.


Acknowledgments

Vision-impaired participants were contacted with the assistance of Townsend House, the Royal Society for the Blind, the Blind Welfare Association, Guide Dogs Association, and Radio Station 5RPH. Elderly sighted participants were contacted through the assistance of the Probus Organisation.

My thanks go to all the participants in this experiment. Special thanks to Michael Thalbourne, Terry Furness, Craig Gordon, Roley Stuart, Margaret Pyyvaara, Peter Greco, Phil van der Peer, Oystein Dale, Anita Idol, Paul Barrow, and the Probus Organisation, especially Mr. Brian Moore.
 
 



A Flawed ESP Experiment:
The Result of a Sloppy Protocol and Bias?

Harry Edwards

(Investigator 76, 2001 January)


I refer to Lance Storm's report of an experiment conducted in the Department of Psychology at Adelaide University. (Investigator #75)  Its purpose was (1) to confirm the supposed phenomenon of ESP and (2) to see whether vision-impaired subjects would perform better than sighted subjects.

In my opinion the experiment failed on three counts: first, it assumes that ESP is a fact; second, it uses a flawed protocol; and third, the sample size was too small. Each of the 84 participants was tested once and the results lumped together. On the basis of a single correct guess or a "near enough" guess they were then declared to have "extra-sensory perception"!

The sample size was ridiculously small and produced misleading results. Consider tossing a coin. Over, say, 100 tosses it will come down approximately 50 heads and 50 tails. The fewer the tosses, the greater the proportional discrepancy; the more tosses, the more even the distribution. Basil Shackleton, who in the 1930s claimed ESP powers, was the subject of over half a million tests before being accepted as genuine. Even then the experiment was finally exposed as a fraud – the experimenter conducting the tests had faked the results.
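As an aside, the coin-tossing behaviour described above is easily checked by simulation: the proportion of heads scatters widely over short runs and settles towards 50% over long ones (a minimal Python sketch; the run counts are illustrative only):

    import random

    # The spread of the heads proportion shrinks as the number of tosses
    # grows (its standard deviation is 0.5 / sqrt(n) for a fair coin).
    rng = random.Random(1)
    for n in (10, 100, 1000, 10000):
        proportions = []
        for _ in range(200):  # 200 independent runs of n tosses each
            heads = sum(rng.randint(0, 1) for _ in range(n))
            proportions.append(heads / n)
        print(f"n={n:5d}: heads proportion ranged "
              f"from {min(proportions):.3f} to {max(proportions):.3f}")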

The Adelaide test procedure started with the target envelope being presented to each participant who, after describing what they visualised the picture in the envelope to be, was then allowed to know what it was. We are not told how many, if any for that matter, guessed the contents correctly. This was ipso facto (albeit unrecognised and unacknowledged by the experimenter) a test of ESP. I would be interested to know how many of the participants correctly described the target picture prior to it being revealed. That in itself would have proved x-ray vision without the necessity of any further testing.

The methodology was also flawed as it allowed second guessing. The report informs us that "... all 84 subjects, and the two groups on their own, showed an above chance tendency to choose, or very nearly choose, the correct target."  "Nearly choose" implies that misses were also being counted as hits, thus distorting the results. Trees are trees, but a Norfolk Island pine bears no resemblance to a coconut tree – close enough is not near enough.

The writer erroneously concludes from the results that some kind of anomalous performance (e.g. ESP) took place, and that the results imply the effects of ESP in all subjects (vision-impaired and sighted). He also suggests that the reason vision-impaired subjects did not perform as well as sighted subjects may be personality differences between the two groups.

I believe it would be more realistic to say that a person with 20:20 vision would stand a better chance simply because they had a much clearer view of the pictures.

Some of the vision-impaired subjects appear to have had little if any vision at all as the decoys and the target picture had to be described to some of them rather than viewed.

The experiment does raise some interesting questions, however.

What does a person blind from birth "see" in their mind's eye when given a description of a picture by a sighted person?

How would the sighted person know that which the sightless person perceives is a true representation of the reality?

How does a blind person conceive colours when they've no basic references with which to make comparisons?

It's a pity the writer didn't include in his article copies of the pictures used in the experiment and a table showing the rankings.

Despite years of comprehensive testing since the term "extra-sensory perception" was coined by Dr J B Rhine back in the 1930s, the results of ESP experiments have shown no significant deviation from chance, nor have they conclusively proven that such a faculty exists. When experimenters have claimed to have evidence to that effect, attempts at replication have been unsuccessful.

The joke is that even when a "significant deviation" from chance is alleged to have been noted (and it's inevitably pathetically small), to what good use can the discovery be put? ESP is nothing more than a guessing game.

See also ESP 55/5 and ESP and Mind 68/7.
 




 

HARRY EDWARDS – FALLACIOUS REMARKS

[Letter to editor]

(Investigator 77, 2001 March)


Investigator No 76 arrived in time to provide a good read on Xmas day (whilst grandchildren were busy unwrapping their various presents!). I was especially pleased to have the five pages of VITA from Gerald R Bergman, which I found a fascinating compilation.

At first glance I was quite excited to read Harry Edwards' heading "A Flawed ESP Experiment: The Result of a Sloppy Protocol and Bias?" But as I read on my enthusiasm rapidly evaporated. In almost every paragraph Harry not only misrepresents what Lance Storm has said, but is himself guilty of equally fallacious statistical remarks. Contrary to his third line (No. 76, p. 52), experiments are always conducted to test hypotheses, never 'to confirm' them – Lance quite correctly says this. Nor does he 'assume ESP' (line 7).

Also, Storm's sample is not "ridiculously small". If, for example, I were seeking data about the beliefs of Jehovah's Witnesses vis-à-vis the existence of an 'immortal soul', a sample size of one person would be quite sufficient. Non-parametric tests provide 'degrees of freedom' to get over the sample size difficulty. No study would "prove X-ray vision" (in psychology, physics or any other field of human study).

On the 'interesting questions' raised (according to Edwards) there is so much work that has been done in this field in thousands of cross-cultural studies of human perception, colour and/or otherwise that I am surprised Harry is not familiar with it. And what can he mean by "a true representation of reality"?

Bob Potter
United Kingdom
 

 



Response to Criticisms against Storm's ESP Experiment with Vision-Impaired and Sighted Subjects

Lance Storm

(Investigator 77, 2001 March)


An ESP experiment conducted at Adelaide University (reported in Investigator #75), using sighted and vision-impaired participants, has generated some interest and criticism amongst Investigator readers. The following responses will hopefully serve to clarify issues raised by those readers.

Dr. Bob Potter of Lewes College, England (personal communication, 10th November 2000), criticizes my use of the word 'hypothesis' in the Investigator write-up. He considers that the word should be used to refer to effects in a population only, not in a sample. Traditionally, a hypothesis is proposed and a sample, ostensibly taken from that population, is then tested for evidence of the hypothesized effect. A so-called significant result from the tested sample is taken as support for the hypothesis so that, by inference, the effect is claimed as possibly existing in the population. The significant result, however, does not say anything about the truth status of the hypothesis. Apparently, even in the academic community there is some confusion over these differences.

Bob Potter acknowledges that my article was written for a 'lay' readership, but feels that lines 5 and 6 in the Conclusion (see Investigator #75, page 48) should read something like the following:

"This result supports the hypothesis that some kind of anomalous performance (e.g., ESP) exists in the population from which the sample was drawn."
In the original text, I claimed that:
"This result supports the hypothesis that some kind of anomalous performance (e.g., ESP) took place during the experiment,"
 implying that the hypothesis was only about the sample.

I accept the traditional convention, and remind Investigator readers that my concluding sentence was meant to alert them to the idea that a hypothesized effect (ostensibly ESP) may exist in a general way (i.e., in the population) because it had just been observed specifically (viz., by way of experimentation) in a group representative of that same population.

Harry Edwards (Investigator #76) raised a number of issues:

Firstly, he states that the experiment "assumes that ESP is a fact," and thus claims that this assumption means the experiment is a failure. Taken logically, this claim does not follow. An experiment cannot "fail" merely because someone detects, or thinks they detect, an assumption. Mr. Edwards perhaps means the experiment is not valid, but again, such a claim would not follow, since all experiments test assumptions (stated as hypotheses).

Secondly, Mr. Edwards claims that there were flaws in the protocol, but in fact he

(a) only identifies a problem in how the results were presented,
(b) regards "second guessing" as a way to count "misses" as "hits", and
(c) considers that vision-impaired subjects were disadvantaged more by a lack of clear vision than personality differences.

Only (c) refers specifically to possible flaws in the protocol.

Both (a) and (b) can be addressed at the same time. There was no need to present data on how many subjects "guessed the contents [of the envelopes] correctly" because subjects were rewarded for partial accuracy. Partial accuracy is reflected in rank-scores other than rank #1: the lower the rank, the more accurate the guess. Tables 1, 2 and 3 show the rank-scores for vision-impaired subjects, sighted subjects, and all subjects, respectively.

Table 1
Target Rank Scores (Vision-Impaired Subjects)

 Rank      No. of Responses     Percent
 1                 8                19.0
 2                14                33.3
 3                11                26.2
 4                 9                21.4
 Total            42               100.0

Table 2
Target Rank Scores (Sighted Subjects)

 Rank      No. of Responses     Percent
 1                14                33.3
 2                15                35.7
 3                 9                21.4
 4                 4                 9.5
 Total            42               100.0

Table 3
Target Rank Scores (All Subjects)

 Rank      No. of Responses     Percent
 1                22                26.2
 2                29                34.5
 3                20                23.8
 4                13                15.5
 Total            84               100.0


In all three tables, trends towards lower ranks can be seen, and these trends were significant in all three cases. Sighted participants performed significantly better than vision-impaired participants.
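As an illustration of the kind of significance test involved, the Table 3 counts can be re-analysed with a standard directional sum-of-ranks test (a sketch only, offered as an approximation rather than the exact analysis used in the study). Under chance each rank is equally likely, so the expected mean rank is 2.5 with a per-trial variance of 1.25:

    from math import sqrt
    from statistics import NormalDist

    # Table 3: number of responses at each rank (all 84 subjects).
    counts = {1: 22, 2: 29, 3: 20, 4: 13}
    n = sum(counts.values())                          # 84 trials
    rank_sum = sum(r * c for r, c in counts.items())  # observed sum of ranks

    mu = 2.5 * n            # expected sum of ranks under chance
    sigma = sqrt(1.25 * n)  # per-trial variance of a uniform rank is 1.25
    z = (rank_sum - mu) / sigma
    p = NormalDist().cdf(z)  # lower tail: low ranks mean accurate guesses

    print(f"sum of ranks = {rank_sum} (chance expectation {mu:.0f})")
    print(f"z = {z:.2f}, one-tailed p = {p:.3f}")  # z ~ -1.76, p ~ .04

On this test the whole-sample trend toward low ranks comes out below the conventional .05 level, consistent with the overall significance reported above.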

A few words about the ranks method: the ranks method is very fair because it takes into consideration the fact that ESP has been found to be a weak effect (as has been demonstrated repeatedly – see the two articles listed below). A weak effect means it is not reasonable to expect 100% accuracy on targeting. Just as it is not fair to judge normal perception (e.g., vision) without considering contingent factors (e.g., the dimness of the ambient lighting, and/or the clarity of vision in each subject), ESP should be considered an effect that may also be affected by environmental factors, and may be stronger in some people and weaker in others.

Consider this scenario: two headlights in the distance on a foggy night may be those of an oncoming vehicle, or they may be the lights of two motorbikes riding alongside each other. Even if I have 20:20 vision I should not be expected to guess correctly first time. Or if my vision is poor I might not be able to get it right at all, but I might make out some aspects of the phenomenon (i.e., the lights).

Thus, claims (a) and (b) are unjustified.

As regards (c), there was no evidence that good vision gives a subject an unfair advantage over subjects with poor vision. I found that 56% of totally blind subjects recorded ranks #1 and #2, compared to only 50% of partially impaired subjects on the same two ranks, which is hardly evidence of a 'sighted' advantage. By inference, then, the more successful group (the sighted subjects) appeared to have something working in their favour that cannot simply be better vision.

Thirdly, the size of the sample is criticized as being too small. As it happens, it is a statistical fact that effects are hardest to demonstrate in small samples if significance-testing procedures are utilized. Harry Edwards' example of coin tossing can be used to demonstrate this fact. In a small sample (10 coin-tosses), nine correct guesses out of ten (a 90% success rate!) would be necessary to get a significant result (odds of roughly 1 in 90 by chance alone), but much the same odds can be achieved with a mere 62% success rate in a larger sample of 100 coin-tosses!
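Those tail probabilities can be verified with an exact binomial calculation (a minimal Python sketch):

    from math import comb

    # Exact chance probability of at least k successes in n fair coin-tosses.
    def binomial_tail(k, n):
        return sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n

    for k, n in ((9, 10), (62, 100)):
        p = binomial_tail(k, n)
        print(f"P(at least {k} of {n}) = {p:.4f}  (about 1 in {1 / p:.0f})")

Both come out near p = .011 – about 1 in 90 – which is why the two success rates carry comparable evidential weight despite the very different sample sizes.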

As a general rule for any kind of effect (normal or paranormal), when there really is a hypothesized effect, the smaller the sample, the greater the risk of failing to find it. There was a very big risk of failure in using only 84 subjects, but I had no choice because it was very hard to find vision-impaired subjects.

Some additional comments:

(1) A 'Norfolk Island pine' may be nothing like a 'coconut palm', but for the same reasons given above, you can attribute partial success (e.g., rank #2), and legitimately so, to a subject who guessed 'tree' as a more general category.

(2) X-ray vision could not have been proved on the basis of success in some subjects without testing to see whether the number of 'correct' guesses was the result of chance alone or was sufficiently high to be statistically significant. These same tests of significance were conducted in the experiment.

(3) Mr. Edwards' claim that "the writer erroneously concludes...that some kind of anomalous performance (e.g., ESP) took place" is not acceptable on the grounds that he has not put forward a reasonable, more parsimonious, 'normal' explanation of the significant result. A significant result is taken by statisticians as meaning that an effect other than chance has been shown, and such a result must be explained. I gave ESP as an example of such an anomalous effect. Only when all possible rival explanations are satisfactorily refuted can an experimenter claim that evidence of an anomalous effect has been found in the experiment. No normal sensory modality could give a subject an unfair advantage on the task, and there is no possible way that any subject could have cheated. Given the nature of the task, the conclusion is justified.

(4)  It was not claimed that "all subjects" demonstrated ESP. All that is needed is a sufficient trend toward lower ranks that proves to be a significant deviation away from chance.

(5) The claim that "the results of ESP experiments have shown no significant deviation from chance nor conclusively proven that such a faculty exists" is unsupported by the literature. ESP effects have not only been demonstrated, but have been shown to be increasingly repeatable. Readers are referred to the following articles for evidence in this regard:

(a) Bem, D. J., & Honorton, C. (1994). Does psi exist? Replicable evidence for an anomalous process of information transfer. Psychological Bulletin, 115, 4-18.

(b) Utts, J. (1991). Replication and meta-analysis in parapsychology. Statistical Science, 6, 363-378.

I have especially chosen these two articles because they appeared in journals that are very rigorous in their selection procedures, and thus maintain a very high status of credibility in the scientific community. Also, these journals are not specifically targeted at a parapsychological community.

(6) Small effect sizes have value in parapsychology, and researchers in the field endeavour to find ways of strengthening these effects. Nevertheless, researchers in other disciplines are not immune to finding small effects. For example:

 (a) An extremely small effect size was found in the aspirin/heart-attack study by the Steering Committee of the Physicians' Health Study Research Group (USA), but it was enough to stop the study on ethical grounds because 45% fewer heart attacks were reported in the experimental group compared to the control group.

 (b) On ethical grounds, the National Heart, Lung, and Blood Institute (USA) also discontinued a study testing propranolol because the control group was missing out on the positive effects of the treatment. Again, effect sizes were very small.

These effects are actually smaller than any of the effect sizes reported in the two parapsychological papers listed above. Small effect sizes can therefore refer to meaningful and important phenomena. What matters is which institutes or organizations are affected by the findings, and how seriously the findings are taken.
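To put figures on the aspirin example: using the counts usually cited from the 1988 preliminary report (104 heart attacks among 11,037 physicians taking aspirin versus 189 among 11,034 taking placebo; treat these as reported values rather than data from this article), the relative risk reduction is about 45% while the correlation-type effect size (phi) is only about .03:

    from math import sqrt

    # 2x2 table for the aspirin component of the Physicians' Health Study
    # (heart-attack counts as usually cited from the 1988 report).
    a, b = 104, 11037 - 104   # aspirin group: event / no event
    c, d = 189, 11034 - 189   # placebo group: event / no event

    risk_reduction = 1 - (a / (a + b)) / (c / (c + d))

    # Phi coefficient: the effect size r for a 2x2 table.
    phi = (a * d - b * c) / sqrt((a + b) * (c + d) * (a + c) * (b + d))

    print(f"relative risk reduction = {risk_reduction:.0%}")  # about 45%
    print(f"effect size |phi|       = {abs(phi):.3f}")        # about 0.034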

(7)  Finally, practical applications of ESP are not far away (see Braud, W. G., & Schlitz, M. A. [1991]. Consciousness interactions with remote biological systems: Anomalous intentionality effects.  Subtle Energies, 2, 1-46). With further research, other applications may arise.

In conclusion, as Bob Potter correctly points out, conventional usage of scientific terms should be adhered to in order not to confuse the reader. The major part of this response, however, was addressed to Harry Edwards. In his critical and skeptical analysis of the ESP experiment he has, one way or another, repeatedly made clichéd and unfounded remarks against the paranormal hypothesis, or erected straw men in order to knock them down by making claims that are simply unwarranted.

There is good evidence of paranormal effects and parapsychologists know that these effects are generally weak. However, given the implications of these effects for science, any effect is of interest, so there is no reason to give up research altogether. Other disciplines suffer the same difficulties. Given all the facts and findings raised above, one is hard-pressed to see how ESP could be as Mr. Edwards says: "nothing more than a guessing game."
 
 



ESP TEST WAS BIASED

Harry Edwards

(Investigator 78, 2001 May)


Apropos Bob Potter's reference to my "Fallacious Remarks" on the ESP experiment (#77, page 5).

Bob disputes my conclusion that the experiment(er) assumed ESP to be a fact. This comment was based on the statement made by Lance Storm, the author, on page 47 lines 8 and 9. It reads, "It was thought that ESP ability in vision-impaired people might be better developed ..."  Doesn't this imply that the experimenter believed that those subjects already have some ESP ability?  Once this bias creeps into a test, objectivity flies out of the window. Ways are then sought to enhance any findings – second guessing is a typical ploy.

As Bob appears familiar with material on human perception, colour and/or otherwise perhaps he can answer the three questions I posed in #76 page 53. I for one would certainly like to know as would other readers.

Bob also refers to my 'fallacious statistical remarks'. Does he refute the findings of those who exposed the fake Basil Shackleton experiments? Does he disagree with the findings of the many who unsuccessfully attempted to replicate Dr J B Rhine's experiments? And what about the accolades heaped by parapsychologists on the most famous of them all – Uri Geller? When tested by psychologists Rebert and Otis, Geller failed to identify a single target in the whole series.

I cannot accede to Lance Storm's contention that while a Norfolk Island pine may be nothing like a coconut palm, it is legitimate to attribute a partial success simply because they're both trees. Suppose I had picked as examples a 20 cm Japanese bonsai and a 100-metre Canadian sequoia? Imagine accepting the criterion 'near enough is close enough' and then using ESP to identify aircraft during hostilities – friend or foe, they're all aeroplanes – would near enough be close enough?

A picture of a woman, an adolescent, an ape, a standing bear, an Emperor penguin, a deformed tree, a mannequin, a grandfather clock or a pair of long-johns hanging from a clothes line would all probably look similar to Mr Magoo. So if the picture was of a male wearing a tutu and a bowler hat, would any of the above completely dissimilar subjects be counted as a partial success?

Harry Edwards.
Australia – NSW
 
 



The Concept of Partial Accuracy in Paranormal Experiments
and the Nature of the Scientific Method

Lance Storm

(Investigator 79, 2001 July)


This article is written with the general interest of the Investigator readership in mind. But I will use my ESP experiment with the vision-impaired, and criticism of it (see Investigator #75, #76, #77, and #78), as case material to illustrate some points I will make about the nature of parapsychological experimentation. Therefore, I do not use this opportunity exclusively to address Harry Edwards and the recent issues he raised (see Investigator #78).

In fact, I can only refine my previous arguments, because the major assumptions made in regard to the ESP experiment have not been challenged by Mr. Edwards in a way that would undermine the findings. He raised two issues:

(I) "second guessing" (i.e., partial accuracy), and
(II) personal bias.

(I) In terms of super-ordinate categories (such as 'tree'), there is no need to attempt to demarcate subcategories of tree (specific species, for example) if your participant does not refer to subcategories in the first place, regardless of the specificity of the target. Deferment to a super-ordinate category does not annul the possible 'meaning' (i.e., degree of correspondence) of a response. Mr. Edwards' rejection of a premise he constructed, not I – "near enough is close enough" (Investigator #78, p. 8) – does not apply, as I will now illustrate.

For example, Bonsai (for argument's sake, let's call it the target) and Sequoia (the response) may be different, but does that mean they are not similar? Would Mr. Edwards say that Bonsai and Sequoia (target and response, respectively) do not deserve to be considered matches of a type when the three remaining alternatives are 'balloon', 'penguin' and 'grandfather clock'? If someone's 20:20 vision could identify aircraft, but not whether they were “friend or foe,” is there any need to accuse them of being blind?  Unfortunately nothing is perfect in this world, and as I have said before, paranormal effects are not strong – that means we cannot expect perfection at this stage. Mr. Edwards has again set up a straw man in order to knock it down, as he has done previously (see Investigator #76, pp. 52-53).

Parapsychologists do not presume to know how the matching process works, or they would be able to design experiments that yield stronger effect sizes. On the basis of conventional interpretations of statistically significant results, they do not appeal to chance explanations of effects, no matter what the size (the assumption being that there is an effect in need of explanation because the result is unlikely to be a chance outcome). From there, the experimenter simply resigns him- or herself to the fact that even though the results may at a given time appear too nebulous to have applications, subsequent analysis may show that there is much to be gained from the findings. In science, we build up from our findings, from simple to more complex models of how the world may work.

From that position, I now move to my next point.

(II) The scientific method itself requires that experimenters make some assumptions about the universe and the phenomena contained therein. I referred to the idea of the assumption in a previous article (see Investigator #77, p. 45). In that article, I also referred to the so-called hypothesis, of which Bob Potter had much to say (personal communication, 10th November, 2000; see my response to Mr. Potter in Investigator, # 77, p. 44).

An experimenter may take a 'position' on a particular phenomenon, but he or she must, by convention, propose firstly that there is no effect (stated in the so-called Null Hypothesis) and, secondly, that there is an effect (stated in the so-called Alternative Hypothesis). These hypotheses are made irrespective of personal and emotional considerations. They are mutually exclusive and jointly exhaustive of all possible outcomes.
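For the ranking task reported earlier, the two hypotheses can be written out explicitly. Letting R stand for the rank assigned to the true target on a trial, a sketch of the convention (not a quotation from the original protocol) runs:

    H0: Pr(R = k) = 1/4 for each k = 1, 2, 3, 4, so E[R] = 2.5   (no effect beyond chance)
    H1: E[R] < 2.5   (ranks shifted towards the correct target)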

That's the conventional position, but it appears Mr. Edwards holds a counter-position whereby stating one's hypotheses in advance of experimentation is to be viewed with suspicion. His position (which is where his discourse inevitably leads him) is clear when he accuses me of bias in my statement "ESP ability in vision-impaired people might be better developed than sighted subjects" (Investigator #75, p. 47). (The actual hypothesis was: "Vision-impaired subjects will perform better on the ESP task than sighted subjects.") My statement is simply the Alternative Hypothesis, and it could readily have been made under the assumption that I did not believe it to be true. Either way, that is how alternative hypotheses are stated.

We all know (and I include Mr. Edwards, see Investigator, #78, p. 8) that the personal influence of the experimenter may be a hazard from time to time. I submit to the reader that personal bias does not carry the weight that Mr. Edwards thinks it carries. Nor does it result in "ploys" merely to satisfy that bias. Even though a significant effect was found for the whole sample, my personal bias failed to influence the vision-impaired participants to perform significantly better than the sighted participants. If I really had that much influence, I would report nothing but significant results that support all the paranormal hypotheses I have made thus far. Yet this has never happened and is not likely to.

Furthermore, if there really are no paranormal effects, then this will be borne out in the long run. And, that's where the meta-analyses come in. (References to two meta-analyses were given in Investigator #77, p. 48.) I anticipate that Mr. Edwards is likely to criticize the meta-analyses (should he ever read them) because he would say they include large numbers of studies done by parapsychologists, all of whom are tarred with the same brush for apparently cheating, committing fraud, and introducing personal bias. Mr. Edwards knows what it is like to be in this position, and I dare say he does not like it (see Investigator, #70, pp. 40-41). Imagine how it must feel for honest parapsychologists, who have been and are still being accused in the same way.

I also anticipate that Mr. Edwards is still likely to insist and maintain that near enough is not good enough. Is it sensible and realistic to apply such insistence universally? Does anyone know, for example, whether aspirin works 100% of the time? If it doesn't, would it be necessary to boycott the product?

In closing, to use Mr. Edwards' words, "ways" cannot be "sought to enhance any findings" (Investigator #78, p. 8) when the protocols and hypotheses are set up in advance. The ranking method described above is well established. It was not chosen as a last resort to desperately resurrect a failing experiment, or to give some kind of leverage in favour of a personal bias. Some hypotheses were supported; some weren't. The experiment was offered to the readership as possible evidence for paranormal effects. It was conducted under expert supervision, using the best available techniques and methods, established according to scientific convention and peer-review requirements.

Mr. Edwards has offered layperson's criticisms against the experiment, and I rebutted these criticisms. He must now come to terms with the concept of partial accuracy, and learn to make a distinction between weak effects and strong effects. Until then, further responses from me are merely repetitious and therefore redundant.


Hundreds of articles and investigations dealing with the paranormal:

http://ed5015.tripod.com/