There was a series of extremely mild winters when I was working as an ADAS field adviser in coastal Essex in the 1970s. Perhaps as a result of these weather conditions, newly emerged peas were often infested with thrips, and the debate was whether or not they should be controlled. So I organised a field trial to test a range of different insecticides. Every one of the six or seven insecticides tested gave a 3% yield ‘response’ – remarkably consistent results.
There was no chance that such a low response would be statistically significant, but if the differences were real then it was just about worth spraying the crop, particularly if the insecticide could be tank-mixed with a herbicide being applied at the same time. The ADAS entomologists said that as the treated yields were not statistically higher than the untreated yields there was no yield response. The trial was repeated the following year because the results were so debatable.
The same trial the following year gave the same results, with a 3% yield response to every insecticide tested. The discussions over the practical implications of the results led to a big falling out between me and the entomologists. My interpretation was that, because every insecticide tested had consistently given a small increase in two separate years, there was a real response and it might be worth farmers spraying. The entomologists stuck to their guns and said that there was no statistically significant yield response and so infested crops should not be sprayed.
The industry continues to be faced with similar statistical ‘challenges’. When is a possibly cost-effective benefit real if the response to the input is not statistically significant? This question is more likely to be asked where the cost of the input is very low relative to the monetary value of a statistically significant response.
It is a question that is now very pertinent because of the plethora of, let us call them, ‘plant tonics’ that farmers are being encouraged to buy this spring to apply to backward crops. On one hand, they can be expensive in the context of their declared contents. On the other hand, they are very cheap compared to pesticides because they have not gone through a regulatory process. And a worthwhile response from their use is well below the level that can be recorded as statistically significant in a trial. For information, in wheat a yield response of 5% or more is typically required to be statistically significant.
How do we deal with this situation? Some say that the current methods of analysing experiments are not up to the job. However, eminent statisticians have concurred that there are no alternative techniques that can be adopted to resolve this impasse.
One answer may be to modify the current statistical approach, which typically requires 95% confidence before a response is accepted as real. Strictly speaking, this means that a response is declared significant only if there is less than a 5% probability that a difference of that size would have occurred by chance alone.
Perhaps these odds can be changed when the cost of the input is very low compared to the monetary value of a statistically significant response. I’m sure that, in this situation, many farmers would accept lower odds, such as a 75% confidence level, accepting a 25% probability that the response has occurred by chance. This would reduce the size of the response required to get a significant difference. As far as I’m concerned, this is a perfectly valid approach, provided that the probability level used in the analysis is openly declared.
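To put a rough number on this, here is a minimal sketch of how relaxing the confidence level shrinks the smallest response a trial can declare significant. It assumes the 5% wheat threshold quoted earlier and uses a normal approximation for simplicity; a real trial analysis would use the t-distribution with the trial’s residual degrees of freedom, so the figures are illustrative only.

```python
from statistics import NormalDist

def least_significant_response(alpha: float, lsd_at_5pc: float = 5.0) -> float:
    """Smallest yield response (%) declared significant at the given
    two-sided alpha, scaled from the ~5% threshold typical of a wheat
    trial at alpha = 0.05 (normal approximation, for illustration)."""
    z = NormalDist().inv_cdf(1 - alpha / 2)   # critical value at chosen alpha
    z_05 = NormalDist().inv_cdf(0.975)        # critical value at alpha = 0.05
    return lsd_at_5pc * z / z_05

lsd_95 = least_significant_response(0.05)    # 5.0% by construction
lsd_75 = least_significant_response(0.25)    # roughly 2.9%
print(f"95% confidence: responses below {lsd_95:.1f}% are 'not significant'")
print(f"75% confidence: the bar drops to about {lsd_75:.1f}%")
```

On these assumptions, a consistent 3% response of the kind seen in the thrips trials would just clear the relaxed 75% bar while remaining well short of the conventional one.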
In my opinion, more trials have to be carried out in situations where possible responses to cheap inputs are difficult to assess with confidence. This then brings in the common-sense element of judging the consistency of response over a greater number of sites.
It can also enable a cross-site analysis to be done on a greater number of sites, which will improve the chances of getting a statistical difference at the more conventional probability levels. What really is not acceptable is an interpretation that involves cherry-picking the results of one or two trials where there are small cost-effective responses that may be significant at the 75% or even the 95% probability level, while ignoring all the other results where there are no responses or ‘negative’ responses. As always, common sense rules.
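The gain from a cross-site analysis can be sketched in the same illustrative terms. Assuming independent, equally precise sites and no site-by-treatment interaction, the standard error of the mean response shrinks with the square root of the number of sites, and the least significant response at 95% confidence shrinks with it. The 5% single-site figure is again the assumption carried over from the wheat example above.

```python
from math import sqrt

def pooled_lsd(n_sites: int, single_site_lsd: float = 5.0) -> float:
    """Least significant response (%) at 95% confidence when n_sites
    independent, equally precise trials are analysed together: the
    standard error of the mean response falls by sqrt(n_sites)."""
    return single_site_lsd / sqrt(n_sites)

for n in (1, 2, 4, 9):
    print(f"{n} site(s): responses above {pooled_lsd(n):.1f}% are significant")
```

On these assumptions, four well-run sites would be enough to confirm a consistent 3% response at the conventional 95% level, with no need to relax the odds or cherry-pick.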
The cost of such ‘plant tonics’ when multiplied over a few hundred hectares is not inconsiderable. So next time someone tries to sell you something on the basis that ‘it is so cheap why take the risk of not using it?’ you know what questions to ask about the evidence you need to see to be persuaded. Also, dig around for independent sources of information; there may be more around than you think.