NIAB - National Institute of Agricultural Botany

Orson's Oracle

Boring Brits?

Posted on 16/02/2012 by Jim Orson

We went out for lunch last Saturday. The weather was glorious and all the tables in the garden were occupied, so unfortunately we had to sit inside. The conversation soon got around to managing risk when growing wheat in low rainfall areas of Australia. I was talking to a consultant who has been a major driver in the development of risk management strategies in such situations; we were having lunch in Bendigo, Australia.

Obviously, much of the risk management strategy revolves around water availability. Last year soils were full of moisture at sowing in early May (our October equivalent), but this year they could be bone dry. So how do Victoria’s farmers cope with this extreme variation, when typically rainfall after sowing could be insufficient to meet the full needs of the crop?

When I was first in Australia, around ten years ago, risk was managed by estimating how much water there was in the soil at the start of the season. Either a steel rod was pushed into the soil or a calculation was made based on the amount of post-harvest rainfall, providing a crude measure of how many mm of water were available at the time of sowing.

The assumption was 20 kg of grain for each mm of moisture. As rain fell during the season it was measured and the estimated yield was adjusted accordingly, with nitrogen (N) applied based on yield potential.
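That water-limited yield arithmetic can be sketched in a few lines. The 20 kg/mm figure is from the text; the stored moisture and rainfall numbers below are purely illustrative:

```python
# Crude water-limited yield estimate: roughly 20 kg of grain
# per mm of plant-available water (figure from the text).
KG_GRAIN_PER_MM = 20

def estimated_yield_kg_ha(stored_soil_water_mm, in_season_rain_mm):
    """Yield estimate from total available water (mm) at sowing plus rain."""
    return KG_GRAIN_PER_MM * (stored_soil_water_mm + in_season_rain_mm)

# Hypothetical season: 120 mm stored at sowing, 80 mm of rain after sowing.
print(estimated_yield_kg_ha(120, 80))  # 4000 kg/ha, i.e. 4 t/ha
```

As rain falls during the season, the second argument is simply revised upwards and the estimate re-run.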

The issue with N is that there should be sufficient to achieve the anticipated yield. However, too much will mean excessive green leaf and water loss and, as a result, a reduction in both yield and grain size.

Now, computerised systems are used that require information at the start of the season based on a soil core analysis. Available moisture and N are measured and, where yield prospects are good and some additional N is justified, this is applied through the combine drill.

As the season progresses the computer programme predicts N uptake and, based on any rainfall after sowing, expected yield. Additional N is applied if a shortage is predicted.
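The top-up logic can be sketched as below. The simple proportional N-demand model and all the numbers are assumptions for illustration only, not the actual programme:

```python
# Hypothetical rule of thumb: the crop needs ~25 kg N/ha per tonne
# of expected grain yield (illustrative figure, not from the text).
N_PER_TONNE = 25

def extra_n_needed_kg_ha(expected_yield_t_ha, soil_n_kg_ha, applied_n_kg_ha):
    """Predicted N shortfall; additional N is applied only if positive."""
    demand = N_PER_TONNE * expected_yield_t_ha
    shortfall = demand - (soil_n_kg_ha + applied_n_kg_ha)
    return max(0, shortfall)

# Good season: rainfall after sowing lifts expected yield to 4 t/ha.
print(extra_n_needed_kg_ha(4, 60, 20))  # 20 kg N/ha still needed
```

In a poor season the predicted demand falls below what is already in the system and no further N is applied, avoiding the excess green leaf problem described above.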

Water availability is not the only risk to take into account; there are also plagues of mice and locusts.

Sow too early and the risk of frost damage increases. Crops are grown through the winter and harvested in November (equivalent to our April) with the threat of wipeout due to frost at flowering. Sow too late and there is an increased prospect of yield damage from high temperatures during flowering.

[Image: Variables impacting Australian wheat production for a specific variety]

These risks can also be assessed by a computer programme based on weather records over the last 100 years. The image shows a printout for a specific variety at a specific location:

  • potential yield according to sowing date is the thick line;
  • the falling lines on the left are the decreasing risk of frost damage as sowing is delayed; and
  • the increasing lines on the right are the increasing risk of heat damage as sowing is delayed. 

The optimum date for sowing, assuming sufficient moisture for germination, is early May.
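The trade-off the printout captures can be put in rough numbers: pick the sowing week where the combined frost and heat risk is smallest. The risk curves below are invented for illustration, not taken from the actual model or weather records:

```python
# Illustrative frost/heat risk by sowing week (week 0 = mid-April).
# Frost risk falls and heat risk rises as sowing is delayed.
frost_risk = [0.30, 0.22, 0.15, 0.10, 0.07, 0.06]
heat_risk  = [0.02, 0.03, 0.05, 0.09, 0.16, 0.28]

combined = [f + h for f, h in zip(frost_risk, heat_risk)]
best_week = combined.index(min(combined))
print(best_week)  # week 3, i.e. early May in this made-up series
```

With these made-up curves the minimum combined risk falls a few weeks in, matching the early-May optimum the real printout shows.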

Doesn’t this make British wheat production sound boring?

Jim is on a study trip to Australia, so expect further Oz-blogs over the coming weeks.



That 70s nostalgia...

Posted on 09/02/2012 by Jim Orson

I went badly off-course in last week’s blog. It started with a suggestion that I was going to discuss some aspects of the contents of the 1970 edition of Approved Products for Farmers and Growers. Perhaps this objective will be met this week.

First of all, it wasn’t a very thick book - this was still the early days of pesticide development – and there were no really effective ‘approved’ grass-weed herbicides in cereals.

Tri-allate (Avadex) had an entry, but this didn’t fit the ‘very effective’ description. I think that chlorotoluron may have been in the following year’s edition and isoproturon in the year after that.

[By the way, chlorotoluron was for a few years known as chlortoluron before the extra ‘o’ was adopted. Nobody seemed to know where the extra ‘o’ came from, but most seemed to blame the Americans. It was fashionable to blame the Americans for anything unwelcome.]

I actually remember the first time I saw the results of isoproturon on black-grass. It was a March application to very large black-grass plants and they were all scythed down to the ground. At that time, both chlor(o)toluron and isoproturon averaged 99% control in trials. I will pause whilst a few nostalgic tears are shed.

In my opinion, it was these two pesticides that started the revolution in wheat growing and yields. Up to their introduction, farmers on clay soils couldn’t grow winter wheat very regularly because of black-grass. Spring cropping often resulted in low yields and, with the equipment and understanding of that time, terrible soil compaction. Those farmers who tried to grow winter wheat more regularly had to resort to late drilling to avoid black-grass, which also resulted in damaged soil structure. No wonder that the UK Government commissioned a report in the late 1960s on the poor state of soils.

Just think what high levels of control of black-grass offered. Farmers on potentially high yielding clays could grow wheat more often or continuously, and soil structure improved because of earlier wheat sowing and no spring cropping.

The pace to higher yields was accelerated by the introduction of semi-dwarf varieties in the mid-1970s. These placed more of their above ground growth into grain rather than straw. Then the triazoles were introduced in the late 1970s and as yields grew so did the use of nitrogen.

The strange thing is that during the 1970s we didn’t think we were in the middle of a revolution in crop management. We came to expect new products and techniques to be produced on a regular basis. It was only when we looked back a few years later that we recognised what had been achieved during those few years.

The sad reality is that we seem to have just about exhausted those technologies. The battle now is to maintain their benefits against the challenges of resistance and legislation.

We now need a new fledgling technology that could be developed within the industry over time. Perhaps GM technology is currently at a similar stage of development worldwide to where pesticides were in 1970. With a market, albeit initially for simple traits, it could bloom and be the foundation for the next step forward in yields. Unfortunately, the technology is being held back by a public that has been fed misconceptions peddled by those who go to bed every night with a full stomach.

Worldwide the technology is racing ahead and we have recently heard that the BASF biotech operation is being moved from Europe to the US where the technology has an immediate future.

There is much talk in the press of the rest of the world moving ahead of Europe. The anti-GM movement is perhaps just one example of a wider malaise in the EU. Many of our citizens still harbour the attitude that unwelcome technologies, such as GM, emanate from America.

Well, having got that off my chest, I realise that I have still not fully reviewed much of the content of the 1970 book of Approved Products. Perhaps I will have another go next week – as I am sitting on the plane on the way to Australia. Then again...



What should be on the label?

Posted on 03/02/2012 by Jim Orson

I said a few weeks ago that I would return to the 1970 edition of the Approved Products for Farmers and Growers. In those days approval meant that the product efficacy had been assessed and ‘it did what was said on the tin’. Efficacy assessment was not part of the registration process, but it was de rigueur to enter products into the voluntary Approvals Scheme.

Personally, I’ve regularly questioned why products have to be assessed for efficacy. It’s understandable that there are obligations on the label regarding crop safety and safety to following crops. The argument for statutory efficacy testing is that pesticides should not be used in the environment unless they do a useful job. On the other hand, the market would soon drop products that didn’t do the job, and that happens anyway, even with statutory efficacy assessment in place.

Besides, a target needn’t necessarily be on the label for a product to be widely adopted to control it. Trifluralin had a huge market for black-grass control in cereals, but the weed was not listed on the label as being susceptible.

In addition, the label-recommended dose for a particular job is often ignored. The best example of this is with cereal fungicides. In efficacy testing for authorisation the product is applied only once and no other fungicides are used. In practice, a programme of different fungicide products may be used to control the target. In this situation, adopting lower than recommended doses is inevitable.

The other issue with the recommended dose is that it’s set to control the most difficult target on the label. Significantly lower doses may suffice for many of the more susceptible targets listed. Everyone can quote examples where this takes place. It’s regrettable that much of the dose information generated during initial development trials isn’t detailed on the label.

The other efficacy issue that now has great emphasis during the authorisation of products is resistance management. Overall I welcome this, but there are issues relating to some of the restrictions imposed. All too often it is closing the stable door after the horse has bolted. A good example of this is resistance to the ‘fops’ and ‘dims’. To be fair, the labels suggest that a wider view is taken of resistance management than just the specific measures they outline.

In many cases resistance is inevitable unless rather extreme measures are taken. Bayer imposed the need to use mixtures with Atlantis. However, it soon became clear that the only way to prevent resistance to Atlantis, and products that share its mode of action, was not to use them on a regular basis. Mixtures may have delayed the inevitable, but not by much.

I am increasingly convinced that resistance management is more of a socio-economic issue than one of merely producing guidelines and introducing some restrictions of use on labels. Resistance to outstanding pesticide-based solutions can occur very quickly; a prime example is glyphosate resistance developing in weeds in crops genetically modified to tolerate this herbicide. Roundup Ready was initially a great option, not only because of the level of weed control achieved but also because of the ease of management. The cost of the GM seed also meant that breeders invested more in developing varieties. The result was farmers relying totally on this one herbicide solution year after year. Weed resistance has taken a little of the icing off the cake, but the area of Roundup Ready crops has not diminished.

There really is a need to learn the lessons of the past. They suggest that even more restrictive use may prolong the benefits of new modes of action known to be vulnerable to resistance development. After the experience of the strobilurins, is it wise to allow two SDHI-based fungicides per crop?

Unfortunately, new discoveries tend to be very active substances, with doses of at most a few hundred grams/ha required to achieve control. This typically means they have a single site of action, making them more vulnerable to resistance development. The low doses and single site of action are a reflection of rapid screening methods in the discovery phase. They also mean that products based on such active substances are more likely to be cheap to produce and, perhaps more importantly, more likely to get through our rigorous registration systems.

I’m rather surprised about the direction this blog has taken. I started by indicating I was going to discuss the contents of the 1970 Approvals book. That’ll have to wait for next week.




Not as bad as first thought?

Posted on 24/01/2012 by Jim Orson

There have been some very doom-laden studies on the financial implications of the Drinking Water Directive for pesticide availability. These suggest that herbicides relied upon to control black-grass in oilseed rape could disappear, with dire financial consequences for the whole rotation. The view of the Environment Agency (EA) is that these studies are far too gloomy.

So what’s the truth? Well the EA is drawing up maps of areas that influence the pesticide content of raw drinking water. These represent very much the minority of arable land. So, by no means is the whole of the arable area expected to meet the demanding targets set by the Drinking Water Directive, which is incorporated into the Water Framework Directive.

And do the targets set by the Drinking Water Directive need to be totally met in raw drinking water? It seems not. The Water Framework Directive aims to stop the situation getting worse and to avoid further investment in treatment.

But, but, but…this does not let any arable farmer off the hook. They need to adopt every reasonable measure in order to minimise movement of pesticides into water. It may be that additional measures need to be adopted in those areas that are sources of drinking water.

One major management target is for farmers to do their best to avoid huge spikes in pesticide levels in watercourses that can occur after heavy rain. Reducing water run-off, particularly from recently sprayed fields, will help significantly.

Outside the areas where drinking water is sourced, water bodies (typically lakes and rivers) have to meet the standards set by the Environmental Quality Standards (EQS) for the content of specific pesticides. The comforting fact is that a 2010 report suggests that 99% of water bodies are meeting these standards. A new report is being prepared and we await its results.

But, but, but…there is concern in arable farming that there are now interim arrangements for wider aquatic buffer zones, which can be up to 20 metres wide.

These were introduced at the request of the pesticide companies so that the maximum number of existing and future pesticides remains available to farmers. This is because the standards required to prevent the impact of spray drift on water ecosystems have become more demanding over the last few years.

The question is why have these standards increased? I have a sneaking suspicion that it’s because of the standards set in order to meet the requirements of the Water Framework Directive!

On the one hand, it may take only a small amount of spray drift to exceed EQS standards in a small field side ditch. On the other hand, the impact of individual fields on pesticide levels in a larger water body is literally watered down because of dilution of water from untreated areas.
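The dilution point is simple mass-balance arithmetic; all the figures below are invented for illustration:

```python
# The same drift deposit produces a far higher concentration
# in a small field-side ditch than in a large water body.
def concentration_ug_per_l(drift_mass_ug, water_volume_l):
    """Concentration (ug/l) of a drift deposit fully mixed into a volume."""
    return drift_mass_ug / water_volume_l

drift = 1_000_000  # 1 g of active substance reaching the water (hypothetical)
print(concentration_ug_per_l(drift, 10_000))      # small ditch: 100 ug/l
print(concentration_ug_per_l(drift, 10_000_000))  # large river: 0.1 ug/l
```

A thousand-fold difference in receiving volume means the same field-edge event can breach an EQS in a ditch while being undetectable downstream.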

EQS standards may largely be met for water bodies, but the impact may be felt at field level through the probability of wider buffer zones.

So, is the potential financial impact of the Water Framework Directive less than originally forecast in the reports on the subject? Probably yes, but, but, but……there is a potential financial impact due to wider buffer zones that was not forecast in these reports.




Can N recommendations be improved?

Posted on 18/01/2012 by Jim Orson

The main focus of Rothamsted when it was set up in 1843 was to improve fertiliser practice. So it’s sobering to reflect on where we are nowadays with nitrogen recommendations for crops.

The depressing fact is that, as an industry, we haven’t got much further. In winter wheat, at best we can only predict the optimum fertiliser N to within 50 kg/ha in 75% of cases, even where there’s not much N in the soil; i.e. long-term arable soils with low organic matter and where no organic manures are used.

This sounds unimpressive, but the other methods of prediction are far worse. And they are even less reliable when it comes to predicting optimum fertiliser N rates where there’s a good bit of nitrogen already kicking about the system.

But why?

It’s hard to blame the scientists at the blue sky end of the research spectrum. Nitrogen cycling in the soil is a terrifyingly difficult subject, even with the advantages of modern research techniques. In addition, the optimum fertiliser N requirement can be influenced by weather after application.

[Image: N fertiliser application]

Personally I consider that some progress has been lost at the more practical end of research, because there was an aura that it had all been resolved. We had nice tables in RB209 that implied there was accuracy. Measuring soil mineral nitrogen implied even greater accuracy. However, some of the simple assumptions that are behind these prediction methods may be flawed. Scientists charged with improving recommendations need to look at all the recent evidence and to see if things can be improved.

With this background in mind I was excited to read, and analyse, the HGCA project report PR447 on canopy management of winter oilseed rape. There was a remarkably (for nitrogen trials) close relationship between N in the canopy, involving laboratory N analysis, in the spring and optimum N fertiliser requirement. Perhaps this relationship was even more remarkable when the fickleness of the oilseed rape crop is taken into account.

However, this relationship became worse when soil mineral nitrogen was also taken into account. The same occurred in canopy management trials in Schleswig-Holstein, Germany.

We now know the possible pitfalls when trying to accurately measure soil mineral nitrogen. Additionally, the assumptions used to predict the impact of soil mineral nitrogen on crop fertiliser N requirement may be erroneous.

However, it is clear that another stumbling block is getting a practical field method that provides an accurate estimate of N in the canopy.

There are electronic methods of estimating canopy size which can lead to a reliable estimate of the total N in the canopy. However, the ability to predict the optimum economic dose of applied N is much reduced compared with the method that involves plant analysis.

So, perhaps, history repeats itself. Have we here an improved method of prediction, but it is not reaching its full potential because of underlying assumptions? Perhaps only a researcher will say this, but we need more R&D on this issue...


