Why Farm Tests Must Be Repeated

You risk losing data if you only have one example of each treatment.

You need look no further than the Indiana Prairie Farmer and Precision Planting plots for a prime example of what would happen if you didn't repeat, or replicate, test plots on your farm. The study was conducted at the Throckmorton Purdue University Ag Center near Romney. The farm crew did the work. Jeff Phillips, Tippecanoe County Extension ag educator, helped monitor the plot, collect data and run statistics on the results.

If you don't repeat a trial and have, say, only one example of Hybrid A at 30,000 plants per acre and one of Hybrid A at 36,000 plants per acre, then you can't draw conclusions. With no replications, it's impossible to put much confidence in the results, because you have nothing to compare against. What if one was on low ground and one on high ground? What would have happened if the situation had been reversed, with the second on low ground and the first on high ground?

That's where replication comes in. Repeating an experiment twice is better than not repeating treatments at all. Three replications are better yet, and four replications of the entire experiment are better still.

Why is that so? Because every time you get another look at the same combination, and the results come out the same, the higher the odds that the treatment, not chance, caused the difference. Phillips will run statistics on these plots, and the more reps he has, the more sure he can be of the results. Statisticians measure this with the least significant difference: with enough data, you can be 95% sure that a difference between treatments is real rather than due to chance, given how much the results vary.
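The least significant difference idea above can be sketched in a few lines of Python. The yield numbers here are made up for illustration, not from the Purdue plot, and the critical t value is hard-coded for the assumed case of two treatments with four replications each:

```python
from statistics import mean, stdev

# Hypothetical yields (bu/acre) for one hybrid at two seeding rates,
# four replications of each -- illustrative numbers only.
rate_30k = [182.0, 178.5, 185.2, 180.1]
rate_36k = [191.3, 188.7, 186.0, 193.4]

n = len(rate_30k)  # replications per treatment
diff = mean(rate_36k) - mean(rate_30k)

# Pooled standard deviation across the two treatments
s_pooled = ((stdev(rate_30k) ** 2 + stdev(rate_36k) ** 2) / 2) ** 0.5

# Critical t value at 95% confidence with 2n - 2 = 6 degrees of freedom
t_crit = 2.447

# Least significant difference: the smallest treatment difference
# that can be called real, not chance, at the 95% level
lsd = t_crit * s_pooled * (2 / n) ** 0.5

print(f"Observed difference: {diff:.1f} bu/acre")
print(f"LSD (0.05): {lsd:.1f} bu/acre")
print("Significant" if abs(diff) > lsd else "Not significant")
```

With only one plot per treatment, `stdev` could not even be computed, which is the article's point: without replication there is no yardstick for chance variation.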

What happened this year was that the plot was hit with a two-inch rain within a week after planting. Even though the lay of the land didn't look that different, the underground drainage was considerably different, plus there was enough difference in surface topography to cause ponding. In the first replication, it was easy to note that about 8 of the 27 plots in the rep were affected. They were going to be low yielding because of excess nitrogen loss, too many weeds and too-slow growth.

The beauty is that those same treatments in other plots weren't affected by too much water. So using various statistical techniques, it's possible to remove the few plots within the entire rep that should be discarded. That leaves three good replications to work with. Any individual plot where some external factor not directly related to the test appeared to cause variation should be discarded.

What's left is still usable data. But if the only test you had was one run-through of the 27 treatments, you could not have fairly made any determinations because the wetness skewed the results in that first replication.
