It starts with having a clearly defined, ethically justifiable objective. Once that is sorted, you consult statistical tables (or formulas) that indicate the minimum number of experimental units (e.g. animals) required to detect a difference or response of a given size with statistical confidence. One may even argue that it is unethical to start an animal trial with an insufficient number of animals, since you have a good chance of “wasting” those animals on a lost cause...
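To make the idea concrete, here is a minimal power-calculation sketch in Python using the statsmodels library; the response size, standard deviation, significance level and power below are hypothetical placeholders, not values from any particular trial.

```python
# A minimal a-priori power calculation: how many animals per group are
# needed to detect a given response? All numbers are hypothetical.
import math
from statsmodels.stats.power import TTestIndPower

# Assume an expected response of 2 l/d of milk against a between-animal
# standard deviation of 3 l/d, giving a standardised effect size of 2/3.
effect_size = 2.0 / 3.0   # Cohen's d (difference / standard deviation)
alpha = 0.05              # significance level
power = 0.80              # desired chance of detecting a real effect

n_per_group = TTestIndPower().solve_power(effect_size=effect_size,
                                          alpha=alpha, power=power)
print(f"Minimum animals per group: {math.ceil(n_per_group)}")  # about 37
```

A smaller expected response (or a noisier trait) shrinks the effect size and drives the required number of animals up quickly, which is exactly why starting with too few animals risks an inconclusive, wasted trial.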
How do you eat an elephant? You cut it into small enough pieces and eat it bit by bit (no pun intended)... A lot of what I said about “small” data also applies to big data: the basic data, the actual building blocks of big data, must be collected and handled properly for the data to be valid and reliable. Big data refers to the sheer volume of data that is becoming available. Once the first tenets (representativeness, reliability) are met, it is up to scientists and experts to process these data, using the powerful mathematical models available today, and to communicate the outputs to the farmer; the farmer and other customers (e.g. the feed company nutritionist) do not need to process the data; they only need to know the bottom line and what implementation will bring for them. But then they need to trust you to collect good data (even if it is big data) and to process and interpret it in an objective and reliable manner.
That is true: meta-analysis is a powerful statistical tool that can combine results from a number of COMPARABLE trials, some of which on their own do not have the numbers, and then draw “better” statistical inferences. However, it is not a magical tool. Each individual trial still needs to be “sound” (even though it may be small); it needs to have the relevant information reported, and it must be comparable: compare apples with apples... A meta-analysis still depends on good data!
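For illustration, here is a sketch of the simplest form of meta-analysis, fixed-effect inverse-variance pooling, with made-up trial results; a real meta-analysis would also examine heterogeneity between trials and possibly use a random-effects model.

```python
# Fixed-effect (inverse-variance) meta-analysis sketch with made-up
# results: no single trial reaches significance on its own, but the
# pooled estimate does.
import math

# Hypothetical comparable trials: (mean milk response in l/d, standard error)
trials = [(1.2, 0.9), (0.8, 0.7), (1.5, 1.1), (0.9, 0.8)]

weights = [1 / se**2 for _, se in trials]   # more precise trials weigh more
pooled = sum(w * d for (d, _), w in zip(trials, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))
z = pooled / pooled_se                      # about 2.4 here, i.e. P < 0.05

print(f"Pooled response: {pooled:.2f} ± {pooled_se:.2f} l/d (z = {z:.2f})")
```

The pooling is only legitimate because the trials are assumed to measure the same response under comparable conditions; feeding poor or incomparable trials into the formula simply produces a precise-looking wrong answer.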
This is a difficult one. It may vary for many reasons. It is all about the old adage of risk versus return: if the data are “very convincing” (a number of relevant studies, all showing a substantial improvement, e.g. 3 ℓ/d of milk, with good P-values, e.g. P≤0.05), then I will be quite comfortable with an ROI of 2:1; if the data look good but the improvement is relatively small (e.g. 1 ℓ/d of milk), the P-value is merely a “trend” (e.g. P<0.10), and only a limited number of studies are involved, then I may want to see an ROI of 4:1, or higher. It is difficult to say without looking at the individual case.
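To show the arithmetic behind that reasoning, here is a toy ROI calculation; the milk price and additive cost are invented for illustration only.

```python
# A worked ROI sketch with hypothetical figures, only to illustrate the
# risk-versus-return arithmetic; actual numbers depend on the case.
milk_response = 3.0    # extra milk, l/d (the "very convincing" scenario)
milk_price = 5.00      # price per litre of milk (hypothetical)
additive_cost = 7.50   # additive cost per cow per day (hypothetical)

extra_income = milk_response * milk_price   # 15.00 per cow per day
roi = extra_income / additive_cost          # 2.0, i.e. a 2:1 return
print(f"ROI = {roi:.1f}:1")
```

With the weaker 1 ℓ/d scenario, the same prices give an ROI of only about 0.7:1, which illustrates how quickly the economics can turn when the evidence is thinner.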
Report them to SACNASP. No, just joking. I believe many nutritionists are here today, and I trust that my message about good data reached them as well; in our profession, decisions should be based on good data! The purpose of today’s presentation is just that: to make us, as an industry, aware of “good data”. Hopefully it will give all of us the liberty to point out where we have good data and to demand that we be presented with good data.
Yes, thoughtless pooling of samples will do exactly that. With any sample or sampling, it is important to know beforehand what you want to learn or show from that sample. If “sample” is the name, REPRESENTATIVE is the surname! A sample is not merely the 100 grams of dried and ground material in the bottle; it represents something. Sometimes it makes sense to pool samples (make a composite sample); at other times it is a grave mistake. That decision will be determined by your needs and objectives. So, think about what you are doing. And always make sure that the final sample, whether composited or not, is a truthful representation of the material you want it to represent.
Yes, I do. There are various ways to do it (I will not go into detail here). The important thing is to make sure that the result that you want to enter at the appropriate point of a formulation model is reliable, i.e. based on good data. A second point is to be sure that the conditions/circumstances under which the data were generated are comparable with those under which the formulated feed is now going to be fed.
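As a toy illustration of where such a result enters a formulation model, here is a minimal least-cost formulation sketch using scipy; the ingredients, nutrient values and prices are hypothetical, and real formulation models handle many more nutrients and constraints.

```python
# Toy least-cost formulation: a laboratory crude-protein result enters the
# model as a coefficient in the protein constraint. All figures hypothetical.
from scipy.optimize import linprog

cp_maize, cp_soy = 0.09, 0.44   # lab-measured crude protein fractions:
                                # these are the values that must be reliable!
cost = [3.0, 6.5]               # cost per kg of maize and soybean meal

# Minimise cost subject to: inclusion sums to 1 kg, crude protein >= 16%.
res = linprog(c=cost,
              A_ub=[[-cp_maize, -cp_soy]], b_ub=[-0.16],  # protein floor
              A_eq=[[1.0, 1.0]], b_eq=[1.0],              # mass balance
              bounds=[(0, 1), (0, 1)])
print(dict(zip(["maize", "soy"], res.x.round(3))), f"cost = {res.fun:.2f}")
```

An error in the measured protein value shifts the optimum directly, which is why an unreliable laboratory result quietly corrupts every diet formulated from it.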
They are not directly comparable. A peer-reviewed article is an article that has been reviewed and judged by peers (scientists working in the same field) before it is accepted for publication. It implies that the article meets a certain set of criteria, that it contains sufficient information for a peer to repeat the trial and get the same results, and that the data are good; it is a kind of independent evaluation. A meta-analysis, on the other hand, is a type of study where a number of sufficiently comparable trials are analysed together. The results of a meta-analysis may then be published as a peer-reviewed article, or not.