Data Application ZAB 103 - Week 2

This week, we’re going to delve lightly into the realm of describing data – understanding the important parameters of data sets, how we can get a feel for the statistical worth of a data set, and how we can work out ahead of time how many samples we should be looking to secure.

Let’s begin by looking at ways that we can describe and evaluate a data set.

Describing Data | 4:37 mins

Video Transcript

Additional Reading

TASK

Please visit this Lynda tutorial and complete all components therein. This is part of your Practice and Portfolio (ZAP103) requirements. Please note: you must log into Lynda with your UTAS email.
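To make the ideas from the video and tutorial concrete, here is a minimal sketch in Python (not part of the module materials) of the usual descriptive statistics – mean, standard deviation, standard error and coefficient of variation – computed on a small set of measurements invented purely for illustration.

    # Minimal sketch: basic descriptive statistics for a small data set.
    # The measurement values are invented purely for illustration.
    import statistics as stats

    yields = [42.1, 39.8, 44.5, 41.0, 43.2, 40.7]  # hypothetical yields (t/ha)

    mean = stats.mean(yields)
    sd = stats.stdev(yields)          # sample standard deviation
    se = sd / len(yields) ** 0.5      # standard error of the mean
    cv = sd / mean * 100              # coefficient of variation, as a percentage

    print(f"mean = {mean:.2f}, sd = {sd:.2f}, se = {se:.2f}, cv = {cv:.1f}%")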

Initial Analyses

Initial Analyses | 5:05 mins

Video 2 Transcript

There are many different terms and concepts used when studying data and trying to determine whether a treatment has had a real effect, and whether that effect is large enough to be commercially meaningful.

One of these is called the Null Hypothesis (often written as H0) – which states that a given treatment (e.g. irrigating potatoes as compared with not irrigating them) has no real effect, so that any difference we observe is due to chance alone. This is the basis for the majority of the statistical comparisons that we would use in our day-to-day work. Our experiments are designed to test the Null Hypothesis.
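To see what testing the Null Hypothesis looks like in practice, here is a minimal sketch in Python (not part of the module materials). The potato-yield figures are invented purely for illustration, and the test shown is a standard two-sample t-test.

    # Minimal sketch: testing H0 ("irrigation has no effect") with a
    # two-sample t-test. All yield figures are hypothetical.
    from scipy.stats import ttest_ind

    irrigated     = [42.1, 39.8, 44.5, 41.0, 43.2, 40.7]  # t/ha, hypothetical
    not_irrigated = [38.4, 37.9, 40.1, 36.5, 39.0, 38.2]  # t/ha, hypothetical

    t_stat, p_value = ttest_ind(irrigated, not_irrigated)
    print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
    # If p is below our chosen significance level (commonly 0.05), we reject H0
    # and conclude the treatments differ; otherwise we have no evidence against H0.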

Two other important terms are Type I and Type II errors. Type I errors are those that give you a false positive. In statistical terms, this means that even though H0 is TRUE (i.e. there IS NO real difference between the treatments), our data indicates that there IS a difference.

Type II errors are the opposite. If we make a Type II error, we have found a false negative. In statistical terms, this means that even though H0 is FALSE (i.e. there IS a real difference between the treatments), our data indicates that there IS NOT a difference.
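The two error types are easiest to see by simulation. Below is a minimal sketch (not part of the module materials) that repeatedly runs a two-sample t-test on simulated data; the significance level, means, spreads and sample sizes are all assumptions chosen for illustration.

    # Minimal sketch: estimating Type I and Type II error rates by simulation.
    import numpy as np
    from scipy.stats import ttest_ind

    rng = np.random.default_rng(1)
    alpha, n, runs = 0.05, 20, 2000   # illustrative choices

    # Type I: H0 is true (both groups share the same mean) yet we reject it.
    type1 = np.mean([ttest_ind(rng.normal(10, 2, n), rng.normal(10, 2, n)).pvalue < alpha
                     for _ in range(runs)])

    # Type II: H0 is false (the means really differ) yet we fail to reject it.
    type2 = np.mean([ttest_ind(rng.normal(10, 2, n), rng.normal(11, 2, n)).pvalue >= alpha
                     for _ in range(runs)])

    print(f"Type I error rate  ~ {type1:.2f}  (should sit close to alpha = {alpha})")
    print(f"Type II error rate ~ {type2:.2f}  (1 minus this value is the test's power)")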

Power Analysis

With all our talk about needing precision, accuracy, low variability and high validity, it can all start to get a bit daunting. We know that the more variable our data sets are, the more data points we are going to need to get a result in which we can be confident. How can we tell if the data we are generating will provide us with an answer that means something?

Just how much is enough . . .?

Power Analysis | 7:36 mins
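As a companion to the video, here is a minimal sketch in Python (not part of the module materials) of the sample-size question a power analysis answers, using the statsmodels library. The effect size, significance level and target power below are assumptions chosen for illustration, not figures from the module.

    # Minimal sketch: how many samples per treatment group do we need?
    # Assumes a two-sample t-test with a "medium" effect size of 0.5,
    # alpha = 0.05 and a target power of 0.8 (all illustrative choices).
    from statsmodels.stats.power import TTestIndPower

    analysis = TTestIndPower()

    n_medium = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.8)
    print(f"Samples needed per group (medium effect): {n_medium:.0f}")  # about 64

    # The noisier the data relative to the effect (i.e. a smaller effect size),
    # the more samples we need to reach the same power.
    n_small = analysis.solve_power(effect_size=0.2, alpha=0.05, power=0.8)
    print(f"Samples needed per group (small effect):  {n_small:.0f}")

Running the numbers like this before collecting any data is the point of the "do your research before you do your research" message in the readings below.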

This session will have provided you with some initial ideas on how to evaluate data sets for statistical usefulness. You’ll see again that high precision, high accuracy and low variability are king when it comes to getting the most out of your data. Now, it isn’t always, or even often, possible to arrange all these to come together at the same time, so we need to understand ahead of time how much data we will need to be confident in our conclusions – which is exactly what a power analysis helps us work out.

To review your knowledge of this module, please complete the following quiz. The quiz makes up Assessment One.

Readings

A bit more on effect size and what it means. These readings are a bit technical, but worth a read.

A longer read on power analyses. This fills in a lot of detail and gives you the take-home message of “do your research before you do your research”.

Power Analysis

Credits:

Created with images by blickpixel - "pins cpu processor" • PublicDomainPictures - "black business computer" • janetmck - "DATA" • Todd Huffman - "Data?" • Pexels - "batch bookcase books"
