This screen performs "after the fact" (retrospective) power analyses. After you have performed a statistical analysis and obtained a p-value, this screen tells you: (1) the power of your original analysis to detect the observed difference with your sample size, (2) the minimum difference detectable with a sample of this size, and (3) the minimum sample size needed to detect this difference. This sort of analysis is most often done after a non-significant (p > 0.05) result has been obtained, although it can be used with any value of p.
It should be noted that the whole idea of retrospective power calculations is a controversial one. Purists maintain that the very concept is meaningless, and that power calculations must be performed before the experiment is run. (Download the PDF document by Russell Lenth to get this side of the story. And anyone feeling the need to do one of these calculations might first want to check out Richard Stevens' interesting web page, which can be thought of as a "pre-retrospective-power-calculation screening test," to see whether you really need to do one.) Yet the major statistical software products (SAS, SPSS, etc.) routinely provide this kind of post-hoc analysis. It is perhaps best thought of as treating your just-completed experiment as a pilot run to help you design your next experiment. Anyway, for good or ill, here's a web page that carries out a typical set of basic retrospective power calculations.
The method used here is based on formulas for comparing two large unpaired samples, but it appears to be applicable to a wide variety of two-variable tests.
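As a rough illustration of what calculations of this kind involve, here is a minimal sketch in Python using the standard normal-approximation formulas for two large unpaired samples. This is not the page's actual code; the function names, the default power of 0.80, and the example numbers are all illustrative assumptions.

```python
# A sketch of retrospective power calculations via the normal approximation
# for two large unpaired samples. Not the page's actual implementation.
from math import ceil, sqrt
from scipy.stats import norm

def retro_power(diff, sd, n, alpha=0.05):
    """Approximate power of a two-sided two-sample z-test to detect an
    observed difference `diff` with per-group size `n` and pooled SD `sd`."""
    z_crit = norm.ppf(1 - alpha / 2)          # two-sided critical value
    z_obs = abs(diff) / (sd * sqrt(2 / n))    # standardized observed difference
    return norm.cdf(z_obs - z_crit)           # neglects the tiny far-tail term

def min_detectable_diff(sd, n, alpha=0.05, power=0.80):
    """Smallest difference detectable at the given sample size and power."""
    return (norm.ppf(1 - alpha / 2) + norm.ppf(power)) * sd * sqrt(2 / n)

def min_sample_size(diff, sd, alpha=0.05, power=0.80):
    """Smallest per-group n needed to detect `diff` with the given power."""
    z_sum = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return ceil(2 * (z_sum * sd / abs(diff)) ** 2)

# Example: observed difference 2.0, pooled SD 5.0, 30 subjects per group
print(retro_power(2.0, 5.0, 30))        # power of the completed study (~0.34)
print(min_detectable_diff(5.0, 30))     # smallest difference it could find
print(min_sample_size(2.0, 5.0))        # n per group needed next time
```

Note that the three functions correspond to the three outputs described above: the power of the completed analysis, the minimum detectable difference, and the minimum required sample size.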
Note: Before using this page for the first time, make sure you read the JavaStat user interface guidelines for important information about interacting with JavaStat pages.