Randomization inference = non-parametric tests (Mann-Whitney/Wilcoxon rank-sum test)?

If I understand this correctly, randomization inference is nothing but non-parametric tests such as Mann-Whitney etc. In R it's wilcox.test(). I never thought about the difference in the interpretation of the uncertainty in the parameters coming from non-parametric vs. parametric tests (e.g. t-tests). As quoted in the book: “in randomization-based inference, uncertainty in estimates arises naturally from the random assignment of the treatments, rather than from hypothesized sampling from a large population”. Does this mean that the SE/95% CI for parameters computed using a parametric test cannot be compared with the one computed using a non-parametric one, as they are measuring two different things?

I think non-parametric vs. parametric is a bit of a separate concern from randomization inference: you can just as easily apply non-parametric methods to observational data, and that is definitely not “randomization inference”.

I think the important / defining part is the randomized assignment step of the experimental design: randomization inference builds its null distribution by re-running that assignment, as in the sketch below.
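
A minimal sketch of that idea in R, on made-up simulated data (everything here is illustrative): under the sharp null of no effect, the outcomes are fixed and only the assignment is random, so you rebuild the null distribution by re-drawing assignments.

```r
# A minimal sketch of randomization inference on made-up data:
# y = outcomes, z = treatment indicator from the actual random assignment.
set.seed(42)
n <- 40
z <- sample(rep(c(0, 1), each = n / 2))   # randomized assignment
y <- 1 + 0.5 * z + rnorm(n)               # simulated outcomes

obs_stat <- mean(y[z == 1]) - mean(y[z == 0])

# Under the sharp null (treatment does nothing for anyone), y is fixed
# and only the assignment is random, so re-draw assignments and
# recompute the statistic to build the null distribution.
perm_stats <- replicate(10000, {
  z_new <- sample(z)
  mean(y[z_new == 1]) - mean(y[z_new == 0])
})

p_value <- mean(abs(perm_stats) >= abs(obs_stat))  # two-sided p-value
```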

To your question: many frequentist approximations of sampling distributions are constructed by invoking the central limit theorem (or, less commonly, other limit theorems) and only converge in distribution in the limit (contrast with so-called “exact tests”). This can lead to tighter intervals than the equivalent non-parametric procedure if the distributional assumption is “good” and the sample size is “large”, or it can be worse if the distributional assumption is violated, or if the sample is “not large” and the approximation is therefore “not good”.
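
To make that concrete, here's a quick sketch (simulated, skewed data; purely illustrative) running the same data through t.test() and wilcox.test(). Note the two intervals also target different estimands, a difference in means vs. a location shift (the Hodges-Lehmann estimate), which is part of why they aren't directly comparable:

```r
# Sketch: the same simulated, skewed data through a parametric and a
# non-parametric procedure.
set.seed(1)
x <- rexp(30, rate = 1)      # "control", skewed
y <- rexp(30, rate = 0.7)    # "treatment", skewed

t.test(y, x)$conf.int                         # normal-theory CI for the difference in means
wilcox.test(y, x, conf.int = TRUE)$conf.int   # CI for the location shift
                                              # (Hodges-Lehmann estimate)
```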

Conceptually, you can run a non-parametric procedure on the same data that admits some frequentist assumptions, and the contract the non-parametric method offers should still hold (it works regardless of the underlying distribution; that's why it's non-parametric).

But many old-school non-parametric tests also invoke asymptotics to calculate test statistics / p-values, and you never really know exactly when they've converged in distribution either… just like the parametric tests… and parametric asymptotics can sometimes converge “faster”, so you can't just look at n.
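
For example, R's wilcox.test() itself quietly picks between the exact null distribution and a normal approximation depending on sample size and ties; you can force each and compare (simulated data, illustrative only):

```r
# Sketch: force the exact distribution vs. the normal approximation
# in wilcox.test() and compare the resulting p-values.
set.seed(2)
x <- rnorm(15)
y <- rnorm(15, mean = 0.8)

wilcox.test(y, x, exact = TRUE)$p.value                   # exact null distribution
wilcox.test(y, x, exact = FALSE, correct = TRUE)$p.value  # normal approximation
```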

So you could compare the CIs if the CLT assumptions were true, but you never really “know” whether they hold for actual data.

Sorry for the wordy, digressive answer, but it's the best I've got rn. I haven't touched large-sample theory in like 10 years.
