(Non-)Linear Tests for Null Hypotheses, Joint Hypotheses, Equivalence, Non-Superiority, and Non-Inferiority

Description

Uncertainty estimates are calculated as first-order approximate standard errors for linear or non-linear functions of a vector of random variables with known or estimated covariance matrix. In that sense, hypotheses() emulates the behavior of the excellent and well-established car::deltaMethod and car::linearHypothesis functions, but it supports more models; requires fewer dependencies; expands the range of tests to equivalence and superiority/inferiority; and offers convenience features like robust standard errors.

Warning #1: Tests are conducted directly on the scale defined by the type argument. For some models, it can make sense to conduct hypothesis or equivalence tests on the “link” scale instead of the “response” scale, which is often the default.

Warning #2: For hypothesis tests on objects produced by the marginaleffects package, it is safer to use the hypothesis argument of the original function. Using hypotheses() may not work in certain environments, in lists, or when working programmatically with *apply style functions.

Warning #3: The tests assume that the hypothesis expression is (approximately) normally distributed, which may not be realistic for non-linear functions of the parameters. More reliable confidence intervals can be obtained using the inferences() function with method = "boot".
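To see why the normal approximation can mislead for non-linear functions of parameters, the sketch below (base R only, not the package's internal code; the model and function are chosen for illustration) compares a delta-method interval for exp(b_hp + b_wt) with a bootstrap percentile interval:

```r
set.seed(48103)
mod <- lm(mpg ~ hp + wt, data = mtcars)

# Non-linear function of the parameters
f <- function(b) exp(b[["hp"]] + b[["wt"]])
est <- f(coef(mod))

# Delta method: the gradient of exp(b1 + b2) is exp(b1 + b2) in each coordinate
V <- vcov(mod)[c("hp", "wt"), c("hp", "wt")]
g <- est * c(1, 1)
se <- sqrt(drop(t(g) %*% V %*% g))
ci_delta <- est + c(-1, 1) * qnorm(0.975) * se

# Nonparametric bootstrap: resample rows, refit, re-evaluate the function
boot <- replicate(2000, {
  idx <- sample(nrow(mtcars), replace = TRUE)
  f(coef(lm(mpg ~ hp + wt, data = mtcars[idx, ])))
})
ci_boot <- quantile(boot, c(0.025, 0.975))

# The bootstrap interval respects the positivity of exp(); the symmetric
# normal-approximation interval can cross zero.
ci_delta
ci_boot
```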

Usage

hypotheses(
model,
hypothesis = NULL,
vcov = NULL,
conf_level = 0.95,
df = NULL,
equivalence = NULL,
joint = FALSE,
joint_test = "f",
numderiv = "fdforward",
...
)


Arguments

model

Model object or object generated by the comparisons(), slopes(), or predictions() functions.

hypothesis

Specify a hypothesis test or custom contrast using a numeric value, vector, or matrix; a string equation; a string; a formula; or a function.

• Numeric:

  • Single value: the null hypothesis used in the computation of Z and p (before applying transform).

  • Vector: weights to compute a linear combination of (custom contrast between) estimates. Length equal to the number of rows generated by the same function call, but without the hypothesis argument.

  • Matrix: each column is a vector of weights, as described above, used to compute a distinct linear combination of (contrast between) estimates. The column names of the matrix are used as labels in the output.

• String equation to specify linear or non-linear hypothesis tests. If the term column uniquely identifies rows, terms can be used in the formula. Otherwise, use b1, b2, etc. to identify the position of each parameter. The b* wildcard can be used to test hypotheses on all estimates. If a named vector is used, the names are used as labels in the output. Examples:

  • hp = drat
  • hp + drat = 12
  • b1 + b2 + b3 = 0
  • b* / b1 = 1

• String:

  • "pairwise": pairwise differences between estimates in each row.
  • "reference": differences between the estimates in each row and the estimate in the first row.
  • "sequential": difference between an estimate and the estimate in the next row.
  • "meandev": difference between an estimate and the mean of all estimates.
  • "meanotherdev": difference between an estimate and the mean of all other estimates, excluding the current one.
  • "revpairwise", "revreference", "revsequential": inverse of the corresponding hypotheses, as described above.

• Formula: comparison ~ pairs | group

  • Left-hand side determines the type of comparison to conduct: difference or ratio. If the left-hand side is empty, difference is chosen.
  • Right-hand side determines the pairs of estimates to compare: reference, sequential, or meandev.
  • Optional: users can supply grouping variables after a vertical bar to conduct comparisons within subsets.
  • Examples: ~ reference, ratio ~ pairwise, difference ~ pairwise | groupid

• Function:

  • Accepts an argument x: an object produced by a marginaleffects function, or a data frame with columns rowid and estimate.
  • Returns a data frame with columns term and estimate (mandatory) and rowid (optional).
  • The function can also accept optional input arguments: newdata, by, draws.
  • This function approach will not work for Bayesian models or with bootstrapping. In those cases, it is easy to use posterior_draws() to extract and manipulate the draws directly.

See the Examples section below and the vignette: https://marginaleffects.com/vignettes/hypothesis.html

vcov

Type of uncertainty estimates to report (e.g., for robust standard errors). Acceptable values:

• FALSE: do not compute standard errors. This can speed up computation considerably.
• TRUE: unit-level standard errors using the default vcov(model) variance-covariance matrix.
• String which indicates the kind of uncertainty estimates to return:
  • Heteroskedasticity-consistent: "HC", "HC0", "HC1", "HC2", "HC3", "HC4", "HC4m", "HC5". See ?sandwich::vcovHC
  • Heteroskedasticity and autocorrelation consistent: "HAC"
  • Mixed-models degrees of freedom: "satterthwaite", "kenward-roger"
  • Other: "NeweyWest", "KernHAC", "OPG". See the sandwich package documentation.
• One-sided formula which indicates the name of cluster variables (e.g., ~unit_id). This formula is passed to the cluster argument of the sandwich::vcovCL function.
• Square covariance matrix.
• Function which returns a covariance matrix (e.g., stats::vcov(model)).

conf_level

Numeric value between 0 and 1. Confidence level used to build a confidence interval.

df

Degrees of freedom used to compute p values and confidence intervals. A single numeric value between 1 and Inf. When using joint_test = "f", the df argument should be a numeric vector of length 2.

equivalence

Numeric vector of length 2: bounds used for the two-one-sided test (TOST) of equivalence, and for the non-inferiority and non-superiority tests. See the Details section below.

joint

Joint test of statistical significance. The null hypothesis value can be set using the hypothesis argument.

• FALSE: hypotheses are not tested jointly.
• TRUE: all parameters are tested jointly.
• String: a regular expression to match parameters to be tested jointly. grep(joint, perl = TRUE)
• Character vector of parameter names to be tested. Characters refer to the names of the vector returned by coef(object).
• Integer vector of indices. Which parameter positions to test jointly.

joint_test

A character string specifying the type of test, either "f" or "chisq". The null hypothesis is set by the hypothesis argument, with a default null equal to 0 for all parameters.

numderiv

String or list of strings indicating the method to use for the numeric differentiation used to compute delta method standard errors.

• "fdforward": finite difference method with forward differences (default)
• "fdcenter": finite difference method with central differences
• "richardson": Richardson extrapolation method
• Extra arguments can be specified by passing a list to the numderiv argument, with the name of the method first and named arguments following, e.g., numderiv = list("fdcenter", eps = 1e-5). When an unknown argument is used, marginaleffects prints the list of valid arguments for each method.

...

Additional arguments are passed to the predict() method supplied by the modeling package. These arguments are particularly useful for mixed-effects or Bayesian models (see the online vignettes on the marginaleffects website). Available arguments can vary from model to model, depending on the range of supported arguments by each modeling package. See the "Model-Specific Arguments" section of the ?slopes documentation for a non-exhaustive list of available arguments.

Joint hypothesis tests

The test statistic for the joint Wald test is calculated as (R * theta_hat - r)' * inv(R * V_hat * R') * (R * theta_hat - r) / Q, where theta_hat is the vector of estimated parameters, V_hat is the estimated covariance matrix, R is a Q x P matrix for testing Q hypotheses on P parameters, r is a Q x 1 vector for the null hypothesis, and Q is the number of rows in R. If the test is a Chi-squared test, the test statistic is not normalized (i.e., it is not divided by Q).

The p-value is then calculated based on either the F-distribution (for F-test) or the Chi-squared distribution (for Chi-squared test). For the F-test, the degrees of freedom are Q and (n - P), where n is the sample size and P is the number of parameters. For the Chi-squared test, the degrees of freedom are Q.
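The formulas above can be verified by hand. The sketch below (base R only, not the package's internal code) computes the joint Wald F statistic for two coefficients of an illustrative model, and checks it against the F test from a nested model comparison, which is equivalent for linear restrictions on a least-squares fit:

```r
mod <- lm(mpg ~ hp + wt + factor(cyl), data = mtcars)
theta_hat <- coef(mod)
V_hat <- vcov(mod)

# Jointly test the two cylinder coefficients against zero (Q = 2 hypotheses)
P <- length(theta_hat)
R <- matrix(0, nrow = 2, ncol = P, dimnames = list(NULL, names(theta_hat)))
R[1, "factor(cyl)6"] <- 1
R[2, "factor(cyl)8"] <- 1
r <- c(0, 0)
Q <- nrow(R)

# (R theta_hat - r)' inv(R V_hat R') (R theta_hat - r) / Q
dev <- R %*% theta_hat - r
f_stat <- drop(t(dev) %*% solve(R %*% V_hat %*% t(R)) %*% dev) / Q
p_value <- pf(f_stat, df1 = Q, df2 = nrow(mtcars) - P, lower.tail = FALSE)

# Sanity check: same test via comparison with the restricted model
f_anova <- anova(lm(mpg ~ hp + wt, data = mtcars), mod)$F[2]
c(f_stat = f_stat, f_anova = f_anova)  # the two F statistics agree
```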

Equivalence, Inferiority, Superiority

Let $$\theta$$ be an estimate, $$\sigma_\theta$$ its estimated standard error, and $$[a, b]$$ the bounds of the interval supplied to the equivalence argument.

Non-inferiority:

• $$H_0$$: $$\theta \leq a$$

• $$H_1$$: $$\theta > a$$

• $$t=(\theta - a)/\sigma_\theta$$

• p: Upper-tail probability

Non-superiority:

• $$H_0$$: $$\theta \geq b$$

• $$H_1$$: $$\theta < b$$

• $$t=(\theta - b)/\sigma_\theta$$

• p: Lower-tail probability

Equivalence: Two One-Sided Tests (TOST)

• p: Maximum of the non-inferiority and non-superiority p values.
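These definitions are easy to reproduce by hand. The sketch below (base R only, using the normal approximation, with illustrative bounds [-0.1, 0.1]) computes the three p values for the hp coefficient of a linear model; since the slope of a linear model equals its coefficient, the results should match the avg_slopes() equivalence example in the Examples section:

```r
mod <- lm(mpg ~ hp + factor(gear), data = mtcars)
theta <- coef(mod)[["hp"]]            # estimate
sigma <- sqrt(vcov(mod)["hp", "hp"])  # standard error
a <- -0.1; b <- 0.1                   # equivalence bounds [a, b]

# Non-inferiority: H0 theta <= a; upper-tail probability
p_noninf <- pnorm((theta - a) / sigma, lower.tail = FALSE)
# Non-superiority: H0 theta >= b; lower-tail probability
p_nonsup <- pnorm((theta - b) / sigma, lower.tail = TRUE)
# TOST equivalence: maximum of the two one-sided p values
p_equiv <- max(p_noninf, p_nonsup)

round(c(noninf = p_noninf, nonsup = p_nonsup, equiv = p_equiv), 5)
```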

Thanks to Russell V. Lenth for the excellent emmeans package and documentation which inspired this feature.

Examples

library(marginaleffects)
mod <- lm(mpg ~ hp + wt + factor(cyl), data = mtcars)

hypotheses(mod)

Term Estimate Std. Error     z Pr(>|z|)     S   2.5 %    97.5 %
(Intercept)   35.8460      2.041 17.56   <0.001 227.0 31.8457 39.846319
hp            -0.0231      0.012 -1.93   0.0531   4.2 -0.0465  0.000306
wt            -3.1814      0.720 -4.42   <0.001  16.6 -4.5918 -1.771012
factor(cyl)6  -3.3590      1.402 -2.40   0.0166   5.9 -6.1062 -0.611803
factor(cyl)8  -3.1859      2.170 -1.47   0.1422   2.8 -7.4399  1.068169

Columns: term, estimate, std.error, statistic, p.value, s.value, conf.low, conf.high 
# Test of equality between coefficients
hypotheses(mod, hypothesis = "hp = wt")

Estimate Std. Error    z Pr(>|z|)    S 2.5 % 97.5 %
3.16       0.72 4.39   <0.001 16.4  1.75   4.57

Term: hp = wt
Columns: term, estimate, std.error, statistic, p.value, s.value, conf.low, conf.high 
# Non-linear function
hypotheses(mod, hypothesis = "exp(hp + wt) = 0.1")

Estimate Std. Error     z Pr(>|z|)   S  2.5 %  97.5 %
-0.0594     0.0292 -2.04   0.0418 4.6 -0.117 -0.0022

Term: exp(hp + wt) = 0.1
Columns: term, estimate, std.error, statistic, p.value, s.value, conf.low, conf.high 
# Robust standard errors
hypotheses(mod, hypothesis = "hp = wt", vcov = "HC3")

Estimate Std. Error    z Pr(>|z|)    S 2.5 % 97.5 %
3.16      0.805 3.92   <0.001 13.5  1.58   4.74

Term: hp = wt
Columns: term, estimate, std.error, statistic, p.value, s.value, conf.low, conf.high 
# b1, b2, ... shortcuts can be used to identify the position of the
# parameters of interest in the output of hypotheses(mod)
hypotheses(mod, hypothesis = "b2 = b3")

Estimate Std. Error    z Pr(>|z|)    S 2.5 % 97.5 %
3.16       0.72 4.39   <0.001 16.4  1.75   4.57

Term: b2 = b3
Columns: term, estimate, std.error, statistic, p.value, s.value, conf.low, conf.high 
# wildcard
hypotheses(mod, hypothesis = "b* / b2 = 1")

Term Estimate Std. Error     z Pr(>|z|)   S   2.5 % 97.5 %
b1 / b2 = 1    -1551      764.0 -2.03   0.0423 4.6 -3048.9    -54
b2 / b2 = 1        0         NA    NA       NA  NA      NA     NA
b3 / b2 = 1      137       78.1  1.75   0.0804 3.6   -16.6    290
b4 / b2 = 1      144      111.0  1.30   0.1938 2.4   -73.3    362
b5 / b2 = 1      137      151.9  0.90   0.3679 1.4  -161.0    435

Columns: term, estimate, std.error, statistic, p.value, s.value, conf.low, conf.high 
# term names with special characters have to be enclosed in backticks
hypotheses(mod, hypothesis = "`factor(cyl)6` = `factor(cyl)8`")

Estimate Std. Error      z Pr(>|z|)   S 2.5 % 97.5 %
-0.173       1.65 -0.105    0.917 0.1 -3.41   3.07

Term: factor(cyl)6 = factor(cyl)8
Columns: term, estimate, std.error, statistic, p.value, s.value, conf.low, conf.high 
mod2 <- lm(mpg ~ hp * drat, data = mtcars)
hypotheses(mod2, hypothesis = "`hp:drat` = drat")

Estimate Std. Error    z Pr(>|z|)   S 2.5 % 97.5 %
-6.08       2.89 -2.1   0.0357 4.8 -11.8 -0.405

Term: hp:drat = drat
Columns: term, estimate, std.error, statistic, p.value, s.value, conf.low, conf.high 
# predictions(), comparisons(), and slopes()
mod <- glm(am ~ hp + mpg, data = mtcars, family = binomial)
cmp <- comparisons(mod, newdata = "mean")
hypotheses(cmp, hypothesis = "b1 = b2")

Estimate Std. Error    z Pr(>|z|)   S  2.5 %  97.5 %
-0.28      0.104 -2.7  0.00684 7.2 -0.483 -0.0771

Term: b1=b2
Type:  response
Columns: term, estimate, std.error, statistic, p.value, s.value, conf.low, conf.high 
mfx <- slopes(mod, newdata = "mean")
hypotheses(mfx, hypothesis = "b2 = 0.2")

Estimate Std. Error     z Pr(>|z|)   S  2.5 % 97.5 %
0.0938      0.109 0.857    0.391 1.4 -0.121  0.308

Term: b2=0.2
Type:  response
Columns: term, estimate, std.error, statistic, p.value, s.value, conf.low, conf.high 
pre <- predictions(mod, newdata = datagrid(hp = 110, mpg = c(30, 35)))
hypotheses(pre, hypothesis = "b1 = b2")

Estimate Std. Error      z Pr(>|z|)   S     2.5 %   97.5 %
-3.57e-05   0.000172 -0.207    0.836 0.3 -0.000373 0.000302

Term: b1=b2
Type:  response
Columns: term, estimate, std.error, statistic, p.value, s.value, conf.low, conf.high 
# The hypothesis argument can be used to compute standard errors for fitted values
mod <- glm(am ~ hp + mpg, data = mtcars, family = binomial)

f <- function(x) predict(x, type = "link", newdata = mtcars)
p <- hypotheses(mod, hypothesis = f)
head(p)

Term Estimate Std. Error      z Pr(>|z|)   S 2.5 % 97.5 %
1   -1.098      0.716 -1.534    0.125 3.0 -2.50  0.305
2   -1.098      0.716 -1.534    0.125 3.0 -2.50  0.305
3    0.233      0.781  0.299    0.765 0.4 -1.30  1.764
4   -0.595      0.647 -0.919    0.358 1.5 -1.86  0.674
5   -0.418      0.647 -0.645    0.519 0.9 -1.69  0.851
6   -5.026      2.195 -2.290    0.022 5.5 -9.33 -0.725

Columns: term, estimate, std.error, statistic, p.value, s.value, conf.low, conf.high 
f <- function(x) predict(x, type = "response", newdata = mtcars)
p <- hypotheses(mod, hypothesis = f)
head(p)

Term Estimate Std. Error     z Pr(>|z|)   S   2.5 % 97.5 %
1  0.25005     0.1343 1.862  0.06257 4.0 -0.0131 0.5132
2  0.25005     0.1343 1.862  0.06257 4.0 -0.0131 0.5132
3  0.55803     0.1926 2.898  0.00376 8.1  0.1806 0.9355
4  0.35560     0.1483 2.398  0.01648 5.9  0.0650 0.6462
5  0.39710     0.1550 2.562  0.01041 6.6  0.0933 0.7009
6  0.00652     0.0142 0.459  0.64653 0.6 -0.0213 0.0344

Columns: term, estimate, std.error, statistic, p.value, s.value, conf.low, conf.high 
# Complex aggregation
# Step 1: Collapse predicted probabilities by outcome level, for each individual
# Step 2: Take the mean of the collapsed probabilities by group and cyl
library(MASS)
library(dplyr)

dat <- transform(mtcars, gear = factor(gear))
mod <- polr(gear ~ factor(cyl) + hp, dat)

aggregation_fun <- function(x) {
predictions(x, vcov = FALSE) |>
mutate(group = ifelse(group %in% c("3", "4"), "3 & 4", "5")) |>
summarize(estimate = sum(estimate), .by = c("rowid", "cyl", "group")) |>
summarize(estimate = mean(estimate), .by = c("cyl", "group")) |>
rename(term = cyl)
}

hypotheses(mod, hypothesis = aggregation_fun)

Group Term Estimate Std. Error     z Pr(>|z|)     S  2.5 % 97.5 %
3 & 4    6   0.8390     0.0651 12.89   <0.001 123.9 0.7115  0.967
3 & 4    4   0.7197     0.1099  6.55   <0.001  34.0 0.5044  0.935
3 & 4    8   0.9283     0.0174 53.45   <0.001   Inf 0.8943  0.962
5        6   0.1610     0.0651  2.47   0.0134   6.2 0.0334  0.289
5        4   0.2803     0.1099  2.55   0.0108   6.5 0.0649  0.496
5        8   0.0717     0.0174  4.13   <0.001  14.7 0.0377  0.106

Columns: term, group, estimate, std.error, statistic, p.value, s.value, conf.low, conf.high 
# Equivalence, non-inferiority, and non-superiority tests
mod <- lm(mpg ~ hp + factor(gear), data = mtcars)
p <- predictions(mod, newdata = "median")
hypotheses(p, equivalence = c(17, 18))

Estimate Std. Error    z Pr(>|z|)     S 2.5 % 97.5 % p (NonSup) p (NonInf)
19.7          1 19.6   <0.001 281.3  17.7   21.6      0.951    0.00404
p (Equiv)  hp gear
0.951 123    3

Type:  response
Columns: rowid, estimate, std.error, statistic, p.value, s.value, conf.low, conf.high, hp, gear, mpg, statistic.noninf, statistic.nonsup, p.value.noninf, p.value.nonsup, p.value.equiv 
mfx <- avg_slopes(mod, variables = "hp")
hypotheses(mfx, equivalence = c(-.1, .1))

Estimate Std. Error     z Pr(>|z|)    S   2.5 %  97.5 % p (NonSup) p (NonInf)
-0.0669      0.011 -6.05   <0.001 29.4 -0.0885 -0.0452     <0.001    0.00135
p (Equiv)
0.00135

Term: hp
Type:  response
Comparison: mean(dY/dX)
Columns: term, contrast, estimate, std.error, statistic, p.value, s.value, conf.low, conf.high, predicted_lo, predicted_hi, predicted, statistic.noninf, statistic.nonsup, p.value.noninf, p.value.nonsup, p.value.equiv 
cmp <- avg_comparisons(mod, variables = "gear", hypothesis = "pairwise")
hypotheses(cmp, equivalence = c(0, 10))

Estimate Std. Error     z Pr(>|z|)   S 2.5 % 97.5 % p (NonSup) p (NonInf)
-3.94       2.05 -1.92   0.0543 4.2 -7.95 0.0727     <0.001      0.973
p (Equiv)
0.973

Term: (mean(4) - mean(3)) - (mean(5) - mean(3))
Type:  response
Columns: term, estimate, std.error, statistic, p.value, s.value, conf.low, conf.high, statistic.noninf, statistic.nonsup, p.value.noninf, p.value.nonsup, p.value.equiv 
# joint hypotheses: character vector
model <- lm(mpg ~ as.factor(cyl) * hp, data = mtcars)
hypotheses(model, joint = c("as.factor(cyl)6:hp", "as.factor(cyl)8:hp"))


Joint hypothesis test:
as.factor(cyl)6:hp = 0
as.factor(cyl)8:hp = 0

F Pr(>|F|) Df 1 Df 2
2.11    0.142    2   26

Columns: statistic, p.value, df1, df2 
# joint hypotheses: regular expression
hypotheses(model, joint = "cyl")


Joint hypothesis test:
as.factor(cyl)6 = 0
as.factor(cyl)8 = 0
as.factor(cyl)6:hp = 0
as.factor(cyl)8:hp = 0

F Pr(>|F|) Df 1 Df 2
5.7  0.00197    4   26

Columns: statistic, p.value, df1, df2 
# joint hypotheses: integer indices
hypotheses(model, joint = 2:3)


Joint hypothesis test:
as.factor(cyl)6 = 0
as.factor(cyl)8 = 0

F Pr(>|F|) Df 1 Df 2
6.12  0.00665    2   26

Columns: statistic, p.value, df1, df2 
# joint hypotheses: different null hypotheses
hypotheses(model, joint = 2:3, hypothesis = 1)


Joint hypothesis test:
as.factor(cyl)6 = 1
as.factor(cyl)8 = 1

F Pr(>|F|) Df 1 Df 2
6.84  0.00411    2   26

Columns: statistic, p.value, df1, df2 
hypotheses(model, joint = 2:3, hypothesis = 1:2)


Joint hypothesis test:
as.factor(cyl)6 = 1
as.factor(cyl)8 = 2

F Pr(>|F|) Df 1 Df 2
7.47  0.00273    2   26

Columns: statistic, p.value, df1, df2 
# joint hypotheses: marginaleffects object
cmp <- avg_comparisons(model)
hypotheses(cmp, joint = "cyl")


Joint hypothesis test:
cyl mean(6) - mean(4) = 0
cyl mean(8) - mean(4) = 0

F Pr(>|F|) Df 1 Df 2
1.6    0.221    2   26

Columns: statistic, p.value, df1, df2