Measurement invariance - creating a publication-ready table

Measurement invariance

This post is dedicated to one particular problem: how to create a publication-ready table after conducting measurement invariance testing in R.

There are many nice tutorials on how to conduct measurement invariance testing with the latent variable analysis package lavaan in R, or with related packages such as semTools. By the way, many thanks to Erin Buchanan for her very instructive tutorials!

However, after invariance testing is done, it is not always easy to report the results in a structured, easy-to-read, publication-ready way.

To start more broadly: since I like conducting statistical analyses and reporting their results in a time-efficient way, I have long looked for an easy way to create a publication-ready table, ideally in APA format. I spent a relatively long time searching for the most efficient approach, but could not find a satisfactory solution. For this reason, I put different pieces of code together and created the mitab function, published in the psychtoolbox package. An example of how this function was used in a published study can be found here.

Now, let's demonstrate how the mitab function works on an example. Below you can find a step-by-step procedure that results in a publication-ready table. The example used here comes from a paper that my colleagues and I recently published; the code for all analyses, including the measurement invariance testing, can be found in the OSF repository.

1. Package installation and data loading

Let's first install and load the necessary packages:

knitr::opts_chunk$set(echo = TRUE)

# List of packages to check and potentially install
packages_to_install <- c("remotes", "lavaan", "papaja")

# Check installed packages
installed <- installed.packages()

for (pkg in packages_to_install) {
  if (!pkg %in% installed[, "Package"]) {
    install.packages(pkg)
  }
}

# necessary packages:
library(remotes)
library(lavaan)
library(papaja)

# A key package is psychtoolbox, which can be installed from GitLab.
# This code will install it for you if you don't have it yet.
if (!requireNamespace("psychtoolbox", quietly = TRUE)) {
  remotes::install_gitlab("lukas.novak/psychtoolbox", dependencies = TRUE)
}
library(psychtoolbox)

And load the data.

data <- lavaan::HolzingerSwineford1939

OK, let's try to understand the dataset.

The HolzingerSwineford1939 dataset from the lavaan package is a classic dataset, often used to demonstrate psychometric analyses such as confirmatory factor analysis and structural equation modeling.

The dataset consists of mental ability test scores of 301 seventh- and eighth-grade students from two schools, Pasteur and Grant-White. The data were collected as part of a study by Holzinger and Swineford in 1939.

The lavaan version of the dataset contains 15 variables in total, including:

  • 9 variables (x1-x9) representing different psychometric tests, which load on three factors:
    1. Visual: visual perception (x1), cubes (x2), lozenges (x3)
    2. Textual: paragraph comprehension (x4), sentence completion (x5), word meaning (x6)
    3. Speed: speeded addition (x7), speeded counting of dots (x8), speeded discrimination of straight and curved capitals (x9)
  • Demographic and other information, such as school (Pasteur or Grant-White), grade, and sex of the students.
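A quick way to verify this structure is to inspect the data directly. The calls below are plain base R, shown here just as a sanity check:

str(data)          # overview of all 15 variables and their types
table(data$school) # group sizes for Pasteur and Grant-White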

Now, we will use this dataset to demonstrate how to create a measurement invariance table.

2. Performing measurement invariance testing

# The famous Holzinger and Swineford (1939) example
# Defining a confirmatory factor analysis model with three factors: visual, textual, and speed.
# Each factor is defined by three observed variables (x1 to x9).
HS.model <- ' visual  =~ x1 + x2 + x3
textual =~ x4 + x5 + x6
speed   =~ x7 + x8 + x9 '

# Loading the 'lavaan' package, which is used for structural equation modeling.
library(lavaan)

# Using the HolzingerSwineford1939 dataset from the 'lavaan' package.
dat <- HolzingerSwineford1939

# Using the 'mitab' function to perform measurement invariance testing between two groups: Grant-White and Pasteur.
# This function will compare the specified model across these two groups.
res.tab.mi <- mitab(
    group1_nam = "Grant-White", # Name of the first group for comparison
    group2_nam = "Pasteur",     # Name of the second group for comparison
    ordered = FALSE,            # Indicates whether variables are ordered (ordinal) or not
    model = HS.model,           # The specified CFA model
    data = dat,                 # The dataset to use
    std.lv = TRUE,              # Standardize latent variables
    meanstructure = TRUE,       # Include mean structures in the model
    group = "school",           # The grouping variable in the dataset
    yes_no_results = TRUE,      # Display results in a binary (yes/no) format for simplicity
    estimator = "MLR",          # Estimator to use, MLR is robust maximum likelihood
    robust = TRUE,              # Use robust statistics
    cfi.difference = TRUE,      # Calculate and report CFI difference test
    rmsea.difference = TRUE     # Calculate and report RMSEA difference test
)
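As a side note, mitab wraps the usual sequence of nested multi-group models. If you want to see roughly what such a table summarizes, the same sequence can be fit by hand with lavaan. This is only a sketch of the standard group.equal approach, not mitab's exact internals:

# Fit the nested sequence of invariance models directly with lavaan
configural <- cfa(HS.model, data = dat, group = "school", estimator = "MLR")
metric     <- cfa(HS.model, data = dat, group = "school", estimator = "MLR",
                  group.equal = "loadings")
scalar     <- cfa(HS.model, data = dat, group = "school", estimator = "MLR",
                  group.equal = c("loadings", "intercepts"))
strict     <- cfa(HS.model, data = dat, group = "school", estimator = "MLR",
                  group.equal = c("loadings", "intercepts", "residuals"))

# Compare the nested models with (scaled) chi-square difference tests
lavTestLRT(configural, metric, scalar, strict)

# Extract the fit indices reported in the table below
sapply(list(configural = configural, metric = metric,
            scalar = scalar, strict = strict),
       fitMeasures, fit.measures = c("cfi.robust", "rmsea.robust", "srmr"))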

Printing the created table:

papaja::apa_table(res.tab.mi) # Print the table in APA format - this works if you render your Rmd document to docx
Table 1:

| Model | χ² | df | p-value | CFI | ΔCFI | TLI | RMSEA (90% CI) | ΔRMSEA | SRMR | Model difference - CFI | Model difference - RMSEA |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Overall model | 87.132 | 24 | p < .001 | 0.925 | NA | 0.888 | 0.093 (0.073-0.115) | NA | 0.06 | NA | NA |
| Grant-White | 53.746 | 24 | p < .001 | 0.933 | 0.01 | 0.899 | 0.092 (0.059-0.126) | 0.00 | 0.066 | No | No |
| Pasteur | 68.106 | 24 | p < .001 | 0.895 | 0.04 | 0.842 | 0.109 (0.078-0.14) | 0.02 | 0.07 | Yes | Yes |
| Configural model | 121.741 | 48 | p < .001 | 0.914 | 0.02 | 0.872 | 0.101 (0.078-0.124) | 0.01 | 0.068 | Yes | No |
| Metric model | 125.997 | 54 | p < .001 | 0.917 | 0.00 | 0.889 | 0.094 (0.073-0.116) | 0.01 | 0.072 | No | No |
| Scalar model | 166.748 | 60 | p < .001 | 0.876 | 0.04 | 0.851 | 0.109 (0.089-0.129) | 0.02 | 0.082 | Yes | No |
| Strict model | 181.429 | 69 | p < .001 | 0.87 | 0.01 | 0.864 | 0.104 (0.086-0.123) | 0.00 | 0.088 | No | No |
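As the code comment above notes, apa_table produces an APA-formatted table when the document is knit to Word. A minimal YAML header for that could look like the following (word_document is the plain rmarkdown route; papaja::apa6_docx gives a full APA manuscript template):

---
title: "Measurement invariance"
output: word_document
---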

Interpretation of Measurement Invariance Test Results

The output from the mitab function provides a detailed comparison of the measurement model across groups. The results of the measurement invariance tests, focusing on differences in the Comparative Fit Index (CFI) and the Root Mean Square Error of Approximation (RMSEA), show whether the measurement model operates similarly across the Grant-White and Pasteur groups.

  1. Assessing Measurement Invariance:
    • Configural Model (baseline model): CFI = 0.914, RMSEA = 0.101. This model serves as the baseline for comparison.

    • Metric Model (factor loadings equality): ΔCFI = 0.003, ΔRMSEA = 0.007. These minimal changes suggest that factor loadings are invariant across groups.

    • Scalar Model (intercepts equality): ΔCFI = 0.041, ΔRMSEA = 0.015. The CFI change exceeds the common threshold (ΔCFI ≤ 0.01), and the RMSEA change sits right at its threshold (ΔRMSEA ≤ 0.015), indicating potential issues with intercept invariance.

    • Strict Model (residuals equality): ΔCFI = 0.006, ΔRMSEA = 0.005. The changes are within acceptable limits, suggesting residuals invariance.

    • The results indicate that while the model maintains metric invariance (factor loadings are equivalent across groups), it fails to fully achieve scalar invariance (intercepts are not equivalent). This suggests that even if students from the Grant-White and Pasteur schools had the same level of a latent variable (e.g., processing speed), their observed scores on the processing speed tests could still differ. A common follow-up in this situation is to test for partial invariance, as sketched after this list.
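Although it was not part of the analysis above, a typical next step after scalar invariance fails is to locate the offending intercepts and test partial scalar invariance. Below is a sketch using the scalar and metric models from the manual lavaan sequence shown earlier; the freed intercept x3~1 is only a hypothetical illustration, not a finding from this dataset:

# Score tests indicate which equality constraints contribute most to misfit
lavTestScore(scalar)

# Refit the scalar model while freeing one (hypothetical) non-invariant intercept
partial_scalar <- cfa(HS.model, data = dat, group = "school", estimator = "MLR",
                      group.equal = c("loadings", "intercepts"),
                      group.partial = "x3~1")

# The partial scalar model should no longer fit much worse than the metric model
lavTestLRT(metric, partial_scalar)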

Conclusion

While the initial fit indices suggest an adequate fit to the data, the measurement invariance tests (at the scalar level) indicate potential differences in how the latent constructs are manifested across the groups. This implies that caution should be exercised when comparing the observed scores of the Grant-White and Pasteur groups.
