Report Interpretation

Your institution’s PACE reports consist of eight tables and two figures. The tables are of two types: frequency distributions (Tables 1-4) and item mean comparisons (Tables 5-8). Figure 1 compares your institution’s overall PACE mean and the means for each of the four PACE climate factors (Institutional Structure, Student Focus, Supervisory Relationships, and Teamwork) with the PACE normbase as a whole, institutions of a similar size, and one comparison group chosen by your institution.

Overall Report Interpretation

Frequencies: the number and percentage of respondents who replied with each possible answer to a specific question

Means: the calculated average of respondents’ answers to a specific question (frequencies and means are illustrated in the short sketch following these definitions)

Significance: an indicator that the difference between your current mean and the comparison group mean is not due to chance alone. However, even when a difference is statistically significant, it may not be practically meaningful. For this reason, we also report an effect size in the tables.

Effect Size: the higher the absolute value of the effect size, the larger the difference between the two items being compared, whether that difference is positive or negative. We encourage your institution’s leadership to pay special attention to items with effect sizes of .2 or greater.
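
As a minimal sketch of the frequencies and means defined above (not the actual PACE scoring code), the following Python example tallies invented responses to one item, coding the five response options 1-5 and excluding “not applicable” answers as the detailed sections below describe; the 1-5 coding direction is an assumption made for illustration.

    from collections import Counter

    # Hypothetical responses to one PACE item; the data are invented.
    responses = [
        "very satisfied", "satisfied", "satisfied",
        "neither satisfied nor dissatisfied",
        "dissatisfied", "not applicable", "very dissatisfied", "satisfied",
    ]

    # Assumed 1-5 coding; "not applicable" is excluded as missing.
    scale = {
        "very dissatisfied": 1,
        "dissatisfied": 2,
        "neither satisfied nor dissatisfied": 3,
        "satisfied": 4,
        "very satisfied": 5,
    }

    valid = [r for r in responses if r in scale]   # drop "not applicable"
    counts = Counter(valid)

    # Frequencies: count and percentage of valid respondents per option.
    for option in scale:
        n = counts.get(option, 0)
        print(f"{option}: {n} ({100 * n / len(valid):.1f}%)")

    # Mean: average of the coded valid responses.
    mean = sum(scale[r] for r in valid) / len(valid)
    print(f"N = {len(valid)}, mean = {mean:.2f}")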

Detailed Report Interpretation

Comparison Groups

Every institution that participates in PACE receives comparison data in three categories of its choosing. When a comparison group is selected, your institution is compared to all other institutions in the PACE normbase that share your institution’s classification along that dimension. Clients are able to select three of the following categories:

  • PACE Normbase
  • Previous Administration
  • Census Region
  • Size
  • Locale
  • Degree Type
  • Division
  • State

Interpreting Frequency Distribution Tables

The frequency distribution tables in your reports present statistics for each question on the PACE Survey. Questions are grouped by the four PACE climate factors (see page two for descriptions of the four climate factors), with one table for each factor. In the first (gray) column, each table presents the count and percentage of respondents at your institution who answered “very satisfied,” “satisfied,” “neither satisfied nor dissatisfied,” “dissatisfied,” and “very dissatisfied” for each question on the PACE Survey. Respondents who answered “not applicable” are treated as missing responses and are not included in frequencies.

The other three columns provide the same statistics for your selected comparison groups. These tables do not report statistical significance or effect size; they are intended only to show the distribution of responses within your administration and your comparison groups. To understand how, and to what extent, your institution differed from your comparison groups, review the mean comparison tables throughout your reports.
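
To make that layout concrete, here is a small sketch that prints the percentage distribution for one item at a hypothetical institution next to a single hypothetical comparison group; all numbers are invented for illustration.

    # Invented percentage distributions for one PACE item.
    options = ["very satisfied", "satisfied", "neither satisfied nor dissatisfied",
               "dissatisfied", "very dissatisfied"]
    institution = [22.0, 41.5, 18.0, 12.5, 6.0]   # your institution (gray column)
    comparison = [19.3, 44.1, 17.2, 13.0, 6.4]    # one comparison group

    print(f"{'Response':<36}{'Institution %':>15}{'Comparison %':>15}")
    for opt, inst, comp in zip(options, institution, comparison):
        print(f"{opt:<36}{inst:>15.1f}{comp:>15.1f}")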

Interpreting Item Mean Comparisons Tables

The mean comparison tables in your reports present your institution’s mean for each question on the PACE Survey and follow the same structure as the frequency distribution tables. The gray column presents your institution’s data for each PACE item, showing the total number of respondents (N) to that item and the mean score for that item. The other three columns present mean difference comparisons between your institution and the three comparison groups you selected, with corresponding statistical significance and effect size. In your reports, — indicates that results are redacted for confidentiality, whereas ∅ indicates that a mean could not be calculated because there were zero responses. Respondents who answered “not applicable” are treated as missing responses and are not included in means.
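
As a hedged sketch of what a mean-difference cell represents (not the reports’ actual computation), the following compares an invented set of coded responses against an invented comparison-group mean:

    # Invented coded responses (1-5) to one item; "not applicable" already excluded.
    institution_scores = [4, 5, 3, 4, 4, 2, 5, 4]
    comparison_mean = 3.62            # hypothetical comparison-group mean

    n = len(institution_scores)
    institution_mean = sum(institution_scores) / n
    mean_difference = institution_mean - comparison_mean

    print(f"N = {n}, mean = {institution_mean:.2f}")
    print(f"mean difference = {mean_difference:+.2f}")   # sign shows direction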

Statistical Significance

Statistical significance is an indicator of the probability that the difference between your current mean and the comparison group mean is not due to chance alone. There are three levels of statistical significance, or p-value, used in our reports: p < .05 (*), p < .01 (**), and p < .001 (***). If there is a statistically significant difference between your institution’s current mean and a comparison group’s mean, one, two, or three asterisks will appear in the “Sig.” column, depending on the level of significance. If the statistical significance column for an item is blank, the mean difference for that item may be due to chance alone and should not be considered meaningful when informing institutional decision-making. In the example below, there is a significant difference between NOCC and the Small 2-year comparison group. The three asterisks indicate the significance level is .001, meaning there is less than a 0.1% chance that this result is due to chance alone. However, even if there is a statistically significant difference, there may not be a practically meaningful difference between two means, especially if your institutional sample is large. For this reason, we report an effect size in the tables as well.
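
Separately from the report example referenced above, the asterisk convention can be sketched in code. The sketch below uses SciPy’s independent-samples t-test (Welch’s variant) as an illustrative stand-in, since the reports’ exact test is not specified here, and the response data are invented.

    from scipy import stats

    # Invented coded responses (1-5) for one item.
    institution = [4, 5, 4, 3, 5, 4, 4, 5, 3, 4]
    comparison = [3, 3, 4, 2, 3, 4, 3, 2, 3, 3]

    # Welch's t-test as a stand-in for the reports' (unspecified) test.
    p = stats.ttest_ind(institution, comparison, equal_var=False).pvalue

    # Map the p-value to the reports' asterisk convention.
    if p < .001:
        sig = "***"
    elif p < .01:
        sig = "**"
    elif p < .05:
        sig = "*"
    else:
        sig = ""   # blank: the difference may be due to chance alone

    print(f"p = {p:.4f}  Sig. column: '{sig}'")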

Effect Size

When making comparisons between your current administration and a comparison group, you want to know whether the statistically significant differences are practically meaningful differences. Not all differences are meaningful and worth exploring, so we begin by looking for statistically significant differences, as discussed in the statistical significance section above. While the significance level, or p-value, indicates that a difference is statistically significant, it does not tell us how large that difference is. Effect size (Cohen’s d) describes the magnitude of the difference, which helps to further understand the relationship between the two items being compared. The higher the absolute value of the effect size, the larger the difference between the two items being compared, whether that difference is positive or negative. Practically speaking, we encourage your institution’s leadership to pay special attention to items with effect sizes of .2 or greater, as these are the areas in which there are the largest differences between your institution and your selected comparison groups.
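
As a final hedged sketch, Cohen’s d can be computed as the mean difference divided by a pooled standard deviation; the pooled-variance formula below is the standard textbook definition and the data are invented, so this illustrates the concept rather than the reports’ exact computation.

    import statistics

    # Invented coded responses (1-5); "not applicable" already excluded.
    institution = [4, 5, 4, 3, 5, 4, 4, 5, 3, 4]
    comparison = [3, 3, 4, 2, 3, 4, 3, 2, 3, 3]

    def cohens_d(a, b):
        """Cohen's d: mean difference over the pooled standard deviation."""
        na, nb = len(a), len(b)
        var_a, var_b = statistics.variance(a), statistics.variance(b)  # sample variances
        pooled_sd = (((na - 1) * var_a + (nb - 1) * var_b) / (na + nb - 2)) ** 0.5
        return (statistics.mean(a) - statistics.mean(b)) / pooled_sd

    d = cohens_d(institution, comparison)
    # |d| >= .2 flags items worth leadership attention, per the guidance above.
    print(f"Cohen's d = {d:.2f}, attention flag: {abs(d) >= 0.2}")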