Introduction
Prompted by recent advances in science and technology, as well as societal changes brought about by globalization, educators worldwide are discussing new perspectives on education. As a result, many schools have begun reviewing their curricula to better support students in this evolving environment. One significant outcome of these changes is the adoption of the Science, Technology, Engineering, Arts, and Mathematics (STEAM) education framework (Quigley et al., 2020), which integrates these five disciplines into a unified curriculum (Hom & Dobrijevic, 2022). According to Boice et al. (2021), STEAM “involves utilizing student-centered instructional pedagogies, including [problem-based inquiry learning], group learning, and real-world application, to increase cross-disciplinary content knowledge through learning goals for students in both Science, Technology, Engineering, and Mathematics (STEM) and arts disciplines” (p. 5). Through STEAM education, students can acquire essential 21st-century knowledge and skills, such as critical thinking, creativity, problem-solving, innovation, communication, cooperation, and entrepreneurship (Jolly, 2014). Alongside changes to traditional curricula, these developments have created a need for professional development that prepares teachers to plan integrated lessons (Belbase et al., 2022). Because STEAM education encompasses five distinct subjects, it requires a collaborative team approach in which teachers from various disciplines work together (Asghar et al., 2012).
The Ordinal Priority Approach to Group Decision-Making
When integrating opinions within a team, common methods include majority voting or the Borda count. Yet, decisions involving multiple criteria should be shaped directly by those criteria. Various methods have been developed to address the challenges of multi-criteria decision-making (MCDM), which involves selecting the best alternative from a set of options based on criteria provided by multiple experts (Kumar et al., 2017). Among these, one of the most recent is the ordinal priority approach (OPA), developed by Ataei et al. (2020). OPA addresses MCDM problems through a relationship-based approach, whereby ordinal numbers are assigned to criteria, alternatives, and experts’ opinions according to their relative importance (Ataei et al., 2020). It requires only simple input data and is especially suitable for ranking items by importance, as it does not require quantitative numerical data. Moreover, it avoids the need for scaling data, averaging, or pairwise comparison matrices (Penadés-Plà et al., 2016) because it relies solely on ordinal data (Mahmoudi et al., 2021). Additionally, OPA allows for a clearer determination of the dominance among items than using quantitative preferences or exact ratios (Sadeghi et al., 2023). Its flexible information processing also makes it suitable for implementation under dynamic and ambiguous conditions (Ond et al., 2023).
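At its core, OPA reduces the group decision to a single linear program over the experts’ ordinal inputs. The following is a sketch of the standard formulation (after Ataei et al., 2020); the notation is illustrative and may differ from formulae (1)–(4) in Appendix B. Here $i$, $j$, and $k$ denote the ranks of an expert, a criterion, and an alternative, respectively; $m$ is the number of alternatives; and $W_{ijk}$ is the weight of the alternative ranked $k$ by the expert ranked $i$ under the criterion ranked $j$.

```latex
% Sketch of the standard OPA linear program (after Ataei et al., 2020).
% Solving for Z yields the weights W_{ijk}, which are then aggregated.
\begin{align*}
\max\ & Z \\
\text{s.t. } & Z \le i\bigl(j\bigl(k\,(W_{ijk} - W_{ij(k+1)})\bigr)\bigr)
  && \forall i, j;\ k = 1, \dots, m-1 \\
& Z \le i\,j\,m\,W_{ijm} && \forall i, j \\
& \sum_{i}\sum_{j}\sum_{k} W_{ijk} = 1, \qquad W_{ijk} \ge 0
\end{align*}
```

After solving, the weight of each alternative is obtained by summing $W_{ijk}$ over experts and criteria; expert and criterion weights are aggregated analogously. Only the ranks themselves enter the model, which is why no quantitative ratings, scaling, or pairwise comparisons are needed.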
Teacher Collaboration for STEAM Education
Many educators believe that teacher collaboration is fundamental to delivering effective interdisciplinary education (Herro & Quigley, 2016; Margot & Kettler, 2019). Indeed, Reeves et al. (2017) analyzed data from the Trends in International Mathematics and Science Study and identified teacher collaboration as a significant predictor of student achievement. Teachers working within STEAM education require time to coordinate and integrate their respective subjects (Margot & Kettler, 2019). However, Park et al. (2016) found that most teachers reported insufficient time and concerns about increased workload when implementing STEAM education. Given its simplicity, OPA requires no specialized training, making it easy for teachers to adopt and potentially minimizing redundant discussions.
Although teachers acknowledge the need for cross-disciplinary education in STEAM (Herro & Quigley, 2016), collaboration frequently falls short of expectations (Vangrieken et al., 2015). Limitations in communication between teachers from different subjects have also been noted (Al Salami et al., 2017). Differences in teachers’ perceptions of subject integration can result in varied approaches (Wang et al., 2011) and concerns about collaborative lesson planning (Asghar et al., 2012) or following others’ lesson plans (Bagiati & Evangelou, 2015). Misunderstandings among teachers from different subjects can jeopardize the success of an interdisciplinary curriculum, and conflicting subject knowledge may lead teachers to challenge one another’s approaches, potentially making collaboration more difficult or even impossible (Costantino, 2017). One function of OPA in group decision-making is identifying unreliable experts and inappropriate criteria, which helps minimize sources of distrust (Mahmoudi & Javed, 2022).
STEAM Assessment Challenges
Additionally, when implementing STEAM education, teachers must consider factors such as lesson time, student level, task consistency, progress monitoring, technology use, and assessment methods (Herro et al., 2019). The choice of assessment methods varies based on the task, the competencies and skills to be evaluated, and the projects students undertake (Belbase et al., 2022). While traditional assessment methods are inadequate due to STEAM’s interdisciplinary nature (Dubek et al., 2021), the literature lacks clarity on how students’ learning across STEAM subjects at various levels should be assessed, from basic understanding to advanced application (Belbase et al., 2022). There is also limited guidance for teachers on integrating and assessing these diverse competencies and skills (Glisic & Favaro, 2019). Effective assessment must capture the unique aspects of STEAM education. In this regard, OPA offers a promising solution, as it can rank options by accommodating diverse criteria (Kadaei et al., 2023) and can also be applied in fuzzy or ambiguous environments to address real-world uncertainty (Kadaei et al., 2023; Mahmoudi & Javed, 2022).
Group Decision-Making for STEAM Assessment Using OPA
Given OPA’s utility, as explored above, it has the potential to address challenges in teacher collaboration and STEAM assessment. However, the use of structured decision-making tools in education is largely limited to evaluating programs or selecting teachers in higher education. While research on teacher collaboration in STEAM, such as teacher training programs (Boice et al., 2021) and knowledge management approaches (Wu, 2022), exists, none provides specific group decision-making methodologies. This study, therefore, proposes the use of OPA to support teacher collaboration in determining assessment methods in STEAM education. Two research questions (RQs) guide this study:
RQ1: How does the degree of inter-subject differences affect teacher collaboration in STEAM education?
RQ2: How can OPA help eliminate barriers teachers face when collaborating to formulate STEAM assessments?
Methodology
To rank assessment methods in STEAM education, this study recruited five teachers (hereafter, “experts”) from a single high school renowned locally for its extensive STEAM program. These teachers were selected because they are the primary instructors for STEAM education at the school. As this is an exploratory study, the limited generalizability of the findings is acknowledged.
At this school, students work in small groups to explore topics of interest, with teachers assisting with experiments and analyses. Students are given opportunities to present their results at a learning presentation event attended by other grades and invited teachers from other schools. The experts recruited for this study were core members of the school’s STEAM program, with lesson planning typically carried out by two or three teachers working together. The experts completed a preliminary questionnaire, participated in the OPA task, and subsequently completed a post-questionnaire.
Pre-questionnaire
The five experts completed the pre-questionnaire the day before the OPA task. This questionnaire investigated how teachers collaborate with colleagues in planning STEAM assessments to understand the current state of such collaboration. Responses were collected using various formats, including a four-point Likert-type scale, binary yes/no items, and open-ended questions for aspects requiring more detailed exploration (see Appendix A).
Ordinal Priority Approach Task
The OPA task was conducted in November 2024. The subjects taught by the five experts are listed in Table 1. The experts were divided into two groups to match the typical number of teachers involved in lesson planning and to facilitate comparison between decision-making in a homogeneous group (same subject; HoGr) and a heterogeneous group (different subjects; HeGr). The time taken to reach decisions was recorded to provide insight into the widely cited challenge of time constraints in implementing STEAM education. In addition, the decision-making process was recorded, and the participants’ comments were transcribed.
Table 1. Expert Information
| Group | Expert | Subject | Experience (years) |
| HoGr | 1 | Science (geology) | 15 |
| HoGr | 2 | Science (chemistry) | 10 |
| HeGr | 3 | Life science | 29 |
| HeGr | 4 | Informatics | 20 |
| HeGr | 5 | Mathematics | 24 |
The OPA task followed the eight steps of the OPA process (Table 2). The components used in the OPA ranking calculation (Table 2, Step 7) are presented with formulae (1)–(4) in Table 3 (Appendix B); an illustrative code sketch of this model appears after Table 2.
Table 2. The Eight-Step OPA Process
| Step | Process | Content |
| 1 | Identify a goal. | The teachers aim to determine an assessment method. |
| 2 | Decide on the experts to participate in the decision-making and rank them according to their experience and knowledge. | Identify the teachers and rank them based on their position (previously decided during discussion). |
| 3 | Identify alternatives. | The alternatives are various assessment methods. Experts discuss these and write down their preferred alternatives on a blank sheet of paper. |
| 4 | Identify the important criteria relevant to the goal. | The teachers determine the competencies and skills they wish to assess. |
| 5 | Rank the criteria according to their perceived importance. | Each teacher writes their rankings down on paper. |
| 6 | Rank the alternatives according to their relevance to the criteria. | |
| 7 | Create a linear programming model using the components and execute it. | Teachers’ ranking data are entered into OPA Solver v1. |
| 8 | Check the results. | Share the results with the five teachers. |
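Step 7 was carried out with OPA Solver v1. For readers without access to that tool, the sketch below reproduces the same linear program with the open-source PuLP library in Python; the function name, data layout, and solver choice are illustrative assumptions rather than the study’s actual implementation.

```python
# Illustrative sketch of the OPA linear program (Ataei et al., 2020),
# solved with PuLP instead of OPA Solver v1. Assumed data layout:
#   expert_ranks[e]    : rank of expert e (1 = highest priority)
#   crit_ranks[e][c]   : rank expert e assigns to criterion c
#   alt_ranks[e][c][a] : rank expert e assigns to alternative a under criterion c
import pulp

def solve_opa(expert_ranks, crit_ranks, alt_ranks):
    experts = range(len(expert_ranks))
    crits = range(len(crit_ranks[0]))
    alts = range(len(alt_ranks[0][0]))
    m = len(alts)  # number of alternatives

    prob = pulp.LpProblem("OPA", pulp.LpMaximize)
    Z = pulp.LpVariable("Z")  # auxiliary objective variable
    W = pulp.LpVariable.dicts("W", (experts, crits, alts), lowBound=0)
    prob += Z  # objective: maximise Z

    for e in experts:
        for c in crits:
            # alternatives in the order expert e ranked them under criterion c
            order = sorted(alts, key=lambda a: alt_ranks[e][c][a])
            r_e, r_c = expert_ranks[e], crit_ranks[e][c]
            for pos, a in enumerate(order, start=1):
                if pos < m:
                    nxt = order[pos]  # alternative ranked immediately below a
                    prob += Z <= r_e * r_c * pos * (W[e][c][a] - W[e][c][nxt])
                else:  # separate constraint for the lowest-ranked alternative
                    prob += Z <= r_e * r_c * m * W[e][c][a]

    # all weights must sum to one
    prob += pulp.lpSum(W[e][c][a] for e in experts
                       for c in crits for a in alts) == 1
    prob.solve(pulp.PULP_CBC_CMD(msg=False))

    # weight of each alternative, summed over experts and criteria
    return {a: sum(W[e][c][a].value() for e in experts for c in crits)
            for a in alts}
```

For the HoGr configuration (two experts, five criteria, five alternatives), this model has 50 weight variables; called with the rankings in Tables 4 and 5 (Appendix D), it should yield alternative weights analogous to those reported in Table 6.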
Post-questionnaire
Immediately after the OPA task, the five experts completed the post-questionnaire. This questionnaire examined how applying the OPA during teacher collaboration influenced their selection of an assessment method. Responses were collected using a four-point Likert-type scale and open-ended questions for aspects requiring further detail (see Appendix C).
Results
Difficulties of STEAM Assessment
During the OPA task, two experts commented on challenges in STEAM assessment. Expert 1 stated, “Some competencies cannot be assessed objectively. For example, the ‘ability to work in teams’ is an indispensable competency in STEAM education, and I would like to assess it. However, it is difficult because there is no way to assess it objectively.” Expert 2 noted, “I’m not very keen on ‘applying the learning to real-world situations,’ but it is said to be necessary as an element of STEAM education,” and “I would like to assess by watching student discussions, but it is difficult because this assessment method takes a lot of time and effort.”
According to the pre-questionnaire, teachers needed an average of 57.5 minutes to reconcile their various opinions (item 1) and an average of 88 minutes to reach a decision (item 2). When asked what takes the most time when preparing for STEAM assessment with other teachers (item 3), experts mentioned “deciding what to follow from the previous year and what to improve,” “choosing teaching materials and coming up with ideas for assessment questions,” “sharing objectives and content and making lesson plans,” and “understanding the learning content and syllabus of other subjects.”
The homogeneous group (HoGr) took 17 minutes to determine the alternatives and criteria, 9 minutes to rank them independently, and 2 minutes to input the data into the OPA Solver. The heterogeneous group (HeGr) took 30 minutes to determine the alternatives and criteria, 14 minutes to rank them independently, and 4 minutes to input the data.
From the post-questionnaire, regarding the length of time (item 1), three experts answered that it was the “same as usual,” and two answered that it was “longer.” Regarding workload (item 2), three experts answered that it was the “same as usual,” and two answered that it was “greater.”
Process of Homogeneous Group Task
When determining the alternatives and criteria, HoGr reached decisions through dialogue. They recorded five criteria and five alternatives on the sheet. Table 4 (Appendix D) shows the experts’ rankings of criteria, and Table 5 (Appendix D) shows their rankings of alternatives for each criterion. The data from each expert were then input into OPA Solver. Using the rankings in Tables 4 and 5, the weights of the experts, criteria, and alternatives were calculated (Table 6, Appendix D).
During the OPA task, Expert 1 remarked, “The alternatives did not include a method for assessing the competency of collaborating within a team” and “We eliminated the trivial criteria and reached a consensus on which criteria were most important.” Expert 2 said, “To assess this competency, this alternative was necessary, yet it was not included,” “Through sufficient discussion and consensus-building, the result gradually moved closer to one that was convincing,” and “Through the discussion, it became clear that some areas were lacking, and I felt that further discussion was necessary.”
Process of Heterogeneous Group Task
In contrast to HoGr, HeGr first considered the alternatives and criteria individually, then shared their selections, ensuring duplicates were included in the overall list. They recorded eight criteria and five alternatives. Table 7 (Appendix E) shows the experts’ rankings of criteria, and Table 8 (Appendix E) shows the rankings of alternatives for each criterion. The data from each expert were then input into OPA Solver. Based on the prioritization results in Tables 7 and 8, the weights of the experts, criteria, and alternatives were calculated (Table 9, Appendix E).
During the OPA task, Expert 4 commented, “The problem-solving [criteria], which was supposed to be ranked first when the alternatives and criteria were identified in the group, ended up with a lower ranking when prioritizing individually.” Expert 5 observed, “There was a difference between the rankings anticipated at the stage of identifying alternatives and criteria as a group and the rankings assigned when prioritizing individually.”
Diverse Perspectives
In the pre-questionnaire, regarding their ability to make decisions while considering all necessary factors (item 4), all experts answered “usually.” During the OPA task, experts commented as follows. Expert 1 stated, “The competencies I want to measure are numerous.” Expert 2 remarked, “Even among colleagues teaching the same subject, there are instances where they emphasize criteria that I had not considered significant or choose to omit questions from the test that I believe should be included.” Expert 3 said, “It is very difficult for three teachers to discuss and integrate each opinion,” and “If you are in another subject, I do not know what your intentions are, and they will be completely different.”
In the post-questionnaire, when asked about considering the various factors (item 3), two experts answered “not much” and three answered “somewhat.” One reason given for answering “not much” was that “conflicts between teachers always occur due to differences in beliefs (these are non-negotiable things). Respecting the opinions of many people will delay the lesson plan, so there are situations where it is better to proceed on my own.” Reasons for answering “somewhat” included “The order in which things should be prioritized was clear” and “I felt it was meaningful to encourage deeper thinking through prioritizing.”
Satisfaction with Decision-Making
In the pre-questionnaire, regarding whether differences in direction or understanding occur when deciding on assessments (item 5), two experts answered “yes,” and three answered “no.” “Yes” responses mentioned “differences in perception will always occur due to differences in the knowledge they have,” and “differences in choosing how to develop students, and differences in perception of how deeply to think about the assessment method.” Regarding past experiences of being unconvinced or discussions proceeding without consensus (item 6), two answered “rarely” and three answered “sometimes.” “Sometimes” responses cited, for example, “discussions with a teacher who has different degrees of detail regarding assessment,” “materials that I thought were important were omitted because another teacher thought these were unnecessary,” and “discussions with a teacher who wanted to reduce the descriptive questions to reduce the burden on teachers, or who wanted to increase the descriptive questions to improve writing skills.”
According to the post-questionnaire, regarding whether the experts were convinced by the OPA’s outcome (item 4), two HoGr experts answered “not much” and three HeGr experts answered “somewhat.” The reasons given for answering “not much” included, “Assessment of STEAM education is a difficult task, so it was meaningful that we were able to recognize the results that we were not convinced by,” and “We were unable to come up with appropriate alternatives, and the issues related to the assessment of STEAM education itself have become clear. We were able to build consensus to some extent through discussion.” The reason given for answering “somewhat” was “because the results and my feelings were matched to some extent.” Regarding the clarity of the process (item 5), four experts answered “somewhat” and one “extremely.” Reasons for answering “somewhat” included, “because it can be predicted to a certain extent at the ranking stage” and “if there were valid criteria and alternatives, I feel that it would be somewhat clear.” The reason given for answering “extremely” was “because we can clearly rank the usually ambiguous decision-making factors.”
Sharing Mutual Understanding
According to the pre-questionnaire, regarding sharing mutual understanding of assessment methods (item 7), two experts answered “never,” and three answered “usually.” Of those three, two indicated that mutual understanding was achieved through daily casual conversations (the other response was invalid).
During the OPA task, Expert 2 remarked, “Because there were the sufficient discussions, I feel that my priorities and [Expert 1’s] will be almost the same,” “Although I thought that we had thoroughly discussed and reached a consensus at the stage of proposing alternatives and criteria, it is surprising that there were differences in the rankings,” and “Even after thorough discussions, there are areas where different rankings have been assigned.”
According to the post-questionnaire, regarding OPA’s ability to facilitate shared understanding (item 6), two experts answered “somewhat” and three answered “extremely.” One reason given for answering “somewhat” was, “Although there were considerable differences in the competencies to be assessed depending on the subject, we could get a consensus to some extent.” The reasons given for answering “extremely” were “because we were able to smoothly discuss what was important to me,” “because we conducted lessons together regularly,” and “because the other expert was a good facilitator.”
Influence of Factors Unrelated to the Discussion
According to the pre-questionnaire, regarding whether past decisions had been influenced by factors not directly related to the discussion (item 8), three experts answered “rarely,” and two answered “sometimes.” In the post-questionnaire, regarding this same issue (item 7), three experts answered “not at all” and two answered “not much.”
Prioritizing Experts
In HoGr, Expert 1 took the lead during lesson planning. Their ranking is shown in Table 6. In HeGr, Expert 3 usually led, followed by Expert 4 and Expert 5; their rankings are shown in Table 9. According to the post-questionnaire, regarding expert prioritization (item 8), experts mentioned, “There is usually a hierarchy among teachers, so I think that prioritizing is essential,” and “It would be good to be able to determine how to decide who is most prioritized.”
Impressions of Using OPA
During the OPA task, Expert 1 said, “I think that OPA would be most effective during group decision-making with a large group of around five to seven people.” Expert 4 said, “OPA will be useful when trying to include the opinions of new teachers, for example, who may find it difficult to express their opinions.” In the post-questionnaire, regarding how they felt about decision-making using OPA (item 9), experts expressed the following views: “Although it is important that the criteria and alternatives are valid, I felt it was difficult in educational activities, especially lessons,” “I felt OPA was an interesting method; however, the decision-making will not improve unless we repeat the steps several times,” “I felt that we needed to think a little more about introducing OPA,” “I think it would be good if it was a process that integrates many opinions in the process of creating something new with multiple teachers,” and “It is good to use OPA as a method to understand each expert’s opinion.”
Discussion
Degree of Inter-Subject Differences Affecting Teacher Collaboration
The characteristics of each group were examined based on experts’ statements, and differences were observed in how they identified alternatives and criteria. In the homogeneous group (HoGr), consisting of teachers from the same subject, opinions were summarized through dialogue. In contrast, the heterogeneous group (HeGr), composed of teachers from different subjects, had more dispersed opinions, and the summaries reflected more individualized perspectives. Experts’ comments suggested that HoGr formed a deeper consensus when organizing alternatives and criteria than HeGr.
Regarding diversity within teams, effective decision-making requires both considering a wide range of options and reaching consensus on a preferred solution. This reflects the diversity paradox: although team diversity can generate more options to address decision-making problems successfully, it may also increase the likelihood of conflict and hinder convergence (Linder et al., 2025). A comparison of Tables 4 and 7 shows substantial differences in prioritization within HeGr, indicating greater variance in priority rankings compared with HoGr. However, according to post-questionnaire item 4, HeGr members were more satisfied with the decision-making outcome than HoGr members.
Statements from HeGr participants indicate that their individual rankings sometimes differed from what they expected after collaboratively presenting alternatives and criteria. This suggests that what they considered important changed when working individually compared with in groups. Listing alternatives and criteria through the OPA appeared to make each expert more aware of subject-area differences, prompting discussions aimed at consensus building. These findings suggest that OPA is especially suitable for converging diverse viewpoints rather than fostering deeper consensus within already homogeneous groups.
Additionally, an unexpected finding was that even colleagues who frequently engaged in discussions did not always assign the same rankings to criteria and alternatives, contrary to expectations. Differences in perception persisted even among participants who believed they had achieved complete consensus. Because OPA expresses rankings as ordinal numbers, it clearly revealed these differences even after consensus-building efforts. This was also reflected in responses to post-questionnaire item 5, where participants praised OPA’s ability to make the decision-making process transparent.
OPA Helps Eliminate Obstacles Teachers Face When Collaborating
Because STEAM requires collaboration among teachers from multiple subjects, the competencies and skills they prioritize for assessment inevitably vary. Notably, OPA places no limit on the number of criteria and alternatives (Mahmoudi & Javed, 2022), allowing for the inclusion of numerous competencies, skills, and potential assessment methods.
Moreover, assigning priorities enables teachers to share which competencies and skills to emphasize and to what extent in the assessment. OPA facilitates fair and transparent decision-making by avoiding ambiguous judgments, reducing misunderstandings, and mitigating potential distrust among colleagues. Transparency is considered a key component of school organizations (Shagholi & Hussin, 2009). By visualizing prioritization and results, OPA contributed to transparency in decision-making, as evidenced by the shift between pre-questionnaire item 7 and post-questionnaire item 6 responses.
OPA also helps prevent the personalization of opinions. Comparing responses to pre-questionnaire item 8 and post-questionnaire item 7 indicates that the method was meaningful for limiting power imbalances. In some school contexts, senior or more experienced teachers hold disproportionate authority, which can erode trust and hinder collaboration. As Sachs et al. (2012) note, senior teachers must recognize when to relinquish authority and when to empower novice teachers. Because OPA has experts rank alternatives and criteria individually after they have been identified, each opinion is evaluated independently, helping to balance influence among participants.
Conclusion
Given the interdisciplinary nature of STEAM education, collaboration among teachers from multiple subjects is essential. However, challenges such as limited communication and distrust stemming from subject-specific conflicts have been reported. Additionally, existing STEAM assessment methods for evaluating interdisciplinary learning outcomes have been criticized as ambiguous.
To improve collaboration in determining STEAM assessment methods, this study employed the ordinal priority approach (OPA), a specific and innovative method not previously used in education. The results revealed differences in decision-making between homogeneous and heterogeneous teacher groups. Homogeneous groups were better able to clarify mutual differences in opinion, while heterogeneous groups facilitated decision-making by prioritizing diverse perspectives.
OPA was shown to contribute to teacher collaboration in STEAM education in three ways:
1. It allows for the consideration of diverse competencies and skills that teachers from various subjects wish to assess.
2. Its clear decision-making process ensures transparency, helping mitigate distrust.
3. By providing each teacher with an opportunity to express their views, OPA supports fair decision-making without undue influence from power dynamics.
Given these benefits, OPA appears to be effective for synthesizing diverse expert opinions through a structured and transparent process. As group size increases, capturing all viewpoints becomes more difficult, and discussions risk being dominated by a few teachers. OPA may therefore be particularly useful in groups with power imbalances, such as those with varying levels of experience or institutional positions.
Recommendations
In terms of time and effort, the duration for identifying alternatives and criteria and reaching decisions using OPA was shorter than the averages reported in pre-questionnaire items 1 and 2. However, according to post-questionnaire items 1 and 2, participants felt that the time and effort required were the same as or greater than in standard discussions. As noted in the Introduction, appropriate STEAM assessment remains a contested and challenging issue; therefore, participants may have focused more on the difficulty of the decision-making content than on the OPA itself. Their statements about the inherent difficulty of STEAM assessment were consistent with earlier research, which also found uncertainty about how best to implement and assess STEAM education.
In post-questionnaire items 3 and 4, two HoGr participants answered “not much.” As shown in the HoGr process section, these participants discovered that alternatives they had not considered emerged when ranking individually. However, because the OPA procedure was strictly followed, no alternatives could be added at this stage, an outcome that influenced their post-task evaluations. It is therefore recommended that an optional step be introduced to verify the validity of alternatives and criteria at multiple points during the process. Additionally, because OPA only prioritizes the alternatives it is given, situations where no outcome is satisfactory must also be considered. Future research should examine adding a subsequent phase for generating new alternatives when the best choice cannot be identified among those available.
Limitations
This study involved only a small number of experts to facilitate rapid evaluation of a novel method. Future studies should include larger samples to strengthen the findings.
Ethics Statement
The study was reviewed and approved by Tohoku University. All participants were informed about the study in advance and signed an informed consent form.
Acknowledgments
The author thanks the five teachers who generously shared their time and insights by participating in this study.
Conflict of Interest
The author declares no conflict of interest.
Funding
This research was funded by JSPS KAKENHI, Grant-in-Aid for JSPS Fellows, Grant Number 24KJ0404.
Generative AI Statement
As the author of this work, I used the AI tool Elicit for literature searches. After using this tool, I reviewed and verified the final version of the work. I take full responsibility for the content of the published work.