Multicriteria Linear Assignment Problem in Healthcare Improvement

Oksana Pichugina^a, Lyudmyla Kirichenko^b and Yurii Skob^a

^a National Aerospace University "Kharkiv Aviation Institute", 17 Vadim Manko Street, Kharkiv, 61070, Ukraine
^b Kharkiv National University of Radio Electronics, 14 Nauki Avenue, Kharkiv, 61166, Ukraine

ProfIT AI 2024: 4th International Workshop of IT-professionals on Artificial Intelligence (ProfIT AI 2024), September 25-27, 2024, Cambridge, MA, USA
o.pichugina@khai.edu (O. Pichugina); lyudmyla.kirichenko@nure.ua (L. Kirichenko); y.skob@khai.edu (Y. Skob)
ORCID: 0000-0002-7099-8967 (O. Pichugina); 0000-0002-2780-7993 (L. Kirichenko); 0000-0003-3224-1709 (Y. Skob)
© 2024 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0). CEUR Workshop Proceedings (CEUR-WS.org), ISSN 1613-0073.

Abstract
A two-criteria linear assignment problem is applied to the distribution of medical university students to hospital internships. The optimization goal is to improve the overall level of service by maximizing the total grades for the internship. The main criteria selected are the combined academic performance score and the motivational score of students towards specific job positions. The stated problem is reduced to the task of predicting grades for the student internship, solved on the basis of historical data, and to a series of ordinary linear assignment problems with prohibitions on some assignments. A computer experiment was designed, and test calculations were conducted. The results showed the effectiveness of the chosen technique for solving the original problem.

Keywords
multicriteria linear assignment problem, multi-objective optimization, medical service, linear convolution method, Hungarian method, polynomial solvability

Introduction

The $n$-dimensional linear assignment problem (LAP) is formulated as follows: there are $n$ job positions and $n$ applicants for these positions. It is required to find an assignment of applicants to positions such that each position is occupied and each applicant receives a position, while minimizing the total cost of the overall assignment. LAP has numerous applications in various areas [1]. A broader formulation is the assignment problem in which each position and each applicant are evaluated according to several criteria, and the final assignment aims to optimize all these criteria. This transforms the problem into a multi-objective linear assignment problem (MLAP). Multicriteria optimization methods are applied to solving MLAP, most of which reduce it to a single-objective LAP. The main challenge is choosing a way to reduce MLAP to LAP, particularly forming the final cost matrix of the assignments. Methods of mathematical statistics can be applied to this task by finding and utilizing predicted values of these costs instead of the unknown real costs of the assignments.

This paper examines the practical problem of assigning medical university students to internships in hospitals. Students are assessed by their academic performance grades, while the available job positions are assessed by the ratings given to them by the students. The main goal is to place students in positions that maximize the overall grade for the internship. The main challenge is that students are assigned to the jobs at the beginning of the internship, while the scores are received at the end. Therefore, it is necessary to predict the internship grades, reduce the problem to a two-criteria assignment problem aiming to maximize both the total academic score and the total motivational score, and
finally reduce this two-criteria assignment problem to a single-criteria one, using the predicted values of grades for practice as the effectiveness metric of the assignments. To solve the problem, methods of multicriteria optimization, linear integer optimization, and statistics, particularly regression analysis, are used.

1. Main Part

The classical linear assignment problem (LAP) [2] is the problem of optimizing a linear function on the set of permutation matrices:

$$z = C \bullet X \to \min; \qquad (1)$$

$$X \in \Pi_n. \qquad (2)$$

Here, $\Pi_n$ is the set of permutation matrices of order $n$ [3, 4, 5, 6], where $n$ is the number of applicants for the positions and of the positions themselves, and $C \in \mathbb{R}_+^{n \times n}$ is the matrix of individual assignment costs:

$$\Pi_n = \left\{ X \in B^{n \times n} : X \cdot e = X^T \cdot e = e \right\}, \quad B = \{0, 1\}, \quad e = (1, ..., 1)^T.$$

LAP is widely used in many theoretical and practical fields, such as scheduling theory, logistics, graph theory, and computational logic, both as a standalone problem and as a subproblem in more complex problems [1, 7, 8]. As a rule, LAP is formulated as a minimization problem, but it is also found in the literature as a maximization problem. In this case, LAP has the form

$$z' = U \bullet X \to \max \qquad (3)$$

under the constraints (2), where $U \in \mathbb{R}^{n \times n}$ is the utility matrix representing the profit from the assignments. It is clear that LAP (2), (3) can be reduced to the standard LAP (1), (2) using a linear transformation. For example, let the elements of the matrix $U$ express the utility of individual assignments on a scale from 0 to 100 points:

$$U \in [0, 100]^{n \times n}. \qquad (4)$$

Then the transition from LAP (2), (3) to the classical LAP (1), (2) can be done as follows:

$$U \to C = [c_{ij}]_{ij} = [100 - u_{ij}]_{ij} \in [0, 100]^{n \times n}. \qquad (5)$$

As can be seen, the elements of the matrix $C$ express the cost of individual assignments on a scale from 0 to 100 points. In what follows, when we refer to LAP, we mean the classical problem (1), (2), to which, in particular, the Hungarian method is applicable [9].

The attractiveness of LAP from a computational point of view is determined by its equivalence to its continuous linear relaxation, in which (2) is replaced by

$$X \in P_n = \mathrm{conv}\,\Pi_n, \qquad (6)$$

where $P_n$ is the polytope of doubly stochastic matrices (the Birkhoff polytope), which is integral [3]. This determines the comparative simplicity of solving LAP by linear programming methods in contrast to other combinatorial problems. Moreover, LAP is polynomially solvable; namely, the Hungarian algorithm solves it with computational complexity $O(n^3)$ [9].
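For illustration, the sketch below (not part of the original formulation; it uses SciPy's linear_sum_assignment, a modified Jonker-Volgenant LAP solver rather than the Hungarian method itself, and random data) solves a small instance both in the maximization form (2), (3) and, after the reduction (5), in the classical form (1), (2), and checks that the two optima agree.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
n = 5
U = rng.integers(0, 101, size=(n, n)).astype(float)  # utilities on a 0..100 scale, cf. (4)

# Maximization form (2), (3): assign applicants to positions to maximize total utility.
rows, cols_max = linear_sum_assignment(U, maximize=True)
z_max = U[rows, cols_max].sum()                       # optimal z' in (3)

# Reduction (5) to the classical minimization LAP (1), (2).
C = 100.0 - U
_, cols_min = linear_sum_assignment(C)
z_min = C[rows, cols_min].sum()                       # optimal z in (1)

# The two formulations are equivalent: max total utility = 100*n - min total cost.
assert np.isclose(z_max, 100 * n - z_min)
print(list(zip(rows, cols_max)), z_max)
```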
1.1. LAP generalizations

LAP generalizes in several ways. The first is the complication of the objective function, transitioning from LAP to the nonlinear assignment problem (NAP) [2, 10, 11], where (1) is replaced by

$$z = f(X) \to \min, \quad f(X) \text{ is nonlinear}. \qquad (7)$$

In particular, this includes the quadratic (QAP) [11], cubic, and biquadratic assignment problems.

The next generalization of LAP concerns the number of objective functions. Thus, the problem

$$z^{(k)} = C^{(k)} \bullet X \to \min, \quad k = \overline{1, K}, \ K > 1 \qquad (8)$$

under the constraints (2) is called the multi-objective linear assignment problem (MLAP) [12, 13, 14]. Here, $C^{(k)} \in \mathbb{R}_+^{n \times n}$, $k = \overline{1, K}$, are the cost matrices of the individual criteria.

Finally, the third generalization concerns the presence of additional constraints besides the main constraints (2). The problem (1), (2),

$$A^{(l)} \bullet X \le b^{(l)}, \quad l = \overline{1, L}, \ L \ge 1 \qquad (9)$$

is called the constrained linear assignment problem (CLAP) [10]. By combining the conditions (7)-(9) with (1), (2), other generalizations of LAP are formed. For example, problem (2), (8), (9) is a constrained multicriteria linear assignment problem (CMLAP).

From a computational point of view, solving these generalizations is much more challenging than LAP. Problems in the NAP and CLAP classes are known to be NP-hard [2, 10, 11]. As for MLAP, the typical challenges of multi-objective optimization arise. When standard approaches such as the weighting method (the linear convolution method) [15, 16, 17] or the priority method [15, 16, 17] are used, one additionally faces the problem of expert assessment and determination of the weights of the objective functions, representable by a vector

$$W = (w^{(1)}, ..., w^{(K)}) \in \mathbb{R}_+^{K}, \quad w^{(1)} + ... + w^{(K)} = 1, \qquad (10)$$

after which, using the linear convolution method, MLAP is converted into LAP with the cost matrix

$$C = \sum_{k=1}^{K} w^{(k)} C^{(k)}. \qquad (11)$$

If the priority method is used, then a sequence of CLAPs is solved, each with one of the criteria (8) as the objective and with iterative addition of constraints requiring non-deterioration of the higher-priority criteria optimized in the previous iterations.

Likewise, some CLAPs can be reduced to NAPs and sometimes even to LAPs. Thus, the traditional way of solving constrained optimization problems, the penalty method [18], consists of incorporating all or part of the additional constraints into the objective function using penalty terms. In this way, constrained linear problems turn into unconstrained nonlinear ones, i.e., CLAP becomes NAP. If the additional constraints express prohibitions on certain pairwise assignments, then instead of the penalty method one can pass from CLAP to an equivalent LAP of the following form:

$$z = C' \bullet X \to \min, \qquad (12)$$

$$C' = [c'_{ij}]_{i,j}, \quad c'_{ij} = \begin{cases} c_{ij}, & (i, j) \notin Z, \\ \infty, & (i, j) \in Z. \end{cases} \qquad (13)$$

Here, $Z$ is the set of pairs $(i, j)$ of prohibited assignments, where $i$ is the student number and $j$ is the job position number. The reducibility of this CLAP to LAP justifies its polynomial solvability, for example, by the Hungarian method [9].

This paper addresses the practical problem of enhancing the level of medical service, which entails solving problems of regression analysis and a finite sequence of CMLAPs. These CMLAPs are reduced to ordinary LAPs by incorporating prohibitions on certain assignments and applying linear convolution of the optimization criteria. As a result, we propose an algorithm for the formulated complex problem that runs in time polynomial in the problem dimension $n$, the number of criteria $K$, and the number of periods $T$ for which the forecast of the assignment cost matrix is made.
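As an illustration of the linear convolution (10), (11) combined with the prohibition scheme (12), (13), consider the following sketch. It is an assumption-laden example rather than the authors' code: SciPy's linear_sum_assignment serves as the LAP solver, and a large finite penalty stands in for the infinite cost in (13).

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(1)
n, K = 6, 2
C_k = [rng.uniform(0, 100, size=(n, n)) for _ in range(K)]   # per-criterion cost matrices, cf. (8)
w = np.array([0.6, 0.4])                                      # criterion weights, cf. (10)

# Linear convolution (11): a single cost matrix for the resulting LAP.
C = sum(w_k * Ck for w_k, Ck in zip(w, C_k))

# Prohibited assignments Z, cf. (13); a large finite penalty plays the role of infinity.
Z = {(0, 1), (2, 4), (5, 0)}
BIG = 1e9
C_prime = C.copy()
for i, j in Z:
    C_prime[i, j] = BIG

rows, cols = linear_sum_assignment(C_prime)
assert all((int(i), int(j)) not in Z for i, j in zip(rows, cols))  # no prohibited pair is used
print(list(zip(rows, cols)), C[rows, cols].sum())
```

As long as at least one complete assignment avoiding $Z$ exists, the optimal solution of the penalized LAP uses no prohibited pair.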
2. Problem statement

The following practical task is considered: at the end of the academic year, students of medical universities undergo hospital internships. The objective is to assign students to suitable hospital positions so as to maximize the level of medical service they provide. A quantitative indicator of the level of service is the grades given to the students by the hospitals as scores for their internship. These scores, in turn, accumulate patient and staff evaluations of the students' performance.

Historical data demonstrate that the success of an internship depends on two main indicators. The first indicator is academic performance, expressed by the factor $Kg$ (the academic score, or grade coefficient), which depends on the scores in key academic disciplines $D_1, ..., D_M$ and on the average score over all academic disciplines. A higher value of $Kg$ indicates better learning outcomes, increasing the likelihood that the student possesses sufficient knowledge to succeed in the assigned practical tasks. The second indicator is the student's desire to work in a specific position or hospital, referred to as the motivational factor and assessed by the numerical indicator $Km$ (the motivation coefficient). A higher $Km$ indicates higher motivation, suggesting that the student is more likely to succeed in the position entrusted to them.

As can be seen from this formulation, assigning students to positions is the standard linear assignment problem, provided that the internship grades are known for all students and all job positions:

$$z^* = C \bullet X^* = \min_{X \in \Pi_n} \{ z = C \bullet X \}, \qquad (14)$$

minimizing the total cost of the assignment $z$ and, as a consequence, maximizing the total efficiency of the final assignment:

$$z'^* = U \bullet X^* = \max_{X \in \Pi_n} \{ z' = U \bullet X \}. \qquad (15)$$

Here,

$$C = [c_{ij}]_{i,j} \qquad (16)$$

is the matrix of costs of assigning the applicants to the positions; in particular, $c_{ij}$ is the cost of assigning applicant $i$ (further, student $S_i$) to position $j$ (further, job $J_j$). Respectively,

$$U = [u_{ij}]_{i,j} \qquad (17)$$

is the matrix of efficiency (the utility matrix) of assigning the students to the jobs; in particular, $u_{ij}$ represents the utility of assigning student $S_i$ to job $J_j$.

However, the challenge is that the internship grades are given only after the students have been assigned to the positions and have completed the internship. Therefore, the matrix $U$ is unknown. In the mathematical model, instead of the real utilities (17), only predicted values can be used:

$$U \to \hat{U} = [\hat{u}_{ij}]_{i,j}, \qquad (18)$$

where $\hat{U}$ is the forecast utility matrix of assigning the students to the positions; in particular, $\hat{u}_{ij}$ is the forecast utility of assigning applicant $i$ to position $j$. Thus, instead of problem (15), we come to the following problem:

$$\hat{z}^* = \hat{U} \bullet \hat{X}^* = \max_{X \in \Pi_n} \{ \hat{z} = \hat{U} \bullet X \}, \qquad (19)$$

in which the real values of the utilities are replaced by the predicted ones, and the utility matrix $U$ is replaced by $\hat{U}$. Accordingly, the optimal solution of problem (19) is the pair $\langle \hat{X}^*, \hat{z}^* \rangle$, where $\hat{X}^*$ is the predicted matrix of optimal assignments and $\hat{z}^*$ is the predicted value of the optimal total utility of the assignments $z^*$.

Assuming that there are historical data on the assignments of previous students to the same positions, their grades in academic disciplines and summer internships in previous years, and their motivational scores, the forecasting task can be formulated as follows. The goal of the forecast is to find the matrix $\hat{U}$ and, by subsequently solving problem (19), to obtain a forecast matrix of assignments $\hat{X}^*$ that is close to the optimal matrix of assignments $X^*$. This goal can be represented as $\hat{z}^* \approx z^*$. Another metric for comparison is $\tilde{z}^* = U \bullet \hat{X}^* \approx \hat{z}^*$, which can be calculated at the end of the process of acceptance and implementation of the found assignments, when the internship scores are given; that is, $\tilde{z}^*$ is an a posteriori estimate.
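The relation between $\hat{z}^*$, $\tilde{z}^*$ and $z^*$ can be illustrated by a small simulation (hypothetical data: the forecast $\hat{U}$ is modelled simply as a noisy copy of the true $U$, and SciPy's linear_sum_assignment is used as the LAP solver).

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(2)
n = 8
U = rng.uniform(0, 100, size=(n, n))                         # true (unknown in practice) utilities (17)
U_hat = np.clip(U + rng.normal(0, 5, size=(n, n)), 0, 100)   # forecast utilities (18), noisy for illustration

rows, cols_hat = linear_sum_assignment(U_hat, maximize=True)
z_hat = U_hat[rows, cols_hat].sum()                          # predicted optimum, cf. (19)
z_tilde = U[rows, cols_hat].sum()                            # a posteriori estimate of the found assignment

_, cols_true = linear_sum_assignment(U, maximize=True)
z_star = U[rows, cols_true].sum()                            # unattainable "oracle" optimum, cf. (15)

print(f"z_hat={z_hat:.1f}  z_tilde={z_tilde:.1f}  z_star={z_star:.1f}  (z_tilde <= z_star)")
```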
3. Problem formalization

Let us introduce some notation:

• $i \in \overline{1, n}$ is a student index;
• $j \in \overline{1, n}$ is a position index;
• $t \in \overline{1, T + 1}$ is an index of the time period, where $\overline{1, T}$ are the indices of the historical periods and $T + 1$ is the index of the current (forecast) period;
• $m \in \overline{1, M}$ is an index of a key academic discipline;
• $h \in \overline{1, H}$ is a hospital type.

As input data for making forecasts, we have:

1. for the forecast period $T + 1$ and the historical periods $1, ..., T$:
   a) by student:
      i. the scores in the key academic disciplines for this practice and the average score of the students: $G^t = (g^t_{im})_{i,m,t}$, $ag^t = (ag^t_i)_{i,t}$, where $g^t_{im} \in [0, 100]$ is the score of student $S_i$ in the key discipline $D_m$ in period $t$;
      ii. student ratings of the positions (the motivational factor), from 0 for the worst to 100 for the best:

$$Km^t = [Km^t_{ij}]_{i,j,t}, \qquad (20)$$

where $Km^t_{ij} \in [0, 100]$ is the motivational score of student $S_i$ for position $J_j$ in period $T_t$;
   b) by job:
      i. their link to the hospitals;
      ii. the relative weights of the key disciplines and of the average score (these can be the same for all periods, for all positions, and for all hospitals). For example, the vector of weights

$$w = (w_1, ..., w_M, w_a) \in \mathbb{R}_+^{M+1}, \quad w_1 + ... + w_M + w_a = 1, \qquad (21)$$

corresponds to the case when the weights are uniform across positions and periods. In particular, $w_m$ is the relative weight of the key academic discipline $D_m$ in the factor $Kg$, while $w_a$ is the relative weight of the average score in the factor $Kg$;
      iii. intervals of passing scores, both for the average score and for individual disciplines (these can be uniform across hospitals and periods). Let us introduce the constraints

$$\underline{g}^t_{jm} \le g^t_{im} \le \overline{g}^t_{jm}, \quad \underline{ag}^t_j \le ag^t_i \le \overline{ag}^t_j, \quad \forall i, j, m, t, \qquad (22)$$

where $\underline{g}^t_{jm}$, $\overline{g}^t_{jm}$ are, respectively, the lowest and highest passing scores in the discipline $D_m$ for occupying position $J_j$ in period $T_t$, while $\underline{ag}^t_j$, $\overline{ag}^t_j$ are the lowest and highest passing average scores for position $J_j$ in period $T_t$. These passing scores can also be uniform across the periods and positions within the same hospital. For example, the lower bounds on passing academic scores, uniform across all periods, look like this:

$$\underline{g}_{jm} \le g^t_{im}, \quad \underline{ag}_j \le ag^t_i, \quad \forall i, j, m, t, \qquad (23)$$

where $\underline{g}_{jm}$ is the minimal passing score in the discipline $D_m$ for occupying position $J_j$ and $\underline{ag}_j$ is the minimal passing average score for position $J_j$;
   c) by hospital:
      i. hospital rating;
      ii. requirements for passing grades;
2. for the previous periods $1, ..., T$:
   a) by student:
      i. the score for the practice and the position occupied by the student:

$$(i, j^t_i, c^t_{i, j^t_i})_i, \quad t = 1, ..., T. \qquad (24)$$

For example, $(i, j^1_i, c^1_{i, j^1_i}) = (2, 3, 85)$ means that in period $t = 1$ student $S_2$ was assigned to position $J_3$ and received a grade of 85;
      ii. statistics on which students were appointed to which positions and what grades they received.

When constructing our mathematical model, we limit our consideration to the cases (21) and (23). The academic performance coefficient looks like this:

$$Kg^t_i = \sum_{m=1}^{M} w_m g^t_{im} + w_a\, ag^t_i \in [0, 100], \quad \forall i, \ t \in \overline{1, T + 1}. \qquad (25)$$

The two main factors of academic achievement and motivation are combined into a single metric

$$K = f(Kg, Km) \qquad (26)$$

in the form of a weighted sum of the indicators $Kg$ and $Km$, whose relative weights depend on the period $t$, i.e.,

$$K^t_{ij} = \alpha^t Kg^t_i + \beta^t Km^t_{ij} \in [0, 100], \quad \forall i, j, t, \qquad (27)$$

where $\alpha^t, \beta^t \ge 0$, $\alpha^t + \beta^t = 1$, $\forall t$. As a result, we obtain a set of matrices of total scores:

$$K^t = [K^t_{ij}]_{ij}; \quad K^t_{ij} = \alpha^t Kg^t_i + \beta^t Km^t_{ij} \in [0, 100], \quad \forall i, j, t. \qquad (28)$$

As stated earlier, the weighted-sum coefficients are assumed to be known for the historical periods $t \in \overline{1, T}$ and unknown for the current period $T + 1$. Therefore, under the assumption that the vector

$$\alpha = (\alpha^t)_{t = \overline{1, T}} \qquad (29)$$

is given, while $\alpha^{T+1}$ is unknown, we formulate the auxiliary forecasting problem of assessing $\alpha^{T+1}$ (the forecast value is denoted by $\hat{\alpha}^{T+1}$) as follows. We assume that $\alpha^t$ depends stochastically linearly on the period $t$ and therefore introduce a simple linear regression [19, 20]:

$$\alpha^t = a \cdot t + b + v, \qquad (30)$$

where $a, b$ are regression parameters and $v$ is the error. After estimating the parameters by the least squares method [19], we can construct the desired forecast:

$$\hat{\alpha}^{T+1} = a \cdot (T + 1) + b. \qquad (31)$$

Now, we can find the forecast matrix of total scores, adapting (28) as follows:

$$\hat{K}^{T+1} = [\hat{K}^{T+1}_{ij}]_{ij}; \quad \hat{K}^{T+1}_{ij} = \hat{\alpha}^{T+1} Kg^{T+1}_i + \hat{\beta}^{T+1} Km^{T+1}_{ij} \in [0, 100], \quad \forall i, j, \qquad (32)$$

where $\hat{\beta}^{T+1} = 1 - \hat{\alpha}^{T+1}$. In order to incorporate the constraints (23) into the utility matrix, we perform the transition from the matrices $\{K^t\}_t$ to

$$K^t \to K'^t = [K'^t_{ij}]_{ij}; \quad K'^t_{ij} = \begin{cases} K^t_{ij}, & \text{if (23) holds for all } m \text{ for } (i, j, t), \\ -\infty, & \text{otherwise}. \end{cases}$$

Similarly,

$$\hat{K}^{T+1} \to \hat{K}'^{T+1} = [\hat{K}'^{T+1}_{ij}]_{ij}; \quad \hat{K}'^{T+1}_{ij} = \begin{cases} \hat{K}^{T+1}_{ij}, & \text{if (23) holds for all } m \text{ for } (i, j), \\ -\infty, & \text{otherwise}. \end{cases}$$

Now we solve a series of ordinary LAPs of type (19) with these efficiency matrices:

$$z^{t*} = K'^t \bullet X^{t*} = \max_{X \in \Pi_n} \{ z^t = K'^t \bullet X \},$$

obtaining a set of auxiliary optimal solutions $\langle X^{t*}, z^{t*} \rangle$, $t = \overline{1, T}$. In the same way, the following LAP is solved for the current period $T + 1$:

$$z^{T+1,*} = \hat{K}'^{T+1} \bullet X^{T+1,*} = \max_{X \in \Pi_n} \{ \hat{z}^{T+1} = \hat{K}'^{T+1} \bullet X \},$$

yielding the solution

$$\langle X^{T+1,*}, z^{T+1,*} \rangle. \qquad (33)$$

The solution (33) is the main result of the optimization, as it yields the student assignment based on the forecast internship scores in the form of the matrix $X^{T+1,*}$, while the corresponding entries of the matrix $\hat{K}^{T+1}$ provide the expected internship scores of the students, given that the assignment $X^{T+1,*}$ is implemented. Note that, in this case, the value $z^{T+1,*}$ yields the expected total score for the internship.

Remark 1. The above scheme of obtaining the assignment solution (33) relies on the parameters (29) being known. However, this information can be hidden from the decision-maker. In this situation, another set of data given by assumption can be utilized: the historical internship scores (24). We propose to assess the vector (29) by solving a set of additional two-factor linear regressions with a constraint:

$$C^t = \alpha^t Kg^t + \beta^t Km^t, \quad \alpha^t + \beta^t = 1, \quad \alpha^t, \beta^t \ge 0, \quad t = \overline{1, T},$$

where, for every $t$, $n$ observations are utilized: for each student $i = \overline{1, n}$, the values $Kg^t_i$, $Km^t_{i, j^t_i}$ and the internship score $c^t_{i, j^t_i}$ of the position actually occupied. Then the above algorithm becomes applicable, with the only difference that the vector $\alpha$ now contains predicted parameters rather than real ones.
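A compact sketch of this computational scheme for the current period is given below (illustrative data only; NumPy's polyfit plays the role of the least-squares estimation in (30), (31), SciPy's linear_sum_assignment solves the final LAP, and a large negative constant replaces $-\infty$ in the screened matrix). The historical LAPs for $t = \overline{1, T}$ would be solved in exactly the same way with the matrices $K'^t$.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(3)
n, T = 10, 5
NEG = -1e9                                   # finite stand-in for -infinity in the screened matrix

# Historical weights alpha^t, cf. (29), and their least-squares forecast (30), (31).
alpha_hist = np.array([0.42, 0.45, 0.47, 0.50, 0.52])        # illustrative values for t = 1..T
t_idx = np.arange(1, T + 1)
a, b = np.polyfit(t_idx, alpha_hist, deg=1)                   # simple linear regression
alpha_next = a * (T + 1) + b
beta_next = 1.0 - alpha_next

# Current-period inputs: academic factor Kg (per student) and motivational scores Km (per student, position).
Kg = rng.uniform(50, 100, size=n)
Km = rng.uniform(0, 100, size=(n, n))

# Forecast total-score matrix (32).
K_hat = alpha_next * Kg[:, None] + beta_next * Km

# Screening by the passing-score constraints (23): here a single average-score threshold per position.
ag = rng.uniform(55, 95, size=n)                              # students' average scores
ag_min = rng.uniform(50, 70, size=n)                          # minimal passing average score per position
feasible = ag[:, None] >= ag_min[None, :]
K_prime = np.where(feasible, K_hat, NEG)

# Final LAP (33): maximize the total forecast score over permutation matrices.
rows, cols = linear_sum_assignment(K_prime, maximize=True)
print("assignment:", list(zip(rows, cols)))
print("expected total internship score:", round(K_hat[rows, cols].sum(), 1))
```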
4. Experiment design

1. Enter $n$, $M$, $T$, $M'$, where $M'$ is the total number of disciplines.
2. To generate the academic score matrices $G^t$, $ag^t$, $\forall t$, we set, for each discipline, the average score $E(D_m)$ and the standard deviation $\sigma(D_m)$, $m = \overline{1, M'}$. The generation is conducted according to the normal distribution:

$$g^t_{im} \sim N(E(D_m), \sigma(D_m)), \quad \forall i, m, t. \qquad (34)$$

The first $M$ columns of the generated matrix form the matrix $G^t$. Next, the column vector $ag^t$ is filled with the arithmetic averages over the columns of the matrix:

$$ag^t_i = \frac{1}{M'} \sum_{m=1}^{M'} g^t_{im}.$$

3. Using the relative weights (21), one can now find the values of the factor $Kg$.
4. We generate the matrices of motivational scores $\{Km^t\}$ according to a normal distribution with given parameters $E(Km_j)$, $\sigma(Km_j)$:

$$Km^t_{ij} \sim N(E(Km_j), \sigma(Km_j)), \quad \forall i, t.$$

5. To convolute the academic and motivational points into the combined score $K^t_{ij}$, we use the weights $\alpha^t$ and $\beta^t = 1 - \alpha^t$, known for the historical periods.
6. We set the lower bounds of the passing scores for the different positions, both in the key academic disciplines and for the average academic score; the upper bounds are not set. We start from the previously specified mathematical expectations and standard deviations of these values (see (34)), allowing at most a specified deviation from the expectation, measured in standard deviations. We use the constraints (23), where the lower bounds $\underline{g}_{jm}$, $\underline{ag}_j$, $\forall j, m$, are calculated by the formulas

$$\underline{g}_{jm} = \max\{0, E(D_m) - \gamma_j \sigma(D_m)\}, \quad \underline{ag}_j = \min\{100, E(D) - \gamma_j \sigma(D)\}, \quad \forall j, m,$$

assuming that the parameters $\{\gamma_j\}_j$ are given.
7. We assess the parameters of the regression (30) and use them to obtain the forecast value $\hat{\alpha}^{T+1}$ (see (31)).

A test experiment was conducted for the parameters $n = 14$, $M = 4$, $T = 5$, $M' = 10$ and the outlined experimental design scheme; a data-generation sketch following steps 1-6 is given below.
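The sketch uses placeholder distribution parameters, weights and $\gamma_j$ values rather than the ones used in the test experiment; it only illustrates the generation scheme.

```python
import numpy as np

rng = np.random.default_rng(4)
n, M, T, M_total = 14, 4, 5, 10            # test-experiment parameters; M_total plays the role of M'

# Step 2: per-discipline means and standard deviations (placeholders).
E_D = rng.uniform(60, 90, size=M_total)
s_D = rng.uniform(5, 12, size=M_total)

def generate_scores():
    """Scores of n students in all M_total disciplines for one period, cf. (34)."""
    scores = np.clip(rng.normal(E_D, s_D, size=(n, M_total)), 0, 100)
    return scores[:, :M], scores.mean(axis=1)          # key-discipline matrix G^t and averages ag^t

# Step 3: academic factor Kg with uniform weights (21).
w = np.full(M, 0.15)
w_a = 1.0 - w.sum()                                    # e.g. four key disciplines at 0.15, average at 0.40

# Steps 4-5: motivational scores and combined scores for the historical periods.
E_Km = rng.uniform(40, 90, size=n)                     # per-position mean motivational score
s_Km = rng.uniform(5, 15, size=n)
alpha = np.linspace(0.40, 0.55, T)                     # known historical convolution weights alpha^t

K_hist = []
for t in range(T):
    G, ag = generate_scores()
    Kg = G @ w + w_a * ag                              # cf. (25)
    Km = np.clip(rng.normal(E_Km, s_Km, size=(n, n)), 0, 100)
    K_hist.append(alpha[t] * Kg[:, None] + (1 - alpha[t]) * Km)   # cf. (27), (28)

# Step 6: lower passing bounds on the average score, gamma_j standard deviations below the mean.
gamma = rng.uniform(0.5, 2.0, size=n)
ag_min = np.minimum(100, E_D.mean() - gamma * s_D.mean())
print(len(K_hist), K_hist[0].shape, np.round(ag_min, 1))
```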
5. Conclusions

This paper addresses the practical task of improving medical care services through the internships of medical university students in hospitals. Based on historical data, the main assumption is that student performance is statistically linearly proportional to their academic performance scores and to the attractiveness of the positions to the students. A mathematical model of the problem was constructed using multi-objective optimization methods, linear integer optimization, and regression analysis. A computational experiment was designed, and a test calculation for a small-scale problem was conducted to demonstrate the applicability of the proposed approach to solving the original task.

References

[1] M. Carter, C. C. Price, G. Rabadi, Operations Research: A Practical Introduction, 2nd ed., Chapman and Hall/CRC, 2023.
[2] R. E. Burkard, E. Çela, Linear assignment problems and extensions, in: D.-Z. Du, P. M. Pardalos (Eds.), Handbook of Combinatorial Optimization: Supplement Volume A, Springer US, Boston, MA, 1999, pp. 75-149. doi:10.1007/978-1-4757-3023-4_2.
[3] M. Artin, Algebra, 1st ed., Pearson, Upper Saddle River, N.J, 1991.
[4] O. Pichugina, Theoretic background of computer solution of combinatorial and geometric configuration problems, in: 2021 IEEE 8th International Conference on Problems of Infocommunications, Science and Technology (PIC S&T), IEEE, Kharkiv, Ukraine, 2021, pp. 62-66. doi:10.1109/PICST54195.2021.9772223.
[5] O. Matsyi, O. Pichugina, Permutation-matrix approach to optimal linear assignment design, in: Proceedings of the II International Scientific Symposium "Intelligent Solutions" (IntSol-2021), volume 3018 of CEUR Workshop Proceedings, CEUR, Kyiv - Uzhhorod, Ukraine, 2021, pp. 141-149. URL: ceur-ws.org/Vol-3018/Paper_13.pdf, ISSN 1613-0073.
[6] O. Pichugina, S. Yakovlev, Quadratic optimization models and convex extensions on permutation matrix set, in: N. Shakhovska, M. O. Medykovskyy (Eds.), Advances in Intelligent Systems and Computing IV, volume 1080 of Advances in Intelligent Systems and Computing, Springer International Publishing, Cham, 2020, pp. 231-246. doi:10.1007/978-3-030-33695-0_17, ISSN 2194-5357.
[7] O. Pichugina, O. Matsiy, Y. Skob, Performance comparison of unbounded knapsack problem formulations, in: Proceedings of the 7th International Conference on Computational Linguistics and Intelligent Systems. Volume III: Intelligent Systems Workshop, volume 3403 of CEUR Workshop Proceedings, CEUR, 2023, pp. 263-272. URL: https://ceur-ws.org/Vol-3403/paper21.pdf, ISSN 1613-0073.
[8] O. Pichugina, L. Kirichenko, Y. Skob, O. Matsiy, Constraint community detection: modelling approaches with applications, in: Proceedings of the 3rd International Workshop of IT-professionals on Artificial Intelligence (ProfIT AI 2023), volume 3641 of CEUR Workshop Proceedings, CEUR, 2023, pp. 204-215. URL: https://ceur-ws.org/Vol-3641/paper18.pdf.
[9] H. W. Kuhn, The Hungarian method for the assignment problem, in: M. Jünger, T. M. Liebling, D. Naddef, G. L. Nemhauser, W. R. Pulleyblank, G. Reinelt, G. Rinaldi, L. A. Wolsey (Eds.), 50 Years of Integer Programming 1958-2008: From the Early Years to the State-of-the-Art, Springer, Berlin, Heidelberg, 2010, pp. 29-47. doi:10.1007/978-3-540-68279-0_2.
[10] Y. Berstein, J. Lee, S. Onn, R. Weismantel, Parametric nonlinear discrete optimization over well-described sets and matroid intersections, Mathematical Programming 124 (2010) 233-253. doi:10.1007/s10107-010-0358-6.
[11] E. Cela, The Quadratic Assignment Problem: Theory and Algorithms, softcover reprint of hardcover 1st ed., Springer US, 2010.
[12] A. Bufardi, On the efficiency of feasible solutions of a multicriteria assignment problem, The Open Operational Research Journal 2 (2008). URL: https://benthamopen.com/ABSTRACT/TOORJ-2-25.
[13] E. K. Mensah, E. Keshavarz, M. Toloo, Finding efficient solutions of the multicriteria assignment problem, in: M. Toloo, S. Talatahari, I. Rahimi (Eds.), Multi-Objective Combinatorial Optimization Problems and Solution Methods, Academic Press, 2022, pp. 193-211. doi:10.1016/B978-0-12-823799-1.00008-5.
[14] A. Odior, O. Charles-Owaba, F. Oyawale, Determining feasible solutions of a multicriteria assignment problem, Journal of Applied Sciences and Environmental Management 14 (2010). doi:10.4314/jasem.v14i1.56481.
[15] Y. Collette, P. Siarry, Multiobjective Optimization: Principles and Case Studies, 2004th ed., Springer, 2011.
[16] M. Ehrgott, Multicriteria Optimization, Springer-Verlag Berlin and Heidelberg GmbH & Co. KG, 2010.
[17] I. Kaliszewski, J. Miroforidis, D. Podkopaev, Multiple Criteria Decision Making by Multiobjective Optimization: A Toolbox, softcover reprint of the original 1st ed., Springer, 2018.
[18] M. Ayodele, Penalty weights in QUBO formulations: permutation problems, volume 13222, 2022, pp. 159-174. doi:10.1007/978-3-031-04148-8_11. arXiv:2206.11040.
[19] P. Bruce, A. Bruce, P. Gedeck, Practical Statistics for Data Scientists: 50+ Essential Concepts Using R and Python, 2nd ed., O’Reilly Media, Beijing Boston Farnham Sebastopol Tokyo, 2020.
[20] J. Mandel, The Statistical Analysis of Experimental Data, revised ed., Dover Publications, New York, 1984.