=Paper=
{{Paper
|id=None
|storemode=property
|title=Program Assessment via a Capstone Project
|pdfUrl=https://ceur-ws.org/Vol-920/p21-galletly.pdf
|volume=Vol-920
|dblpUrl=https://dblp.org/rec/conf/bci/GalletlyCKB12
}}
==Program Assessment via a Capstone Project==
John Galletly, American University in Bulgaria, Blagoevgrad 2700, Bulgaria, +359 73 888 466, jgalletly@aubg.bg

Dimitar Christozov, American University in Bulgaria, Blagoevgrad 2700, Bulgaria, +359 73 888 443, dgc@aubg.bg

Volin Karagiozov, American University in Bulgaria, Blagoevgrad 2700, Bulgaria, +359 73 888 456, vkaragiozov@aubg.bg

Stoyan Bonev, American University in Bulgaria, Blagoevgrad 2700, Bulgaria, +359 73 888 419, sbonev@aubg.bg

BCI'12, September 16–20, 2012, Novi Sad, Serbia. Copyright © 2012 by the paper's authors. Copying permitted only for private and academic purposes. This volume is published and copyrighted by its editors. Local Proceedings also appeared in ISBN 978-86-7031-200-5, Faculty of Sciences, University of Novi Sad.

ABSTRACT

This paper describes an approach that has been adopted at the American University in Bulgaria to assess the Computer Science degree program for accreditation purposes.

Categories and Subject Descriptors

K.3.2 [Computers and Education]: Computer and Information Science Education – Curriculum, Self-assessment

General Terms

Management, Measurement, Standardization

Keywords

Program assessment, student learning outcomes, accreditation, rubrics, primary trait analysis, capstone project

1. INTRODUCTION

Accreditation is the primary process for assuring and improving the quality of higher-education institutions. A university or program of study, such as computer science, that has successfully completed an accreditation review is considered to have the required instructional and supporting services to help students achieve their educational goals. An accredited university means that students can expect the university or program to live up to its name. It means that a student can be assured that his/her degree has value.

The American University in Bulgaria (AUBG), situated in Blagoevgrad, Bulgaria, offers an undergraduate American-style, liberal arts education. AUBG is subject to two accreditation agencies – one for American accreditation (via the New England Association for Schools and Colleges – NEASC [1]), as AUBG is an American institution; and the other for Bulgarian accreditation (via the National Evaluation and Accreditation Agency – NEAA [2]), as AUBG is situated in Bulgaria and is also a Bulgarian institution. Earlier papers [3, 4, 5, 6, 7, 8] shared the experience of aspects of teaching computer science at a liberal arts university, including an examination of the accreditation process by these two different agencies, at the institutional and program levels.

Accreditation, as a means to assure and improve higher-education quality, uses a set of standards that have been developed by accreditation agencies such as NEASC and NEAA [9]. As part of the accreditation process, institutions and programs must show that they meet the standards that require them to provide quality education.

One standard is an assessment of student learning in each program of study, i.e. the expected student learning outcomes (SLOs). Assessment is based on clear statements of what students are expected to gain, achieve, demonstrate, or know by the time they complete their academic program. The program must implement and provide support for the systematic and broad-based assessment of what, and how, students are learning in the academic program. Assessment plays a key role in evaluating student learning and in making improvements to the program [10].

Assessment is not just about examining students; it is much more. The idea is that each program conducts an assessment of student learning each year, and uses the results to improve student learning by making appropriate changes to the program when and where necessary. Basically, each program should:

(a) Identify and develop a set of student learning outcomes.

(b) Develop an assessment plan.

(c) Determine an assessment method.

(d) Develop assessment metrics or rubrics.

(e) Collect and analyze assessment data, and draw conclusions about collective student achievement in each outcome.

(f) When necessary, based on the above analysis, propose changes to the program in order to improve student learning for any under-performing outcomes. In other words, close the loop.

Various approaches to this problem have been described in the literature [11]. This paper describes the method selected by the Computer Science department at AUBG to assess the computer science program for accreditation purposes.

2. PROGRAM ASSESSMENT

2.1 Formulation of Learning Outcomes

The SLOs (or goals) for the computer science program were first developed from the program's mission statement and the (high-level) educational goals of the university.

The following are the key set of outcomes that the Computer Science department considered necessary for our graduating students:

"The program is designed to enable students to meet the following skill- or competency-based outcomes and show mastery of computer science knowledge and skills, through a capability to

• Demonstrate an understanding of, and ability to apply, current theories, models, techniques and technologies that provide a basis for problem solving.

• Work as an effective individual and as part of a team to develop and deliver quality software.

• Have the ability to communicate effectively both orally and in writing.

• Be aware of key ethical issues affecting computer science and the responsibilities of computer science professionals.

• Learn new theories, models, techniques and technologies as they emerge, and appreciate the necessity of such continuing professional development."

Having developed this set of learning outcomes, it was then necessary to decide how best they could be evaluated. The Computer Science department decided to approach this through the assessment of the computer science senior projects – evaluating senior projects based on the program's goals.

2.2 Assessment Method for Learning Outcomes

A number of assessment techniques have been suggested and used in practice [12]. Assessment methods are classified as being either direct or indirect [13]. Direct methods evaluate what a student has learned. Examples are capstone projects; tests and examinations; and portfolios of the students' work. Indirect methods, on the other hand, gather information through means other than looking at samples of student work. Examples are exit interviews; alumni surveys; and employer surveys.

Any assessment method should reflect the type of learning to be measured. Computer science is a practical discipline with an emphasis, at AUBG, on quality software development. This led the Computer Science department to decide that the ideal assessment method is the computer science senior project – an existing compulsory capstone course [14] for graduating computer science students at AUBG. Successfully completing the senior project broadly demonstrates a student's competencies as a computer professional.

The senior project requires the development of a substantial software package by each student, individually, over a semester-long period. As such, it provides evidence of how well our students integrate and apply the principles, concepts, and abilities learnt in preceding computer science courses into this culminating project.

At the end of the semester, each project is evaluated via a public presentation of the project, along with a demonstration. Additionally, a detailed project report must also be submitted. The project is evaluated by a panel of Computer Science faculty.

2.3 Choice of Assessment Method

After some deliberation, the Computer Science department decided that a suitable method for assessing the projects was Primary Trait Analysis (PTA) [15, 16, 17]. ("Trait" here equates to a performance indicator, i.e. a measurable attribute that defines some learning outcome.) Primary trait analysis, an evaluation tool used extensively in liberal arts institutions, defines a number of specific criteria or traits to be evaluated, along with specific measures of performance for each trait. To paraphrase Walvoord and McCarthy [16], the PTA method allows us to take what we are already doing, i.e. scoring the students' capstone projects, and translate that process into an assessment device. Using PTA for student evaluation provides the faculty with clear guidelines for student evaluation, and the students with a clear understanding of performance expectations.

The method makes use of a scoring grid (or matrix) which was developed from the computer science program learning outcomes and, importantly, provides feedback to the faculty for any future curricular enhancements by indicating performance strengths and weaknesses in the given outcomes, i.e. it allows program assessment and improvement. An example partial grid for one student is shown in Figure 1 below.

Figure 1: Example partial scoring grid for one student.
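To make the grid structure concrete, the following minimal Python sketch models a grid row as an outcome paired with its rubric, plus a blank grid for one student. The trait names and rubric wordings are abbreviated, illustrative stand-ins for the Section 2.1 outcomes, not the department's actual grid.

```python
from dataclasses import dataclass

# A row of the PTA scoring grid: one trait, i.e. a program outcome
# together with the rubric that interprets it for the capstone project.
@dataclass
class Trait:
    outcome: str  # short name for the student learning outcome
    rubric: str   # capstone-specific interpretation of the outcome

# Illustrative traits, abbreviated from the outcomes in Section 2.1.
TRAITS = [
    Trait("Apply theories and techniques",
          "Current theories, models and technologies applied to the problem"),
    Trait("Develop quality software",
          "Working solution designed, implemented and documented"),
    Trait("Communicate effectively",
          "Clear written report and oral presentation"),
    Trait("Ethical awareness",
          "Key ethical issues recognised and addressed"),
    Trait("Continuing learning",
          "New techniques learned and applied during the project"),
]

# The grid columns are scores from 1 (poor) to 5 (excellent); a blank
# grid for one student maps each trait to a not-yet-assigned score.
student_grid = {trait.outcome: None for trait in TRAITS}
```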
2.4 Development of Assessment Rubrics

Rubrics were developed from the student learning outcomes. Essentially, a rubric is a translation of each outcome into the context of the capstone project. For example, for the outcome "Work effectively … to develop and deliver quality software", the rubric developed was "the requirements for the software were thoroughly discussed with the client, analyzed, and a working software solution has been designed based on quality design goals, implemented and documented".

Rubrics aid both the students and the faculty. For the students, rubrics make clear the criteria by which their projects will be evaluated. For the faculty, rubrics allow all projects to be evaluated according to the same criteria.
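As an illustration of how such rubrics might be stored and published to students, here is a small Python sketch; the first rubric is the one quoted above, while the second rubric and the function name are invented for illustration.

```python
# Outcome -> rubric, as developed in Section 2.4. The first entry is the
# example quoted in the text; the second is an invented placeholder.
RUBRICS = {
    "Develop and deliver quality software": (
        "The requirements for the software were thoroughly discussed with "
        "the client, analyzed, and a working software solution has been "
        "designed based on quality design goals, implemented and documented."
    ),
    "Communicate effectively orally and in writing": (
        "The project report and public presentation are clear, well "
        "organized, and appropriate for the audience."
    ),
}

def publish_criteria(rubrics: dict) -> None:
    """Let students know the criteria by which projects will be evaluated."""
    for outcome, rubric in rubrics.items():
        print(f"* {outcome}\n    {rubric}\n")

publish_criteria(RUBRICS)
```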
2.5 Collection of Assessment Data

Each row of the PTA grid represents a trait, i.e. an outcome, plus its associated rubric; each column represents a score in the range 1 to 5, where 1 represents poor and 5 represents excellent for a given outcome.

For every student presentation, demonstration and project report, each member of the judging panel scores the student for each outcome, according to its rubric. The overall score (the aggregate of the judging panel's scores) for each student allows the assignment of a grade for that student for the project. Also, and importantly for program assessment, aggregating the scores for all students provides a quantitative, direct measure of student learning for each key program SLO.
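The scoring and grading step can be sketched as follows. The 1-to-5 scale and the faculty panel come from the text, but the paper fixes neither the aggregation function nor the grade thresholds, so the mean and the letter-grade cut-offs below are assumptions, and all scores are invented.

```python
from statistics import mean

# Scores (1 = poor ... 5 = excellent) given by a three-member faculty
# panel to one student, one list per trait. All values are invented.
panel_scores = {
    "Apply theories and techniques": [4, 5, 4],
    "Develop quality software":      [5, 4, 4],
    "Communicate effectively":       [3, 2, 3],
    "Ethical awareness":             [4, 4, 5],
    "Continuing learning":           [4, 4, 4],
}

# Aggregate the panel's scores per trait, then overall. The paper does
# not fix the aggregation function; a simple mean is assumed here.
per_trait = {trait: mean(scores) for trait, scores in panel_scores.items()}
overall = mean(per_trait.values())

def project_grade(score: float) -> str:
    """Map the aggregate score to a project grade (thresholds assumed)."""
    for cutoff, grade in [(4.5, "A"), (3.5, "B"), (2.5, "C"), (1.5, "D")]:
        if score >= cutoff:
            return grade
    return "F"

print(f"Overall score {overall:.2f} -> grade {project_grade(overall)}")
```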
2.6 Analysis of Assessment Data

The aggregated scores from the grids of all students for all outcomes are analyzed by the faculty to determine whether there are any outcomes on which the students are collectively under-performing. If there are one or more such outcomes, steps are taken to understand why this is happening, and corrective action is applied in those courses that service those outcomes. The quantitative nature of the assessment allows the faculty to focus on strategies for any improvement necessary in the program.

An example of a partial grid with the aggregated scores for 30 students is shown in Figure 2 below. The second row shows that, in this example, students are under-performing in communication abilities.

Figure 2: Example partial grid of aggregated scores for 30 students.
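This analysis step can be sketched in the same style: aggregate the cohort's scores per outcome and flag any outcome whose average falls below a threshold. The 3.0 cut-off is an assumption (the paper states no numeric criterion), the cohort is shortened to six students, and all values are invented; the communication row mirrors the under-performing example above.

```python
from statistics import mean

THRESHOLD = 3.0  # assumed cut-off; the paper gives no numeric criterion

# Aggregate 1-5 scores per outcome across the cohort (six students here
# for brevity; Figure 2 aggregates 30). All values are invented; the
# communication row mirrors the under-performing example in the text.
cohort_scores = {
    "Apply theories and techniques": [4, 5, 4, 3, 4, 5],
    "Develop quality software":      [4, 4, 5, 4, 3, 4],
    "Communicate effectively":       [2, 3, 2, 3, 2, 3],
}

# Flag outcomes on which students are collectively under-performing, so
# that corrective action can be targeted at the courses serving them.
for outcome, scores in cohort_scores.items():
    avg = mean(scores)
    status = "UNDER-PERFORMING" if avg < THRESHOLD else "ok"
    print(f"{outcome:32} mean = {avg:.2f}  {status}")
```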
3. DISCUSSION

The PTA method of program assessment has been in use for a few years at AUBG. It has provided useful feedback to the faculty on how student learning, related to each learning objective, is progressing. One early indicator was the one outlined above – some students were not performing well in communication, either in writing or in presenting. As a result, courses were updated to give more feedback to students, with, for example, better rubrics for report writing and presentation skills. More courses included student report writing and presentations.

Although the traits and rubrics have served their purpose, experience has shown that the current scoring matrix is too coarse – finer detail is needed in the traits and their accompanying rubrics.

For example, for the outcome "Have the ability to communicate effectively both orally and in writing", the traits may be Preparation/Content and Presentation. The first trait may be broken down into Organization; Quality of Content; Quality of Conclusion; etc., and the second may be broken down into Style (pace, voice quality, mannerisms); Handling of questions; etc. Such breakdowns would need to be accompanied by more detailed rubrics.

Figure 3: Example of adding finer detail to communications trait.

Figure 4: Example of detailed rubric for communications trait.

4. CONCLUSION

This paper has described how capstone projects in the computer science degree program at AUBG are assessed using the method of Primary Trait Analysis. This assessment is based on the expected student learning outcomes developed by the Computer Science department at AUBG. The approach also allows the tracking of quantitative measures over time to provide a clearer view of student learning.

5. REFERENCES

[1] NEASC home page: http://www.neasc.org/

[2] NEAA home page: http://www.neaa.government.bg/en

[3] Christozov D., Galletly J., Miree L., Karagiozov V. and Bonev S., 2011, Accreditation – a Tale of Two Systems, Sixth International Conference – Computer Science 2011, Ohrid, Macedonia.

[4] Bonev S., Christozov D., Galletly J. and Karagiozov V., 2005, Computer Science Curriculum in a Liberal Arts Institution: Transition from ACM/IEEE Curriculum Model 1992 to 2001, Second International Scientific Conference – Computer Science, Halkidiki, Greece.

[5] Karagiozov V., Christozov D., Galletly J. and Bonev S., 2005, E-learning in a Liberal Arts Institution: An Open Source Solution – the AUBG Experience, Second International Scientific Conference – Computer Science, Halkidiki, Greece.

[6] Karagiozov V., Christozov D., Galletly J. and Bonev S., 2008, Facilities and Support for Teaching Computer Science at the American University in Bulgaria, Fourth International Scientific Conference – Computer Science, Kavala, Greece.

[7] Christozov D., Galletly J., Karagiozov V. and Bonev S., 2007, Learning by Doing – the Way to Develop Computer Science Professionals, IEEE Conference: Informatics Education – Europe II, Thessaloniki, Greece.

[8] Christozov D., Galletly J., Karagiozov V. and Bonev S., 2009, Cooperative Learning – the Role of Extra-Curricular Activities, Fourth Balkan Conference in Informatics, Thessaloniki, Greece.

[9] NEASC standards page: http://cihe.neasc.org/standards_policies/standards/standards_html_version

[10] Bailie F., Whitfield D. and Abunawass A., 2007, Assessment and Its Role in Accreditation, International Conference on Frontiers in Education: Computer Science and Computer Engineering, Las Vegas, NV.

[11] Sanders K. and McCartney R., 2003, Program Assessment Tools in Computer Science: A Report from the Trenches, SIGCSE '03: Proceedings of the 34th SIGCSE Technical Symposium on Computer Science Education.

[12] Angelo T. A. and Cross K. P., 1993, Classroom Assessment Techniques (2nd Edition), Jossey-Bass Publishers.

[13] See, for example: http://assessment.aas.duke.edu/documents/DirectandIndirectAssessmentMethods.pdf

[14] Berheide C., 2007, Doing Less Work, Collecting Better Data: Using Capstone Courses to Assess Learning, Peer Review, No. 9, pp. 27–30.

[15] Lloyd-Jones R., 1977, Primary Trait Scoring, in C. Cooper and L. Odell (eds.), Evaluating Writing: Describing, Measuring, Judging, Urbana, Ill., National Council of Teachers of English.

[16] Walvoord B. E. and McCarthy L. P., 1991, Thinking and Writing in College, Urbana, Ill., National Council of Teachers of English.

[17] Walvoord B. E. and Anderson V. J., 1998, Effective Grading: A Tool for Learning and Assessment, Jossey-Bass Publishers.