=Paper=
{{Paper
|id=Vol-1686/LightningTalkPaper17
|storemode=property
|title=Lightning Talk: Report on Software Metrics for Research Software
|pdfUrl=https://ceur-ws.org/Vol-1686/WSSSPE4_paper_26.pdf
|volume=Vol-1686
}}
==Lightning Talk: Report on Software Metrics for Research Software==
Lightning Talk: Report on Software Metrics for Research Software
Gabrielle Allen∗, Emily Chen∗, Ray Idaszak†, Daniel S. Katz∗
∗ University of Illinois Urbana-Champaign, Urbana, Illinois, USA, Email: {gdallen, echen35, dskatz}@illinois.edu
† RENCI, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina, USA, Email: rayi@renci.org
Abstract—We report on a survey to investigate common metrics for research software, following a plan of work established by a WSSSPE3 working group.

I. LIGHTNING TALK

One of the working groups at WSSSPE3 [1] focused on discussing metrics for research software. Metrics for research software were seen as being important for promotion and tenure, quantifying scientific impact, reducing duplication, and prioritizing development, among other motivations. The group planned to investigate common metrics for research software, in order to publish a white paper that would be of interest to the community. As a first step in this direction, we began an activity to investigate the metrics for software that are being used by the awardees of the National Science Foundation (NSF) Software Infrastructure for Sustained Innovation (SI2) program [2]. All lead principal investigators for SI2 awards were contacted with a request to complete a survey providing the metrics they had originally proposed to use to assess their software components, along with any additional metrics they are currently using.

Specifically, the questions asked of the survey respondents were:

1) Are the software components developed through your award correctly listed at this web site: https://sites.google.com/site/softwarecyberinfrastructure/software/software? If this information is not accurate, please list all software components generated through your award here.
2) What metrics did you list in your SSI/SSE proposal or agree to before your award for these software components?
3) If you are now collecting additional metrics for the software components beyond those you planned at that time, what are they?
4) Are there any metrics you planned in your proposal or agreed to before your proposal was awarded that you have since realized you are not able to collect?
5) Are there any metrics you planned in your proposal or agreed to before your proposal was awarded that you have since realized are not useful?
6) Did you find that collecting metrics led to improving the software (e.g., quality, usefulness, sustainability, reliability, performance, impact)? If so, please indicate which metrics were the most useful and why.

The responses to this survey are currently being collected. We will present an initial analysis of the replies and describe next steps in this activity as a lightning talk at WSSSPE4.

ACKNOWLEDGMENT

Work by E. Chen was supported by the NCSA SPIN program.

REFERENCES

[1] D. S. Katz, S. T. Choi, K. E. Niemeyer, J. Hetherington, F. Löffler, D. Gunter, R. Idaszak, S. R. Brandt, M. A. Miller, S. Gesing, N. D. Jones, N. Weber, S. Marru, G. Allen, B. Penzenstadler, C. C. Venters, E. Davis, L. Hwang, I. Todorov, A. Patra, and M. de Val-Borro. Report on the Third Workshop on Sustainable Software for Science: Practice and Experiences (WSSSPE3). Technical report, arXiv, 2016. arXiv:1602.02296 [cs.SE].
[2] National Science Foundation. Implementation of NSF CIF21 Software Vision (SW-Vision). http://www.nsf.gov/funding/pgm_summ.jsp?pims_id=504817
This work is licensed under a CC-BY-4.0 license.