    Are We Really Standing on the Shoulders of
                     Giants?

                  Raghava Mutharaju and Pavan Kapanipathi

                        Wright State University, OH, USA.
                   mutharaju.2@wright.edu, pavan@knoesis.org




1    Introduction

          I have not failed. I have found 10,000 ways that won’t work.
                                                           –Thomas A. Edison

    About 3000 designs for bulbs were tested between 1878 and 1880 before the first practical light bulb was produced in Edison’s lab. The tests included experiments with several materials for filaments, such as cotton, linen thread, and wood splints, which are reported in his first patent. However, months later, the material that proved successful was a carbonized bamboo filament that could last for more than 1200 hours [1]. This story illustrates the significant effort spent analyzing the failure of each material before experimenting with new ones. This is the basic principle of science, where researchers examine what went wrong in order to get things right. Yet what most of us remember of the light bulb story is that Thomas A. Edison invented the light bulb in 1879. Success stories generally overshadow the failures that lead to them, and those failures are often not well documented.
    This habit of ignoring failures and highlighting only the success stories has permeated the publication of research results. In the research world, where publications are one of the main methods of disseminating work, discussion of failures is often neglected. Research by its very nature involves risk and is prone to failure, so there will likely be more failures than success stories. This makes it all the more important to share and discuss negative results in an unbiased and open-minded environment, so that others do not run into the same dead ends. For such an environment to exist, there needs to be a change in the way negative results are perceived by the community, and steps need to be taken to bring about this gradual change.
    We argue that negative results should also be considered a research contribution and outline the benefits of doing so. We also indicate possible next steps that can be taken so that negative results get their due.


2    Impact of Overrating Success and Ignoring Failures

“Publish or Perish” is a deeply rooted paradigm in the research domain. In conjunction with this, the preference for success over failure in research

publications has instilled a “fear of failure” in researchers, whereas exactly the opposite mindset is required to perform high-impact, high-quality research. This fear of failure has in turn led to a chain of problems, including:
 – Selective publishing of results: Due to the pressure to publish positive results, researchers opt to report only the part of the results that supports their hypothesis and ignore the negative results. This is not only a problem in the data-driven areas of computer science research, but also a significant challenge in other areas of research such as medicine and psychology [2].
 – Irreproducibility: Researchers find it hard to reproduce the results of a published paper, either due to a lack of information or as a consequence of selectively published results. Such research has little value and can be considered a waste of time and effort, both for the authors and for the researchers who attempt to reproduce the results.
 – Wastage of research funds: The United States federal government spent around $30 billion in 2013 on basic scientific research [4]. Sequestration has already cut the funds allocated for research. Irreproducible and selectively published research eats up a part of these funds, which could be better utilized for quality research that also includes negative results.
 – Reinventing the wheel of negative results: Research extends the state of the art and helps in the advancement of knowledge. However, it is not uncommon for a proposed solution to a problem not to work as well as expected. It could very well be that this solution was already tried and tested by other researchers in the field, but the outcome was never published. Significant time, effort, and funds are consumed in reinventing this wheel of negative results.


3      Encouraging Negative Results
Positive results are helpful in determining what to do and how to do it, whereas negative results tell us what not to do. Furthermore, negative results can help in mitigating the problems introduced in Section 2. Although having a workshop on negative and inconclusive results is a good start, more needs to be done to integrate the discussion of negative results into the mainstream. Here we list some possible steps that can be taken in this regard.

3.1     Negative Results Are a Contribution
A submission to the research track of conferences such as ISWC and ESWC is evaluated based on its contribution to the state of the art. Whether a submission counts as a research contribution should not depend on the type of results obtained in the work. Irrespective of whether the results are positive or negative, the quality of the work should be judged against appropriate review criteria. Since existing review criteria favor positive results, we need a different set of criteria for assessing the quality of negative results.

    On the other hand, encouraging the submission of negative results opens up the possibility of conferences being flooded with negative results, since there are generally more dead ends than successes in research. Strict metrics for judging the quality of negative results should therefore be put in place. The following criteria can be used to determine the quality of negative results.

1. Quality of proposed solution: Sometimes the most obvious solution to a problem is not the most efficient or suitable one. Precisely because the solution is obvious, other researchers are likely to go down the same route and hit the same dead end. Although judging how obvious a proposed solution (that led to a negative result) is remains subjective, experienced researchers (reviewers) should have a fair idea of its obviousness. The more obvious a solution is, the higher the quality of the negative result that was a consequence of it.
2. Scale of failure: Failures on a large scale should be avoided. The more time, effort, and funds that were invested in the approach that led to the negative result, the higher the quality of the negative result.
3. Lessons learned: The primary purpose of publishing negative results is to describe the lessons that were learned from the proposed approach. The quality of negative results depends on the number and quality of the lessons learned.
4. Impact on related work: Discussion of the following question will be helpful to the community: “Given that a particular approach gave negative results for a particular problem, does this approach also give negative results for related problems or for similar problems in other domains?”

    Furthermore, in order to assess the impact of these negative results, it would be important for the community to come up with a criterion that reflects the number of researchers who can avoid the dead-end path (reinventing the wheel of negative results) or learn from “what went wrong”.
    The long-term impact of this encouragement, and in turn the utility of negative results over time, can be gauged by the citations of the published papers. More citations would mean that several other researchers in the community were able to make use of the lessons learned from the approach that led to the negative results.


3.2   Open Reviews

The quality of reviews at conferences has consistently been a topic of debate. Since researchers today are predisposed to favor positive results, conferences open to negative results should consider having an open review system (e.g., the Semantic Web Journal [3]). This not only encourages accountability of reviewers but also provides an open ground for discussing the value of negative results if reviewers dismiss them. Such transparency would reduce the “fear of failure” among researchers.

3.3     Open Datasets and Code
One of the problems in the chain introduced by the “fear of failure” is the issue of verifiability. Since most researchers frame their work around positive results, results are largely taken on trust and there is a lack of verifiability. Hence, transparent research (wherever possible1) should be encouraged. Open source code and open datasets accompanying negative results allow the results to be thoroughly verified, so that their impact and reproducibility can be assessed before the paper is accepted. The same should also be encouraged for positive results. In general, this approach would hopefully alleviate the irreproducibility and verifiability problems induced by the fear of failure.

3.4     Flexible Funding Agencies
Funding agencies play an important role in setting the direction of research. If they specify that negative results from funded projects will also be encouraged (say, for example, by ensuring that they can be published at a top conference, which ties back to the previous discussion), that would give researchers greater freedom to pursue risky topics as well as the confidence to publish negative results.

4     Conclusion
Negative results are as important as positive results, since they keep researchers from wasting their time, effort, and money on approaches that did not work for others. The “publish or perish” paradigm, combined with the fear of failure (success overshadowing failures), has led to a chain of problems that can be addressed by recognizing negative results as a contribution as well. In doing so, we can learn from negative results and truly say that failures are the stepping stones to success. This in turn would enable us to see much further, since we would be standing on the shoulders of giants (other researchers). Right now, we are on shaky ground because we are unwilling to learn from others’ failures.

References
1. A Brief History of the Light Bulb. http://www.bulbs.com/learning/history.aspx. Accessed: 2015-04-20.
2. How science goes wrong. http://www.economist.com/news/leaders/21588069-scientific-research-has-changed-world-now-it-needs-change-itself-how-science-goes-wrong. Accessed: 2015-04-20.
3. Semantic Web Journal. http://www.semantic-web-journal.net/. Accessed: 2015-04-20.
4. To Get More Out of Science, Show the Rejected Research. http://www.nytimes.com/2014/09/19/upshot/to-get-more-out-of-science-show-the-rejected-research.html. Accessed: 2015-04-20.

1 Some organizations are restricted due to their non-disclosure policies.