Pitfalls in System Performance Evaluation

  • 2011-05-25
  • Research

Abstract

In the last few years, researchers have identified disturbing flaws in the way that experiments are performed in computer science. For example, in the area of performance evaluation of computer systems, our measurements on one system are rarely reproducible on another. As hardware and software grow more complex, this problem only gets worse. Bad evaluations misdirect research and curtail creativity. A poorly performed but successfully published evaluation can encourage fruitless investigation of a flawed idea, while the publication of a flawed observation can discourage further exploration of an important area of research.

In this mini-lecture we distill not only our own experience in performance evaluation but also the insight gained through our involvement in the “Evaluate Collaboratory”, an open group of researchers with an interest in improving the state of practice in experimental evaluation of software and systems. This mini-lecture covers topics including the selection of benchmarks, performance metrics, measurement bias and accuracy, statistics, reproducibility, negative results, and observational studies. The lecture includes discussions of recent research results on these topics and can also address topics and issues proposed by the participants. The goal of this mini-lecture is to enable the participants to learn from the past mistakes of others and to avoid common pitfalls in their own future performance evaluations.
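
To give a flavor of the measurement-bias and statistics pitfalls listed above, here is a minimal sketch (not taken from the lecture material; the workload, run count, and confidence level are illustrative assumptions): instead of reporting a single timing, a benchmark is run repeatedly and summarized by its mean and an approximate 95% confidence interval.

    # Minimal sketch: repeated timing with a confidence interval instead of a single run.
    # The workload and run count below are placeholder choices, not lecture material.
    import math
    import statistics
    import time

    def measure(f, runs=30):
        """Time f repeatedly; return the mean and a ~95% confidence half-width."""
        samples = []
        for _ in range(runs):
            start = time.perf_counter()
            f()
            samples.append(time.perf_counter() - start)
        mean = statistics.mean(samples)
        # Standard error of the mean; 1.96 approximates the 95% normal quantile.
        half_width = 1.96 * statistics.stdev(samples) / math.sqrt(len(samples))
        return mean, half_width

    if __name__ == "__main__":
        workload = lambda: sum(i * i for i in range(100_000))  # placeholder workload
        mean, hw = measure(workload)
        print(f"{mean * 1e3:.2f} ms ± {hw * 1e3:.2f} ms (95% confidence interval)")

Reporting an interval rather than a single number makes visible how much run-to-run variation (from warm-up, scheduling, memory layout, and similar sources of measurement bias) remains in the result.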

This lecture series is aimed in particular at PhD students and advanced master's students.

Lecture series dates

  • Wednesday, 25 May 2011, 9-11 am, Library E185.1
  • Thursday, 26 May 2011, 8-10 am, Seminarraum Argentinierstraße
  • Friday, 27 May 2011, 9-11 am, Seminarraum Argentinierstraße

Biography

Matthias Hauswirth is an assistant professor at the Faculty of Informatics at the University of Lugano (Switzerland), where he leads the Software and Programmer Efficiency (Sape) research group. He is interested in performance measurement, understanding, and optimization. Matthias received his PhD from the University of Colorado at Boulder.

Note

These lectures are organised by the Compilers and Languages Group at the Institute of Computer Languages.
