Prof. Dr. Samuel A. Fricker, Blekinge Institute of Technology, Sweden and FHNW University, Switzerland
Prof. Dr. Markus Fiedler, Blekinge Institute of Technology, Sweden
Prof. Dr. Jürgen Börstler, Blekinge Institute of Technology, Sweden
Dr. Eric Knauss, Associate Professor, Chalmers University of Technology, Sweden
Prof. Günther Ruhe, University of Calgary, Canada
Dr. Nelly Bencomo, Aston University, UK
Prof. Danny Weyns, KU Leuven, Belgium
Date and Time:
15 June 2020, 15:00
Context. Companies continuously explore their software systems to acquire evidence for software evolution, such as bugs in the system and new functional or quality requirements. So far, managers have made decisions about software evolution based on evidence gathered from interpreting user feedback and from monitoring data, collected separately from software in use. These evidence-collection processes are usually unmethodical, lack systematic guidance, and suffer from practical issues. This lack of a systematic approach leaves opportunities for detecting evidence for system evolution unexploited.

Objective. The main research objective is to improve evidence collection from software in use and to guide software practitioners in decision-making about system evolution. Understanding useful approaches to collecting user feedback and monitoring data, two important sources of evidence, and combining them are key objectives as well.

Method. The thesis proposes a method for Gathering of Evidence from Software in Use (GESU), developed using the design science research methodology. GESU was designed in three iterations and validated in case studies from the European projects FI-STAR, SUPERSEDE, and Wise-IoT, respectively. The final version of GESU, designed in the frame of knowledge-creation theory, proposes goal-based, event-driven monitoring and system-triggered user feedback. A survey and a systematic mapping study were further research methods used to build the knowledge required for the design. The systematic mapping study produced a map giving an overview of important analytics for managing a software ecosystem. The characteristics of user-feedback media, as well as the relations between analytics and product goals, were investigated through the survey.

Results. The results show that GESU is not only successful in industrial environments but also yields new evidence for software evolution by bringing user feedback and monitoring data together. This combination helps software practitioners improve their understanding of end-user needs and system drawbacks, ultimately supporting continuous requirements elicitation and product evolution. GESU suggests monitoring a software system based on its goals to filter relevant data (i.e., goal-driven monitoring) and gathering user feedback when the system requests feedback about the software in use (i.e., system-triggered user feedback). The system identifies interesting situations of system use and issues automated requests for user feedback to interpret the evidence from the users' perspective. We justified the use of goal-driven monitoring and system-triggered user feedback with complementary findings of the thesis, which showed that the goals and characteristics of software systems constrain monitoring data. We thus narrowed the monitoring and observational focus to data aligned with the goals, instead of a massive amount of potentially useless data. Finally, we found that requesting feedback from users with a simple feedback form is a useful approach to motivating users to provide feedback.

Conclusion. Combining user feedback and monitoring data helps acquire insights into the success of a software system and guides decision-making regarding its evolution. This work can be extended in the future by implementing an adaptive system for gathering evidence from combined user feedback and monitoring data.
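The loop of goal-driven monitoring and system-triggered user feedback described in the abstract can be sketched roughly as follows. This is a minimal illustration, not the published GESU method: the metric names, thresholds, `Event` structure, and feedback message are all invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Event:
    metric: str   # e.g., "response_time_ms" (illustrative name)
    value: float

# Goal-driven monitoring: only metrics tied to a product goal are observed,
# each with a threshold that marks an "interesting situation" of system use.
GOALS = {
    "response_time_ms": 2000.0,  # assumed goal: keep responses under 2 s
    "error_rate": 0.05,          # assumed goal: fewer than 5% failed requests
}

def monitor(events):
    """Filter the event stream to goal-relevant events that violate a goal."""
    for e in events:
        threshold = GOALS.get(e.metric)
        if threshold is not None and e.value > threshold:
            yield e  # interesting situation detected

def trigger_feedback_request(event):
    """System-triggered feedback: ask the user to interpret the situation."""
    return (f"We noticed {event.metric} reached {event.value}. "
            "How did this affect your experience?")

stream = [
    Event("response_time_ms", 350.0),   # within goal: ignored
    Event("cpu_load", 0.9),             # not goal-relevant: filtered out
    Event("response_time_ms", 5200.0),  # goal violation: triggers a request
]

requests = [trigger_feedback_request(e) for e in monitor(stream)]
print(len(requests))  # only the goal-violating event triggers a request
```

The point of the sketch is the filtering order: monitoring data is first narrowed to goal-aligned events, and only those events generate automated feedback requests, so the user's interpretation is attached to evidence that matters for evolution decisions.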
The thesis is a compilation of seven papers presented in chapters 2 to 8, plus an overview (kappa). Download the PhD thesis here, or see each chapter separately below.
Chapter 1. F. Fotrousi (2020). "Overview".
Chapter 2. F. Fotrousi, M. Stade, N. Seyff, S. Fricker, M. Fiedler (2020). “How do Users Characterise Feedback Features of an Embedded Feedback Channel?” – To be submitted to a Journal.
Chapter 3. F. Fotrousi, S. Fricker, M. Fiedler (2018). “The Effect of Requests for User Feedback on Quality of Experience”, Software Quality Journal, 26(2), 385-415. DOI: 10.1007/s11219-017-9373-7.
Chapter 4. F. Fotrousi, S. Fricker, M. Fiedler (2014). “KPIs in Software Ecosystem: A Systematic Mapping Study”, 5th International Conference on the Software Business (ICSOB), Paphos, Cyprus: Springer, pp 194-211. DOI: 10.1007/978-3-319-08738-2.
Chapter 5. F. Fotrousi, S. Fricker (2016). “Software Analytics for Planning Product Evolution”, 7th International Conference of Software Business (ICSOB), Ljubljana, Slovenia: Springer, pp. 16-31. DOI: 10.1007/978-3-319-40515-5_2.
Chapter 6. F. Fotrousi, S. Fricker, M. Fiedler (2014). “Quality Requirements Elicitation based on Inquiry of Quality-Impact Relationships”, 22nd International Conference on Requirements Engineering (RE), Karlskrona, Sweden: IEEE, pp. 303-312. DOI: 10.1109/RE.2014.6912272.
Chapter 7. M. Oriol, M. Stade, F. Fotrousi, S. Nadal, J. Varga, N. Seyff, A. Abello, X. Franch, J. Marco, O. Schmidt (2018). “FAME: Supporting Continuous Requirements Elicitation by Combining User Feedback and Monitoring”, 26th International Conference on Requirements Engineering (RE), Banff, Canada: IEEE, pp. 217-227. DOI: 10.1109/RE.2018.00030.
Chapter 8. F. Fotrousi, S. Fricker, D. Wüest (2020). “A Method for Evidence Elicitation from Software in Use to Support Software Evolution” – To be submitted to a Journal.
1. Contribution to deliverables of FI-STAR European project:
D6.2: Common test platform (confidential)
D6.4: Validated services at experimentation sites (confidential)
2. Contribution to deliverables of SUPERSEDE European project:
D1.2: Direct multi-modal feedback gathering techniques, V1 (link)
D1.4: Comprehensive monitoring techniques, v1 (link)
1. F. Fotrousi, S. Fricker (2016). “QoE probe: A requirement-monitoring tool”, 22nd International Working Conference on Requirements Engineering: Foundation for Software Quality (REFSQ), Gothenburg, Sweden
2. F. Fotrousi, N. Seyff, J. Börstler (2017). “Ethical Considerations in Research on User Feedback”, Second Workshop on Crowd-Based Requirements Engineering (CrowdRE), Lisbon, Portugal.
3. D. Wüest, F. Fotrousi, S. A. Fricker (2019). “Combining Monitoring and Autonomous Feedback Requests to Elicit Actionable Knowledge of System Use”, 25th International Working Conference on Requirements Engineering: Foundation for Software Quality (REFSQ), Essen, Germany.
4. M. Stade, F. Fotrousi, N. Seyff, and O. Albrecht (2017). “Feedback gathering from an industrial point of view”, 25th International IEEE Conference on Requirements Engineering (RE’17), Lisbon, Portugal.
5. S. Fricker, K. Schneider, F. Fotrousi, C. Thuemmler (2016). “Workshop videos for requirements communication”, Requirements Engineering Journal, 21(4), 521-552.
6. M. Stade, M. Oriol, O. Cabrera, F. Fotrousi, R. Schaniel, N. Seyff, O. Schmidt (2017). “Providing a user forum is not enough: first experiences of a software company with CrowdRE”, Second Workshop on Crowd-Based Requirements Engineering (CrowdRE), Lisbon, Portugal.
7. N. Seyff, M. Stade, F. Fotrousi, M. Glinz, E. Guzman, M. Kolpondinos-Huber, R. Schaniel (2017). “End-user Driven Feedback Prioritization”, REFSQ Workshops, Essen, Germany.
8. F. Fotrousi, K. Izadyan, and S. A. Fricker (2013). “Analytics for Product Planning: In-depth Interview Study with SaaS Product Managers.” Sixth International IEEE Conference on Cloud Computing (CLOUD), Santa Clara Marriott, CA, USA.
9. F. Fotrousi (2016). “Quality-Impact Assessment of Software Systems”. In Ph.D. Symposium of the 24th IEEE International Requirements Engineering Conference (RE), Beijing, China.
10. S. Fricker, F. Fotrousi, M. Fiedler, P. Cousin (2013). "Quality of Experience Assessment based on Analytics", 2nd European Teletraffic Seminar (ETS), Karlskrona, Sweden.
11. J. Molleri, I. Nurdiani, F. Fotrousi, K. Petersen (2019). “Experiences on studying Attention through EEG in the Context of Review Tasks”, Evaluation and Assessment in Software Engineering Conference (EASE), Copenhagen, Denmark.