Farnaz is a Software Engineer and researcher. She is interested in applied research, combining more than twelve years of industrial experience in software companies with her research knowledge. Her main research interests are user feedback for modelling product success, requirements engineering, continuous software evolution, software product management, and software ethics.
Farnaz is an assistant professor at Chalmers and Gothenburg University. Her current research is on the applications of conversational AIs utilising LLMs (Large Language Models) in Requirements Engineering and Continuous Evolution of Software Systems.
She has organized several workshops, including ASYDE 2023, CrowdRE 2022, SEthics 2021, and REthics 2020, and was the proceedings co-chair of RE2021. She has served as a reviewer for the journals "Requirements Engineering", "Empirical Software Engineering", and "Journal of Systems and Software", and also on the PC of conferences and workshops, including RE, REFSQ, QUATIC, RCIS, CrowdRE, and IWSPM. She collaborated on three European projects: FI-STAR, SUPERSEDE, and Wise-IoT.
Master of Science in Software Engineering
Blekinge Institute of Technology
Master of Science in Computer Science - Artificial Intelligence & Robotics
Iran University of Science and Technology
Bachelor of Science in Computer Engineering (Software)
Azad University of Tehran (Central Branch)
The research aims to explore end-users' experiences with a software system at runtime and to use the collected evidence to gain insights into the evolution of software systems. The study investigates methods for collecting user feedback and using the data to support system analytics, and vice versa, to support the product team.
User feedback collected at runtime can be used as complementary information to support system analytics, such as performance and usability, that reflect the system's strengths and weaknesses. Understanding users' desires and experiences provides insight for interpreting the system analytics, verifying the implemented features, and validating them against user acceptance, thereby supporting evolving software systems.
Sometimes the collected user feedback is unclear, for example because context information is missing, and the product team needs further communication with the users for clarification. The research problem is therefore how to efficiently collect user feedback that is understandable, unambiguous, and addresses the product team's questions.
The research designs a feedback bot that simulates users' conversations with the product team in an online setting, around the clock (24/7), and adapts the conversations based on system analytics to collect informative feedback. The research investigates applications of this adaptive collection of user feedback for requirements elicitation and the evolution of software systems.
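The following is a minimal sketch of the idea of adapting a feedback conversation to system analytics; all names (AnalyticsSnapshot, FeedbackBot, thresholds) are hypothetical and not taken from the actual bot.

```python
# Minimal sketch (not the actual feedback bot): choose the next question of the
# conversation based on the system analytics observed for the current session.
from dataclasses import dataclass

@dataclass
class AnalyticsSnapshot:
    avg_response_ms: float   # runtime performance for the session (hypothetical)
    error_count: int         # errors observed during the session (hypothetical)
    feature_used: str        # last feature the user interacted with (hypothetical)

class FeedbackBot:
    def next_question(self, analytics: AnalyticsSnapshot) -> str:
        # Adapt the conversation: ask about the aspect the analytics flag.
        if analytics.error_count > 0:
            return (f"We noticed an error while you used '{analytics.feature_used}'. "
                    "What were you trying to do?")
        if analytics.avg_response_ms > 2000:
            return "Did the application feel slow just now? How did that affect your task?"
        return f"How well did '{analytics.feature_used}' support what you wanted to achieve?"

# Usage: raw monitoring data becomes a targeted follow-up question.
bot = FeedbackBot()
print(bot.next_question(AnalyticsSnapshot(avg_response_ms=2500, error_count=0,
                                          feature_used="export report")))
```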
Background: Empirical studies involving human participants need to follow procedures to avoid causing harm to the subjects. However, it is not always clear how researchers should report these procedures.
Aim: This study investigates how researchers report ethical issues in software engineering journal publications, particularly informed consent, confidentiality, and anonymity.
Method: We conducted a literature review to understand the reporting of ethical issues in software engineering journals. In addition, in a workshop, we discussed the importance of reporting the different ethical issues.
Results: The results indicate that 49 out of 95 studies reported some ethical issues. Only six studies discussed all three ethical issues. The subjects were mainly informed about the study purpose and procedure. There are limited discussions on how the subjects were informed about the risks involved in the study. Studies that reported how the authors ensured confidentiality also discussed anonymity in most cases. The results of the workshop discussion indicate that reporting ethical issues is important to improve the reliability of the research results. We propose a checklist based on the literature review, which we validated through a workshop.
Conclusion: The checklist proposed in this paper is a step towards enhancing ethical reporting in software engineering research.
The paper demonstrates a chatbot used for elicitation of contextual information. The chatbot is an open-source application under GNU GPLv3. The project files of the chatbot are accessible at https://github.com/RWolfing/BugBot. A demonstration of the tool can be watched at https://youtu.be/a2gSBaijiY8.
Context. Companies continuously explore their software systems to acquire evidence for software evolution, such as bugs in the system and new functional or quality requirements. So far, managers have made decisions about software evolution based on evidence gathered from interpreting user feedback and monitoring data collected separately from software in use. These evidence-collection processes are usually unmethodical, lack a systematic guide, and have practical issues. This lack of a systematic approach leaves unexploited opportunities for detecting evidence for system evolution. Objective. The main research objective is to improve evidence collection from software in use and guide software practitioners in decision-making about system evolution. Understanding useful approaches to collect user feedback and monitoring data, two important sources of evidence, and combining them are key objectives as well. Method. We proposed a method for gathering evidence from software in use (GESU) using design-science research. We designed the method over three iterations and validated it in the European case studies FI-STAR, SUPERSEDE, and Wise-IoT. To acquire knowledge for the design, we conducted further research using surveys and systematic mapping methods. Results. The results show that GESU is not only successful in industrial environments but also yields new evidence for software evolution by bringing user feedback and monitoring data together. This combination helps software practitioners improve their understanding of end-user needs and system drawbacks, ultimately supporting continuous requirements elicitation and product evolution. GESU suggests monitoring a software system based on its goals to filter relevant data (i.e., goal-driven monitoring) and gathering user feedback when the system requests feedback about the software in use (i.e., system-triggered user feedback). The system identifies interesting situations of system use and issues automated requests for user feedback to interpret the evidence from user perspectives. We justified using goal-driven monitoring and system-triggered user feedback with complementary findings of the thesis, which showed that the goals and characteristics of software systems constrain the monitoring data. We thus narrowed the monitoring and observational focus to data aligned with the goals instead of a massive amount of potentially useless data. Finally, we found that requesting feedback from users with a simple feedback form is a useful approach for motivating users to provide feedback. Conclusion. Combining user feedback and monitoring data is helpful to acquire insights into the success of a software system and guide decision-making regarding its evolution. This work can be extended in the future by implementing an adaptive system for gathering evidence from combined user feedback and monitoring data.
[Context and motivation] To validate developers’ ideas of what users might want and to understand user needs, it has been proposed to collect and combine system monitoring with user feedback. [Question/problem] So far, the monitoring data and feedback have been collected passively, hoping for the users to get active when problems emerge. This approach leaves unexplored opportunities for system improvement when users are also passive or do not know that they are invited to offer feedback. [Principal ideas/results] In this paper, we show how we have used goal monitors to identify interesting situations of system use and let a system autonomously elicit user feedback in these situations. We have used a monitor to detect interesting situations in the use of a system and issued automated requests for user feedback to interpret the monitoring observations from the users’ perspectives. [Contribution] The paper describes the implementation of our approach in a Smart City system and reports our results and experiences. It shows that combining system monitoring with proactive, autonomous feedback collection was useful and surfaced knowledge of system use that was relevant for system maintenance and evolution. The results were helpful for the city to adapt and improve the Smart City application and to maintain their internet-of-things deployment of sensors.
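As a rough illustration of goal-driven monitoring with system-triggered user feedback, the sketch below defines goals as predicates over monitoring events and issues a feedback prompt whenever an event makes a goal "interesting". The goal definitions, event fields, and thresholds are invented for illustration and do not come from the Smart City implementation.

```python
# Illustrative sketch, assuming hypothetical goals and monitoring events:
# a goal monitor flags interesting situations of system use and triggers an
# automated feedback request so users can interpret the observation.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Goal:
    name: str
    # Predicate over a monitoring event; True means the situation is interesting.
    is_interesting: Callable[[dict], bool]

def monitor(event: dict, goals: list[Goal]) -> list[str]:
    """Return feedback prompts for every goal the event makes interesting."""
    prompts = []
    for goal in goals:
        if goal.is_interesting(event):
            prompts.append(f"[{goal.name}] We observed {event}. "
                           "Could you tell us what happened from your side?")
    return prompts

# Example: a made-up sensor-liveness goal that flags long gaps between readings.
goals = [Goal("sensor-liveness",
              lambda e: e.get("seconds_since_last_reading", 0) > 3600)]
for prompt in monitor({"sensor": "parking-07", "seconds_since_last_reading": 5400}, goals):
    print(prompt)
```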
Context: Software evolution ensures that software systems in use stay up to date and provide value for end-users. However, it is challenging for requirements engineers to continuously elicit needs for systems used by heterogeneous end-users who are out of organisational reach. Objective: We aim at supporting continuous requirements elicitation by combining user feedback and usage monitoring. Online feedback mechanisms enable end-users to remotely communicate problems, experiences, and opinions, while monitoring provides valuable information about runtime events. It is argued that bringing both information sources together can help requirements engineers to understand end-user needs better. Method/Tool: We present FAME, a framework for the combined and simultaneous collection of feedback and monitoring data in web and mobile contexts to support continuous requirements elicitation. In addition to a detailed discussion of our technical solution, we present the first evidence that FAME can be successfully introduced in real-world contexts. Therefore, we deployed FAME in a web application of a German small and medium-sized enterprise (SME) to collect user feedback and usage data. Results/Conclusion: Our results suggest that FAME not only can be successfully used in industrial environments but that bringing feedback and monitoring data together helps the SME to improve their understanding of end-user needs, ultimately supporting continuous requirements elicitation.
Companies are interested in knowing how users experience and perceive their products. Quality of Experience (QoE) is a measurement that is used to assess the degree of delight or annoyance in experiencing a software product. To assess QoE, we have used a feedback tool integrated into a software product to ask users about their QoE ratings and to obtain information about their rationales for good or bad QoEs. It is known that requests for feedback may disturb users; however, little is known about the subjective reasoning behind this disturbance or about whether this disturbance negatively affects the QoE of the software product for which the feedback is sought. In this paper, we present a mixed qualitative-quantitative study with 35 subjects that explore the relationship between feedback requests and QoE. The subjects experienced a requirement-modeling mobile product, which was …
Crowd-based requirements engineering (CrowdRE) is promising to derive requirements by gathering and analyzing information from the crowd. Setting up CrowdRE in practice seems challenging, although first solutions to support CrowdRE exist. In this paper, we report on a German software company's experience with crowd involvement by using feedback communication channels and a monitoring solution for user-event data. In our case study, we identified several problem areas that a software company is confronted with to set up an environment for gathering requirements from the crowd. We conclude that a CrowdRE process cannot be implemented ad hoc and that future work is needed to create and analyze a continuous feedback and monitoring data stream.
End-user feedback is becoming more important for the evolution of software systems. There exist various communication channels for end-users (app stores, social networks) which allow them to express their experiences and requirements regarding a software application. End-users communicate a large amount of feedback via these channels, which leads to open issues regarding the use of end-user feedback for software development, maintenance, and evolution. This includes investigating how to identify relevant feedback scattered across different feedback channels and how to determine the priority of the feedback issues communicated. In this research preview paper, we discuss ideas for end-user-driven feedback prioritization.
Collecting and using user feedback as a method to support requirements engineering might undermine user rights. This becomes apparent when looking at related areas, e.g., research in user experience, where collecting user feedback also plays an important role. In such settings, researchers need to ensure that the stakeholders' rights and integrity are respected. This paper identifies and discusses some of the ethical challenges and issues a researcher can face, using an example case. Focusing on user feedback, this case can serve as an example for CrowdRE, i.e., several of our findings might apply to CrowdRE in general. However, further research is needed as our work mainly reflects the challenges experienced by the authors of this paper.
Feedback communication channels allow end-users to express their needs, which can be considered in software development and evolution. Although feedback gathering and analysis have been identified as an important topic and several researchers have started their investigation, information is scarce on how software companies currently elicit end-user feedback. In this study, we explore the experiences of software companies with respect to feedback gathering. The results of a case study and online survey indicate two sides of the same coin: on the one hand, most software companies are aware of the relevance of end-user feedback for software evolution and provide feedback channels, which allow end-users to communicate their needs and problems. On the other hand, the quantity and quality of the feedback received varies. We conclude that software companies still do not fully exploit …
Evolution of a software product is inevitable as the product context changes, and the product gradually becomes less useful if it is not adapted. Planning is a basis for evolving a software product. The product manager, who carries the responsibilities of planning, requires but does not always have access to high-quality information for making the best possible planning decisions. The current study aims to understand whether and when analytics are valuable for product planning and how they can be translated into a software product plan. The study was designed with an interview-based survey methodology through 17 in-depth semi-structured interviews with product managers. Based on the results from the qualitative analysis of the interviews, we defined an analytics-based model. The model shows that analytics have the potential to support the interpretation of product goals while being constrained by both product characteristics and product goals. The model indicates how to use analytics to provide good support for product planning and evolution.
Quality requirements, an important class of non-functional requirements, are inherently difficult to elicit. Particularly challenging is the definition of good-enough quality. The problem cannot be avoided though, because hitting the right quality level is critical. Too little quality leads to churn for the software product. Excessive quality generates unnecessary cost and drains the resources of the operating platform. To address this problem, we propose to elicit the specific relationships between software quality characteristics and the impacts of potential quality levels. An understanding of each such relationship can then be used to specify the right quality level by deciding about acceptable impacts. This paper describes an approach to elicit such quality-impact relationships and use them for specifying quality requirements. The approach has been applied with user representatives in requirements workshops and used for determining Quality of Service (QoS) requirements based on the involved users' Quality of Experience (QoE). The paper describes the approach in detail and reports experiences from applying the approach in software projects.
Shared understanding of requirements between stakeholders and the development team is a critical success factor for requirements engineering. Workshops are an effective means for achieving such shared understanding. Stakeholders and team representatives can meet and discuss what a planned software system should be and how it should support achieving stakeholder goals. However, some important intended recipients of the requirements are often not present in such workshops: the developers. Thus, they cannot benefit from the in-depth understanding of the requirements and of the rationales for these requirements that develops during the workshops. The simple handover of a requirements specification hardly compensates for the rich requirements understanding that is needed for the development of an acceptable system. To compensate for the lack of presence in a requirements workshop, we propose to record the requirements workshop on video. If workshop participants agree to be recorded, a video is relatively simple to create and can capture many more aspects of requirements and rationales than a specification document. This paper presents the workshop video technique and a phenomenological evaluation of its use for requirements communication from the perspective of software developers. The results show how the technique was appreciated by observers of the video, present positive and negative feedback from the observers, and lead to recommendations for implementing the technique in practice.
To create value with a software ecosystem (SECO), a platform owner has to ensure that the SECO is healthy and sustainable. Key Performance Indicators (KPI) are used to assess whether and how well such objectives are met and what the platform owner can do to improve. This paper gives an overview of existing research on KPI-based SECO assessment using a systematic mapping of research publications. The study identified 34 relevant publications for which KPI research and KPI practice were extracted and mapped. It describes the strengths and gaps of the research published so far, and describes what KPI are measured, analyzed, and used for decision-making from the researcher's point of view. For the researcher, the maps thus capture the state of knowledge and can be used to plan further research. For practitioners, the generated map points to studies that describe how to use KPI for managing a SECO.
This work, which is connected to the Future Internet Public Private Partnership (FI-PPP) Integrated Project FI-STAR, presents a validation approach for Future Internet applications based on the use of analytics. In particular, it discusses how to use and combine software use and health statistics for the assessment of user-perceived Quality of Experience, in order to monitor user satisfaction, the risk of user churn, and the status of the corresponding ecosystem.
SaaS cloud computing, in contrast to packaged products, enables permanent contact between users of a software product and the product-owning company. When planning the development and evolution of a software product, a product manager depends on reliable information about feature attractiveness. So far, planning decisions were based on stakeholder opinion and the customer's willingness to buy. Whether or not a feature is actually used was out of consideration. Analytics that measure the interaction between users and the SaaS give product managers unprecedented access to information about product usage. To understand whether and how SaaS analytics can be used for product planning decisions, we performed 17 in-depth interviews with experienced managers of SaaS products and analyzed the results with a mixed-method strategy. The empirical results characterize the relevance of a broad range of analytics for product planning decisions, and the strengths and limitations of an analytics-based product planning approach.
Context. Successful software product management is about developing the right software products for the right markets at the right time. The product manager, who carries the responsibilities of planning, requires but does not always have access to high-quality information for making the best possible planning decisions. This master thesis concentrates on proposing a solution that supports the planning of a software product by means of analytics.
Objectives. The aim of the master thesis is to understand the potential of analytics in product planning decisions in a SaaS context. The thesis focuses on SaaS-based analytics used for portfolio management, product roadmapping, and release planning, and specifies how these analytics can be utilized for planning a software product. The study then devises an analytics-based method to enable software product planning.
Methods. The current study was designed with a mixed-methodology approach, which includes a literature review, survey research, and a case study under the framework of design science. The literature review was conducted to identify product planning decisions and the measurements that support them. A total of 17 interview-based surveys were conducted to investigate the impact of analytics on product planning decisions in a product roadmapping context. The results of the interviews led to an analytics-based planning method developed under the framework of design science. The designed analytics-based method was validated through a case study in order to measure the effectiveness of the solution.
Results. The identified product planning decisions were summarized and categorized into a taxonomy of decisions divided into portfolio management, roadmapping, and release planning. The identified SaaS-based measurements were categorized into six categories, forming a taxonomy of measurements. The results of the survey illustrated that the importance of the measurement categories does not differ much across planning decisions. In the interviews, 61.8% of interviewees selected "very important" for the "Product" category, 58.8% for "Feature", and 64.7% for "Product healthiness". For the "Referral sources" category, 61.8% of responses were rated "not important". The categories "Technologies and Channels" and "Usage Pattern" were mostly rated "important", by 47.1% and 32.4% of the corresponding responses. The results also showed that product use, feature use, users of feature use, response time, product errors, and downtime are the top measurement attributes that a product manager prefers to use for product planning. Qualitative results identified product specification, product maturity, and goal as the factors affecting the importance of analytics for product planning, and in parallel specified strengths and weaknesses of analytical planning from the product managers' perspectives. An analytics-based product planning method was developed with eleven main process steps, using the measurements and measurement scores resulting from the interviews, and was finally validated in a case study. The method can support all three areas of product planning (portfolio management, roadmapping, and release planning); however, it was validated only for roadmapping decisions in the current study. SaaS-based analytics are enablers for the method, but there might be other analytics that can assist in taking planning decisions as well.
Conclusion. The results of the interviews on roadmapping decisions indicated that different planning decisions assign similar importance to the measurement categories when planning a software product. Statistics about feature use, product use, response time, users, errors, and downtime were recognized as the most important measurements for planning. Analytics increase knowledge about product usability and functionality and can also help improve problem handling and client-side technologies. However, analytics have limitations regarding form-based customer feedback, handling development technologies, and interpreting some measurements in practice. Immature products are not able to use analytics. To create, remove, or enhance a feature, the data trend provides a broad view of feature desirability at the current or even a future time and clarifies how these changes can impact decision making. Prioritizing features can be performed for features in the same context by comparing their measurement impacts. The analytics-based method covers both reactive and proactive planning.
This paper presents an application of a genetic algorithm to the problem of finding a specific layout of objects in addition to classifying the layout. In other words, it combines the optimization capabilities of a genetic algorithm with the classification capability of the k-nearest-neighbours algorithm in layout analysis. We try to classify the layout in order to find the most appropriate layout (in terms of profitability). The paper focuses on the representation issues of the problem and on the design of the operators.
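To make the combination concrete, the sketch below shows one plausible way to couple the two algorithms: a genetic algorithm searches over candidate layouts while a k-nearest-neighbours classifier, trained on labelled example layouts, serves as the fitness function. The encoding, operators, and training data are invented for illustration and are not the paper's actual representation.

```python
# Illustrative sketch only: GA optimization with kNN classification as fitness.
import random
from collections import Counter

def knn_predict(x, train, k=3):
    """Classify layout vector x by majority vote of its k nearest neighbours."""
    dist = lambda a, b: sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    nearest = sorted(train, key=lambda t: dist(t[0], x))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

def fitness(layout, train):
    # Fitness = 1 if the kNN classifier labels the candidate layout as profitable.
    return 1.0 if knn_predict(layout, train) == "profitable" else 0.0

def evolve(train, length=6, pop_size=20, generations=30):
    pop = [[random.randint(0, 3) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda ind: fitness(ind, train), reverse=True)
        parents = pop[: pop_size // 2]                  # selection of the fittest half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, length)           # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.1:                   # mutation
                child[random.randrange(length)] = random.randint(0, 3)
            children.append(child)
        pop = parents + children
    return max(pop, key=lambda ind: fitness(ind, train))

# Hypothetical training layouts labelled by profitability.
train = [([0, 1, 2, 3, 0, 1], "profitable"), ([3, 3, 3, 0, 0, 0], "unprofitable"),
         ([0, 1, 2, 2, 1, 0], "profitable"), ([3, 2, 3, 2, 3, 2], "unprofitable")]
print(evolve(train))
```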
Machine translation is an applicable and developing topic for utilizing artificial intelligence techniques. There are automatic methods for evaluating machine translation results. These methods calculate scores that measure the desirability of a translation by comparing a candidate translation with a reference translated by a human. However, these evaluation methods have not yet reached an accepted level of satisfaction. Also, the parameters that humans use in their translations have not been incorporated in automatic machine translation evaluation. In this study, we apply a learning model using SVM classification that can evaluate the translated sentences for fluency and adequacy. These parameters (fluency and adequacy) are incorporated in the automatic machine translation evaluation. Results demonstrate that the proposed model improves the previous automatic MT evaluation methods at the sentence level.
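The following is a rough sketch of the general idea using scikit-learn: an SVM trained on simple surface features of candidate/reference pairs, labelled by human judgments. The features and training examples are made up for illustration and are not the paper's feature set.

```python
# Rough sketch, assuming hypothetical features and labels: SVM classification of
# candidate translations against a human reference.
from sklearn.svm import SVC

def features(candidate: str, reference: str) -> list[float]:
    cand, ref = candidate.split(), reference.split()
    overlap = len(set(cand) & set(ref)) / max(len(set(ref)), 1)  # unigram overlap
    length_ratio = len(cand) / max(len(ref), 1)                  # length-penalty proxy
    return [overlap, length_ratio]

# Hypothetical training pairs labelled by human judges (1 = adequate, 0 = not).
X = [features("the cat sat on the mat", "the cat sat on the mat"),
     features("cat the mat on", "the cat sat on the mat"),
     features("a dog runs fast", "the dog runs quickly"),
     features("completely unrelated words here", "the dog runs quickly")]
y = [1, 0, 1, 0]

model = SVC(kernel="rbf").fit(X, y)
print(model.predict([features("the cat is on the mat", "the cat sat on the mat")]))
```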
ASYDE 2023 (5th International Workshop on Automated and verifiable Software sYstem Development), co-organized with ASE 2023 - Workshop co-chair
RE 2021 (29th IEEE International Conference on Requirements Engineering) – Proceedings co-chair
CrowdRE 2022 (6th International Workshop on Crowd-based Requirements Engineering), co-organized with RE 2022 - Workshop co-chair
SEthics 2021 (2nd International Workshop on Ethics in Software Engineering Research and Practice), co-organized with ICSE 2021 – Workshop chair
REthics 2020 (1st International Workshop on Ethics in Requirements Engineering Research and Practice), co-organized with RE 2020, Workshop co-chair
IWSPM 2016 (9th International Workshop on Software Product Management), co-organized with RE 2016 - Workshop co-chair
RE 2021 (29th IEEE International Conference on Requirements Engineering) – Track: Tool and poster
RE 2020 (28th IEEE International Conference on Requirements Engineering) – Track: Tool and poster
QUATIC 2023 (16th International Conference on the Quality of Information and Communications Technology) - Track: Quality Aspects of Human-Factors in Software Engineering
QUATIC 2022 (15th International Conference on the Quality of Information and Communications Technology) - Track: Quality Aspects in Software Product Management and Requirements Engineering
REFSQ 2022 (28th International Working Conference on Requirements Engineering: Foundation for Software Quality) – Track: Tool and poster
REFSQ 2021 (27th International Working Conference on Requirements Engineering: Foundation for Software Quality) – Track: Tool and poster
RCIS 2021 (15th International Conference on Research Challenges in Information Science) – research project track
REFRAME 2023 (1st International workshop on Requirements Engineering framework)
CrowdRE 2021 (5th International workshop on Crowd-based Requirements Engineering)
IWSPM 2018 (12th International workshop on Software Product Management)
Requirements Engineering Journal, 2020
Journal of Systems and Software, 2020-2022
Empirical Software Engineering Journal, 2022-2023
I would be happy to talk to you if you need my assistance or support in your research, or if you would like to share your knowledge and experiences with me.
Department: Computer Science and Engineering
Room: 460
Address: University of Gothenburg, Jupiter building, Hörselgången 5, 417 56, Göteborg, Sweden