Specifying the Process Model for Systematic Reviews: An Augmented Proposal

Context: A Systematic Literature Review (SLR) is a research methodology intended to obtain evidence from scientific articles stored in digital libraries. SLRs can be performed on primary and secondary studies. Although there are guidelines for the SLR process in Software Engineering, the SLR process is not yet fully and rigorously specified. Moreover, a lack of a clear separation of concerns between what to do (process) and how to do it (methods) can often be observed. Objective: To specify the SLR process in a more detailed and rigorous manner by considering different process modeling perspectives, such as the functional, behavioral, organizational and informational ones. The main objective of this work is to specify the SLR activities rather than their methods. Method: The SPEM (Software & Systems Process Engineering Metamodel) language is used to model the SLR process from different perspectives. In addition, we illustrate aspects of the proposed process by using a recently conducted SLR on software testing ontologies. Results: Our SLR process model specifications favor a clear identification of which tasks/activities should be performed, in which order, by whom, and which artifacts are consumed and produced, as well as their inner structures. Also, we explicitly specify activities related to the SLR pilot test, analyzing the gains. Conclusion: The proposed SLR process considers with greater rigor the principles and benefits of process modeling, backing SLRs to be more systematic, repeatable and auditable for researchers and practitioners. In fact, the rigor provided by process modeling, in which several perspectives are combined but can also be independently detached, provides greater expressiveness in sequences and decision flows, while representing different levels of granularity in the work definitions, such as activity, sub-activity and task.


Introduction
A Systematic Literature Review (SLR) aims at providing exhaustive evidence from the relevant literature for a set of research questions. Initially, SLRs were conducted in Clinical Medicine (Rosenberg et al. 1996). Since Kitchenham issued a technical report about SLRs in 2004 (Kitchenham 2004), the use of SLRs in different scientific communities of Software Engineering (SE) has become more and more frequent for gathering evidence, mainly from primary studies and, to a lesser extent, from secondary studies. The output document yielded when applying the SLR process to primary studies is called a secondary study, while the one yielded when applying it to secondary studies is called a tertiary study. To quote just a few examples, the authors in Sepúlveda et al. (2016), Tahir et al. (2016), and Torrecilla-Salinas et al. (2016) document secondary studies on diverse topics in SE, while the authors in Garousi & Mäntylä (2016) and Kitchenham et al. (2010b) report tertiary studies. Very often researchers have reused the procedures and guidelines proposed in Kitchenham (2004), which were first reviewed by Biolchini et al. (2005), and later updated by Kitchenham and her colleagues in 2007 (Brereton et al. 2007, Kitchenham & Charters 2007). More recently, by conducting a SLR, Kitchenham & Brereton (2013) evaluated and synthesized studies published by SE researchers (including different types of studies, not only primary ones) that discuss their experiences in conducting SLRs and their proposals to improve the SLR process.
Even though SLRs have become an established methodology in SE research, and there exist guidelines that help researchers to conduct a SLR, the SLR process itself is not yet fully and rigorously specified. Figure 1 depicts the process specification made by Brereton, Kitchenham et al., which has been totally adopted or slightly adapted by the rest of the SE community up to the present time. This process specification shows 'what' to do through its phases and steps and in which order, or, in other words, through its processes, activities and tasks as per Becker et al. (2015).
However, the process in Figure 1 can be improved if we take into account the principles of process modeling proposed by Curtis et al. (1992), and used for instance in Becker et al. (2012). Curtis et al. describe four perspectives (views) for modeling a process:
- Functional: describes what activities should be carried out and what flow of artifacts (e.g., documents) is necessary to perform the activities and tasks;
- Behavioral: specifies when activities should be executed, including therefore the identification of sequences, parallelisms, iterations, etc.;
- Organizational: aims at showing where and which agents (in compliance with roles) are involved in carrying out the activities; and,
- Informational: focuses on the structure of the artifacts produced or consumed by activities, on their interrelations, etc.
Therefore, a full process specification considering different perspectives contributes to a clearer identification of which tasks/activities should be performed, in which order, by whom, and which artifacts are consumed and produced, as well as their inner structure.
In addition to these four views, a methodological perspective is defined in Olsina (1998), which specifies in particular which constructors (i.e., methods) are assigned to the activity descriptions. However, we observe that there is often a lack of a clear separation of concerns between what to do (process) and how to do it (methods). Consequently, methods are sometimes included as activities in the process, such as in Garousi & Mäntylä (2016), as we discuss later on.
Some benefits of using process modeling to strengthen process specifications in general, and the SLR process in particular, are:
- To facilitate understanding and communication, which implies that the process model (with the richness that graphic representations provide) should be understandable for the target community;
- To support process improvement, since all the fundamental perspectives of the process model are identified, which benefits reuse and the evaluation of impacts in the face of potential changes in the process;
- To support process management, that is, the planning, scheduling, and monitoring and control activities;
- To allow process automation, which can help to provide supporting tools and to improve performance;
- To favor the verification and validation of the process, thus fostering consistency, repeatability and auditability in projects.
Additionally, in large-scale studies like a SLR, a pilot or small-scale trial often precedes the main study in order to analyze the validity of its design. Therefore, it is very useful for researchers to conduct a SLR pilot to test whether aspects of the SLR design (such as the search string, selection criteria and data extraction form) are suitable. However, we observe that activities related to the pilot test are not explicitly specified in current SLR process models, as happens in the process representation of Figure 1.
It is important to remark that the present paper is a significantly extended version of Tebes et al. (2019a). In this work, we include the organizational perspective for the SLR process, and new models from the functional and behavioral perspectives for some activities. Furthermore, in Tebes et al. (2019a) the study on software testing ontologies is illustrated just for the SLR pilot test. Here, we use the same study to fully illustrate the main artifacts produced throughout the SLR process. On the other hand, this work differs from Tebes et al. (2019b), whose focus is mainly on the analysis of the retrieved software testing ontologies rather than on the process perspectives, as we do in the next sections.
Summarizing, the main contribution of this work is to augment the existing SLR process specifications, considering the principles and benefits of process modeling such as those described above. To this aim, we use the functional, behavioral, informational and organizational perspectives. Furthermore, we specify activities related to the SLR pilot test, which are often neglected in other SLR process specifications. As a result, SLRs can be more systematic, repeatable and auditable for researchers and practitioners. It is worth noting that in the current work, regarding the quoted benefits of process modeling, our SLR process specifications aim primarily at facilitating understanding and communication, as well as at supporting process improvement and process management. However, a thorough discussion and a detailed illustration of process modeling for fully supporting SLR process automation are out of the scope of this article. This paper is structured as follows: Section 2 addresses related work. Section 3 specifies the proposed SLR process considering different process modeling perspectives. Section 4 illustrates a practical case applied to software testing ontologies from the standpoint of the process modeling perspectives. Section 5 discusses some benefits of our SLR process. Finally, Section 6 presents our conclusions and outlines future work.

Motivation and Related Work
One motivation for modeling the SLR process arose from certain difficulties that we (all the authors of this paper) faced when carrying out a SLR pilot study about software testing ontologies (Tebes et al. 2018, Tebes et al. 2019a). The general objective of this pilot study was to refine and improve aspects of the protocol design such as the research questions, search protocol, selection and quality criteria, and/or data extraction forms. Analyzing several works about SLRs, we have observed at least three main issues. First, activities related to the SLR pilot test are often omitted or not explicitly specified. Second, some aspects of the existing SLR processes are weakly specified from the standpoint of the process modeling perspectives. Third, there is often a lack of a clear separation of concerns between what to do (process) and how to do it (methods). Next, we comment on related work on SLR process specification in which these issues were detected.
The first graphic representation of the SLR process proposed by Kitchenham (Brereton et al. 2007) was outlined in 2007, taking into account previous works by the same authors (Kitchenham 2004) and other contributions such as Biolchini et al. (2005). It has been totally adopted or slightly adapted by the rest of the SE community up to the present moment. Most of the works divide the process into three phases or stages: Plan Review, Conduct Review and Document Review. While at the phase level the same three main activities are generally preserved (for example, in Sepúlveda et al. (2016), Tahir et al. (2016), and Torrecilla-Salinas et al. (2016), to quote just a few works), at the step level (sub-activities and tasks) they differ to some extent from each other. For example, in Tahir et al. (2016) three steps are modeled for phase 1: 1) Necessity of SLR; 2) Research questions formation; and 3) Review protocol formation. Note that these steps differ from those shown in Figure 1. Moreover, in Sepúlveda et al. (2016) five steps are presented for phase 1: 1) Goal and need of SLR; 2) Define research questions; 3) Define search string; 4) Define inclusion and exclusion criteria; and 5) Protocol Validation. The same lack of consensus in naming and including steps is observed in the abovementioned works for phase 2. Furthermore, in these works just a behavioral perspective is used to specify the process, so inputs, outputs and roles are not considered in these process models.
Although the SLR pilot test activity is usually neglected, in Sepúlveda et al. (2016) the "Pilot selection and extraction" step is included in phase 2. Nevertheless, this pilot selection and extraction step does not iterate into, or feed back to, phase 1, which may help to improve SLR design aspects, as we model in our proposed process specification in Figure 2.
In Garousi & Mäntylä (2016) and Irshad et al. (2018), we observe other adaptations or variations of the process documented in Brereton et al. (2007). In Irshad et al. (2018) the use of two methods called backward and forward snowballing is emphasized, while in Garousi & Mäntylä (2016) the snowballing activity is included in the systematic review process. Table 1 summarizes the analyzed features of the SLR processes considered in this Related Work section.
On the other hand, it is important to remark that while SLRs focus on gathering and summarizing evidence from primary or secondary studies, systematic mapping (SM) studies are used to structure (categorize) a research area. According to Marshall & Brereton (2013), a SM is a more 'open' form of SLR, which is often used to provide an overview of a research area by assessing the quantity of evidence that exists on a particular topic.
In Petersen et al. (2015), the authors performed a SM study of systematic maps to identify how the SM process is conducted and to identify potential improvements in conducting it. Although there are differences between SLRs and SMs regarding the aim of the research questions, search process, search strategy requirements, quality evaluation and results (Kitchenham et al. 2010a), the process followed in Petersen et al. (2015) is the same as that used for SLRs.
Therefore, we can envision that our proposed process can be used for both SLR and SM studies. What can differ is the use of different methods and techniques for some activities and tasks, mainly for the analysis since, as mentioned above, the aim and scope of the two are not the same, as also analyzed in the Napoleão et al. (2017) tertiary study.
In summary, as an underlying hypothesis, the existing gap in the lack of standardization of the SLR and SM processes currently used by the scientific communities can be minimized if we consider more appropriately the principles and benefits of process modeling enumerated in the Introduction section.
* The specification of the informational perspective is represented by a text-based work-product breakdown structure. ** There is no graphical representation of the followed SLR process. However, the authors adopt the Kitchenham & Charters (2007) process and the Wohlin (2014) guidelines for snowballing.

Augmented Specification of the SLR Process
Considering that there is no general consensus yet on the terminology used in the process domain, we introduce the meaning of some terms used in this work and then focus on the SLR process specification. Note that the terms considered below (highlighted in italics) are taken from the Process Core Ontology (ProcessCO) documented in Becker et al. (2015).
In this work, a process is composed of activities. In turn, an activity can be decomposed into tasks and/or into activities of a lower level of granularity called sub-activities. A task is considered an atomic element (i.e., it cannot be decomposed). Besides, process, activity and task are considered work (entity) definitions, which indicate 'what' to do. Every work definition (process/activity/task) consumes, and modifies and/or produces, work products. A particular work product type is the artifact (e.g., diagrams, documents, among others). Additionally, methods are resources that indicate 'how' to carry out the description of a work definition. In ProcessCO, many methods may be applicable to one work description. Lastly, an agent is a performer assigned to a work definition in compliance with a role. In turn, the role term is defined as a set of skills (abilities, competencies and responsibilities) that an agent ought to possess in order to perform a work definition.
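The terminology above can be sketched as a minimal object model. The class and attribute names below are our own illustrative choices, not part of ProcessCO; Python is used here only as a convenient notation:

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative sketch of the ProcessCO terms used in this work.
# All class and attribute names are assumptions made for this example.

@dataclass
class Artifact:
    """A work product consumed, modified and/or produced by work definitions."""
    name: str

@dataclass
class Role:
    """A set of skills an agent ought to possess to perform a work definition."""
    name: str
    skills: List[str] = field(default_factory=list)

@dataclass
class Task:
    """An atomic work definition: it cannot be decomposed further."""
    name: str
    consumes: List[Artifact] = field(default_factory=list)
    produces: List[Artifact] = field(default_factory=list)

@dataclass
class Activity:
    """A work definition that may group tasks and lower-level sub-activities."""
    name: str
    tasks: List[Task] = field(default_factory=list)
    sub_activities: List["Activity"] = field(default_factory=list)

@dataclass
class Process:
    """A process is composed of activities."""
    name: str
    activities: List[Activity] = field(default_factory=list)

# Usage: part of A1 expressed in this toy model.
a1 = Activity("Design Review", tasks=[Task("Specify Research Questions")])
slr = Process("SLR", activities=[a1])
```

The sketch simply mirrors the composition hierarchy (process > activity > sub-activity/task) and the consumed/produced artifact relations described in the text.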
Regarding the main aim of this section, Figure 2 illustrates the proposed SLR process from the behavioral perspective using SPEM (OMG 2008). There are several process modeling languages, such as BPMN (OMG 2011), SPEM and the UML Activity Diagram (OMG 2017), which are the most popular in academia and industry. Their notations are very similar considering different desirable features such as expressiveness (i.e., the amount of supported workflow patterns) and understandability, among others (Portela et al. 2012, Russel et al. 2006, White 2004). From the functional, behavioral and organizational perspectives, SPEM, UML and BPMN are all suitable modeling languages. However, for the informational perspective, BPMN is not a suitable language. SPEM allows the use of both the BPMN Business Process Diagram and the UML Activity Diagram, among other diagrams like the UML Class Diagram, for specifying all process perspectives. As seen in Figure 2, our proposed process, like the original process (Brereton et al. 2007), has three main activities: (A1) Design Review, (A2) Implement Review and (A3) Analyze and Document Review. In turn, these activities group sub-activities and tasks. Note that for the Design Search Protocol sub-activity the included tasks are shown as well, while they are not for the rest of the A1 sub-activities. This is done intentionally to communicate that sub-activities have tasks while not giving all the details, in order to preserve the legibility of the diagram.
As the reader can notice, our process specification has more details than other currently used models for SLRs from the behavioral perspective standpoint. For example, we introduce decision nodes (diamonds in Figure 2) to represent iterations (e.g., between Validate SLR Design and Improve SLR Design in A1) and to convey that some activities/tasks might not be performed (e.g., the Perform SLR Pilot Study sub-activity in A2 is optional). Consequently, our process helps to indicate explicitly to researchers and practitioners that the SLR process is not totally sequential.
It is worth mentioning that Figure 2 shows a recommended flow for the SLR process. In other words, it represents the "SLR-to-be" rather than the "SLR-as-is" process. We are aware that in a process instantiation there might be some variation points, including the parallelization of some tasks, and so on, as we discuss to some extent later on.
Furthermore, aimed at enriching the process specification, Figure 3 considers the functional perspective jointly with the behavioral one. Therefore, throughout the entire process, we can see the flow of activities and the work products consumed and produced in each activity/task. The functional perspective is very important for checking which documents are needed to perform a task and which documents should be produced, thus also serving verification purposes. Unfortunately, the functional perspective is often neglected in current SLR process proposals.
Considering that a SLR is a very time-consuming endeavor that can hardly be faced by just one person, usually several researchers are involved, playing different roles. The SLRs with the highest quality should have input from experts in the subject being reviewed, in the different methods for search and retrieval, and in qualitative and quantitative analysis methods, among many other aspects. Therefore, the organizational perspective can be used to show the different roles involved in a SLR process, as represented in Figure 4.
Among these roles, A1 includes the SLR Designer (or Research Librarian), whose agent should develop comprehensive search strategies and identify appropriate libraries. Also, this role in conjunction with the Analysis Expert Designer is needed for the design of the data extraction form as well as for the definition of potential analysis methods. A1 also involves the SLR Validator role, whose agent should have expertise in conducting systematic reviews, and the Domain Expert role, which should be played by an agent aimed at validating the protocol and clarifying issues related to the topic under investigation. Table 2 describes the responsibilities and/or capabilities required by the different roles. Note that an agent can play different roles and, in turn, a role can be played by one or more agents (or even by a team). For example, in a given SLR the Data Collector role is frequently played by several researchers, since the Extract Data from a Sample of Documents and Extract Data from all Documents sub-activities are very time consuming and require a lot of effort.
In the following sub-sections, the three main activities are described in terms of their sub-activities and tasks, sequences, inputs and outputs, and roles, by considering the functional, behavioral and organizational perspectives. Additionally, to enrich the process specifications, the informational perspective is used in some cases, as illustrated later on.

To Design the Review (A1)
The main objective of the Design Review (A1) activity is to design the SLR protocol. To achieve this, the tasks and activities depicted in the light-blue box in Figure 3 should be performed following the represented flow and the input and output artifacts.
As shown in Figure 3, the first task is Specify Research Questions, which consumes the "SLR Information Need Goal Specification" artifact. This artifact contains the goal purpose and the statement established by the researchers, which guides the review design. Then, from the "Research Questions", the Design Search Protocol activity is carried out.

Table 2. Definitions of roles for the SLR process.

Role: Definition (in terms of Responsibility/Capability)

Analysis Expert Designer: A researcher responsible for identifying the suitable qualitative/quantitative data analysis methods and techniques to be used. The agent that plays this role should also be capable of managing documentation and visualization techniques.

Data Analyzer: A researcher responsible for conducting the data analysis.

Data Collector: A researcher responsible for extracting data from primary or secondary studies.

Domain Expert: A researcher or practitioner with knowledge, skills and expertise in a particular topic or domain of interest.

Expert Communicator: A researcher with rhetoric and oratory skills who communicates the SLR results to an intended community/audience.

SLR Designer: A researcher with knowledge and skills for designing and specifying SLR protocols.

SLR Expert Researcher: A researcher with knowledge and expertise in conducting SLR studies.

SLR Performer: A researcher with knowledge and skills for retrieving documents. The agent that plays this role should be an expert in using search engines and document retrieval methods and techniques.

SLR Validator: A researcher with expertise in SLRs for checking the suitability and validity of a SLR design.
The Design Search Protocol activity includes the Specify Search String and Identify Metadata for Search tasks, as well as the Select Digital Libraries sub-activity. In turn, the latter includes the Define Digital Libraries Selection Criteria and Identify Digital Libraries tasks, as represented in Figure 5. Examples of digital library selection criteria can be the target language and the library domain, among others. The selection of digital libraries can determine the scope and validity of the reviewers' conclusions. As a result of the Design Search Protocol activity, the "Search Protocol" is obtained, which includes a search string consisting of terms and logical operators, the metadata on which the search will be applied (e.g., title and abstract) and the selected digital libraries (e.g., IEEE, ACM, Springer Link, Google Scholar, among others). From the "Search Protocol" and "Research Questions" artifacts, it is possible to execute the Define Selection and Quality Criteria sub-activity. This produces the "Selection Criteria" and "Quality Criteria" artifacts. The criteria can be indicated in a checklist with the different items to be considered. The "Selection Criteria" artifact documents the set of inclusion and exclusion criteria, i.e., the guidelines that determine whether an article will be considered in the review or not. Reviewers should ask: Is the study relevant to the review's purpose? Is the study acceptable for review? To answer these questions, reviewers formulate inclusion and exclusion criteria. Each systematic review has its own goal purpose and research questions, so its inclusion and exclusion criteria are usually unique (except for a replication).
However, inclusion and exclusion criteria typically belong to one or more of the following categories: (a) study population, (b) nature of the intervention, (c) outcome variables, (d) time period, (e) cultural and linguistic range, and (f) methodological quality (Meline 2006). Note that in Figure 6 the "Inclusion Criteria" and "Exclusion Criteria" artifacts are part of the "Selection Criteria" artifact.
The "Quality Criteria" artifact documents features that allow evaluating the quality of the retrieved studies in A2, as well as identifying relevant or desirable aspects for the researchers. Sometimes, quality criteria are used like inclusion/exclusion criteria (or to build them), because it is very important to select studies of high quality for deriving reliable results and conclusions (Kitchenham & Charters 2007). In other cases, researchers do not plan to exclude any studies based on the quality criteria. Using "Quality Criteria" as "Selection Criteria" is a critical decision because, if the inclusion criteria are too broad, poor quality studies may be included, lowering the confidence in the final result; but if the criteria are too strict, the results are based on fewer studies and the yielded evidence may not be generalizable (Lam & Kennedy 2005).
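The role of selection criteria as yes/no guidelines can be sketched as predicates over a study record. The criteria and field names below are hypothetical examples, not the criteria of the cited SLR:

```python
# Illustrative sketch: inclusion/exclusion criteria as predicates over a
# study record. All criteria and record fields here are hypothetical.

INCLUSION_CRITERIA = [
    lambda s: s["language"] == "English",   # cultural and linguistic range
    lambda s: s["year"] >= 2004,            # time period
]
EXCLUSION_CRITERIA = [
    lambda s: s["type"] == "short paper",   # methodological quality proxy
]

def is_selected(study):
    """A study is selected if it meets every inclusion criterion
    and no exclusion criterion."""
    return (all(c(study) for c in INCLUSION_CRITERIA)
            and not any(c(study) for c in EXCLUSION_CRITERIA))

study = {"language": "English", "year": 2016, "type": "full paper"}
# is_selected(study) -> True
```

Encoding the checklist this way makes the "too broad vs. too strict" trade-off explicit: each added predicate narrows the selected set.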
As shown in Figure 3, the next activity is Design Data Extraction Form. As output, the "Template of the Data Extraction Form" is yielded, whose fields are defined from the "Research Questions" and "Quality Criteria". This template will be used in A2 to collect information about each selected article. Note in Figure 4 that this activity is performed by the SLR Designer and the Analysis Expert Designer. The former should have the knowledge and skills to design and specify the data extraction form, while the latter should have the expertise to identify the data types required for analysis purposes (see the annotation for the Design Data Extraction Form sub-activity in Figure 3). Then, all the artifacts produced up to this moment should be validated. Validating the SLR design implies reviewing such documents in order to detect problems or opportunities for improvement. Usually, researchers with expertise in conducting SLRs perform this activity (see the SLR Expert Researcher and SLR Validator definitions in Table 2). As outcome, the "SLR Protocol" document is obtained, which contains all the artifacts previously produced, as represented in the informational perspective in Figure 6.
Lastly, it is worth mentioning that the "SLR Protocol" document may be in an approved, corrected or disapproved state. In the latter case, a list of "Detected Problems/Suggested Improvements" should also be produced. This artifact will serve as input to the Improve SLR Design activity, which includes tasks such as Correct Research Questions and Correct Search String, among others, in order to introduce changes into the "SLR Protocol", i.e., corrections that improve it. Once the protocol has been corrected, the Validate SLR Design activity is performed again, aimed at checking that the corrected protocol complies with the "SLR Information Need Goal Specification". Ultimately, the A1 activity ends when the "SLR Protocol" is approved.

To Implement the Review (A2)
The main objective of A2 is to perform the SLR. The pink box in Figure 3 shows the different sub-activities and tasks of A2 together with its input and output artifacts. Note that for first-time cases, i.e., where a study is not a repeated or replicated one, performing a pilot test first is recommended, which is aimed at fitting the "SLR Protocol" produced in the A1 activity. Note that this concern is usually neglected or poorly specified in other existing SLR/SM processes.
When the realization of a SLR pilot study (A2.1) is taken into account, the first task to be enacted by the SLR Performer is Select Digital Libraries for Pilot Study (see Figure 7, which mainly emphasizes the flow of tasks and activities for the pilot study). This consists of choosing a library subset (usually one or two) from the "Selected Digital Libraries" artifact produced in A1. Then, the Execute Search Protocol task is enacted on the selected libraries, considering the "Search String" and the "Metadata for Search". As outcome, a list of "Pilot Study Retrieved Documents" is produced. From this list, in the Apply Selection Criteria activity, the articles are downloaded and filtered considering the "Inclusion Criteria" and "Exclusion Criteria". This results in the "Pilot Study Selected Documents".
From this subset of documents, the Extract Data from a Sample of Documents activity is performed by Data Collectors (see Figure 8). This activity involves the Select Sample of Documents task, which can be done randomly (Kitchenham 2004). Then, for each document, the Extract Data from Sample task is performed by using the "Template of the Data Extraction Form". Note that data is extracted from only one sample, since the aim of the pilot test is just to analyze how suitable the protocol being followed is. If more than one Data Collector will use the forms in the final review, then it is recommended that more than one Data Collector participate in the pilot study data extraction. Having different Data Collectors test the forms can be useful for finding inconsistencies.
Finally, considering all the parts that integrate the "SLR Protocol" artifact (Figure 6), as well as the "Forms with Pilot Extracted Data", the Analyze Suitability of the SLR Design activity is performed by the SLR Validator and the Domain Expert. This analysis permits adjusting the data extraction form in addition to other protocol aspects such as the research questions, search string and/or selection criteria. For example, a method to validate the search string is to check whether a set of known papers is recovered among the "Pilot Study Selected Documents". When no problem is detected in the protocol, the Perform SLR activity (A2.2) is carried out. However, if a problem is detected or there is an opportunity for improvement, the Improve SLR Design and Validate SLR Design activities should be carried out again, as shown in the behavioral perspective specified in Figure 7. Once all the changes have been made and the "SLR Protocol" has been approved, the A2.2 sub-activity should be executed. Notice in Figure 7 that a new cycle of the pilot study could be performed, if necessary.
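The known-papers check for validating the search string can be sketched as a simple set difference. The paper identifiers below are hypothetical placeholders:

```python
# Hypothetical sketch of the known-papers check: verify that a set of known
# relevant papers appears among the pilot-selected documents.

def missing_known_papers(known_papers, pilot_selected):
    """Known papers the pilot search failed to recover; a non-empty result
    suggests the search string (or the selected libraries) needs adjustment."""
    return sorted(set(known_papers) - set(pilot_selected))

known = ["Smith 2015", "Lee 2017"]            # papers the reviewers expect to find
pilot_selected = ["Lee 2017", "Kim 2018"]     # "Pilot Study Selected Documents"

missing = missing_known_papers(known, pilot_selected)
# ['Smith 2015']  -> a trigger for the Improve SLR Design activity
```

An empty result does not prove the string is complete, but any missing known paper is concrete evidence that the protocol needs another pilot cycle.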
The Perform SLR (A2.2) sub-activity implies the Execute Search Protocol task, now taking into account all the "Selected Digital Libraries". The SLR Performer must Apply Selection Criteria on the "Retrieved Documents" in order to filter out those that do not meet the criteria defined in A1. As an artifact, "Selected Documents" is yielded, serving as input to the Add Non-Retrieved Relevant Studies sub-activity. This activity is usually performed using a citation-based searching method, for example, the forward snowballing method (i.e., finding papers that cite the papers found by a search process) or the backward snowballing method (i.e., looking at the references of the papers found by a search process).
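One round of backward and forward snowballing can be sketched over a toy citation map. In practice these links come from digital libraries or citation indexes; the paper identifiers here are illustrative:

```python
# Illustrative sketch of one round of backward/forward snowballing over a
# toy citation map. Paper identifiers and links are hypothetical.

def backward_snowballing(selected, references):
    """Candidates found in the reference lists of the selected papers."""
    return {ref for p in selected for ref in references.get(p, [])} - set(selected)

def forward_snowballing(selected, citations):
    """Candidates among the papers that cite the selected papers."""
    return {c for p in selected for c in citations.get(p, [])} - set(selected)

references = {"P1": ["P3", "P4"], "P2": ["P4"]}   # paper -> its references
citations  = {"P1": ["P5"], "P4": ["P6"]}         # paper -> papers citing it
selected   = ["P1", "P2"]

backward = backward_snowballing(selected, references)   # {'P3', 'P4'}
forward  = forward_snowballing(selected, citations)     # {'P5'}
```

Candidates found this way would still go through the selection criteria before being added to the "Selected Documents", and further rounds can be run on the newly added papers.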
At the end, the Extract Data from all Documents activity is done by using the "Template of the Data Extraction Form". This activity is performed by one or more Data Collectors. Depending on the Data Collector agents' experience, the available resources, and the number of articles, among other factors, a given article can be analyzed by one or two agents. In cases where the same article is read independently by several agents (as Data Collectors), the extracted data should be compared and disagreements resolved by consensus among them or by an additional researcher, maybe an agent that plays the SLR Expert Researcher role. If each document is reviewed by just one Data Collector agent, for example due to time or resource constraints, it is important to ensure that some method will be used for verifying consistency. Note in Figure 3 that discrepancies should be recorded in the "Divergencies Resolution Report", as also suggested by Biolchini et al. (2005). Once A2 is accomplished, the "Forms with Extracted Data" artifact is available for the A3 activity.
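The comparison of extraction forms filled in independently by two Data Collectors can be sketched as a field-by-field diff whose output feeds the divergence report. The field names and values are hypothetical:

```python
# Hypothetical sketch: flagging divergences between two Data Collectors'
# extraction forms so they can be recorded and resolved by consensus.
# Field names and values are illustrative only.

def find_divergences(form_a, form_b):
    """Return the fields on which two extraction forms disagree,
    mapped to the pair of conflicting values."""
    return {
        f: (form_a.get(f), form_b.get(f))
        for f in set(form_a) | set(form_b)
        if form_a.get(f) != form_b.get(f)
    }

collector_1 = {"study_type": "case study", "domain": "testing", "year": 2016}
collector_2 = {"study_type": "experiment", "domain": "testing", "year": 2016}

divergences = find_divergences(collector_1, collector_2)
# {'study_type': ('case study', 'experiment')}
```

Each entry of the result is a candidate row for the divergence report: the disputed field, both recorded values, and (once discussed) the agreed resolution.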

To Analyze and Document the Review (A3)
A3 is a central activity in the entire SLR/SM process. The main objective of this activity is to synthesize the analysis results based on the available scientific evidence in order to draw conclusions and communicate the findings. Furthermore, considering that a SLR should be systematic, reproducible and auditable, the continuous documentation of the followed process, applied methods and produced artifacts is a key issue. (Note that there are additional activities beyond those specified in Figure 2 and Figure 3, which the management of a SLR project -specifically, the project planning and scheduling- should also take into account, such as documenting the artifacts in all activities and controlling their changes and versions.) Figure 3 (in the gray box) shows that Analyze SLR Results is the first sub-activity to be performed in A3. This implies in turn the Design SLR Analysis sub-activity, which is performed by an agent that plays the Analysis Expert Designer role, who is responsible for identifying the suitable data analysis methods and techniques to be used. As output, the "SLR Analysis Specification" is produced. Then, the Implement SLR Analysis sub-activity should be enacted by a Data Analyzer, who is responsible for conducting the data analysis. The analysis is carried out looking at the "Forms with Extracted Data" and "SLR Analysis Specification" artifacts. In the analysis, diverse measurement, evaluation, categorization and aggregation methods, as well as visualization means (such as tables, charts, word clouds, among others), can be used in order to answer the established research questions, e.g., to address the findings of similarities and differences between the studies, among many others. As a result, the "Data Synthesis" artifact is produced. The synthesis is usually descriptive.
However, sometimes it is possible to supplement a descriptive synthesis with quantitative summaries through meta-analysis, using arithmetical and statistical techniques appropriately.
Finally, an Expert Communicator carries out the Document/Communicate Results sub-activity. To this end, dissemination mechanisms are first established, for example, technical reports, journal and conference papers, among others. Then, the documents that convey the results to the intended community are produced. In this way, the SLR process concludes. All the collected evidence and summarizations might be made publicly available for auditability reasons.

Application of the proposed SLR Process on Primary Studies
In order to illustrate the proposed SLR process, next, we introduce the rationale for our research to contextualize the SLR study on software testing ontologies described later on.

Rationale for the SLR Study on Software Testing Ontologies
A strategy is a core resource of an organization that defines a specific course of action to follow, i.e., specifies what to do and how to do it. Consequently, strategies should integrate a process specification, a method specification, and a robust domain conceptual base (Becker et al. 2015). This principle of integratedness promotes, therefore, knowing what activities are involved, and how to carry them out by means of methods in the framework of a common domain terminology. In Olsina & Becker (2017), to achieve evaluation purposes, a family of strategies integrating the three-abovementioned capabilities is discussed. The conceptual framework for this family of evaluation strategies is called C-INCAMI v.2 (Contextual-Information Need, Characteristic Model, Attribute, Metric and Indicator) (Becker et al. 2015).
This conceptual framework was built on vocabularies or terminologies, which are structured in ontologies. Figure 9 depicts the different C-INCAMI v.2 conceptual components or modules, where the gray-shaded ones are already developed. The ontologies for Non-Functional Requirements (NFRs), NFRs view, Functional Requirements (FRs), business goal, project, and context are defined in , while those for measurement and evaluation are in Becker et al. (2015). The remaining ontologies (for testing, development and maintenance) are not built yet.
Bearing in mind that there are already integrated strategies that provide support for achieving evaluation purposes, the reader can assume that strategies supporting testing purposes are feasible to develop as well. Given that a strategy should integrate a well-established domain terminology, a well-specified testing strategy should also have this capability for the testing domain. A benefit of having a suitable software testing ontology is that it would minimize the heterogeneity and ambiguity problems that we currently observe in the different concepts dealing with testing methods and processes.
In this direction, we conducted the SLR study on software testing ontologies (Tebes et al. 2019b) in order to establish the suitable top-domain testing ontology to be integrated into the C-INCAMI v.2 conceptual framework. That is, we envision populating the testing conceptual component shown in Figure 9 and linking it with the FRs and NFRs components. Next, we illustrate the A1-A3 activities presented in Section 3 using excerpts of the Tebes et al. (2019b) study.

To Design the Review (A1) for the Software Testing Ontologies Study
As observed in the functional and behavioral perspectives in Figure 3, to start A1, the "SLR Information Need Goal Specification" is required. In this case, the information need establishes that papers documenting software testing ontologies from digital libraries must be systematically analyzed.
From the main goal established for this SLR, two "Research Questions" were initially formulated, namely: (RQ1) What are the existing ontologies for the software testing domain? And (RQ2) What are the relevant concepts, their relationships, attributes and constraints or axioms needed to describe the software testing domain? Answering RQ1 will allow us to identify and analyze the different existing software testing ontologies. RQ2 will serve to elicit the terms (or concepts), their relationships, attributes or properties, and restrictions needed to specify an ontology for the testing domain.
Then, the "Search Protocol" was designed. Taking into account RQ1, the following search string was initially proposed: "Software Testing" AND ("Ontology" OR "Conceptual Base"). For this particular study, the search string was applied on the three selected metadata fields, namely: title, abstract and keywords. (Note that the search string could also be applied to the full text.) Finally, the digital libraries included in the revision were Scopus, IEEE Xplore, ACM Digital Library, Springer Link and Science Direct.
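The boolean structure of that search string over the three metadata fields can be sketched as follows. This is only an illustration of the logic; each real digital-library search engine applies its own query syntax and matching rules:

```python
# Hedged sketch: evaluating the search string
#   "Software Testing" AND ("Ontology" OR "Conceptual Base")
# over the three selected metadata fields (title, abstract, keywords).
# The document structure is a hypothetical dictionary, not a library API.

def matches_search_string(doc):
    text = " ".join([doc.get("title", ""),
                     doc.get("abstract", ""),
                     " ".join(doc.get("keywords", []))]).lower()
    return ("software testing" in text
            and ("ontology" in text or "conceptual base" in text))

doc = {"title": "An Ontology for Software Testing Processes",
       "abstract": "...",
       "keywords": ["testing", "ontology"]}
print(matches_search_string(doc))  # True
```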
For the "Selection and Quality Criteria" defined in this case study, see WP 1.3 in Table 3. Then, based on the research questions and the quality criteria, a set of fields for the data extraction was defined by the SLR Designer and the Analysis Expert Designer (see WP 1.4 in Table 3). For example, to extract terms, properties, relationships and axioms from each article, the "Relevant concepts used to describe software testing domain" field was specified with a sub-field per element, up to "Axiom: definition/specification" (this inner structure is not shown in WP 1.4 of Table 3). The "Data Extraction Form" allows obtaining homogeneity between the data extracted from each document, thus facilitating the task of analyzing them.

Once the produced artifacts were validated, as a result of A1, the "SLR Protocol" was obtained. Table 3 shows all the artifacts that integrate this document, which correspond to those specified in the informational perspective of Figure 6.

To Perform the SLR Pilot Study (A2.1)
Looking at the activity flow shown in the behavioral perspective of Figure 2, we conducted a pilot study for analyzing the suitability of the "SLR Protocol". As part of the A2.1 execution, from the "Selected Digital Libraries" chosen in A1 (see WP 1.2.3 in Table 3), Scopus was selected for this pilot test because it contains digital resources from various sources, such as Elsevier, IEEE Xplore and ACM Digital Library. As a result of carrying out the Execute Search Protocol and Apply Selection Criteria activities, 19 documents were obtained (Tebes et al. 2018), which were reviewed by three Data Collectors of the GIDIS_Web research group to Extract Data from a Sample of Documents (recall Figure 8). Once A2.1 was completed, a list of "Detected Problems/Suggested Improvements" was produced and the Improve SLR Design activity was run (as prescribed in Figure 3 through the functional and behavioral perspectives). Table 4 shows the updates (highlighted in blue and underlined) that the "SLR Protocol" underwent after the pilot study. Next, some changes are described considering the "Detected Problems/Suggested Improvements" document.
In the RQ1 research question, the term "existing" was replaced by the term "conceptualized". The former is broader than the latter, covering both conceptualizations and implementations of software testing ontologies. However, our main goal is to retrieve conceptualized ontologies, regardless of whether they are implemented or not.
On the other hand, the term "relevant" in the RQ2 research question in Table 3 negatively influenced the number of terms extracted by each Data Collector. Therefore, the research question was reformulated as observed in WP 1.1 in Table 4. In addition, the "Relevant concepts used [...]" field in the form (see WP 1.4) was changed to "Specified concepts [...]". This change made the extraction more objective and easier to interpret than with the initial design.
Moreover, the full reading of articles during the pilot study allowed us to detect that ontologies of various types were presented, such as foundational ontologies, top-domain ontologies and domain ontologies. Since the final aim after executing the SLR was to adopt, adapt or build a new top-domain ontology, this information turned out to be relevant. Consequently, a new research question (RQ3 in WP 1.1 in Table 4) and the "Classification of the proposed ontology" field in the "Template of the Data Extraction Form" (see WP 1.4 in Table 4) were added.

Inclusion Criteria (WP 1.3.1): 1) That the work be published in the last 15 years; 2) That the work belongs to the Computer Science area; 3) That the work documents a software testing ontology; 4) That the document is based on research (i.e., it is not simply a "lesson learned" or an expert opinion). Exclusion Criteria (WP 1.3.2): 1) That the work be a prologue, article summary or review, interview, news, discussion, reader letter, or poster; 2) That the work is not a primary study; 3) That the work is not written in English.

Also, the search string was modified slightly (compare WP 1.2.1 in Table 3 and Table 4) because not all search engines take into account variations or synonyms of the used words. Inclusion criterion 1 in WP 1.3.1 (Table 3) is not very specific; therefore, it was modified as observed in WP 1.3.1 of Table 4. The full reading of articles also permitted us to detect that some of them were different versions (or fragments) of the same ontology. Therefore, exclusion criteria 5 and 7 were added (see WP 1.3.2 in Table 4).

On the other hand, since the searches in Scopus retrieve documents that belong to other digital libraries, exclusion criterion 6 of the WP 1.3.2 was added to eliminate duplicates.
Finally, we also observed that some ontologies were built taking into account other terminologies, which may add a quality factor to the new proposal. For this reason, quality criterion 5 was added in the WP 1.3.3 of Table 4, which implies a new field in the "Template of the Data Extraction Form" (see "Terminologies or Vocabularies taken into account [...]" in WP 1.4). This new quality criterion may prove to be useful information in the construction process of any ontology. The reader can check the final "Template of the Data Extraction Form" artifact in Appendix A.

To Perform the SLR (A2.2)
Six agents (four researchers from GIDIS_Web and two researchers from ORT Uruguay) performed the A2.2 activity.

Inclusion Criteria (WP 1.3.1): 3) That the document has the ontological conceptualization of the testing domain (i.e., it is not simply a "lesson learned or expert opinion" or just an implementation). Exclusion Criteria (WP 1.3.2): 1) That the work be a prologue, article summary or review, interview, news, discussion, reader letter, poster, table of contents or short paper (a short paper is considered to be one of up to 4 pages); 2) That the work is not a primary study; 3) That the work is not written in English; 4) That the work does not document a software testing ontology; 5) That the ontology presented in the document be an earlier version than the most recent and complete one published in another retrieved document; 6) That a same document be the result of more than one bibliographic source (i.e., it is duplicated); 7) That the conceptualized ontology in the current document be a fragment of a conceptualized ontology in another retrieved document. Quality Criteria (WP 1.3.3): 1) Is/Are the research objective/s clearly identified? 2) Is the description of the context in which the research was carried out explicit? 3) Was the proposed ontology developed following a rigorous and/or formal methodology? 4) Was the proposed ontology developed considering also its linking with Functional and Non-Functional Requirements concepts? 5) What other terminologies of the software testing domain were taken into account to develop the proposed ontology?
Template of the Data Extraction Form (WP 1.4): Researcher name; Article title; Author/s of the article; Journal/Congress; Publication year; Digital library; Name of the proposed ontology; Specified concepts used to describe software testing domain; Methodology used to develop the ontology; Terminologies or Vocabularies taken into account to develop the proposed ontology; Classification of the proposed ontology; Research context; Research objective/s related to software testing ontologies; Does the proposed ontology consider its linking with Functional and Non-Functional Requirements concepts?; Additional notes.

Figure 10 shows the Execute Search Protocol and Apply Selection Criteria work definitions instantiated for the SLR project on Software Testing Ontologies. The Execute Search Protocol task was performed by both research groups. Particularly, GIDIS_Web agents (green shadow in the figure) retrieved documents from the Scopus, ACM and IEEE Xplore digital libraries while, in parallel, ORT agents (orange shadow) retrieved documents from the Springer Link and Science Direct digital libraries. The workload for carrying out the Execute Search Protocol tasks was balanced, i.e., two members of each group participated. The remaining two members of GIDIS_Web acted as Domain Expert and SLR Validator, who coordinated and guided the other researchers throughout A2.2. Additionally, Figure 10 shows the tasks instantiated in this SLR project in order to perform the Apply Selection Criteria sub-activity. Note that these tasks are scheduled taking into account the Inclusion and Exclusion Criteria artifacts of our project (see the WP 1.3.1 and WP 1.3.2 artifacts in Table 4).
It is important to remark that, from the project scheduling standpoint, the Execute Search Protocol task in the generic process in Figure 3 was instantiated twice in Figure 10, considering the actual project particularities. Note also that the Apply Selection Criteria sub-activity is shown at the task level in Figure 10 for the same reason. Therefore, the process models represented in Section 3 should be customized for any specific project life cycle and context.
As a result of the Execute Search Protocol tasks, 731 documents were retrieved. Figure 11 shows the amount of "Retrieved Documents" per digital library, following the specification of the "Search Protocol" artifact (see WP 1.2 in Table 4). Table 5 records the number of "Selected Documents" produced after performing the Apply Selection Criteria sub-activity. This yielded 10 selected primary studies by applying the different inclusion/exclusion criteria over the analyzed content, as presented in its 1st and 2nd columns. Considering the initial and final states, the reduction rate reached is 98.6%.
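As a minimal sketch, the reduction rate reported above follows directly from the initial and final document counts:

```python
# Sketch of the document-reduction bookkeeping: 731 retrieved documents were
# narrowed down to 10 selected primary studies by the selection criteria.

def reduction_rate(retrieved, selected):
    """Percentage of retrieved documents filtered out by the selection criteria."""
    return round((retrieved - selected) / retrieved * 100, 1)

print(reduction_rate(731, 10))  # 98.6
```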
The next activity carried out was the Add Non-Retrieved Relevant Studies sub-activity, as depicted in Figure 3. This was performed just by GIDIS_Web members using two different methods, namely: Backward Snowballing and Prior Knowledge of other Research Work. Table 6 shows the "Final Selected Documents" after enacting the Add Non-Retrieved Relevant Studies sub-activity. This totaled 12 primary studies after passing all inclusion and exclusion criteria (see Table 6). Finally, the Extract Data from all Documents sub-activity should be performed, which has as input the "Template of the Data Extraction Form" (Appendix A) and the "Final Selected Documents" (Table 6), and as output the "Forms with Extracted Data" artifact. Appendix B shows the filled form with extracted data for the PID 347 article. This sub-activity must be performed in a very disciplined and rigorous way, being also very time consuming.
The work distribution for the Extract Data from all Documents sub-activity was as follows. Two members of GIDIS_Web, as Data Collectors, gathered completely the required data of the 12 documents. At random, we selected 4 out of the 12 documents, which were made available (via Google Drive) for data collection by two members of Universidad ORT Uruguay, but not shared with each other while gathering the data, in order to later permit a more objective consistency check. As a result of checking the instantiated forms of both groups, some minor issues were raised and discrepancies were resolved by consensus via video chat. (Note that the data and quality extraction reliability could also be evaluated. For example, in the study documented in Kitchenham & Brereton (2013), the authors checked the level of agreement achieved for data and quality extraction. For quality extraction, the Pearson correlation coefficient was computed between the values given by each assessor for each paper, both for the number of appropriate questions and for the average quality score per paper. For data extraction, the agreement with respect to the study categories was assessed using the Kappa statistic.) It is worth mentioning that, thanks to looking for inconsistencies, we detected that the collected concepts (i.e., terms, properties, etc.) included in the form for the RTE-Ontology (Campos et al. 2017) comprised not only the software testing domain-specific terms and relationships but also those related to the core or high-level ontology called PROV-O (Lebo et al. 2013). Hence, we decided to document both in a differentiated way (by colors in the form) so that, at analysis time, only the domain-related concepts were counted. For instance, there are 17 terms in Campos et al. (2017), but just 14 are domain-specific software testing terms not taken from PROV-O.
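The Kappa statistic mentioned above (for two independent Data Collectors classifying studies into categories) can be computed as in this hedged sketch; the category labels below are toy data, not the actual extraction results:

```python
from collections import Counter

# Cohen's kappa for two raters: agreement on categorical labels, corrected
# for the agreement expected by chance. Labels are illustrative only.

def cohen_kappa(labels_a, labels_b):
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    categories = set(labels_a) | set(labels_b)
    expected = sum((freq_a[c] / n) * (freq_b[c] / n) for c in categories)
    return (observed - expected) / (1 - expected)

# Toy classifications of four studies by two independent collectors:
a = ["domain", "domain", "top-domain", "foundational"]
b = ["domain", "top-domain", "top-domain", "foundational"]
print(round(cohen_kappa(a, b), 2))  # 0.64
```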
A similar counting situation happened with ROoST, but in this case, the authors took terms from UFO (a foundational ontology built by the same research group). Lastly, all the issues raised during the Extract Data from all Documents sub-activity were documented in the "Divergencies Resolution Record" artifact.
Ultimately, all the main SLR artifacts for this study including the filled forms with the extracted data for the 12 "Final Selected Documents" can be publicly accessed at https://goo.gl/HxY3yL.

To Analyze and Document the Review (A3) for the Software Testing Ontologies Study
A3 includes two sub-activities. The first sub-activity, named Analyze SLR Results, produces the "Data Synthesis" artifact. To produce this artifact, playing the Analysis Expert Designer role, we have designed and specified a set of direct and indirect metrics (Olsina et al. 2013). Below, we show just the formulas (not their whole specifications) for some indirect metrics:

%DfTr = (#DfTr / #Tr) * 100   (1)
#Rh = #TxRh + #NoTxRh   (2)
#TxRh = #Tx-is_a + #Tx-part_of   (3)
%DfNoTxRh = (#DfNoTxRh / #NoTxRh) * 100   (4)
%TxCptuaLvl = (#TxRh / #Rh) * 100   (5)

Briefly, formula (1) allows us to know the proportion of defined terms (#DfTr) with regard to the number of terms (#Tr) in the conceptualized ontology. Formula (2) calculates the total number of relationships as the sum of the taxonomic (#TxRh) and non-taxonomic (#NoTxRh) relationships. Note that the taxonomic relationships are the sum of the inheritance (#Tx-is_a) relationships and the whole-part (#Tx-part_of) relationships -see formula (3). Formula (4) permits understanding the proportion of defined non-taxonomic relationships. Finally, formula (5) calculates the percentage of taxonomic conceptual-base level, which is very useful to determine whether a conceptual base is actually an ontology or rather a taxonomy. (Note that, for brevity reasons, we did not include the metric formula for the ratio of specified axioms.) These metrics are very easy to use considering that the "Template of the Data Extraction Form", particularly the "Specified concepts used to describe software testing domain" field, has a suitable design for facilitating the subsequent counting (see the structure of this field in Appendix A, and one example of its instantiation in Appendix B). Additionally, we use a table to record the measures' values. For example, Table 7 shows the measures' values for the paper PID 347 (Campos et al. 2017). Then, all these values are considered by the Data Analyzer to perform the analysis.
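As a minimal sketch, the indirect metrics (1)-(5) can be computed directly from the counts recorded in a filled extraction form. Variable names mirror the formulas; the counts below are illustrative placeholders, not the actual Table 7 values:

```python
# Sketch of indirect metrics (1)-(5) over the counts of a conceptualized
# ontology. The input counts are illustrative, not the actual Table 7 values.

def slr_metrics(df_tr, tr, tx_is_a, tx_part_of, no_tx_rh, df_no_tx_rh):
    tx_rh = tx_is_a + tx_part_of                     # (3) taxonomic relationships
    rh = tx_rh + no_tx_rh                            # (2) total relationships
    return {
        "%DfTr": df_tr / tr * 100,                   # (1) proportion of defined terms
        "#Rh": rh,
        "#TxRh": tx_rh,
        "%DfNoTxRh": df_no_tx_rh / no_tx_rh * 100,   # (4) defined non-taxonomic rel.
        "%TxCptuaLvl": tx_rh / rh * 100,             # (5) taxonomic conceptual-base level
    }

m = slr_metrics(df_tr=10, tr=16, tx_is_a=5, tx_part_of=3,
                no_tx_rh=8, df_no_tx_rh=6)
print(m["%DfTr"], m["%TxCptuaLvl"])  # 62.5 50.0
```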
Moreover, in this activity we use other tools for analysis purposes. For example, a word cloud tool is used for RQ2 (What are the most frequently included concepts, their relationships, attributes and axioms needed to describe the software testing domain?). Figure 12 shows the word cloud produced from the terms retrieved from the 12 conceptual bases.
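The input to such a word cloud is essentially a term-frequency count across the conceptual bases, which can be sketched as follows (the term lists are illustrative examples, not the actual extracted terms):

```python
from collections import Counter

# Hedged sketch of the word-cloud input: term frequencies across the
# analyzed conceptual bases. The terms below are illustrative only.

ontology_terms = [
    ["Test Case", "Test Plan", "Test Result"],   # terms from one conceptual base
    ["Test Case", "Tester", "Test Environment"],
    ["Test Case", "Test Plan", "Defect"],
]
frequencies = Counter(term for terms in ontology_terms for term in terms)
print(frequencies.most_common(2))  # [('Test Case', 3), ('Test Plan', 2)]
```

A word cloud tool would then scale each term's font size proportionally to its frequency.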
All the tables, charts, word clouds and measures are included in the "Data Synthesis" and used by the Data Analyzer to perform the analysis. In Figure 13, we show an excerpt of the "Data Synthesis" produced for the software testing ontologies study. Then, using the "Data Synthesis" as input, a "Scientific Article" (Tebes et al. 2019b) was elaborated by the Expert Communicator and the Domain Expert in the Document/Communicate Results sub-activity. In Tebes et al. (2019b), the full analysis is documented, where all research questions are answered and other issues, such as validity threats, are considered. However, due to the page size constraints of a conference paper, we are currently extending it to a journal format with less restrictive page limits. This SLR on software testing ontologies will then be fully documented.

Discussion
The process proposed by Kitchenham et al., which was so far adopted or slightly adapted by other researchers -for example, in Sepúlveda et al. (2016) and Tahir et al. (2016)-, is specified in a more coarse-grained way than ours. One can observe, on one hand, a greater richness of expressiveness in sequences and decision flows in Figure 2, and, on the other hand, the possibility of representing different levels of granularity in work definitions, such as activity, sub-activity and task. (Note that, in order to maintain the simplicity and legibility of the diagram in Figure 2, we have not specified other aspects such as iterations and parallelisms -e.g., the parallelism that can be modeled between the Define Quality Criteria and Define Inclusion/Exclusion Criteria tasks, within the Define Selection and Quality Criteria sub-activity in Figure 2.) Although the behavioral perspective is undoubtedly necessary, it is usually not sufficient, since it does not represent inputs and outputs for the different activities and tasks. For this reason, the functional perspective in conjunction with the behavioral perspective enhances the model expressiveness, as shown in Figure 3. Furthermore, the informational perspective also enriches the SLR process specification and favors the understanding of the process by showing the structure of a given artifact. This informational perspective is illustrated for the "SLR Protocol" artifact in Figure 6, which is produced by the A1 activity.
Additionally, the organizational perspective shown in Figure 4 helps to understand that, in order to carry out an SLR, it is also necessary to cover several roles played by agents (both human and automated) with different skills, i.e., different sets of capabilities, competencies and responsibilities.
On the other hand, a key aspect to be highlighted in the present proposal is the modeling of the A2.1 sub-activity (Perform SLR Pilot Study), which gives feedback to the A1 activity (Design Review). This pilot test activity is very often neglected in the processes adapted from Brereton et al. (2007). Alternatively, when it is mentioned or considered, such as in Brereton et al. (2007) and Sepúlveda et al. (2016), it is poorly specified. For example, in Sepúlveda et al. (2016), the authors include the Selection and Pilot Extraction activity, which is represented simply as a step in the A2 activity -named phase in Sepúlveda et al. (2016)-, but it does not give feedback to the A1 activity (phase). Modeling the feedback loop is important because it could help to improve aspects of the SLR design, as we have proposed in Figure 7, and illustrated its usefulness in sub-section 4.3.1, particularly in the improvement of the Data Extraction Form. Additionally, notice that in our proposal we include both the Validate SLR Design sub-activity (which is represented in A1) and the Perform SLR Pilot Study sub-activity (which clearly must be represented in A2 since it contains tasks inherent to the execution of the SLR).
In short, the main contribution of this work is to augment the SLR process currently and widely used by the SE scientific communities. This is achieved by considering, on one hand, the principles and benefits of process modeling with greater rigor by using four modeling perspectives, namely: functional, behavioral, informational and organizational. And, on the other hand, by the vision of decoupling the 'what' to do aspect, which is modeled by processes, activities, tasks, artifacts and behavioral aspects, from the 'how' to realize the description of work definitions, which is modeled by method specifications. A lack of separation of concerns between what to do (processes) and how to do it (methods) can be observed in the cited literature (and in general).
Although throughout Sections 3 and 4 we have indicated some methods applicable to SLR tasks, the emphasis in this work is not on the specification of methods. This separation of concerns can be seen, for example, in the description of the Add Non-Retrieved Relevant Studies sub-activity (recall sub-section 4.3.2), which can be carried out by using two methods such as forward snowballing and/or backward snowballing. However, in the process models related to this sub-activity (and to others), no reference is made to these methods, nor to any others.

Conclusion
In this work, we have documented the SLR process specification by using process-modeling perspectives and, mainly, the SPEM language. It is a recommended flow for the SLR process, since we are aware that in a process instantiation there might be some variation points, such as the parallelization of some tasks. It is worth noting that, regarding the benefits of process modeling quoted in the Introduction Section, the proposed SLR process specifications aim primarily at facilitating understanding and communication, as well as at supporting process improvement and process management. However, a thorough discussion and a detailed illustration of process modeling for fully supporting SLR process automation have been out of the scope of this article. Additionally, we have highlighted the benefits and strengths of the proposed SLR process model compared with others, which are specified in a more coarse-grained manner. Finally, we have illustrated aspects of it by exemplifying a SLR on software testing ontologies. One important contribution is the inclusion of the pilot test activity, which promotes the validation of the "SLR Protocol" artifact not only in the A1 activity but also in the A2.1 sub-activity. It is worthwhile to highlight that conducting a pilot study (not for a replicated study) can be very useful to improve the SLR Protocol and foster to some extent the quality of the evidence-based outcomes. Given that so far we have performed very few pilot studies, we do not have the practical evidence to state what the most effective sample size is for it. In fact, Kitchenham & Charters (2007) do not mention the appropriate sample size either, because -they indicate- it also depends on the available resources. In our opinion, in a pilot study, in addition to the sample size, it is important to select a set of documents (or the whole sample) and have the data extraction of each document performed by more than one independent data collector.
Having more than one filled data extraction form for the same document allows checking for potential inconsistencies in the designed artifacts. Moreover, the Data Validator (SLR Expert Researcher) and the Domain Expert are very important roles that should be played by at least one person with expertise, in order to check (jointly with the independent data collectors) the data extraction forms for inconsistencies, in addition to detecting opportunities for improvement in the template metadata for the subsequent analysis endeavor in the A3 activity. In a nutshell, we consider that several variables may be important for making a good/effective pilot test. However, we would need more informed evidence to tackle this issue, so it is an interesting aspect that could be further investigated by the community.
The proposed process model for the SLR provides a good baseline for understanding the details and discussing alternatives or customizations to this process. In fact, the rigor provided by process modeling, where several perspectives are combined (e.g. functional, informational, organizational and behavioral), but can also be independently detached, provides a greater richness of expressiveness in sequences and decision flows, while representing different levels of granularity in the work definitions, such as activity, sub-activity and task.
It is worth mentioning that the specified process contributes to one pillar of a well-established SLR strategy -knowing beforehand that a strategy should also integrate the method specification capability. Note that, for the same task, different method specifications could be applied. In consequence, the life cycle of a given SLR project should organize activities and tasks considering not only the prescribed SLR process but also the appropriate allocation of resources, such as methods and tools, among others, for achieving the proposed goal. There are additional activities (beyond those specified in Figure 2) that project planning must also take into account, such as documenting artifacts in all activities and controlling their changes and versions. The continuous documentation and versioning of artifacts are key factors to guarantee the consistency, repeatability and auditability of SLR projects.
As an ongoing work, we are currently developing a supporting tool for this SLR process, since remembering and using all the elements that this process provides can be very time consuming and also error prone. Although a variety of tools is available to assist the SLR process (Marshall & Brereton 2013), current tools do not follow our SLR process or do not fit it well. Consequently, we are developing a tool that can help to automate part or all of our SLR process and its documentation, in addition to favoring its usefulness and wide adoption. Taking into account that the main objective in the present work is to provide models that facilitate the understanding and communication of the SLR process to researchers and practitioners, rather than to support full SLR process automation, we will need to augment, for instance, some activity specifications at the task level, in addition to providing a more flexible process flow for collaborative work.
On the other side, as ongoing work, we are currently finishing the development of a suitable top-domain testing ontology to be integrated into the C-INCAMI v.2 conceptual framework. To this end, we took into account explicit terminologies coming not only from some of the existing primary studies, but also from official and de facto international standards (e.g., ISO/IEC/IEEE 29119-1:2013, https://www.iso.org/standard/45142.html, and ISTQB, https://www.istqb.org/downloads.html, respectively), which are widely adopted by professional testers.
Lastly, as future work, we will thoroughly check whether our SLR process specifications can suitably represent the SLR processes followed by other researchers. The outcome of this work will help us to validate our process specifications in a broader way.