Analysis of Automotive EBSE Testing Process (2): A Case Study of Advantages and Challenges

This EBSE special series consists of five chapters, of which this article is the second. The previous chapter (1) introduced the characteristics of automotive software engineering, the staged EBSE testing process designed with mixed methods, and the research questions. Next, we analyze EBSE Step 1 in detail: a case study of strengths and challenges.

Click to read: Analysis of Automotive EBSE Testing Process (1): Characteristics and Problems of Automotive Software Testing (https://blog.csdn.net/NewCarRen/article/details/130137367?spm=1001.2014.3001.5501)

4. EBSE Step 1: Case Analysis on Strengths and Challenges

We conducted an industry case study to investigate the strengths and challenges of the automotive software testing process and to identify room for improvement, answering RQ1 and RQ2 raised in chapter (1) of this series.

4.1. Case Study Design

The case is one of the development sites of a large Swedish automotive organization. The case organization is ISO certified; however, it has struggled to achieve the SPICE level expected by its customers, and in particular, different departments have achieved different assessment results. It also became evident from this study that there was no uniform testing process and that not all projects had a proper test plan. The organization develops software and hardware products in the fields of the Internet of Vehicles, logistics, electronics, mechanics, simulation modeling, and systems engineering.

We report an embedded case study with multiple units of analysis, focusing on the phenomenon of testing multiple projects within one firm. This type of case study makes it possible to compare the testing methodologies, tools, and methods used for different projects in the case organization.

The units of analysis are the different projects of the company under study. They were chosen to maximize variance in factors such as the methodologies used, team size, and the technologies used for testing. The motivation for focusing on variation is that it elicits a wide range of challenges and strengths. It also helps generalizability, since the challenges are not biased toward specific types of projects.

4.2. Objects of Analysis

All of the projects studied were bespoke, as the case organization is a supplier to a specific client. All projects are externally initiated, and the organization does not sell any proprietary products/services. Projects within the organization are mostly maintenance projects or improvements to existing products. In this organization, it is common for one person to hold multiple responsibilities across multiple projects. Table 1 gives an overview of the projects studied.

Systems: Most systems are embedded applications (P1, P2, P3, P4, P7, and P8), i.e. they involve software and hardware parts such as control units, hydraulics, etc. The Windows applications developed in P2, P5, and P6 do not control any hardware.

Team size: We distinguish between small projects (teams of fewer than 4 people) and large projects (teams of 4 or more people). Most teams are large, as shown in Table 1. Small teams do not necessarily have structured development and testing processes, defined roles and responsibilities, or testing methodologies and tools. Three projects (P3, P6, and P8) did not report any test plans. Projects with a high number of modules are developed by large teams and are older than the projects handled by small teams; these systems have evolved considerably over time.

Development methodologies: Different software development methodologies are adopted within the organization. Model-based development is the most prominent (P4, P5, P7, and P8) and is used in combination with the waterfall model, i.e. a sequential process involving requirements, design, component development, integration, and testing. Agile development using Scrum was adopted in one project (P2). The small team involved in maintenance took an ad hoc approach (P6). Two projects recently introduced some agile practices to incorporate iterative development (P1 and P5).

Tools: Various tools are used in the projects' testing, such as test case and test data generators, test execution tools, defect detection and management tools, debugging tools, requirements tracing and configuration management tools, and tools for modeling and analyzing electronic control units (ECUs). In addition, custom tools are built in some projects when no existing tool meets the project's specific needs; these custom tools are mostly used for test execution, to bring the test environment close to the target environment. Small teams (such as P3) do not rely on testing tools but use spreadsheets instead. Large teams responsible for many modules use multiple tools to organize and manage their test artifacts.

Testing levels: As shown in Table 1, almost all projects (seven out of eight) perform unit testing, and integration testing is used in five projects. Unit/basic tests in a project are similar to smoke tests; however, unit testing in this context does not have a clearly defined scope. Half of the projects studied use test automation, but the continuously growing set of test cases is not always incorporated into the automation framework. The interview data show that many teams do not conduct system-level testing; however, most teams agree that integration testing can take its place. As Table 1 shows, other forms of testing, such as regression and exploratory testing, are less common but have recently gained importance within the company.

4.3. Data collection

Data were collected through interviews and process documentation. Data from other sources were not collected due to a lack of availability and insufficient data quality (e.g. quantitative data). The motivation for using multiple data sources (triangulation) is to limit the effect of any single interpretation and thus strengthen the conclusions.

4.3.1. Respondent selection

Respondents were selected using the following steps:

• A comprehensive list was created of the people involved in the testing process, regardless of their role.

• We aimed to select at least 2 people per project, which was not always possible from an availability point of view: for small projects, only 1 person was selected, while for larger projects more people were selected. In addition, the different roles related to the testing process (including developers, managers, and designated testers) should be covered. The final list of employees who participated, however, depended on the availability of time slots in which the interviews could be conducted.

• Respondents were emailed an explanation of why they were considered for the study. The email also contained the purpose of the research and an invitation to an interview.

Table 1: Project overview (units of analysis)

The selected roles represent positions that are directly involved in testing-related activities, or are affected by the results of the overall testing process (see Table 2).

Table 2: Role Description

Project and line-organization roles from the three divisions Alpha, Beta, and Gamma (the divisions have been renamed for confidentiality reasons) were included in our study. Note that some roles are related to project work and some to line responsibilities within a department, i.e. they support different projects within the department. The number of interviews per department, project, and role is shown in Table 3.

Table 3: Respondents

In the Alpha and Beta divisions, a sufficient number of employees were available, but in Gamma, only 1 person was interviewed due to understaffing in the division. This individual was chosen because she was considered an expert with extensive experience in testing automotive systems.

4.3.2. Interview design

The interviews covered four topics; the duration of the interviews was set at approximately one hour each. All interviews were recorded in audio format and notes were taken. A semi-structured interview strategy was used in all interviews. The topics of the interview are:

1. Warm up and experiences: Questions about the respondent's background, experiences and current activities.

2. Overview of the software testing process: issues related to the test object, the testing activities, and the information required and generated to perform the test.

3. Challenges and strengths during testing: This topic captures strengths/good practices as well as challenges/poorly performing practices. Respondents were asked to state which practice they use, what value it contributes, and where it occurs in the testing process.

4. Room for improvement in the testing process: This topic gathers views on which challenges must be eliminated, why, and how the testing process can be improved.

4.3.3. Process documents

Process documents such as software development process documents, software test descriptions, software test plans, and test reports were studied to gain insight into the testing activities. In addition, documents describing the organization and the overall development process were studied to become familiar with the terminology used by the company. This, in turn, facilitated the understanding and analysis of the interview data.

4.4. Data Analysis

To understand the challenges and strengths of the automotive testing process, we performed an in-depth analysis of the different units of analysis using a coding approach. Five interview transcripts were manually coded to create an initial set of codes. These codes were grouped into main categories, predefined according to our research questions (Level 1), the literature (Level 2), and open coding (Levels 3 and 4); see Table 4. From this, a coding guide was developed. We used open coding for the interview transcripts, so the set of codes evolved continuously: if we found a new statement that did not fit an already identified code, we created a new one (such as "interaction and communication"); when we found another statement that belonged to an existing code, we linked the statement to that code. After coding the text, we examined each cluster, identified very similar statements, and reformulated them to represent a single challenge/strength. Upon completion, we reviewed the clusters and provided a high-level description for each. To validate the coding guide, an employee of the case organization independently coded interview transcripts; the results were compared with the researchers' interpretations, and necessary revisions were made. The coding guide was refined continuously throughout the data extraction phase.
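
To make the mechanics of this coding step concrete, the following minimal sketch shows how transcript statements can be linked to codes and grouped under predefined top-level categories. It is purely illustrative: the category names, codes, and example statements are hypothetical and are not the study's actual coding guide.

```python
from collections import defaultdict

# Hypothetical top-level categories (Level 1), aligned with the research
# questions; the lower-level codes emerge from literature and open coding.
categories = {
    "challenges": ["organization_of_testing", "interaction_and_communication"],
    "strengths": ["experience_based_testing"],
}

coded_statements = defaultdict(list)  # code -> transcript statements

def code_statement(statement: str, code: str) -> None:
    """Link a transcript statement to a code; a new code is created implicitly."""
    coded_statements[code].append(statement)

# Example usage with invented transcript fragments:
code_statement("The testing process feels unstructured.", "organization_of_testing")
code_statement("It is hard to reach former team members.", "interaction_and_communication")

# Very similar statements within one code are later merged by the
# researchers into a single formulated challenge/strength.
for level1, codes in categories.items():
    for code in codes:
        print(level1, code, len(coded_statements[code]), "statement(s)")
```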

4.5. Results

The results include a description of the testing process, and the strengths and challenges associated with that process.

4.5.1. Testing process

The majority of respondents (9 respondents) indicated that there is a lack of a clear testing process that can be applied to any project lifecycle. Of the 8 projects studied, only 3 had a clearly defined testing process. It was observed that each project followed a process very similar to that shown in Figure 2, although not all projects followed all the activities outlined in this process.

The organization's testing strategy describes which types of testing are required and how they are applied to development projects with the least risk. The strategy focuses mainly on black-box testing, with only a small portion of the testing performed as white-box testing. There is a tester's handbook within the organization that describes the testing process, methodologies, and tools; however, this research shows that most teams do not use it. The main activities performed are test planning, test analysis and design, test setup, and test execution and reporting. The test plan was prepared in advance in five projects (three large teams, P1, P2, and P4, and two small teams, P5 and P7). Most of the small teams do not have a software test plan, although they have a very flexible testing strategy/approach.

These steps are described in more detail below:

Test Planning: This activity addresses what to test and why. The entry criterion for this activity is that the prioritized requirements for the release are available as input to the test plan. The deliverable of this phase is the software test plan, including the estimation and scheduling of required resources, the test artifacts to be created, and the required techniques, tools, and test environment. The roles involved at this stage are the customer, project managers, and test leaders; if the project has no test lead available, the developers themselves participate in the test planning activities. The exit criterion is the approval of the test plan by the client and project management.

Table 4: Coding scheme used in the analysis

Test Analysis and Design: This activity aims to determine how testing will be performed (by defining the test data, test cases, and test procedures for the process or system under test), which is documented in the software test description. The software test description also defines which tests (i.e. testing techniques) will be performed during test execution. Other deliverables of this phase are the requirements traceability matrix, the test cases, and the design of the test scripts that implement the test cases. Test cases are written and managed using the test case management tool used in all projects. The entry criterion for this stage is that the software test plan has been approved by the customer and project management. The test plan produced in the previous phase is updated with detailed schedules for each testing activity. The role involved in this phase is the test lead or test coordinator, who is responsible for designing, selecting, prioritizing, and reviewing test cases. Since testers share responsibilities across projects and are not always available for testing tasks, in most projects the developers write the test cases for their own code. The project manager supervises the testing activities.
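
As a minimal sketch of the requirements traceability matrix mentioned above (the requirement and test case IDs are hypothetical, and this is not the organization's actual tooling), the matrix can be viewed as a mapping from requirements to the test cases that verify them, which directly answers the question "which tests must be updated when this requirement changes?":

```python
# Hypothetical requirements traceability matrix (RTM):
# requirement IDs mapped to the test cases that verify them.
rtm: dict[str, set[str]] = {
    "REQ-001": {"TC-010", "TC-011"},
    "REQ-002": {"TC-020"},
    "REQ-003": set(),  # not yet covered by any test
}

def tests_for_changed_requirements(changed: set[str]) -> set[str]:
    """All test cases that must be reviewed when these requirements change."""
    return set().union(*(rtm.get(req, set()) for req in changed))

def untested_requirements() -> set[str]:
    """Requirements with no linked test case, i.e. a coverage gap."""
    return {req for req, tcs in rtm.items() if not tcs}

print(tests_for_changed_requirements({"REQ-001"}))  # e.g. {'TC-010', 'TC-011'}
print(untested_requirements())                      # {'REQ-003'}
```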

Test Setup: In automotive software testing, test setup is the most important part of the testing process, as it involves building a test environment that represents the target environment. The outcome of this stage is that the hardware is in place and can be operated as a live environment, including test scripts and all other test data. Since most projects in the case organization involve engine control and electronic control units (ECUs), modeling tools such as Simulink and MATLAB are used to simulate the target environment. Most testers or developers participate in this activity; project managers are responsible for providing resources such as hardware devices, and the test team leader oversees the activity.
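
To illustrate the idea of pairing the software under test with a simulated plant before the real hardware is available, here is a deliberately tiny software-in-the-loop sketch. It is only an analogy for what tools such as Simulink provide; the controller, plant model, and pass criterion are all invented for this example:

```python
# Illustrative software-in-the-loop harness: the controller under test is
# exercised against a simulated plant standing in for the target hardware.
# The models, gains, and pass criterion below are invented for this sketch.

def controller(setpoint: float, measured: float) -> float:
    """Toy proportional controller (the unit under test)."""
    return 0.5 * (setpoint - measured)

def plant_step(state: float, actuation: float, dt: float = 0.01) -> float:
    """Trivial integrator plant model approximating the target hardware."""
    return state + dt * actuation

def run_test(setpoint: float = 1.0, steps: int = 2000) -> bool:
    """Closed-loop run; passes if the output settles near the setpoint."""
    state = 0.0
    for _ in range(steps):
        state = plant_step(state, controller(setpoint, state))
    return abs(setpoint - state) < 0.05

assert run_test(), "controller failed to settle near the setpoint"
print("test setup sanity check passed")
```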

Test Execution and Reporting: The final stage of the testing process is to execute the tests and report the results to the client. To execute the tests, the test lead or project manager chooses suitable people to run the test scripts. After testing is complete, the results are recorded in the defect management system. The deliverable of this phase is a software test report describing the testing performed and its conclusions. The results are also analyzed and evaluated later to see whether there are discrepancies compared to the test report of the previous version. If critical errors occur, they are corrected and the tests repeated. The project manager is responsible for defining the stopping criteria for test execution.
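
A minimal sketch of that comparison step (the report format and verdicts are hypothetical) is to diff the verdicts of the current run against those recorded for the previous version, which immediately flags regressions and newly passing tests:

```python
# Hypothetical per-test verdicts from the previous and current versions.
previous = {"TC-010": "pass", "TC-011": "pass", "TC-020": "fail"}
current = {"TC-010": "pass", "TC-011": "fail", "TC-020": "pass"}

def diff_reports(prev: dict[str, str], curr: dict[str, str]) -> dict[str, tuple]:
    """Test cases whose verdict changed between the two reports."""
    return {tc: (prev[tc], curr[tc])
            for tc in prev.keys() & curr.keys()
            if prev[tc] != curr[tc]}

for tc, (old, new) in sorted(diff_reports(previous, current).items()):
    print(f"{tc}: {old} -> {new}")  # TC-011: pass -> fail is a regression
```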

4.5.2. Strengths and Good Practices

The strengths found in the testing process depend on team size: most practices that are considered strengths in small teams are not seen as strengths in large teams, and vice versa. Indeed, it was clear from the interviews that strengths vary with team size.

Working in small agile teams: Testing activities are flexible and do not generate extensive test reports; large teams manage this only for small releases and otherwise take a very structured, plan-driven approach to testing. Small teams focus on continuous integration and iterative development (e.g. P2 uses Scrum with continuous integration and sprint planning). Agile testing practices make it easier to plan the tests for each iteration in line with the requirements specification, which in turn keeps testing properly aligned with other activities such as requirements and design. Compared to small teams, large teams focus more on reusing test cases, which makes them more efficient.

Figure 2: Testing process

Communication: The benefits of communication are found in projects with agile practices such as stand-up meetings, regular stakeholder collaboration, and working together in open office spaces. A tester is involved in each activity, which indicates parallel testing effort throughout the development lifecycle. In addition, agile methodologies enhance team spirit, lead to efficient interaction among team members, and form cross-functional teams. Other projects rely on weekly meetings and electronic channels such as email and messaging.

Shared roles and responsibilities: Small teams see it as an advantage that one person performs both the tester and developer roles, because no time is lost waiting for someone else to test the software. As one developer put it: "Since the tester is the same person as the developer, there is no delay in reporting. If a developer/tester finds a bug, he knows where the bug was introduced; instead of blaming someone else, the developer is more careful while writing the code." However, large teams do not see this as an advantage; most of them have no dedicated testers (except for one large team, which has a dedicated testing team).

Testing Techniques, Tools, and Environments: Here, too, the picture differs with project scale. Small teams use fewer testing tools and methods to avoid additional documentation. These teams typically have fewer project modules than larger teams, and the system is well known to the tester/developer (development and testing are done by the same person), which makes it easy to test with a minimal set of tools and methods. Small teams (such as projects P3 and P6) generally run smoke or unit tests first to test the basic functions of the system, and then conduct integration tests. One employee conveyed the purpose of unit/basic testing as follows: "I see unit testing as a strength. It goes into the details and makes sure every subsystem works as expected." The testing tools used here were developed by the team to suit the project's needs; however, these custom tools are not shared between teams. The main concern for small teams is to have a test environment with the same hardware and interfaces as the target environment, which makes test maintenance in the project efficient.
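
A minimal sketch of the smoke-then-unit pattern described above (the function under test and its checks are hypothetical):

```python
import unittest

# Hypothetical unit under test: clamp a sensor reading into its valid range.
def clamp_reading(value: float, low: float = 0.0, high: float = 100.0) -> float:
    return max(low, min(high, value))

class SmokeTests(unittest.TestCase):
    """Fast checks that the basic functions respond at all."""
    def test_returns_a_number(self):
        self.assertIsInstance(clamp_reading(42.0), float)

class UnitTests(unittest.TestCase):
    """Detailed checks that each part behaves as expected."""
    def test_within_range_passes_through(self):
        self.assertEqual(clamp_reading(50.0), 50.0)

    def test_out_of_range_is_clamped(self):
        self.assertEqual(clamp_reading(-5.0), 0.0)
        self.assertEqual(clamp_reading(150.0), 100.0)

if __name__ == "__main__":
    unittest.main()
```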


In contrast to small teams, large teams conduct testing using multiple methods and tools for multiple activities. One of the most evident strengths found in large teams is experience-based testing (e.g. in projects P1, P2, P4, and P8): since the same team members have been working on the same project for years, they find it easy to use their experience-based knowledge in product development and testing. An employee responsible for quality coordination in a large team said: "The metrics used for testing didn't help our team much, because the testing is based more on our experience, which we use to decide what types of test cases need to be run." Another perceived strength is the exploratory testing/session-based test management applied in projects P1 and P2; one employee pointed out that with "the charter to perform session-based testing (i.e. exploratory testing), we found critical bugs at a finer level of detail". Hardware-in-the-loop (HIL) testing is also considered a strength of large teams, as it detects most of the defects during integration testing; HIL for integration- and system-level testing can reveal the most critical defects, such as timing and other real-time issues, in large and complex systems. Informal code reviews are considered an advantage in large teams, although they are used in small teams as well; an informal code review avoids bias in testing because it is performed by someone other than the person responsible for the code.

Regarding tools, test case management tools were cited as an advantage in large teams (such as P4); as one employee put it: "I think test case management (storing test cases, choosing which tests should be executed, and getting feedback from the testers) is a good idea." Defect management tools are also considered useful in several projects. The test environments in large teams are regarded as good for testing because they closely represent the live environment.

4.5.3. Challenges

The challenges are broken down into several areas. For each challenge area, we indicate the number of projects that reported challenges in that area and the process areas it concerns (see Table 5), and we report a set of related issues per area.

Table 5: Overview of Challenge Areas

C01: Issues related to the organization of testing and its process: Organizational issues concern poorly executed practices related to the organization and its testing process, such as change management and the lack of a structured testing process. Organizational issues also include stakeholder attitudes towards testing (e.g. testing being given a low priority).

C01_1: No unified testing process: Projects vary in their testing methodologies and tool usage, and due to fragmented functionality and the evolving complexity of hardware and software, finding a unified process that fits all projects is considered a challenge. Although the tester's handbook would help achieve a more uniform process, it is not used, because teams are not aware of it or feel that it does not fit their project characteristics. An unstructured and poorly organized process is acceptable for small projects but not for large ones, as it affects quality. As one interviewee noted: "The testing process feels unstructured and always disorganized. It works well for small projects, but not for large ones."

C01_2: Testing is rushed and poorly planned: Delivery dates are not extended when more time is needed, so testing is squeezed and rushed. In addition, the client did not deliver the hardware for testing on time and in adequate quality, so testing could not be completed as planned; the result was widespread non-compliance with testing deadlines.

C01_3: Stakeholder attitudes towards testing: Past improvement efforts focused on implementation, not testing. As a result, new testing methods receive little support from management, which sometimes forces teams to develop their own methods and tools at considerable effort.

C01_4: Asynchronous test activities: Testing is not synchronized with other vendor-related activities; test artifacts must be restructured to be in sync with vendor-supplied artifacts. This leads to rework on the testing side.

C02: Time and cost constraints for testing: Challenges with time and cost constraints can be due to insufficient time spent on requirements, testing activities, or testing processes.

C02_1: Lack of time and budget to specify verification requirements: Verification requirements are requirements that are verified during testing (for example, specifying the environmental conditions under which the system must be tested). The time and money saved by not writing verification requirements result in considerable rework and lost time in other parts of the process, especially testing.

As one interviewee noted: "Rewriting customer specifications into our own requirements? That's not possible today, because customers won't pay for it and we don't have an internal budget." Overall, the lack of verification requirements leads to missing goals and an unclear scope for testing.

C02_2: Test equipment not available on time: The test equipment does not arrive on time, and its quality is not good enough, making unit testing impossible.

C03: Requirements-related issues: Insufficiently specified testing requirements, hard-to-understand high-level requirements, and requirements volatility are challenges that prevent the proper testing needed to achieve high quality. These problems usually occur when the customer does not specify the requirements correctly due to a lack of time or knowledge, which means the requirements are poorly managed.

C03_1: Lack of explicit requirements: Too little effort is invested in understanding and documenting explicit requirements, so much effort has to be spent on reinterpretation at later stages (e.g. testing). As one employee pointed out: "I think we'd better start putting more effort into requirements management, to avoid customers complaining that we misinterpreted their specified requirements. You end up with fewer problems and save the time of repeating changes and retesting everything."

C03_2: Unclear criteria for finalizing the test design and starting/stopping testing: According to the interviews, the test design can only be finalized once the requirements stabilize. Respondents linked requirements volatility to the test start and stop criteria: volatile requirements force the entire test plan to be redefined, which acts as a barrier when actual testing begins. Where test scripts are used for testing, teams find it hard to define when to stop scripting and when to start/stop testing as requirements keep arriving. The criteria for stopping testing are mostly driven by budget and time, not test coverage.

C03_3: Requirements traceability management issues: The traceability between requirements and tests could be better, so that it is easy to identify which test cases need to be updated when requirements change. The lack of traceability also makes it harder to determine test coverage. The reason for the lacking traceability is that requirements are sometimes too abstract to link to concrete functions and their test cases.

C04: Resource constraints for testing: These challenges are related to the availability of skilled testers and their knowledge.

C04_1: Lack of dedicated testers: Not all projects have dedicated testers; instead, the developers interpret the requirements, implement the software, and write the tests. This lack of independent verification and validation (different people writing and testing the software) leads to bias in testing.

C04_2: Experienced testers are not available: Given the complexity of the systems, it takes time to accumulate the knowledge needed to become a good tester. When experienced testers are rotated between projects, it can be difficult to find someone who can complete the task at hand. One respondent in charge of testing said: "It is difficult to find people with the same experience, and it takes them a long time to learn and understand the product due to its complexity. That knowledge is required before testing."

C05: Knowledge management issues in testing: The knowledge management issues identified in this case study include:

C05_1: Issues with systematic knowledge transfer and knowledge sharing in the testing domain: New testing techniques used by the company (e.g. exploratory testing) require a great deal of knowledge, which is often missing because testers keep changing and newly hired testers do not bring this knowledge into the project. Despite the need to reach a state where a project does not depend on a single person, there is not enough information and training material on how to do the testing. The interviews also showed that the knowledge transfer challenge is magnified by the emphasis on control engineering, mechatronics, and electrical engineering in addition to software.

C05_2: Lack of basic testing knowledge: Because testers lack basic testing knowledge, testing is given a low priority. One respondent involved in lifecycle management activities stated: "I think there is a lack of information about the basics of testing. Some of us don't know when to start testing and when to end testing, and it feels like a gray area that is not clearly defined anywhere."

C06: Issues related to interaction and communication in testing: Practical issues concerning the communication between the different stakeholders in testing, including inadequate forms of communication such as a lack of regular face-to-face meetings and a lack of communication between customers and testers.

C06_1: Lack of regular interaction with customers about requirements: Customer interaction is frequent at the beginning of a project, but there is too little of it concerning the verification requirements for testing. The customer does not have the right people in place to communicate the testing requirements.

C06_2: Lack of interaction with other project roles during testing: There is a lack of communication with former team members who have moved to another project, even when they are needed (for example, to verify and fix identified bugs). One interviewee described such an incident as follows: "A person was assigned to our team and then had to communicate with us, but sometimes it is difficult to reach him because he now works on another team."

C06_3: Lack of informal communication with customers: Overall, there is a lack of face-to-face and informal communication with customers, who communicate by providing vague descriptions that are never clarified. One manager added: "I think the most important thing is to maintain the relationship (the informal one with the client) and to tell the client that we can't start working until you tell us what you want."

C07: Issues related to testing techniques, tools, and environments: Issues related to the use of the current testing techniques, environments, and tools.

C07_1: Lack of automation leads to rework: The automation of unit tests and regression tests is not done effectively. One interviewee noted that "as long as there is no automation in place, testing is rework". Generating and effectively automating tests is seen as a challenge due to a lack of tool support, leading to rework whenever the tests have to be re-run.
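
As a minimal sketch of the kind of automation meant here (the function under test and its cases are hypothetical), a parametrized regression suite turns re-running all accumulated cases into a single command instead of manual rework:

```python
import pytest

# Hypothetical unit under test: map an accelerator request (0-100 %)
# to a torque fraction (0-1), clamping out-of-range inputs.
def scale_torque(request_pct: float) -> float:
    return max(0.0, min(100.0, request_pct)) / 100.0

# Growing regression table: new cases are appended here, and the whole
# suite is re-run automatically with `pytest -q`.
REGRESSION_CASES = [
    (0.0, 0.0),
    (50.0, 0.5),
    (100.0, 1.0),
    (120.0, 1.0),  # out-of-range input must be clamped
]

@pytest.mark.parametrize("request_pct, expected", REGRESSION_CASES)
def test_scale_torque(request_pct, expected):
    assert scale_torque(request_pct) == pytest.approx(expected)
```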

C07_2: No unified tool for all testing activities: A test leader pointed out the need for a unified tool that can be used for testing: "We have many tools available for testing, but it is difficult to decide which tool to use, because each tool has advantages and disadvantages. Sometimes we are forced to develop custom tools because we cannot get any tool from the market that does everything for us." A single tool covering all testing activities in the automotive domain would be easier to use than managing and organizing the multitude of tools currently in place.

C07_3: Inadequate maintenance of test environments: Multiple test environments need to be maintained; lacking maintenance leads to rework and long lead times before actual testing can begin. One interviewee summed it up: "We have several test environments and test rigs that need to be maintained. They are not always maintained, and then it takes a long time before actual testing can start."

C08: Issues related to quality aspects: Issues related to addressing quality attributes in testing, such as reliability, maintainability, correctness, efficiency, effectiveness, testability, flexibility, and reusability, including the trade-offs between quality and other activities.

C08_1: Reliability issues: The system is not as reliable as it needs to be; quality is difficult to build in due to the lack of a testing process and failures of hardware components. As one interviewee noted: "It is difficult to meet several required criteria of a system, such as longer operating time, lower resource consumption, the ability to work on different platforms, etc."

C08_2: Quality attributes not well specified from the beginning of the project: Quality requirements are not well specified, leading to quality problems in the complex systems of existing products on the market.

C08_3: No quality measurement/assessment: There are no quality measures, yet their need is recognized for improving the evaluation of test results. One employee said: "Although our customers are happy, the quality curve must get better. I believe quality measures should be recorded to enable better analysis of the test results."

C09: Defect detection issues: Issues related to practices that prevent testers from tracing defects or their root causes, as well as issues related to defect prevention.

C09_1: Late testing makes fixing defects expensive: Due to system complexity and late testing, the number of defects grows as the system grows and scales. Because many bugs were missed in previous releases, customers reported a large number of bugs that had to be corrected in subsequent releases, making defect fixing expensive.

C09_2: Difficulty tracking defects left unfixed in previous versions: For the development of complex parts (i.e. those involving processing-time and other critical issues), the system behavior of two different versions needs to be the same. This does not always hold, because the current version can trigger a bug that was not fixed in the previous version. Since such bugs are hard to trace in so large a system, they may become severe in the next version.

C10: Issues related to documentation: This challenge area covers poorly executed practices around test documentation, such as insufficient, missing, or excessive documentation, which fail to properly support quality in the testing process.

C10_1: Documentation of test artifacts is not kept up to date: Respondents emphasized that the documentation provided (such as test cases and other test artifacts) is not sufficient for testing and cannot be trusted. One respondent added: "Test documentation is not kept up to date, so we found it unreliable." One reason mentioned was that minor changes are made to the test artifacts without updating the test documentation accordingly; this failure to update the documentation leads to rework.

C10_2: No detailed manuals for some specific test methods and tools: A related observation is the lack of documentation on how tools and methods work. One interviewee summed it up: "There is support for the tools, but we just can't find people who can solve problems with them. I think it could be better documented." There are some manuals within the organization that serve this purpose, but for some specific tools (such as custom tools) or methods they do not exist. The problem also arises when the people doing the testing do not understand the terms used in the manuals, or are not aware of the manuals at all.

For more, see the next article in this series: "Analysis of Automotive EBSE Testing Process (3): EBSE Step 2, Determining Improvements Through a Systematic Literature Review".
