Cloud Efficiency DevOps Practice: Integrating and Analyzing Test Automation with Cloud Efficiency

For modern software development, delivering features continuously, quickly, with high quality and low risk is the core business requirement. Achieving this takes more than good architecture design and solid engineering capability; fast and reliable test feedback is just as important, and that in turn relies on test automation.

As a DevOps platform for enterprise developers, Cloud Efficiency provides a rich set of capabilities to help teams implement test automation as part of their DevOps practice.

To put it simply, an enterprise's self-built test automation system can be divided into three forms:

Form 1: Based on open source test automation tools

Many companies start building test automation by choosing an open source test automation tool. Such a tool typically consists of the following parts (taking RobotFramework as an example):

  1. A test execution tool, such as robot
  2. Test cases, such as .robot files
  3. Test results and reports, such as the log.html and report.html generated after execution
  4. Test capability libraries used to perform specific kinds of testing, such as SeleniumLibrary

To build a complete test automation system, you usually also need to add:

  1. Scheduling and execution platform
  2. Result analysis and statistical reports
  3. Test result notification capability

Built on Cloud Efficiency, the overall architecture looks like this:

  1. Test automation cases are stored in a Git repository on the Cloud Efficiency code platform
  2. A test step for executing the automated tests, created with Cloud Efficiency's custom step capability
  3. A Cloud Efficiency pipeline that triggers and chains together code checkout, build, and automated testing
  4. A notification mechanism (DingTalk messages)
  5. Quality data reports, which can be shown directly in the pipeline's test results or sent to a self-built reporting service for display

Taking RobotFramework as an example, integrating an open source test automation tool with Cloud Efficiency involves the following steps.

1. Select or write the flow step corresponding to the open source test automation tool

Cloud Efficiency does not ship with built-in components for open source test automation tools, but with its flow cli tool a company can easily build custom test automation components that meet its own requirements. For how to implement and publish a flow step with flow cli, refer to the flow cli material in the Cloud Efficiency Academy.

Here we take RobotFramework as an example and explain only the key parts.

First, initialize a flow step component project with the flow step init command.

1.1 Execution environment and commands

In the step.yaml file, image specifies the environment image in which the tests run, in this case registry.cn-hangzhou.aliyuncs.com/feiyuw/flow-robotframework:1.0; the content of the image is defined in its Dockerfile.

Add a shell-type input box to items to set the execution command. The default value here is robot -L Trace -d robot_logs . , where the trailing "." means the current directory, i.e. the directory where the code is checked out.

```yaml
# ...
image: registry.cn-hangzhou.aliyuncs.com/feiyuw/flow-robotframework:1.0
items:
  - label: 执行命令
    name: STEP_COMMAND
    type: shell
    value: |
      # NOTE: output directory should be robot_logs
      robot -L Trace -d robot_logs .
# ...
```

1.2 Red line configuration

First define the red line (quality gate) configuration components in step.yaml; these components are shown to the user when the pipeline step is configured.

```yaml
items:
  - label: 红线信息
    name: CHECK_REDLINES
    type: addable_group
    rules:
      - require: false
    add_button:
      type: icon
      icon: plus
      text: 增加红线
      tip:
        icon: question-circle
        description: 红线校验失败步骤标记为失败
    template:
      items:
        - name: redline
          label: 红线
          position: flat
          type: custom_redline_dropdown
          datamap: '[{"key": "PassRate", "type":"GE"}]'
          rules:
            - require: false
```

In addition, add a red line check at the end of step.sh, for example:

```bash
redline Passed:成功:$STEP_ROBOT_PASS:Success Failed:失败:$STEP_ROBOT_FAILED:Error PassRate:成功率:$STEP_ROBOT_PASSRATE:Default
```
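The STEP_ROBOT_PASS, STEP_ROBOT_FAILED, and STEP_ROBOT_PASSRATE values are not provided by the platform; step.sh has to compute them from RobotFramework's output.xml (the step.sh shown later in this article calls a parse_output.py helper for this). That helper is not shown in the original, so the following is only a minimal sketch of what it might look like, assuming the standard statistics layout of output.xml:

```python
# parse_output.py -- hypothetical helper: prints "pass,failed,passrate" for step.sh to consume.
import sys
import xml.etree.ElementTree as ET


def parse_output(output_xml):
    root = ET.parse(output_xml).getroot()
    # Robot Framework writes overall totals as <stat pass="..." fail="..."> elements
    # under <statistics><total>; the last one is the "All Tests" total.
    stats = root.findall('./statistics/total/stat')
    if not stats:
        raise RuntimeError(f'no total statistics found in {output_xml}')
    total = stats[-1]
    passed = int(total.get('pass', 0))
    failed = int(total.get('fail', 0))
    executed = passed + failed
    pass_rate = passed * 100 // executed if executed else 0
    return passed, failed, pass_rate


if __name__ == '__main__':
    print('%d,%d,%d' % parse_output(sys.argv[1]))
```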

After the flow step is written and debugged, publish it to the current enterprise.

2. Add test automation use cases to the code base

For automated testing of an entire product or a subsystem, we recommend storing the test cases in a separate code repository; for automated testing of a specific application, we recommend storing the test cases in the application's own repository, on the same branches that development uses.

There are many advantages to managing automated test cases and application code in the same code base:

  1. Test cases and code stay matched and up to date, so automated testing can be brought in early during the development phase
  2. The development branching model is reused directly, with no separate version management for the automated test cases
  3. Development and testing collaborate closely around the same Git repository, which makes good practices such as ATDD easier to adopt
  4. Integration into the pipeline is easy: whenever test code or application code changes, the tests run and give feedback quickly, which speeds up locating and fixing problems

Example: test automation cases for alpd-bot-ssh.

alpd-bot-ssh is an SSH service that provides IP geolocation lookup and weather query capabilities. Its automated tests are implemented with RobotFramework and stored in the atest directory of the repository, with the following structure:
```
atest
├── __init__.robot
├── ip.robot                  # test suite for the IP geolocation scenario
├── resources                 # shared test resources: common variable definitions, helper functions, etc.
│   ├── common_resource.robot
│   └── ssh_lib.py
└── weather.robot             # test suite for the weather query scenario
```
From the repository root, run the tests with: robot -L Trace atest

3. Add test automation nodes to the pipeline

Open your continuous integration pipeline; if you do not have one yet, create one on Flow.

  1. Edit the pipeline and add a blank task
  2. Add the custom step "RobotFramework test"
  3. Configure the execution command and red lines

4. Upload the test report to Cloud Efficiency so it is displayed in the pipeline's execution results

  1. Edit the test automation node added in step 3 and add a new step
  2. Configure the test report directory (robot_logs here) and the test report entry file (report.html here)

5. Synchronize test results to self-built report system

Sometimes we need further statistical analysis of the test results, and the reports provided by the test automation tool are not enough; in that case we usually build our own reporting system. So how do we upload the test results produced on Cloud Efficiency to this self-built report system?

5.1 Ensure that the report system can be accessed by Cloud Efficiency

Cloud Efficiency cannot reach a report system deployed in a private network, so the report system must expose an interface accessible from the public Internet. For security, we recommend exposing only the necessary interfaces and protecting them with an IP whitelist.
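The report system itself is self-built, so its implementation is entirely up to the team. Purely as an illustration, here is a minimal sketch of an upload interface protected by a simple IP whitelist, assuming a Flask-based service and a hypothetical /api/v1/results endpoint (in production, whitelisting is usually better enforced at the firewall or load balancer):

```python
# Hypothetical report-service endpoint protected by an IP whitelist.
from flask import Flask, abort, jsonify, request

app = Flask(__name__)

# Only allow calls from known egress IPs (the values here are placeholders).
ALLOWED_IPS = {'203.0.113.10', '203.0.113.11'}


@app.before_request
def check_ip_whitelist():
    # Note: behind a reverse proxy, request.remote_addr is the proxy's address.
    if request.remote_addr not in ALLOWED_IPS:
        abort(403)


@app.route('/api/v1/results', methods=['POST'])
def receive_results():
    payload = request.get_json(force=True)
    # Persist the result for later statistics (storage omitted in this sketch).
    app.logger.info('received test result: %s', payload)
    return jsonify({'code': 0})


if __name__ == '__main__':
    app.run(host='0.0.0.0', port=8080)
```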

5.2 Add the report upload step to the flow step

Open the flow step created in step 1, edit step.sh, and add the report upload logic.

Note:
The upload must happen before the redline check. Recommended information to include: test results, code branch, code version, committer, pipeline name, and so on.
```bash
# ...
# sh -ex $WORK_SPACE/user_command.sh
bash -c "$STEP_COMMAND"

output=`python3 /root/parse_output.py $OUTPUT_XML`

STEP_ROBOT_PASS=`echo $output | awk -F, '{print $1}'`
STEP_ROBOT_FAILED=`echo $output | awk -F, '{print $2}'`
STEP_ROBOT_PASSRATE=`echo $output | awk -F, '{print $3}'`

# upload test result to report server
python3 /root/upload_to_report_server.py $OUTPUT_XML $CI_COMMIT_REF_NAME $CI_COMMIT_SHA $EMPLOYEE_ID $PIPELINE_NAME $BUILD_NUMBER

redline Passed:成功:$STEP_ROBOT_PASS:Success Failed:失败:$STEP_ROBOT_FAILED:Error PassRate:成功率:$STEP_ROBOT_PASSRATE:Default
```
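upload_to_report_server.py is likewise something we write ourselves and is not shown in the original. A minimal sketch, assuming the hypothetical /api/v1/results endpoint sketched in section 5.1 and the same output.xml statistics layout as parse_output.py:

```python
# upload_to_report_server.py -- hypothetical helper: push a test summary to the
# self-built report service. The endpoint, address, and payload fields are assumptions.
import sys
import xml.etree.ElementTree as ET

import requests

REPORT_SERVER = 'http://report.my.corp'  # placeholder address


def summarize(output_xml):
    # Same statistics layout assumption as in parse_output.py.
    stats = ET.parse(output_xml).getroot().findall('./statistics/total/stat')
    total = stats[-1]
    return int(total.get('pass', 0)), int(total.get('fail', 0))


if __name__ == '__main__':
    output_xml, ref_name, commit_sha, employee_id, pipeline_name, build_number = sys.argv[1:7]
    passed, failed = summarize(output_xml)
    resp = requests.post(f'{REPORT_SERVER}/api/v1/results', json={
        'pipeline': pipeline_name,
        'build_number': build_number,
        'branch': ref_name,
        'commit': commit_sha,
        'committer': employee_id,
        'passed': passed,
        'failed': failed,
    }, timeout=30)
    resp.raise_for_status()
```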

The final pipeline is roughly as follows:

Form 2: Test automation with self-built Jenkins

Some teams already run self-built Jenkins (or similar tools) to schedule and execute test automation, and may even have customized Jenkins through secondary development; for applications with special environment requirements, such as IoT development, it is often more economical to reuse these existing tools. For this scenario, Cloud Efficiency can connect seamlessly to a customer's existing Jenkins service, so that development and testing can still be chained together in one pipeline.

1. Ensure that the self-built Jenkins can be accessed by Cloud Efficiency

The self-built Jenkins service must be reachable over the public network so that Cloud Efficiency can access it and trigger the corresponding jobs. As before, for security, expose only the necessary interfaces and enable IP whitelist protection.

2. Add a Jenkins task node to the pipeline

Edit the Cloud Efficiency pipeline, add a task node, and select the Jenkins task type.

Next, configure the Jenkins address, the authentication method, the corresponding job name, and the trigger parameters (for example, the image produced by the upstream build).

3. View results and statistical reports

After the pipeline runs, the result information is synchronized back to the Jenkins task component, and from the Cloud Efficiency pipeline's run result you can jump directly to the Jenkins job log.

As for statistical reports: in this mode Cloud Efficiency does not store any execution data for the Jenkins task, so it is recommended to upload the data (and do any similar reporting work) inside the Jenkins job itself.
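For illustration only, a post-build script inside the Jenkins job could push the summary to the self-built report service roughly like this; the endpoint and payload are assumptions, JOB_NAME, BUILD_NUMBER, and BUILD_URL are standard Jenkins environment variables, and GIT_BRANCH/GIT_COMMIT are provided by the Git plugin:

```python
# Hypothetical post-build script run at the end of a Jenkins job to push the
# test summary to the self-built report service.
import os
import sys

import requests

REPORT_SERVER = 'http://report.my.corp'  # placeholder address

# The job passes its own counts, e.g.: python3 push_results.py 120 3
passed, failed = int(sys.argv[1]), int(sys.argv[2])

resp = requests.post(f'{REPORT_SERVER}/api/v1/results', json={
    'pipeline': os.environ.get('JOB_NAME'),
    'build_number': os.environ.get('BUILD_NUMBER'),
    'build_url': os.environ.get('BUILD_URL'),
    'branch': os.environ.get('GIT_BRANCH'),
    'commit': os.environ.get('GIT_COMMIT'),
    'passed': passed,
    'failed': failed,
}, timeout=30)
resp.raise_for_status()
```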

Form 3: Self-built test automation platform

If open source test automation tools cannot meet the testing requirements, and there are custom scheduling, triggering, and management needs, some companies choose to build their own test automation platform. In that case, how do we integrate it with Cloud Efficiency to get a one-stop R&D experience?

The approach is similar to integrating an open source test automation tool. The difference is that the self-built test automation platform needs to expose two interfaces to Cloud Efficiency:

  1. Trigger test execution
  2. Get test results

Here we assume that the address of the self-built test automation platform is http://taplatform.my.corp and that it exposes the following two interfaces:

  1. POST /api/v1/runs

request: {"ref_name": "feature/limit_1", "trigger_by": "yunxiao", "suites": "all"}

response: {"code": 0, "run_id": 123}

  2. GET /api/v1/runs/{run_id}

response: {"code": 0, "status": "RUNNING|PASS|...", "report_link": "http://taplatform.my.corp/reports/1234", "summary": {"total": 1000, "pass": 1000, "duration": 1200}, ...}

1. Write a flow step to trigger the test automation platform and set the red line

The implementation is similar to integrating an open source test automation tool: the work is mainly in step.yaml and step.sh.

In step.yaml, configure the address of the self-built test platform and the test case filter parameters, for example:

```yaml
items:
  - label: 测试平台地址
    name: TEST_PLATFORM_HOST
    type: input
    value: http://taplatform.my.corp
  - label: 用例
    name: SUITES
    type: input
    value: all # 用例筛选条件
```

step.sh mainly does the following:

  1. Trigger the test platform to execute the corresponding test case
  2. Wait for the test to complete
  3. Get test results
  4. Check the red line (quality gate)

For example:

```bash
# sh -ex $WORK_SPACE/user_command.sh
output=`python3 /root/run_and_wait_until_finish.py $TEST_PLATFORM_HOST $SUITES $EMPLOYEE_ID`

STEP_ROBOT_PASS=`echo $output | awk -F, '{print $1}'`
STEP_ROBOT_FAILED=`echo $output | awk -F, '{print $2}'`
STEP_ROBOT_PASSRATE=`echo $output | awk -F, '{print $3}'`

redline Passed:成功:$STEP_ROBOT_PASS:Success Failed:失败:$STEP_ROBOT_FAILED:Error PassRate:成功率:$STEP_ROBOT_PASSRATE:Default
```

run_and_wait_until_finish.py is implemented roughly as follows:

```python
import os
import time
import sys

import requests


def start_test_task(ta_host, suites, trigger_by):
    resp = requests.post(f'{ta_host}/api/v1/runs',
                         json={'trigger_by': trigger_by, 'suites': suites})
    if not resp.ok or resp.json()['code'] != 0:
        raise RuntimeError(f'create test task error: {resp.content}')
    return resp.json()['run_id']


def generate_report(report_link):
    # Write a small HTML page that redirects to the platform's own report page.
    # report_link is the full report URL returned by the platform (see the API above).
    if not os.path.exists('report'):
        os.mkdir('report')
    with open(os.path.join('report', 'index.html'), 'w') as fp:
        fp.write(f'''<html>
<head>
<title>Test Report</title>
<meta http-equiv="refresh" content="0;URL={report_link}" />
</head>
<body>
<p>forwarding...</p>
</body>
</html>''')


def wait_until_task_done(ta_host, run_id):
    while True:
        resp = requests.get(f'{ta_host}/api/v1/runs/{run_id}')
        if not resp.ok:
            raise RuntimeError(f'task error: {resp.content}')
        data = resp.json()
        if data.get('code') != 0:
            raise RuntimeError(f'task error: {data}')
        if data['status'] in ('PASS', 'FAILED'):
            generate_report(data['report_link'])
            return data
        time.sleep(5)


if __name__ == '__main__':
    if len(sys.argv) != 4:
        raise RuntimeError('invalid arguments')
    ta_host, suites, employee_id = sys.argv[1:]
    run_id = start_test_task(ta_host, suites, employee_id)
    run_info = wait_until_task_done(ta_host, run_id)
    summary = run_info['summary']
    pass_cnt = summary['pass']
    total_cnt = summary['total']
    pass_rate = pass_cnt * 100 / total_cnt
    print('%d,%d,%d' % (pass_cnt, total_cnt - pass_cnt, pass_rate))
```

The script calls the two interfaces of the test platform and generates a report file, report/index.html. Note that this report simply redirects to the corresponding page of the self-built test platform.

2. Add test automation nodes to the pipeline

Add a blank task node to the pipeline, add a step to it, and select the flow step we customized earlier (remember to publish it to the corresponding enterprise). In the step, configure the test platform address and test cases, and set the red line information.

3. View the test report

Add a report upload step to the test node, set the test report directory to "report" and the test report entry file to "index.html".

4. Statistics and reports

In the pipeline's execution results you can see summary information such as the pass rate. Detailed statistics and reports are best implemented in the self-built test automation platform.

Summary

For the three forms of self-built test automation described above, here is a brief summary to help you choose.

  1. No test automation practice yet, planning to start building test automation capability: choose Form 1, based on open source test automation tools. This lets the team focus its energy on the testing work itself so that test automation lands quickly, while benefiting from the large body of experience accumulated in the open source community and avoiding detours.
  2. Jenkins is already in place, without secondary development, and is used only to run automated tests: Form 1 is still recommended. It saves the cost of maintaining Jenkins, and test reports and quality gates integrate better with the R&D process, avoiding tool fragmentation and data silos.
  3. A CI/CD pipeline has been built on Jenkins, or Jenkins has been customized through secondary development and tool integration: choose Form 2, test automation with self-built Jenkins. This allows faster system integration without blocking later migration and improvement.
  4. A self-developed test automation execution and analysis platform already exists: choose Form 3, self-built test automation platform. Unless you plan to rebuild from scratch, integrating the existing system with Cloud Efficiency avoids disrupting the team with a tool switch and makes better use of existing resources.


This article is the original content of Alibaba Cloud and may not be reproduced without permission.


Source: blog.csdn.net/weixin_43970890/article/details/114262723