[Interface Testing] My mental journey from 0 to 1

I am a tester with three years of experience. I joined a medical training company in Beijing as a functional test engineer in 2020. After joining, I got into interface testing in order to improve testing efficiency. What follows is my journey from zero to where I am now (there is still plenty of room for improvement, so it can't really be called reaching 1).

Motivation for doing interface testing

Looking at the company's business and team situation: I am the only tester on the team. As the product's features grew richer, the regression testing workload grew with them. Relying purely on manual regression meant low efficiency and low quality, so automated testing was inevitable, something I had to do.

As for my personal career path: I seemed busy every day, but apart from becoming more and more familiar with the business processes, my hard skills grew very slowly. If you want to broaden your testing career and go far, you have to take the initiative and seek change. Looking at the directions open to a functional test engineer, they come down to security testing, performance testing, automated testing, and management. Automated testing happened to match the company's business needs, so I decided to start there.

To get started, choose JMeter

I believe that when you plan to do something, over-preparing often gets in the way of starting. Rather than standing by the pond envying the fish, it is better to cast your line after a little preparation and see whether anything actually bites. So I chose JMeter, which is easy to get started with, and began with interface automation. At the time I had the classic reference book "Full-stack Performance Testing Cultivation Collection JMeter Practical Combat" in hand; referring to good hands-on posts online, I quickly built a set of scripts with about 200 cases, extracting shared variables from the scripts wherever possible to solve the interface dependency problem.

Abandon JMeter for interface automation testing

As the cases multiplied, I found maintenance extremely painful whenever an interface changed (it was very easy to get lost). Once, an extra space in a parameter value cost me more than three hours of searching; when I finally found the cause, I nearly collapsed (I can't remember which parameter it was, but for example, correct: ${14trainCenterId}, wrong: ${ 14trainCenterId}, with an extra space before the 14). I spent an entire afternoon searching through thousands of parameter values across more than 200 cases. At that point, I decided to give up on JMeter.

In fact, another important reason JMeter is not well suited to interface automation within a team is that it is not very convenient for multiple people to collaborate on one script. Since I was the only tester on the team at the time, though, I never really felt this shortcoming myself.

Although I abandoned JMeter, a methodology for writing interface test cases had taken shape in my mind during that time. For example, interface test case design needs to cover normal situations (all required parameters filled in, combinations of parameters, traversal of parameter values, and so on) and abnormal situations (required parameter values left empty, abnormal values, parameters omitted entirely, and so on). In fact, interface case design is similar to business function test case design; perhaps the only difference is that some parameter inputs are restricted by the front end when clicking manually, while an interface test can pass any parameter at will.

Trying Python + unittest + requests

Once you abandon tools, you have to choose a programming language. At the time there were no lofty reasons for choosing Python; it was simply the phrase "Life is short..." and the strong reputation Python has in automated testing. I also tend to want results as quickly as possible, so I did not consider Java.

If you just want to get a Python script running, you may only need to learn some basic syntax, and within three to five days you can pick a test framework. I chose this combination: Python + unittest + requests. In the early days I learned Python syntax and the testing framework side by side, imitating examples as I went, and quickly wrote a set of test scripts with the same effect as my earlier JMeter ones (the whole project had only two files, test_.py and run.py; simple, but it ran, which is enough for entry-level learning). At that point, I could be said to have gotten my foot in the door.
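For reference, a minimal sketch of what that two-file stage can look like (the URL and parameters here are placeholders, not my actual interfaces):

```python
# test_.py: one interface case with unittest + requests
import unittest

import requests


class TestProductApi(unittest.TestCase):

    def test_get_product_list(self):
        # Call the interface and check the HTTP status
        response = requests.get("http://example.com/api/products",
                                params={"page": 1})
        self.assertEqual(response.status_code, 200)
```

```python
# run.py: discover and run every test_*.py in the project
import unittest

if __name__ == "__main__":
    suite = unittest.defaultTestLoader.discover(".", pattern="test_*.py")
    unittest.TextTestRunner(verbosity=2).run(suite)
```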

Because I was just starting out and there were no seniors on the team to guide me, I was essentially crossing the river by feeling for stones and took plenty of detours (the road I am walking now may still not be straight). But every time I accumulated some experience, I would stop, summarize, and consider whether a refactor was needed, rather than keep copy-pasting out of inertia. So far there have been three major versions; let me walk you through them.


Taking shape - the first generation test framework

Simply put, the first generation was mainly about building a basic framework and using parameterized to implement data-driven testing.

The directory structure of my first generation looked like this:
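Roughly, a sketch of that layout (only public, custom_tools, and tearDown_methods are named below; the other names are illustrative):

```
project/
├── public/        # custom_tools.py, tearDown_methods.py
├── testcase/      # test_xxx.py scripts
├── report/        # generated test reports
├── logs/          # (illustrative)
├── config.py      # hosts for each environment and end
└── run.py         # entry point: run the suite, report, email
```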

There are 4 directories and 2 files. What needs explaining is probably custom_tools under public. In this file I wrote some general-purpose methods, such as random_str, which returns a string of a specified length, and to_md5, which MD5-encrypts a parameter, and so on.
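A minimal sketch of those two helpers, assuming simple signatures:

```python
# public/custom_tools.py: general-purpose helpers (a sketch)
import hashlib
import random
import string


def random_str(length=8):
    """Return a random string of the specified length."""
    return "".join(random.choices(string.ascii_letters + string.digits,
                                  k=length))


def to_md5(value):
    """MD5-encrypt a parameter value, returning the hex digest."""
    return hashlib.md5(str(value).encode("utf-8")).hexdigest()
```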

As for the tearDown_methods file under public: in order to clean up test data, I defined some data deletion methods there. For example, interface A creates a product, interface B edits the product by product id, and interface C deletes the product by product id. In this version I wrote interface C into the tearDown_methods file, then defined a teardown method in the actual test script to call the delete interface. (At this point some of you may ask whether this is just a pointless extra step, haha. When I first started I thought this style was very sophisticated, but I realized the problem soon afterwards and corrected it in later versions.)
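A sketch of that indirection (the URL is a placeholder):

```python
# public/tearDown_methods.py: data-cleanup methods (a sketch)
import requests


def delete_product(product_id):
    """Interface C: delete the product a test case created."""
    return requests.delete("http://example.com/api/products/%s" % product_id)
```

The test script's teardown then did nothing but forward the id to this method, which is exactly the detour described above.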

The writing approach of the test file test_tanqiang.py was as follows:

To solve the interface dependency problem (as in the example above: interface A creates a product, interface B edits the product by product id, and interface C deletes the product by product id), I initialized several global variables at the top of the test file (such as id_product = -1). Then, in the create case, after the creation succeeds, I extract the product id and assign it to id_product, and the edit and delete cases then use this value. The file structure is roughly as follows:
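A sketch of that structure, with placeholder URLs and response fields:

```python
# test_tanqiang.py: global variables carry values between dependent cases
import unittest

import requests

from public import tearDown_methods

id_product = -1  # filled in by the create case, used by edit/delete


class TestProduct(unittest.TestCase):

    def test_01_add_product(self):
        global id_product
        response = requests.post("http://example.com/api/products",
                                 json={"name": "demo"})
        self.assertEqual(response.json()["code"], "2000")
        # Hand the new id over to the later cases
        id_product = response.json()["data"]["id"]

    def test_02_edit_product(self):
        response = requests.put("http://example.com/api/products/%s" % id_product,
                                json={"name": "demo-edited"})
        self.assertEqual(response.json()["code"], "2000")

    @classmethod
    def tearDownClass(cls):
        # First-generation style: delegate cleanup to the public file
        if id_product != -1:
            tearDown_methods.delete_product(id_product)
```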

The first version used the parameterized decorator for data-driven testing. A complete case looked like this:
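A sketch of the shape such a case took (the interface and data values are placeholders):

```python
# data-driven case with the parameterized library
import unittest

import requests
from parameterized import parameterized


class TestLogin(unittest.TestCase):

    @parameterized.expand([
        # (case description, username, password, expected code)
        ("correct credentials", "tester", "123456", "2000"),
        ("empty password", "tester", "", "4000"),
    ])
    def test_login(self, desc, username, password, expected_code):
        response = requests.post("http://example.com/api/login",
                                 json={"username": username,
                                       "password": password})
        self.assertEqual(response.json()["code"], expected_code, desc)
```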

In addition, I wrote the HOSTs of the different environments and different ends into the config file.
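Something like this (the addresses are placeholders):

```python
# config.py: hosts per environment and per end
HOST_TEST = "http://test.example.com"    # test environment
HOST_PROD = "http://www.example.com"     # production
HOST_ADMIN = "http://admin.example.com"  # the admin end

HOST = HOST_TEST  # switch environments in one place
```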

This version could also generate test reports and send them by email, which was intended to pave the way for continuous integration later.
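A sketch of that step using only the standard library (the SMTP host and addresses are placeholders; an HTML report via a runner such as HTMLTestRunner works the same way):

```python
# run.py: run the suite, capture the report, email it
import io
import smtplib
import unittest
from email.mime.text import MIMEText

suite = unittest.defaultTestLoader.discover(".", pattern="test_*.py")

# Capture a plain-text report in memory
stream = io.StringIO()
unittest.TextTestRunner(stream=stream, verbosity=2).run(suite)

msg = MIMEText(stream.getvalue())
msg["Subject"] = "Interface test report"
msg["From"] = "ci@example.com"
msg["To"] = "tester@example.com"

with smtplib.SMTP("smtp.example.com", 25) as server:
    server.send_message(msg)
```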

Results and reflections on the first version: in this mode I wrote more than 70 interface cases in total. Practice proved that using Python for interface automation is indeed much more flexible than JMeter. Practice also surfaced the following problems:

  • There is no need to write the test-data cleanup methods in a separate file; they can be written directly under the test class, because the delete interface is itself part of interface testing.

  • In data-driven testing, the test data sits above the test method. If an interface has many parameters, this is very inconvenient to read and maintain later (you have to locate values by their index position).

  • In data-driven testing, if one case's assertion fails, you cannot immediately tell which piece of data caused the problem.

  • The execution order of test cases is uncontrollable.

Efficiency first - the second generation test framework

Based on the experience and problems of the first generation, I refactored the framework as follows:

Change 1: Abandon the data-driven mode. The original purpose of interface automation was regression testing (making sure existing functionality is unaffected), not discovering new problems. Besides, I am the only tester on the team, and my time is still focused on functional testing. In my experience, when an interface is affected it usually returns a 500 outright; it is rarely the case that passing type=1 works while passing type=2 fails. So in the second version I only pick a normal set of input parameters for each interface and write the test data directly into the test script.

Change 2: Control the execution order of cases through the names of files, classes, and test methods, as sketched below.
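unittest sorts test methods alphabetically by name, so numeric prefixes are enough to pin the order. For example:

```python
import unittest


class Test01ProductFlow(unittest.TestCase):

    def test_01_add(self):
        pass  # create first

    def test_02_edit(self):
        pass  # then edit

    def test_03_delete(self):
        pass  # clean up last
```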

Change 3: Remove the tearDown_methods.py file under public, and write the data-cleanup interface calls into the test class in the test_xxx file.

Change 4: Enrich the interface assertions. The most important thing in interface automation is assertion design, so this revision enriched the types of assertions.


(Some of you may ask: if you already assert on the response code, is it still necessary to assert on the msg? My thinking is: if your team's interface development is fairly standardized, asserting on either code or msg is enough; but my current team is not very standardized. For example, some interfaces return {'code':'2000','msg':'operation successful'} on success, while others return {'code':'0','msg':'success'}. Because of this, I assert on both code and msg.)

Change 5: Adopt a "fail fast" idea where one interface depends on another. (Some of you may laugh and wonder why anyone would write self.assertEqual(1, 2, 'Failed to obtain the province id, so the new training center cannot be created') just to force a failure. I discovered this problem later myself, and changed it to raise KeyError('Failed to obtain the province id, so the new training center cannot be created').)
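In context, the guard looks roughly like this (the field names and URL are placeholders):

```python
import unittest

import requests

HOST = "http://test.example.com"  # placeholder
id_province = -1  # set by the upstream case that queries provinces


class TestTrainCenter(unittest.TestCase):

    def test_02_add_train_center(self):
        # Fail fast: without the upstream province id, creating a
        # training center is meaningless, so abort this case at once
        if id_province == -1:
            raise KeyError('Failed to obtain the province id, so the '
                           'new training center cannot be created')
        response = requests.post(HOST + "/api/trainCenters",
                                 json={"provinceId": id_province})
        self.assertEqual(response.json()["code"], "2000")
```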

Results and reflections on the second version: in this mode I built a total of 150 interface automation cases. But practice revealed a new problem: when a case fails, I cannot clearly see the request body and response body that were actually sent and received during the test. This makes it very difficult to troubleshoot and give feedback to development (after a run, all I see is black and red output, and it is not at all clear where the error actually is).

Improving result credibility and persisting information before, during, and after each test - the third generation testing framework

I joined this company when the team was just getting started, so iterations were fast and every day was busy. It has now been two years since I joined, and the product's planned features have all shipped. Our company's main business is not software; the IT department is a supporting function, so once the necessary features exist there will not be many major updates (if you know, you know). Now I finally have time to rethink the current test scripts.

Recently I have read the book "Test Architect's Practice 2nd Edition" several times (strongly recommended). The book systematically teaches test case design, and I was inspired by the multi-parameter combination testing method it presents. Roughly: suppose an interface has 3 required parameters, parameter A (possible values 1, 2, 3, 4, 5), parameter B (0, 1), and parameter C (0, 1, 2), with no constraints between them; then 5 cases are enough:
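The idea is "each choice" coverage: the number of cases is driven by the longest value list, and every value of every parameter appears at least once. A small sketch:

```python
# "each choice" combination: enough rows that every value of every
# parameter appears at least once; 5 rows here, driven by parameter A
from itertools import cycle


def each_choice(params):
    """params: dict of name -> list of values; returns a list of case dicts."""
    longest = max(len(values) for values in params.values())
    iters = {name: cycle(values) for name, values in params.items()}
    return [{name: next(it) for name, it in iters.items()}
            for _ in range(longest)]


cases = each_choice({"A": [1, 2, 3, 4, 5], "B": [0, 1], "C": [0, 1, 2]})
for case in cases:
    print(case)
# -> 5 cases: {'A': 1, 'B': 0, 'C': 0}, {'A': 2, 'B': 1, 'C': 1}, ...
```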

Based on this, I thought about data-driven testing again. This time I want to put the test data source in Excel. Some of you may complain about Excel-based data-driven testing, feeling that filling out a spreadsheet before testing is very troublesome. My strategy this time is not to fill in the Excel cells by hand, but to write a method that generates the test data automatically. The overall mind map is roughly as follows:


(screenshot: the overall idea)


(screenshot: before the test, write the method that generates test data; one generator method is needed per interface)
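A sketch of such a generator with openpyxl (the sheet layout and fields are illustrative):

```python
# generate test data for one interface and write it into Excel
from openpyxl import Workbook


def generate_login_data(path="testdata.xlsx"):
    wb = Workbook()
    ws = wb.active
    ws.append(["case_id", "username", "password", "expected_code"])
    ws.append(["login_001", "tester", "123456", "2000"])
    ws.append(["login_002", "tester", "", "4000"])
    wb.save(path)
```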


(screenshot: during the run, write the response information to Excel; this method is shared across cases)
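A sketch of the shared write-back helper (the column positions are illustrative):

```python
# persist what was sent and what came back next to the test data row
from openpyxl import load_workbook


def data_write(path, row, request_body, response_body):
    wb = load_workbook(path)
    ws = wb.active
    ws.cell(row=row, column=5, value=str(request_body))
    ws.cell(row=row, column=6, value=str(response_body))
    wb.save(path)
```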


(screenshot: after the test, if all assertions pass, the corresponding case_id in Excel is set to green; this method is shared across cases)
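A sketch of the green marking with openpyxl (the layout is illustrative):

```python
# mark a passed case green in the Excel sheet
from openpyxl import load_workbook
from openpyxl.styles import PatternFill

GREEN = PatternFill(start_color="00FF00", end_color="00FF00",
                    fill_type="solid")


def mark_passed(path, row):
    wb = load_workbook(path)
    ws = wb.active
    ws.cell(row=row, column=1).fill = GREEN  # the case_id column
    wb.save(path)
```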


(screenshot: how a testcase calls the write method with the response)


(screenshot: the results in Excel after the test run finishes)

Other changes in the third version include:

1. Moved all the global variables from the original test scripts into the config file, making the test scripts tidier;

2. Removed the unnecessary self. from the original test cases, keeping self only for assertions (self.assertEqual()). For example, what used to be written as self.url = "XXX" and self.data = {'id': 1} is now url = "XXX" and data = {'id': 1}. In theory this touches on "the difference between instance attributes and local variables in a class method"; in my practice, adding or omitting self made no difference to the test results, so naturally I don't add it (write less where you can).

Results and reflections on the third version: I am currently using this version and have written 23 interface cases with it so far. Although writing a data-generation method before writing the actual script looks troublesome, in practice I found it was not as troublesome as I had imagined, and in my view the return on investment is high. Perhaps I simply haven't written enough interfaces in this mode yet; new problems will surely surface as more cases accumulate.

Isn't our growth just a cycle of constantly discovering problems, solving them, and then discovering new ones? Hehe.

I have rambled on a great deal here; it is mostly a summary of my own interface testing journey from the beginning until now. As mentioned at the start, there is almost nobody on the team I can discuss testing with, so I have always crossed the river by feeling for stones, taken many detours, and the road I am on now may not be straight. I am genuinely eager to exchange ideas, and I hope you can give me some advice; leave me a message with any problems or suggestions you spot. It would be wonderful if you added me on WeChat to share pointers (VX: GXY1162031010).

In fact, even after refactoring to the current version, there are still many problems I haven't figured out how to solve. Please help me:

Question 1: How to do data-driven testing for interface dependencies?

Scenario description: interface A creates a product; after a successful call, the id of the new product is extracted and saved in a global variable (for example, id = -1 initially, set to a new value such as id = 100 after interface A succeeds). When writing the data-generation function for interface B, which edits the product, I expected it to read the latest value 100; in practice it still read the original -1. I know why: all the ddt data is loaded before the tests run, and only then are the testcases executed, so the value 100 assigned during the run can never be read. Someone suggested writing id = -1 as id = [-1], so that ddt reads the new value at run time. I tried it, and it does work, but another friend suggested that interface dependencies simply shouldn't be data-driven at all, so I am still feeling my way through trial and error.
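For reference, a sketch of that list workaround (the names and values are illustrative):

```python
# ddt captures the data when the module loads, so a plain int stays -1
# forever; a list is captured by reference, and an element assigned
# during the run is visible to later cases
import unittest

from ddt import data, ddt

id_product = [-1]  # a list instead of a plain int


@ddt
class TestProduct(unittest.TestCase):

    def test_01_add(self):
        # ... call interface A, then stash the new id in the list:
        id_product[0] = 100  # placeholder for response.json()['data']['id']

    @data(id_product)  # the list object itself becomes the test datum
    def test_02_edit(self, product):
        # the element reflects the assignment made during test_01
        self.assertNotEqual(product[0], -1)
```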

Question 2: How to make an assertion for the query interface?

Scenario description: add a new product, get its product id = 100 after the addition succeeds, then assert on query interface C. My current assertion strategy is "self.assertIsNotNone(response.json()['data'])" (facepalm). I have tried asserting that the id of the first item in the query result equals 100, but that kind of assertion is very unstable: if many people are using the environment, the first item in the list may well not be the data I just added. So I haven't found a suitable assertion approach yet.

Question 3: In the third version, every testcase calls the TD.data_write() method right after getting the response, passing many parameters, and even across different testcases the parameters passed at this step are almost identical. Is there a way to optimize this and avoid writing so much repeated content in every testcase?

END

