[Website Architecture] Projects that get harder with every iteration and slip badly behind schedule? That's a failure to handle change well

Hi everyone, and welcome to the Stop Refactoring channel. 

In this episode, we discuss the scalability of website systems.

Here, scalability refers to how well a website system handles requirement changes and version iterations.

Developers with a few projects under their belt may not take this question seriously. After all, DevOps, CI/CD, Git, agile development, and version planning are familiar to all of us.

Yet in many projects, feature regressions keep happening, every release takes nearly an all-nighter, and every added feature requires a long time to untangle the logic, with bugs appearing frequently.

These are the symptoms of poor scalability. Even with the same tools the big companies use, the result is still a mess.

This reflects a common misunderstanding: scalability is not just about running the release process well.

Instead, we should think about how to handle requirement changes; handling them well is the key to improving scalability.

We will discuss this in the following order:

  1. Iteration planning

  2. Code regularization

  3. Release process

Iteration planning

Iteration planning is very important. Many projects start by fixing the launch date, then work backwards to set the completion time of each feature.

This does produce a plan that looks beautiful, but such a plan has little practical value; it usually just gets postponed again and again.

For a detailed introduction to iteration planning, please refer to our previous episode "Project Process".

Our recommended approach is to divide the entire project into multiple independent sub-projects according to the business architecture.

Then divide the current sub-project into three phases: core features, secondary features, and optimization.

Within each phase, iteration cycles can then be subdivided according to the actual situation.

The advantage is that this adapts well to requirement changes. Core features rarely change and are guaranteed as the top priority; nobody starts building a live-streaming platform and then turns it into an e-commerce mall halfway through.

Secondary features are vague at the beginning, and they account for most requirement changes. Once the core features are complete, however, the secondary features gradually become clear, and whether they really make sense can basically be judged.

Optimization requirements are the least predictable, and many of them are sudden flashes of inspiration. Concentrating these requests in the optimization phase helps in two ways: first, once the project has roughly taken shape, whether a feature is reasonable can be judged properly; second, the impulse of inspiration has time to cool down, so the necessity of the feature can be weighed more rationally. These small features may look trivial individually, but when you review seriously delayed projects, the cause is often the casual addition of exactly these inconspicuous little features.

Code regularization

Previous videos have repeatedly emphasized coding conventions, because the requirements of a website project will change, and construction generally continues through phase one, phase two, phase three, and beyond.

Many projects try to prepare for future changes by extracting potentially reusable code in advance to reduce later workload. In fact, this approach is not necessarily good.

Extracting too much common code bloats the code structure. It feels efficient at first, but once a major version is iterated and team members come and go, the code becomes a mess.

As discussed in our earlier "Less Code Doesn't Mean Efficiency", a project must be built with the expectation that requirements will change and iteration will continue.

If figuring out the logic and locating the change points takes a long time every time you add a feature, modify one, or troubleshoot a bug, the project will only become harder and harder.
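To make the contrast concrete, here is a small hypothetical Python illustration of our own (the function names and flags are invented, not from the video): the first version is "reusable" code extracted too early; the second is plain, slightly repetitive code that stays easy to change.

```python
# Hypothetical illustration (invented for this article): a prematurely
# "reusable" helper grows a flag for every caller, and soon nobody can
# predict what a change will break.
def format_user(user, for_admin=False, short=False, with_email=False,
                mask_phone=True):
    name = user["name"][:10] if short else user["name"]
    parts = [name]
    if with_email or for_admin:
        parts.append(user["email"])
    if for_admin and not mask_phone:
        parts.append(user["phone"])
    return " | ".join(parts)

# Two plain, slightly repetitive functions are often easier to iterate on:
# each can change with its own requirement without touching the other.
def format_user_for_list(user):
    return user["name"]

def format_user_for_admin(user):
    return f'{user["name"]} | {user["email"]} | {user["phone"]}'
```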

Of course, code regularization is not about which coding standard to adopt or which international standard to comply with; it is about having the whole team complete coding tasks in a similar way, no matter how simple or clumsy the rules may look.

This way, no matter who takes over or leaves, there is no "black hole" in the codebase, because everyone's code follows the same basic shape.
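As a minimal sketch of what such a team rule might look like (our own example, not a prescription from the video; names like create_order are assumptions), every handler reads the same way:

```python
# A minimal sketch of one possible team convention: every API handler has
# the same three visible steps, so anyone on the team can locate the
# validation, the business logic, and the response formatting at a glance.
from dataclasses import dataclass

@dataclass
class Response:
    status: int
    body: dict

def create_order(request: dict) -> Response:
    # 1. Validate input (always first, always in the same style).
    if "product_id" not in request or request.get("quantity", 0) <= 0:
        return Response(400, {"error": "invalid product_id or quantity"})
    # 2. Execute business logic (never mixed into validation or formatting).
    order_id = save_order(request["product_id"], request["quantity"])
    # 3. Format the response (always last, always the same shape).
    return Response(200, {"order_id": order_id})

def save_order(product_id: str, quantity: int) -> str:
    # Placeholder for the real persistence call.
    return f"order-{product_id}-x{quantity}"
```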

This lets the iteration plan proceed more smoothly and actually eases a major problem in software projects: workload can be estimated relatively accurately, so serious delays are avoided.

For a more detailed discussion of code regularization, please refer to our previous episodes "Front-end Regularization" and "Back-end Regularization".

Release process

The best release process is to deploy to a test environment first, release to production only after the tests pass, and never add features casually during testing. Of course, to save release costs, you can apply this full process only to major versions.

To prevent human error, it is best to build an automated CI/CD release pipeline, as sketched below.
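What the automation enforces matters more than the tool. Here is a minimal Python sketch of the gating idea (our own illustration; deploy.sh and smoke_test.sh are hypothetical scripts, and a real project would encode these steps in its CI/CD tool's own pipeline config): production only ever receives a build that has already passed in the test environment.

```python
# A minimal release-gate sketch (hypothetical helper scripts).
import subprocess
import sys

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)  # any failing step aborts the release

def release(version):
    run(["./deploy.sh", "staging", version])  # test environment first
    run(["./smoke_test.sh", "staging"])       # must pass before production
    run(["./deploy.sh", "production", version])

if __name__ == "__main__":
    release(sys.argv[1] if len(sys.argv) > 1 else "latest")
```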

Since testing every feature on every release is impossible, it is hard to guarantee that a new version will not affect old features. The best remedy is gray (canary) release, but implementing it for every feature is unrealistic.

Our recommended method (used by many projects) is to catalogue the system's core features and run a minimum test set after every release, small enough to finish within one hour, to ensure the core features still work; a sketch of such a check follows.
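A minimal version of that check might look like the following Python sketch (our own illustration; the URLs and the feature list are hypothetical): one entry per core feature, run after every release, exiting non-zero if anything core is broken.

```python
# A minimal post-release smoke test using only the standard library.
import sys
import urllib.request

# One entry per core feature; keep the list small enough to run in
# well under an hour.
CORE_CHECKS = [
    ("home page", "https://example.com/"),
    ("login page", "https://example.com/login"),
    ("product list API", "https://example.com/api/products"),
]

def check(name, url):
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            ok = resp.status == 200
    except Exception as exc:
        print(f"FAIL {name}: {exc}")
        return False
    print(f"{'OK  ' if ok else 'FAIL'} {name} ({url})")
    return ok

if __name__ == "__main__":
    results = [check(name, url) for name, url in CORE_CHECKS]
    sys.exit(0 if all(results) else 1)
```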

In addition, unless there is an urgent vulnerability, it is best to limit releases to once a week, and preferably not on Fridays (problems are hard to respond to promptly over the weekend). Frequent, unscheduled releases can cause unexpected problems, and these problems quietly keep disrupting the plan.

Depending on the scale of the project, it is best to keep at least two historical version backups for quick recovery. Version backups let the team choose to defer fixing a problem.

Sometimes the problematic new version is not actually critical, while the iteration currently in progress matters a great deal and its schedule needs to be guaranteed; rolling back buys that time.
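As a concrete sketch of quick recovery (our own illustration, assuming the common layout where each deployed build lives in /srv/app/releases/&lt;version&gt; and /srv/app/current is a symlink to the live one), rolling back is just re-pointing the symlink to the previous kept release:

```python
# A minimal rollback sketch (hypothetical paths and layout).
import os
from pathlib import Path

RELEASES = Path("/srv/app/releases")
CURRENT = Path("/srv/app/current")

def rollback():
    # Sort kept releases by deployment time, newest last.
    releases = sorted(RELEASES.iterdir(), key=lambda p: p.stat().st_mtime)
    if len(releases) < 2:
        raise SystemExit("no previous release to roll back to")
    previous = releases[-2]
    tmp = CURRENT.with_suffix(".tmp")
    tmp.unlink(missing_ok=True)
    tmp.symlink_to(previous)
    os.replace(tmp, CURRENT)  # atomically switch the live version
    print(f"rolled back to {previous.name}")

if __name__ == "__main__":
    rollback()
```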

Summary

These are the key issues to consider for scalability, and none of them depends on which tools we use.

But if these issues are handled well, the project will proceed smoothly.

Taking change into account is what makes architecture and management genuinely useful and feasible.

We should accept and handle change rationally, rather than always assuming we can simply tough it out.


Source: blog.csdn.net/Daniel_Leung/article/details/128277316