Refactoring: What’s the origin of bad design?
Refactoring is defined as the process of changing a software system in such a way that it does not alter the external behavior of the code, yet improves its internal structure.
Refactoring improves the quality of an application’s design and implementation, but unfortunately the expression “if it works, don’t change it” is often used to bypass it.
In general, we can identify three approaches to refactoring:
– Iterative refactoring: no application can be developed perfectly in the first iteration without any feedback, even if the team has the best architects, designers, and developers. The easiest way to refactor without investing a lot of money or wasting time is to integrate it into the development process and do it after each iteration.
– Refactoring when necessary: after the application is deployed, feedback and bug reports arrive. If resolving them takes a lot of time, or if some client needs are very complex to develop and integrate into the existing system, refactoring can be a good way to improve the quality of the code base. In this case, however, it can be very risky, and we have to take care to avoid regressions in the existing code.
– No refactoring: sometimes, even if there are many problems in the existing application, refactoring is never begun because management does not want to invest in the process, and the support team has to manage the stress generated by all the bugs and feedback.
Which factors can influence the quality of design and implementation?
To avoid investing a lot of money and time in refactoring later, the best solution is to design the application well from the start, but there are some traps that can degrade code quality:
No architects or designers on the team:
Unfortunately, some projects are initiated with only developers and no architect. The problem is that the consequences of this mistake appear very early, and they can contribute to the project’s failure.
Starting from a prototype:
A prototype typically simulates only a few aspects of the eventual program’s features, and may be completely different from the eventual implementation.
Unfortunately, many projects are built on the prototype without any review of architecture, design, or implementation. This trap is due to the speed of prototype development: the team thinks that continuing with the prototype is a good way to save time.
Intrusive frameworks:
Some frameworks provide useful classes but can also influence the design; MFC and Qt, for example, are intrusive frameworks.
Using badly designed features of such frameworks can complicate the design of a project. Take as an example the Doc/View architecture provided by MFC: if a designer chooses to treat CDocument as the model, as encouraged by the MFC documentation, it is a bad choice. The model must be independent of MFC so that it can easily be reused in other contexts; CDocument can, however, serve as the controller.
High coupling to technology:
Some architects and designers forget the low-coupling principle and propose a design that is highly coupled to the technologies or frameworks used.
For example, if CORBA is used for a distributed system, it must be isolated in the communication layer, and all other classes must be CORBA-independent.
Unfortunately, in some projects we find CORBA types used even in the business classes, which complicates the application considerably.
In my opinion, this problem is the most common and the most dangerous one, and it has an enormous impact especially on C++ projects. Finding a good C++ developer is currently not easy, and finding one who masters many technologies and frameworks is even harder. Creating an application highly coupled to its technologies implies that the whole team must know them; if instead the application is loosely coupled to the technology, the human resources department’s task becomes much easier, since plain C++ developers can be found more readily.
Your experience is valuable for identifying the origin of bad design; you can vote for the factor you consider most responsible.