Tooling to assist software development has never been more available or more advanced. Tools, both commercial and open source, have been purpose-built to identify code that is provably incorrect (and even code that might be incorrect). Some examples of C++ tooling available from LLVM:

- clang-tidy, a linter with hundreds of checks for bug-prone patterns
- the Clang Static Analyzer, which finds bugs along paths the compiler alone won't explore
- the sanitizers (AddressSanitizer, UndefinedBehaviorSanitizer, ThreadSanitizer, MemorySanitizer), which instrument code to catch memory errors, undefined behavior, and data races at runtime
This is a minuscule list compared to what's available from LLVM and elsewhere. The problem, therefore, is not that the tools don't exist. Nor is it that the tools fail to report errors when they are used.
The problem that I see more often than any other is that developers don’t believe what the tooling reports.
I don’t know why. Is it pressure to ignore all roadblocks in delivering a feature on a deadline? Is it hubris? Laziness? Is it some mistrust of the tooling itself?
I don’t suspect malice. But I don’t know the answer.
Is the tooling always perfect? Absolutely not. That doesn’t change the advice I’m going to offer:
Assume the tool is right.
If you dig into what's being reported and believe you have a case where the tool is wrong, you should be able to completely explain why it's a false positive. If you can't confidently provide that explanation, check the error report again. At some point you will conclude one of three things: your code is at fault (almost always the case); the tool is pedantically correct but the consequences are benign (fix it anyway to remove the noise); or the tool has a bug (which you should file).
And if you have a tool that’s consistently wrong, stop using that tool.
It’s all about perspective. A well-built code analysis tool is better than a human at detecting errors. For some, that’s hard to accept.
Others of us, myself included, recognize that we need all the help we can get. Start by assuming the tool is on your side. That perspective will let you understand (and benefit from) what it's telling you.