
Comments (4)

By the way, at that point the budget could go very far; the price was no object at all.
I think you might consider creating an online service: warning validation.

For a small fee, your support team would validate a new client's warnings based on some criteria (for example, only high-probability ones), with suggestions on how to fix them.

It is very similar to the services offered by some companies that do security scans.

I bet that after some time your support team will become very efficient and may even come up with ideas on how to automate most of the validations and suggestions.

We consult our clients, help them configure the analyzer, and explain unclear warnings either way. A more advanced form of cooperation is when we fix the errors ourselves. As I understand it, we are talking about the intermediate option. I'm not sure this service is in demand, but we'll consider it. Thanks.
Based on your explanation, you have it pretty much covered.
"explain unclear warnings either way"
I was wondering: do you have something like decision-making logs from the analyzer for a warning, i.e., detailed steps explaining why something was considered a warning? It might be easier for a developer to find an issue if they had more info.
The analyzer's warnings are divided into levels of certainty. The article "The way static analyzers fight against false positives, and why they do it" covers in detail how a certainty level is assigned. Each diagnostic comes with a description containing correct and incorrect code examples. The description also provides a reference to the corresponding CWE, which lets you look at the warning from another angle. In addition, the diagnostic description links to examples of real errors found in open-source projects.
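To illustrate how certainty levels can be used in practice, here is a minimal sketch of triaging warnings by level, keeping only the high-certainty ones for review first. The `Warning` structure and field names are hypothetical illustrations, not the actual PVS-Studio report format.

```python
from dataclasses import dataclass

@dataclass
class Warning:
    code: str      # diagnostic id, e.g. "V501" (illustrative)
    level: int     # 1 = high certainty, 2 = medium, 3 = low
    message: str
    location: str

def high_certainty(warnings):
    """Keep only level-1 warnings -- the ones most worth reviewing first."""
    return [w for w in warnings if w.level == 1]

# Hypothetical sample data for demonstration.
warnings = [
    Warning("V501", 1, "Identical sub-expressions on both sides of '=='", "a.c:10"),
    Warning("V550", 3, "Suspicious floating-point comparison", "b.c:42"),
]
print([w.code for w in high_certainty(warnings)])  # ['V501']
```

Filtering like this is one simple way to apply the "only high-probability warnings" criterion suggested above.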

In most cases, all of this is enough for users to work through the warnings. However, if something really strange is happening, a user can always reach out to us. Example: False Positives in PVS-Studio: How Deep the Rabbit Hole Goes.