Fix DNS Timeout messages with retry system #80
Merged
A fix that significantly reduces the number of DNS timeout messages.
While testing the tool, I noticed that for any input I gave it, the DNS requests always timed out at random websites, such as this:

There was no consistency across multiple scans of the same input, so I assumed it was a DNS rate-limiting issue (which appears to be correct). This fix works around the rate limiting by modifying the dns_lookup method in utils so that it retries up to 3 times when a DNS lookup times out. In my experience, this also allows the timeout to be reduced from 10 seconds to 3 without any impact on accuracy. I could change the timeout back to account for slower hardware or DNS connections, but it could also be exposed as a command-line argument if need be.
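To illustrate, here is a minimal sketch of the retry logic described above. The names `dns_lookup_with_retry`, `DNSTimeoutError`, and the `lookup` callable are hypothetical stand-ins for the tool's actual `dns_lookup` helper and its timeout exception, not the real implementation:

```python
class DNSTimeoutError(Exception):
    """Stand-in for whatever exception the real resolver raises on timeout."""


def dns_lookup_with_retry(lookup, domain, retries=3, timeout=3):
    """Call lookup(domain, timeout) up to `retries` times.

    Re-raises the last timeout error only if every attempt fails,
    so transient rate-limiting timeouts no longer skip the domain.
    """
    last_exc = None
    for _ in range(retries):
        try:
            return lookup(domain, timeout)
        except DNSTimeoutError as exc:
            last_exc = exc
    raise last_exc


# Example: a lookup that times out twice, then succeeds on the third try.
calls = {"n": 0}

def flaky_lookup(domain, timeout):
    calls["n"] += 1
    if calls["n"] < 3:
        raise DNSTimeoutError(f"{domain} timed out")
    return "93.184.216.34"

result = dns_lookup_with_retry(flaky_lookup, "example.com")
# result == "93.184.216.34" after 3 attempts
```

The key point is that a timeout is only treated as a failure once all retries are exhausted, which is why retrying also makes it safe to lower the per-attempt timeout.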
So far it has produced no more DNS timeouts, and it should slightly improve both the performance of the DNS sections of the scanner and their accuracy: previously, timed-out websites were never retried, so they weren't properly tested.