For the most part we don’t have any special whitelist where we can say, well this website is actually okay, therefore we will take it out of this algorithm.
For some individual cases we do that. So it depends on the algorithm.
So for a lot of the general search algorithms we don’t have that ability, but for some individual algorithms we do need to be able to take manual actions and say, well…
For example, the SafeSearch algorithm is picking up on these words on this website as being adult, similar to an adult website, but actually they’re talking about, I don’t know, animals or something completely unrelated.
And in those kinds of cases the SafeSearch algorithm would have a whitelist which would say, well, this is a problem that we’re picking up incorrectly with the algorithm, and we will add them to the whitelist for the moment and work to improve the algorithm so that it doesn’t take this into account in the long run. But in the meantime, the whitelist is a stop-gap measure to help with that. That is something where it sometimes makes sense.
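The stop-gap pattern described here can be sketched roughly as follows. This is a hypothetical illustration with made-up names and a deliberately naive keyword classifier; it says nothing about how Google's actual systems are implemented. The idea is just that a manually curated allowlist overrides known false positives until the classifier itself improves:

```python
# Hypothetical sketch: an imperfect classifier flags sites, and a manually
# curated allowlist overrides known false positives as a stop-gap.

ADULT_KEYWORDS = {"explicit", "mature", "adult"}

# Manually curated allowlist of sites known to be falsely flagged
# (hypothetical domain, for illustration only).
ALLOWLIST = {"animal-breeds.example.com"}

def looks_adult(page_text: str) -> bool:
    """Naive keyword classifier standing in for a real content model."""
    words = set(page_text.lower().split())
    return bool(words & ADULT_KEYWORDS)

def classify_site(domain: str, page_text: str) -> str:
    # The allowlist check runs first: a known false positive is never
    # filtered, regardless of what the imperfect classifier says.
    if domain in ALLOWLIST:
        return "safe"
    return "filtered" if looks_adult(page_text) else "safe"

print(classify_site("animal-breeds.example.com", "mature adult dogs for rehoming"))
# → safe  (the allowlist overrides the keyword match)
print(classify_site("other.example.com", "mature adult content"))
# → filtered
```

Once the classifier is improved so the false positive no longer occurs, the allowlist entry can simply be removed, which is why this works as a temporary measure rather than a permanent exception.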
We don’t have that for a lot of the other algorithms like Penguin and Panda.
It’s not that we would say, well, this website is being recognized as kind of problematic from a quality point of view, we’ll put it on the whitelist and it will be seen as perfectly fine. That is not something that the search quality team would want to do.