For example, if we want to check for gender bias, should we compare the gender attribute against another sensitive attribute, such as race?
Is there a rule of thumb for deciding which sensitive attributes should be compared in order to detect fairness violations?
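As a point of reference for the question, here is a minimal sketch of one common way fairness checks are set up: each sensitive attribute is audited separately, comparing a metric (here, the positive-prediction rate, i.e. demographic parity) *across groups within* that attribute rather than across different attributes. The function names and record layout below are illustrative assumptions, not part of any specific Fairness API.

```python
from collections import defaultdict

def positive_rate_by_group(records, attribute):
    """Fraction of positive predictions within each group of `attribute`."""
    totals, positives = defaultdict(int), defaultdict(int)
    for r in records:
        group = r[attribute]
        totals[group] += 1
        positives[group] += r["prediction"]
    return {g: positives[g] / totals[g] for g in totals}

def demographic_parity_difference(records, attribute):
    """Largest gap in positive rates across the groups of one attribute."""
    rates = positive_rate_by_group(records, attribute)
    return max(rates.values()) - min(rates.values())

# Toy data: each record holds sensitive attributes and a model prediction.
records = [
    {"gender": "F", "race": "A", "prediction": 1},
    {"gender": "F", "race": "B", "prediction": 0},
    {"gender": "M", "race": "A", "prediction": 1},
    {"gender": "M", "race": "B", "prediction": 1},
]

# Gender and race are each checked on their own: groups are compared
# within gender (F vs M), and separately within race (A vs B).
print(demographic_parity_difference(records, "gender"))  # → 0.5
print(demographic_parity_difference(records, "race"))    # → 0.5
```

Under this framing, gender bias would show up as a large gap within the gender attribute itself; the race attribute is audited in its own separate pass.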

Yes

Thanks, I just had one more question regarding the Fairness API.

I would like some feedback on whether my reasoning for this last question is correct. If you could spare a few minutes to share your honest thoughts here in the discussion forum, it would help me a lot.

Thanks!