In the field of fair machine learning, it is generally presumed that fair analyses should omit, or at least reduce the influence of, sensitive variables such as race and gender. But in some applications, the people affected may actually want their sensitive traits to be used.