First published: 2018/07/01
Abstract: No methods currently exist for making arbitrary neural networks fair. In this
work we introduce GRAD, a new and simplified method for producing fair neural
networks that can be used for auto-encoding fair representations or directly
with predictive networks. It is easy to implement and add to existing
architectures, has only one (insensitive) hyper-parameter, and provides
improved individual and group fairness. We use the flexibility of GRAD to
demonstrate multi-attribute protection.
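
The abstract describes a component that can be added to existing architectures and tuned with a single, insensitive hyper-parameter. As a rough illustration of that style of design (a sketch of the general gradient-reversal technique, not a reproduction of the paper's exact method), the snippet below attaches an adversarial head that tries to predict a protected attribute from a shared representation, while reversed gradients push the encoder to discard that information. The PyTorch framing, the class names `GradReverse` and `FairClassifier`, the layer sizes, and the single trade-off weight `lambd` are all illustrative assumptions.

```python
import torch
import torch.nn as nn


class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; negates (and scales) gradients in the backward pass."""

    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reverse the gradient flowing into the encoder; no gradient for lambd.
        return -ctx.lambd * grad_output, None


class FairClassifier(nn.Module):
    """A shared encoder feeds a task head and, through gradient reversal,
    an adversary that tries to predict the protected attribute."""

    def __init__(self, n_features, n_hidden, n_classes, n_protected, lambd=1.0):
        super().__init__()
        self.lambd = lambd  # the single trade-off hyper-parameter (illustrative)
        self.encoder = nn.Sequential(nn.Linear(n_features, n_hidden), nn.ReLU())
        self.task_head = nn.Linear(n_hidden, n_classes)
        self.adversary = nn.Linear(n_hidden, n_protected)

    def forward(self, x):
        z = self.encoder(x)
        y_logits = self.task_head(z)
        # Reversed gradients push the encoder to remove protected-attribute information.
        s_logits = self.adversary(GradReverse.apply(z, self.lambd))
        return y_logits, s_logits
```

Under this sketch, training minimizes the task loss plus the adversary's loss on the reversed representation; because the adversary's gradients arrive at the encoder with their sign flipped, the encoder is simultaneously optimized to make the protected attribute hard to recover, and the one weight `lambd` controls the accuracy/fairness trade-off.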