In recent years, I have worked on various IT projects within Info Support for our customers. One project was the development of a product to predict the unlawful use of social assistance benefits. For this project, we used an artificial intelligence system – one with a high impact on society. The development of this system raised many questions, including whether the use of such a system is fair. Journalists have published several articles about this system in the media, often with a negative connotation. Yet measurements have shown that applying this system makes detecting the unlawful use of social assistance benefits easier and faster. Weighing all the advantages and disadvantages, we ultimately decided not to develop this system further. The practical result is that fraud has become harder to detect, with various consequences.
Sometimes I hear the argument that algorithms are less morally acceptable. But does this claim truly hold? My personal drive is therefore to provide insight into how individual moral convictions influence the moral acceptability of algorithms. I intend to use this insight to nuance and improve the conversation and debate about artificial intelligence. I hope these lessons contribute to the increased adoption of artificial intelligence for predicting fraud, not only in this specific domain but also in a broader sense.