Talk: Mel Andrews

January 17, 2024

4:00 PM - 6:00 PM

Location

1430

"Machine Learning in Science and Society: A Theory-Laden Conception"

Abstract:

As the use of machine learning (ML) based technologies becomes ubiquitous across the sciences and across society, “AI hype” narratives engender misuse by misrepresenting ML’s core epistemic capabilities. One prominent and pernicious example is the presentation of ML as enabling “atheoretical” inference. By this, it is meant that the methods do not rest essentially on prior conceptualisations of the nature of the target phenomena. Touted as an epistemic virtue by some and decried as an impediment to epistemic aims by others, the theory-free conception of ML can be seen as implicit in the uncritical ways ML tools are wielded in socially sensitive contexts and in the unchallenged assumptions of the ML ethics community. This false narrative is unfortunately lent credence by philosophers of science seeking to differentiate ML from traditional mathematical and computational methods. I argue against the viability of this thesis, pointing to the myriad points at which epistemic choice and domain knowledge enter into the modelling procedure. Rather than freeing us from the constraints of preexisting conceptualisations of phenomena, I demonstrate that this theory-free ideal only serves to facilitate such unsubstantiated beliefs slipping in through the back door of the inference procedure. When we regard real-world uses of ML as necessarily theory-laden modelling exercises, more avenues open up for the prevention and mitigation of unethical uses of ML. Tradeoffs between epistemic and egalitarian virtues, treated by the ML fairness literature as intrinsic and necessary, are shown to be themselves the product of contingent choices made in the operationalisation of concepts. I argue that at least some subset of instances of algorithmic bias may be better intervened upon under a framework in which such virtues are in principle alignable.

Contact

Philosophy Department

Date posted

Jan 31, 2024

Date updated

Jan 31, 2024