Human intelligence: another abominable idea from the AI industry
Helen Beetham has been on fire lately, and this piece is particularly sharp. Beetham writes about how the “AI” industry has tried to redefine “human intelligence” in contrast to, or to justify, its idea of “machine intelligence”:
In the guise of starting from something self-evident, the project of ‘artificial intelligence’ in fact serves to define what ‘intelligence’ is and how to value it, and therefore how diverse people should be valued too. Educators have good reason to be wary of the concept of ‘intelligence’ at all, but particularly as a single scale of potential that people have in measurable degrees.
Beetham goes on to discuss the discourse we've all seen ramp up in the last year or two, where helpful “AI” will make us “more productive” and do our work for us while we supervise. She sees this for exactly what it is:
What these self-serving comparisons produce is a double bind for students and intellectual workers. Submit yourself to the pace, the productivity, the datafied routines of algorithmic systems, and at the same time ‘be more human’. Be more human, so that you can add value to the algorithms. Be more human, so more of your behaviour can be modelled and used to undercut your value. Be more human so that when AI fails to meet human needs, the work required to ‘fix’ it is clearly specified and can cheaply be fulfilled.
We've been here before. Fool me twice, shame on me.
Attention, moral skill, and algorithmic recommendation
This is a pretty interesting paper in Philosophical Studies by two authors from ANU. They make the case for attention as a “moral skill”, and argue that how we pay attention is as important as whether, or on what, we do so.
Online platforms can direct us toward things we should not attend to just as easily as toward things we should. And even when they direct our attention to the right things, they may not do so in the right ways, to the right degrees, or for the right reasons.
I find their argument compelling, and it seems to open up a lot of further interesting questions to explore. However, their conclusion was somewhat surprising and, to be honest, baffling. If AI-driven recommender systems are bad for our attention and moral health, according to these authors the solution is... more AI recommender systems, but built on generative AI running on your operating system. Fair to say it's not the conclusion I would have come to.
We Need To Rewild The Internet
On a somewhat similar theme, Maria Farrell and Robin Berjon explore rewilding as both a metaphor and a somewhat literal suggestion for repairing the dystopian disappointment that is the Internet – and more specifically the World Wide Web – in 2024.
I admit I was hooked by the early reference to James Scott's Seeing Like a State, one of the books that has most profoundly influenced how I think about the world. But just as I was intrigued by the idea of attention as a moral question, I wanted to know more about the internet oligarchy problem as an emotional one:
Rewilding the internet is more than a metaphor. It’s a framework and plan. It gives us fresh eyes for the wicked problem of extraction and control, and new means and allies to fix it. It recognizes that ending internet monopolies isn’t just an intellectual problem. It’s an emotional one. It answers questions like: How do we keep going when the monopolies have more money and power? How do we act collectively when they suborn our community spaces, funding and networks? And how do we communicate to our allies what fixing it will look and feel like?
What Farrell and Berjon are suggesting in this piece is some combination of legalist liberal-democratic power through enforcement of anti-monopoly laws, Lenin's concept of “dual power”, and anarchistic “building the new world in the shell of the old”. Not that they'd likely put it like that.
Libraries and Learning Links of the Week is published every week by Hugh Rundle.
Subscribe by following @fedi@lllotw.hugh.run on the fediverse or sign up for email below.