Artificial Intelligence Is Neither

It feels like a long time ago that I wrote the Systematic (Software) Innovation book. One of the main, albeit inadvertent, themes of the book was to highlight a dichotomy that, now that we all live in the first stirrings of a Big Data tsunami, seems to be getting worse rather than better: software engineers and architects, as the book described, are simultaneously the people most likely to determine how society evolves in the coming decades, and the people least well qualified to take on such responsibilities.

Last week I came across an online article bearing the ominous title, ‘Machine Learning Is Racist Because The Internet is Racist’. If I ever needed a way to exemplify the dichotomy, I think this might just make it into my Top Five.

The article represents the sort of complete abdication of responsibility now becoming quite typical of the ICT industry. An industry that still – two months after it hit the media – hasn’t done anything to tackle the problem that recommendation algorithms now teach people how to make bombs. ‘People who bought this product also bought…’ and, hey presto, everyone knows how to make their very own explosive device. This is the same industry that – so they tell the media – can’t be blamed for providing a conduit for terrorist communications.

At the crux of the issue here is Artificial Intelligence. Plenty of the former, not so much of the latter, it seems. There’s perhaps a telling irony in the fact that the software geeks effectively try to tell us all that their AI algorithms can’t be blamed for mimicking the content of what they find on the internet. The irony being that if they are able to declare that a significant enough amount of internet content is ‘racist’, how come they weren’t able to create a racist-comment-detecting algorithm and thus exclude such trash from the data they use to train their algorithms?
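For the avoidance of doubt about what that would look like, here is a minimal sketch of the idea, in Python. The score_offensiveness function is purely a hypothetical stand-in for whatever detector the industry already claims it could build; the point is simply that screening a training corpus before anything learns from it is a one-function job.

# Minimal sketch, not a real system: screen a training corpus with an
# offensive-content detector before anything downstream learns from it.
# 'score_offensiveness' is a hypothetical stand-in for whatever classifier
# the platforms already rely on when they declare content 'racist'.

def score_offensiveness(text: str) -> float:
    """Placeholder detector; a real system would call a trained classifier."""
    blocklist = {"slur1", "slur2"}          # illustrative tokens only
    words = text.lower().split()
    hits = sum(1 for word in words if word in blocklist)
    return hits / max(len(words), 1)

def filter_training_corpus(corpus, threshold=0.0):
    """Keep only documents whose offensiveness score is at or below the threshold."""
    return [doc for doc in corpus if score_offensiveness(doc) <= threshold]

raw_corpus = [
    "a perfectly innocuous sentence",
    "a sentence containing slur1",
]
clean_corpus = filter_training_corpus(raw_corpus)
print(clean_corpus)   # only the innocuous sentence survives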

Sure, the Internet might be ‘racist’ right now. But in no way does that give AI professionals a ‘get out of jail free’ card for creating racist AI. It’s not rocket science.

Or maybe it is. Not in the offending blame-dodging article under examination here, but I’ve heard from other ICT ‘thought leaders’ that any racism-detection (or any other kind of ‘ism-detection) algorithm cannot be their responsibility, because who are they to decide what is and what is not racist? Only the politicians can tell us what the rules are, they claim. My confidence that our politicians can help solve the problem is frankly quite low. Not so much because the problem requires any rocket science per se, but rather because it requires someone to come at the problem with a contradiction-solving mindset.

I suspect every person on the planet is guilty of at least half a dozen ‘ism-crimes’ during any given day. I can be confident of this because spending a couple of hours watching any trending ‘controversial’ topic on Twitter quickly reveals the rapid appearance of a quite staggeringly broad spectrum of responses. Every single respondent sits somewhere along an -ism spectrum. From one day to the next, their position might shift, as might those of every other person wanting to join the conversation. The fact this spectrum exists and that it might be dynamic, however, does not mean the job of the AI algorithms (or their programmers) is to define ‘the’ point along the spectrum at which ‘on average’ the boundary between racist and not-racist sits. Making decisions based on any kind of average is a pretty dumb thing to do in any kind of complex environment.

The only meaningful design response in this kind of dynamic spectrum situation is to solve the contradiction between the two ends of the spectrum: make the people at each extreme ‘happy’, and everyone in the middle will be happy too. If every person on the planet draws the racist/not-racist boundary somewhere different, the AI needs to take that personal boundary into account when delivering content to that individual.
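To make the point concrete rather than rhetorical, here is a minimal sketch of what ‘taking the personal boundary into account’ might mean. The User class, the sensitivity value and the scores below are illustrative assumptions rather than anyone’s real API; the only point is that each person’s boundary gets applied individually instead of collapsing everyone onto an average threshold.

# Minimal sketch, not a real recommender: every user carries their own
# sensitivity setting, and content is compared against that personal
# boundary rather than against a single population-wide average.
# 'User', 'sensitivity' and the item scores are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class User:
    name: str
    sensitivity: float   # 0.0 = show everything, 1.0 = show only the mildest content

def visible_items(user, scored_items):
    """scored_items: list of (content, offensiveness score in [0, 1]) pairs."""
    # An item is shown only if its score falls below this user's own boundary.
    boundary = 1.0 - user.sensitivity
    return [content for content, score in scored_items if score < boundary]

items = [("mild joke", 0.1), ("edgy joke", 0.6), ("clearly offensive", 0.9)]

for u in (User("thick-skinned", 0.1), User("easily offended", 0.8)):
    print(u.name, visible_items(u, items))
# the thick-skinned user sees the first two items; the easily offended user sees only the mild joke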

Solve one contradiction, of course, and the next one inherently appears in its wake. Every person on the planet is entitled to hold their own opinion about where the racist/non-racist (or sexist/non-sexist, etc.) boundary sits for them personally, but by the same token they are absolutely not entitled to hold their own truths. Truth-wise, then, the new problem becomes whether it is ever possible to objectively determine what racism is and is not. This too would appear to require a dynamic way of thinking. I’ve heard several comedians making jokes around the phrase ‘different times’. What was apparently ‘acceptable’ in the 1970s clearly appears not to be today. Whether that makes it appropriate to judge historical ‘misdemeanors’ according to today’s norms is yet another contradiction to be solved.

But again, it is ‘merely’ a contradiction. Systematic (Software) Innovation was intended to help software engineers – the future rulers of society, right? – to identify and resolve such conflicts. If any of them were in any way smart, they’d be writing AI algorithms that automatically identified society’s conflicts and conundrums. If they were smarter still, they’d be writing self-evolving code that also helped solve these conflicts and then automatically identified the next contradictions. More fool them if they haven’t started the journey yet. The slower they are to the game, the further ahead PanSensic gets. AI gets intelligent by asking intelligent questions, and intelligent questions start by measuring what’s important rather than what’s easy to measure. It’s very easy to measure racism. Or sexism. Or any other kind of ism. (We know, because we do it every day.) Real intelligence is knowing why we’re measuring it. And what contradictions we’re intending to solve when we do find it. Perhaps our next PanSensic lens should be one for detecting software engineers abdicating their moral and ethical responsibility to think before they code. I don’t think that’s rocket science either.