There is a concerted effort by academics to spread a vague and broad definition of ‘far right’.
Since this comes from universities, it has an aura of objectivity and scientific rationale, and these people are then advising governments and tech companies. For example, the Oxford Internet Institute, a research centre at the University of Oxford, produces some worrying work. Please do not make the mistake of thinking that these are ‘out of touch’ academics in their ivory towers who can simply be mocked and ignored. They have links directly to government, to regulators, to tech and social media firms, and to other influencers, both in terms of giving advice and recommendations and in terms of funding. Some of the work is explicitly aimed at producing ways of ‘disrupting’ vaguely defined ‘extremism’, and at clamping down on the equally vaguely defined ‘hate speech’.
A particular concern is how much of this political work on ‘the far right’ and ‘Islamophobia’ aligns with the efforts of those working to automate the detection of the ‘far right’ and of ‘extremism’ online. Such automation confers great power, and any human errors and biases will simply be magnified. There is a big push for ‘ethics’ in tech firms now, in particular among those using artificial intelligence and machine learning. But the fact that such firms say they are ‘doing ethics’ is no reason to rest easy. For example, the London-based and Google-owned DeepMind has Mustafa Suleyman in charge of ethics, yet even those working in the field elsewhere have difficulty finding out exactly what DeepMind is doing by way of ethics. The very same chap in charge of ‘ethics’ also leads a team working on YouTube recommendation personalisation, a powerful technique for those who wish to ‘disrupt’ ‘extremism’ by blocking certain video recommendations and prioritising others. What is also clear from his statements is that DeepMind’s ethics guy seems not to understand the difference between ethics and social activism. You can read more about this here.
Ever worried about how the term ‘Islamophobia’ is bandied about and used to discredit people? Start worrying a bit harder. The rot goes right to the heart of academia, government and corporate regulation. It is propped up by shoddy ‘research’ that gives it an aura of respectability and objectivity. Such researchers rely on broad, catch-all definitions of ‘far right’ and of ‘Islamophobia’, and often rely upon groups such as Hope Not Hate and the SPLC to determine who counts as ‘far right’. It is quite possible that many of those reading this have had their tweets included in research on ‘far right Islamophobes’. The broad definitions used are worrying for many reasons, not least because they make it hard or impossible to distinguish genuine concerns rooted in fact and reason from the few out there who really do hate and target people simply because they happen to be Muslim or because of the colour of their skin. Further critique of recent research carried out by some members of the Oxford Internet Institute can be found here.
Several researchers from this institute are specifically working on ways of disrupting ‘far right’ and ‘extremist’ groups online, under the auspices of an institution funded by the European Union, and again giving advice to governments and tech companies: take a look here. Is it a coincidence that many of the Twitter accounts included in these various studies have since been suspended? These researchers count as ‘extremist’ such events as the Day of Freedom rally held in London last May 6th in support of free speech. But their ‘academic’ work makes numerous unfounded assumptions and is methodologically weak. University research centres have a gloss of competence, but are often populated not just by leftist ideologues but by incompetents. The salaries for junior research staff are appalling, the career ladder is uncertain, and to get on you have to brown-nose your way to the top. In ‘interdisciplinary’ work such as this, very junior staff or even PhD students are working in areas they know little about, or lack the experience to realise how deficient some aspects of the work are.
Just recently, two more researchers from the OII published an account of their work developing machine learning to detect ‘Islamophobia’ on Twitter: see here. This work is specifically intended for use by those in charge of overseeing social media platforms and for the patrolling of ‘hate speech’ online. In other words, there are implications for who ends up with a criminal record and possibly even who goes to prison. The definition of ‘Islamophobia’ used is so broad that it includes any true statement, just so long as it spreads ‘negativity’ about Islam.
These are all worrying and sinister developments.
Anonymous University Professor