AI is always going to be abused

AI = artificial intelligence

So far, I have never dealt with my former profession or the topics of my education in this blog. As an electrical engineer, you learn how to develop digital and analog systems. With digital systems, you inevitably have to deal with programming, because a microprocessor without firmware/software does nothing at all except consume electricity. In this profession one is very attached to the factual and practical and does not necessarily feel called to influence politics or society. In the meantime, however, there are a number of topics where the fundamentals of electrical engineering would be very helpful in discussing socio-political issues. Danisch (a German blogger) does this every now and then when he looks at current topics from a computer scientist's point of view, which I appreciate very much.

I could turn to the topic of “alternative energies” or the “green energy turnaround (in Germany)” in connection with so-called climate change. But on the one hand, there is already such a confusion of voices that I would hardly have a chance to be heard. I also think it is risky and one-sided to let politicians drive us back into centralized systems for “alternative energies” instead of pushing ahead with local and individual ones. On the other hand, I do not see the urgency, because I remain doubtful about “climate change” (the term that has replaced the supposedly man-made global warming). It starts with the fact that the temperature measurement data are partially corrected (falsified?) and partly only selectively recorded and evaluated. What am I, as an engineer, supposed to do with such data garbage?

Another topic would be “alternative drives”, i.e. replacing the good old combustion engine with the almost equally old electric motor. Amid all the enthusiasm for Tesla, it has been forgotten that the first practically usable electric cars already existed around 1880, and the first vehicles with diesel and petrol engines were built at the same time. We all know who won this competition, and the main reason was (and still is) that with an internal combustion engine the drive energy can simply be filled into a canister, while with an electric motor you have to deal with the battery as energy storage. Battery technology, however, is primarily the domain of physicists and chemists, not necessarily electrical engineers.

The subject of artificial intelligence is much more productive for me. A long time ago, I had something to do with radar sensors that were supposed to provide more safety in the automotive sector as distance warning devices and, if necessary, automatic braking systems. In the further development towards self-driving cars, one runs into the good old trolley problem. Even in a very specialized system, which can only be described as ‘intelligent’ to a limited extent, software with an ethics subroutine is supposed to decide between life and death. Even if the ethics subroutine is legally defined, the question arises as to who is responsible for a death if the electronics or the program do not function properly, or are even – as in the case of the emissions scandal – deliberately undermined. At this point I pricked up my ears, because the problem is only ethical or legal on the surface.


In the last few weeks, however, two issues have been flushed into the public consciousness which, in combination, will have far-reaching social repercussions and are far more important than wind farms, electric cars and self-driving vehicles: on the one hand the so-called processor bug, and on the other the use of AI for censorship of social media.

Meltdown and Spectre didn’t surprise me very much. Anyone who, like me, has been using microprocessors since their school days quickly learns that the manufacturer’s description does not necessarily correspond exactly to the real function. As a small special-purpose customer, who only buys a few tens to hundreds of thousands of chips, you can stamp your feet until they bleed before the processor manufacturer responds to an error report. Usually the detected errors are merely documented in an ‘errata’ sheet. There is no guarantee that the bug will be fixed in the next version. These manufacturers always have one or a few lead customers, and if those don’t raise a shitstorm with the supplier, nothing happens. If you can’t switch to an alternative processor (which usually involves a huge amount of effort and cost), then the electronics engineers have to jump in and either make design changes and/or (as we called it back then) program a ‘balcony’ around the problem. This is exactly what is happening now with the changes in the operating systems (MS, Linux, Apple, Android) that prevent or circumvent the problem, but in the course of which the system may become considerably slower.

For further understanding, I would like to distinguish between special-purpose and universal systems (even if the boundary is often blurred). You can – just as an example – build a music CD player from discrete components and implement exactly the, say, 50 functions of the device. This gives you the guarantee, if you don’t make any mistakes, that the device can only perform these 50 functions and never get into an undefined state. Even though the individual components can be integrated into an ASIC, this solution is usually more expensive than using universal components such as a common microprocessor, which is why hardly any such purely special-purpose electronic systems exist these days. Even in a special-purpose system, one has to cope with EMC (electromagnetic compatibility) and other environmental influences such as temperature, humidity and fluctuations in the power supply, which sometimes lead to unpredictable system states.

The microprocessor itself is therefore a universal system used as a basic building block. To build a special-purpose system with it, the many functions of the processor are reduced to the few needed for, say, the music CD player. The problem is not only that the chip sometimes doesn’t do the things it is supposed to do (as described above), but that it sometimes does things it shouldn’t do. The latter can go unnoticed for years. A PC, laptop or tablet is also a special system in this sense, although far more universal than a CD player. Networking via the Internet adds yet another environmental influence, whose variations are virtually infinite. Even the best security system can only detect the errors, viruses, trojans, etc. that it already knows, or ones very similar to those it already knows.

This small excursion into processor technology hopefully doesn’t read too much like a lecture. The point of the exercise was to make clear that a relatively universal system such as a CPU or commercial computer is ALWAYS prone to errors. It is completely unrealistic to expect that such systems will ever run flawlessly or that every Internet attack could be fended off.


Now to the topic of artificial intelligence and ‘machine learning’ as used by Google, Twitter, Facebook and so on. So far, we have mostly been pleased with the first versions of this AI: the spam filter that protects our e-mail inboxes and comment sections from advertising junk. Perhaps some of you remember the early days of spam detection, when you had to clean up your mailbox manually. Good spam filters, especially centralized ones, are already self-learning and therefore AI. And anyone whose emails end up in an addressee’s spam filter these days may get upset about how unintelligent and perhaps unfair this AI is. This is partly because the spam distributors keep searching for gaps in the system to spread their crap, and the people who program/configure the filter can only react (and sometimes overreact) to close the gap.
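For the non-engineers among the readers: the “self-learning” principle behind such filters can be sketched in a few lines. The following is a toy naive-Bayes-style scorer I wrote purely for illustration – the class, its training data and the word-counting scheme are my own invention, not the actual code of any mail provider or platform:

```python
from collections import Counter

# Toy self-learning spam filter (naive-Bayes flavour).
# Purely illustrative -- real providers use far more signals than word counts.
class SpamFilter:
    def __init__(self):
        self.spam_words = Counter()
        self.ham_words = Counter()
        self.spam_total = 0
        self.ham_total = 0

    def train(self, text, is_spam):
        # "Self-learning": every message a human classifies updates the counts,
        # so the filter's behaviour drifts with whatever it is fed.
        words = text.lower().split()
        if is_spam:
            self.spam_words.update(words)
            self.spam_total += len(words)
        else:
            self.ham_words.update(words)
            self.ham_total += len(words)

    def spam_score(self, text):
        # Product of per-word spam/ham likelihood ratios, with add-one
        # smoothing so unseen words don't zero the score out.
        score = 1.0
        for w in text.lower().split():
            p_spam = (self.spam_words[w] + 1) / (self.spam_total + 1)
            p_ham = (self.ham_words[w] + 1) / (self.ham_total + 1)
            score *= p_spam / p_ham
        return score  # > 1.0 means "looks more like spam than ham"

f = SpamFilter()
f.train("buy cheap pills now", True)     # labelled spam
f.train("meeting notes attached", False)  # labelled ham
print(f.spam_score("cheap pills"))        # well above 1.0
print(f.spam_score("meeting attached"))   # well below 1.0
```

Note the crucial point for everything that follows: the filter has no notion of truth or fairness. It simply reproduces the statistics of whatever its trainers labelled as ‘spam’.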

Now the operators of the Internet platforms were confronted with the demand of ‘flawless democrats’ – and with their own sensitivities – to remove so-called ‘hate speech’ from the net. They already have the necessary technology for this: spam is no longer just unwanted advertising, but conservative or right-wing opinions and facts. As with the spam filter, it takes several years until the filtering works reliably and with a low error rate. For this reason, the above-mentioned platform operators currently employ thousands of people to manually adjust the “hate speech filter”.

As the team led by James O’Keefe at Project Veritas is currently documenting, Twitter hides behind the term ‘algorithms’ when censoring. They still have difficulties carrying out their censorship efficiently and as faultlessly as possible. What they use is AI and machine learning based on modern spam filters. Here, too, employees identified an ethical problem: one cannot advertise the platform with freedom of expression (free speech) on the one hand and censor it so obviously on the other. That would have a deterrent effect on many users. So they came up with the shadow-banning trick, whereby the ‘delinquent’ user doesn’t even notice that nobody can receive his messages except a handful of his fans.
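The nasty elegance of the trick is easy to show in pseudocode form. This is a hypothetical sketch of my own – all names are invented and this is in no way Twitter’s actual implementation – but it captures why the banned user notices nothing:

```python
# Hypothetical illustration of shadow-banning. All names invented;
# not any platform's real code.
shadow_banned = {"delinquent_user"}

posts = [
    {"author": "delinquent_user", "text": "my censored opinion"},
    {"author": "cat_fan", "text": "look at my cat"},
]

def timeline(viewer, posts):
    # A post is shown if its author isn't shadow-banned, OR if the viewer
    # IS that author -- so the ban stays invisible to the banned user.
    return [p for p in posts
            if p["author"] not in shadow_banned or p["author"] == viewer]

print([p["text"] for p in timeline("delinquent_user", posts)])
# the banned user sees both posts and suspects nothing
print([p["text"] for p in timeline("cat_fan", posts)])
# everyone else only ever sees the cat post
```

No error message, no deleted post, no appeal – the censorship happens entirely on the reading side.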

As James Damore’s lawsuit against Google has extensively documented, and as Project Veritas has also found inside Twitter, the large Silicon Valley Internet companies already make sure they hire employees who are as left-wing and politically correct as possible. Conservatives are not hired at all or are quickly mobbed out. It’s no different at Facebook, especially regarding Germany: anyone who doesn’t toe the line and isn’t totally opposed to the AfD (a German right-of-center party) gets fired. This ensures that the AI’s filters are fed with data that sends everything not as far left as these people into digital nirvana. This applies more and more not only to conservative or ‘right-wing’ opinions, but also to facts that could justify such opinions. There are by now enough cases of people who shared a link to a report about a ‘refugee’ who turned out to be a rapist or murderer and were promptly banned or had their profile deleted entirely.

It is clear that this is pure arbitrariness and that the ethics problem needs to be addressed. Given that these platforms are quasi-monopolies, I also call for clear legal regulation. But the political will for this is still thin at present. Usually only those who have recently been blocked or deleted get upset; the great mass of users interested in make-up tips, cat videos and celebrities don’t give a damn about freedom of expression. And if the legal regulation looks like the German NetzDG (the new Internet law), then freedom of expression unfortunately does not benefit at all.


However, even if we imagine that an acceptable legal regulation existed (to prevent terrorist activities, for example), we must ask ourselves whether it would be sufficient. The AI is still there, as are the politically correct staff of the major platforms. Even if the banning/deletion orgies were kept to a minimum, shadow-banning still exists, and there are likely to be other methods in the future that work just as subliminally. The AI must be able to adapt its filters in a self-learning manner, otherwise it is no good; you can’t standardize them and fix them once and for all.

As the spam filters show, a way around them can always be found. The Jew-hater of today no longer pleads for gas chambers on his website; he puts three parentheses around the (((name))) of the person concerned. And how must the filter be extended when Islamists use commonplace words as code for their activities – when the victim is called ‘cats’, the explosives belt ‘cornflakes’ and killing ‘breakfast’? It can take weeks to months until something like this is properly captured, and in the meantime all the cat lovers and cornflakes-for-breakfast eaters complain that their mails and comments end up in the Internet gulag. With such actions, another of the old ‘algorithms’ may slip into the filter again, on the whim of a left-green Facebook employee.
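The collateral damage is not a subtle effect; it falls straight out of how such a blocklist works. Again a toy sketch of my own (the word list and function are invented for illustration, not any platform’s real filter):

```python
# Toy keyword blocklist, updated after the code words above have leaked.
# Invented for illustration -- shows why innocent users get caught.
blocklist = {"cats", "cornflakes", "breakfast"}

def is_blocked(message):
    # A crude filter can only match words; it cannot know intent.
    return any(word in blocklist for word in message.lower().split())

print(is_blocked("the cats get cornflakes for breakfast"))  # True: harmless cat lover flagged
print(is_blocked("meet me at the usual place"))             # False: actual plotting slips through
```

The filter catches every cat lover and misses anyone who has simply moved on to the next set of code words – false positives and false negatives at the same time.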

Even if the AI could ultimately be controlled (which is utopian), there would still be unknown or little-known errors in the processors, computers and operating systems that make hacking possible. It would surely be fun to watch someone hack the Twitter or Facebook filter database and turn everything around by 180 degrees. Then maybe only the ‘Daily Stormer’ would be left on the net, every picture would have a swastika flag as backdrop, and the Googlers would shut down each of their servers for ‘maintenance’ 😀 😀 😀

With this in mind, I can only wonder about the technology fantasists/optimists who, like devotees awaiting an apparition of the Virgin Mary, point to the imminent ‘green energy turnaround’ or the self-driving electric truck of the near future. The probability that Tesla will go bankrupt in the next few years is much higher than that they will become world market leader. If the ‘cultural enrichment’ in Europe goes on like this for a few more years, it is also more likely that we will be heating with wood again in 20 years’ time rather than with solar power. The topic of AI, artificial intelligence, is still in its infancy. There is no doubt that considerable progress will be made in the next 20 years. However, we can already see how fast and loose they play with relatively small AI systems. The doors are wide open to abuse and arbitrariness. The ethical problems can be solved only superficially and legally at best. The underlying technical problem – that universal systems will always be error-prone – cannot be solved from today’s point of view.


Despite some readers’ impressions to the contrary, this is not an article in the doomsday-pornography category. When I order my Happy Meal at McDonald’s on a tablet instead of from an employee, I won’t starve to death if the AI, through ideological programming, refuses to sell anything to white (or brown) men. If the AI in the collision system of a Mercedes S-Class prefers to run over an old white (or brown) man rather than a black (or white) woman, that is tragic but not earth-shattering. Restricting freedom of expression, however, will have disastrous consequences, with or without AI. I just wanted to get the idea out of some people’s heads that AI will solve all of mankind’s problems, or that AI is humane and fair. No, the AI is just as stupid and nasty as the person who programs it or builds the filter database. It can be even dumber and nastier than the programmer, and we might not even become aware of it.

All right, have a nice evening.

[end of very long rant]

PS: The picture today is again from pixabay

Translated chunk by chunk, with around 46 handmade corrections by myself 😀

PS2: A commentator on the German version of this article pointed me towards the DeepL translator, which turned out to be rather useful 😉 , since I had become way too lazy to translate all my articles into English. He also referred to this article as Part 1, “the ghost in the machine”, with Part 2 being “the sorcerer’s apprentice” (J.W. von Goethe) and its “spirits that I’ve cited, my commands ignore”. Very nicely put.

