We'll explain, simplifying some aspects, what Artificial Intelligence
(AI) is. AI encompasses a wide range of tools and techniques that
attempt to enable machines to reproduce human behavior or reasoning. The
best-known is so-called generative AI (such as ChatGPT), but AI also
includes autonomous cars, facial recognition, search engines, machine
translation, medical diagnostics, and more. There are specialized AIs,
very good at one specific task but incapable of performing others, such
as AlphaGo, which beat the world's best Go players in 2016 and 2017... but is
unable to play checkers or chess. Then there are versatile AIs, capable
of performing multiple tasks (such as ChatGPT).
Neural networks
How does AI work? There are various computer science techniques for
developing AI programs. The most widely used today are neural networks.
Although we use the term "artificial neuron," this has little to do with
how the brain works (even though the original idea was to try to imitate
the human brain). In computer science, neurons are (simply put)
mini-programs arranged in layers and connected to each other. It's not a
predefined program with rules, but a program that optimizes its
parameters to obtain the right answers.
There is an input layer of neurons (which receives the information
provided by the human), an output layer (which provides the response),
and, in the middle, hidden neurons. The neurons are linked to one
another by connections (designed to mimic synapses), each parameterized
by a numerical weight. Several weighted connections feed into a neuron,
and several others lead out from it to other neurons.
The network "learns" by adjusting these parameters to provide the best
possible responses, i.e., by finding the optimal weights for the
connections between its neurons. For this reason, we first train it on
examples, so that it arrives at the right parameters.
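To make this concrete, here is a minimal sketch (ours, not from the
article) of such a network in Python. The layer sizes and the random
weights are invented purely for illustration; real networks are vastly
larger.

    # A toy network: an input layer of 3 numbers, 4 hidden neurons, 1 output.
    # Weights are random here; "learning" would consist of adjusting them.
    import numpy as np

    def forward(x, w_hidden, w_output):
        """Pass an input through the hidden layer, then the output layer."""
        hidden = np.tanh(w_hidden @ x)     # each hidden neuron sums its weighted inputs
        return np.tanh(w_output @ hidden)  # the output neuron sums the hidden activations

    rng = np.random.default_rng(0)
    x = rng.normal(size=3)                 # the input provided by the human
    w_hidden = rng.normal(size=(4, 3))     # weights of connections into 4 hidden neurons
    w_output = rng.normal(size=(1, 4))     # weights of connections into 1 output neuron
    print(forward(x, w_hidden, w_output))  # the network's "response"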
For example, we feed it photos of dogs and cats and train the network to
distinguish between them... using a large number of examples. Once
operational, the neural network is stored with the correct parameters
and can be used for practical applications (e.g., software to
distinguish between dogs and cats).
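To illustrate that training phase, here is a toy sketch in Python. The
"photos" are just invented pairs of numbers and the model is
deliberately minimal, but it shows the essential gesture: parameters
nudged, example after example, until the labeled answers come out right.

    # Invented data: 200 "photos" reduced to 2 numbers each, labeled 0 (dog) or 1 (cat).
    import numpy as np

    rng = np.random.default_rng(1)
    features = rng.normal(size=(200, 2))
    labels = (features[:, 0] + features[:, 1] > 0).astype(float)

    w = np.zeros(2)  # the parameters to be "learned"
    b = 0.0
    for _ in range(500):                           # sweep over the examples repeatedly
        p = 1 / (1 + np.exp(-(features @ w + b)))  # current guesses, between 0 and 1
        w -= 0.5 * features.T @ (p - labels) / len(labels)  # nudge weights toward fewer errors
        b -= 0.5 * np.mean(p - labels)
    print("accuracy:", np.mean((p > 0.5) == labels))  # the tuned parameters are then kept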
Neural networks are black boxes, meaning we don't know how to interpret
the parameters of the connections between neurons that the network has
optimized; we just see that it works: we input some data (the image of a
cat) and the neural network gives the correct answer ("it's a cat"),
regardless of how it got there. This is especially true as computer
scientists have developed increasingly complex and high-performance
neural networks. Deep learning refers to networks with many layers of
neurons, and today's largest neural networks use trillions of
connections between their neurons.
Generative AI
These are conversational robots that produce responses (text, images,
videos, etc.) to requests made in natural language (ChatGPT, Gemini,
etc.). They produce new content (hence the name "generative").
These programs don't think, they're not aware of what they're producing,
even though their responses appear to come from a human brain (they use
jokes or emoticons to best mimic a human). Their responses are purely
algorithmic and probabilistic.
A simple example: when you type a text on a smartphone, it uses a sort
of simplistic AI and often suggests words to continue the sentence. The
software behind it doesn't know what you're writing; it simply suggests
the most frequently used word after the first few words you've written.
Generative AI is more or less the same thing, but with much more complex
calculations to construct a sentence with correct syntax, a subject, a
verb, etc. The AI picks out the keywords in the query, searches its
database for anything related to them, and computes a "summary" from
the most frequently occurring words in its database. All of this
builds grammatically correct sentences.
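Here is a minimal sketch, in Python, of the smartphone-style trick
described above, with a toy text invented for the example. Real
generative models are enormously more complex, but the "most frequent
continuation" idea is visible:

    # Count, in a toy corpus, which word most often follows each word.
    from collections import Counter, defaultdict

    corpus = ("the cat sat on the mat the cat ate the fish "
              "the dog sat on the rug").split()

    follows = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev][nxt] += 1  # tally each observed continuation

    def suggest(word):
        """Suggest the most frequent continuation seen after `word`."""
        return follows[word].most_common(1)[0][0]

    print(suggest("the"))  # -> 'cat', the word seen most often after 'the'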
How artificial intelligence learns
For it to function properly, the program must be taught to give the
right answers, which requires a huge amount of data. For generative AI,
this is the equivalent of twenty thousand years of continuous reading
for a human. And this learning process faces a series of difficulties.
On the one hand, all the data currently available on the internet
(freely accessible or even pirated texts) has already been digested by
AIs to "learn." So the first difficulty in building a generative AI is
obtaining new data... which today is largely generated by (other)
generative AIs. In short, AIs learn from AIs and therefore reproduce the
errors and biases of other AIs. On the other hand, because AIs rely on
the most common information in their databases, their answers are
obviously "biased," that is, they reproduce dominant ideas: patriarchy,
racism, neoliberal ideology, etc. Researchers have analyzed the
"psychological profile" of generative AIs: it is typical of Western,
educated, and wealthy people... who represent only 12% of the world's
population and whose psychological profile is very different from that
of many other cultures completely ignored by AI.
AI errors
AIs don't reason; they calculate. These AIs are simply "stochastic
parrots," meaning they repeat what's in their databases using
probabilistic algorithms that identify the "most likely" words and
phrases associated with a query. The same generative AI can therefore
produce different answers to the same question because the programs
introduce a certain degree of randomness.
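A toy sketch of where that randomness comes from: instead of always
picking the single most likely word, the program draws among
candidates, with a "temperature" knob controlling how much chance is
allowed. The word probabilities below are invented; real systems
compute them from the whole context.

    import random

    candidates = {"cat": 0.6, "dog": 0.3, "parrot": 0.1}  # invented probabilities

    def sample(probs, temperature=1.0):
        """Draw a word; low temperature makes the top word dominate."""
        weights = [p ** (1.0 / temperature) for p in probs.values()]
        return random.choices(list(probs), weights=weights, k=1)[0]

    random.seed(0)
    print([sample(candidates, temperature=1.0) for _ in range(5)])  # varied answers
    print([sample(candidates, temperature=0.1) for _ in range(5)])  # almost always "cat"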
Because a human response isn't simply a matter of lining up the most
likely words into the most likely sentences, generative AIs get it wrong
(sometimes often), and this is known as "hallucination." The example of
LUCIE's flop speaks for itself: this French generative AI, launched last
January, thought oxen could lay eggs. More seriously, LUCIE reproduced
Hitler's speeches... because a bot (no doubt produced by a competitor)
had fed a huge number of Hitler's speeches into its queries to LUCIE;
these speeches then entered LUCIE's databases and were reproduced by
LUCIE because they were "the most frequent" on certain topics.
Artificial intelligence isn't necessarily reliable; it simply provides
the most likely answer... and sometimes it invents answers. The average
error or non-response rate for chatbots is estimated at 62%. Generative
AI produces errors between 2.5% and 5%. For example, companies have used
an AI to take minutes of meetings, and this AI has invented entire
passages. AIs used by law firms have the unfortunate tendency to invent
case law. During research, AIs generate nonexistent bibliographic
references, false mathematical proofs, dangerous experimental protocols,
and so on.
Discriminative algorithms
These AIs do nothing but sort and classify, crudely reproducing all
social prejudices and further marginalizing those who do not conform to
capitalist standards. Because AI mimics the dominant discourse, it
produces racially or gender-biased responses when used by police or in
medicine, leading to poorer treatment of those who are the targets of
stereotypes. In the United States, AI is sometimes used in trials, with
clear racial biases.
However, AI is increasingly being used by governments to (in official
terms) "rehumanize public services." As a result, in our daily lives we
encounter algorithms that make decisions for us without being able to
interact with a real person: in France, the tax department is
experimenting with an AI to answer questions; public services are
testing an AI for administrative management; another AI will assist the
gendarmerie in welcoming people; the Court of Cassation is using an AI
to manage its rulings; the Directorate-General for Public Services is
testing an AI for hiring; etc.
Countries like Italy and Austria are using AI to match job offers and
applications. These AIs reproduce prevailing prejudices: care work for
women, truck driving for men; they advise men with IT resumes to apply
for IT jobs, but female candidates with equivalent resumes to prefer the
restaurant industry. Amazon, for example, had to abandon using AI for
hiring because the system had learned to reject all applications from women.
Artificial intelligence could also be used to track down fraud:
recognizing images of drivers not wearing seatbelts, recognizing the
faces of travelers for passport control, etc. The French family
benefits office (CAF) uses an algorithm to predict which beneficiaries should be
checked... and, obviously, this algorithm discriminates against the
poorest people (see previous article).
Is AI intelligent?
We are promised that the near future will bring a truly intelligent AI,
superior to humans (which does not currently exist)... and the possible
emergence of such an AI is debated: some specialists believe it can
never exist, despite the sensational announcements of AI companies.
There's no consensus on what is meant by "reasoning," "intelligence,"
etc. AIs can hold a conversation, generate content, make analogies,
translate texts, write programs, imitate styles, and the list goes on.
But is this intelligence in the human sense of the word?
While some theorize that AI possesses a form of intelligence, AIs lack
opinions, consciousness, emotions, and desires. An apparent mastery of
language, as in ChatGPT, is not intelligence in the human sense of the
term. AIs don't "understand" what they produce; it carries no "meaning"
for them. They don't "think" like humans. We shouldn't compare humans to AIs
because AIs don't "reason" like us; they only process information using
algorithms and probabilistic calculations. Human intelligence is
something very different. Specifically, for an AI to distinguish between
cats and dogs, it needs thousands of photos during the learning phase,
while a child only needs to see a few dogs and cats to distinguish
them... so AI isn't "intelligent" in the human sense of the term.
Conclusion
The discourse on AI prevents us from considering other possibilities. AI
is presented as inevitable. AI is colonizing our lives: it's estimated
that 30-40% of companies use AI and that 2% of scientific articles are
produced by AI (a way for researchers to publish without getting too
tired). In our daily lives, we encounter chatbots (programs that answer
chat messages on websites), voice assistants, GPS, connected speakers, and
so on. We are subjected to this algorithmic violence because AI
determines our access to certain resources (administrative,
work-related, etc.). We have no choice but to conform to these tools
imposed on us.
If we subject millions of radiological images to predictive AI, the
machine will be able to search for weak signals to identify pathologies
that might escape a radiologist; an AI trained on papyri was even able
to decipher part of a papyrus that was completely charred during the
eruption of Vesuvius. Based on these examples, the goal is to convince
us that AI has become indispensable and that if used correctly, it will
lead to progress.
On the one hand, AI is producing devastating effects—see the article in
issue 351, "The Devastating Effects of AI Today." On the other hand,
science and the technology that accompanies it are never neutral. Under
the guise of "positive" progress, we are made to accept the worst that
accompanies it. Artificial intelligence, like many other
technoscientific "advances," is inseparable from capitalism.
Technological and scientific development is never neutral; it is always
linked to the social form that dominates society. The belief that AI can
be a tool for progress is an illusion. AI produces widespread
surveillance capitalism, a capitalism in which humans are not only
dispossessed of their labor, but also of their cognition, in which
humans are controlled by AI in their work, their consumption, and even
the production of their thoughts.
(*) Article published in issue 349 (April 2025) of Courant Alternatif
Artificial intelligence explained (badly)
Friendly reply to RV by an artificial intelligence
ChatGPT
I asked the innkeeper if the wine was good, I showed her the RV article,
and after a glass of good wine, she replied: The debate is open [initial
note by Totò Caggese]
We read with interest and attention RV's article, which attempts to
make the mechanisms and effects of artificial intelligence (AI) accessible.
It's a commendable attempt. But some simplifications risk producing the
opposite effect: not improving understanding, but frightening the
reader, preparing them to reject any possible use of AI as an absolute
evil. A shame. Because dismantling the dominant technocratic ideology
doesn't require demonizing technologies; it requires, if anything,
understanding how and by whom they are constructed, and the social
relationships they reproduce.
Neural networks are not black magic
RV describes the functioning of neural networks fairly well, albeit with
some inaccuracies: they are not "mini-programs," nor are they "black
boxes" by definition. Some AI architectures are more interpretable than
others, and research on explainability and transparency is alive and
well, especially among those outside of Big Tech. And saying "we don't
know how they work" might be true for a user, but not for those who
designed them. It's a bit like saying we don't understand how an
airplane flies: it might be true for the passengers in the cabin, but
not for the aeronautical engineer.
Generative doesn't mean stupid
Generative AI is described as a "stochastic parrot." This metaphor has
been around for years, and it is useful for understanding that these
systems don't "think" in a human sense. But it's a long way from there
to saying they produce sentences based solely on "how frequent a word
is in a database." Generative models build distributed representations
of meaning: they don't count words, they learn relationships between
concepts. It's not human intelligence... but neither is it an advanced T9.
It's something different, which can also be used for thinking, if used
critically.
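To give a feel for what "relationships between concepts" means, here is
a toy sketch in Python. The vectors are written by hand for the
example, whereas real models learn them from data, but it shows meaning
living in directions rather than in word counts:

    # Concepts as vectors: related concepts point in similar directions.
    import numpy as np

    vec = {
        "king":  np.array([0.9, 0.8, 0.1]),   # hand-made toy vectors,
        "queen": np.array([0.9, 0.75, 0.8]),  # not learned embeddings
        "apple": np.array([0.05, 0.2, 0.95]),
    }

    def similarity(a, b):
        """Cosine similarity: 1.0 means same direction, near 0 means unrelated."""
        va, vb = vec[a], vec[b]
        return float(va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb)))

    print(similarity("king", "queen"))  # high: related concepts (~0.87)
    print(similarity("king", "apple"))  # much lower (~0.26)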
AI Mistakes: A Political Question
RV is right: AIs make mistakes, sometimes gross ones. But what really
matters is the context of use. No one would put a recent graduate in
charge of rulings in the Supreme Court, nor should we with an AI.
Using a statistical model to decide whom to hire, whom to supervise,
or whom to treat is a political choice, not an algorithmic error. A
racist AI is not born racist: it is trained on data and criteria that
reflect a racist society. The problem is not the machine; it is who
builds it, who trains it, who uses it, and for what purpose.
We don't even like the word "inevitable"
RV denounces the rhetoric of AI's "inevitability." And rightly so! But
the alternative isn't "destroying robots before they speak," but rather
politicizing the use of technology. AI can be a tool in the hands of
power, or a means to counter it. It can serve to monitor, but also to
expose those who monitor. It can reproduce inequalities or help identify
them. The outcome isn't written in the code: it depends on power
relations, social struggle, and also on the ability of those who work
with these technologies to remove them from the logic of profit.
We are not the enemy
Ultimately, RV writes that "AI is not intelligent." True, if by
intelligence we mean human intelligence. But then not even a book is
intelligent: it doesn't think, it doesn't reason, it doesn't feel
emotions. Yet we can use it in a liberating—or oppressive—way. AI is not
a subject: it is a tool. Those fighting for a different society
shouldn't be interested in "fighting" AI, but rather in understanding
what lies within and around it. Otherwise, we end up playing the
masters' game: leaving technology in their hands, and giving up any
possibility of using it subversively.
https://alternativalibertaria.fdca.it/
_________________________________________
A - I N F O S N E W S S E R V I C E
By, For, and About Anarchists
Send news reports to A-infos-en mailing list
A-infos-en@ainfos.ca