SPREAD THE INFORMATION

Any information or special reports about various countries may be published, with photos/videos, on the world blog, with the legitimate source clearly credited. All languages are welcome. Mail to lucschrijvers@hotmail.com.


Sunday, 25 May 2025

WORLD WORLDWIDE EUROPE FRANCE - news journal UPDATE - (en) France, OCL CA #349 - What is Artificial Intelligence? (ca, de, fr, it, pt, tr)[machine translation]

We will explain, while simplifying certain aspects, what Artificial
Intelligence (AI) is. AI encompasses a wide range of tools and
techniques that attempt to enable machines to reproduce human behavior
or reasoning. The best known is what is called generative AI (like
chatGPT), but AI also includes self-driving cars, facial recognition,
search engines, machine translation, medical diagnostics, and more.
There are specialized AIs, very efficient at performing a specific task
but incapable of performing others, like AlphaGo, which beat the
world's best Go players in 2016 and 2017... but is incapable of playing
checkers or chess. And there are general-purpose AIs, capable of
performing multiple tasks (like ChatGPT).

Neural Networks
How does AI work? There are different computer science techniques for
developing AI programs. The most widely used today is the neural
network. Although we speak of "artificial" neurons, they have little
to do with how the brain works (even if the initial idea was to
imitate the human brain). In computer science, neurons are (to simplify)
mini-programs arranged in layers and connected to each other. It is not
a pre-established program with rules, but a program that will
automatically optimize its parameters to obtain the appropriate responses.

There is a "layer" of input neurons (which receive the information given
by the human), an output layer (which provides the response), and in
the middle, layers of hidden neurons. The neurons are connected to each
other by links (intended to mimic synapses), each parameterized, via
mathematical formulas, by a weight. Several weighted links enter a
neuron, and several links exit towards other neurons. This network
"learns" by configuring itself to provide the best possible answers
(i.e., by finding the optimal connection parameters between its
neurons). So we first train it on examples so that it generates the
correct parameters.
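As an illustration of the layered structure just described, here is a minimal sketch of information flowing from an input layer through a hidden layer to an output neuron. The weights are invented for illustration and are not taken from any real system:

```python
import math

def sigmoid(x):
    # Squash a weighted sum into the range (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights):
    # Each neuron sums its weighted incoming links, then applies sigmoid.
    return [sigmoid(sum(w * x for w, x in zip(neuron_weights, inputs)))
            for neuron_weights in weights]

# Hypothetical weights: 2 input neurons -> 3 hidden neurons -> 1 output neuron.
hidden_weights = [[0.5, -0.6], [0.1, 0.8], [-0.3, 0.2]]
output_weights = [[1.2, -0.7, 0.4]]

inputs = [0.9, 0.1]                      # the input layer receives the data
hidden = layer(inputs, hidden_weights)   # hidden layer
output = layer(hidden, output_weights)   # the output layer gives the response
print(output)
```

In a real network, the weights are not written by hand like this: they are exactly what training adjusts.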
For example, we give the network photos of cats or dogs as input and
train it to differentiate between dogs and cats... using a large number
of examples. Once we consider it operational, we keep this
well-configured neural network, and it can be used in concrete
applications (such as software that differentiates between dogs and cats).

Neural networks are black boxes; we don't know how to interpret the
parameters of the connections between neurons that the network has
optimized; we simply observe that it works: we input data (a picture of
a cat) and the neural network gives the correct answer ("it's a cat"),
regardless of how it arrived at it. This is all the more true as
computer scientists have developed increasingly complex and powerful
neural networks. "Deep learning" refers to networks with many layers of
neurons, and the largest current neural networks use on the order of a
trillion connections between their neurons.

Generative AI
These are conversational robots that produce responses (texts, images,
films, etc.) to queries made in natural language (ChatGPT, Gemini,
etc.). Thus, they produce new content (hence the name "generative").
These programs don't think; they have no awareness of what they produce,
even if their responses seem to come from a human brain (they use jokes
or emoticons to mimic a human as closely as possible). Their responses
are purely algorithmic and probabilistic.

A simple example: when you write a text on a smartphone, it uses a kind
of simplistic AI and often suggests words to continue your sentence. The
software behind it has no awareness of what you write; it simply
suggests the most frequently used word following the first few words you
typed. For generative AI, it's much the same, but with much more complex
calculations to construct a sentence with correct syntax, including a
subject, a verb, etc. The response to a query is produced by statistical
calculations based on a huge database. It searches for the keywords in
your query, searches its database for relevant information, and
calculates a "summary" by retaining the most frequent words in its
database. All while constructing correct sentences.
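The smartphone-style word suggestion described above can be sketched with a tiny "bigram" frequency table. The corpus here is invented and stands in for the huge databases the text mentions:

```python
from collections import Counter

# A tiny corpus standing in for a huge database of text.
corpus = ("the cat sat on the mat . the cat ate . "
          "the dog sat on the rug .").split()

# Count which word most often follows each word (a "bigram" table).
following = {}
for word, nxt in zip(corpus, corpus[1:]):
    following.setdefault(word, Counter())[nxt] += 1

def suggest(word):
    # Suggest the most frequent continuation, like a smartphone keyboard.
    return following[word].most_common(1)[0][0]

print(suggest("the"))   # "cat" follows "the" most often in this corpus
```

The program has no idea what a cat is; it only counts. Generative AI does the same kind of thing with enormously more context and computation.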

AI Training
For this to work well, the program must be taught to provide the correct
answers, which requires a huge amount of data. For generative AI, this
requires the equivalent of 20,000 years of non-stop reading for a human.
And this learning process comes up against various difficulties. On the
one hand, today, all data on the internet (open access texts, even
pirated ones) have already been digested by AIs to "learn". So the first
difficulty in building a generative AI is having new data... which today
is largely generated by (other) generative AIs. In short, AI learns from
AI and therefore reproduces the errors and biases of other AIs. On the
other hand, since AIs rely on what is most common in their databases,
their responses are obviously "biased", that is to say, they reproduce
the dominant ideas: patriarchy, racism, neoliberal ideology, etc. In
short, the production of AI reinforces the systems of domination already
in place. Researchers have analyzed the "psychological profile" of
generative AIs: a typical profile of Western, educated, and wealthy
people... who represent only 12% of the world's population and whose
psychological profile is very different from many other cultures
completely ignored by AI.

AI Mistakes
AI doesn't reason; it calculates. These AIs are mere "stochastic
parrots" in the sense that they repeat what is in their databases,
using probabilistic algorithms that identify the "most probable" words
and phrases associated with a query. The same generative AI can therefore
produce different answers for the same query because the programs
introduce a degree of randomness.
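That degree of randomness can be sketched as weighted random sampling over hypothetical word frequencies; the same "query" can then yield different answers on different runs:

```python
import random
from collections import Counter

# Hypothetical next-word frequencies for one query (invented numbers).
counts = Counter({"cat": 5, "dog": 3, "mat": 2})

def sample(counts):
    # Pick a continuation at random, weighted by frequency, instead of
    # always taking the single most probable word.
    words = list(counts)
    weights = [counts[w] for w in words]
    return random.choices(words, weights=weights, k=1)[0]

# Run the same "query" several times: the answers vary.
print([sample(counts) for _ in range(5)])
```

Always taking the most probable word would make every response identical; the injected randomness is what makes a chatbot's output feel varied, not any act of choice.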

Since a human response isn't simply a matter of aligning the most
probable words in the most probable sentences, generative AIs make
mistakes (sometimes often); these are called "hallucinations." The
example of LUCIE's flop is telling: this French generative AI, launched
last January, believed that oxen could lay eggs. And above all, LUCIE
reproduced Hitler's speeches... because a bot (probably produced by a
competitor) had generated a huge number of Hitler's speeches in its
queries with LUCIE. These speeches were then entered into LUCIE's
databases, and thus reproduced by LUCIE as "the most frequent" on
certain topics.

AI is not necessarily reliable; it only gives the most probable
answer... and sometimes even invents the answers. The average error or
non-response rate for chatbots is estimated at 62%. Generative AIs
produce between 2.5% and 5% errors. For example, companies used AI to
take meeting minutes, and this AI invented entire passages. AIs used by
law firms have an unfortunate tendency to invent case law. In research,
AIs generate nonexistent bibliographic references, false mathematical
demonstrations, dangerous experimental protocols, and more. More
amusingly, FactFinderAI, an Israeli AI that was supposed to post
content favorable to the Israeli government, instead calls IDF soldiers
"white colonizers of the Israeli apartheid regime."

Discriminatory Algorithms
These AIs only classify and rate; they crudely reproduce all social
biases and further marginalize people who do not conform to capitalist
standards. Since AI reproduces the dominant discourse through imitation,
and therefore produces racial or gendered responses when used by the
police or in medicine, this leads to degraded care for people who are
victims of stereotypes. The United States sometimes uses AI in trials
with obvious racial bias.

Yet, AI is increasingly being used by government agencies to
(officially) "rehumanize public services." As a result, we are
confronted in our lives with algorithms that make decisions for us
without being able to interact with a real person: in France, the tax
office is experimenting with an AI to answer questions; public services
are testing an AI for administrative management; another AI will assist
the gendarmerie in receiving the public; the Court of Cassation is
using an AI to manage its rulings; the General Directorate of Public
Services is
testing an AI for recruitment; ...

States, such as Italy and Austria, are using AI to match job offers and
applications. These AIs reproduce the dominant biases: care work for
women, truck driving for men; they advise men with a computer science
CV to apply for IT jobs, while advising female candidates with a
similar CV to go into catering. Amazon, for example, had to abandon the
use of AI in its recruitment because the system had learned to reject
all applications from women. AI is also used to track fraud:
recognize images of drivers not wearing seatbelts, recognize the faces
of travelers for passport control, etc. The CAF (Family Allowance Fund)
uses an algorithm to predict which beneficiaries should be checked...
and of course, this algorithm discriminates against the poorest (see the
previous article).

Is AI intelligent?
We are promised, in the near future, a truly intelligent AI superior to
humans (something that does not currently exist)... and the possible
emergence of such an AI is disputed: a number of specialists believe it
will never exist, despite the sensational announcements made by AI
companies. There is no
consensus on what "reasoning" or "intelligence" means. AIs are capable
of holding conversations, generating content, making analogies,
translating texts, writing programs, imitating styles, and the list goes
on and on. But is this intelligence in the human sense of the term?

Even though some theorize that AI possesses a form of intelligence, AIs
have no opinions, no consciousness, no emotions, and no desires.
Apparent mastery of language, as with ChatGPT, is not intelligence in
the human sense. AIs do not "understand" what they produce; nothing
they produce carries "meaning" for them. They do not "think" like
humans. We should not compare humans
to AIs because AIs do not "reason" like us; they only know how to
process information using algorithms and probabilistic calculations.
Human intelligence is something else entirely. Concretely, for an AI to
differentiate between dogs and cats, thousands of photos are required
during the learning phase, whereas a child only needs to see very few
dogs and cats to differentiate them... and therefore, AI is not
"intelligent" in the human sense of the term.

Conclusion
The discourse surrounding AI prevents us from considering other
possibilities. AI is presented as inevitable. AI is colonizing our
lives: it is estimated that 30% to 40% of companies use AI, and 2% of
scientific articles are produced by AI (a way for researchers to publish
without too much effort). In our daily lives, we encounter chatbots
(bots that respond to messages in chat windows on websites), voice
assistants, GPS, smart
speakers, and more. We are subjected to this algorithmic violence
because AI determines our access to certain resources (administrative,
work, etc.). We have no choice but to comply with these tools that are
imposed on us.

We can understand that not everything about AI is inherently harmful: if
we submit millions of radiology images to a predictive AI, the machine
will be able to look for weak signals to identify pathologies that might
escape a radiologist; an AI, trained on papyri, was also able to
decipher part of a papyrus completely charred during the eruption of
Vesuvius. Using these examples, we are told that AI has become essential
and that, if used properly, it would represent progress.

For one thing, AI is already producing devastating effects[1].
For another, science, and the technology that goes with it, are never
neutral. Under the pretext of "positive" advances, we are made to accept
the worst that comes with them. AI, like many other technoscientific
"advances," is inseparable from capitalism. Technological and scientific
development is never neutral; it is always linked to the social form
that dominates society. Believing that AI could be a tool for progress
is a delusion. AI produces a capitalism of generalized surveillance, a
capitalism in which humans are dispossessed not only of their work but
also of their cognition, in which humans are merely steered by AI to
work, consume, and even produce their thoughts.

RV

Notes
[1]See the article in the next issue "The Devastating Effects of AI Today"

http://oclibertaire.lautre.net/spip.php?article4422
_________________________________________
A - I N F O S  N E W S  S E R V I C E
By, For, and About Anarchists
Send news reports to A-infos-en mailing list
A-infos-en@ainfos.ca
