Information and special reports about any country may be published on the world blog, with photos/videos, provided a legitimate source is clearly credited. All languages are welcome. Mail to lucschrijvers@hotmail.com.
Pandora’s Box: Generative AI, ChatGPT, and Human Rights
Pandora’s Box is open, and ChatGPT and generative artificial intelligence (AI) are here to stay. Screenshots of chatbot conversations and AI-generated images are papering social media, and we can now “interact” with chatbots on search platforms.
ChatGPT is arguably the best-known generative AI product. Since its release in November 2022, multiple tech companies, including Google, Amazon, and Baidu, have released their own products.
But as companies race to develop the newest tool, Human Rights Watch (HRW) is asking: what's behind this technology? Who feeds it data, and who decides where the data comes from? And what does it have to do with human rights?
Well, turns out, a lot. I spoke with Anna Bacciarelli, program manager in HRW’s Tech and Human Rights division, about the questions at the center of this debate. Read some excerpts below.
How is this technology different?
This is really the first time that advanced, creative AI applications are accessible to anyone with a computer or smartphone.
Human Rights Watch has been working on AI's impact on rights for the past five years, and generative AI is in many ways an extension of well-known concerns about AI and machine learning: heightened risks of surveillance and discrimination, and a lack of accountability when things go wrong.
What are some concerns around privacy and data security?
Be careful what you type! We should assume everything we input into these products is, to some extent, being used to train and "improve" the model. But we don't have enough information to know how our data is being used, or whether it can be linked back to individual identities.
What about misinformation and disinformation?
It's likely going to be a big problem. How do you trust what you see? We already have a problem knowing what is real online, and that is about to get a whole lot bigger.
Are these systems reliable?
These systems are known to produce falsehoods and inaccuracies, and they are opaque by design.
The consequences for this are serious. There’s the case of the judge in Colombia who said he queried ChatGPT while preparing a judgement. How much could this tech influence a courtroom decision? In Belgium, a woman says her husband died by suicide after his interactions with a generative AI chatbot.
The bottom line: companies are rushing to put out products that are not safe for general use.
We need tech makers and regulators to pause and consider some of the big questions: How could this be misused? Even with the best of intentions, what could go wrong? Can this cause harm, and if so, what can we do about it?
Police in the Xinjiang region of China rely on a master list of 50,000 multimedia files they deem "violent and terrorist" to flag Uyghur and other Turkic Muslim residents for interrogation. Over nine months from 2017 to 2018, police conducted nearly 11 million mobile phone searches.
The two warring armed forces in Sudan have repeatedly used explosive weapons in urban areas, killing civilians, damaging property and critical infrastructure, and leaving millions without access to necessities.
A new report documents how Croatian authorities are engaging in pushbacks of migrants and asylum seekers, including unaccompanied children and families with young children.
This new documentary-style video uses witness testimonies and forensic investigations to show how forces from both Kyrgyzstan and Tajikistan committed apparent war crimes in attacks on civilians during their brief but intense four-day armed border conflict in September 2022.
The film focuses on events that took place on a single day, September 16.