
A barrage of warnings about the dangers of unchecked artificial intelligence

Last weekend, a bizarre image of Pope Francis made the rounds on social media around the world.

The Pontiff was seen wearing a long white puffer jacket, prompting many users to comment that the head of the Roman Catholic Church looked more like... a trap rapper.


The jacket and the warning

In Pope Francis's case, his often unconventional public stances made the scenario slightly more plausible: that he really had appeared in an outfit that might bring him closer to young people.

But the truth was very different. The Pope had not been photographed in anything more casual than his usual attire. The viral image was (reportedly) the work of a 31-year-old construction worker from Chicago, who claimed credit for creating it with the Midjourney AI image generator.

We will never know whether that incident prompted Pope Francis to warn of the dangers of artificial intelligence days later, at a high-level annual meeting of scientists and experts held on Monday at the Vatican.

"I am convinced that the development of artificial intelligence and machine learning has the potential to contribute in a positive way to the future of humanity," the Pontiff said at first, only to warn shortly afterwards: "At the same time, I am certain that this potential will be realized only if there is a constant and consistent commitment on the part of those developing these technologies to act ethically and responsibly."

"We cannot allow algorithms to limit or compromise respect for human dignity, or to exclude compassion, mercy, forgiveness and, above all, the hope that people are capable of change," he said.

A barrage of warnings

It was the first shot in a barrage of messages that would follow in the coming days about the dangers posed by the unchecked development of artificial intelligence systems.

This was followed by a resounding intervention by scientists, technology "gurus", entrepreneurs and experts from around the world who, in an open letter published by the Future of Life Institute (which by midday Friday had gathered more than 1,800 signatures), called for a moratorium on the development of systems more powerful than GPT-4, the latest model behind ChatGPT.

"AI systems with human-competitive intelligence can pose profound risks to society and humanity," warn the text's signatories, which include Tesla CEO Elon Musk and Apple co-founder Steve Wozniak. "Advanced artificial intelligence could be a profound change in the history of life on Earth and should be designed and managed with commensurate care and resources," the letter states. "Unfortunately, this kind of planning and management does not exist, even though in recent months AI labs have been locked in an uncontrollable race to develop increasingly powerful digital minds that no one – not even their creators – can to understand, predict or control reliably'.

Firing the watchdogs

In fact, the situation is somewhat worse: companies not only cannot predict the potential behavior of AI platforms, they are not even trying. Instead, they are laying off staff from the teams tasked with assessing the ethical issues surrounding the development of artificial intelligence.

As the Financial Times reported on Wednesday, Microsoft, Meta, Google, Amazon and Twitter (owned by Elon Musk, who nevertheless co-signed the letter on the dangers of uncontrolled AI) are among the companies that have cut members of their "responsible AI teams", which advise on the safety of consumer products that use AI systems.

The number of staff laid off is in the dozens, representing, as the British paper notes, only a small fraction of the tens of thousands of tech workers laid off in recent months. And the companies say they remain committed to developing safe AI products.

However, as the FT pointed out, experts say the cuts are worrying because they undermine the ability to detect potential abuses of the technology just as millions of people are beginning to experiment with AI tools.

"AI at the mercy of advertising checks"

"Responsible AI teams are among the only internal bulwarks Big Tech has to ensure that the people and communities affected by AI systems are in the minds of the engineers who build them," Josh Simons, a former Facebook AI ethics researcher and author of Algorithms for the People.

"The speed with which they are being abolished leaves Big Tech's algorithms at the mercy of advertising mandates, undermining the well-being of children, vulnerable people and our democracy," he warned.

"What we are beginning to see is that we cannot fully predict all the things that are going to happen with these new technologies, and it is vital that we pay them some attention," said Michael Luck, director of the Institute for Artificial Intelligence at King's College London.

Risk for 300 million

And the concerns are not confined to the realm of ethics. As the open letter with its hundreds of signatures notes, "contemporary AI systems are now becoming human-competitive at general tasks," and therefore "we must ask ourselves: Should we let machines flood our information channels with propaganda and untruths? Should we automate away all the jobs, including the fulfilling ones? Should we develop non-human minds that might eventually surpass and replace us? Should we risk losing control of our civilization?"

The warnings may sound somewhat far-fetched to those accustomed to deifying technological progress without much concern for its problematic or even dangerous aspects. But the risks are real.

A Goldman Sachs report published this week estimates that "generative" AI systems such as ChatGPT could spark a productivity boom that would ultimately raise annual global gross domestic product by 7% over a 10-year period, but also cause "significant disruption" to the labor market, putting the jobs of some 300 million full-time workers in developed economies at risk.

Analyzing data from the US and Europe, the investment bank's researchers assumed that AI will be able to perform tasks such as completing tax returns for a small business, evaluating a complex insurance claim, or documenting the results of a crime scene investigation. OpenAI itself, the company that created ChatGPT, estimates that 80% of the US workforce could see at least 10% of their tasks performed by AI.

"Bell" from Europol

And the dangers don't stop there.

On March 27, Europol, the European police agency, presented research listing the areas where ChatGPT could be exploited by criminals. It grouped these areas into "fraud and social engineering", "disinformation" and "cybercrime".

"ChatGPT's ability to compose highly realistic text makes it a useful tool for phishing purposes," notes Europol. “The ability of such applications to reproduce linguistic patterns can be used to impersonate the speaking style of particular individuals or groups. This ability can be abused on a large scale to mislead potential victims into trusting criminals."

For disinformation, the European police agency highlights that ChatGPT "excels in producing authentic text at speed and scale. This makes it ideal for propaganda and disinformation purposes, as it allows users to produce and spread messages that reflect a particular narrative with relatively little effort."

Regarding cybercrime, Europol notes that "in addition to generating human-like language, ChatGPT is capable of producing code in various programming languages. For a potential criminal with little technical knowledge, this is an invaluable resource for producing malicious code."

Who decides the future of humanity?

Getting to the heart of the matter, the open letter hosted by the Future of Life Institute points out that decisions as important to the future of humanity as the development of powerful artificial intelligence systems should not be delegated to unelected tech leaders. "Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable. That confidence must be well justified and grow with the magnitude of a system's potential effects," it notes, and adds: "OpenAI's (ed.: the creator of ChatGPT) recent statement regarding artificial general intelligence states that 'at some point, it may be important to get independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models.' We agree. That point is now."

In light of all the above, the signatories of the appeal call on "all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4. This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium."

AI labs and independent experts "should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development, rigorously audited and overseen by independent outside experts". These protocols "should ensure that systems adhering to them are safe beyond a reasonable doubt. This does not mean a pause on AI development in general, merely a stepping back from the dangerous race toward ever more unpredictable consequences."

As for AI research and development, it "should be refocused on making today's powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned and trustworthy."

The seven moves

In conclusion, the tech gurus call on AI developers to “work with policymakers to dramatically accelerate the development of robust AI governance systems. These should include at least:

  • new and capable regulatory authorities dedicated to AI
  • oversight and tracking of highly capable AI systems and large pools of computational capability
  • provenance and watermarking systems to help distinguish human-generated from machine-generated content
  • a robust auditing and certification ecosystem
  • liability for harm caused by AI
  • robust public funding for technical AI safety research
  • well-resourced institutions to deal with the dramatic economic and political disruptions (especially to democracy) that AI will cause.

Towards summer or autumn?

As the letter's co-signatories describe with a visionary flourish, "humanity can enjoy a prosperous future with artificial intelligence. Having succeeded in building powerful AI systems, we can now enjoy an 'AI summer' in which we reap the rewards, design these systems for the clear benefit of all, and give society a chance to adapt."

"Society has hit pause on other technologies with potentially catastrophic effects on society. Let's enjoy a long AI summer, not rush unprepared into an autumn," they conclude.

Or, as the Pope put it to the scientists (probably without wearing a puffer jacket): "I would encourage you, in your deliberations, to make the inherent dignity of every man and every woman a key criterion in evaluating emerging technologies. These will prove ethically sound to the extent that they help respect this dignity and increase its expression at every level of human life."

Source: OT


