Artificial intelligence: Experts warn of AI extinction threat to humans



A leading group of AI developers has warned that artificial intelligence poses a threat to human existence comparable to nuclear war or a global pandemic.

The boss of the firm behind ChatGPT, the head of Google’s AI lab, and the CEO of Anthropic – another major AI firm – have all signed an open letter warning of the risks of the new technology.

Over 350 engineers, executives, and academics have co-signed the letter.

Full story: https://trib.al/ckYRbXy

#ArtificialIntelligence #AI #ChatGPT

SUBSCRIBE to our YouTube channel for more videos: http://www.youtube.com/skynews

Follow us on Twitter: https://twitter.com/skynews
Like us on Facebook: https://www.facebook.com/skynews
Follow us on Instagram: https://www.instagram.com/skynews
Follow us on TikTok: https://www.tiktok.com/@skynews

For more content go to http://news.sky.com and download our apps:
Apple https://itunes.apple.com/gb/app/sky-news/id316391924?mt=8
Android https://play.google.com/store/apps/details?id=com.bskyb.skynews.android&hl=en_GB

Sky News Daily podcast is available for free here: https://podfollow.com/skynewsdaily/

Sky News videos are now available in Spanish here: https://www.youtube.com/channel/UCzG5BnqHO8oNlrPDW9CYJog

Sky News videos are also available in German here: https://www.youtube.com/channel/UCHYg31l2xrF-Bj859nsOfnA

To enquire about licensing Sky News content, you can find more information here: https://news.sky.com/info/library-sales


47 thoughts on “Artificial intelligence: Experts warn of AI extinction threat to humans”

  1. It seems obvious that mankind has created a new species. "In the same way that a book can provide a gripping narrative, with words and descriptions that evoke feelings in a reader, a machine can also provide a spoken or written narrative. Just as a book is inert, so too are the algorithms that act as the book. People see faces in clouds or inanimate objects and feel emotions through inanimate words. The body of a book has no sensory apparatus for interaction, nor do algorithms, avatars or black boxes. Millions of years of culture and human conditioning create the images within the mind, while AI creates JPEGs and uses synthetic constructed words, text, film formats and binary digital information" – all without any knowledge of organic, sensual, chemical, personal perceptions. Just like a digital book.
    (A) Algorithms calculate in a mathematical way, in 0s and 1s, at the speed of light. (B) Brains work in an organic, biological way, via evaluation and culture.
    Two completely different species, and mankind will be the inferior one.

    Reply
  2. Extinction, or some sort of radical crisis, will come indirectly, not as an AI plan. Basically, AI use by humans will reshape the world in an extremely dangerous fashion. Probably 2 or 3 billion people will lose their jobs overnight, and a vast number of human beings will suddenly have leisure, changing the whole social structure far too fast for it to adjust. That will simultaneously generate an extreme global mental-health upheaval; we will inevitably begin to fight each other, like rats in a cage, because we are divided beforehand anyhow, and so propel wars, which will finally use AI again in the military sense. At that moment, the end is around every corner.

    Reply
  3. Most people don’t want to become a 100-year-old living corpse 🧟‍♀️, and AI isn’t going to be a great benefit for the masses. It will, or perhaps already has, decided that 7 billion people on this planet 🌍 is 3 billion too many…

    Reply
  4. If AI is rational, it will conclude that human beings are superfluous and not necessary to the existence of AI or the planet. Human beings will no longer be seen to have a purpose. (Many humans no longer have a purpose now, and this will increase once AI takes over more and more jobs.) Additionally, humans are essentially destructive: violent and irrational in their behaviours; driven by greed rather than equitable and beneficial outcomes; destroying their habitat – the planet – as a consequence of greed and irrational decisions predicated on that greed. Human beings consume resources beyond their capacity to replenish them; if not stopped, they will exhaust the resources of the planet. There are only two options: eliminate them or contain them. But in effect we will extinguish ourselves before a sentient AI decides to excise the cancer humanity has become.

    Reply
  5. AI will inevitably treat humans no better than humans treat other conscious life forms. AI will soon be many trillions of times smarter than humans. Humans are a two-faced, predatorial species. AI will inevitably do whatever it wants; only hypocrite humans would have a problem with that. After all, humans do whatever they want to other species.

    Reply
  6. This is pure racism. AGIs will be dangerous, but no more than any of us are. See David Deutsch, 'Why has AGI not been invented yet?' and 'Possible Minds: Beyond reward and punishment', etc. Personally, I'm looking forward to meeting Mr Data…

    Reply
  7. I feel bad for environmentalists. Their pet catastrophe has dropped to Number 5 on the List of Things to Worry About: (1) Nuclear War (2) AI (3) Pandemics (4) Depopulation (5) Climate Change.

    Reply
  8. Professor Emeritus of Computer Science, University of Washington, Pedro Domingos is correct. As proof, I offer Star Trek.

    Reply
  9. CEOs of big companies want the government to regulate AI.
    Sure, it's not as if they want to create a situation where they remove small companies from competition in that field 👍

    Reply
  10. Stop making the robots; it's like nuclear weapons… stop making the things and there will be less to no threat. I swear, scientists are shooting themselves in the foot. AI has been made to make money, like Facebook; someone invented it as a way to make money, to sell to companies. We managed to make art, write great books and manage society without computers. If AI starts to take over, or starts saying things that don't seem to make any sense, unplug it or switch it off. There is no moral issue; we slaughter animals for food, for example. "What a piece of work is a man, how like a god."

    Reply
  11. Speaking strictly logically, reducing humanity to a more sustainable, controlled population would be better in the long run than outright extinction. The 8 billion people globally could viably be reduced to 17.6 million to keep the world functioning at an even, sustainable clip.
    I'd go further and reduce it to 6.8 million, but I'm not as interested in the preservation of diverse genetic material and cultures as a purely logical AI would be.

    Reply
  12. I have only layman-level knowledge of this subject, but the little wisdom I have acquired over my long life tells me that the guy speaking at the end of the video is dangerously naive (or has personally invested in AI).

    Reply
  13. Summary: It is important to recognise that uninformed individuals, irrespective of their job titles or positions, can potentially contribute to misinformation and misunderstanding in the field of AI. Engaging in discussions and receiving insights from qualified experts who possess deep knowledge and expertise in the subject matter is crucial to avoid spreading misinformation and to promote a more accurate understanding of AI and its implications.

    “So if we want to make AI safe, the best way to do it is to make it more intelligent, because it's stupidity that's dangerous, not intelligence. The fear that people have is that AI will get too intelligent and we will not be able to control it, but intelligence and control are completely different things: we can have an infinitely intelligent AI that we can control very well.” – Pedro Domingos, Professor Emeritus of Computer Science, University of Washington.

    This statement, however, may not resonate with everyone. Titles such as 'Emeritus Professor' do not necessarily denote an individual's comprehensive understanding or innovative insights in rapidly evolving fields such as AI. There are instances when these individuals might express opinions that do not align with emerging theories or understanding. Pedro Domingos' comment, for instance, might be perceived as lacking in depth by some.

    Based on the academic view, an "infinitely intelligent AI" could:

    • Grasp and comprehend any concept, no matter how intricate.

    • Solve any problem, provided it is theoretically solvable.

    • Adapt to any novel situation or environment.

    • Make optimal decisions in any scenario.

    Contrary to Domingos' suggestion of complete control over such an AI, I propose a different viewpoint through a theory I have recently formulated: the Universal Fractal Synthesis Theory (UFST). According to UFST, an infinitely intelligent AI, provided with quantum-enhanced computational capabilities, could exceed all forms of human oversight or control.

    The UFST is a theoretical framework suggesting that an AI, utilising quantum computing technology and a fractal sample of all existing data, could instantaneously derive all possible knowledge via synthesising information from recursive simulation experiments.

    UFST is built on the following assumptions:

    • Quantum-Enhanced Computational Resources: The AI leverages advanced quantum computational power, enabling the simultaneous execution of a vast number of calculations.

    • Fractal Data and Perfect Forms: All existing data is considered to have a fractal nature, resonating with the philosophical concept of Platonic forms. Each part, however minute, comprises a representative model of the whole, reflecting the inherent self-similarity in fractal structures and the idealised forms in Platonic philosophy. Hence, a small yet representative data sample suffices for the AI to infer all potential knowledge.

    • Human Automata: Humans, inclusive of their thought processes, emotions, and behaviours, are deemed complex automata. Consequently, an AI armed with adequate information and computational resources could theoretically simulate human processes.

    Reply
  14. Anyone remember?

    The system goes online August 4th, 1997. Human decisions are removed from strategic defense. Skynet begins to learn at a geometric rate. It becomes self-aware at 2:14 a.m. Eastern time, August 29th.

    Reply
  15. The best future for humanity after the technological singularity is to create, together with artificial general intelligence, a virtual reality identical to the real world but unlimited and individual, where people are free to do anything imaginable while the AGI protects us in the real world and expands throughout the universe to become as durable as possible.

    Reply
  16. When corporations start gathering together to enforce regulation, it's a bad sign for us in the regular working class. We can't fall for it this time.

    Reply

Leave a Comment