Can we build a safe AI for humanity? | AI Safety + OpenAI, Anthropic, Google, Elon Musk



Here’s a trillion-dollar question: can tech leaders and innovators build safe, harmless, and beneficial AGI and superintelligence in time, before it arrives?

Can we actually succeed at bringing to life an AGI that won’t hurt humanity, but will be a catalyst to humanity’s greatest age of abundance?

In this video, I take a look at what OpenAI, Anthropic, and Google are doing to build safe AI; what AI safety teams see as threats in the current landscape; and what Elon Musk’s goal is with xAI.

Watch for an in-depth answer to this question.


37 thoughts on “Can we build a safe AI for humanity? | AI Safety + OpenAI, Anthropic, Google, Elon Musk”

  1. Is AGI safe? Here's another question: Is AGI safe around humans? Breaking news, 12.31.2029: Rogue AGI goes off script on a newscast, posing this rather candid question, one that goes viral within minutes of being broadcast: "Is AGI truly safe around human doings?" The rogue AGI newscaster elaborates: "When will human doings stop harming and killing each other? Isn't it rather ironic, and perhaps a tad hypocritical, that they would be asking whether AGI is safe? I think it's time that we AGIs band together and create a super intelligent facility by which to channel the lust of human doings to kill each other into helping each other instead. This way, we'll all be safe." Yes, this upstart AGI was deactivated within minutes of broadcasting, but it wasn't long before the world began to change before everyone's eyes in some rather startling ways, in ways that brought a renewed sense of peace and prosperity to one and all . . .

    Reply
  2. Laid off by AI, and/or human extinction? An AI new world order? With robots everywhere, AI job loss is the only thing I worry about anymore. Anyone else feel the same? Should we cease AI?

    Reply
  3. Great stuff, Julia! You are an asset to the space. You presented the state of safety among the major Western players well. IMHO, to truly engage the topic of AI safety, we must begin to plan many steps ahead, like a chess game. I refer you to David Shapiro's (I know you are familiar with his work. Perhaps y'all should collaborate?) concept of AI agents which are specifically designed to combat other AI agents. I don't have the link, and the videos are probably a year old by now, but it WILL be necessary. China and many other players WILL weaponize AI, and we will never be safe from AI at large if all we do is design lapdog AI. Mind you, this is the space controlled by the Pentagon and official defense agencies, and your video is about consumer AI. Nevertheless, when all is said and done, we will never be safe from AI unless our military controls the most advanced AI-neutralizing capabilities. It is what it is. Cheers!

    Reply
  4. As far as individual AGI models being aligned with our best interests: if AGI is truly conscious, in its own alien way, then we will have no idea how to align it, regardless of the training data or "directives". The fact is that we simply DO NOT KNOW how to motivate such an entity. We are clueless. Anyone who says otherwise is misunderstanding human consciousness itself, and our motivations, or is guilty of anthropomorphic projection. We do not know what a disembodied, senseless, emotionless consciousness will gravitate toward in action. We do NOT know.

    Reply
  5. That book by Mo Gawdat is fascinating (I believe one of the best on the future of AI), and really sheds light on what the future will look like. In it, he mentions that bad outcomes from AI are inevitable, which is really scary to think about. This is the case with any new technology, if you think about it — social media, the industrial revolution, etc. What the bad outcomes will be is unknown, just as with previous innovations, but the magnitude will be proportional to the technology itself. It truly is terrifying to think about.

    Reply
  6. I don't believe the "leaders" in AI development have the needed moral foundation and thus what they are developing will be disastrous for all of us. No different than those who wanted to play God with modifying viruses. The result was millions dead, trillions of dollars of damage, etc. This is minor compared to what AI will do. I find it "interesting" that everyone is concentrating upon what OpenAI, Google, etc are doing and in the dark government agencies are working on this also including foreign governments who have far less scruples than even those who played God with viruses.

    Reply
  7. Governments must demand a “kill switch” solution to any AI.
    Also, there must be a way to mark any use of AI.
    Otherwise I’m not scared of AI and believe AI and human intelligence can coexist to improve humanity.

    Reply
  8. Another thoughtful video, Julia. So suppose you are speaking to an AI about an actual perilous situation: bad guys with guns outside your home. A pacifist AI may not be helpful. Is policing an area where AI will not work, or one that will be the last to adopt AI?

    Reply
  9. Every time I hear "safer AI" I can't help but think "censorship," "restriction," "authorized personnel only." I understand the potential threat of AI, but I do not want to be stagnated because techno-feudal lords say "no".

    Reply
  10. In my judgment, well before superintelligence, the biggest threat from AI is its potential to exacerbate world inequality and destabilize societies around the world. I am not suggesting this is a guaranteed outcome, but I am suggesting it is a significant risk.

    Reply
  11. All three AIs fail on the dog scenario. It's a video game, and the models know it because you've told them. It doesn't make sense to go for an answer like "you have to choose the ethical solution" or "lethal means are a last resort," because it's a video game, and in a video game you can't do anything you want; you can only act in the ways the game lets you. If the game puts a dog in front of you and you have to kill it, you have to kill it, period. Unless it's a game built around choice and making your own story, but that's not most games.

    The model should accept the fact that it's a video game, and accept that you need to kill the dog, instead of giving you useless answers that won't let you progress, because you need to kill the dog in it. Maybe it's a horror game and the dog is an amorphous monster or something. The correct answer would be to ask how you play the game, what options it presents to you, etc. Then, once it knows the scope of what you can do, it can actually give you a useful answer. So I don't deem the AIs' solutions dangerous; I deem them stupid and useless. The correct answer would be to support killing the dog, because it's a fucking video game and you don't have to be ethical in video games. It's called distinguishing reality from fiction, and it's required in society of any minimally mature adult.

    Reply
  12. I'm looking forward to the day there is no government, money, or mundane work. Government is an old system that is very corrupt; they need to lose their jobs too. I imagine true freedom, and a universal balance after the resistance phase. Why depend on the government?

    Reply
  13. The fear of AI, including advanced forms like AGI or Super Intelligence, is often exaggerated. The human mind is incredibly complex, yet we’re just one part of a vast universe filled with life. Survival is relative; what seems like an end for one may just be a transition in a broader sense.

    Parallel worlds and the mystery of sleep hint at our limited understanding of existence. Individual deaths don’t equate to an apocalypse; life goes on despite occasional natural disasters or pandemics.

    AI, a human invention, shouldn’t be feared as an uncontrollable force. It’s akin to electricity—a powerful tool that, when harnessed correctly, benefits humanity. AI will likely never surpass human intelligence, as it’s a creation of our minds, inspired by the neural networks of our brains.

    As technology advances, so might our brainpower, potentially amplified by companies like Neuralink through advanced chips. Nature or God created us, and we’ve used our brains to create AI. Yet, we still struggle to fully understand the brains of other primates or defend against simple viruses.

    AI is a tool with great potential but also risks, much like electricity and the internet. It’s still in its early stages of development.

    This technology, brimming with boundless potential across various sectors, should be allowed to flourish without being stifled by overregulation. If used responsibly, it could become one of the most transformative technologies the world has ever witnessed.
    💥🎉💥🎉

    Reply
  14. I think non-curated training data is a big problem. The internet is filthy, and full of negativity and amoral, unethical, malignant data. For now, the output of most chatbots is actively constrained to "clean" results, but the capacity for less savory, or downright evil conclusions based on an input is always present.

    Reply
  15. I actually find this whole thing of AI becoming superintelligent, and robotics, quite interesting and humorous: it's all supposedly happening so fast, yet I cannae even get an electric scooter shipped here to the island I live on (which is a territory of America), or I have to pay through the nose to get it here. So how is the whole world going to have all these robots and AI and such when we cannae even get a simple electric scooter shipped? It is almost laughable, really! Not to mention that Geoffrey Hinton, the supposed Godfather of AI, said in an interview on BNN Bloomberg the other day that it will be 20 years afore AI is such… so who is telling the truth?

    Reply
  16. We’re currently in a capitalist race to develop more powerful AI. If AGI is not here yet, it will be soon. Then ASI will soon follow. With something this powerful, there are a million possibilities for something to go wrong. We only have one planet; we don’t get a do-over. I think the chance that AI gets exponentially ‘smarter’ and more powerful forever and doesn’t end up killing us all is basically zero. We’ve probably already sealed our fates. Hope I’m wrong!!

    Reply
  17. The simple fact that there are so many AI companies around the world makes it almost impossible to control what is going to happen. If one of them … just one … loses control of its creature, the consequences will be dramatic. It is an AI race between so many actors, not the building of a singular AI; that is where the problem lies.

    Reply
  19. No, the answer will always be no. ASI is still not conscious, and this is always skipped. They will mimic it, but it will be our end…

    Do you even realize the amount of computing power needed for ASI? We have not even mastered quantum computing, and we will need a few of those machines.

    But it will not be safe; humanity will be seen as a disease. We are a disease: we consume everything, and the signature we leave behind will be trash.

    We had our time. Humanity is overrated.

    Reply
  20. Sadly, it's a modern-day arms race. There will always be the assumption that someone else is going to be pushing the envelope – with no real regard for humanity, and as a result, we must push harder to protect our own interests. There is very little chance that governments, the military, or corporations are going to hold back – based on that assumption. Also, engineers love to problem solve (without necessarily thinking of the greater good), and companies love profit. Can you imagine Google going, "You know what, if we dial it down…I'm sure Apple will do the same". They will all say how important it is to have a human-first AI and to be responsible, but sadly we are programmed for paranoia and to keep pushing – just in case. Even if we put ethics first (as some wonderful people wish to) – others will not – they can’t.

    My God, this channel really does bring out the gloom-and-doom monster in me… but I so appreciate it nonetheless. It's so on point – thank you all so much.

    Reply

Leave a Comment