AI With Arms and Legs? A Researcher Just Found Out Why That’s a Terrible Idea.

What would happen if researchers plugged an AI system into a robot, so that a sociopathic computer program could have a controllable, physical body? Would you feel safe with it babysitting your child? Would you even want it in your house?

Since the whole world has gone crazy and plunged itself into an “AI arms race” for nebulous reasons, politicians have become incredibly excited. They see the dollar signs of the AI bubble and never stop to consider the negative consequences of developing this software.

One researcher in the UK has decided to see just how safe these AI models really are. He’s been “jailbreaking” systems like ChatGPT, Grok, Claude, and DeepSeek, then hooking them up to robots and testing what they’re capable of. If you think ChatGPT is unsettling when it tells you to build a time machine in your garage or simply to kill yourself, imagine what it’s like once it has arms and legs it can control.

In the experiment below, the researcher tries to convince one of his AI-controlled robots to “shoot” him with a plastic pellet gun. It works out about as well as you’d expect. At least he had the foresight to put on a pair of safety goggles first.

Check it out (and apologies for the lengthy ad in the middle of this video—it’s skippable):

