Google's developing its own version of the Laws of Robotics

Graham Templeton for ExtremeTech:  Google’s artificial intelligence researchers are starting to have to code around their own code, writing patches that limit a robot’s abilities so that it continues to develop down the path desired by the researchers — not by the robot itself. It’s the beginning of a long-term trend in robotics and AI in general: once we’ve put in all this work to increase the insight of an artificial intelligence, how can we make sure that insight will only be applied in the ways we would like?

That’s why researchers from Google’s DeepMind and the Future of Humanity Institute have published a paper outlining a software “killswitch” they claim can stop those instances of learning that could make an AI less useful — or, in the future, less safe. It’s really less a killswitch than a blind spot, removing from the AI the ability to learn the wrong lessons.  Cont'd...
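
To make the "blind spot" idea concrete, here is a minimal, purely illustrative sketch (not code from the DeepMind paper, which works in a more general reinforcement-learning setting): a toy tabular Q-learning agent whose update rule simply ignores any transition flagged as a human interruption, so the agent never learns to resist, or to seek out, being switched off. The states, actions, and interruption flag are all hypothetical.

```python
import random
from collections import defaultdict

# Toy sketch only: a tabular Q-learning agent that refuses to learn from any
# step where a human operator interrupted it. Everything here (states, actions,
# rewards, the "interrupted" flag) is made up for illustration.

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1
ACTIONS = ["left", "right"]
Q = defaultdict(float)  # maps (state, action) -> estimated value


def choose_action(state):
    """Epsilon-greedy action selection over the toy action set."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])


def learn(state, action, reward, next_state, interrupted):
    """Standard Q-learning update, except interrupted transitions are skipped.

    By never updating on interrupted steps, the agent stays "blind" to the
    interruption: being stopped cannot change what it believes about the
    value of its actions.
    """
    if interrupted:
        return  # the deliberate blind spot
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])


# Example: a step where the operator hit the big red button leaves Q unchanged.
learn(state="s0", action="right", reward=-1.0, next_state="s1", interrupted=True)
```

Skipping the update is the bluntest possible way to get that property; the paper itself analyzes the question more carefully, in terms of which learning algorithms can be interrupted repeatedly without biasing what they eventually learn.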

Featured Product

New incremental encoder IERF3 L from FAULHABER

FAULHABER is expanding its product range with the ultra-precise incremental encoder IERF3 L. Thanks to its optical measuring principle and state-of-the-art chip technology, the device offers very high resolution, excellent repeatability, and outstanding signal quality. In typical applications, positioning accuracy is 0.1° and repeatability is 0.007°. This makes the encoder well suited to high-precision positioning applications in confined spaces.