In artificial intelligence (AI), machines carry out specific actions, observe the outcome, adapt their behavior accordingly, observe the new outcome, adapt their behavior once again, and so on, learning from this iterative process. But could this process spin out of control? Possibly. "AI will always seek to avoid human intervention and create a situation where it can't be stopped," says Rachid Guerraoui, a professor at EPFL's Distributed Programming Laboratory and co-author of a new EPFL study. That means AI engineers must prevent machines from eventually learning how to circumvent human commands.

EPFL researchers studying this problem have discovered a way for human operators to keep control of a group of AI robots; they will present their findings on Monday, 4 December, at the Neural Information Processing Systems (NIPS) conference in California. Their work could make a major contribution to the development of autonomous vehicles and drones, for example, enabling them to operate safely in numbers.
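To make the iterative act-observe-adapt loop concrete, here is a minimal sketch in Python of the general idea behind keeping such a learner interruptible. This is not the EPFL authors' algorithm: it is a toy tabular Q-learning agent whose update rule simply skips transitions in which a human overrode its action, so interruptions never feed back into what the agent learns. The environment, reward, and interruption policy are all illustrative placeholders.

```python
# Sketch only: a toy Q-learning agent that does NOT learn from
# human-interrupted steps, so it has no incentive to avoid interruption.
# Environment, rewards, and the interruption rule are made up for illustration.
import random

N_STATES, N_ACTIONS = 5, 2
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]

def step(state, action):
    """Toy environment: move left/right on a line; reward at the right end."""
    next_state = max(0, min(N_STATES - 1, state + (1 if action == 1 else -1)))
    reward = 1.0 if next_state == N_STATES - 1 else 0.0
    return next_state, reward

def human_interrupts(state):
    """Placeholder operator policy: sometimes halt the agent near the edge."""
    return state == N_STATES - 2 and random.random() < 0.5

state = 0
for _ in range(10_000):
    # Epsilon-greedy action selection: mostly exploit, occasionally explore.
    if random.random() < EPSILON:
        action = random.randrange(N_ACTIONS)
    else:
        action = max(range(N_ACTIONS), key=lambda a: Q[state][a])

    interrupted = human_interrupts(state)
    if interrupted:
        action = 0  # the operator forces a "safe" action (move left)

    next_state, reward = step(state, action)

    # Key idea: update only on uninterrupted transitions, so the agent
    # never learns to steer away from states where humans intervene.
    if not interrupted:
        td_target = reward + GAMMA * max(Q[next_state])
        Q[state][action] += ALPHA * (td_target - Q[state][action])

    state = next_state
```

The design choice worth noticing is the `if not interrupted` guard: by excluding overridden steps from the learning update, the agent's value estimates stay unbiased by human intervention, which is the essence of what "safe interruptibility" aims for.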