ROBOTS CAN TAKE ORDERS FROM THE HUMAN BRAIN
For
robots to do what we want, they need to understand us. Too often, this means
having to meet them halfway: teaching them the intricacies of human language,
or giving them explicit commands for very specific tasks.
But what if we could develop robots that were a more
natural extension of us and that could actually do whatever we are thinking?
A team from MIT’s Computer Science and
Artificial Intelligence Laboratory and Boston University is working on this problem,
creating a feedback system that lets people correct robot mistakes instantly
with nothing more than their brains.
[Image] A feedback system developed at MIT enables human operators to correct a robot's choice in real time using only brain signals.
Using data from an electroencephalography (EEG) monitor that records
brain activity, the system can detect if a person notices an error as a robot
performs an object-sorting task. The team’s novel machine-learning algorithms
enable the system to classify brain waves in the space of 10 to 30
milliseconds.
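The post does not include any code, but the sketch below shows, under stated assumptions, how a single-epoch ErrP detector of this kind might be trained and timed in Python with NumPy and scikit-learn. The channel count, sampling rate, epoch length, random placeholder data, and the linear discriminant classifier are illustrative assumptions, not the team's actual pipeline.

```python
# Minimal sketch (not the MIT/BU pipeline): train a binary ErrP detector on
# labeled EEG epochs and time a single-epoch decision.
# Assumptions: 16 channels, 200 Hz sampling, 0.5 s epochs, random placeholder
# data standing in for real recordings.
import time
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
n_epochs, n_channels, n_samples = 400, 16, 100   # 0.5 s at 200 Hz

# Placeholder epochs and labels (1 = ErrP present, 0 = no error perceived).
X = rng.standard_normal((n_epochs, n_channels, n_samples))
y = rng.integers(0, 2, size=n_epochs)

# Flatten each epoch into a feature vector; a real system would typically
# band-pass filter and spatially filter the EEG first.
X_flat = X.reshape(n_epochs, -1)

clf = LinearDiscriminantAnalysis()
clf.fit(X_flat, y)

# Classify one new epoch and measure how long the decision takes.
new_epoch = rng.standard_normal((1, n_channels * n_samples))
t0 = time.perf_counter()
pred = clf.predict(new_epoch)[0]
elapsed_ms = (time.perf_counter() - t0) * 1000
print(f"ErrP detected: {bool(pred)} (classified in {elapsed_ms:.1f} ms)")
```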
Here is a video that explains how this human-robot interaction works and how accurately the robot can act on what you are thinking.
HOW DOES THIS WORK?
Past
work in EEG-controlled robotics has required training humans to “think” in a
prescribed way that computers can recognize. For example, an operator might
have to look at one of two bright light displays, each of which corresponds to
a different task for the robot to execute.
The downside to this method is that the training
process and the act of modulating one’s thoughts can be taxing, particularly
for people who supervise tasks in navigation or construction that require
intense concentration.
Instead, the system relies on error-related potentials (ErrPs), brain signals that are
generated whenever our brains notice a mistake. As the robot indicates which
choice it plans to make, the system uses ErrPs to determine if the human agrees
with the decision.
ErrP signals are extremely faint, which means
that the system has to be fine-tuned enough to both classify the signal and
incorporate it into the feedback loop for the human operator. In addition
to monitoring the initial ErrPs, the team also sought to detect “secondary
errors” that occur when the system doesn’t notice the human’s original
correction.
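To make the feedback loop concrete, here is a conceptual sketch of the correction cycle described above: the robot announces a choice, the system checks for a primary ErrP, and if that correction is missed it watches for a secondary error once the robot starts acting. The function names, the threshold detector, and the simulated EEG are hypothetical placeholders, not the actual MIT/BU implementation.

```python
# Conceptual sketch of the human-in-the-loop correction cycle.
# record_epoch() and detect_errp() are hypothetical stand-ins for the real
# EEG acquisition and classification steps.
import random

CHOICES = ("left bin", "right bin")

def record_epoch(duration_s=0.5, rate_hz=200):
    """Placeholder for streaming an EEG epoch after the robot signals its intent."""
    return [random.gauss(0.0, 1.0) for _ in range(int(duration_s * rate_hz))]

def detect_errp(epoch, threshold=3.0):
    """Placeholder classifier: flags an 'error' when the epoch's peak is large."""
    return max(abs(s) for s in epoch) > threshold

def other(choice):
    return CHOICES[1] if choice == CHOICES[0] else CHOICES[0]

def sort_object(initial_choice):
    choice = initial_choice
    print(f"Robot indicates it will place the object in the {choice}.")

    if detect_errp(record_epoch()):
        # Primary ErrP: the operator's brain flagged the choice as wrong.
        choice = other(choice)
        print(f"ErrP detected -- switching to the {choice}.")
    elif detect_errp(record_epoch()):
        # Secondary error: the first ErrP was missed, so the operator reacts
        # again as the robot acts on the uncorrected choice.
        choice = other(choice)
        print(f"Secondary error detected -- late correction to the {choice}.")

    print(f"Robot places the object in the {choice}.")
    return choice

sort_object(random.choice(CHOICES))
```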
In addition, since ErrP signals have been shown
to be proportional to how egregious the robot’s mistake is, the team believes
that future systems could extend to more complex multiple-choice tasks.
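Purely as a toy illustration of that idea (not something described in the paper), a multiple-choice variant could propose each option in turn and keep the one that elicits the weakest error response; measure_errp_amplitude below is a made-up placeholder for a real amplitude estimate.

```python
# Speculative toy example: pick among several options by the strength of the
# operator's error response to each proposal.
import random

def choose_by_errp_amplitude(options, measure_errp_amplitude):
    """Return the option whose proposal elicited the smallest ErrP amplitude."""
    amplitudes = {opt: measure_errp_amplitude(opt) for opt in options}
    return min(amplitudes, key=amplitudes.get)

# Placeholder amplitude reading per option (random numbers for demonstration).
print(choose_by_errp_amplitude(
    ["bin A", "bin B", "bin C"],
    lambda opt: random.uniform(0.0, 5.0),
))
```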
While the system cannot yet recognize secondary
errors in real time, the team expects the model to improve to upwards of
90 percent accuracy once it can.