Augmenting and training memory
In earlier days, said Maes, people used memory techniques to remember long speeches or orally transmitted history. One such technique was the ‘memory palace’, which makes use of the close connection in the brain between movement and memory. People imagined a building they knew well and that had many rooms, or a walking path they knew well with many distinctive sights. They then associated the parts of what they wanted to remember with successive rooms or points along the path. Remembering then meant taking an imaginary walk through the building or the landscape and picking up the associated bits of knowledge.
Today, we increasingly delegate memory to our mobile devices, and as a result we no longer train our memory. But, says Maes, it is possible to set up technology so that it actually helps us remember and trains our memory again. To that end, Maes' research group has developed NeverMind, an implementation of the memory palace technique in augmented reality. As users follow a familiar path seen through AR glasses, they can add visual cues that represent what they want to remember. Memory is trained by walking the path a few times. “After a short while, you’ll be able to just imagine the path and the memory cues will spring up at the places where you left them.”
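To make the idea concrete, here is a minimal sketch in Python of the data structure behind a memory palace: an ordered list of waypoints along a familiar route, each paired with a cue for an item to remember. The names and structure are illustrative assumptions for this article, not the actual NeverMind implementation; an AR version would additionally render each cue at the corresponding real-world location, but the associative structure stays the same.

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Waypoint:
    # A recognizable spot along a familiar route (hypothetical model).
    name: str
    cue: Optional[str] = None  # visual cue the user attaches here

@dataclass
class MemoryPalace:
    # Waypoints in route order, each optionally paired with an item to remember.
    waypoints: List[Waypoint] = field(default_factory=list)

    def attach(self, items: List[str]) -> None:
        # Associate each item with the next waypoint, in route order.
        for waypoint, item in zip(self.waypoints, items):
            waypoint.cue = item

    def walk(self) -> List[str]:
        # 'Walking' the route recalls the cues in the order they were placed.
        return [w.cue for w in self.waypoints if w.cue is not None]

# Example: memorizing a short speech outline along a daily commute
route = MemoryPalace([Waypoint("front door"), Waypoint("bakery"), Waypoint("bridge")])
route.attach(["opening joke", "key statistic", "call to action"])
print(route.walk())  # ['opening joke', 'key statistic', 'call to action']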
What matters most
Smartphones give us access to all the world’s knowledge, added Maes. “But one could argue that they don’t help us very much with what matters most to lead a successful life – being mindful and attentive, having a sharp mind, making better decisions, being creative, and regulating our emotions well.”
“In the Fluid Interfaces Research Group, we try to resolve that conundrum by using technology in a new way. We design innovative interfaces that mediate between humans and the vast virtual world, interfaces that try to augment the users’ cognitive abilities: make them pay more attention, learn better, and make smarter decisions. The virtual memory palace is but one example of how we want to use technology to strengthen cognitive abilities instead of making people dependent on their devices.”
Creative like the greatest minds
The interfaces we design, continued Pattie Maes, usually have a number of sensors that measure internal and external states: what is going on inside and around a person. Based on these states, they seamlessly intervene to strengthen the users’ abilities and cognition. Take, for example, Dormio, a tool to enhance creativity, another key skill for the 21st century.
Like the memory tool, Dormio is based on an old technique, often called the ‘steel ball technique’ (although other objects were used as well). Creatives like Edison, but also the painter Dali, would fall asleep with a heavy object in their hand, an object that would drop at the exact moment they fell asleep, catching them for a short moment in that creative state between wakefulness and sleep – also called hypnagogia – a state in which the controlling activity of the prefrontal cortex is reduced and people’s imagination is less constrained.
“We built a modern version of that technique: a system that monitors people’s heart rate, skin conductance, and muscle tone to detect when they enter hypnagogia. At that point, a robot starts talking, prompting people to think and talk about a chosen subject. That way, people do not have to wake up fully and snap out of the hypnagogic state to record their ideas; the robot prompts their thoughts and records them while they are still half asleep. So we basically help people dream about their field of interest. We ran user studies with Dormio, pitting a group that used the system against a control group that just rested but did not fall asleep. The people who used Dormio were able to come up with much more fluid thinking and more creative ideas.”
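As an illustration of the monitor-and-prompt loop described above, here is a rough Python sketch. The sensor interface, thresholds, and prompt wording are assumptions made for this example; the interview describes Dormio only at the level of heart rate, skin conductance, and muscle tone monitoring followed by a spoken prompt.

import time

# Illustrative thresholds; a real system would use calibrated detection
# over heart rate, skin conductance, and muscle tone.
MUSCLE_TONE_SLEEP_ONSET = 0.2   # loss of muscle tone suggests sleep onset
HEART_RATE_DROP = 0.9           # fraction of the resting heart rate

def entering_hypnagogia(sample, resting_heart_rate):
    # Very rough heuristic for the wake/sleep boundary (an assumption, not Dormio's detector).
    return (sample["muscle_tone"] < MUSCLE_TONE_SLEEP_ONSET
            and sample["heart_rate"] < HEART_RATE_DROP * resting_heart_rate)

def dormio_loop(read_sensors, speak, record, theme, resting_heart_rate):
    # Monitor the sleeper; at sleep onset, prompt on the chosen theme and
    # record the half-asleep reply instead of waking the person fully.
    while True:
        sample = read_sensors()  # e.g. {'heart_rate': 52, 'muscle_tone': 0.1, 'skin_conductance': 3.2}
        if entering_hypnagogia(sample, resting_heart_rate):
            speak(f"Tell me about {theme}.")  # gentle audio prompt
            record(seconds=60)                # capture the report while still half asleep
        time.sleep(1.0)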
Rules of engagement
According to Maes, the researchers at the Media Lab are continually challenged to think of ways to improve the way we live, learn, and work. But they do so within a framework: a set of ground rules that guides all the work of Maes’ group.
“For one, we build systems that are minimally disruptive to what the user is doing. We will, for example, try to steer behavior with agreeable or less agreeable scents instead of breaking into people’s flow of activity. Two: it is of key importance that users fully understand the system and that they are in complete control. Our applications do not try to change behavior without users’ awareness; they help users take conscious decisions. Next, to the extent possible, the systems we make avoid making users dependent on their electronic devices. The skills we teach should eventually become internalized. And last: our sensors collect an enormous amount of data, but we go to great lengths to protect the users’ privacy. So we don’t collect data in order to distribute it to other parties.”
Compensating for cognitive impairments
Much of the work at Maes’ lab is also directed at people with cognitive disabilities, e.g. students with AD/HD or people with mild dementia.
“Up to 10% of students in a classroom may suffer from AD/HD, and there are no solutions available that successfully help everyone. We’ve been working on systems that use EEG data to track when someone is not attentive. We then try to give subtle biofeedback to make the person pay attention again. We have already shown that our system helps adults, and we now hope to test the technology with children, in the hope that it develops their ability to be attentive so that after some time they no longer need the device. As with many of our projects, the form factor is not yet ideal, as an EEG headset as a monitoring device may stigmatize students. But we’re working on less conspicuous form factors, like glasses that do the same type of sensing based on EOG measurements, so that kids who need a little bit of help just have to wear normal-looking but very high-tech glasses.”
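To sketch how such a closed loop could work, the example below computes a commonly used EEG engagement proxy (beta power relative to alpha plus theta) and triggers subtle feedback when it drops. The measure, sampling rate, threshold, and feedback call are assumptions made for illustration, not the group's actual implementation.

import numpy as np

def band_power(eeg_window, fs, low, high):
    # Average spectral power of one EEG channel in a frequency band (Hz).
    freqs = np.fft.rfftfreq(len(eeg_window), d=1.0 / fs)
    power = np.abs(np.fft.rfft(eeg_window)) ** 2
    mask = (freqs >= low) & (freqs < high)
    return power[mask].mean()

def engagement_index(eeg_window, fs=256):
    # beta / (alpha + theta): a widely used engagement proxy, assumed here for illustration.
    theta = band_power(eeg_window, fs, 4, 8)
    alpha = band_power(eeg_window, fs, 8, 12)
    beta = band_power(eeg_window, fs, 12, 30)
    return beta / (alpha + theta)

def feedback_step(eeg_window, trigger_feedback, threshold=0.7):
    # If attention drops below a (hypothetical) threshold, give subtle biofeedback,
    # e.g. a faint sound or a haptic pulse, rather than an intrusive alert.
    if engagement_index(eeg_window) < threshold:
        trigger_feedback()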
“Another one of our projects is an audio-based AR system that augments memory for people with early-stage dementia. The system has a camera that recognizes people the user knows, and it then subtly reminds the user of their names and of the conversations they recently had with these friends.”
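A minimal sketch of that idea, with hypothetical component names (the recognizer, contact store, and whisper function below are stand-ins, not the project's actual code):

from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class Contact:
    name: str
    recent_topics: List[str] = field(default_factory=list)

def remind(frame, recognize: Callable, contacts: Dict[str, Contact], whisper: Callable) -> None:
    # On recognizing a known face in the camera frame, quietly remind the wearer
    # who the person is and what they last talked about.
    person_id = recognize(frame)   # hypothetical face-recognition step
    contact = contacts.get(person_id)
    if contact is None:
        return                     # strangers trigger no reminder
    topics = ", ".join(contact.recent_topics[-2:]) or "nothing recorded yet"
    whisper(f"This is {contact.name}. You recently talked about {topics}.")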
Based on new, powerful innovations
According to Maes, the innovative interfaces developed in her MIT group are all based on three enabling technologies that have become immensely more powerful and refined over the last decade and that hold much potential for further innovation.
“First, there is of course all the sensor technology we use to monitor external and internal states: for example, wearable EEG headsets to assess mental states, ECG patches to monitor heart rate and heart rate variability, and wristbands and other sensors that measure skin temperature and conductance, signals from which users’ emotional state can be inferred.”
“Second, there is the rapidly evolving technology to augment or change the users’ environment, or their perception of that environment. This is the area of augmented and virtual reality. The hardware we use includes glasses, earbuds, olfactory displays, and bone conduction actuators.”
“Third, there is artificial intelligence and machine learning, which we use to model users’ behavior and to intervene in ways that best assist them. In this domain, too, tremendous advances have been made and many more interesting innovations are on the horizon.”
Want to learn more?
Pattie Maes is a professor in MIT's Program in Media Arts and Sciences as well as academic head of the MAS Program. She runs the Media Lab's Fluid Interfaces research group, which aims to radically reinvent the human-machine experience. Coming from a background in artificial intelligence and human-computer interaction, she is particularly interested in cognitive augmentation, or how immersive and wearable systems can actively assist people with memory, learning, decision-making, communication, and wellbeing. Prior to joining the Media Lab, Maes was a visiting professor and a research scientist at the MIT Artificial Intelligence Lab. She holds a bachelor's degree in computer science and a PhD in artificial intelligence from the VUB (Brussels, Belgium).
Published on:
28 June 2018