Indian Institute of Science dives deep into neuromorphic computing and AI

With the Centre for Neuroscience seeking to develop neuromorphic computing, and the Department of Computational and Data Sciences working on AI, IISc is paving the way for advanced tech in India.

Not long ago, creating a computer chip that could emulate the human brain was considered moonshot technology. Not surprising, considering the possibilities of a machine crunching data as fast as it could be generated.

Now, the Centre for Neuroscience (CNS) at the Indian Institute of Science (IISc) is seeking to decode brain function with a view to developing neuromorphic computing and algorithms based on a better understanding of how the human brain works.

How brain research contributes to neural architecture

Simply put, neuromorphic computing involves building circuits that mimic neuro-biological architectures present in the nervous system.

“There’s been a major revolution in the West around trying to understand computing systems with neural architecture, and how these systems can learn in novel and complex environments,” says Sridharan Devarajan, assistant professor of cognitive neuroscience at IISc.

Recognizing the importance of neuromorphic computing for the 21st century, DARPA (Defense Advanced Research Projects Agency), a premier research agency of the US Department of Defense, commissioned the SyNAPSE (Systems of Neuromorphic Adaptive Plastic Scalable Electronics) program, which seeks to develop neuromorphic computing devices that mimic the mammalian brain.

“In some sense, the real difference between neural and digital architecture is that in digital architecture, there’s a need for extreme precision and computation. So, there’s a huge amount of hardware devoted to achieving that precision,” says Devarajan.

On the other hand, essential ideas from neuroscience can make for very unconventional computers, which are successful at things that everyday computers don’t do so well. Vision is one such example. Controlling movements in very complex terrains is another.

“A computer can handle core computing very well. But anything that requires dealing with variable, unpredictable events and environments needs something that is programmed for flexibility, rather than something that’s programmed for repeatability. A fundamental challenge for computers these days is to be able to produce naturalistic movements, especially in the case of identifying and avoiding obstacles,” says Devarajan.

Shedding more light on this challenge, Devarajan explains: “For example, in some sports like golf, one needs to replicate a stereotyped action every time, so the objective there is to ensure identical performance every time. But in many real-world applications, this is not enough.”

If you try to train a robot to walk on a level floor with a conventional algorithm, the robot won’t perform well in novel environments, such as rugged terrain or stairs.

“A lot of the research we do goes into understanding how the human brain enables this flexibility, and some of it could be used to build next-generation computers,” he adds.

Another classic example of how computers differ from humans in recognizing visual patterns is the Captcha system used for authentication on websites.

Computers find it very hard to identify these images. Human vision, on the other hand, has evolved to process certain "invariances". This means that irrespective of the shape or the angle of the font, you’ll be able to recognize the characters, based on the closest match your brain forms.

The Centre also studies the concept of attention, which is a fascinating bottleneck in the human brain. When navigating in the real world, the brain must flexibly allocate its limited resources at any time to deal only with the most important objects. Research at CNS seeks to understand how this remarkable capacity for attention arises in the brain.

The CNS is also beginning to apply concepts learnt from the brain to artificial neural network models. Google’s DeepMind system is one instance: the computer not only plays a game, it also has an attention filter that allows it to focus on the most relevant parts of the task, helping it make the most of its processing resources.
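Neither IISc's nor DeepMind's actual code is described in the article, but the core idea of an attention filter can be sketched in a few lines of Python: score the parts of an input for relevance, turn the scores into weights, and concentrate processing on the heavily weighted parts. The scores and features below are made up purely for illustration.

```python
import numpy as np

def attention_weights(scores):
    """Softmax turns raw relevance scores into weights that sum to 1."""
    exp = np.exp(scores - scores.max())
    return exp / exp.sum()

# Hypothetical relevance scores for four regions of a game screen.
region_features = np.array([[0.2, 0.1],   # background
                            [0.9, 0.4],   # moving object
                            [0.3, 0.8],   # score display
                            [0.1, 0.1]])  # empty corner
relevance = np.array([0.5, 2.0, 1.2, 0.1])

weights = attention_weights(relevance)
# The "attended" representation is a weighted sum: processing resources
# are effectively concentrated on the most relevant regions.
attended = weights @ region_features
print(weights.round(2), attended.round(2))
```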

So, how does neuromorphic computing differ from traditional computing?

Earlier, neuromorphic computing predominantly dealt with programming for flexibility. Today, it’s more about emulating neurons in hardware.

The conventional approach models the biophysics of the neuron, puts together a very large number of these model neurons, and then simulates them in software. The model is typically described as a set of differential equations. The simulator runs several thousands of these equations in parallel to see how the neurons change over time and how, given a certain input, this pool of neurons produces a certain output.
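To make "simulating the differential equations" concrete, here is a minimal, purely illustrative sketch of one such model neuron, a leaky integrate-and-fire unit, stepped forward in software with Euler integration. The parameter values are placeholders, not taken from any CNS model.

```python
# Leaky integrate-and-fire neuron: dV/dt = (-(V - V_rest) + R*I) / tau
V_REST, V_THRESH, V_RESET = -65.0, -50.0, -65.0   # membrane potentials in mV (illustrative)
TAU, R = 10.0, 1.0                                # time constant (ms) and resistance (illustrative)
DT = 0.1                                          # integration time step in ms

def simulate(input_current, steps=2000):
    """Euler-integrate the membrane potential and record spike times (in ms)."""
    v, spikes = V_REST, []
    for step in range(steps):
        dv = (-(v - V_REST) + R * input_current) * (DT / TAU)
        v += dv
        if v >= V_THRESH:          # threshold crossed: emit a spike, then reset
            spikes.append(step * DT)
            v = V_RESET
    return spikes

# A software simulator runs thousands of such equations in parallel;
# a neuromorphic chip instead realises the same dynamics directly in silicon.
print(simulate(input_current=20.0)[:5])
```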

Neuromorphic computing, on the other hand, actually builds those neurons in hardware. It models currents flowing through neurons as currents flowing through transistors and circuits. So the physics is actually being emulated on these neuromorphic chips. It’s like building a simplified version of the brain on a little chip. Fancy that!

The advantage here is that the emulation happens in real time: the input is fed in real time and the output is produced in real time, which gives an enormous advantage in speed. What you sacrifice, to some extent, is flexibility: the configuration of the neurons (read: the architecture) cannot be changed as easily as in a software model.

“One of the main differences between a traditional computing model and neuromorphic computing is that the traditional model is deterministic, whereas the neuromorphic one is probabilistic,” explains Balaji Jayaprakash, principal investigator, Centre for Neuroscience, IISc.

Why conventional computing architecture doesn’t make the cut

One major challenge with conventional computing is in dealing with natural and unpredictable environments. Conventional algorithms may not fetch the desired results, because these programs are built only to reproduce an action accurately. What you need, though, is flexibility and the ability to learn on the fly.

So, if it’s a robotic arm you wish to control, neuromorphic computing is the way to go, given its real-time capabilities and the fact that the hardware is very lightweight. The model is also very scalable: stack up a bunch of chips and you get a very powerful processing architecture. It also consumes far less power than a digital computer.

Researchers at CNS also try to understand the relationship between memories of recent and remote lifetime events, and how they interact with and influence each other. The team uses techniques such as optogenetics, CLARITY, virtual reality, and in-vivo imaging, along with various behavioral paradigms, to address these questions.

A deep dive into Artificial Intelligence

AI and machine learning have been making great strides over the last few years, from self-driving cars to smart missiles. They have achieved major milestones too, be it IBM Watson beating humans at Jeopardy or Deep Blue defeating Garry Kasparov.

With more and more devices capable of sensing the environment and collecting data, a massive amount of data is being generated. So, technologies like AI and deep learning are essential to understand the data and derive insights, especially in scientific domains. The Large Hadron Collider, for instance, recently released 300 TB of data for public viewing.

To deal with the humongous amounts of data being churned out from such sources, various ingenious models have been developed. One such example is Hadoop MapReduce.

Essentially, MapReduce is a programming model used for processing large data sets with a parallel, distributed algorithm on a Hadoop cluster. This allows for massive scalability across thousands of servers in the cluster.
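The classic illustration of the MapReduce model is counting words: the map step emits (word, 1) pairs, the framework shuffles them by key, and the reduce step sums each group. The sketch below mimics those three phases in plain Python; on an actual Hadoop cluster the same map and reduce functions would run in parallel across thousands of machines.

```python
from collections import defaultdict
from itertools import chain

def map_phase(document):
    """Map: emit a (word, 1) pair for every word in the document."""
    return [(word.lower(), 1) for word in document.split()]

def shuffle(pairs):
    """Shuffle: group all values by key, as the framework would between phases."""
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(key, values):
    """Reduce: combine the values for one key into a single result."""
    return key, sum(values)

documents = ["neuromorphic chips emulate neurons",
             "deep learning needs data and chips"]
mapped = chain.from_iterable(map_phase(doc) for doc in documents)
counts = dict(reduce_phase(k, v) for k, v in shuffle(mapped).items())
print(counts["chips"])   # -> 2
```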

"The heterogeneous part of it, that's the variety, is not as well understood. And there's a bigger push to understand the data at a semantic level. That's where our work fits in - how we can understand and decipher the data from this unstructured source," says Partha Talukdar, assistant professor, Department of Computational and Data Sciences, IISc.

There's a need for a common language between machines and humans. "If the machine is giving predictions that are not interpretable by humans, it amounts to nothing. So a semantic layer is very necessary to handle the heterogeneous or 'variety' part of things," explains Talukdar.

"There's a group at IISc trying to simulate a set of neurons in a lab environment, which can be trained to control the switching on or switching off an electrical circuit," he adds. 

Sriram Ganapathy, assistant professor at IISc, previously worked as an IBM Watson researcher at Yorktown Heights, and shares his insights on deep learning.

“Deep learning builds a hierarchy of classifiers by taking the data and building a layer of representation, then taking another layer of representation and adding it to the first layer, and so on and so forth,” he says. 
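A minimal sketch of that layering idea in Python with NumPy: each layer transforms the previous layer's representation, and the stack ends in a simple classifier. The layer sizes and random weights are purely illustrative, not a trained model.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(inputs, out_dim):
    """One layer of representation: a linear map followed by a nonlinearity."""
    weights = rng.normal(size=(inputs.shape[-1], out_dim))
    return np.maximum(0.0, inputs @ weights)   # ReLU

# Raw input (e.g. a flattened image or a window of audio features).
x = rng.normal(size=(1, 64))

# Stack layers: each one builds a representation on top of the previous one.
h1 = layer(x, 32)      # first layer of representation
h2 = layer(h1, 16)     # second layer, built on the first
logits = h2 @ rng.normal(size=(16, 3))   # final classifier over 3 classes
print(logits.argmax(axis=-1))
```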

One particular solution the IISc research team is working on sounds a lot like what Professor X uses to control thoughts. The technology makes use of a skull cap that could record thoughts and convey them in situations where the user has a speech or physical disability. The device does this by capturing and interpreting electroencephalogram (EEG) signals. The technology also helps predict what a user wants to type, based on his or her imagined speech.
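The article doesn't describe the team's actual pipeline, but a common way to prototype this kind of brain-computer interface is to slice the EEG into short windows, extract band-power features, and train a classifier to map each window to the imagined word. A hypothetical sketch using scikit-learn, with synthetic signals standing in for real recordings:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

FS = 256  # sampling rate in Hz (illustrative)

def band_power(window, low, high):
    """Mean spectral power of one EEG channel in a frequency band."""
    spectrum = np.abs(np.fft.rfft(window)) ** 2
    freqs = np.fft.rfftfreq(window.size, d=1.0 / FS)
    return spectrum[(freqs >= low) & (freqs < high)].mean()

def features(window):
    """Alpha- and beta-band power as a tiny feature vector (hypothetical choice)."""
    return [band_power(window, 8, 13), band_power(window, 13, 30)]

# Synthetic stand-ins for labelled EEG windows: each row is one second of signal,
# each label the word the subject imagined while it was recorded.
rng = np.random.default_rng(1)
windows = rng.normal(size=(40, FS))
labels = rng.integers(0, 2, size=40)          # e.g. "yes" vs "no"

X = np.array([features(w) for w in windows])
clf = LogisticRegression().fit(X, labels)
print(clf.predict(X[:3]))                     # predicted imagined words
```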

What business does it serve?

When you talk about industrial applications, the opportunities are limitless. Companies are already using deep networks to predict your interests. It’s not rocket science. Consider Netflix: if you expressed an interest in thrillers, and watched a political movie the last time you logged in, guess what Netflix is going to pull up next? Political thrillers, of course.
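A toy sketch of that intersection idea follows; the titles and genre tags are invented, and a real recommender like Netflix's learns these signals from viewing history with deep networks rather than hand-written tags.

```python
# Toy genre-intersection recommender with made-up titles and tags.
CATALOG = {
    "State of Siege": {"political", "thriller"},
    "Quiet Waters": {"drama", "romance"},
    "The Long Ballot": {"political", "thriller"},
    "Laugh Track": {"comedy"},
}

def recommend(recent_interests):
    """Rank titles by how many of the viewer's recent interests they match."""
    scores = {title: len(genres & recent_interests)
              for title, genres in CATALOG.items()}
    return sorted(scores, key=scores.get, reverse=True)

# Viewer liked thrillers and just watched a political movie.
print(recommend({"thriller", "political"})[:2])   # political thrillers first
```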

India has been rather slow in warming up to deep learning, but there's a reason behind this. Deep learning requires serious computational infrastructure, so organizations and labs that already had this computational prowess and funding in place were much faster in identifying and adopting these technologies.

"We’ve identified that these models are data-hungry. The more you feed them, the better they perform. Previous models didn't have them. And these machines like to be overfed," reveals Ganapathy.

Deep learning doesn’t just help bring home the bacon, it’s also being used to spot the bad guys.

What the government is interested in is identifying who's speaking on a call, what language is being spoken, and whether a particular keyword has been used. Sifting through petabytes of data is not humanly possible, and that's where deep learning steps in.

It's a touchy issue, though. Listening to every word being spoken is certainly a privacy concern. But if the system can be harnessed to flag red alerts, or to determine whether two speakers are one and the same, the job is done. “In fact, DRDO (Defence Research and Development Organisation) has already started working on some of these projects,” reveals Ganapathy.

Banks, too, are now investing in this technology. There are instances when a customer might not want to key in a password, but say it instead. “That's when neuromorphic computing does the job: to identify a speech pattern and match it to the authorized one,” he adds.
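The article doesn't detail the banks' systems, but a common approach to voice authentication is to compare a fixed-length "voiceprint" embedding of the spoken password with the one enrolled earlier, accepting the caller only if the two are similar enough. A sketch with made-up embeddings:

```python
import numpy as np

def cosine_similarity(a, b):
    """Similarity between two voiceprint embeddings, in [-1, 1]."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(enrolled_embedding, new_embedding, threshold=0.8):
    """Accept the caller only if the new voiceprint matches the enrolled one."""
    return cosine_similarity(enrolled_embedding, new_embedding) >= threshold

# In a real system these vectors would come from a speaker-embedding model
# trained on many voices; here they are random stand-ins.
rng = np.random.default_rng(2)
enrolled = rng.normal(size=128)
same_speaker = enrolled + 0.05 * rng.normal(size=128)
impostor = rng.normal(size=128)

print(verify(enrolled, same_speaker))   # True: spoken password accepted
print(verify(enrolled, impostor))       # False: rejected
```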

A few years ago, Google made the transition from 'strings to things'. So now, when you search on Google, you see a 'knowledge panel' on the right-hand side. For example, if you’re looking for Albert Einstein, you’ll not only see when he was born and when he died, but you’ll also get details on his education and the famed E = mc².
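'Strings to things' means treating 'Albert Einstein' as an entity with facts attached, rather than just a string to match. A toy sketch of assembling a knowledge panel from (subject, relation, object) triples, with a small hand-written graph standing in for a real knowledge base:

```python
# A tiny knowledge graph stored as (subject, relation, object) triples.
TRIPLES = [
    ("Albert Einstein", "born", "14 March 1879"),
    ("Albert Einstein", "died", "18 April 1955"),
    ("Albert Einstein", "educated_at", "ETH Zurich"),
    ("Albert Einstein", "known_for", "E = mc^2"),
]

def knowledge_panel(entity):
    """Collect every fact about one entity, as a search engine's panel would."""
    return {relation: obj for subj, relation, obj in TRIPLES if subj == entity}

print(knowledge_panel("Albert Einstein"))
```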

The Cray XC40 (SahasraT) at IISc, one of India’s 11 supercomputers, is currently being used for machine learning applications. "With 33,000 computing cores, connected by a very fast network, we can do distributed learning in this environment, therefore enabling large-scale machine learning," says Talukdar.
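The article doesn't say which distributed-learning scheme runs on SahasraT, but the most common pattern is data parallelism: each worker computes gradients on its own shard of the data, the gradients are averaged, and the shared model is updated. A minimal single-process sketch of that loop on a toy linear-regression problem:

```python
import numpy as np

rng = np.random.default_rng(3)

def worker_gradient(weights, X_shard, y_shard):
    """Gradient of mean squared error on one worker's shard of the data."""
    errors = X_shard @ weights - y_shard
    return X_shard.T @ errors / len(y_shard)

# Toy linear-regression problem split across 4 "workers".
X = rng.normal(size=(400, 5))
true_w = np.array([1.0, -2.0, 0.5, 3.0, 0.0])
y = X @ true_w + 0.01 * rng.normal(size=400)
shards = list(zip(np.array_split(X, 4), np.array_split(y, 4)))

weights = np.zeros(5)
for step in range(200):
    grads = [worker_gradient(weights, Xs, ys) for Xs, ys in shards]
    weights -= 0.1 * np.mean(grads, axis=0)   # average gradients, then update

print(weights.round(2))   # close to the true coefficients
```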

In addition, the supercomputer is also being used for wind simulation in aircraft design and to simulate supernovae.

Democratizing Artificial Intelligence

Founded by Elon Musk, Sam Altman, and other visionaries, OpenAI, as the name suggests, is a non-profit AI research company that freely collaborates with other institutions and researchers by making its patents and research open to the public.

"I believe this is an interesting development. There's also LNAI, a non-profit AI research group. We've derived so much utility from these technologies just by scratching the surface,” says Talukdar.

"Yes, we've a long, long way to go, and yes, there will be failures, but as Elon Musk says failure is an option. If things are not failing, you're not innovating enough," he says.