AI uses deep learning models rather than ML for better decision making: Prashant Rao, MathWorks

Rao highlights how organizations can ensure false-positives and false-negatives are kept to a minimum in AI-based solutions. 

MathWorks | Sep 11, 2019

Prashant Rao, Head of Application Engineering at MathWorks India, and his team have worked with customers to enable the adoption of MATLAB and Simulink products for technical computing and model-based design. In this interview, Rao explains why it is important to focus on the complete solution, not just on the trained models, in order to have a true AI system. He also sheds light on why deep learning models are used instead of machine learning models to train AI systems.

Edited Excerpts: 

With numerous OEMs marketing ML solutions as AI offerings, what’s your definition of what truly qualifies as AI, and how is it significantly better than what ML-based solutions have to offer? 

We see AI as the capability of a system to take decisions autonomously, even when facing uncharted or untested situations. The main opportunities we see involve incorporating AI to automate an important task within an overall system, for example lane tracking for autonomous driving. These applications are often image and video based, such as pedestrian detection, but they can also be non-image based, using sensor, signal and audio data.

Machine learning (ML) and deep learning (DL) are technologies or methodologies used to implement AI systems. It’s important to keep in mind that to truly have an AI system, you need to focus on the complete solution, not just the trained model: this starts with curating and preprocessing large amounts of data, then building the right predictive model, designing a system that incorporates the model to make decisions, and deploying the model to a final destination, with large-scale enterprise systems and embedded hardware being two popular options.

  • AI tends to use deep learning models rather than machine learning models to automatically learn the important features in the data used to make decisions. This differs from ML-based approaches, in which you must perform manual feature selection on the data (see the sketch after this list). 
  • Reinforcement learning is another interesting application of AI. Most people think of reinforcement learning as something for gaming systems, but it can also be uniquely useful for industrial applications. Training of the model is done through a trial-and-error process, and learning through simulation is often cheaper, faster and safer than testing the model in a real-world environment.
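
To make the distinction above concrete, here is a minimal, illustrative Python sketch (not MathWorks code, and not part of the interview) contrasting a classical ML pipeline, in which the engineer hand-picks features (HOG descriptors here, as an arbitrary choice), with a small deep network that learns features directly from raw pixels. The data, feature choice and network shape are all hypothetical.

```python
# Hypothetical example: classical ML with hand-picked features vs. a deep
# network that learns features from raw pixels.
import numpy as np
from skimage.feature import hog                  # hand-crafted feature extractor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

import torch.nn as nn


def train_classical(images, labels):
    """Classical ML: the engineer selects the features (HOG descriptors here)."""
    feats = np.array([hog(img, pixels_per_cell=(8, 8)) for img in images])
    clf = make_pipeline(StandardScaler(), SVC())
    return clf.fit(feats, labels)


# Deep learning: a small CNN consumes raw grayscale images and learns its own
# features end to end, so there is no separate feature-engineering step.
cnn = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.LazyLinear(10),                           # 10 output classes, as an example
)
```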


How can organizations ensure that false-positives and false-negatives are kept to a minimum in AI-based solutions? 

Eliminating errors is crucial when developing real systems, and you need to be mindful of the possibility of errors throughout the entire development process, including testing, validating and updating your model.

Without the right input data, and the right amount of data, there is a chance you could introduce errors into your system. This speaks to the ability to preprocess data: you need the right amount of clean data, and you need to have tested all the edge cases the system will be required to handle. Techniques for doing this include weighting images and categories, or class balancing within the training process. You can also derive additional images through data synthesis – or synthetic data generation – which lets you test the exact scenarios and edge cases the system must handle.
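
As a rough illustration of two of the techniques mentioned above, the Python sketch below applies class weighting so rare categories are not drowned out, and simple image augmentation as a stand-in for synthetic data generation. The labels, transforms and parameter values are invented for the example and are not from the interview.

```python
# Hypothetical example of class balancing and augmentation-based data synthesis.
import numpy as np
from sklearn.utils.class_weight import compute_class_weight
from torchvision import transforms

# (1) Class balancing: weight each class inversely to its frequency so rare
#     categories (e.g. "cyclist") still influence training. Labels are made up.
y_train = np.array(["pedestrian", "car", "car", "car", "cyclist"])
classes = np.unique(y_train)
weights = compute_class_weight(class_weight="balanced", classes=classes, y=y_train)
class_weights = dict(zip(classes, weights))      # pass these to the loss function

# (2) Synthetic data generation via augmentation: derive extra labeled images
#     that mimic edge cases such as glare, blur or slight rotations.
augment = transforms.Compose([
    transforms.ColorJitter(brightness=0.4, contrast=0.4),  # lighting/glare changes
    transforms.RandomRotation(degrees=5),
    transforms.GaussianBlur(kernel_size=3),                # rain/defocus-like blur
    transforms.RandomHorizontalFlip(),
])
# Applying `augment` to each training image yields additional labeled variants.
```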

Testing the algorithm in a production-level environment is crucial to keeping any errors out of the system. 

It’s also important to use tools that allow you to try multiple approaches and iterate on the final model before moving to production. 

Could you highlight the importance of metadata in AI? 

We see the importance of metadata highlighted in the case of image and video data, where the context of when and where an image was taken can be just as important as the image itself. In automated driving, for example, driving conditions change depending on the time of day, the weather and the location in the world, all of which can influence the types of decisions the AI system will make. You need to ensure your system can handle pedestrian detection on sunny days with glare as well as on cloudy days with rain.
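
One hypothetical way to keep this kind of capture metadata usable is to store it in a table alongside the image paths and audit how well each condition is covered before training. The Python sketch below, with invented file names and fields, shows the idea; it is not a description of any MathWorks workflow.

```python
# Hypothetical example: keep capture metadata in a table next to the image paths
# and audit coverage of the conditions the system must handle.
import pandas as pd

frames = pd.DataFrame([
    {"path": "img_0001.png", "time_of_day": "day",   "weather": "sunny_glare"},
    {"path": "img_0002.png", "time_of_day": "night", "weather": "rain"},
    {"path": "img_0003.png", "time_of_day": "day",   "weather": "cloudy"},
])

# How many examples exist for each condition? Gaps here point to data you still
# need to collect or synthesize before training.
coverage = frames.groupby(["time_of_day", "weather"]).size()
print(coverage)
```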

Engineers need access to good labeling tools to label thousands of images accurately. This can be expensive and time consuming, so tools for automatic labeling of datasets are in high demand. Such tools are also becoming increasingly popular for labeling non-image data such as audio and signal data. Image-based applications can additionally use pretrained models to reduce the amount of data you need.
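
As a hedged illustration of that last point, the Python sketch below starts from a publicly available ImageNet-pretrained ResNet-18, freezes its feature layers, and retrains only a new classification head, which is the usual transfer-learning recipe for reducing labeled-data requirements. The class count is an arbitrary placeholder.

```python
# Hypothetical transfer-learning example: reuse an ImageNet-pretrained network
# so far fewer labeled images are needed for a new task.
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights="IMAGENET1K_V1")     # pretrained feature extractor
for param in model.parameters():
    param.requires_grad = False                      # freeze the pretrained layers

num_classes = 5                                      # placeholder label count
model.fc = nn.Linear(model.fc.in_features, num_classes)  # new head to be trained
# Only the new final layer is trained, so a comparatively small labeled set suffices.
```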

With the widespread democratization of AI and AI-as-a-Service, will the enterprise see a level playing field for large and small players? How can larger organizations maintain their competitive edge in the current scenario? 

There are some applications where large enterprises have a big advantage because of the wealth of data they own to train AI systems – think of the likes of Apple, Google, Facebook, and Microsoft. There are also some easy AI-as-a-Service tools that cloud providers offer for someone to learn and get started with specific applications, for example an object detection system that recognizes when your spouse’s car has arrived at your house.
 
However, the industrial applications we’re helping customers with are ones where AI is used as part of an overall system to enhance a system design or product. In this context, we’re seeing interest in AI across all industries, especially automotive, aerospace and defense, because every company is looking at how it can improve its systems using AI, and what they find is that turnkey AI-as-a-Service doesn’t work.
 
One of our customers, Caterpillar, is using MathWorks tools to identify tractors and people in the field. AI-as-a-Service cannot meet their requirements because they need a unique solution specific to their scenario. You can watch the Caterpillar presentation outlining their challenge, solution and results here.

What are the trends/use cases in AI that will completely change the game for enterprise by 2020?  

As AI becomes increasingly prevalent, and almost commonplace, we expect to see five key trends worth tracking in 2020:  

  1. As the realization that AI is no longer exclusive to computer scientists continues to grow, one trend we’re seeing which will continue into 2020 is the use of deep learning by domain experts, engineers and scientists. To further support this trend, models created by deep learning experts will continue to be widely available for use in industry, allowing the latest deep learning research to be used by experts and non-experts alike. 
  2. The ability to incorporate reinforcement learning will be a continuing trend into 2020. This means using deep learning and AI in a controls environment, training systems based on a reward signal, and using data to continuously improve the outcome of the system.  
  3. AI will increase in popularity for applications beyond images: applications using signal, wireless and radar communications. 
  4. AI will be deployed across multiple hardware targets – not just embedded GPUs but also FPGAs and lower-power systems. A single AI model may need to be deployed to multiple targets, so tools that automatically target multiple platforms are a growing need we expect to continue. Within this, for low-power systems, we’re seeing a growing trend toward creating the most accurate model with the smallest footprint, taking advantage of techniques like half-precision and int8 fixed-point designs (see the sketch after this list). 
  5. There will be a growing need to manage AI complexity within a project. This will require workflows that include iterative training, network design exploration and tracking the artifacts used in training an AI model to identify the optimal solution.
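
As a hedged illustration of the footprint-reduction techniques mentioned in point 4, the short Python sketch below converts a placeholder model to half precision and to int8 via dynamic quantization. It is a generic example, not a description of any particular deployment toolchain.

```python
# Hypothetical example: shrink a trained model's footprint with half precision
# and int8 dynamic quantization (a generic PyTorch illustration).
import copy

import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))

# int8 dynamic quantization: Linear-layer weights are stored as 8-bit integers.
model_int8 = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

# Half precision: roughly halves the memory footprint on fp16-capable hardware.
model_fp16 = copy.deepcopy(model).half()

sample = torch.randn(1, 128)
print(model_int8(sample).shape)                  # same interface, smaller weights
```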
     