Mohamed Zaid

How to apply machine learning and deep learning methods to audio analysis

Updated: May 9, 2020


While much of the writing and literature on deep learning concerns computer vision and natural language processing (NLP), audio analysis — a field that includes automatic speech recognition (ASR), digital signal processing, and music classification, tagging, and generation — is a growing subdomain of deep learning applications. Some of the most popular and widespread machine learning systems, the virtual assistants Alexa, Siri, and Google Home, are largely products built atop models that can extract information from audio signals.
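Before a model can extract information from audio, the raw waveform is usually converted into a time-frequency representation such as a spectrogram. As an illustration (not part of the original article), here is a minimal NumPy-only sketch of that step: the signal is split into overlapping windowed frames and each frame is mapped to frequency magnitudes with an FFT. The frame size and hop length are arbitrary example values.

```python
import numpy as np

def magnitude_spectrogram(signal, frame_size=1024, hop=512):
    """Split a 1-D signal into overlapping Hann-windowed frames and take
    the FFT magnitude of each frame -- the basic time-frequency
    representation many audio models are trained on."""
    window = np.hanning(frame_size)
    n_frames = 1 + (len(signal) - frame_size) // hop
    frames = np.stack([signal[i * hop : i * hop + frame_size] * window
                       for i in range(n_frames)])
    # One row per frame, one column per frequency bin.
    return np.abs(np.fft.rfft(frames, axis=1))

# A synthetic 440 Hz tone stands in for real recorded audio.
sr = 16000                      # sample rate in Hz
t = np.arange(sr) / sr          # one second of samples
tone = np.sin(2 * np.pi * 440 * t)

spec = magnitude_spectrogram(tone)
peak_bin = spec.mean(axis=0).argmax()
peak_hz = peak_bin * sr / 1024  # frequency of the strongest bin
```

The strongest frequency bin lands near 440 Hz, confirming the representation captures the tone; real pipelines typically go one step further to mel spectrograms or MFCCs, which rescale the frequency axis to match human hearing.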

Many of our users at Comet are working on audio-related machine learning tasks such as audio classification, speech recognition, and speech synthesis, so we built them tools to analyze, explore, and understand audio data using Comet’s meta machine-learning platform.

