
Google announces “hum to search” machine learning music search feature

The company also notes that people need not worry about having perfect pitch to take advantage of this new search capability.



On Thursday, Google announced a new “hum to search” feature that enables people to pinpoint songs by simply humming part of a track. In a press release, Google notes that people need not worry about their musical abilities: “you don’t need perfect pitch to use this feature.”


The new search capability is available in the Google app on mobile devices as well as through the Google Search widget. When using the widget, people will first need to tap the small microphone icon and prompt the feature by either tapping the button labeled “search a song” or saying “what’s this song?” Next, the person hums part of the song.

The hum to search function is also available on Google Assistant using a similar framework. To identify a song there, first say “Hey Google, what’s this song?” and then hum the tune. It’s important to remember that the person searching will need to know a bit of the song to help target a particular track. Per the Google release, people will need to hum a portion of the song for 10-15 seconds.

SEE: TechRepublic Premium editorial calendar: IT policies, checklists, toolkits, and research for download (TechRepublic Premium)

The feature uses machine learning to identify potential tracks based on a person’s hummed sequence. After humming a tune, Google will provide a series of “most likely options based on the tune.” Then, people can play these closest matches and peruse information related to the performing artists, tracks, albums, and more.

The machine learning identification process

A given song has myriad identifying characteristics, and Google likens a song’s melody to its “fingerprint.” With this in mind, Google has designed its machine learning models to “match your hum, whistle or singing to the right ‘fingerprint.’” These models transform a person’s hummed melody into a “number-based sequence representing the song’s melody.”

SEE: OpenAI unveils neural network capable of creating music and releases debut mixtape (TechRepublic)

According to Google, the models have been trained to pinpoint specific songs based on multiple sources, including in-studio recordings, singing, whistling, and humming. All other elements of a recording, such as the instruments and the tone and timbre of the voice, are removed, leaving “the song’s number-based sequence, or the fingerprint.” These numerical sequences are then compared to thousands of other tracks to determine possible matches.
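To make the idea concrete, here is a minimal sketch of melody-sequence matching. This is not Google's actual system; the pitch "fingerprints," song catalog, and the simple distance function are all invented for illustration, and a real system would use learned embeddings rather than hand-coded arithmetic.

```python
# Illustrative sketch (not Google's implementation): represent each melody as a
# number-based sequence of pitches, normalize away the key the user hums in,
# then rank catalog songs by similarity to the hummed sequence.

def to_intervals(pitches):
    """Convert absolute MIDI-style pitches to successive intervals,
    so a hum transposed into a different key still matches."""
    return [b - a for a, b in zip(pitches, pitches[1:])]

def distance(seq_a, seq_b):
    """Sum of absolute interval differences over the overlapping span;
    a crude stand-in for a learned similarity score."""
    n = min(len(seq_a), len(seq_b))
    return sum(abs(x - y) for x, y in zip(seq_a[:n], seq_b[:n]))

def rank_matches(hum_pitches, catalog):
    """Return song titles ordered from most to least likely match."""
    hum = to_intervals(hum_pitches)
    scored = sorted((distance(hum, to_intervals(p)), title)
                    for title, p in catalog.items())
    return [title for _, title in scored]

# Hypothetical "fingerprints": opening pitches of two well-known tunes.
catalog = {
    "Twinkle Twinkle Little Star": [60, 60, 67, 67, 69, 69, 67],
    "Happy Birthday": [60, 60, 62, 60, 65, 64],
}

# A hum of "Twinkle Twinkle" transposed up a whole step still ranks first,
# because the interval sequence is unchanged.
print(rank_matches([62, 62, 69, 69, 71, 71, 69], catalog))
# → ['Twinkle Twinkle Little Star', 'Happy Birthday']
```

The key design choice here is comparing intervals rather than absolute pitches, which mirrors why Google can say a user does not need perfect pitch: the shape of the melody, not the exact notes, drives the match.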

Currently, the feature is available in English on iOS, while on Android hum to search is available in more than 20 languages. The company hopes to expand these capabilities to additional languages, per the release.
