Pluggd to make podcasts chunkier, searchable

Seattle-based podcast discovery and management service Pluggd is unveiling a major new feature at DEMO this weekend that combines speech recognition and semantic analysis to let users search for, and skip to, the parts of an audio file related to topics that interest them. It’s more than just speech recognition.

This is one of the most compelling examples I’ve seen lately of a growing trend: making multimedia content more granular and letting us take even greater control over the media we consume. We don’t just want to consume what we wish; we want to consume it in the way we wish.

Called “Hear Here,” the new feature is available with only a single test file this weekend, but CEO Alex Castro told me that, given his team’s background scaling large distributed systems at places like Amazon and Microsoft, they decided to take on the hardest part first: relevance determination. Pluggd aims to have hundreds of thousands of podcasts analyzed and searchable by the end of the year, all with nothing required of the original publishers. Castro has been working with speech recognition technologies since he was 17, starting at Bell Labs.

TechCrunch first profiled Pluggd when the company launched in June. Its basic feature set is very cool, but not as cool as this new search function. They now report more than 100,000 users and say their monthly uniques have grown well ahead of schedule. The company has six full-time employees and six part-timers; they’ve raised some angel backing and are working to raise more. They’re going to need it to crunch the kind of data this new feature demands.

Here’s how the new search will work. When users decide they want to hear only the part of a file concerning a given topic, they enter a search term. Pluggd then searches the file for instances of that term and of related terms. Relevance is displayed on the file’s timeline as a heat map: the sections most related to your term appear in red, less related sections in green, and unrelated sections in blue. Hover over any relevant point on the timeline and you’ll see the terms used there that Pluggd determined were related to your search term.
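To make that concrete, here’s a minimal sketch of how per-segment relevance scores might be bucketed into those three colors. The thresholds, names, and scores are all invented for illustration; Pluggd hasn’t published how its heat map is actually computed.

```python
# Hypothetical sketch, not Pluggd's code: bucket per-segment relevance
# scores into the red/green/blue bands shown on the timeline heat map.
# The 0.0-1.0 scores and thresholds are invented for illustration.

def heat_map(segments, hot=0.7, warm=0.3):
    """Map (start_seconds, relevance) pairs to timeline colors."""
    colors = []
    for start, relevance in segments:
        if relevance >= hot:
            color = "red"       # strongly related to the query
        elif relevance >= warm:
            color = "green"     # loosely related
        else:
            color = "blue"      # unrelated
        colors.append((start, color))
    return colors

# A three-minute file scored against a query like "Royals":
print(heat_map([(0, 0.10), (60, 0.45), (120, 0.85)]))
# -> [(0, 'blue'), (60, 'green'), (120, 'red')]
```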

Users can click to listen to the file at that point, or select another option to tag, describe, and share a particular section of the file. Castro says the company aims to set that data free, not keep it trapped inside Pluggd. All of this is being done first with audio, but the company intends to apply the same technology to video as well. Podzinger already uses speech recognition with podcasts but doesn’t offer the semantic analysis of terms related to your search query. Blinkx does something similar for video, but Pluggd is building on top of speech recognition and adding even more value to search results.
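As a guess at what setting that data free might mean in practice, here’s one hypothetical shape for a tagged, shareable segment; Pluggd hasn’t published a data model, so every field below is invented.

```python
# Hypothetical record for a tagged, shareable segment of a podcast.
# Pluggd has not published its data model; all field names are invented.
from dataclasses import dataclass, field

@dataclass
class SharedSegment:
    feed_url: str         # the podcast's original feed, untouched by Pluggd
    episode_guid: str     # which episode the clip comes from
    start_seconds: float  # where playback should begin
    end_seconds: float    # where the clip ends
    tags: list[str] = field(default_factory=list)  # user-supplied tags
    description: str = ""                          # user-supplied summary

clip = SharedSegment(
    feed_url="http://example.com/feed.xml",
    episode_guid="episode-42",
    start_seconds=312.0,
    end_seconds=388.5,
    tags=["baseball", "Royals"],
    description="Trade rumor discussion",
)
```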

Castro told me that the company built a language model in-house to harden the speech recognition technology it licensed from another vendor. The results of that speech recognition are then passed through a relevance heuristic that Pluggd built by crawling the web to index terms regularly used in connection with one another. Search a podcast for “Royals” and Pluggd will show you where else in the file baseball and Kansas City are discussed as well.
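Pluggd hasn’t detailed that heuristic, but the idea maps naturally onto a co-occurrence index: terms that regularly show up in the same crawled documents get treated as related. Here’s a toy version, with all names and data invented for illustration.

```python
# Toy co-occurrence index in the spirit of the heuristic described above:
# terms that regularly appear together in crawled documents are treated
# as related. Entirely illustrative; Pluggd's heuristic is unpublished.
from collections import defaultdict
from itertools import combinations

STOPWORDS = {"the", "a", "in", "for", "of", "and"}

def build_index(documents):
    """Count how often each pair of terms co-occurs in a document."""
    counts = defaultdict(int)
    for doc in documents:
        terms = set(doc.lower().split()) - STOPWORDS
        for a, b in combinations(sorted(terms), 2):
            counts[(a, b)] += 1
    return counts

def related_terms(index, query, top=5):
    """Rank other terms by how often they co-occur with the query term."""
    query = query.lower()
    scores = defaultdict(int)
    for (a, b), n in index.items():
        if a == query:
            scores[b] += n
        elif b == query:
            scores[a] += n
    return sorted(scores, key=scores.get, reverse=True)[:top]

docs = [
    "the royals lost again in kansas city",
    "kansas city baseball fans love the royals",
    "baseball season opens for the royals",
]
print(related_terms(build_index(docs), "Royals"))
# -> e.g. ['city', 'kansas', 'baseball', ...]
```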

From the exploding popularity of short-form video, to the inclusion of permalink URLs inside Google Video, to the big interest in startup Viddler months before launch, to the Hummer Winblad-funded Widgetbox marketplace: putting content into chunks that users can work with is big. That’s with good reason; content chunks, or whatever you want to call them, substantially improve the web’s usefulness. Pluggd’s new search feature in particular sounds like just the kind of technology that multimedia search companies will be offering in years to come.