DigitalSmiths Claims to Be the Google of Video Ads

By The Myers Report Archives

Imagine if someone could do for video advertising what Google AdSense has done for text. A North Carolina company that recently received $6 million in venture funding claims it can.

Originally published: July 9, 2007

DigitalSmiths says the VideoSense ad matching technology it introduced this year can recognize pieces of video and audio and then tell an ad server to place contextually relevant ads in, around or even over the video.

In presentations and exclusive interviews with Jack Myers Media Business Report, DigitalSmiths CEO Ben Weinberger explained the technology and what it can do. One of his presentations calls VideoSense "the first platform agnostic broadband video contextual advertising solution to integrate seamlessly with existing online ad networks."

In plainer English, that means the technology recognizes a lot of what's in a video — objects (clothing, furniture, a soda can, a cell phone, etc.), scenes and locales (Paris, New York, Los Angeles, a beach, a ski slope), and even famous faces — as well as speech (which it converts to text for recognition), then plugs that information into its algorithm, accesses a database and tells an ad server from, say, DoubleClick, Zedo or Atlas, to put in an ad that matches the content.
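The matching flow described above — recognized tags feeding a keyword lookup that picks an ad — could be sketched roughly as follows. This is a toy illustration in Python, not DigitalSmiths' actual implementation; all function names, data fields and the scoring rule are hypothetical.

```python
# Toy sketch of a contextual ad matcher in the spirit of the pipeline
# described above. All names and data are hypothetical, not VideoSense's
# actual code or API.

def extract_tags(video_metadata):
    """Flatten recognized objects, locales, faces and transcript words
    into one set of lowercase keywords."""
    tags = set()
    for field in ("objects", "locales", "faces", "transcript"):
        for item in video_metadata.get(field, []):
            tags.update(item.lower().split())
    return tags

def match_ads(tags, inventory):
    """Return advertisers whose keywords overlap the recognized tags,
    ranked by the number of matching keywords."""
    scored = []
    for ad in inventory:
        overlap = tags & {k.lower() for k in ad["keywords"]}
        if overlap:
            scored.append((len(overlap), ad["advertiser"]))
    return [name for _, name in sorted(scored, reverse=True)]

# Example loosely based on The Office demo described in this article.
video = {
    "objects": ["suit", "wristwatch"],
    "transcript": ["the", "books", "don't", "balance"],
}
inventory = [
    {"advertiser": "Men's Wearhouse", "keywords": ["suit", "menswear"]},
    {"advertiser": "Fossil", "keywords": ["wristwatch", "watch"]},
    {"advertiser": "Home Depot", "keywords": ["lumber", "tools"]},
]
print(match_ads(extract_tags(video), inventory))
```

A real system would of course replace the keyword overlap with the multilayered contextual model Weinberger describes, and hand the winning match to an ad server such as DoubleClick rather than print it.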

Weinberger showed one example in which the Thrivent financial company worked with Habitat for Humanity to build homes. When the video showed a Milwaukee Brewers stadium logo, an ad for the team's ballpark appeared in an ad unit next to the video window. The software recognized builders working on houses and movers carrying boxes, and showed ads for Home Depot and a moving company, respectively.

For a demo episode of The Office, the DigitalSmiths presentation showed ads for Men's Wearhouse next to a character in a nicely cut suit, for J.Crew when that company was mentioned and for Fossil when someone mentioned wristwatches.

Recognizing Tom Cruise at the Beach

The presentations were not true demos. But if the technology works as promised, it could herald a revolution in the way ads are served for video. Imagine if, in addition to choosing a channel based on its demographic makeup (young women, older adults, children …) or general content strand (cooking, lifestyle, etc.), marketers could place ads relevant to individual content parsed at multiple levels.

What if you could, for example, tell an ad network to recognize every image of Tom Cruise at an airport but not in the mountains, or to show women's skiwear ads whenever there were women walking in a snowy village but not in the middle of a city or by a pool?

Weinberger says his technology can do all that, and that it is also smart enough not to place ads with unfortunate meanings. He recounts one instance in which a text ad for Samsonite luggage appeared alongside a story about a serial killer who had placed bodies in suitcases. VideoSense's contextual recognition is deep, broad and multilayered, and unlikely to make such mistakes, Weinberger says.

Yet his presentation showed contextual advertising in a rather literal and blunt sense. The Office episode, for example, was about some characters' inability to balance the books due to possible pilfering, and so an ad for Deloitte & Touche appeared. The J.Crew ad came up when someone said illicit purchases might have been made from that chain. Not only were the ads a bit distracting, but the companies' ad buyers might prefer their messages not appear in those contexts.

Weinberger insists, though, that the technology will give the ad industry what it ultimately wants.

"There's no doubt in [advertisers'] minds that contextual advertising is always better than non-contextual," Weinberger says. "I saw a story yesterday about animal abuse and the ad was for weight loss. It would have been better targeted to something relevant to animals."

VideoSense will certainly be refined over time. Weinberger says DigitalSmiths' software experts, some of them PhDs in video pattern recognition, have built the components of VideoSense from the ground up so the company will have control and be able to continually improve the product. Even the facial recognition software is derived from academic research and algorithms rather than any existing product.

"We've never taken someone else's technology and tried to bend or meld it into a video engine," Weinberger says.

Another key aspect of the technology is its ability to tell ad servers not only what content is playing, but also which kind of ads to serve. VideoSense can call for standard units such as banners or rectangles, for ads within a video stream, and even for a relatively new ad unit that forms a transparent layer atop the lower third of a Web video.
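That placement decision — picking which unit type to request along with the ad itself — might look something like the following sketch. The unit names and the rules are illustrative assumptions, not VideoSense's actual interface.

```python
# Hypothetical sketch of the unit-type decision described above: given what
# was recognized in the content, tell the ad server which kind of unit to
# request. Names and rules are illustrative, not VideoSense's actual API.

def choose_unit(content, player_supports_overlay):
    """Pick an ad unit: a lower-third overlay for an in-scene match when
    the player allows it, an in-stream ad at a scene break, otherwise a
    standard companion banner next to the video window."""
    if content.get("in_scene_match") and player_supports_overlay:
        return "overlay_lower_third"  # transparent layer over lower third
    if content.get("at_scene_break"):
        return "in_stream"            # ad inside the video stream itself
    return "banner"                   # standard unit beside the player

print(choose_unit({"in_scene_match": True}, True))   # overlay_lower_third
print(choose_unit({"at_scene_break": True}, False))  # in_stream
```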

Weinberger points out that the technology could also be used to provide interactive polls, related Webisodes or other items that can be inserted and are relevant to content.

Tests With a TV Studio

The technology is currently being tested by a TV studio, Web portals and ad platform networks, but Weinberger won't say which, because the company doesn't want to tip off competitors. DigitalSmiths has strong inroads in Hollywood because of a previous technology that helped studios parse and organize archival footage so it could be resold, for example in TV promos.

Weinberger plans to use the funding DigitalSmiths received late last month to roughly double its staff to 35 people over the next year or so. That's a long way from 1998, when Weinberger co-founded the company in a dorm room at Southern Illinois University. Today the company is based in the Raleigh-Durham Research Triangle area.

DigitalSmiths CEO Ben Weinberger can be reached at phone number 843.379.7878 x215

Dorian Benkoil is a regular columnist for Jack Myers Media Business Report.
