Google’s AI watched thousands of hours of TV to learn how to read lips better than you

Researchers from Google’s UK-based artificial intelligence division DeepMind have collaborated with scientists from the University of Oxford to develop the world’s most advanced lip-reading software – and it probably reads lips better than you. To accomplish this, the researchers fed thousands of hours of TV footage from the BBC to a neural network, training it to annotate videos based on mouth movement analysis with an accuracy of 46.8 percent. For context, when tasked with captioning the same video, a professional human lip-reader proved to be almost four times less accurate, correctly identifying the right word only 12.4 percent of the…

This story continues at The Next Web
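Those figures are per-word accuracy: the predicted captions are compared against ground-truth transcripts word by word. Here is a minimal sketch in Python of what that kind of word-level scoring looks like – the function and the sample sentences are purely illustrative, not DeepMind’s actual evaluation code:

```python
def word_accuracy(predicted: str, reference: str) -> float:
    """Fraction of reference words matched at the same position.

    Simplified, position-based scoring for illustration only;
    the real benchmark aligns the two word sequences before scoring.
    """
    pred_words = predicted.lower().split()
    ref_words = reference.lower().split()
    if not ref_words:
        return 0.0
    matches = sum(p == r for p, r in zip(pred_words, ref_words))
    return matches / len(ref_words)

# Hypothetical example: one caption gets 4 of 5 words right, the other 1 of 5.
reference = "we will meet again tomorrow"
print(word_accuracy("we will meet again today", reference))      # 0.8
print(word_accuracy("he would be against tomorrow", reference))  # 0.2
```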

The makers of Siri are back with a new super-smart AI called Viv

Siri, the velvet-voiced iOS assistant that can give you directions, beatbox, do math and chat with you about Game of Thrones, is usually associated with Apple. But Siri was not originally made by Apple; it began in 2007 as a spin-off of SRI International (formerly Stanford Research Institute), led by Dag Kittlaus, Adam Cheyer and Tom Gruber, and was sold to Apple in 2010.

Six years later, Cheyer and Kittlaus are back with a new product called Viv which, according to the Washington Post, will be publicly demonstrated at an industry conference Monday. 

Movidius packs plug-and-play AI into a USB stick

If you’re looking to add artificial intelligence to a hardware project for doing things like sensing objects in an environment or understanding voice commands, Movidius’ new Fathom USB stick might be just the thing. The company is known for its Myriad 2 deep learning chip that allows DJI drones to avoid obstacles. The Fathom is essentially a portable version of the chip that can be plugged into the USB 3.0 port of Linux-based devices to run fully trained neural networks while consuming very little power. It’s compatible with the Caffe and TensorFlow frameworks and is capable of 150 gigaFLOPS (150 billion floating-point operations…

This story continues at The Next Web
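To put that 150 gigaFLOPS figure in perspective, here is a back-of-the-envelope throughput estimate in Python. The per-inference cost and utilisation numbers are assumptions chosen for illustration (roughly GoogLeNet-class), not specs from Movidius:

```python
# Rough throughput estimate for running a trained network on a 150 gigaFLOPS device.
# Everything except the 150 gigaFLOPS figure quoted above is an assumption.

device_flops = 150e9          # 150 gigaFLOPS, as quoted for the Fathom stick
flops_per_inference = 3e9     # assumed cost of one forward pass (GoogLeNet-class)
utilisation = 0.5             # assume only half of peak compute is achievable in practice

frames_per_second = device_flops * utilisation / flops_per_inference
print(f"~{frames_per_second:.0f} frames per second")  # ~25
```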

Posted in AI