Presentation of AI Cloud Services integrated with Snap! at the Connective Ubiquitous Technology for Embodiments Center of the National University of Singapore and Keio University on 16 March 2017 by Ken Kahn

To an audience of nearly one hundred students and faculty, Ken Kahn described the eCraft2Learn project and then gave three demonstrations. The first combined speech recognition and synthesis in an application where the user gave spoken commands to a virtual robot. The second sent an image from the laptop's camera to Google's, IBM's, and Microsoft's AI cloud services; each service's description of what was in the image was then displayed in the app, as in the figure below:


The output of three AI cloud services describing what is in front of a laptop’s camera
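Behind this demo, a camera frame is forwarded over HTTP to each vendor's image-recognition endpoint. As a rough illustration of one such call, the sketch below targets the Google Cloud Vision `images:annotate` REST endpoint; the endpoint and request shape are Google's documented API, but the helper names and the way the Snap! blocks invoke them are assumptions here, not the project's actual code:

```javascript
// Sketch: build and send a label-detection request to the Google Cloud
// Vision API. buildVisionRequest is a pure helper; sendToVision is a
// hypothetical wrapper a Snap! block might call.
function buildVisionRequest(base64Image, maxLabels) {
  return {
    requests: [
      {
        image: { content: base64Image },  // camera frame, base64-encoded
        features: [{ type: "LABEL_DETECTION", maxResults: maxLabels }]
      }
    ]
  };
}

// The API key is assumed to be supplied by the user of the blocks.
function sendToVision(base64Image, apiKey) {
  return fetch(
    "https://vision.googleapis.com/v1/images:annotate?key=" + apiKey,
    {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(buildVisionRequest(base64Image, 5))
    }
  ).then(function (response) { return response.json(); });
}
```

The IBM and Microsoft services follow the same pattern (POST an image, receive JSON descriptions), differing mainly in endpoint URLs and authentication.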


The third demo combined speech and vision in a single program. All of these programs used new Snap! commands that Ken Kahn implemented in JavaScript for the eCraft2Learn project; the commands were designed to be usable by novice student programmers. A portion of the spoken-command program is shown here:

A Snap! program to respond to spoken commands
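Since Snap! runs in the browser, spoken-command blocks like these can be built on the standard Web Speech API. The sketch below is a minimal illustration under that assumption, not the project's actual code: a pure matcher maps a recognized phrase to a robot command, and browser-only wiring (guarded, since it needs a browser) feeds it recognition results and speaks a reply.

```javascript
// Sketch: spoken-command handling. matchCommand is a plain string
// matcher; the command vocabulary is an illustrative assumption.
function matchCommand(utterance) {
  var commands = ["forward", "back", "left", "right", "stop"];
  var words = utterance.toLowerCase().split(/\s+/);
  for (var i = 0; i < words.length; i++) {
    if (commands.indexOf(words[i]) >= 0) {
      return words[i];
    }
  }
  return null;  // no known command heard
}

// Browser-only wiring using the Web Speech API (webkitSpeechRecognition
// and speechSynthesis are real browser interfaces; this glue is a sketch).
if (typeof window !== "undefined" && "webkitSpeechRecognition" in window) {
  var recognition = new webkitSpeechRecognition();
  recognition.onresult = function (event) {
    var heard = event.results[0][0].transcript;
    var command = matchCommand(heard);
    var reply = command ? "OK, " + command : "Sorry, I did not understand";
    // Confirm the command back to the user via speech synthesis.
    window.speechSynthesis.speak(new SpeechSynthesisUtterance(reply));
  };
  recognition.start();
}
```

A virtual robot would then act on the returned command word, e.g. moving forward when `matchCommand` yields `"forward"`.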