
IBM Speech Sandbox

by Stuart McIntyre

On the next WTF Tech podcast episode (to be published this week), Darren, Jesse and I spent a fair amount of time discussing Virtual Reality and our personal experiences with the new technology.

[Spoiler: Both Darren and Jesse now own HTC Vive VR headsets.]

One of the apps available for the Vive system via HTC’s VivePort solution marketplace is IBM’s Watson-based IBM Speech Sandbox:

IBM Speech Sandbox
"What if you could talk to your environment in VR? With this tech demo showcasing IBM Watson speech services and the Watson Unity SDK, you can. Create, modify and destroy objects in an immersive sandbox world using your voice."
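To give a flavour of what presumably happens under the hood (this is purely an illustrative sketch, not IBM's code): once Watson's speech-to-text service returns a transcript, the app has to map that text onto sandbox actions. A toy version of that mapping might look like the following, with the verb and object vocabularies entirely made up:

```python
import re

# Purely hypothetical vocabularies; the real demo's object list isn't reproduced here.
KNOWN_OBJECTS = {"ball", "cube", "tree", "car"}
ACTIONS = {"create", "make", "spawn", "destroy", "delete"}

def parse_command(transcript):
    """Map a speech-to-text transcript to an (action, object) pair, if one is present."""
    words = re.findall(r"[a-z]+", transcript.lower())
    action = next((w for w in words if w in ACTIONS), None)
    target = next((w for w in words if w in KNOWN_OBJECTS), None)
    if action and target:
        # Collapse synonyms onto the two core sandbox verbs.
        verb = "destroy" if action in {"destroy", "delete"} else "create"
        return verb, target
    return None

print(parse_command("please create a ball over there"))  # ('create', 'ball')
print(parse_command("delete the tree"))                  # ('destroy', 'tree')
```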

IBM has more details on the application and the background to its development at its Mobile Innovation Lab, and the entire article is worth a read if you're interested in how IBM believes VR can be used in future applications:


Because the world is immersive and users are fully absorbed in the experience, even slightly unintuitive behaviors can be extremely jarring in unpredictable ways. Therefore, it’s important to test often with real users as you are building your app.

People are affected by VR very differently. Some have used VR before and immediately understand the controls, others might not even know to look around the environment, and many could be easily affected by motion sickness. It is important to test with people that have varying degrees of familiarity with VR systems because they will all react differently.

Our process for user testing involved creating different hypotheses, then implementing simple builds of the app that demonstrated each hypothesis. Creating objects using voice was one of the most difficult things to get right; we had to ensure that every user would be able to successfully create objects with their voice and that objects would materialize in the world where our users expected them to appear. We created 3 different interaction models and tested each one, which is how we arrived at the laser-pointer system present in the current version of the game.
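The laser-pointer placement the team describe is, I'd guess, a straightforward ray intersection: point the controller, and the new object appears where the beam lands. Here's a hypothetical sketch of that idea in code (generic vector maths, not taken from the actual demo):

```python
# Hypothetical sketch: intersect the pointer ray with a flat floor plane (y = 0)
# to decide where a newly created object should appear. Assumes the direction
# vector is roughly unit length, so max_distance is in metres.
from dataclasses import dataclass

@dataclass
class Vec3:
    x: float
    y: float
    z: float

def placement_point(origin: Vec3, direction: Vec3, max_distance: float = 10.0) -> Vec3:
    """Return the point where the ray hits the floor, or a point max_distance along the ray."""
    if direction.y < 0:                      # pointing downwards: intersect the y = 0 plane
        t = min(-origin.y / direction.y, max_distance)
    else:                                    # pointing up or level: clamp to max distance
        t = max_distance
    return Vec3(origin.x + direction.x * t,
                origin.y + direction.y * t,
                origin.z + direction.z * t)

# Controller held at head height, pointing slightly downwards and forwards.
print(placement_point(Vec3(0.0, 1.5, 0.0), Vec3(0.0, -0.5, 1.0)))  # lands at (0, 0, 3)
```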

The current 'tech demo' available on VivePort is somewhat limited in terms of functionality, but it still lets you experience the possibilities of voice control within VR first-hand.

This video shows the app in use:

https://www.youtube.com/watch?v=FlMvLDw6cYc

Personally, the demo left me feeling a little cold… It would seem to need more than just the ability to create pre-canned named objects in the 3D space to even approach usefulness. What about naming the size, colour and character of the objects? What about changing or animating objects that already exist? However, there's clearly value in researching, modelling and testing these technologies, and I applaud IBM's effort to get Watson capabilities into the hands of Vive users at this early stage.
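For what it's worth, the kind of richer command grammar I have in mind might look something like this sketch, with size and colour attributes bolted onto the create command (the vocabulary is entirely invented, and none of this reflects what the demo actually supports):

```python
import re

# Invented vocabularies, purely for illustration.
OBJECTS = {"ball", "cube", "tree"}
SIZES = {"tiny", "small", "large", "huge"}
COLOURS = {"red", "green", "blue", "yellow"}

def parse_rich_command(transcript):
    """Parse a create command that may carry optional size and colour attributes."""
    words = re.findall(r"[a-z]+", transcript.lower())
    target = next((w for w in words if w in OBJECTS), None)
    if "create" not in words or target is None:
        return None
    return {
        "action": "create",
        "object": target,
        "size": next((w for w in words if w in SIZES), "medium"),
        "colour": next((w for w in words if w in COLOURS), "default"),
    }

print(parse_rich_command("create a huge red ball"))
# -> {'action': 'create', 'object': 'ball', 'size': 'huge', 'colour': 'red'}
```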

Please do add a comment if you’ve tried out the Speech Sandbox, and let us know what you think of the capabilities it offers…
