Since my ARIS project is nearly finished, I decided to start a new project on my own. The new project is a pair of smart glasses that will translate voice into text and display the text through augmented reality. I would also like the smart glasses to capture the hand movements of sign language and translate them into voice or text.

At 1stPlayable, I tried the Microsoft HoloLens. Even though I knew VR and AR devices like this existed, this was my first time actually using one. The HoloLens has voice and gesture sensors, so users can issue commands through either speech or simple hand movements. Its biggest strength is that it can map the dimensions of a room by emitting light and calculating the light's reflections. I played one game in which a character appears on the floor and in the air. Most of the games require a lot of neck movement; it was really exciting, but at the same time my neck started to hurt from moving it around with the heavy device on.

The HoloLens is more than capable enough for my project. At the same time, reading the hand movements of sign language will be a difficult problem, because even the HoloLens can only decipher simple gestures. I want to research Soli, a Google project that aims to make fine hand and finger movements readable.
After trying the HoloLens, I talked with my mentor about possible software platforms to build on. I decided to use the Android platform and start by making a program that will translate voice into text using Google's speech API. To develop on Android, I have started studying the platform.
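As a first sketch of that voice-to-text step (a minimal example under my own assumptions, not the final implementation; the class name and request code are placeholders I chose), Android's built-in `RecognizerIntent` can hand speech recognition off to Google's recognition service and return the transcribed text:

```java
import android.app.Activity;
import android.content.Intent;
import android.os.Bundle;
import android.speech.RecognizerIntent;
import android.widget.TextView;

import java.util.ArrayList;
import java.util.Locale;

// Minimal sketch: launch the system speech recognizer (backed by Google's
// service) and show the best transcription as on-screen text.
// SpeechToTextActivity and REQUEST_SPEECH are placeholder names.
public class SpeechToTextActivity extends Activity {

    private static final int REQUEST_SPEECH = 1;
    private TextView resultView;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        resultView = new TextView(this);
        setContentView(resultView);

        // Ask the platform's speech recognizer to listen for free-form speech.
        Intent intent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
        intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
                RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
        intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE, Locale.getDefault());
        startActivityForResult(intent, REQUEST_SPEECH);
    }

    @Override
    protected void onActivityResult(int requestCode, int resultCode, Intent data) {
        super.onActivityResult(requestCode, resultCode, data);
        if (requestCode == REQUEST_SPEECH && resultCode == RESULT_OK && data != null) {
            // The recognizer returns candidate transcriptions, best match first.
            ArrayList<String> results =
                    data.getStringArrayListExtra(RecognizerIntent.EXTRA_RESULTS);
            if (results != null && !results.isEmpty()) {
                resultView.setText(results.get(0));
            }
        }
    }
}
```

On actual smart glasses, the `TextView` would be swapped for whatever AR display surface the hardware exposes; the recognition step itself would stay the same.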
Author: Nana Takada
March 2017