We were inspired by recent advances in self-driving car technology, especially its potential to transport people more quickly and safely to their destinations. We wanted to focus on how our idea could expedite getting patients to care, potentially saving their lives. We were also interested in integrating Alexa into our project to create an alternative to expensive, intrusive devices such as LifeAlert. For the many people who live with disabilities or live alone, the hands-free, voice-activated nature of Alexa is critically important. The result is an app that makes emergency response cheaper, faster, and more reliable.

What it does

By invoking an Alexa skill, you can notify an ambulance of your emergency and have it immediately dispatched to your location. With new devices bringing Alexa to smartphones, wristwatches, and all sorts of IoT devices, our life-saving app is wherever people need it. Our prototype runs on a toy town mat with destinations like the post office or the mall. The app recognizes these locations and routes you to the closest doctor.
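The closest-doctor lookup can be sketched as a nearest-neighbor search over the mat. Everything here is illustrative: the landmark names, grid coordinates, and the `closestDoctor` helper are assumptions for the sake of the example, not our actual map data.

```javascript
// Hypothetical coordinates (in mat grid units) for landmarks on the toy town mat.
const landmarks = {
  "post office": { x: 1, y: 4 },
  "mall": { x: 5, y: 2 },
};

// Hypothetical doctor locations on the same grid.
const doctors = [
  { name: "clinic", x: 2, y: 3 },
  { name: "hospital", x: 5, y: 5 },
];

// Returns the doctor closest (by Euclidean distance) to the named landmark.
function closestDoctor(landmarkName) {
  const loc = landmarks[landmarkName];
  if (!loc) throw new Error(`unknown landmark: ${landmarkName}`);
  let best = null;
  let bestDist = Infinity;
  for (const doc of doctors) {
    const d = Math.hypot(doc.x - loc.x, doc.y - loc.y);
    if (d < bestDist) {
      bestDist = d;
      best = doc;
    }
  }
  return best;
}
```

On a mat this small a linear scan is all that's needed; a real deployment would swap the grid coordinates for geolocation and road distance.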

How we built it

We built our project by leveraging a plethora of tools. The Alexa skill was written in Node.js and interfaces with a Java websocket server to stay in contact with the Android-based vision-processing camera. The camera processes the images and communicates with the NXT over Bluetooth to determine and execute the best possible path for the vehicle to follow. Because we were working across many platforms, we divided up the work and each specialized in an area, like the Alexa response or the NXT brick, to make the most of our time.
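The first hop in that chain, the Node.js skill pushing a dispatch message to the Java websocket server, can be sketched as below. The message shape (`type`, `from`, `to` fields) is a guess for illustration, not the actual protocol we shipped.

```javascript
// Builds the JSON dispatch message the Alexa skill would push over the
// websocket to the vision/NXT side. Field names here are illustrative.
function buildDispatchMessage(patientLocation, destination) {
  return JSON.stringify({
    type: "DISPATCH",
    from: patientLocation, // landmark where the emergency was reported
    to: destination,       // closest doctor chosen by the skill
    timestamp: Date.now(),
  });
}
```

The server would forward this payload to the connected camera client, which translates it into Bluetooth commands for the NXT.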

Challenges we ran into

Troubleshooting the various problems we ran into with OpenCV took an incredible amount of time, which made it difficult to keep a productive pace. Some of us were familiar with it, but it still proved a difficult beast to tame. On the NXT, the software we were using was niche, so finding documentation, or even up-to-date code, proved a daunting task.

We prototyped our Alexa integration using IBM's Bluemix software. We visualized our ideas and had an almost-functional version of our code, but weren't able to finish the linking within the software even with the help of mentors. It nonetheless proved a useful tool for flowcharting our Alexa skill, and we were able to translate that flowchart into a Node.js codebase.

Accomplishments that we're proud of

Getting communications to work across the five-plus layers of networks and devices was a massive test of both our ability to collaborate and the platforms we were pushing to their limits, but we are proud we pulled it off. Everyone on the team had the fortitude to stay awake for all but one hour of the entire event, and we finally got everything talking hours before the deadline. It made the vision work look easy by comparison. It was exhilarating to see all of our individual efforts come together as one cohesive unit.

What we learned

All of us learned that even a project with many layers becomes manageable and achievable when broken down into pieces. We picked up plenty of trade knowledge of our respective tools, and we feel more confident in computer vision and IoT integration.

What's next for üBear

We'd like to eventually improve the vision algorithm, scale up the hardware, and integrate it further with Alexa.
