ReVive (Mobile)

Short description
Mobile application that uses Tensorflow.js to analyze images and determine where you can recycle the analyzed item.
Role / Service
Web Design
Branding
UI Design
Product Design
Art Direction
Kind
Hackathon
Year
2019
 

PROJECT OVERVIEW

ReVive

Inspiration

"New York City has no landfills or incinerators, yet residents produce 12,000 tons of waste every day. What happens when you throw something away?". We would want people to answer "give it to ReViVe" because as people say "one man's trash is another man's treasure". In a more serious note, we are well aware of the problems garbage collection is creating, not only in the environment but also in politics. Recently China declared they did not want to take our garbage anymore, so now where is it going to go? We would like recyclabe garbage to go to centers where they can be reused. Our app name is literally what we want to do for the Earth, help us Revive It!

What it does

  1. Take a picture
  2. Find a nearby center
  3. Take action!
Three steps are all that separate YOU from recycling unwanted items. Our cross-platform app lets users find nearby recycling centers that specifically accept the items they photograph. First, the user takes a picture, which is sent to the Google Vision API for recognition. We then show the results within the app and let the user make any adjustments. Users can take pictures of multiple items (of different types), which are automatically saved to a list of To Recycle items. When the user is ready to recycle, the app asks for their location and maps out the nearest recycling centers that accept their items. These locations are found using external APIs, including Earth911 and NYC Open Data. The results can be filtered (display only category X, or display all). Depending on the dataset used for a specific item, the app provides detailed information about the recycling center, such as its name, address, phone number, email, and accepted categories. Lastly, it is up to you to take ACTION and either call the center or drop off your items.
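The recognition step boils down to a single REST call. The endpoint and request shape below are the Cloud Vision API's standard LABEL_DETECTION request; the classifyItem helper, the placeholder API key, and the assumption that the photo arrives as a base64 string are ours, since the writeup doesn't include the actual code. Treat this as a minimal sketch, not the project's implementation.

```typescript
// Minimal sketch of the classification step, assuming the photo has
// already been captured with Expo and encoded as a base64 string.
const VISION_API_KEY = "YOUR_API_KEY"; // placeholder, not the project's key

async function classifyItem(base64Image: string): Promise<string[]> {
  const res = await fetch(
    `https://vision.googleapis.com/v1/images:annotate?key=${VISION_API_KEY}`,
    {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        requests: [
          {
            image: { content: base64Image },
            features: [{ type: "LABEL_DETECTION", maxResults: 5 }],
          },
        ],
      }),
    }
  );
  const data = await res.json();
  // Each returned label (e.g. "plastic bottle") becomes a candidate
  // entry in the To Recycle list, pending the user's adjustments.
  return (data.responses?.[0]?.labelAnnotations ?? []).map(
    (l: { description: string }) => l.description
  );
}
```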

Challenges we ran into

  • Data fetched from one API came back as JSON, but some structures were nested so many levels deep that naive property access returned undefined (see the sketch after this list).
  • It was our first time using the Google Vision API and Google Cloud services.
  • Troubleshot the React Native camera library, which would not let any of our phones take, save, or render pictures.
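Optional chaining with a fallback is one way to make deeply nested lookups safe. The response shape below matches the Vision API's labelAnnotations; the exact fields we tripped on are not recorded in the writeup, so this is an illustrative sketch rather than the fix we actually shipped.

```typescript
// Naive access like data.responses[0].labelAnnotations[0].description
// throws as soon as any intermediate level is missing. Optional chaining
// plus a fallback returns a usable default instead:
const firstLabel: string =
  data?.responses?.[0]?.labelAnnotations?.[0]?.description ?? "unknown";
```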

Accomplishments that we're proud of

  • Enabling image cache storage using React Native (sketch after this list).
  • Implementing Machine Learning to successfully categorize items.
  • Giving and receiving help throughout the event.
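The writeup doesn't say how the image cache was built. A minimal sketch with expo-file-system, copying each captured photo into the app's cache directory, might look like the following; the cachePhoto helper and its file-naming scheme are assumptions.

```typescript
import * as FileSystem from "expo-file-system";

// Copy a freshly captured photo into the app's cache directory so the
// To Recycle list can re-render it without holding the camera result.
async function cachePhoto(photoUri: string): Promise<string> {
  const fileName = `revive-${Date.now()}.jpg`; // assumed naming scheme
  const cachedUri = `${FileSystem.cacheDirectory}${fileName}`;
  await FileSystem.copyAsync({ from: photoUri, to: cachedUri });
  return cachedUri;
}
```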

What we learned

First of all, we all got to learn React Native, the Google Vision API, Expo, and other APIs. But we also learned about teamwork: each of us is from a different CUNY school, and yet we were able to work together for two days alongside other teams, everyone helping each other out. What we truly learned is that by working together, great things can be achieved!

Built with

  • React Native
  • Google Vision API
  • Google Cloud Services
  • Expo
  • Earth911 API

LINK